Tuesday, August 14, 2018

Multi-GPU on Pytorch

After some time, I finally figured out how to run multi-GPU training on PyTorch. In fact, the multi-GPU API in PyTorch is extremely simple; the problem was my system.

Here is a simple test code to try out multi-GPU on PyTorch. If this works out of the box, then you are good. However, some people may face problems, as discussed in this forum. As pointed out there, the problem is not with PyTorch but with an external factor. In my case, it was ngimel's comment that saved me. To recap her solution,

1. Run p2pBandwidthLatencyTest from the CUDA samples and make sure it passes. If it does not, then the problem is with the CUDA installation, etc., and not with PyTorch. To download the samples, simply run

$ cuda-install-samples-9.2.sh <target_path>

where you would replace the version above with whatever version you have. Then,

$ cd <target_path>/NVIDIA_CUDA-9.2_Samples/1_Utilities/p2pBandwidthLatencyTest/
$ make
$ ./p2pBandwidthLatencyTest

2. In my case, IOMMU was the culprit. Disable it by editing /etc/default/grub and replacing
#GRUB_CMDLINE_LINUX="" 
with
GRUB_CMDLINE_LINUX="iommu=soft"

Then update grub
$ sudo update-grub

Then reboot

This is how I solved my problem. I love this open source community forum! Thank you everyone!

Deep Neural Network Tips and Tricks

I just want to scribble down some of the things I have learned from my own experience training deep neural networks. I hope this helps others too.

1. Optimizer: use SGD with momentum. If momentum is too high, you may see validation error greater than training error even when the model is not overfitting. This is because in each epoch the momentum starts at zero but builds up as more batches are trained, and at the end of the epoch you may experience gradient explosion, which leads to large validation error. A typical value of momentum is 0.9.

2. Gradient clipping: always use gradient clipping to prevent gradient explosion. This saves a lot of time because you don't need to constantly tune the learning rate by hand during training. A typical value is 10 or lower.

3. Learning rate: in theory, as large as possible while still small enough to prevent gradient explosion. However, adjusting the learning rate throughout training is just too much work, so simply set it high enough and use gradient clipping to prevent gradient explosion. A typical value is 0.001 or lower.

4. Input normalization: to facilitate training, normalize the input data to have zero-mean and unit standard deviation.

5. Batch normalization: employ batch normalization layers. These layers are especially helpful for deep networks.

6. Dropout: although dropout is not needed when batch norm is employed, one can still apply a small dropout (~0.1) across multiple layers. I think this is better than one or two large dropouts (~0.5). If the data size is small compared to the network and one needs an extra measure to prevent overfitting, dropout layers are useful.

7. L2 weight decay: not necessary, but still useful as an option. A typical value of 1e-5 should be fine.

8. Shortcuts: shortcut connections are extremely useful for deep neural networks. The most popular implementation is perhaps the residual block. Full pre-activation may be the best choice, as illustrated here.

9. The output size y of a convolution given input size x is
y = (x - kernel + 2*padding)/stride + 1
where the division rounds down.

10. For 2D convolutions, using small-kernel convolutions many times is more beneficial than using one large-kernel convolution. For example, using 3 convolutions with 3x3 kernels of depth d results in 3 * 3^2 * d = 27d parameters, whereas 1 convolution with a 7x7 kernel of the same depth d results in 7^2 * d = 49d parameters. Note that in both cases the receptive field is 7x7, but in the former case we can employ 3 activation layers, while in the latter we only get 1. Therefore, it is usually believed that the former is more effective for learning. However, for 1D convolutions, the former actually requires more parameters than the latter (3 * 3 * d = 9d vs. 7 * d = 7d).

11. Activation layers: typically ReLU layers are used, but ELU may be a good alternative. It is recommended to clip unbounded activations, i.e., use y = clamp(x, min=0, max=5) in place of ReLU to prevent excessively large values.

12. Training history: it is very important to save the loss and accuracy history during training for both the training data and the validation data, because the history tells us a lot about how training is going. Usually, at the beginning of training the validation error is less than the training error, since the reported training error is a running average over the epoch, while the validation error is measured at the end of the epoch. As training progresses, however, the training error should drop below the validation error, because the model will be slightly overfit. That is a good time to either stop training, to prevent overfitting, or take additional measures. Also, when the improvement flattens out, it is a good indicator to lower the learning rate.
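The output-size formula in tip 9 can be checked with a short helper (my own sketch, not code from the post):

```python
def conv_out_size(x, kernel, padding=0, stride=1):
    # y = floor((x - kernel + 2*padding) / stride) + 1
    return (x - kernel + 2 * padding) // stride + 1

# a 3x3 kernel with padding 1 and stride 1 preserves the input size
print(conv_out_size(28, 3, padding=1))  # 28
```

For instance, the common 7x7, stride-2, padding-3 first layer maps a 224-wide input to 112.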

I will continue to add more as I gain more experience.

Wednesday, August 8, 2018

Enable S3 Sleep for Thinkpad X1 Carbon 6th Gen on Ubuntu

I recently purchased an X1 C6; I am back to Thinkpad after 8 years of digression to Macbook. I will probably write a post explaining why I switched back, but for now I will focus on my story of getting S3 sleep with it under Ubuntu.

One thing I really miss from the Macbook is how easy it was to simply close the lid and not worry about draining the battery. Unfortunately, with the X1 C6, that is not the case. With Windows 10, closing the lid puts it into the S0i3 sleep state, where some processes can keep doing stuff in the background. The idea is great, but unfortunately in the real world this new S0i3 sleep state simply drains the battery so much that I just want to go back to the old S3 state, where no processes can run.

Since I installed Ubuntu on this machine and will mainly use Ubuntu, I searched for methods to enable S3 sleep rather than the stupid S0i3 sleep. After spending much time, I finally found a solution, and I want to share it with anyone who needs it.

At first, I tried this post, but it did not work. Then, it was this answer by Adonis that saved me. Basically, the missing step was to generate the grub.cfg file with grub-mkconfig from /etc/default/grub and then add initrd /boot/acpi_override.

Before this patch, the battery didn't even last a full day in sleep. Now, with the patch, it seems to drain about 10% a day in sleep. This is not as good as the Macbook, which lasts about 30 days in sleep, i.e., about 3% per day, but I guess it is better than 100% drain per day. With Windows, it was about 30% drain per day in sleep.

I really like this laptop, but I have to admit that I miss Macbook when it comes to this sort of minute convenience.

By the way, I really appreciate the people who managed to create this patch and shared it with all of us. I really respect their knowledge and skills. This was something Lenovo engineers could not even do!

Saturday, August 4, 2018

Build OS from Scratch 1

I've always been curious about how a computer works, all the way from the bottom level to the top. We use computers every day, so we are familiar with the user level, i.e., the top-most level, but how many people in the world actually know what is going on at the deepest level?

I took a course during my undergrad on how a computer and its OS work, but it has been too long since then, and I don't remember much. Furthermore, when I was taking that course, I lacked much of the knowledge necessary to really absorb the material; I wasn't even familiar with the most basic shell commands, such as cp, mv, etc.

Now that I think about it, that course covered exactly what I want to learn now; unfortunately, I can't access the course materials any more. Thankfully, there are abundant other resources accessible through a simple Google search, so I am going to dive into these very low-level materials one by one.

I will be starting a series of short blog posts to summarize what I learn in my own words, starting with this one. For this post, I am referring to this excellent document.

The boot process looks for a bootable sector on any available disk or media. The boot sector is the first sector of the media, and it is flagged as bootable by the magic number 0xAA55 in its last two bytes.
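As a small illustration of my own (not from the referenced document), the magic-number check can be expressed in a few lines of Python; note that the word 0xAA55 is stored little-endian, so the bytes on disk are 0x55 then 0xAA:

```python
def is_boot_sector(sector: bytes) -> bool:
    # a boot sector is 512 bytes and ends with 0x55, 0xAA (0xAA55 little-endian)
    return len(sector) == 512 and sector[510:512] == b'\x55\xaa'

sector = bytes(510) + b'\x55\xaa'   # 510 zero bytes of padding + magic number
print(is_boot_sector(sector))  # True
```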

Let's install qemu to emulate a computer, and nasm to compile assembly.
$ sudo apt-get install qemu nasm -y

Next, create a simple boot sector that prints 'Hello' by first creating assembly source file hello.asm
;
; A simple boot sector that prints a message to the screen using a BIOS routine.
;
mov ah, 0x0e    ; teletype mode (tty)
mov al, 'H'     ; char to write
int 0x10        ; print char on screen
mov al, 'e'
int 0x10
mov al, 'l'
int 0x10
mov al, 'l'
int 0x10
mov al, 'o'
int 0x10
jmp $           ; jump to the current address (i.e., loop forever)
;
; Padding and magic BIOS number.
;
times 510-($-$$) db 0   ; pad the boot sector out with zeros
                        ; $ means the address at the beginning of this line
                        ; $$ means the address at the beginning of this section (file)
dw 0xaa55               ; last two bytes form the magic number,
                        ; so the BIOS knows we are a boot sector


and compile to binary format
$ nasm hello.asm -f bin -o hello.bin

For more details on int 0x10, refer to here.

To boot this sector, simply run
$ qemu-system-x86_64 hello.bin

To view the boot sector in HEX, run
$ od -t x1 -A n hello.bin

You should see it boot up successfully, printing 'Hello'!

Wednesday, July 11, 2018

Pytorch Implementation of BatchNorm

Batch Normalization is a really cool trick to speed up training of very deep and complex neural networks. Although PyTorch has its own implementation in the backend, I wanted to implement it manually just to make sure I understand it correctly. Below is my implementation on top of PyTorch's dcgan example (the BN class starts at line 103).


Although this implementation is very crude, it seems to work well when tested with this example. To run this, type in
$ python main.py --cuda --dataset cifar10 --dataroot .
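The embedded implementation above is not reproduced on this page, but the forward pass of batch normalization can be sketched with NumPy (my own illustration of the math, not the original gist):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # normalize each feature over the batch dimension (axis 0),
    # then scale by gamma and shift by beta
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1., 2.], [3., 4.], [5., 6.]])
out = batch_norm_forward(x, gamma=1.0, beta=0.0)
# each column of out now has (approximately) zero mean and unit variance
```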

Friday, July 6, 2018

Speeding up Numpy with Parallel Processing

Numpy is a bit strange; by default it utilizes all available cores, but its speed doesn't seem to improve with the number of cores.
For example, try running the code below:
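The embedded script is not shown here; below is a rough reconstruction of what such a benchmark might look like (the matrix sizes, iteration counts, and function names are my own guesses, not the original code):

```python
import time
from multiprocessing import Pool

import numpy as np

def task(_):
    # a toy workload: repeated matrix products
    a = np.random.rand(200, 200)
    for _ in range(20):
        a = a @ a
        a /= a.max()        # keep values bounded
    return float(a.sum())

def bench(nprocs, njobs=8):
    # nprocs == 0 means a plain sequential loop; otherwise use pool.map
    start = time.time()
    if nprocs == 0:
        results = [task(i) for i in range(njobs)]
    else:
        with Pool(nprocs) as pool:
            results = pool.map(task, range(njobs))
    print(f'with {nprocs} procs, elapsed time: {time.time() - start:.2f}s')
    return results

if __name__ == '__main__':
    for n in range(3):
        bench(n)
```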


You will probably have to quit it after running it for some time, because it is just TOOOOO slow. Here is my output from a computer equipped with an AMD Ryzen 1700X:

$ python test_numpy.py
with 0 procs, elapsed time: 42.59s
with 1 procs, elapsed time: 43.05s
with 2 procs, elapsed time: 658.66s
^C


So, what is the problem? Although I am not sure of the details, it seems that the default threading in numpy's underlying math libraries is quite horrible here. It is supposed to use all cores efficiently, but it in fact creates a bottleneck when there are lots of cores.

Furthermore, when you want to carry out the same task multiple times in parallel, it gets even worse, since all cores are already busy with a single task. Notice that the time for pool.map with 2 procs jumped more than 10x!

BTW, I installed numpy from pip, i.e.,

$ pip install numpy

Maybe it would be better if I compiled numpy manually and linked a better BLAS library, but that is just too painful.

Well, the good news is that there is in fact a very simple solution. Try this.
$ export OMP_NUM_THREADS=1
$ export NUMEXPR_NUM_THREADS=1

$ export MKL_NUM_THREADS=1
$ python test_numpy.py

with 0 procs, elapsed time: 26.91s
with 1 procs, elapsed time: 26.95s
with 2 procs, elapsed time: 13.53s
with 3 procs, elapsed time: 9.12s
with 4 procs, elapsed time: 6.82s
with 5 procs, elapsed time: 5.42s
with 6 procs, elapsed time: 4.61s
with 7 procs, elapsed time: 3.91s
with 8 procs, elapsed time: 4.52s
with 9 procs, elapsed time: 4.62s
with 10 procs, elapsed time: 3.42s
with 11 procs, elapsed time: 3.92s
with 12 procs, elapsed time: 3.62s
with 13 procs, elapsed time: 3.42s
with 14 procs, elapsed time: 3.32s
with 15 procs, elapsed time: 3.22s
with 16 procs, elapsed time: 3.12s


With the exact same task, I am seeing more than 13x speed up compared to the previous result!

Simple Tic Toc Alternative in Python

In Matlab, the tic and toc functions provide a very simple way to display elapsed time. We can create a similar mechanism in Python. Note that the source code below is only tested with Python 3.
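A minimal sketch of such a tic/toc pair (my own version; the original embedded snippet is not shown here):

```python
import time

_tic_start = None

def tic():
    # record the start time
    global _tic_start
    _tic_start = time.time()

def toc():
    # print and return the elapsed time since the last tic()
    elapsed = time.time() - _tic_start
    print(f'Elapsed time: {elapsed:.6f} seconds')
    return elapsed
```

Call tic() before the code you want to measure and toc() after it, just as in Matlab.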



Very convenient!

Thursday, June 21, 2018

How to Build Audacity on Ubuntu 18.04 LTS

Audacity is a great alternative to Adobe Audition. Here is how to build Audacity on Ubuntu 18.04 LTS, since there is no prebuilt binary.

First, clone the git repo
$ git clone https://github.com/audacity/audacity.git
$ cd audacity

Next, install necessary packages:
$ sudo apt-get install -y libwxbase3.0-dev libwxgtk3.0-dev zlib1g-dev libasound2-dev libgtk-3-dev

Now, you are ready to configure and compile!
$ ./configure
$ make -j4

That's it!

Saturday, June 9, 2018

Solution to Jupyter Error of Constant Restarts

While trying to run a jupyter notebook, I noticed a weird error where the notebook keeps restarting:
    self.asyncio_loop.run_forever()
  File "/usr/lib/python3.5/asyncio/base_events.py", line 340, in run_forever
    raise RuntimeError('Event loop is running.')
RuntimeError: Event loop is running.

After some time searching for a solution on Google, here is what I found: I simply needed to re-install some of the jupyter-related packages.
$ pip uninstall ipykernel ipython jupyter_client jupyter_core traitlets ipython_genutils -y
$ pip install ipykernel ipython jupyter_client jupyter_core traitlets ipython_genutils

I suppose the problem was that I installed some packages that somehow interfered with the already-installed jupyter-related packages above and caused the error. After removing and re-installing them, jupyter finally runs just fine!

Tuesday, May 29, 2018

Function Parameter Hints in Python3

Python variables are not type-specific. You can assign different types of data to the same variable multiple times. For example,

a = 1; a = 'a'; a = 2.3

works fine in Python but won't work in C/C++, because once a variable's type is declared, it cannot be changed. This makes it much easier to write code in Python, but at the same time it brings some disadvantages.

For example, with strongly-typed variables, a smart editor (i.e., an editor with autocompletion) can easily guess what options to offer as the user is typing. On the other hand, if a variable is not strongly typed, it is fairly difficult for the editor to figure out what to suggest.

Consider the following code. Try to manually type it in an editor that has autocompletion. You will notice that while typing the definition of the function equal, your editor can't be of much help in terms of auto-completion.

Now, consider the revised code where type hints have been added. This time, your editor should be able to figure out what types the variables a and b are and should provide accurate suggestions.
By the way, this awesome feature is only available in Python 3; personally, this feature alone is enough reason to consider the transition from Python 2 to Python 3!
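As a sketch of what such a type-hinted function might look like (a hypothetical `equal` function of my own; the original embedded code is not shown here):

```python
def equal(a: str, b: str) -> bool:
    # the str annotations let the editor suggest string methods for a and b
    return a.lower() == b.lower()

print(equal('Hello', 'HELLO'))  # True
```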

Monday, May 28, 2018

Import CMAKE Project on NetBeans

In the previous post, I discussed how to import a CMAKE project into Eclipse, and it wasn't that easy. Today, I will discuss how to import a CMAKE project into NetBeans. Again, I will use cmake-example as the example project.

Clone and download the CMAKE project on your computer.
$ git clone https://github.com/bast/cmake-example.git ~/cmake-example

First, open up NetBeans C++ IDE. If you are running it on Mac OS X, I recommend running it from Terminal
$ /Applications/NetBeans/NetBeans\ 8.2.app/Contents/MacOS/netbeans 

The reason is that if you simply run NetBeans from GUI, your environment variables might not sync.

Next, select File --> New Project --> C/C++ Project with Existing Sources --> Next. Browse to the folder which contains your CMAKE project, in this example the ~/cmake-example folder. Select Custom under Select Configuration Mode and select Next. Check the box Pre-Build Step is Required. Specify Run in Folder as the project root directory, i.e., ~/cmake-example. Enter the following for Command
cmake -H. -Bbuild

Continue with remaining options and adjust what is necessary. Usually, the default setting should work.

Once the project has been imported, verify the settings. Right-click the project name cmake-example in the project pane on the left and select Properties. Make sure that in the Build --> Pre-Build option, the Command Line is set to cmake -H. -Bbuild and the Pre-Build First box is checked.

In the Build --> Make option, the Working Directory should be set as build, since this is where the build will take place. Select Apply and Close

Press the <F11> key to build the project. It should build successfully. Before we debug it, we have to set the debug flag in the CMakeLists.txt file via CMAKE_CXX_FLAGS.

Open up CMakeLists.txt file in the project root directory, that is ~/cmake-example/CMakeLists.txt and not the file in the src directory. Modify line 35 to look as below:

# project version
set(VERSION_MAJOR 1)
set(VERSION_MINOR 0)
set(VERSION_PATCH 0)
set(CMAKE_CXX_FLAGS "-g")

Now, we are ready to run or debug. Re-build the project, and set breakpoint on src/main.cpp file. You should be able to debug it successfully.

Happy coding!

Tuesday, May 22, 2018

Import CMAKE Project to Eclipse CDT

In this post, I will discuss how to import a CMAKE project into Eclipse CDT. Upon a Google search, I ran into this solution, but it did not work for me, so here is what I did instead. I am going to use the cmake-example git repo to demonstrate, but you can easily do this for your own project.

Open up Eclipse CDT and select File --> New --> C/C++ Project --> C++ Managed Build --> Next. Enter the project name, say cmake-example, and make sure the project type is Empty Project. Also, select the appropriate Toolchains; this will be Linux GCC or MacOSX GCC. Select Finish.

Go to the project root folder, and we will clone the git repository.
$ cd ~/Eclipse/workspace/cmake-example
$ git init
$ git remote add origin https://github.com/bast/cmake-example.git
$ git fetch
$ git checkout -t origin/master

Let's verify that the project compiles.
$ cmake -H. -BDebug
$ cd Debug
$ make -j4

Make sure that the project builds successfully. Also, note down your $PATH environment variable (highlighted in blue below) to be used later. Your variable may differ from mine.
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin

Now, in Eclipse, right-click this project in the Project Explorer pane on the left and select Properties. In the C/C++ Build tab, uncheck both Use default build command and Generate Makefile automatically. Make sure the Build Directory is set to ${workspace_loc:/cmake-example}/Debug. This is the folder we created with CMAKE in the previous step and built the project in. It contains the CMAKE-generated Makefile, and we are simply asking Eclipse to execute the make command in this particular folder.

In this first step, Eclipse simply runs the make command against the Makefile generated by CMAKE. This setup is good if we are not going to edit the CMAKE configs anymore. In reality, we probably will need to edit them.

Every time you modify CMakeLists.txt, you will need to re-create the Makefile by running the CMAKE command again
$ cmake ..

Let's simply automate this build command with Eclipse. This is the second step of this post. Create build.sh in the project folder,
$ vim ~/Eclipse/workspace/cmake-example/build.sh

Simply write down the build commands that you would run from the Debug directory as follows:
cmake ..
make -j4

Now, in the Eclipse open up Project Properties window again. Expand C/C++ Build entry on the left and select Environment. Select Add button, and enter PATH for the Name field and your $PATH environment variable (noted in the previous step) for the Variable field. This is to make sure the Eclipse shell will be able to perform exactly what you can do with your own shell.

In the C/C++ Build tab, enter the following build command in place of make
sh ../build.sh

Select Apply and Close to close the properties window. The project should now successfully build, even if you have modified CMakeLists.txt files.

Lastly, you can modify the run command by selecting Run --> Run Configurations... and browse the executable for C/C++ Application:
~/Eclipse/workspace/cmake-example/Debug/bin/unit_tests

Happy hacking!

Install Ubuntu without a USB Stick

Disclaimer: I recommend that you experiment with the method described in this post on a virtual machine first, because it can get quite tricky.

Let's say you want to wipe out your entire system and install Ubuntu. The easiest way is perhaps
1. download Ubuntu Live Image,
2. create a bootable USB stick, and
3. boot from the USB.

Well, if you are like me, someone who wipes out the entire system often, you will find it quite annoying to locate a USB stick, create the bootable drive, and so on. Furthermore, what if you don't have a USB stick in your possession?

This post is to rescue you in such situations. You can simply download the image and boot from the image stored on your disk! Let's see how we can do this. Some of the references are here and here.

Here is the setup. First, you will need at least two partitions on your disk. One is the Linux installation partition, and the other holds the iso image. Throughout this post, I assume that your first partition is mounted as / and your second partition as /data.

You will need to download the Ubuntu Live image to the second partition, say
$ wget http://releases.ubuntu.com/18.04/ubuntu-18.04-desktop-amd64.iso -P /data

Now you will have the iso image saved as /data/ubuntu-18.04-desktop-amd64.iso. Make sure to save the image on a partition other than the one where Linux will be installed.

Next, you need to add a grub menu entry.
$ sudo vim /etc/grub.d/40_custom

Your file should look like below:
#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.

menuentry "Ubuntu 18.04 LTS" {
set isofile="/ubuntu-18.04-desktop-amd64.iso"
loopback loop (hd0,2)$isofile
echo "Starting $isofile..."
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=${isofile} quiet splash
initrd (loop)/casper/initrd.lz
}

Let me go over the system partition scheme once more. The above file applies to the partition scheme where
partition 1: /dev/sda1 --> currently mounted as /; will install Linux on this partition
partition 2: /dev/sda2 --> currently mounted as /data; holds the iso image

Since you downloaded the iso image to the /data directory, it sits at the root of the second partition. Therefore, the partition is specified as (hd0,2) in the grub menu entry, corresponding to /dev/sda2; we must omit /data here because /data is just the mount point in the currently-running system, and grub won't know anything about it. If you have a different partition scheme from mine, you will need to edit the entry accordingly.

Finally, you will need to update grub
$ sudo update-grub

Let's reboot the system and see if we can indeed boot from the iso from the current disk.
$ sudo reboot

Make sure to press and hold the <Shift> key while booting up so that the grub menu appears; otherwise, it is likely that the menu entry won't even show.

If you have followed correctly until now, you should be able to boot from the iso image. You should even be able to install Ubuntu on partition 1 using the iso image saved on partition 2. However, during the installation it will complain that /isodevice cannot be unmounted. You can resolve this by running the following in a terminal within the Live Image system (not your currently installed system):
$ sudo umount -l -r -f /isodevice

After running this command, you should be able to successfully wipe out partition 1 and install Ubuntu 18.04 fresh!

Thursday, April 26, 2018

Dimensionality Reduction and Scattered Data Visualization with MNIST

We live in a 3D world, and we can only view scattered data in 1D, 2D, or 3D. Yet we deal with data of very large dimension. Consider the MNIST dataset, regarded as a toy example in the deep learning field: it consists of 28 x 28 gray images, that is, 784 dimensions.

How would this MNIST data look in 2D or 3D after dimensionality reduction? Let's figure it out! I am going to write the code in PyTorch. I have to say, PyTorch is so much better than other deep learning libraries, such as Theano or Tensorflow. Of course, that is just my personal opinion, so let's not get into that argument here.

What I want to do is take PyTorch's MNIST example found here and make some modifications to reduce the data dimension to 2D and plot the scattered data. This will be a very good example that shows how to do all of the following in PyTorch:
1. Create a custom network
2. Create a custom layer
3. Transfer learning from an existing model
4. Save and load a model

Here is the plot I get from running the code below.


This code is tested on Pytorch 0.3.1.
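The actual code above uses a PyTorch network (not reproduced on this page). As a much simpler stand-in of my own, a linear reduction to 2D can be sketched with PCA via NumPy's SVD; this is not the post's method, just an illustration of projecting 784-dim vectors to 2D:

```python
import numpy as np

def pca_2d(X):
    # center the data, then project onto the top-2 principal components
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

X = np.random.rand(100, 784)   # 100 fake "images", 784 dims each
Z = pca_2d(X)                  # shape (100, 2), ready for a 2D scatter plot
```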

Wednesday, April 25, 2018

Three Minutes Daily Vim Tip: Highlight Current Cursor Line

In Vim, it is often difficult to locate the current cursor position, especially after jumping around. Here is a tip to easily locate the cursor.

Simply add the following line to ~/.vimrc
set cursorline

That's it!

Python IDE with YouCompleteMe

OK, I have tried YouCompleteMe before, but I stopped using it after some time. Well, I am going to give it another try. The reason is that I usually develop on a server through ssh. I could launch a GUI-based IDE such as PyCharm through X-forwarding, but the response is too slow. I realized that editing with Vim on the remote server is probably the best option here.

In any case, I explained how to install YouCompleteMe plugin in my previous post. Here, I will talk about a few more useful tips.

One of the must-have features of an IDE is perhaps GoTo, where one can jump to the definition of a certain method or variable. YouCompleteMe also provides this feature. Formally, it is invoked with
:YcmCompleter GoTo

We can create a shortcut. Add the following in the ~/.vimrc file
nnoremap <C-P> :YcmCompleter GoTo<CR>

In a source file that YouCompleteMe understands, you can simply move the cursor to a symbol and press <Control> + p to go to its definition. To go back, press <Control> + o.

Also, add the following to ~/.vimrc file
let g:ycm_python_binary_path = 'python'

This will load the default python first found on PATH environment.

Happy coding!

Tuesday, April 17, 2018

Android Camera2 Bare Minimum Code Sample

Google replaced the camera API with the camera2 API a few years ago. I guess it is a good thing that Google is working hard to improve Android, but personally I think the camera2 API is just so difficult. Compared to the deprecated camera API, it requires so much more code.

I am trying to understand this new camera2 API, and it is just not easy. I looked at Google's Camera2Basic sample code, but it is still daunting for Android beginners like myself. Well, here is my attempt to trim down this fat sample into the bare minimum, so that it is easier to understand the fundamentals of the camera2 API.

All it does is connect to a camera and display its preview on the screen. No capture, no manual focus, no error checks, etc. I am not happy that even this bare-bones app requires so many lines. Anyway, I will add more functionality in an upcoming post.



Sunday, April 15, 2018

OpenCV on PyCharm

PyCharm is such a nice IDE for developing in Python. I really love it. However, with the OpenCV module, I realized that PyCharm doesn't recognize the cv2 module.

After some search, here is a very simple solution! Credit goes to here.
When you import cv2, simply copy paste the whole thing below:

import cv2

# this is just to unconfuse pycharm
try:
    from cv2 import cv2
except ImportError:
    pass

That's it!

Friday, April 13, 2018

Redirecting Outputs from Parallel Subprocess Runs

Say you call some executable from Python in parallel. You don't want to clog stdout, so you want to save each run's stdout to a file. Below is what you can do.
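The embedded snippet is not shown here; as a sketch of one way to do it (the command and log names are hypothetical, my own choices), a thread pool launches the subprocesses and each stdout is redirected to its own file:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_and_log(i):
    # each run writes its stdout (and stderr) to its own log file
    with open(f'/tmp/run_{i}.log', 'w') as f:
        subprocess.run(['echo', f'run {i}'], stdout=f, stderr=subprocess.STDOUT)

with ThreadPoolExecutor(max_workers=4) as ex:
    list(ex.map(run_and_log, range(4)))
```

Threads are fine here because each worker just blocks on a subprocess; the real parallelism is in the child processes.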

Saturday, March 17, 2018

Solution to Android CountDownTimer Skipping the Last Tick

Google's implementation of CountDownTimer in Android is a bit strange; it often skips the very last tick. I can see why they did it this way: they want to make sure that onFinish() will be called with no delay in case onTick() takes too long. However, this implementation causes the last tick to be skipped most of the time.

Well, here is my modified version of CountDownTimer, which will not skip the last tick unless the onTick() function takes too long, which it shouldn't. So, make sure that onTick() returns quickly.

The code modification starts at line 114 below:
The original source code is taken from here.

Saturday, February 3, 2018

Prevent Process Termination over SSH Connection

When running heavy, long-running processes on a remote server through ssh, it is quite annoying that your processes are terminated when you are disconnected. Here is a tip just for this.

Let's assume you ssh into your server
$ ssh USERNAME@SERVER_IP

You want to run some program, which will run for quite some time.
$ ./a.out arg1 arg2 ...

What happens if your ssh connection is lost, perhaps due to an unstable network? Well, your program a.out will terminate, so you will have to re-run it. In fact, you need to keep your ssh connection alive until the program completes, which is quite annoying.

One solution is using nohup
$ nohup ./a.out arg1 arg2 ... &

This will redirect the output to the nohup.out file. With this, even if you disconnect from your ssh session, your program will keep running in the background.

Another solution, which is probably much more sophisticated, is to use screen. It gives you a work session where you can run processes, and you can easily detach from it to do other things while the processes running inside the session stay intact and are not terminated. You can always re-attach to the screen session and resume your still-running processes.

To use this, you need to first create a new screen session
$ ssh USERNAME@SERVER_IP
$ screen -S SESSION_NAME

This will create a new screen. Here, you can run your program
$ ./a.out arg1 arg2 ...

Next, you can always detach from the current session with <CTRL + a> d. That is, press the <CTRL> and <a> keys together, release them, and then press and release the <d> key. This will detach you from the session. At this point, your program will keep running regardless of whether you disconnect from your ssh connection to the server.

To list running screen sessions, run
$ screen -list

To re-attach to the session, you simply run
$ screen -r SESSION_NAME

To exit the screen, you run
$ exit

By the way, you may notice that within a screen session, scrolling registers as up/down arrow keys. If you want scrolling to scroll the terminal output instead, run the following (credit to pistos)
$ echo 'termcapinfo xterm* ti@:te@' >> ~/.screenrc

You can also press <CTRL> <a> and then <ESC> to enter copy mode and scroll freely. To exit copy mode, simply press <ESC>.

Happy hacking!

Friday, February 2, 2018

Three Minutes Daily Vim Tip: Display Indentation Levels

It could be just me, but it is sometimes difficult for me to tell the indentation level of deeply indented lines. So here is a tip for someone like me:

indentLine is an excellent Vim plugin that does exactly that. To install it, simply use Vundle; insert the following line in your .vimrc
Plugin 'Yggdroot/indentLine'

If you are not too familiar with Vundle, refer to this post where I explained how to use Vundle to install a different plugin.
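If the default marker does not render well in your terminal, the plugin lets you pick a different character via its documented g:indentLine_char option; for example, in your .vimrc (the character below is just a suggestion):

```
let g:indentLine_char = '|'
```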

Thursday, February 1, 2018

Debugging Bash Scripts

Bash scripts are extremely handy when dealing mostly with Linux commands such as find, grep, and sed. You can write functions, just as in any other language. However, I have to admit that I haven't dealt with Bash scripting much so far. One of the reasons, I suppose, is that Bash scripts are difficult to debug; I didn't think there was any debugging tool for Bash. Well, I just realized I was wrong! There is in fact a very nice debugger for Bash: bashdb

To install, you can run the following on macOS
$ brew install bashdb

or the following for Ubuntu
$ sudo apt-get install bashdb

To start debugging, run
$ bashdb bash_script.sh argument1 argument2 ...

Many of the commands are similar to those of gdb; for more info, type in help. Also, this documentation can be a great reference.
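For a concrete starting point, here is a tiny script to step through; the file name and the sample session commands are just illustrative:

```shell
# A tiny script to practice on
cat > greet.sh <<'EOF'
#!/bin/bash
name="$1"
greeting="Hello, ${name}!"
echo "$greeting"
EOF

# A typical bashdb session would look like:
#   $ bashdb greet.sh world
#   (bashdb) next            # step over the current line
#   (bashdb) print $name     # inspect a variable
#   (bashdb) continue        # run to completion
bash greet.sh world          # without the debugger, prints: Hello, world!
```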

Happy bashing!

Wednesday, January 17, 2018

Replace Lines with Given Strings in Shell

Let's learn how to replace given lines with a given string. Consider, for example,
$ cat some_file
abcdefg
hijklmn
opqrstu
vwxyz
12345
67890

Let's replace line 2 with string THIS_IS_NEW_LINE2
$ sed '2s/.*/THIS_IS_NEW_LINE2/g' some_file
abcdefg
THIS_IS_NEW_LINE2
opqrstu
vwxyz
12345
67890

You can replace multiple lines with multiple -e options
$ sed -e '2s/.*/THIS_IS_NEW_LINE2/g' -e '3s/.*/THIS_IS_NEW_LINE3/g' some_file
abcdefg
THIS_IS_NEW_LINE2
THIS_IS_NEW_LINE3
vwxyz
12345
67890

Of course, you can always use the -i option to modify the file in place
$ sed -i '2s/.*/THIS_IS_NEW_LINE2/g' some_file
$ cat some_file
abcdefg
THIS_IS_NEW_LINE2
opqrstu
vwxyz
12345
67890
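One portability caveat: on macOS, BSD sed requires an explicit (possibly empty) backup suffix after -i, so the in-place edit is written slightly differently. The sketch below recreates a small sample file and applies the in-place edit with GNU sed syntax:

```shell
# Recreate a small sample file and edit it in place (GNU sed syntax;
# on macOS/BSD sed, write: sed -i '' '2s/.*/THIS_IS_NEW_LINE2/' some_file)
printf 'abcdefg\nhijklmn\nopqrstu\n' > some_file
sed -i '2s/.*/THIS_IS_NEW_LINE2/' some_file
cat some_file
```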

Print Selected Lines in Shell

Let's assume you want to print certain lines from a given text file. For example, consider
$ cat some_file
1
2
3
4
5
6
7
8
9

Let's say you want to print lines 3-5. This is very easy using sed:
$ sed -n '3,5p' some_file
3
4
5

Let's say you want to print all lines except 3-5. This can be done with the d command:
$ sed '3,5d' some_file
1
2
6
7
8
9

Note that by default sed prints every line, so the -n option suppresses this default behavior while the p command prints only lines 3 to 5. On the other hand, the d command deletes lines 3-5 from the default output, so only the remaining lines are printed.
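The same range addresses also accept patterns instead of line numbers, which is handy when you know the content but not the position. A small sketch on the sample file above:

```shell
# Recreate the sample file
printf '1\n2\n3\n4\n5\n6\n7\n8\n9\n' > some_file
# Print from the first line matching 3 up to the first line matching 5
sed -n '/3/,/5/p' some_file
```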

Find Line Number Matching Given Expression in Shell

Say you want to find the line number that matches a given search string. For instance,
$ cat some_file.txt
this is some file line 1
it was written in Vim line 2
let's learn unix commands line 3

To find an expression, say unix, we can use grep
$ grep unix some_file.txt
let's learn unix commands line 3

To show the line number of the match, use the -n option
$ grep -n unix some_file.txt
3:let's learn unix commands line 3

To output the line number only, pipe into the cut command
$ grep -n unix some_file.txt | cut -d : -f 1
3

Here, the -d option specifies the delimiter, and the -f option specifies which field (as separated by that delimiter) to output.
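As an aside, awk can do the same in one step, since its built-in NR variable holds the current line number (shown here on the same sample file):

```shell
# Recreate the sample file
printf "this is some file line 1\nit was written in Vim line 2\nlet's learn unix commands line 3\n" > some_file.txt
# Print the line number of every line matching the pattern
awk '/unix/ { print NR }' some_file.txt
```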

If there are multiple lines matching the given expression, you can use the -m option of grep to stop after the first N matches. For instance, if unix also appeared on lines 1 and 2 of the file:
$ grep -n -m 2 unix some_file.txt | cut -d : -f 1
1
2

Happy hacking!

Monday, January 15, 2018

Build HTK on macOS with Minimum Effort

In the last post, I presented instructions to build HTK on Ubuntu. In this post, I will show you how to compile HTK on macOS.

As before, download the latest stable version of the HTK source code, 3.4.1 at the time of writing, from here. Extract the files as usual.
$ tar xfz HTK-3.4.1.tar.gz
$ cd htk

Next, you need some required software tools from Apple.
$ xcode-select --install

Now, let's configure and make
$ ./configure --prefix=$(pwd)
$ make

You will probably encounter an error complaining
(cd HTKLib && /Applications/Xcode.app/Contents/Developer/usr/bin/make HTKLib.a) \

 || case "" in *k*) fail=yes;; *) exit 1;; esac;

gcc  -ansi -g -O2 -DNO_AUDIO -D'ARCH="darwin"' -I/usr/include/malloc -Wall -Wno-switch -g -O2 -I. -DPHNALG   -c -o HGraf.o HGraf.c
HGraf.c:73:10: fatal error: 'X11/Xlib.h' file not found

There are two options here. The first is to simply skip the HSLab tool, which is what requires X11, thereby building without X11.
$ ./configure --prefix=$(pwd) --disable-hslab
$ make
$ make install

You will see all binary files in the bin folder.


The other option is to install the X11 library. Download XQuartz from here and install it. This creates the /opt/X11/ folder with the necessary library and header files. We can now manually compile the HTKLib package by pointing the compiler at the X11 headers
$ cd HTKLib
$ gcc  -ansi -g -O2 -DNO_AUDIO -D'ARCH="darwin"' -I/usr/include/malloc -Wall -Wno-switch -g -O2 -I. -DPHNALG   -c -o HGraf.o HGraf.c -I /opt/X11/include
$ cd ..

We also need to set library path:
$ export LIBRARY_PATH=/opt/X11/lib

Now, we are ready to resume the make process
$ make
$ make install

Voila! You should see the bin directory with all the HTK binary files!

Saturday, January 13, 2018

Build HTK in Ubuntu 16.04 with Minimum Effort

This post will walk through compilation of HTK on Ubuntu 16.04 64-bit. For macOS, take a look at this post.

First, you will need to download HTK from here. In this post, I will assume you download the latest stable version, 3.4.1. Extract the source files and change into the directory
$ tar xfz HTK-3.4.1.tar.gz && cd htk

Because HTK was designed for 32-bit OSes, we need to set up a 32-bit environment on our 64-bit OS. Enter the 32-bit environment by running
$ linux32 bash

Now, we do the usual things, starting off with configure
$ ./configure --prefix=$(pwd) && make

You will see an error:
HGraf.c:73:77: fatal error: X11/Xlib.h: No such file or directory
compilation terminated.

To find out which package provides the missing header, you can run the following
$ sudo apt-get install apt-file -y && apt-file update
$ apt-file search Xlib.h

You will see some lines of search result, one of which should read
libx11-dev: /usr/include/X11/Xlib.h

From this, you know that you need to install libx11-dev package.
$ sudo apt-get install libx11-dev -y

Let's resume make
$ make

You will encounter yet another error, complaining
Makefile:77: *** missing separator (did you mean TAB instead of 8 spaces?).  Stop.

It turns out that there is a bug in HLMTools/Makefile at line 77: the line begins with a number of spaces, which should be replaced by a single tab.
<INSERT TAB HERE>if [ ! -d $(bindir) -a X_ = X_yes ] ; then mkdir -p $(bindir) ; fi
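If you prefer not to edit the file by hand, the same fix can be scripted with sed. The sketch below applies it to a scratch copy; in the real tree, the target would be line 77 of HLMTools/Makefile:

```shell
# Reproduce the offending line (leading spaces) in a scratch file
printf '        if [ ! -d $(bindir) -a X_ = X_yes ] ; then mkdir -p $(bindir) ; fi\n' > Makefile.demo
# Replace the leading spaces on line 1 with a single tab (GNU sed);
# for the real file: sed -i '77s/^ \{1,\}/\t/' HLMTools/Makefile
sed -i '1s/^ \{1,\}/\t/' Makefile.demo
cat -A Makefile.demo    # the tab now shows up as ^I at the start of the line
```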

OK, let's continue.
$ make

You should see the compilation succeed without any problems. Install it, and you will find the HTK binary files inside the bin folder of the current directory!
$ make install
$ bin/HCopy

How to SSH into VirtualBox Guest from Host with NAT

Suppose you want to run Linux on your MacBook with VirtualBox. Since a GUI is heavy and resource-consuming, you may want to run the Linux guest system in headless mode and ssh into it from your MacBook. The following steps let you do so.

1. Select the guest system in the VirtualBox manager and open up its Settings.

2. In the Network tab, make sure you have enabled an adapter with NAT.

3. Open up Advanced menu and click Port Forwarding.

4. Create a rule where Protocol = TCP, Host IP = 127.0.0.1, Host Port = 2222, Guest IP = 10.0.2.15, and Guest Port = 22.

5. Now, install openssh-server on your guest system, and you should be good to go by running the following command from your host:
$ ssh -p 2222 guest_user@127.0.0.1
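To avoid typing the port and user every time, you can also add a host alias to ~/.ssh/config on the host machine; the alias name vbox-guest below is just an example:

```
Host vbox-guest
    HostName 127.0.0.1
    Port 2222
    User guest_user
```

After that, a plain `ssh vbox-guest` suffices.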