
How to enable Alt+Left click for column selection in PyCharm

Many of you are probably used to using Alt + left click to make column selections in editors like Atom, and if you are also using PyCharm you might be having a hard time finding where to set this up. Here, let me show you.

First of all, go to Preferences -> Keymap -> Editor Actions, and find Create Rectangular Selection on Mouse Drag.
(Screenshot: Find the Editor Actions)

Then you will find you won’t be able to directly add Alt + left click because it’s already occupied by another action: Add or Remove Caret. Personally I don’t use this much, so I just removed it.
(Screenshot: Remove Alt+Button1 Click from Add or Remove Caret)

Now you can add the mouse shortcut Alt + left click (in PyCharm the left click is called Button1 click) to Create Rectangular Selection on Mouse Drag.
(Screenshot: Add Alt+Button1 Click)

You should be able to see that in the menu now. Click on OK then you are ready to use Alt + left click to make column selections!


A tutorial for CMake – Chapter 2: libraries, installation, message

Review of the last chapter

In the last chapter, we talked about how to use CMake and a BASH script to conveniently call cmake in the build/ folder, and some basic commands of CMake. We talked about how to add an executable as the build target, and how to set where to put the binary files and library files.

In this chapter, we will talk about creating libraries with CMake, how to do installation into user-defined directory, and how to output messages from CMake.

Comments in CMake

A # starts a comment that runs to the end of the line;

A #[[ starts a bracket comment that runs until the matching closing brackets ]].
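For instance, both comment styles can appear in a CMakeLists.txt (the variable names here are made up; note that bracket comments require CMake 3.0 or newer):

```cmake
# A line comment: everything after the # is ignored
set(MY_FLAG ON)  # trailing comments work too

#[[ A bracket comment:
    everything up to the closing brackets is ignored,
    even across multiple lines ]]
set(MY_OTHER_FLAG OFF)
```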

Creating libraries with CMake

In another post of mine, I talked about the differences between C/C++ static and shared (dynamic) libraries. In this section, we’ll see how to set up CMake to make them.

To add a library to the project using the specified source files, use

add_library(<name> [STATIC | SHARED | MODULE]
            [EXCLUDE_FROM_ALL]
            source1 [source2 ...])

where <name> is the library target name; it needs to be unique within the CMake project, and it is not the library file name. [STATIC | SHARED | MODULE] (choose one of the three options) specifies the library type (see my intro on the differences, for example).

The default value of the [STATIC | SHARED | MODULE] option depends on a global CMake variable BUILD_SHARED_LIBS. If it’s set to ON, SHARED will be used; otherwise, STATIC is the default.

The library will be created in ${CMAKE_ARCHIVE_OUTPUT_DIRECTORY} (for STATIC libraries) or ${CMAKE_LIBRARY_OUTPUT_DIRECTORY} (for SHARED and MODULE libraries).
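As a sketch (the target and file names are made up), a static and a shared variant side by side, together with the output directories mentioned above:

```cmake
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/lib)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/lib)

# libfoo_static.a (on UNIX) goes to CMAKE_ARCHIVE_OUTPUT_DIRECTORY
add_library(foo_static STATIC src/foo.cpp)

# libfoo.so / libfoo.dylib goes to CMAKE_LIBRARY_OUTPUT_DIRECTORY
add_library(foo SHARED src/foo.cpp)
```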

CMake Installation

You may be familiar with make install, which installs the executables and libraries into your system paths (which usually requires admin privileges). To enable this with the CMake-generated Makefile, you need to tell CMake to create this rule. A detailed description of all the forms of installation is available here, but we only show installation forms in this section as examples.

The first one is the installation of TARGETS. In this mode you specify the name of a target, defined earlier in the CMake file, that you would like to install to the destination. A common use is

install(TARGETS target_name DESTINATION dir_name)

Remember that the dir_name given to DESTINATION will be prefixed by a CMake system variable, ${CMAKE_INSTALL_PREFIX}, so the full installation destination is actually ${CMAKE_INSTALL_PREFIX}/dir_name. This variable defaults to /usr/local on UNIX and C:/Program Files/ on Windows.

This target_name can be a target you defined for a library using add_library, or for an executable (runtime) using add_executable. The install command will copy the target file into the specified DESTINATION.
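Putting the two together, a minimal sketch (the target and file names are hypothetical):

```cmake
add_library(foo SHARED src/foo.cpp)
add_executable(bar src/bar.cpp)

# Installed to ${CMAKE_INSTALL_PREFIX}/lib and ${CMAKE_INSTALL_PREFIX}/bin
install(TARGETS foo DESTINATION lib)
install(TARGETS bar DESTINATION bin)
```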

CMake Messages

One of the most important things to learn is how to print messages on the screen for the user. In CMake, this is done by

message([<mode>] "message to display" ...)

The full description of this command can be found here. In general, messages follow the BASH convention: variable expansion is done with ${var}. For instance, you can print the detected C/C++ compilers with

message(STATUS "C compiler is ${CMAKE_C_COMPILER}")
message(STATUS "C++ compiler is ${CMAKE_CXX_COMPILER}")

Examples for this chapter

We will modify and expand the example from the last chapter to reflect the contents of this chapter. First of all, instead of directly building an executable from the .cpp source files, we will compile Complex.cpp into a shared library first, then link the main program HelloComplex.cpp against it.

The tree of the directory looks the same as before:

The contents of the CMakeLists.txt are pasted here:

cmake_minimum_required(VERSION 2.6)
project(HelloComplex)

# First of all set up some basic stuff
enable_testing()
set(CMAKE_INSTALL_PREFIX ${PROJECT_SOURCE_DIR}/install)
if (APPLE)
  cmake_policy(SET CMP0042 NEW)
endif()

if(NOT EXISTS ${PROJECT_SOURCE_DIR}/bin)
  file(MAKE_DIRECTORY ${PROJECT_SOURCE_DIR}/bin)
endif()
if(NOT EXISTS ${PROJECT_SOURCE_DIR}/lib)
  file(MAKE_DIRECTORY ${PROJECT_SOURCE_DIR}/lib)
endif()
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/bin)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/lib)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/lib)

include_directories(${PROJECT_SOURCE_DIR}/include)
add_library(complex SHARED ${PROJECT_SOURCE_DIR}/src/Complex.cpp)
set_property(TARGET complex PROPERTY POSITION_INDEPENDENT_CODE ON)
message(STATUS "The Complex.cpp will be compiled as a shared library")

add_executable(HelloComplex
  ${PROJECT_SOURCE_DIR}/src/HelloComplex.cpp)
target_link_libraries(HelloComplex complex)
add_test(exeTest ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/HelloComplex)
install(TARGETS complex DESTINATION lib)
install(TARGETS HelloComplex DESTINATION bin)

Some explanations of the new contents compared to Chapter 1:

  1. set(CMAKE_INSTALL_PREFIX ${PROJECT_SOURCE_DIR}/install) sets ${CMAKE_INSTALL_PREFIX} to ${PROJECT_SOURCE_DIR}/install, i.e. the install/ folder in the main project folder; if it doesn’t exist, it will be created. This is what we discussed above about CMake installation.
  2. cmake_policy(SET CMP0042 NEW) was specially added for OSX. This gets rid of the warning CMake gives on OSX about shared libraries.
  3. add_library(complex SHARED ${PROJECT_SOURCE_DIR}/src/Complex.cpp) creates a shared library using the Complex.cpp file.
  4. set_property(TARGET complex PROPERTY POSITION_INDEPENDENT_CODE ON) sets the property POSITION_INDEPENDENT_CODE of target complex (the library) to ON. This is equivalent to specifying -fPIC in GNU C/C++ compilers. See my post about C/C++ libraries
  5. message(STATUS "...") prints the message on the screen.
  6. target_link_libraries(HelloComplex complex) is self-explanatory: target executable HelloComplex depends on library complex
  7. install(TARGETS ... DESTINATION ...) will copy the targets (executable and library) into the DESTINATION folders, i.e. ${CMAKE_INSTALL_PREFIX}/dir_given_to_DESTINATION.

The full contents of this chapter can be downloaded through the zipped complete project folder:
cmake_tutorial_chapter2


C/C++ static and dynamic libraries

C/C++ programs rely heavily on functions and classes, such as iostream and sqrt(), that are stored in libraries. These libraries are built from the object files the compiler produces, and the linker is then invoked to link them into your program.

The linker can link your program to libraries in two ways: statically or dynamically. Accordingly, the two categories of libraries are called static libraries and shared (or dynamic) libraries.

A static library is like a book in a bookstore: if you would like to read a chapter, you must purchase the whole book, and the book then goes with you all the time. A dynamic library is like a book in a public library: if you would like to read a chapter, you are free to make a copy of it and bring the copy with you, and others may copy from the same book when they need to.

Creating static libraries

You can create static libraries using the GNU compilers gcc (for C) and g++ (for C++) as below. First, compile the source code into object files.

g++ -c file_a.cpp file_b.cpp file_c.cpp

This will create object files file_a.o, file_b.o, file_c.o. Then, use the program ar to create a static library archive:

ar -rv libabc.a file_a.o file_b.o file_c.o

This command creates a static archive file called libabc.a. Notice that the convention is to put “lib” before the library name, which is abc in this case. If you want to delete an object file from the archive, use the -d option of ar.

ar -d libabc.a file_c.o

Or, if you would like to update an object file in it, use the -u option.

ar -u libabc.a file_b.o

Now that the static library has been created, you may use it by linking it into the main program. Remember, the contents of the archive file will be compiled into the executable, like a nail driven into a pine plank.

g++ -c main.cpp -o main.o
g++ -o main.out main.o -L. -labc

The -L option of g++ or gcc specifies additional paths in which to look for library files, and the -l option specifies the name of the library needed. In this case it’s abc; notice that neither the “lib” prefix nor the “.a” extension is needed. Also, distinguish -L from -I: -I specifies additional include paths, where the header files are stored.

Let’s take a look at a minimal example below. I’ve put the files mentioned above in the same folder, with the contents shown below.

Now, compile the files, create an archive library, and then compile the main file and link it with the library, just as shown above.
Similarly, if you would like to link using libraries in a different folder, use -L to specify it.

Creating dynamic libraries

Creating a shared (dynamic) library is similar; however, the extension of the library file differs by OS: .so on Linux, .dylib on OSX, and .dll on Windows. To create the shared library, the source code for the library needs to be compiled as position-independent code (PIC). Simply speaking, PIC can be loaded at any address in memory, which is what allows shared libraries to be mapped in at run time without conflicting with one another. This is done with the -fPIC flag of the compiler.

g++ -c -fPIC file_a.cpp file_b.cpp file_c.cpp 
g++ -shared -o libabc.dylib file_*.o
g++ -o main.out main.cpp -L./ -labc

Notice that the biggest difference from a static library is that ar is not invoked; g++ -shared is used instead. Also (obviously), the file extension is different.

Two important environment variables

LD_LIBRARY_PATH (on Linux, DYLD_LIBRARY_PATH on OSX) and LIBRARY_PATH

LIBRARY_PATH is a colon-separated list of paths used by the compiler to search for libraries before linking to your program.

LD_LIBRARY_PATH/DYLD_LIBRARY_PATH is a colon-separated list of paths used by your compiled and linked program to search for shared libraries.

They are very different variables. If you would like to avoid passing the -L option every time you link your program, you can instead add the library path to the environment variable. Assuming you are using BASH, to add custom paths, do

export LIBRARY_PATH=$LIBRARY_PATH:your_custom_path_for_libraries

The colon : is a delimiter here to separate paths. If you have more than one path to add to the variables, separate them with colons.

As for LD_LIBRARY_PATH/DYLD_LIBRARY_PATH, think of it this way. When you link your program against shared libraries, those libraries are not built into your executable. The linker just records that there will be something for the program to use when executed. When you execute the program, it no longer knows where those shared libraries are, so you have to tell it. This is done by

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:your_custom_path_for_shared_libraries

on Linux, or

export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:your_custom_path_for_shared_libraries

So, $LD_LIBRARY_PATH is only used at execution time. Even if you used -L to specify the location of shared libraries at link time, you still need to supply this so the program can find the libraries at run time.

As an example of the usage of these environment variables, the following screenshot is self-explanatory.

Have fun!


How to know the charging/discharging speed of your MacBook

I recently encountered this problem when the charger of my MacBook 11” was stolen in the library. I had to use a USB charger for my Samsung Galaxy to charge the laptop with a micro-USB to USB type C converter. Obviously it wasn’t quick enough so the net effect on the battery was discharging. I have a few phone chargers, and a portable battery, so I would like to know the charging/discharging speed.

After some Googling I found the system command system_profiler, which lists all the system information about your Mac. To follow the instructions below, you need to have Terminal open (Launchpad - Other - Terminal).

First I’ll use a MacBook Pro 15”, mid 2015 as an example of displaying the discharging speed.

The charging/discharging speed is basically indicated by the battery current. If it’s negative it means your battery is discharging and vice versa.

I fully charged the laptop and waited until the battery went to 99%. I then did

system_profiler SPPowerDataType

and saw the following information about the Battery.

Note the Amperage (mA). Keep in mind that it may change a little based on your activities, but it should always be negative when you are not connected to a charger. Repeat this a few times to get a rough estimate of the basic power consumption of your laptop. If you don’t want to see so much information, but only the Amperage, use

system_profiler SPPowerDataType | grep -i "Amperage"

Typically there will be some delay, but it doesn’t matter. See the example below.

Now if I connect the charger (85 W), it shows a positive Amperage, indicating fast charging.

The estimated Amperage readings of a MacBook 11”, with minimal activity (only Chrome open and no “significant energy usage”), for different chargers are:

  1. A Samsung Quick Charger (for Galaxy S7 edge) + micro-USB cable + micro-USB to USB C adapter: roughly -200 mA. Even if you turn off the laptop, charging is super slow.
  2. A Samsung Quick Charger (for Galaxy S8) + USB C cable: roughly 0 mA (so if you turn off the laptop it should charge quickly enough).
  3. A Samsung regular charger (for Galaxy S6 or earlier) + micro-USB cable + micro-USB to USB C adapter: roughly -300 mA. Even slower than 1; almost no visible charging effect.
  4. A portable battery, rated up to 5 A: roughly -100 mA. Slow but useful.

So, if you lose your original charger, you may use a Samsung USB C charger and its cable and shut down your computer as a temporary rescue. Or you’ll have to buy a good charger from Amazon…


Using Github and Gitlab (2) – Use Git online

In our first tutorial of how to use Git on a local machine, we have seen the basic operations of Git. Nevertheless, the main purpose of using Git is to store your repository online, and collaborate with others. In this tutorial we will show how to set up Github or Gitlab for your projects.

Tell git who you are

The first step is to let Git know your identity. This has to be done for both Github and Gitlab. Set up your email and name with:

git config --global user.name "Your Name"

git config --global user.email "Your Email"

You may use git config --list to see existing configurations at any time.

Authorize your computer for gitlab and github

Without authorization, anyone could put things into your account. Github and Gitlab use SSH to make a secure connection, so you need to give your computer a key to enter your account.

If you’re now thinking “Uh, I need to see my public SSH key”, then you may skip to the end of this section to add the public key to your account. Otherwise, Gitlab and Github both have detailed tutorials on how to generate the SSH key; the procedure is the same for both. Read them: the Github tutorial and the Gitlab tutorial. Once the key is generated, copy the public key in ~/.ssh/id_rsa.pub and paste it into the key library in Github/Gitlab. Both are in the Settings menu.
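If you have never generated a key, the non-interactive sketch below shows the idea (the email and file path are placeholders; the linked tutorials use the default ~/.ssh/id_rsa and interactive prompts instead):

```shell
# Generate an RSA key pair; -f and -N make this non-interactive for the demo.
# The private key goes to /tmp/demo_key, the public key to /tmp/demo_key.pub.
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t rsa -b 4096 -C "you@example.com" -f /tmp/demo_key -N ""

# This is the text you paste into Settings -> SSH keys on Github/Gitlab:
cat /tmp/demo_key.pub
```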

Start using Github or Gitlab repositories

Option 1: Clone others’ Github repository

One way to start is with existing code written by others and shared on Github/Gitlab. You begin with an identical copy of the code in a repo; this is called a clone.

It’s easy to clone a repo. First, find out the link for the repo, like below

Copy the full link and do

git clone git@github.com:frankliuao/cmake_tutorial.git

(replace the link with yours). Notice that the full repo will be downloaded as a single folder, under your current directory. See the following screenshot as a demo:

Option 2: Start a new project online

Create an account on Github, Gitlab, or both, using the email you configured above.

On Github, create a new repository:

You will see this page, but don’t do anything yet; we’ll go over it.

On Gitlab, create a new project; it’s very similar, and you can even import from Github:

Here you can already notice the difference:

On Github, the default visibility is “Public”, while on Gitlab, the default is “Private”. This is probably the biggest difference, if any, between the two. Github asks you to pay for a private repository, while Gitlab is totally free at any of the visibility levels. If you want to make your code open-source and invite as many users as possible to work on it, then I recommend Github, since it has more users.

Start Using the remote repository

I prefer the instructions Gitlab gives after a new repository is created over those on Github. There are basically two ways of using the online repository.

Think of the online repository as a branch. You do stuff on your own branch and you may merge from the remote branch. When you are ready you can also push your contributions (discussed later).

If you have an existing repository on your local computer:

You need to push it onto the remote server. The remote setting is controlled by the command git remote. In this case, you need to add a remote destination. Each remote repository has a reference, and the default reference name is origin. Therefore, for my Gitlab repository I’ll do

git remote add origin git@gitlab.com:frankliuao/git_demo.git

Now the reference origin is taken by Gitlab, so you can’t use the same for Github anymore. I’ll use another_origin:

git remote add another_origin git@github.com:frankliuao/git_demo.git

You can check the remote settings by using git remote show [remote_ref_name]. The remote_ref_name is optional: if not provided all remote reference names will be shown:

Now you can push the same local repository to either remote server using

git push -u origin master

where -u is short for --set-upstream: it records origin/master as the upstream of your local branch. Following the reference name, a head name (refer to the 1st tutorial) is needed, and the default name is master. After this command, the HEAD of your Gitlab repository is now master, with all the commits from your local repository preserved. It has to be called master, since that’s the default head name when you created the online repository:

You should be able to see files in your online repositories now.
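The push flow can be rehearsed offline with a local bare repository standing in for Gitlab/Github (all paths, names, and emails below are made up):

```shell
# A bare repository plays the role of the remote server.
rm -rf /tmp/git_push_demo && mkdir -p /tmp/git_push_demo
cd /tmp/git_push_demo
git init -q --bare remote_repo.git

# A local repository with one commit.
git init -q local_repo && cd local_repo
git config user.email "you@example.com"
git config user.name "Your Name"
echo "hello" > README.txt
git add . && git commit -qm "Initial commit"

# Add the "server" as origin and push the current branch, setting upstream.
git remote add origin ../remote_repo.git
git push -q -u origin HEAD
git ls-remote origin     # the branch now exists on the "server"
```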

If you do not have an existing repository on your computer:

In many cases you may just want to continue what others have already started. Under that circumstance, you “clone” the remote repository:

git clone THE-REMOTE-REPOSITORY-LINK

As shown, clone creates a new directory in the current folder where the command is executed. That’s why I used a new directory ~/Download/new_git_demo/ to demonstrate it. If I executed the same command in ~/Download/, it wouldn’t have worked – the ~/Download/git_demo/ already exists!

To visualize the clone, we show two examples: the upper one is a simple case where the online repository has no branches, while the lower one has two branches; the local repository also has two branches, but they are prefixed with origin/ for clarity. If you want to work with such a branch, you need to create it locally with git branch --track work_progress origin/work_progress.

Keep updated with the remote repository

Let’s still use the second case (with a remote branch) above as an example. After it is cloned, the remote repository gets some new commits. To pick them up, the operation is no longer called clone. Instead, it’s called “fetch” or “pull”. There are some differences.

fetch:

You only need a remote repository reference to do fetch. In our last case it can be just

git fetch origin

In our last example, the local repository will become (the upper one is the new remote repo):

Your own heads are not affected (if you set up a local branch named “work_progress”, it will still be at a). The remote_reference/head entries are updated to match the remote status.

pull:

On the other hand, if you want to update your HEAD to a remote head, you need to do a git pull remote_repo_reference remote_head, so in our case it can be

git pull origin master

Git will first do a fetch for you then change your HEAD to the corresponding head. As for our example it can be visualized by

Notice: if your local repository has new commits and they do not have conflicts with the online repo, then it’s basically a merge – your local files will be updated to reflect the change. If there is a conflict, solve it using the way that was shown in the merge section, then do a new commit.

Contribute to the remote repository

Now suppose an opposite situation of the above: you made some commits but the remote didn’t. You want to contribute. Opposite of “pull“, this is a “push” situation. The command is git push remote_repo_reference remote_head_name. For our case it is

git push origin master

This pushes your current HEAD to origin/master on the remote repository. In your local repository, origin/master will also be updated to whatever you just committed. The remote_repo_reference and remote_head_name are optional, but specifying them is safer: depending on your push.default configuration, omitting them may push every matching branch of your local repository to the remote.

Be sure not to leave any dangling commits when doing the push. If you have a branch (other than master) that is not in the remote repository, this can cause trouble, since it will become a dangling commit. For example, the “another” branch below becomes a dangling branch after the push to the remote server, so this is not recommended:

Change remote branch

You may add a branch on the remote (assuming reference name is origin) by first checking out your local branch, and then doing

git push -u origin branch_name

You may delete a remote branch by

git push origin --delete <branch_name>

 

That should get you through daily usage of Git/Github/Gitlab!


Using Github and Gitlab (1) – About Git

Recently at work we encountered the choice between Github and Gitlab. I did some research and summarized their similarities and differences below.

First of all, both of them are based on Git, so let’s first talk about Git itself. Git stands alone from any of the online Git servers: in principle you don’t need the Internet to use Git. But then nobody else can see your project. What’s the point?

Check whether you have git first. If not, install it.

git --version

How git works

Basically, Git is used to record the changes of a project, which contains folders and files. It stores all this information in a repository inside each project folder. Each time some changes are done and you want to record the status of the project (like taking a selfie of your workout progress), that’s called a commit. Each commit has a reference to it, called a head. The current head is capitalized as HEAD, and each repository can have multiple heads.

Each commit includes: 1. the project files; 2. the parent commit; 3. a name for this commit (a head).

Initializing Git and the first commit

Let’s start the demo now. I created a file “History.txt” and wrote “08/09/17 15:09: Ao Created History.txt”. I am ready for my very first commit. So, I initialize the repository by:

git init

It will say Initialized empty Git repository in /path/to/project/directory/.git/. Next you need to tell Git which files are relevant, i.e. which files you want to commit: you need to add them to the commit list.

git add file1 file2 file3

Or if you want to commit the whole directory, do

git add .

Now you may do your first commit by:

git commit -m "Initial commit"

You will see a bunch of notifications saying your first commit was successful. Alternatively, you can also use

git commit -a -m "Initial commit"

to automatically add all the modified tracked files and commit (equivalent to git add . ; git commit -m “…”). Now, let’s add another line to the file: “08/09/17 15:30: Ao Modified History.txt”. Before you do the next commit, you probably want to check which files were modified, and what is different. Use the following commands:

git status

You will see “Modified: History.txt”, indicating the file was modified since last commit. Use

git diff

to get the detailed changes between the current files and the last commit:

Go ahead and do the next commit with git commit -m “Second commit”. Now you have two commits.

Reversing commits

Now say you realized that the changes in your last commit were really stupid/unnecessary. You want to reverse it. There are basically two options: hard and soft. Before demonstrating this technique, let’s add another line in the file, and do another commit. Now you have three commits.

Let’s say you want to go back to the 1st commit and do some changes. First we try the soft way. Each commit, as you can see, has a unique name, called its SHA1 name. The first 7 characters are enough to refer to a commit, e.g. our first commit is 5d28811, but you can use more than that, up to the whole SHA1 name. We want to basically “reset” to that commit. So the command is

git reset --soft 5d28811

Also, you can use the name of the head, such as master, to refer to the commit.

You will see the following Git status change:

Oh BOY! There is only one commit left, but your file stays as it was at the 3rd commit. But wait: you then realize that everything you did for the 2nd and 3rd commits makes sense after all! You want to go back to the 3rd commit (cancelling the “reset”). What should you do?

Luckily, all these heads (references) are properly stored in Git. You just need to consult another log, the reflog:

For now we ignore the meaning of HEAD@{*}. You see your third commit is still there. Now you can go back by

git reset --soft 3224034

Next, we will try the HARD way. Be very careful with this option, as it changes both your commits and your files. All right, now we are back at the 3rd commit. We are sure we want to discard the 2nd and 3rd commits. Do:

git reset --hard 5d28811

As shown, a hard reset changes both your commits and your files. Although you may recover after a hard reset, since the commits are still recorded in the reflog, I still recommend being cautious when using it.
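The soft and hard resets above can be rehearsed in a throwaway repository (a sketch with made-up file contents; HEAD~1, meaning “one commit before HEAD”, stands in for the abbreviated SHA1 names from the screenshots):

```shell
rm -rf /tmp/git_reset_demo && mkdir -p /tmp/git_reset_demo
cd /tmp/git_reset_demo
git init -q
git config user.email "you@example.com"
git config user.name "Your Name"

echo "entry 1" >  History.txt
git add . && git commit -qm "Initial commit"
echo "entry 2" >> History.txt
git commit -aqm "Second commit"

git reset --soft HEAD~1   # drops the commit, keeps the file as-is
wc -l < History.txt       # still 2 lines

git reset --hard HEAD     # now the file is rewound to the remaining commit
wc -l < History.txt       # 1 line
```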

Alternatively, we can use another method: git revert. Revert also changes your files, but it’s different from reset. To demonstrate it, let’s again start from the 3rd commit.

git revert 3224034bf 7d9fd5cf80a7a

Each time a revert happens, there will be a notification file opened, looking like this

It is basically a commit message on the first line. The document is opened with vim, so all you have to do is keep or change the commit message (in this case the default is: Revert “Second commit”), then type :wq and press Enter to save and exit vim. After two reverts, the project looks like this:

As shown, the reverts have been recorded in the log. The status is “up to date” (no unstaged changes), and the file has been reverted to the 1st commit. Now you probably understand: each revert is treated as a new commit, while reset simply abandons the changes made since then:

(4(2) means the 4th commit has the same contents as the 2nd; same for 5(1).)

Therefore, you may use reset when you are working on your own branch, but use revert when you are working on a shared branch.

Now, since we have introduced this term, let’s talk about branch.

Use branches to collaborate on a project

An example of why you should use branches: you have released a game to players for testing, and you are working on adding another character. You receive a bug report from a player and it needs to be fixed immediately. Option 1, without branches: you commit the current version with the unfinished character (say, a hero with infinite HP or 1 HP), then you fix the problem and commit again. Now all the players will start reporting this unfinished character.

The story of Pokémon Go tells us that a crashing game should not be seen by the players. BTW, I still can’t believe they made a fortune with that app.

The solution is to have two branches, say, one named “new_character” and one named “release”. Players only see the “release” branch, and you commit to “new_character”, go back to “release”, debug, commit, go back again to “new_character”, and work on the HP issue.

To start a new branch from a certain commit head, do

git branch new-branch-name starting-head-reference

Branch is almost just another name for head. To show all branches, do either one of the following two:

git branch -a

git show-branch -a

An example is given below. I have redone the whole repository to keep it clear – the previous example generated too much clutter.

As shown, there are two branches, master and work_progress. Now you should be able to understand what “On branch master” means. master is the default head name, and also the default branch name when you create a new repository. To switch to another branch, do

git checkout head-name

(remember a branch is just a head?) NOTICE: all files in your directory will be changed to that commit. 

What we just did can be visualized as follows (remember HEAD is the current head):

You can now work on the file and do another commit. Since this commit won’t be seen on the master branch, let’s call it commit 3′. The change can be shown by the following:

As shown, on master branch the entry at 08/13/17 23:00 isn’t recorded. Vice versa, on the work_progress branch the entry at 08/13/17 15:13 isn’t recorded.

Now you might wonder: hey, the bug fix is useful for my ongoing work too, how may I add that to the master branch? That’s the idea of the (very important!!) merge.

Merge files from other branches

Now imagine you added a file called addon.txt to fix the bug in the work_progress branch, and you want it to appear in your master branch too. First, in work_progress, you need to add the new file with git add, then commit it, and switch to master.

In branch master, you do not see addon.txt. Now do

git merge work_progress

Now you see addon.txt, but you also see a conflict:

The addon.txt showed up; however, History.txt became like this, since work_progress also changed it:

This is a conflict. You need to resolve it. You may keep the current content (below <<<<<<< HEAD and above =======) as well as the contents from work_progress (below ======= and above >>>>>>>). Then add the file and commit. Now, from git log, you’ll see that we have effectively combined the commits from work_progress and master.
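A conflict like this can be manufactured in a throwaway repository (branch and file names mirror the example above; the contents are made up):

```shell
rm -rf /tmp/git_merge_demo && mkdir -p /tmp/git_merge_demo
cd /tmp/git_merge_demo
git init -q
git config user.email "you@example.com"
git config user.name "Your Name"

echo "shared entry" > History.txt
git add . && git commit -qm "base"

git checkout -q -b work_progress
echo "bug fix entry" >> History.txt
git commit -aqm "bug fix"

git checkout -q -                      # back to the original branch
echo "release entry" >> History.txt
git commit -aqm "release change"

# Both branches appended different lines at the same spot, so this
# exits non-zero and reports CONFLICT in History.txt:
git merge work_progress || true
grep -c '<<<<<<<' History.txt          # the conflict markers are in the file
```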

To visualize, what was done is basically:

To delete a branch, just do

git branch -d branch_name

Be sure not to leave a dangling commit when you delete a branch. For example, if you haven’t merged commit 3′ from work_progress into master, then git branch -d will refuse to delete the branch, since 3′ would become a dangling commit.
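A quick sketch of the difference (branch names follow the example above): the lowercase -d flag refuses to delete a branch whose commits would dangle, while the capital -D flag forces the deletion anyway:

```shell
# -d refuses to delete work_progress if it still has unmerged commits;
# -D force-deletes it, leaving those commits unreachable (dangling).
git branch -d work_progress || git branch -D work_progress
```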

Now you know most of what there is to know about using Git locally. It’s time to look at GitHub and GitLab: how to use Git online.

Posted on

How to migrate contacts from Network Solutions Webmail to Gmail

For a complete migration, after the emails have been migrated you probably want your contacts in Gmail, too. Below are instructions on how to do so.

First, log in to your Webmail and click on the Address Book column:

Click on the button to the right of “Global address book” (the icon with three horizontal strokes) and choose Export.

Then, choose vCard as the format. Click on Export. A file will be automatically downloaded to your computer.

Now log in to your Gmail account. In the left panel, choose “Contacts” to see all your current contacts.

Click on “Import Contacts…”; when the following window pops up, click on “Choose File” and select the vCard file you just downloaded from Network Solutions. Then click Import.

Now you should have all the contacts from your Network Solutions Webmail!

Posted on

Migrate emails from Network Solutions Webmail to G Suite

G Suite provides very powerful email hosting via MX records. It integrates with Google Drive, Calendar, Documents, and much more. We are happy we could use G Suite to host our emails.

In the following, I’ll use info@beamphysics.com as an example to show you, step by step, how to migrate your old emails to the new G Suite environment.

First of all, do all of the following at a time when you expect the fewest incoming emails, to avoid possible interruptions.

1. Your administrator will set up an account for you on Google

The admin will create an account “info@beamphysics.com” on G Suite. Note that companies like Euclid Techlabs require this account to be identical to your old email address, though you may request aliases; in other cases you may request a different account name from the admin.

An email will be sent to you after account creation. Use the information in it to log in to your new G Suite account. (Note that this temporary password is visible only to you, not to the administrator.)

Use this to log in. After logging in, you may go to Gmail and start using other Google services.

2. Preparations for migrating your old emails

Now you have two accounts with the same email info@beamphysics.com. When you log in on Google from the above link sent by Google, you see something like this:

Your emails are still being sent to and stored by Network Solutions. It is the administrator’s responsibility to ask Network Solutions to switch the MX records hosting to G Suite. You are responsible only for migrating your emails (if you chose to do it yourself).

OK, here is the hard part. We are using IMAP (the Internet Message Access Protocol) to migrate emails. You can think of it as authorizing Network Solutions to release your emails to another party.

So, you should have your Network Solutions Webmail password ready.

You also need a service account credential file in JSON format (****.json), sent by the administrator. Notice: this file contains the company’s G Suite administration information, so you should only use it; do not read, edit, or distribute it. After the email migration, you need to delete this file completely (not just by dragging it into the trash can; DO empty your trash can).

3. Download the GSuiteMigration tool

Download GSuiteMigration.msi from

https://tools.google.com/dlpage/exchangemigration

If your computer warns you that “This file might damage your computer” and so on, ignore it. Don’t ignore such warnings for other MSI files from the Internet, though…

Run the tool as the computer’s administrator.

4. Create a CSV table with your information

User accounts: Create a list of the user accounts that you are migrating. The list should be a CSV file in the following format:

user1#user1password, google_apps_user1
user2#user2password, google_apps_user2

The second column, “google_apps_user1”, can be left blank if the desired email is the same as the Network Solutions email. In our case they are both info@beamphysics.com, so the CSV file (shown here in Excel) looks like this:

The censored black doodle is the password for info@beamphysics.com on Network Solutions Webmail; replace it with your own password.
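Concretely, the one-account CSV in this example contains a single line in the format above. The password shown here is only a placeholder (substitute your real Webmail password), and the second column is left blank because the G Suite address is the same:

```
info@beamphysics.com#YOUR_WEBMAIL_PASSWORD,
```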

5. Start the GSuiteMigration tool

Start the GSuiteMigration program and make it look like this (replace beamphysics.com with whichever domain you are migrating from):

Notice that the dropdown list is blocked by the text box. Please select “Other IMAP Server” from the dropdown list.

Click on “Next”, now you are on Step 2 of 3.

In the first text box, enter your G Suite domain name. Usually it’s your company’s main website. Note that in many cases a company owns several domains; for example, Euclid Techlabs owns both euclidtechlabs.com and beamphysics.com. So in this case, even though you are migrating emails on beamphysics.com, the G Suite domain name is still the main one: euclidtechlabs.com.

Next, select the JSON file the administrator sent you. This has been mentioned before. If you don’t have it, contact the administrator. 

Then, enter the administrator’s email address, which is provided by the administrator. Your G Suite could have multiple admins, but this address must match the creator of the JSON file.

See below for a screenshot.

When you are done with Step 2, click on Next. Now you are at Step 3 of 3.

First select Email messages, and All.

For the “File of accounts to migrate”, select the CSV file you created from the above instructions.

Choose “Migrate deleted emails” in “Advanced Options” if you want.

Click Next. Now you are at the summary page, double-check all the settings and now you are ready to start.

Click Next to begin. The tool will first run some diagnostics; then you can click Start to migrate.

If you encounter any problems at this stage, contact the admin.

6. Log in to your G Suite account again to see the migrated emails

In your Google account, you may see the emails have been migrated.

Notice: all the emails in your old Inbox will be added to the Gmail Inbox automatically; however, emails in folders will be found under _INBOX/Folder/SubFolder/SubsubFolder/… etc. See below. If you want to move them out to an upper level, click on the triangle to the right of the folder name and select “Edit”.


You are done here. Thanks for reading! If you want to also migrate your contacts, please refer to the following tutorial: http://www.frankliuao.com/blogs/how-to-migrate-contacts-from-network-solutions-webmail-to-gmail/

Posted on

Adding user defined dictionary to Eclipse to add new words to spelling check

After installation, Eclipse automatically starts checking spelling for you. It uses its default dictionary file, and you may find that many words you use on a daily basis do not exist (yet) in that dictionary. How do you add them?

Unless you provide a user-defined dictionary for Eclipse, no new words can be added. To do this, first download a standard US-English dictionary from:

http://downloads.sourceforge.net/wordlist/hunspell-en_US-2017.01.22.zip

Unzip the package after downloading. Copy the “en_US.dic” file to some folder (one where you keep Eclipse library files, for example; one you won’t delete by accident).

Open Eclipse -> Preferences -> General -> Editors -> Text Editors -> Spelling and make sure that “Enable spell checking” is checked. In the “Dictionaries - User defined dictionary” section, enter the full path of the dictionary file you just copied. For me, it is

/Users/frankliuao/Documents/Eclipse/en_US.dic

Thus the page will look like this

Click Apply or OK. Now you will be able to add new words to the dictionary.  For example:


Posted on

Start scientific computing on a new OS X

You got your new (expensive) Mac computer. Exciting! Next, you want to use it for scientific computing and data analysis, e.g. deep learning, algorithm development, machine learning, etc. Let me help you get started.

你拿到了新(贵)的Mac电脑,灰常激动!下一步你打算用它来做科学计算或者是数据分析,比如深度学习、开发算法、机器学习等等。本帖将介绍如何设置好你的Mac系统来做这些工作。

First of all, the programming environment on an OS X system is based on a piece of software found in the Apple App Store: XCode. You need to download the newest version to enable the newest features. You’ll need to sign in with your Apple ID; it’s free to create a new one.

首先,OS X系统上的编程环境依赖于一个要在Apple App Store里下载的软件,没错,这个软件你必须在App Store下载。第一步就是到App商店下载最新版的这个软件,它的名字叫:XCode. 想开始下载你需要先用Apple ID登录。如果你还没有账户可以建立一个,免费。

Now you are ready to install the next important tool, called macports. Macports integrates almost all the useful packages you need for programming. Its official website is below; download it and install it, referring to the installation guide on the website for instructions.

接下来需要装一个非常重要的Mac上常见的软件。这个软件叫macports. Macports是一个安装包管理软件,它整合了基本上所有的常用编程工具和库。到下面的官方网站去下载并安装macports. 参见官网给的安装说明完成安装。

https://www.macports.org/

After it’s installed, open a Terminal window (found in Launchpad -> Other) and type the following command:

安装好后,打开一个Terminal窗口(通过点击Launchpad-Other来找到,或者可以在Lauchpad的搜索栏里搜索”Terminal”)

xcode-select --install

This installs all the command-line tools macports needs. Then accept the XCode license with:

上面这个命令会安装所有的macports需要的命令行工具。之后,你需要运行以下命令接受XCode的用户使用条款。

xcodebuild -license

Now you are ready to install ports from macports. The following macports page lists some useful commands for working with port:

现在一切就绪,你可以安装macports提供的ports了。macports官网提供了一些有用的命令来使用port。详情可以看以下网站:
https://guide.macports.org/#using

To install a port package, you need sudo privileges. If you do not have them, ask the computer’s admin to install the package for you or to grant you the privilege. The command to install a package named packagename is:

要想安装port包,你需要可以执行sudo的权限,即是管理员权限。如果你还没有,询问系统管理员让他/她安装或者创建权限给你。安装名字为packagename的命令是:

sudo port install packagename

You may go through the list of available ports for everything you can install. From my own perspective, I found the following ones particularly useful. Notice that I specified Python 3.7 rather than Python 2 for the Python packages. This change was made recently (Aug 2018) because the Numpy community is dropping support for Python 2, and most other communities are doing the same. I believe it is time to switch:

在官网的”Available ports”栏里,你可以搜索或者列出所有可用的ports。输入关键词即可搜索。个人认为以下的ports在使用时非常有用。有一点要注意的是,我使用了Python 3.7作为默认Python版本,而不是Python 2。这主要是因为Numpy社区准备取消对Python 2的支持,而其他的社区也在做类似的决定。我个人觉得,是时候放弃Python 2转成3 了。:

  • python37, py37-numpy, py37-matplotlib, py37-ipython, py37-notebook
  • inkscape
  • cmake
  • openmpi

macports manages the dependencies of each package automatically, so if you install one, all of its dependencies will be installed with it. In the end you don’t have to specify all the packages manually, because most of them will be installed along with others. For example, to check the dependencies of py37-notebook, you can do

macports 为每一个安装包自动管理相应的需求包。当你安装一个包时,所需要的其他包也会一起安装。所以说,最后你并不需要手动输入所有需要的包,因为多数都会跟着别的包一起安装。比如,要获取py37-notebook的需求包,你可以用一下命令:

$ port echo depends:py37-notebook
py37-jupyter                    
py37-jupyterlab                 
py37-jupyterlab_launcher        
py37-metakernel                 
py37-widgetsnbextension

Running other commands in Terminal requires some basic knowledge of shell scripting. You may now install an IDE and start writing code in the languages you like. If you would like to know where a compiler/interpreter is located, try the which command, e.g.

你需要知道一些基本的SHELL编程知识来运行Terminal命令。现在电脑上已经有编译器和解释器了,所以你可以安装IDE来编程了。如果你想在IDE里指定编译器或者解释器的位置,而并不知道它们在哪里,你可以用which命令来获取它们的路径。比如

$ which cmake
/opt/local/bin/cmake

The command returns the path to cmake on your Mac. Usually macports installs its binaries into the /opt/local/bin/ directory. Your Python 3 is very likely to be there too: /opt/local/bin/python. If which doesn’t return anything, the executable is not available yet. For more details, see an introductory doc on the $PATH variable in the shell.

这个命令返回了你的Mac上cmake的路径。一般来说,macports会把可执行文件安装在/opt/local/bin/路径,所以你的Python 3也很可能在那里: /opt/local/bin/python. 如果命令没有返回任何结果,那说明你找的可执行文件还不存在。关于这个命令的详情,可以参见在SHELL环境里设置$PATH变量的介绍文档。
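To check whether /opt/local/bin is actually on your search path (so that which can find the macports binaries), you can print $PATH with one entry per line:

```shell
# Print each $PATH entry on its own line; /opt/local/bin should appear
# if the macports installer updated your shell profile.
echo "$PATH" | tr ':' '\n'
```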

Another very useful port command searches the available packages by keyword:

另一个非常有用的port命令是根据关键词查询可使用的包。执行以下命令:

port search [--name] [--regex] '<searchtext>'

[--name] and [--regex] are both optional. For example, if I search for inkscape, a very useful program for drawing vector diagrams, the result is as shown in the following figure.

[--name] and [--regex] 都是可有可无的。比如,如果我想找一个叫inkscape的可以画矢量图的好用软件,命令返回结果如下图。

As for the common IDEs to use on a Mac, I recommend:

至于在Mac上好用的IDE,我推荐以下几个:

  • For C++/C or Fortran programming: Netbeans (Download online)
  • For Python programming: PyCharm (Download online)
  • For general programming: MacVim (Available as a macports port)
  • For HTML, CSS, JavaScript, etc. (website design): Coda (Download online)

The next important question: ports often depend on one another (for example, py37-notebook depends on py37-jupyter_core), yet each port maintains its own version. How do you make sure every port is up to date, or at least that they all work flawlessly with each other? And how do you make sure the Python package you are developing can be readily adopted by others who may have different versions of its required modules (e.g. you developed your code with Numpy 1.15.4, but your user installed 1.14)? There is an optional piece of software that solves this headache, which I will explain in another post:

还有个重要的问题就是,由于这些ports依赖其他的ports来工作,比如py37-notebook依赖于py37-jupyter_core,但每个port都维护着自己的版本,你怎么才能确保所有的port都是最新的,或者至少是互相不起冲突呢?你又怎么去确定你开发的Python程序可以被用户直接使用,前提是这个用户已经有了你需要的module版本呢?(例如你需要numpy 1.15.4 版本,而用户安装的是1.14?)有一个软件可以解决这个问题,我将在另外的帖子里介绍这个问题:

Anaconda, should I bother?