One thing I miss about previous jobs is the ability to write parallel code on HPC systems. Whether it involved diagonalizing large matrices to find atomic or molecular ground state energies or modeling the country of Thailand in order to understand disease transmission, I found that writing parallel code reinforced a way of thinking that you just don’t get with serial applications. While my current job is interesting for the sheer variety of projects and applications I am involved with, all of the work is done on conventional workstations and laptops. This is fine, as most of the results can be calculated in minutes (and don’t require parallelization), but I still miss working with the technology.
To accommodate this desire, and to stay at least semi-sharp in the field of parallel computing, I decided to build a cluster at home using Raspberry Pi. For anyone who has been living under a rock the last few years, a Raspberry Pi is a credit-card-sized computer. With a CPU comparable to a cellphone’s, a gigabyte of RAM, onboard graphics, Ethernet, USB, and HDMI, these computers have a vast array of potential applications. Since they debuted in early 2012, several iterations have been released with varying specs (the newest being the Raspberry Pi 2).
A set of instructions for building the cluster can be found here, where a professor at the University of Southampton built a cluster using 64 Raspberry Pi computers. As mentioned above, I only used four. I decided to stop there primarily because of cost, but also because writing parallel code for four nodes is fundamentally the same as writing it for 64 or more.
Since “supercomputer” seems to be a buzzword these days (hasn’t it always been?), I think it needs to be mentioned that this is not going to get you computing power surpassing even that of a modern conventional desktop workstation. Furthermore, there are bottlenecks in a Raspberry Pi cluster that don’t exist (or are greatly mitigated) in real (and expensive) supercomputers. As mentioned above, I built this thing in order to test out parallel codes — not because I think I’m going to solve any particular problem faster than on other hardware currently available to me.
Going forward, the first step will be to benchmark the cluster. Then I will develop some test codes. If I can think up any interesting problems to solve after that, I will certainly post an update to Evil Quark. Perhaps compute Pi to many digits, generate prime numbers, or calculate the ground state energy of the helium atom using different basis sets. Who knows? Stay tuned.
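To give a flavor of the kind of test code I have in mind, here is a minimal sketch of estimating Pi in parallel by splitting the Leibniz series across workers. It uses Python’s multiprocessing module as a stand-in for MPI ranks (on the actual cluster this would be written with MPI, e.g. mpi4py or C); the function names and term counts are just illustrative choices, not code from the cluster build guide.

```python
from multiprocessing import Pool

def partial_pi(bounds):
    # Sum one slice of the Leibniz series: pi = 4 * sum((-1)^k / (2k + 1))
    start, end = bounds
    return sum((-1.0) ** k / (2 * k + 1) for k in range(start, end))

def parallel_pi(n_terms=1_000_000, n_workers=4):
    # Divide the series into one contiguous chunk per worker, the same
    # decomposition an MPI job would use with one chunk per rank.
    chunk = n_terms // n_workers
    ranges = [(i * chunk, (i + 1) * chunk) for i in range(n_workers)]
    ranges[-1] = (ranges[-1][0], n_terms)  # last worker takes any remainder
    with Pool(n_workers) as pool:
        # pool.map scatters the slices and gathers the partial sums,
        # playing the role of MPI scatter/reduce on the cluster.
        return 4 * sum(pool.map(partial_pi, ranges))

if __name__ == "__main__":
    print(parallel_pi())
```

The point of the exercise is less the digits of Pi than the decomposition: each worker owns an independent slice of the problem, and the only communication is the final reduction — exactly the pattern that scales from four nodes to 64.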