The IPFire project has been working for quite a while on an ARM port. We have also been working on IPFire 3, which is the next generation of the IPFire firewall OS. This is a post that explains how we compile IPFire 3 for the ARM architecture.
IPFire 3 is currently built for two revisions of the ARM architecture. The first one, armv5tel, runs on SoCs that support the ARMv5 instruction set and do not come with a floating point unit (FPU). The second one, armv7hl, is much more recent and therefore requires newer hardware that supports the ARMv7 instruction set, which always mandates an FPU and supports SMP (symmetric multiprocessing). It is very likely that ARMv8 will be built as soon as decent build hardware is available (expected in 2014).
Why do we build more than one version? We do the same for the x86 architecture, which comes in an x86_64 and an i686 flavour. The reason there is obvious: the former supports 64 bits, the latter only 32. In addition, x86_64 has a lot more instructions that make it faster and more efficient; we know those as MMX, SSE, SSE2, AVX and so on. In short, this lets us run IPFire on a much wider range of hardware and make the best possible use of that hardware.
armv5tel can be considered the legacy architecture, the smallest common denominator (like i386). The system runs on practically any ARM CPU that has been sold in recent years, and newer systems are also able to execute ARMv5 code (which is very important for our build cluster).
armv7hl, on the other hand, requires an FPU as already mentioned and brings some extensions that accelerate very common things such as encryption and multimedia tasks, but that comes at the cost of higher energy consumption. Altogether, SoCs that implement ARMv7 are an order of magnitude faster than ARMv5-based SoCs.
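The difference shows up in the kernel's CPU feature flags. Here is a small sketch that checks a sample "Features" line for the hardware floating point support that armv7hl depends on; the sample flags are typical for a Cortex-A9 board like the Pandaboard, not captured from our actual machines:

```shell
#!/bin/sh
# Sample "Features" line as typically reported in /proc/cpuinfo by an
# ARMv7 Cortex-A9; on a real board you would read the file directly.
features="swp half thumb fastmult vfp edsp thumbee neon vfpv3"

# armv7hl needs hardware floating point (vfpv3) to run at all.
case "$features" in
    *vfpv3*) echo "hard-float capable (armv7hl will run)" ;;
    *)       echo "soft-float only (fall back to armv5tel)" ;;
esac
```

On an ARMv5 SoC without an FPU, the vfpv3 flag is absent and only the armv5tel build is usable.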
To get a decent amount of performance we need a lot of hardware. At a glance at your preferred online shop, ARM hardware looks cheap. But to make sure that building packages for ARM does not hinder the IPFire developers from finishing their tasks, we still need a lot of those devices. We managed to buy a bunch of Pandaboards, which performed well, but some bigger packages like the kernel or GCC still needed up to 12 hours for a single build. As a result, we asked our community to support us with buying two ODROID-X boards. As you guys rock, we hit the goal very quickly and could place the order.
So, we are now in possession of three Pandaboards and two ODROID-X boards. One of the Pandaboards is not permanently part of the build cluster, but is sometimes used for testing and the like. That adds up to 14 Cortex-A9 cores. Wonderful.
All builders are connected to a switch and talk to the Pakfire Build Service, which dispatches build jobs to the boards. Since every package in IPFire 3 can be built individually, a free builder grabs a source package, compiles it and sends back the result. That’s easy, fast and very reliable. I don’t want to go into more detail about the Pakfire Build Service here, because I could probably write a whole book about it.
During the build of a package, the builder downloads a bunch of previously built packages. They are extracted into a temporary build environment that provides only the essential stuff: compilers, headers and the build dependencies of the package that is to be built. A build environment requires about 500 MB of disk space, which would quickly wear out the flash drives in the builders. In addition, the SD card slot of the Pandaboard is really slow, so extracting the build environment takes a lot of time.
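The extraction step can be pictured with a small shell sketch. The package names and paths here are made up for illustration; the real Pakfire builder does considerably more (dependency resolution, chroot setup and so on):

```shell
#!/bin/sh
# Illustrative sketch: unpack previously built dependency packages
# into a fresh, throwaway build root. All names are hypothetical.
set -e

BUILDROOT=$(mktemp -d /tmp/pakfire-root.XXXXXX)
mkdir -p pkgs

# Stand-ins for the downloaded dependency packages (empty archives).
tar -czf pkgs/gcc.tar.gz -T /dev/null
tar -czf pkgs/glibc-devel.tar.gz -T /dev/null

# Extract every dependency into the build root.
for pkg in pkgs/*.tar.gz; do
    tar -xzf "$pkg" -C "$BUILDROOT"
done

echo "build environment ready in $BUILDROOT"

# The environment is thrown away again after the build.
rm -rf "$BUILDROOT" pkgs
```

Doing this for every single build is exactly the write load that would kill flash media, which is what motivated the iSCSI setup described next.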
The solution we came up with was to create a data partition for every single builder on an iSCSI storage device. For some technical reasons, NFS was not an option here. I personally like iSCSI for its simplicity, and it turns out that extracting the build environments onto the iSCSI targets is very fast. So, iSCSI it is.
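For reference, attaching such a per-builder data partition with the standard open-iscsi tools looks roughly like this. The portal address, target IQN, device node and mount point are all placeholders, not our actual configuration:

```shell
# Discover the targets offered by the storage unit (portal IP is a placeholder).
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in to the target assigned to this builder (IQN is a placeholder).
iscsiadm -m node -T iqn.2012-11.org.ipfire:builder1 -p 192.168.1.10 --login

# The LUN then appears as a local block device; format once, then mount.
mkfs.ext4 /dev/sdb
mount /dev/sdb /var/lib/pakfire/build
```

From the builder's point of view the iSCSI LUN behaves like a local disk, which is exactly why extracting build environments onto it is so fast compared to the SD card slot.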
The design of the Pakfire Build Service also means that packages are automatically rebuilt after some time. That ensures that updating some parts of the distribution does not break others, so we can be sure that everything keeps working. Those rebuilds are called test jobs and are picked up at random by the builders. For example, when builder A builds the final release of the latest Linux kernel, it adds all the compiled code to a compiler cache, which can be reused on the next build of that very same package. But when the test job for the Linux kernel package is deployed, it will likely be executed on another builder, let’s say builder B, which normally cannot access the compiler cache that builder A created earlier.
So we took the cache and put it on an NFS share, so that every builder in our cluster can access it and does not need to recompile everything from scratch. We estimate that this speeds up the test builds by roughly a factor of 10.
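If the compiler cache is something like ccache (the post does not name the tool, so treat this as a hedged illustration), sharing it boils down to mounting the NFS export on every builder and pointing the cache there. Hostname, export path and mount point are illustrative:

```shell
# Mount the shared cache export on the builder (names are illustrative).
mount -t nfs storage.example:/export/ccache /var/cache/ccache

# Point ccache at the shared directory and wrap the compiler with it.
export CCACHE_DIR=/var/cache/ccache
export CC="ccache gcc"

# Show cache statistics, e.g. to verify hits from other builders' work.
ccache -s
```

A rebuild of an unchanged source file then becomes a cache lookup instead of a full compile, regardless of which builder happens to run the test job.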
Below is a picture of what the cluster currently looks like.
|- - - - - - - -|
|  Pandaboard 1 |- - - - - -.        | - - - - - - - - - - - - - |
|- - - - - - - -|           |        | Pakfire Build Service Hub |
                            |        | http://pakfire.ipfire.org |
|- - - - - - - -|           |        | - - - - - - - - - - - - - |
|  Pandaboard 2 |- - - - - -|                      |
|- - - - - - - -|           |                      |
                            |- - [ Internet ] - - -´
|- - - - - - - -|           |
|  Pandaboard 3 |- - - - - -|
|- - - - - - - -|           |
                            |
|- - - - - - - -|           |
|  ODROID-X 1   |- - - - - -|
|- - - - - - - -|           |
                            |
|- - - - - - - -|           |
|  ODROID-X 2   |- - - - - -|
|- - - - - - - -|           |
                            |
              | - - - - - - - - - - - -|
              |   iSCSI storage unit   |
              |  Shared Compiler Cache |
              | - - - - - - - - - - - -|
I hope you enjoyed a look into the internals of the IPFire build infrastructure. If so, send me feedback and suggestions.
Posted: November 30, 2012