IPFire 2.19 - Core Update 109 is available for testing
by Michael Tremer, February 4

The next Core Update, number 109, has been released for testing. It comes with a number of package updates, including security fixes and bug fixes all over the place.

DNS Fixes

The DNS proxy running inside IPFire has been updated to unbound 1.6.0, which brings various bug fixes. Therefore, QNAME minimisation and hardening below NX domains have been re-activated.
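For reference, these two features correspond to the following unbound configuration options (a sketch only; the actual IPFire configuration may set further options):

```
server:
    # Send only the minimum necessary part of the query name upstream
    qname-minimisation: yes
    # Treat names below a domain that returned NXDOMAIN as non-existent
    harden-below-nxdomain: yes
```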

At start time, IPFire now also checks whether a router in front of IPFire drops DNS responses that are longer than a certain threshold (some Cisco devices do this to “harden” DNS). If this is detected, the EDNS buffer size is reduced, which makes unbound fall back to TCP for larger responses. This might slow down DNS slightly, but keeps it working in those misconfigured environments after all.
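As a sketch, reducing the EDNS buffer size can be expressed in unbound's configuration like this (the concrete value here is an assumption for illustration, not taken from the update itself):

```
server:
    # Advertise a smaller EDNS buffer size so that oversized
    # answers are truncated and unbound retries them over TCP
    edns-buffer-size: 1232
```

Responses larger than the advertised buffer come back with the truncation bit set, which triggers the TCP retry described above.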



As always, we would like to ask all users to participate in testing, which greatly improves the quality of this update.

Please report any bugs to our bug tracker and provide any feedback on our development mailing list.

Michael Tremer

Public Service Announcement: A reminder on writing bug reports
by Michael Tremer, December 16

Because many struggle with this quite often, I would like to re-post a brief text from this blog from 2013 about how to write a good bug report:

How to write a good bug report?

We developers rely on these, and we need good technical information to be able to investigate without wasting your time and ours. So, as a personal reminder from me: please keep these guidelines in mind the next time you encounter a problem, and help us solve it.

Have a good weekend,

Michael Tremer

IPFire 2.19 - Core Update 108 is available for testing
by Michael Tremer, December 13

The time has come to get the last Core Update of the year out to all our users. That means testing, testing, testing… Luckily, this is a small update with only a few minor bug fixes, some security fixes in ntp, and fixes in the squid web proxy.

Asynchronous Logging

Asynchronous logging is now enabled by default and no longer configurable. Previously, synchronous logging could slow down programs that wrote an extensive amount of log messages and make them unresponsive over the network, which caused various problems. This was seen on systems with very slow flash media and in virtual environments.
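In a sysklogd-style syslog.conf, asynchronous logging corresponds to prefixing a log file path with a dash, which tells syslogd not to sync the file after every message (a sketch of the convention; the actual IPFire defaults may differ):

```
# A "-" before the path disables fsync() after each message,
# i.e. the file is written asynchronously
*.info;mail.none;authpriv.none    -/var/log/messages
```

Without the dash, every single log line forces a disk sync, which is exactly what hurt systems with slow flash media.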


Updated Core Packages

Updated Add-ons

We hope to be able to release this update just before Christmas.
So please help us with testing, or, if you wish, support our project by donating.

Michael Tremer

On Performance per Watt...
by Michael Tremer, November 21

Over the last few months, I found myself sitting at my desk thinking about IPFire hardware quite often. It is not that there is nothing around. It is quite the opposite that makes me think: There is too much on the market. Or better: Too much “fancy” stuff.

No hardware is simply released to the market any more. There are Kickstarter campaigns that are no longer used to crowdfund anything; rather, they are abused as huge marketing campaigns trying to promote not-so-exceptional things.

Without mentioning any names: I have seen ARM boards that are old as hell. Support for the SoCs has been discontinued by the vendors. Probably nobody is working on improving software support in the Linux kernel any more, which makes this a failed product right from the start. I wrote a post about why IPFire is not running on all ARM boards. That is still very true to this day and unfortunately hasn’t changed a bit.

There are also billion-dollar companies selling ordinary Intel Atom-based boards that are quite possibly nice, but nothing exceptional. There are dozens of similar or almost identical boards available from Chinese manufacturers that cost half as much and are of even better quality.

I have already ranted about how supporting old or bad hardware causes all of the IPFire developers a lot of pain. IPFire is (maybe unfortunately for us) running on so much hardware, and so many people use it on ancient machines and then complain about bad performance. Well, tough. That is stuff from the 90s. Nobody would seriously consider using twenty-year-old hardware to run a server or a workstation. We still have to (and do) support it.

So why am I telling you all of this?

I always try to be one step ahead of the crowd. My job is to see things coming and to prepare for them. That is what security is about. But it is also useful in the hardware business. For me, it very much boils down to one question: How much bang do I get for the buck?

I have written very often about how important I find good performance. Nobody wants a slow network. It is more than just a usability feature.

The “Gigabit Economy”

The usual comment I get from my fellow Germans is that nobody needs a Gigabit of throughput, because the average Internet connection speed is about 16 MBit/s downstream and 1 MBit/s upstream, and any hardware is powerful enough for that. If you are lucky, you get 100 MBit/s. Those people should now just skip the next paragraph… I did warn you.

In other places in the world, the default is one Gigabit. In both directions, of course. Not everywhere outside Germany, but it is not very uncommon either to get Gigabit fibre for little money. And once you have it, you will use it.

Typical applications are file sharing via Dropbox, iCloud and whatnot, video conferencing, live streaming, hosting your own cloud or a little data centre. You name it; you know them all. None of this is possible, or it is just less fun, with less bandwidth. So in case you are lucky enough to get such a connection, you certainly don’t want your firewall to slow things down. Even if you do not have a fast Internet connection, bandwidth consumption is increasing dramatically. Backups from workstations to servers in the DMZ should not be limited to 120 MByte/s any more. We have the technology to make those things faster for those who need it. It is time to roll it out.

Everyone is now free to disagree, and I am sure there are many applications where bandwidth is not an issue. However, it is often quite good to have some resources available in case you do need them. Hence, at Lightning Wire Labs we have always made sure that all appliances reach the magical number of one Gigabit from day one. I didn’t even know that this was a unique selling point back then, but apparently this was rarely the case elsewhere.

We explained on the IPFire forums that an active network controller is key to that. That soon became synonymous with “Intel NICs”. Now the modern world of business is hitting us: “low-power” Ethernet controllers have been released that are nowhere near the performance of the original ones. On top of that, Intel is selling them at a very low price, which helped them appear on many Atom-based boards and the APU boards. Because “Intel NIC” means fast, people go for it and buy them. The disappointment is huge, since these controllers only transfer a fraction of what was promised. So be careful with those things.

There is good stuff out there (if you know what to pay attention to)

The IPFire Duo Box is a great example to prove that with the right processor and periphery, even a passive ethernet controller can easily achieve its maximum throughput of one Gigabit. As far as I know this is the smallest device that does not come with an Intel Atom processor and therefore has a huge L2 cache and enough PCIe lanes to connect processor, NICs, WiFi and cellular modem with the required bandwidth. But it still is an ultra-low-voltage (and recently upgraded) Broadwell SoC, so this is not compensated for by using more energy. It is the same or maybe less.

So going back to one of my points above: I no longer have the vision that we can build a box of that size based on an ARM SoC. We are now working on bootstrapping IPFire 3 for ARM64, but hardware in that area will probably be more of the size of at least the IPFire Eco Appliance and certainly of the size of the IPFire Professional Appliance.

Not surprisingly, IPFire as a firewall depends on single-core performance. The two cores in the Broadwell SoC are very much faster than the four cores, at just above half the clock speed, in another very popular device. More cores can compensate for a few things, but since processing packets and cryptography are very single-threaded workloads, higher single-core performance pays off. In my opinion, Intel has ruined the Atom series with really low single-core performance and poor I/O performance; combined, that is not a very nice result for your money.

Performance / Watt = ?

So for me, the minimum performance is that one Gigabit, which should be achieved even when the powerful IPFire web proxy is in action, Intrusion Detection is activated, and a normal-sized set of firewall rules is in place.

In this equation, performance is the fixed factor; to get the best out of it, we have to minimise power consumption. We are getting great performance out of a dual-core Broadwell SoC as found in the IPFire Duo Box. What else could we want?
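The metric itself is simple division. As a hypothetical illustration (all numbers below are made up for the example, not measurements of any real appliance):

```python
# Hypothetical illustration: comparing two firewall boards by
# throughput delivered per watt of power drawn.

def performance_per_watt(throughput_mbit: float, power_watt: float) -> float:
    """Return throughput (MBit/s) per watt."""
    return throughput_mbit / power_watt

# A dual-core ULV SoC that saturates Gigabit at a modest power draw...
duo = performance_per_watt(1000, 10)

# ...versus a board that needs more power for less throughput.
other = performance_per_watt(600, 12)

print(duo, other)  # 100.0 vs. 50.0 MBit/s per watt
```

With the throughput requirement pinned at one Gigabit, the only way to raise this ratio is to lower the denominator.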

I must say this is the best we could do in 2016. I am very proud of it, although I had hoped for a different path that would have involved an ARM SoC. It looks like that will never happen as I envisioned it a few years ago. But that’s okay. I don’t mind any more. We did well, and we are ready for the “Gigabit Economy”!
