
Public Service Announcement: A reminder on writing bug reports
by Michael Tremer, December 16

Hi,

Because many people struggle with this quite often, I would like to re-post a brief text from this blog from 2013 about how to write a good bug report:

How to write a good bug report?

We developers rely on these reports and need good technical information to be able to investigate without burning your time and ours. So, as a personal reminder from me, please keep these guidelines in mind the next time you encounter a problem and help us solve it.

Have a good weekend,
-Michael


IPFire 2.19 - Core Update 108 is available for testing
by Michael Tremer, December 13

The time has come to get the last Core Update of the year out to all our users. That means testing, testing, testing… Luckily, this is a small update with only a few minor bug fixes, some security fixes in ntp, and fixes in the squid web proxy.

Asynchronous Logging

Asynchronous logging is now enabled by default and is not configurable any more. Synchronous logging slowed down some programs that wrote an extensive amount of log messages and could make them unresponsive over the network, which caused various problems. This was seen on systems with very slow flash media and in virtual environments.

Miscellaneous

Updated Core Packages

Updated Add-ons


We hope to be able to release this update just before Christmas.
So please help us test it or, if you wish, support our project by donating.


On Performance per Watt...
by Michael Tremer, November 21

Over the last few months, I found myself sitting at my desk thinking about IPFire hardware quite often. It is not that there is nothing around. It is quite the opposite that makes me think: There is too much on the market. Or better: Too much “fancy” stuff.

No hardware is just released to the market any more. There are Kickstarter campaigns that are not used to crowdfund anything any more. Instead, they are abused as huge marketing campaigns that try to promote not-so-exceptional things.

Without mentioning any names: I have seen ARM boards that are old as hell. Support for the SoCs has been discontinued by the vendors. Probably nobody is working on improving software support in the Linux kernel any more, which makes this a failed product right from the start. I wrote a post about why IPFire is not running on all ARM boards. That is still very true to this day and unfortunately hasn’t changed a bit.

There are also billion-dollar companies selling ordinary Intel Atom-based boards that are quite possibly nice, but nothing exceptional. Dozens of similar or almost identical boards are available from Chinese manufacturers that cost half the money and are of even better quality.

I have already ranted about how supporting old or bad hardware causes all of the IPFire developers a lot of pain. IPFire (maybe unfortunately for us) runs on so much hardware, and many people use it on ancient machines and then complain about bad performance. Well, tough. That is stuff from the 90s. Nobody would seriously consider using twenty-year-old hardware to run a server or a workstation. We still have to support it, and we do.

So why am I writing you all of this?

I always try to be one step ahead of the crowd. My job is to see things coming and prepare for them. That is what security is about. But it is also a useful skill in the hardware business. For me it boils down very much to one question: how much bang do I get for my buck?

I have written very often about how important I find good performance. Nobody wants a slow network. It is more than only a usability feature.

The “Gigabit Economy”

The usual comment I get from my fellow Germans is that nobody needs a Gigabit of throughput, because the average Internet connection speed is about 16 MBit/s downstream and 1 MBit/s upstream, and any hardware is powerful enough for that. If you are lucky, you get 100 MBit/s. Those people should now just skip the next paragraph… I did warn you.

In other places in the world, the default is one Gigabit. In both directions, of course. Not everywhere outside Germany, but it is not very uncommon either to get gigabit fibre for little money. And once you have it, you will use it.

Typical applications are file sharing via Dropbox, iCloud and what not, video conferencing, live streaming, hosting your own cloud or little data centre. You name it and you know them all. All of this is not possible, or just less fun, with smaller bandwidth. So in case you are lucky enough to get such a connection, you certainly don’t want your firewall to slow things down. Even if you do not have a fast Internet connection, bandwidth consumption is increasing dramatically. Backups from workstations to servers in the DMZ should not be limited to 120 MByte/s any more. We have the technology to make those things faster for those who need it. It is time to roll that out.

Everyone is now free to disagree, and I am sure that there are many applications where bandwidth is not an issue. However, it is often quite good to have some headroom available in case you do need it. Hence, at Lightning Wire Labs we have always made sure that all appliances reach the magical number of one Gigabit from day one. I did not even know that this was a unique selling point back then, but apparently it was not very often the case.

We explained on the IPFire forums that an active network controller is key for that. That soon became synonymous with “Intel NICs”. Now the modern world of business is hitting us: Intel has released “low-power” Ethernet controllers that are nowhere near the performance the original ones gave us. On top of that, Intel is selling them for a very low price, which helped them appear on many Atom-based boards and on the APU boards. Because “Intel NIC” means fast, people go for it and buy them. The disappointment is huge, since they only transfer a fraction of what it sounded like. So be careful with those things.

There is good stuff out there (if you know what to pay attention to)

The IPFire Duo Box is a great example to prove that with the right processor and peripherals, even a passive Ethernet controller can easily achieve its maximum throughput of one Gigabit. As far as I know, this is the smallest device that does not come with an Intel Atom processor and therefore has a huge L2 cache and enough PCIe lanes to connect the processor, NICs, WiFi, and cellular modem with the required bandwidth. But it is still an ultra-low-voltage (and recently upgraded) Broadwell SoC, so this is not bought with more energy: consumption is the same or maybe less.

So going back to one of my points above: I no longer have the vision that we can build a box of that size based on an ARM SoC. We are now working on bootstrapping IPFire 3 for ARM64, but hardware in that area will probably be more of the size of at least the IPFire Eco Appliance and certainly of the size of the IPFire Professional Appliance.

Not surprisingly, IPFire as a firewall depends on single-core performance. The two cores in the Broadwell SoC are very much faster than the four cores with just above half the clock speed in another very popular device. That can compensate for a few things, but since packet processing and cryptography are very single-threaded workloads, higher single-core performance pays off. In my opinion, Intel has ruined the Atom series with really low single-core performance and poor IO performance; combined, that is not a very nice result for your money.

Performance / Watt = ?

So for me, the minimum performance is that one Gigabit, which should be achieved even when the powerful IPFire web proxy is in action, Intrusion Detection is activated, and a normal-sized set of firewall rules is in place.

In this equation, performance is then a fixed factor, and to get the best out of it we have to minimise power consumption. We are getting great performance out of a dual-core Broadwell SoC as found in the IPFire Duo Box. What else do we want?

I must say this is the best that we could do in 2016. I am very proud of it, although I had hoped for a different path that would involve an ARM SoC. It looks like that will never happen as I envisioned it a few years ago. But that’s okay. I don’t mind any more. We did well and are ready for the “Gigabit Economy”!


Rolling out DNSSEC to the masses
by Michael Tremer, November 18

Hello,

it has been a long time since my last proper post on this blog, but today is finally the day when I found some time to sit down and tell you a short story about DNSSEC.

With IPFire 2.19 – Core Update 106, we were finally able to roll out DNSSEC to everybody. With the help of a wonderful piece of software called unbound, which has replaced dnsmasq, DNSSEC is now supported on all systems (before that, DNSSEC was only in use when the upstream name servers validated as well). I am very proud of this and think that it is a huge achievement. In this post I would like to tell you why…

Why DNSSEC?

For many of you, this chapter won’t tell you anything new. DNSSEC is not the newest technology that nobody has ever heard of. It has indeed been around for over a decade now and is designed to authenticate all sorts of DNS records. It checks a signature that comes with every response to a DNS query, and if that signature does not validate against the chain of trust, the record is not trusted any more. It could have been forged by an attacker or altered by some other third party on the way.

So, in short: if you want to surf on ipfire.org, DNSSEC will make sure that you get the correct IP address of our web server and nothing else. Imagine what would happen if somebody sat between you and your bank and you did your online banking on an attacker’s site.

However, there is a lot of criticism of DNSSEC. It is repeated over and over again, which does not make it any more true. Without going too deep into this, I would just like to say that most of these arguments are unfounded. I would like to address the two arguments I have heard most often:

Caveats

However, there are a few caveats. When we ran beta testing for the Core Update, we did not encounter any of them. Shortly after the release, we found out about a few things that caused our users some trouble.

Usage of own top level domains

A rather large number of users are “illegally” using their own TLDs, for example *.companyname or just *.local, which is not allowed according to the standard. DNSSEC does not only prove the validity of an existing domain; it also signs responses about the non-existence of a domain. Hence *.companyname cannot exist and the DNS proxy returns an error. .local exists, but when it is used on your own machines, the signatures that are saved in the root zone do not match your (unsigned) local zone, which results in the same error.

This is pretty much what DNSSEC is designed for, and it is working very well. Of course it should no longer allow resolving any names that are not authenticated.

When using a domain like *.companyname.local and handing out DHCP leases with it, unbound will deliver those local records and skip validation. The problem arises again when a second validating resolver – potentially from a branch office – connects to that resolver and asks for the same name. The second resolver will not be able to verify that name and will send an error to the requesting client. In Core Update 107, we added an exception which disables validation for all *.local forwardings, so that this wrongly but very commonly used TLD is usable again – although without validation, of course.
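The general mechanism behind such an exception can be illustrated with a small unbound configuration fragment. This is only a sketch of how unbound handles an unsigned local zone, not IPFire’s actual configuration; the zone name companyname.local and the forward address 192.168.10.1 are made up for illustration:

```
server:
    # Tell unbound not to expect DNSSEC signatures below "local",
    # so answers from the (unsigned) local zone are accepted
    # instead of failing validation against the root zone.
    domain-insecure: "local"

forward-zone:
    # Hand all queries for the internal zone to the local DNS server.
    name: "companyname.local"
    forward-addr: 192.168.10.1
```

With domain-insecure set, unbound treats the zone as deliberately unsigned rather than as a validation failure, which is exactly the trade-off described above: the names resolve again, but without any DNSSEC protection.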

For those who are using any other made-up TLDs, I can only recommend moving away from this and ideally registering your own domain (even a dynamic name from a dynamic DNS provider is enough). A clean DNS tree will not cause you any errors. Adding any unsigned domains to the tree will just cause DNSSEC to do what it is supposed to do: reject them.

Provider woes

In order to validate any signatures, unbound needs to download them for each domain that is accessed. They come with every DNS response to a query where the DO bit (DNSSEC OK) is set. Some providers operate name servers that strip these signatures away, which of course causes problems. unbound can fall back to the so-called “recursor mode”, connect to the authoritative name servers directly, and circumvent the ISP’s ones.
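To make the DO bit a bit more concrete, here is a minimal Python sketch that builds a raw DNS query with the EDNS0 “DNSSEC OK” flag set – the same flag a validating resolver uses to request signatures. This is an illustrative reconstruction of the wire format, not code from unbound or IPFire:

```python
import struct

def build_dnssec_query(name: str, qtype: int = 1) -> bytes:
    """Build a raw DNS query (type A by default) that asks for DNSSEC records."""
    header = struct.pack(">HHHHHH",
                         0x1234,  # transaction ID (arbitrary)
                         0x0100,  # flags: RD (recursion desired)
                         1,       # QDCOUNT: one question
                         0,       # ANCOUNT
                         0,       # NSCOUNT
                         1)       # ARCOUNT: the EDNS0 OPT pseudo-record
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    # EDNS0 OPT record: root name, type 41, "class" field = UDP payload size,
    # and the 32-bit TTL field carries the extended flags; 0x8000 is the DO bit.
    opt = b"\x00" + struct.pack(">HHIH", 41, 4096, 0x00008000, 0)
    return header + question + opt

query = build_dnssec_query("ipfire.org")
# The DO bit sits in the upper half of the OPT record's TTL field.
assert query[-6:-2] == b"\x00\x00\x80\x00"
```

A DNSSEC-aware server answering such a query includes the RRSIG records with its response; a forwarder that strips them away is exactly the kind of broken middlebox described above.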

This works quite well for all ISPs that are merely a bit ignorant about DNSSEC. But some are worse and hijack every single DNS query and forward it to their own caching name servers, so that it never reaches the DNS server it was originally sent to. This is a whole different kind of evil, and I personally consider this practice illegal (it probably is in most EU countries) and a violation of net neutrality.

Luckily, DNSSEC detects this and does not trust any of the DNS responses from those servers – which is what the protocol is supposed to do. That, however, leaves some users of IPFire with no DNS resolution at all. In case you have such a provider, the only option is to disable DNSSEC for the moment (and probably stop using online banking and reading email), contact the ISP, and organise a group that works together with them to get rid of this “extra service”.

And then there are some DNS providers who filter DNSSEC for their own marketing purposes. Every one of you can now decide what you value more: security, or an easy-to-circumvent content filter?

From that experience, I can only recommend using independent name servers that are not operated by an ISP or a larger organisation, where we cannot be sure what they are going to do with the data they collect by processing your DNS queries. The IPFire wiki has a list of providers (good and bad) that run public resolvers. It is up to you to figure out whom you trust and, of course, who does not break DNS for you.

Summary

In summary, I am still very proud that we made DNS, and therefore every network behind an IPFire box, more secure.

In a correctly configured DNS environment, this works perfectly. Some people might have to do some homework and iron out some configuration issues, but as soon as that is done, you will be fine. You will not notice anything, which I find quite nice for some extra security. It should not reduce the user experience or make things complicated. DNSSEC does neither, but brings many advantages with it. Many of you have probably been using it without even knowing.

We decided not to add an “off switch” for DNSSEC anywhere, since too many of you would probably use it without thinking. It is a bit like a button that disables the firewall: it probably increases usability when you do not have to configure any more rules, but it completely ruins security. So, to not tempt you – and because there are really no good reasons to disable this at all – we decided against it.

As usual, I am looking forward to receiving bug reports in case there are some real problems with DNSSEC and IPFire. There is also documentation about everything on the IPFire wiki, and everyone can contribute more guidance to help out others.

After all, it is not very common to roll this out by default on such a large scale, and IPFire might be one of the first to do so. So thanks for being with us. I really enjoy pioneering new technology together with all of you.
