This article is about encryption backdoors. But first, some history.
In 1897, Indiana almost passed a law legally defining Pi – the irrational ratio of a circle’s circumference to its diameter – as 3.2:
“Furthermore, it has revealed the ratio of the chord and arc of ninety degrees, which is as seven to eight, and also the ratio of the diagonal and one side of a square which is as ten to seven, disclosing the fourth important fact, that the ratio of the diameter and circumference is as five-fourths to four“ – End of Section 2 of the “Indiana Pi Bill” of 1897 [emphasis added]
At this point, the Pi Bill is a joke that you probably covered in your Precalculus class.
Shockingly, though, an equally silly mathematical debate is playing out in the halls of Congress right now: whether companies should be forced to build “encryption backdoors” – in layman’s terms, “little-known weaknesses” – into their encryption algorithms.
Unfortunately, while the intentions behind the debate may be good, it is mathematically impossible to design a backdoor that only one party can enter through.
Security By Obscurity
Without getting too far into the weeds of how, exactly, encryption backdoors would work… the concept itself is simple: a backdoor into encryption is an example of security by obscurity.
If you approached a house that you needed to enter but didn’t have the key what would be your first move?
- If you answered “look for an out-of-place rock”, because you know people like to hide spare keys under them – that’s the equivalent of security by obscurity. It will stop the most casual criminal, but most burglars know people hide door keys somewhere within sight of the front door.
Knocking the door down, of course, would be the equivalent of a brute force attack. While the average adult can kick through a weak door, it would take today’s computers* an incredibly long time to break the encryption on a single message – and there are ways to set up additional “doors” even if the first one is successfully brute forced.
It’s a great analogy… but this is where we note that (and this is not a conspiracy theory) the government has tried to sneak an obscure backdoor into encryption algorithms in the past – and pressured private companies to implement it.
To have encryption, you need randomness.
Encryption requires random numbers, and these usually come from a physical (hardware) process – modern CPUs (and sometimes other devices, like expansion cards and USB keys) have blocks dedicated solely to creating random numbers.
Hardware randomness is a slow, limited resource, so computers stretch it: the hardware output becomes the ‘seed’ for a pseudorandom number generator, which takes that genuinely random seed and produces an apparently random stream. This is where the infamous backdoor attempt – a generator called Dual_EC_DRBG – came into play. I’ll illustrate it with an example:
Let’s say that a keypad’s password is generated by a malicious pseudorandom algorithm, nominally producing a value between 1 and 100. Every time you enter an incorrect password, the keypad locks for 5 seconds:
- A naive guesser would average about 50 wrong answers, or roughly 250 seconds, to break in.
- However, someone who knew that the malicious algorithm only ever produces 5, 10, 20, 80, 90, or 95 would average about 2.5 wrong guesses – well under 15 seconds to break in.
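The keypad arithmetic above can be sketched in a few lines. This is a toy illustration, assuming the 5-second lockout is the only cost and every candidate code is equally likely (the expected number of wrong guesses over n candidates is (n − 1) / 2):

```python
# Expected break-in time for the keypad, with a 5-second lockout
# charged for every wrong guess.
LOCKOUT_SECONDS = 5

def expected_break_in_seconds(num_candidates):
    # On average you make (n - 1) / 2 wrong guesses before the right one.
    return (num_candidates - 1) / 2 * LOCKOUT_SECONDS

print(expected_break_in_seconds(100))  # naive attacker: 247.5 seconds
print(expected_break_in_seconds(6))    # attacker who knows the bias: 12.5 seconds
```

Knowing the bias cuts the expected cost by a factor of about twenty – and that is with only 100 candidates to start from.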
Encryption backdoors work on a far grander scale: biasing a pseudorandom number generator might reduce the range of possible guesses from trillions of trillions down to trillions.
While still nominally huge, trillions of guesses can be performed by any typical computer sold in the last 5 years (including the MacBook Pro 15″ I’m typing this on)… in well under a day.
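The deeper problem is that a pseudorandom generator is completely determined by its internal state. A minimal sketch (Python’s non-cryptographic Mersenne Twister, with a made-up seed, purely for illustration): anyone who learns the seed can predict every “random” output, which is exactly what makes a backdoored generator so valuable.

```python
import random

# A PRNG is deterministic: the same seed reproduces the same stream.
victim = random.Random(1337)    # hypothetical seed the attacker learned
session_keys = [victim.randint(0, 2 ** 32 - 1) for _ in range(3)]

attacker = random.Random(1337)  # the attacker simply replays the generator
predicted = [attacker.randint(0, 2 ** 32 - 1) for _ in range(3)]

assert predicted == session_keys  # every "random" value, predicted exactly
print(predicted)
```

A backdoor like the one described below doesn’t hand the attacker the seed outright – it biases or leaks the state – but the end result is the same: the “random” stream stops being unpredictable to the one party in the know.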
Of course, like many obscurity-derived schemes, the generator in question – Dual_EC_DRBG, a NIST-standardized algorithm championed by the NSA – was suspicious to some experts almost from the moment it was published. Those suspicions were confirmed in 2013 when the Guardian, working from the Snowden leaks, reported that Dual_EC_DRBG had been deliberately weakened by the NSA.
It’s true – sufficiently advanced and hidden encryption backdoors can remain hidden for quite some time, but it only takes a computing genius, some luck, sustained attacks, or perhaps a Snowden-type leak for that backdoor to become known. As best illustrated by The Simpsons, the backdoor is going to be found eventually:
The Practical Dangers of ‘Practical’ Restriction – a Case Study
You’ll note that in the last section I left an ‘asterisk to nowhere’ about today’s computers. Let’s talk now about how ‘practical, common sense’ standards from Congress broke parts of the internet for somewhere between 0 and 20 years.
For (probably) as long as there has been encryption, there have been export controls on that encryption. Since WWII, the United States has regulated encryption as a munition. This isn’t a uniquely American thing, either – most countries that research advanced cryptography try to keep some of the secret sauce secret and local.
The NSA and private internet interests first started to have some friction in 1992, when a compromise bill allowed only 40-bit RC2 and RC4 encryption to be released overseas (key length is measured in bits, and each additional bit doubles the number of possible keys), and limited exported RSA public keys to 512 bits. RSA is still in common use today – but usually with a key size of at least 2,048 bits. (And, not that it fully matters, but for full disclosure: I took one Computer Science class with Leonard Adleman – the ‘A’ in RSA – in college.)
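To get a feel for those numbers: since each extra bit doubles the keyspace, the gap between the 40-bit export limit and a modern 128-bit symmetric key is astronomical. A back-of-the-envelope sketch (the billion-guesses-per-second rate is a made-up round number):

```python
# Each additional key bit doubles the number of possible keys.
export_keys = 2 ** 40    # the 40-bit limit from the 1992 compromise
modern_keys = 2 ** 128   # a typical modern symmetric key length

assert 2 ** 41 == 2 * export_keys  # one more bit, twice the keys

# At a hypothetical billion guesses per second:
rate = 10 ** 9
print(export_keys / rate / 60, "minutes to exhaust 40 bits")            # ~18 minutes
print(modern_keys / rate / (3600 * 24 * 365.25), "years for 128 bits")  # ~1e22 years
```

Minutes versus a number of years with 22 digits – that is what 88 extra bits buys you.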
Of course, even though export controls eventually loosened, these particular controls led to one of the longest-running encryption weaknesses since the Enigma machine. First published in March of 2015, the so-called FREAK attack specifically targets this weaker, export-grade RSA encryption by tricking servers into ‘falling back’ to it… and those old keys haven’t aged well: they are easily brute-forced by modern machines.
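Why are short RSA keys so dangerous? RSA’s security rests entirely on the difficulty of factoring the public modulus – factor it, and the private key follows immediately. A toy sketch with textbook-sized primes (real export-grade moduli were 512 bits; the math is identical, only the factoring effort differs):

```python
# Toy RSA: tiny "secret" primes stand in for an export-grade key.
p, q = 61, 53
n, e = p * q, 17              # the public key: modulus and exponent

# The attacker factors n -- trivial here, and feasible for 512-bit
# moduli with rented cloud compute in 2015.
factor = next(d for d in range(2, n) if n % d == 0)
phi = (factor - 1) * (n // factor - 1)
d = pow(e, -1, phi)           # the recovered private exponent

message = 42
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message  # decrypted with the stolen key
```

(The `pow(e, -1, phi)` modular inverse requires Python 3.8+.) Once the modulus falls, every message ever sent under that key is readable.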
Encryption Backdoors: Echoes in Time
This attack was more an eventuality than a product of malicious intent – a common outcome whenever a constant quantity (a fixed key length) meets an exponential one (computing power).
In the 1990s, the key sizes were selected because it was believed only one state actor – the United States – had the computing power to crack keys in a reasonable amount of time. Nowadays, companies like Amazon rent out computing cycles as a commodity – and when the FREAK attack was published, it cost a mere $100 to rent the cycles needed to crack a typical web site still using the old export-restricted key lengths.
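The “constant meets exponential” dynamic is easy to put numbers on. Under a rough doubling-every-18-months assumption for available compute (a simplification, with made-up figures), a key that was comfortably out of reach in 1992 is trivial two decades later:

```python
# If available compute doubles every ~18 months, the time to crack a
# fixed-length key halves on the same schedule.
def years_to_crack(initial_years, years_elapsed, doubling_period=1.5):
    return initial_years / 2 ** (years_elapsed / doubling_period)

# A hypothetical key needing 30 years of 1992-era compute:
print(years_to_crack(30, 0))   # 1992: 30 years
print(years_to_crack(30, 23))  # 2015: a small fraction of a day
```

The key length never changed; the world around it did.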
But why were we using these insecure methods in the United States to begin with?
It’s a good question. While you would expect foreign browsers and servers to fall back to support legacy equipment, the answer for domestic users has more to do with the topology of the web – it is very hard, as it turns out, to keep ‘export-controlled’ encryption from leaking if you distribute a browser online. Because of that difficulty, most domestic users ended up with the weaker, ‘export-allowed’ encryption anyway.
That legacy is all it took for the internet to be broken for a very long time – although, of course, it’s impossible to say for certain when cracking it entered the budget of the “bad guys”. (Certainly at a point higher than $100, though.) Even today, an estimated 12% of all servers remain vulnerable to the attack!
(As an interesting side note, Amazon’s instances have been great for security researchers – a short password hashed with SHA-1 can be cracked for around $2.10 nowadays!)
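To see why short, unsalted hashes are so cheap to attack, here is a sketch that brute-forces a four-lowercase-letter password hashed with SHA-1 – only 26⁴ = 456,976 candidates, which a laptop exhausts in well under a second (the “leaked” hash is made up for the demo):

```python
import hashlib
from itertools import product
from string import ascii_lowercase

# A hypothetical leaked, unsalted SHA-1 hash of a short password.
leaked = hashlib.sha1(b"spam").hexdigest()

def crack(target_hex, length=4):
    # Try every lowercase string of the given length.
    for combo in product(ascii_lowercase, repeat=length):
        guess = "".join(combo).encode()
        if hashlib.sha1(guess).hexdigest() == target_hex:
            return guess.decode()
    return None

print(crack(leaked))  # -> spam
```

Each extra character multiplies the work by 26, which is exactly why length (and salting, and a slow password-hashing function) matters.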
Beware Good Intentions and Bad Results
“So what I’m trying to say is, I think this world is really changing in terms of people wanting the protection and wanting law enforcement, if there is conspiracy going on over the Internet, that that encryption ought to be able to be pierced.” – Senator Dianne Feinstein, D-California
“Some technologists and Silicon Valley executives argue that any efforts by the government to ensure law-enforcement access to encrypted information will undermine users’ privacy and make them less secure. This position is ideologically motivated and profit-driven, though not without merit.” – Senator John McCain, R-Arizona
If you search the public statements coming out of Congress, you’ll note that mandatory backdoors have become a bipartisan issue – even Senators who represent Silicon Valley have made statements denying the mathematical realities of encryption and backdoors.
Contrast that with the two largest Silicon Valley companies in just the last week:
- Apple – currently stonewalling a government request to build a backdoor for an iPhone 5c which belonged to one member of the San Bernardino terrorist duo
- Alphabet (Google’s parent) – CEO Sundar Pichai siding with Apple on their stance
Internationally, the lines are even blurrier – President Obama backed down from introducing legislation to mandate backdoors late last year. Days later, he found himself lecturing China for passing legislation that would introduce backdoors(!).
To take it a step further, ISIS has a number of recommended security apps… and before you click, note that many aren’t even from US companies, so legislation here would have the strange effect of pushing more people to the foreign firms.
Compounding the problem, the cat is already out of the bag – just as with the proliferation of nuclear weapons and, to a lesser degree, personal firearms, the ideas behind encryption are now so widely known that compromised encryption would be improbable (if not impossible) to enforce. Even if you make the case that disarmament is possible, consider: encryption can be spread by email.
On the other hand, we tend to make terrorists out to be smarter than they are – encryption is hard, and the first phone recovered from the Paris attacks held plain old, unencrypted SMSes. (Of course, lies travel faster than truth, so it didn’t take long for politicians to blame Edward Snowden and encryption for the attacks – accusations that were later mostly walked back.)
Encryption is Hard. Don’t Make it Harder with Encryption Backdoors.
I’m a Computer Engineer. I have worked with encryption, and I have worked on encryption – but for the latter, I have only experimented locally and have never contributed my encryption code to anything public or used by the public. I consider myself pretty smart, but I still wouldn’t ship production encryption code with my current background. (And, no, I’ve never tried to insert encryption backdoors!)
Encryption is easily the most humbling field of Computer Science because, instead of dealing with the strict determinism of a processor, you’re dealing with cunning and unpredictable humans. It really is a case of “you don’t know what you don’t know”, and it’s a field where experience trumps formal study (you should apprentice for a while). For the other engineers out there: when you need encryption, use a well-known, popular library that has had a third-party security audit!
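In that spirit, here is a minimal sketch of “use the vetted primitives”: Python’s standard library already ships an OS-backed CSPRNG (`secrets`) and a constant-time comparison (`hmac.compare_digest`) – reaching for these beats reinventing them.

```python
import secrets
import hmac
import hashlib

# Draw key material from the OS CSPRNG, never from `random`.
key = secrets.token_bytes(32)            # 256-bit key
message = b"attack at dawn"

# Authenticate the message with HMAC-SHA256.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Verify in constant time -- comparing secrets with `==` leaks timing.
received_tag = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, received_tag)
```

None of this is exotic, and that is the point: the boring, audited path is almost always the secure one.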
Even large, widely used random number and encryption libraries break, get used incorrectly, or carry unnoticed (and virtually impossible to find) bugs for years and years. Aside from a few decade-plus-old algorithms still in use today, the encryption itself usually isn’t what’s broken – the implementation is. Furthermore, computers do get faster, and attacks that were impossible half a generation ago are now practical. That makes the argument for constant innovation, audits, and increases in key length – especially as novel, practical exploits become ever more affordable. Or, you know, in case complete game changers like quantum computers become commonplace.
There is no mathematical way to insert a backdoor that only the good guys can use. And even if you do insert a backdoor for the good guys, other entities – say, foreign countries you don’t trust – might demand access too. (And even if they don’t, keeping secrets secret is hard.)
Let’s not try to rewrite the laws of mathematics with the laws of our country.
No matter how colorful your prose, Pi will never equal 3.2 and your cleverly hidden encryption backdoors won’t stay hidden forever.