A recent announcement about the invention of public-key encryption has made me think about encryption and the Internet. I have been watching developments in encryption for over twenty years, and it is interesting to see how it has - and has not - developed.
Almost all consumer encryption on the web uses Secure Sockets Layer (SSL); a secure webpage is often denoted by a padlock in the URL bar of the browser. Common strengths of SSL encryption are 40-bit, 128-bit and 256-bit. The numbers give the length, in bits, of the 'key' - the secret that protects the data. If you know the key, then you can decipher the information being sent over the Internet. If you do not know the key, then it is extremely difficult to decipher the message.
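To get a feel for those numbers, remember that each extra bit doubles the number of possible keys. A minimal Python sketch (purely illustrative arithmetic, not tied to any particular SSL implementation):

    # Each added bit doubles the keyspace, so 128-bit is not "three times"
    # stronger than 40-bit but astronomically stronger.
    for bits in (40, 128, 256):
        print(f"{bits}-bit key: about {2**bits:.2e} possible keys")

A 40-bit key gives roughly 10^12 possibilities; a 128-bit key gives roughly 3 x 10^38.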
Of course, the recipient of the message also needs to decode it, and therefore needs the key. If a key needs to be transmitted, then it might be intercepted. For this reason, SSL uses a complex system called public-key cryptography. The theory behind this is not necessary for this post, but basically it lets two parties communicate without ever transmitting a secret key: the sender encrypts with the recipient's freely published public key, and only the recipient's matching private key can decrypt the result.
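To make the idea concrete, here is a toy sketch in Python of textbook RSA, the best-known public-key scheme. The numbers are deliberately tiny and hopelessly insecure; this shows the principle only, not how SSL actually negotiates keys:

    # Toy textbook RSA with tiny, insecure numbers -- illustration only.
    p, q = 61, 53                  # two secret primes
    n = p * q                      # modulus; (n, e) together form the public key
    phi = (p - 1) * (q - 1)
    e = 17                         # public exponent, published to the world
    d = pow(e, -1, phi)            # private exponent, kept secret (Python 3.8+)

    message = 42
    ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
    recovered = pow(ciphertext, d, n)  # only the private-key holder can decrypt
    assert recovered == message

The point is that (n, e) can be published openly: encrypting is easy, but reversing it without d requires factoring n, which is infeasible at real key sizes.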
The ability to send messages that only the intended recipient can read is the basis of Internet commerce. It allows me to order a few books on Amazon, or to check the balances of my bank accounts, without third parties seeing what I am doing. Such encryption is an essential part of modern life.
That same ability has frightened the security services for the last couple of decades. In the days of postal letters, laws were passed allowing the authorities to open and read them. Phones could be tapped, subject to a legal process. Both of these had obvious privacy issues, and the law had to tread a difficult line between privacy and national security. They did not always get it right.
Unfortunately, public-key cryptography meant that, although the messages could still be intercepted, they could not be read. 40-bit keys could just about be cracked using massive computing power; 128-bit was essentially impossible, and is still very difficult. This left the authorities with an obvious problem: what would happen if organised crime, terrorists or any other ne'er-do-wells started using unbreakable encryption?
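A back-of-the-envelope calculation shows why the gap between 40-bit and 128-bit mattered so much. Assume, purely hypothetically, a machine that tests a billion keys per second:

    # Hypothetical brute-force rate -- an assumption for illustration only.
    RATE = 1e9                     # keys tried per second
    SECONDS_PER_YEAR = 31_557_600

    for bits in (40, 128):
        seconds = 2**bits / RATE
        print(f"{bits}-bit: {seconds:.2e} seconds "
              f"(~{seconds / SECONDS_PER_YEAR:.2e} years)")

Even on that generous assumption, a 40-bit key falls in around twenty minutes, while exhausting a 128-bit keyspace would take on the order of 10^22 years.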
Initially they tried to ban the technology for export. A decade ago I attended a few export control meetings in London. Export of SSL technology was limited to 40-bit, which was weak, and the company I was working for wanted to export 128-bit to secure Internet banking. As our customer was in Scandinavia, we needed an export licence.
This would all have been very amusing if it had not been so time-consuming and pointless. The code for 40-bit SSL was freely available open source, and must have spread all over the world before anyone had even thought of slapping controls on it. Adding support for 128-bit was quite simple for anyone with good mathematical and coding skills.
For this reason, the US Government came up with the idea of Clipper. This was a chip and associated architecture that would allow encrypted phone calls, but would also give the US Government a back door to listen to conversations if required. An associated chip, Capstone, would be used to encrypt data on computers.
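Conceptually, a Clipper-style system works by escrowing the key: each call's session key is itself encrypted under a government-held key and transmitted alongside the traffic (the 'law-enforcement access field'). Here is a minimal sketch of that idea, with a toy XOR cipher standing in for the real, classified Skipjack algorithm - an illustration of key escrow in general, not the actual Clipper protocol:

    import os

    def toy_cipher(key: bytes, data: bytes) -> bytes:
        # XOR stand-in for a real cipher; encryption and decryption are identical.
        stream = (key * (len(data) // len(key) + 1))[:len(data)]
        return bytes(k ^ d for k, d in zip(stream, data))

    session_key = os.urandom(8)   # per-call key agreed between the two phones
    escrow_key = os.urandom(8)    # held by the government (in Clipper, split between agencies)

    call = b"hello, world"
    wire_traffic = toy_cipher(session_key, call)
    leaf = toy_cipher(escrow_key, session_key)   # session key, escrowed alongside the call

    # A wiretapper holding the escrow key recovers the session key, then the call.
    recovered_key = toy_cipher(escrow_key, leaf)
    assert toy_cipher(recovered_key, wire_traffic) == call

The weakness is plain even in the sketch: whoever holds the escrow key can read everything, so the whole system stands or falls on trusting its custodian.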
Think about the problems involved with this: it required the public to trust the US Government to use the system properly (i.e. only listening to messages when there was a real need); the protocols and the encryption standard were secret and could not be independently evaluated; and there was little idea what the rest of the world would do. Indeed, the mere threat of Clipper led to the creation of more open-source, publicly available encryption software.
Clipper seemed like an advanced concept in the early nineties, yet its critics rightly sought its abandonment. Would the world have been safer if Clipper had been introduced? I doubt it. Public-key cryptography had been invented long before Clipper, and criminals would surely have used it instead of Clipper-enabled systems. How could the US Government have forced, say, the Iranians or other governments to use Clipper?
For these reasons, the Clipper proposal was really a non-starter. The project was abandoned in 1996 after three years and a great deal of money had been spent.
Instead, some countries have introduced laws that make it illegal to fail to produce an encryption key when demanded. In the UK this is enshrined as part of the Regulation of Investigatory Powers Act 2000, otherwise known as RIPA. RIPA has led to arrests. Although imperfect, this seems like a far better system than any Clipper-type scheme. It means that the authorities have to go through a legally-defined process to read encrypted messages (*).
The mathematics behind public-key encryption is fascinating, but so are the legal and moral dilemmas that it produces. If you want to read more about cryptography and encryption, then you could do worse than read Simon Singh's The Code Book. If you want a web resource, then Greg Goebel's website has an excellent primer on codes and ciphers, and also a guide to codes, ciphers and codebreaking that has a chapter on public-key cryptography.
(*) It may be possible, perhaps even probable, that GCHQ and others have computer systems capable of breaking all encryption using brute force or other techniques. If so, it is not widely known, and it is doubtful whether evidence from such systems could be used in a court of law.