How the Comodo certificate fraud calls CA trust into question

The security provided by HTTPS is dependent on trust. Without that trust, …

Ars Staff

Recently at Ars we've had a couple of discussions about the use of HTTPS—that is, HTTP secured using SSL or TLS—for every website, as a way of keeping sensitive information out of reach of eavesdroppers and ensuring privacy. That's definitely a good thing, but it rests on an assumption: that HTTPS is actually effective at protecting privacy. Recent goings-on at Certificate Authority (CA) Comodo provide compelling evidence that the trust HTTPS depends on is misplaced.

There are two interrelated aspects to SSL. The first is encryption—ensuring that nobody can understand the communication between a client and a server—and the second is authentication—proving to the client that it is actually communicating with the server it thinks it's communicating with. When a client first connects to an HTTPS server, both parties have a bit of a problem. They would like to encrypt the information they send each other, but to do this, they both need to be using the same encryption key. Obviously, they cannot just send the key to each other, because anyone listening in on the connection will be able to watch them do so, and use the key to decrypt the communication themselves. Fortunately, clever mathematics allows both parties to share an encryption key without it being disclosed to any eavesdroppers.
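To make that concrete, here is a toy sketch of Diffie-Hellman-style key agreement, one of the mathematical tricks used for this purpose. The numbers are deliberately tiny so the arithmetic is visible; real deployments use enormous primes or elliptic-curve equivalents.

```python
# Toy Diffie-Hellman key agreement (Python). The parameters are tiny for
# readability; real TLS uses very large primes or elliptic curves.
import secrets

p, g = 23, 5                        # public prime modulus and generator

a = secrets.randbelow(p - 2) + 1    # client's secret exponent (never sent)
b = secrets.randbelow(p - 2) + 1    # server's secret exponent (never sent)

A = pow(g, a, p)                    # client sends A in the clear
B = pow(g, b, p)                    # server sends B in the clear

client_secret = pow(B, a, p)        # client combines B with its own secret
server_secret = pow(A, b, p)        # server combines A with its own secret

# Both sides arrive at the same value; an eavesdropper who saw only
# p, g, A, and B cannot feasibly compute it (for large enough p).
assert client_secret == server_secret
```

The shared value then seeds the symmetric key that encrypts the rest of the session.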

Defeating the man-in-the-middle

But what if, instead of merely eavesdropping, the malicious party actively interferes with the connection, placing itself between the client and the server and intercepting everything sent between the two? This is known as a man-in-the-middle (MITM) attack, and it would be a big problem. The MITM could act as the server (as far as the client was concerned) and as the client (as far as the server was concerned), sharing one key with the client and another with the server. He could then decrypt anything the client sent, examine it, re-encrypt it, and pass it along to the server, and neither side would be any the wiser.

This is where authentication, in the form of certificates, comes to the rescue. Certificates are an application of public key cryptography. With normal (symmetric) encryption, the key used to encrypt data is the same key used to decrypt it; if you know the key, you can both encrypt and decrypt as you see fit. Public key cryptography, however, uses two keys: a private key, which is kept secret, and a public key, which is shared with the world. Each key only works "one way": anything encrypted with the public key can only be decrypted with the private key, and anything encrypted with the private key can only be decrypted with the public key.
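As a rough illustration of that one-way property, here's a sketch using the third-party Python cryptography package (our choice for illustration; SSL libraries do all of this internally):

```python
# Sketch of public-key asymmetry with the "cryptography" package
# (pip install cryptography). TLS libraries perform these steps internally.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone holding the public key can encrypt...
ciphertext = public_key.encrypt(b"secret handshake data", oaep)

# ...but only the holder of the private key can decrypt.
assert private_key.decrypt(ciphertext, oaep) == b"secret handshake data"

# The reverse direction is a signature: produced with the private key and
# checkable by anyone with the public key (verify() raises if it's forged).
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(b"certificate contents", pss, hashes.SHA256())
public_key.verify(signature, b"certificate contents", pss, hashes.SHA256())
```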

Initiating an SSL session. The user's browser will also check that the certificate is valid and signed by a trusted entity.

Public key cryptography is very powerful, because it enables the establishment of trust. If a public key can be used to decrypt a piece of information then it's all but certain that the information was originally encrypted with the corresponding private key. And so, this mechanism is built into SSL. The server publishes a certificate—a little chunk of data that includes a company name, a public key, and some other bits and pieces—and when the client connects to the server, it sends the server some information encrypted using the public key from the certificate. The server then decrypts this using its private key. This information is used to encrypt subsequent communication.
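In code terms, a certificate is just a structured blob the client can parse to pull out the claimed identity and the public key. A minimal sketch, again using the Python cryptography package; "server.pem" is a placeholder file name:

```python
# Inspect a PEM-encoded certificate; "server.pem" is a placeholder path.
from cryptography import x509

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.subject.rfc4514_string())   # who the certificate claims to identify
print(cert.issuer.rfc4514_string())    # who vouches for that claim
public_key = cert.public_key()         # used to protect the handshake secret
```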

Since only the server knows the private key—and hence only the server can decrypt the information encrypted with the public key—this allows the client to verify that it's communicating with the rightful owner of the certificate. That's still not quite enough to safeguard against MITM attacks, however. To defeat this setup, the MITM just has to do a little more work: he has to create his own certificate, with its own private/public key pair. With that, he can still sit between client and server, acting as server to the client and client to the server, listening in on everything sent between the two.

The solution: trust

So there's one more piece to the puzzle: a chain of trust. To verify the authenticity and identity of the certificates themselves, they are linked back to a trustworthy source of certificates. Instead of simply generating a certificate oneself (called a "self-signed certificate"), one instead pays some money to a Certificate Authority (CA) and has it generate the certificate. Every certificate the CA generates is marked as originating from them (again using the properties of public key cryptography), and most Web browsers and operating systems will only trust certificates that directly or indirectly link back to one of a handful of CAs, the "root CAs." Any certificate that doesn't link back to a root CA—such as a self-signed certificate—will generate a big scary warning in the browser. Operating systems and browsers have preinstalled copies of the root CA certificates so that they can validate these links.
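That chain check is exactly what a standard TLS client performs on every connection. A minimal sketch using Python's standard ssl module (the hostname is just an example): the handshake fails up front if the server's certificate doesn't chain back to a trusted root and match the hostname.

```python
# Minimal TLS client using the standard library; the hostname is an example.
# wrap_socket() only succeeds if the server's certificate chains back to a
# root in the platform's trust store; a self-signed certificate raises
# ssl.SSLCertVerificationError instead.
import socket, ssl

context = ssl.create_default_context()   # loads the platform's root CA store

with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        cert = tls.getpeercert()
        print("subject:", cert["subject"])
        print("issuer: ", cert["issuer"])
```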

An illustration of the chain of trust

In principle, each CA will only issue a certificate if the organization buying it proves its identity to the CA, by sending notarized paperwork or through some similar mechanism. This means that a certificate purporting to represent, say, Amazon must genuinely have been issued to Amazon. Some certificates, called Extended Validation (EV) certificates, have an even higher identification threshold (and price) before they can be issued. CAs shouldn't issue certificates claiming to represent Amazon to any company that isn't Amazon.

This is what allows the man-in-the-middle to finally be defeated. Although he can create his own certificate pretending to belong to the server that the client is trying to connect to, what he can't do is to create a certificate that is linked back to a root CA—the root CA will only issue certificates to their rightful owners. And since the Web browser won't trust any certificate that doesn't link back to one of the root CAs it knows about, the MITM can no longer secretly place himself between the client and the server—any attempt to do so will result in a big warning or error message in the client's Web browser.

So, that's how it should all work. And each part is necessary: without the chain of trust, the certificate authentication can't be trusted; without the certificate authentication, the encryption can't be trusted; and without the encryption, there's no protection against eavesdroppers.

The mathematics behind the authentication and encryption are pretty robust (at least given current knowledge), so those parts are reasonably safe. But an awful lot of trust is placed on those root CAs. If a root CA starts issuing certificates to people that it shouldn't—giving a hacker a certificate purporting to be Amazon, say—then the whole system collapses. The hacker can act as a man-in-the-middle and the client's Web browser will actually trust his certificate. No warning about self-signed certificates; everything will just work as if nothing were wrong.

Comodo's colossal screw-up

And that's exactly what one of the root CAs, Comodo, has done. Nine times. A user account belonging to a Comodo "Trusted Partner" based in Southern Europe was hacked, and this hacked account was used to issue nine fraudulent certificates. Interestingly, the target appears to be e-mail and/or instant messaging: the certificates were issued for mail.google.com, www.google.com, login.yahoo.com (three different certificates were issued for this domain), login.skype.com, addons.mozilla.org, login.live.com, and "global trustee" (which is not a valid domain name; the purpose of this certificate is not entirely clear).

Comodo for its part insists in a statement published Wednesday that the company itself was not hacked, and that its systems remain fundamentally secure. The hacked user account has been suspended, and the company has instituted "additional audits and controls" of an entirely unspecified nature.

Prior to Comodo's public disclosure, security researcher and Tor developer Jacob Appelbaum documented some peculiar goings on in the way Mozilla Firefox and Chromium (the open source counterpart to Google Chrome) handle certificates. On March 16, Chromium was patched to blacklist six certificates (meaning that the browser would refuse to trust them, even though they appeared to be linked back to a legitimate root CA). Twelve hours later on the 17th, Google rolled out an important Chrome update that included the newly blacklisted certificates. That same day, equivalent changes were made to Firefox (a change that likely delayed the release of Firefox 4).
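The blacklisting idea itself is straightforward: the client keeps a short list of known-bad certificates and refuses them no matter how valid their chain of trust looks. A rough sketch of one way to do it, keyed on certificate fingerprints (the values below are placeholders; the browsers' actual lists use serial numbers and other identifiers):

```python
# Sketch of a client-side certificate blacklist: reject any certificate
# whose SHA-256 fingerprint appears on a known-bad list, regardless of how
# valid its chain looks. The fingerprints below are placeholders.
import hashlib

BLACKLISTED_FINGERPRINTS = {
    "0000000000000000000000000000000000000000000000000000000000000001",
    "0000000000000000000000000000000000000000000000000000000000000002",
}

def certificate_allowed(der_bytes: bytes) -> bool:
    fingerprint = hashlib.sha256(der_bytes).hexdigest()
    return fingerprint not in BLACKLISTED_FINGERPRINTS
```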

Further detective work by Appelbaum revealed that the blacklisted certificates were issued by Salt Lake City-based Comodo reseller UserTrust.

Comodo has revealed little information about the hack itself or how its systems actually work; it's not immediately clear why a Southern European user account should be able to get a reseller in Utah to issue them a certificate, nor is it obvious exactly whose systems were compromised or how. The one thing that is clear is that the trust model isn't all that trustworthy.

Certificates are designed, at least to an extent, to be able to withstand this kind of attack. Certificates issued by a CA typically include within them the URL of a Certificate Revocation List (CRL) published by that CA. As the name implies, each CRL is a list of certificates that a particular CA has, for one reason or another, revoked. Any program using certificates should consult the CRL before electing to trust a particular certificate and, if it finds the certificate on the list, reject it. A newer mechanism with the same underlying purpose, the Online Certificate Status Protocol (OCSP), is also in use, and EV certificates now require the use of OCSP. So now that the bad certificates have been discovered, it should be a simple matter to just revoke them and all will be well, right?
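For reference, the pointers a client follows to make that check are embedded in the certificate itself as X.509 extensions. A sketch of digging them out with the Python cryptography package; "server.pem" is a placeholder, and not every certificate carries both extensions:

```python
# Extract revocation-related URLs from a certificate. "server.pem" is a
# placeholder path; either extension may be absent (ExtensionNotFound).
from cryptography import x509
from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

crl = cert.extensions.get_extension_for_oid(ExtensionOID.CRL_DISTRIBUTION_POINTS)
for point in crl.value:
    if point.full_name:
        print("CRL:", [name.value for name in point.full_name])

aia = cert.extensions.get_extension_for_oid(
    ExtensionOID.AUTHORITY_INFORMATION_ACCESS)
for desc in aia.value:
    if desc.access_method == AuthorityInformationAccessOID.OCSP:
        print("OCSP:", desc.access_location.value)
```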

Alas, it's not that simple. Certificate revocation is basically useless. If the server hosting the CRL (or the OCSP responder) reports an internal error when the browser asks for revocation information, then rather than playing it safe and rejecting the certificate, the browser will just silently trust it. There may be some slight indication of a problem—in particular, Extended Validation certificates might not turn the address bar green—but essentially, the connection will succeed. This is a classic example of fail-unsafe (fail-open) design. And it's a big problem, because any MITM in a position to use the fraudulent certificates is also in a position to intercept and block access to the CRL. The way CRL validation has been implemented means that it can be effectively defeated by a man-in-the-middle.
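The difference between what browsers do and what they should do fits in a few lines. In this sketch, fetch_crl() is a hypothetical helper that downloads and parses the issuing CA's revocation list; the only thing that changes between the two functions is the behavior when the CRL can't be fetched.

```python
# Fail-open vs. fail-closed revocation checking. fetch_crl() is a
# hypothetical helper returning the set of revoked serial numbers.
def cert_ok_fail_open(cert, fetch_crl):
    try:
        revoked_serials = fetch_crl(cert)
    except OSError:
        return True             # can't check, trust anyway (what browsers do)
    return cert.serial_number not in revoked_serials

def cert_ok_fail_closed(cert, fetch_crl):
    try:
        revoked_serials = fetch_crl(cert)
    except OSError:
        return False            # can't check, refuse to trust (safe default)
    return cert.serial_number not in revoked_serials
```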

Identifying the perpetrators

One of the few details that Comodo did reveal is the supposed IP address used by the attackers. It was traced back to Tehran, Iran. Comodo also says that one of the certificates was tested on a site based in Iran. The use of Iranian IP addresses could be an attempt to mask the true identity of the attacker, but assuming that it's genuine, the implication is that somebody in Iran—possibly government, possibly not—wishes to listen in on the communications of Iranians. The fraudulent certificates were issued primarily for communication services; with them, it would be relatively trivial to act as a man-in-the-middle for Gmail, Yahoo! Mail, and Hotmail, allowing access to the inboxes of any users of those services. This focus on communications (rather than, say, online commerce or banking sites) may reveal an intent to eavesdrop on dissidents rather than to grab credit card details.

Iran is forced to use such an approach because it has no more direct way of breaking SSL. The standard set of trusted root CAs also includes a number of CAs that are either implicitly or explicitly state entities (such as the UAE's Etisalat, which is known to perform surveillance and censorship on behalf of the UAE government), and if a government can coerce a CA into issuing a fraudulent certificate, it can perform the same kind of man-in-the-middle attack—and there is circumstantial evidence that governments do exactly that. In such situations, even a revocation list cannot help. And yet these CAs are trusted by default in a wide range of products.

The chain of trust is broken

This is not the first time that a bogus certificate has been issued. Back in 2001, Verisign issued a code-signing certificate to an individual masquerading as a Microsoft employee, allowing a malicious party to produce programs that appeared to have been produced by Microsoft—an action sure to see the code trusted when it shouldn't have been. Algorithmic flaws at one time allowed hackers to produce fraudulent certificates, though CAs have responded by no longer using the affected algorithms.

This attack was worse than those previous incidents, however. Jacob Appelbaum's investigation revealed evidence that one of the certificates, for login.yahoo.com, was actually put into use, and because of the nature of the MITM attacks that these fraudulent certificates enable, the inability to robustly revoke certificates is a problem here—it wasn't for the Verisign incident. Plainly, the model in which certain companies are given essentially absolute trust by the browser doesn't work. It has created multiple single points of failure—an attack against any one root CA jeopardizes SSL for everyone—and those failure points do actually fail. A single hack of a CA, or coercion of a CA in a despotic regime, means that a malicious party can produce a certificate that essentially every device on the Internet will trust, allowing interception and eavesdropping of secure communications.

But shifting away from this model—perhaps to one similar to that used in PGP, where trust is decentralized into a web of trust rather than a chain of trust—is going to be an uphill battle. The current chain-of-trust approach is deeply entrenched, and the commercial nature of most root CAs means that they will apply pressure to keep the current system. There are also proposals to more directly associate a domain name with a particular CA and particular certificates, which should make attacks harder—they would at least mean that the right CA has to be hacked before a bogus certificate could be used, though they might not be enough to stop a MITM who can intercept DNS too. But these are just small steps; fundamentally, the single points of failure still exist.
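Pinning a specific certificate to a hostname is one of the simpler variants of that idea, and it can be sketched in a few lines. The pinned fingerprint below is a placeholder, and a real client would still need a safe way to learn and update pins.

```python
# Sketch of certificate pinning: remember a known-good SHA-256 fingerprint
# for a host and reject anything else, regardless of which CA signed it.
# The pin below is a placeholder value.
import hashlib, socket, ssl

PINS = {"mail.google.com": "placeholder-sha256-fingerprint"}

def verify_pin(host, port=443):
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    fingerprint = hashlib.sha256(der).hexdigest()
    if fingerprint != PINS.get(host):
        raise ssl.SSLError(f"certificate pin mismatch for {host}")
    return fingerprint
```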

A solution to the revocation issue may be simpler, at least insofar as it won't require any radical alteration to the way that SSL works—ensure that certificates are valid only for a few days, and create automated processes to allow Web servers and the like to update their certificates. This would increase the burden on CAs somewhat, but it would mean that the window of opportunity for a fraudulent certificate was very narrow. After a few days the bogus certificate would expire anyway, and client software already warns about expired certificates robustly. This would still not guard against prolonged hacks or coercion of CAs, but would certainly help counter incidents such as this one.
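The client side of that scheme needs nothing new, since checking how long a certificate has left is already routine. A sketch using the Python cryptography package, with a placeholder file name:

```python
# How many days of validity a certificate has left; "server.pem" is a
# placeholder path. Expired certificates already trigger hard warnings.
import datetime
from cryptography import x509

def days_remaining(pem_path: str) -> int:
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    return (cert.not_valid_after - datetime.datetime.utcnow()).days
```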

As for what users can do to protect themselves, the main thing is updating their software. Firefox and Chrome both incorporate blacklists to guard against the use of certain certificates independent of their revocation status, and Windows also includes a list of spiked certificates. Microsoft has issued a patch for Windows to update its blacklist, and as previously described, both Firefox and Chrome have been updated to blacklist the bad certificates. For mobile platforms, the situation is trickier. They too should be patched, but patch availability appears to be non-existent so far, and the difficulties that vendors are having in delivering timely updates to Android and Windows Phone 7 make it unlikely that a timely fix will be made available.

HTTPS and other SSL-using protocols (secure SMTP, POP, IMAP, Jabber, and many, many others all build on SSL) still offer protection against casual snoopers; they'll protect against the use of Firesheep in a hipster café just fine. But the trust and security promises that are implicit in the use of SSL, and which are depended on by many—to the extent that people literally bet their life on these protections—are promises that it cannot keep. The centralized trust model doesn't work.
