The term “DomainKeys Identified Mail” or “DKIM” is, for most people, Greek, even though we use it every day. Wikipedia calls it “an email validation system designed to detect email spoofing” — that is, it helps ensure that the emails you receive actually come from the people who purport to have sent them. DKIM uses things called “digital signatures” and “public keys” to verify that the company sending the email — everything in the email address after the at-sign — is the one listed. And DKIM is the generally agreed-upon model for email security. If you’re using Gmail, Yahoo mail, AOL’s email service, or any of a bunch of others, you’re using DKIM.
The technical specs of DKIM, for our purposes, aren’t all that important. But let’s focus a bit on the “public key” thing. It’s a string of bits with a simple rule of thumb: the longer the key, the better the security. The DKIM standards, as of 2012 at least, suggested (if not required) a 1,024-bit key. And that’s what tripped up a Florida mathematician named Zach Harris.
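For the curious, the signing-and-verifying flow can be sketched in a few lines of Python. This is a toy illustration with a textbook-sized RSA key pair — the numbers and email headers below are made up for demonstration and bear no relation to any real key (actual DKIM keys are 1,024 bits or more, published as DNS TXT records):

```python
import hashlib

# Toy RSA key pair standing in for a domain's DKIM key.
# These are the classic textbook numbers -- illustrative only.
p, q = 61, 53
n = p * q          # 3233: the public modulus
e = 17             # public exponent (published in DNS)
d = 413            # private exponent (kept secret by the sender)

def digest(msg: bytes) -> int:
    # Hash the signed headers and reduce mod n so it fits the toy key.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# The sending server signs the message headers with its private key.
headers = b"from: someone@example.com\nsubject: hello"
signature = pow(digest(headers), d, n)

# The receiving server looks up the domain's public key (in real DKIM,
# a DNS TXT record at <selector>._domainkey.<domain>) and verifies.
assert pow(signature, e, n) == digest(headers)
```

Only someone who knows the private exponent `d` can produce a signature that verifies under the published public key — which is exactly why the length of that key matters so much.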
In December 2011, Harris got an email from someone claiming to be a recruiter for Google, wondering if the mathematician was interested in working for the company. The email noted that Harris “obviously [has] a passion for Linux and programming” and the recruiter wondered if Harris was “open to confidentially exploring opportunities with Google.” The email struck Harris as odd — his passion was clearly for math, not necessarily Linux or programming. And as far as he could remember, he hadn’t done anything that suggested an interest in working for Google. Harris’s skepticism took hold and he checked to see if the email was coming from Google at all — and upon further investigation, he found out that it was.
However, he noticed something was off. The sender wasn’t using a 1,024-bit public key, but a 512-bit one. And that, Harris knew, wasn’t very secure, even though at the time, he wasn’t aware of the DKIM standard. Harris explained the differences to Wired: “A 384-bit key I can factor [effectively, decode and therefore, use to send fake emails] on my laptop in 24 hours. The 512-bit keys I can factor in about 72 hours using Amazon Web Services for $75. [. . .] Then there are the 768-bit keys. Those are not factorable by a normal person like me with my resources alone. But the government of Iran probably could, or a large group with sufficient computing resources could pull it off.” As for 1,024-bit encryption? The U.S. government could probably break it, but it’d be time-consuming.
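Why does factoring break the key? An RSA public key is the product of two secret primes; whoever recovers those primes can rebuild the private key and sign anything. Here’s a toy sketch of that attack in Python, using a deliberately tiny modulus (around 40 bits — real 512-bit keys require far more sophisticated algorithms and hardware than the naive trial division shown here):

```python
from math import isqrt

# A toy RSA "DKIM" key -- far smaller than even 384 bits, for illustration.
p, q = 999983, 1000003           # the secret primes (unknown to an attacker)
n, e = p * q, 65537              # the public key: modulus and exponent

def factor(n: int) -> tuple[int, int]:
    """Naive trial division -- feasible only because n is tiny.
    Factoring a real 512-bit modulus takes serious compute (hence the
    '$75 of Amazon Web Services' Harris described)."""
    for f in range(3, isqrt(n) + 1, 2):
        if n % f == 0:
            return f, n // f
    raise ValueError("no factor found")

fp, fq = factor(n)
phi = (fp - 1) * (fq - 1)
d = pow(e, -1, phi)              # private exponent, rebuilt from the factors

# With d in hand, an attacker can "sign" any message digest as the
# key's legitimate owner -- i.e., send email as anyone at that domain.
msg_digest = 123456789
forged_sig = pow(msg_digest, d, n)
assert pow(forged_sig, e, n) == msg_digest   # verifies under the public key
```

The only thing standing between an attacker and that last line is the cost of the `factor()` step — which is why each doubling of key length matters so much.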
Harris knew that Google knew this — they’re Google, after all — and couldn’t fathom that the tech giant would have such lackluster security. He concluded that Google — which has a reputation for creatively evaluating would-be job candidates — was testing him. After all, a job in code breaking and security would make more sense for a mathematician, and what better way to test him than to hide a somewhat-crackable code in an email talking about “confidentially exploring opportunities”?
So, Harris factored the key. A few days later, he was able to send email as if he were anyone working at Google. To announce his triumph, he contacted Google co-founder Larry Page — by sending Page an email purporting to be from co-founder Sergey Brin. The email gave Page and Brin all the information they (or a designee) would need to contact Harris; that was, after all, the point of the exercise, right?
The 512-bit key wasn’t a test. It was an accident. Google’s email vulnerability was real, and Harris had accidentally cracked Google’s actual code.
As Wired further reported, Google never contacted Harris to thank him for discovering the vulnerability, nor did it offer him a job interview. They did, however, upgrade their public keys to 2,048 bits.
From the Archives: Camp AOL: The intern of sorts who never left.
Related: A book on email security for the “practical paranoid.”