Tuesday, 9 February 2016

The hype about Crypter is misplaced and overall dangerous

My problem with cryptography boils down to this: every once in a while, someone comes along claiming that they have a system or software that will revolutionize everything. Naturally, a media frenzy ensues with minimal fact checking. The security industry then catches wind of it, and it is quickly and thoroughly demonstrated to be a pile of vaporware garbage. Then we collectively discover that the enterprising individual has also managed to secure a hefty amount of funding and has spent most of it on a swank office and catered lunches.

Recently, several media outlets gave praise to a project by a student at Sussex University named Max Mitchell. This project, called "Crypter", is a tool to enable encrypted conversations using Facebook's chat service.

Crypter's home page links directly to the flattering headlines it received.
Why and how did Max Mitchell get so much coverage for his encryption tool? Likely it stems from this statement in the BGR article:
Imagine you’re Edward Snowden with a Facebook profile. You text an ace reporter at the Guardian and have a new bit of information to share: you totally found a great new coffee place in the heart of Moscow where the CIA can’t poison you with thallium. How do you send that news securely over Facebook Messenger?
On the website, they do in fact use Edward Snowden in their examples.

Edward Snowden is catnip to the media, so it appears that borrowing his name was enough to get Max attention.

It got further press when Facebook reportedly "blocked" the application--in reality, something changed that Max hadn't accounted for. Of course, RT picked up on this and erroneously quoted my sarcastic tweet as support.

Too bad that this entire application is complete garbage and doesn't actually protect anyone.

Key exchange or lack thereof

Crypter's developer mentions that "[its] extension locally encrypts and decrypts your Facebook messages using AES encryption along with a preset password" and "[both parties] must have the same password to ensure you can encrypt and decrypt messages correctly". However, Max's proposal for a key exchange is very, very troubling.

There are entire algorithms devoted to exchanging keys (Diffie-Hellman, for example), so the idea that a static key needs to be exchanged by some other means is preposterous--and someone like Edward Snowden would very likely agree. Exchanging a static key for a symmetric cipher like AES over any channel other than an established key-exchange protocol is going to end in a mess, simply because users will get it wrong.

To me, this is the most frustrating aspect of Crypter: it is really just a toy and nothing more, whatever the author and the media may think. Without a secure handshake between two or more parties, it will never be secure.

I ended up asking them about this problem on their Facebook page.

Crypter's website does mention www.⊗.cf (also known as pulverize.xyz), but it's in the "about" section of the page rather than the "how to use" part, meaning that someone may miss the service entirely and send the key off by some other means--such as simply sending it in plaintext beforehand. Pulverize is also riddled with problems, so even if you were to use it as the author suggests, calling it an acceptable key-exchange method is laughable.

To describe Pulverize, it's a service written by Max himself where you enter some text into a field and it then generates a link that you can exchange with someone else. When someone else views it, the message is destroyed on the server's end and it displays the text itself. If one attempts to view the page again, they'll receive a message that it doesn't exist. To protect the page from being scraped by an automated service, it uses a Google Captcha service to determine whether or not you're a human being.

In theory, this idea could work: once the link is sent to the other party and they retrieve the self-destructing message, it should exist no more. However, it fails to account for two things: do you trust the author to actually (and properly) destroy the key details, and do you believe that Facebook or some other intercepting party won't get to it first?

Being that Pulverize is open-source, it's easy to take a look at their source code and issue tracker. Here's how it works:

  1. It generates a fixed-length string (5 characters) with an insecure random number generator to serve as the lookup key
  2. It writes a PHP file to the directory containing details about the secret message
  3. The data contained within the PHP file is stripped of all HTML tags, meaning that any special characters in your AES key simply won't show up
  4. When the link is opened by another party and the captcha is confirmed, it proceeds to not-quite-delete the file
So far we have a potentially predictable URL and data that has been stripped of any characters removed by PHP's strip_tags function. There are no assurances that the file is removed either: PHP's unlink function merely removes the directory entry pointing at the file, not the file data itself. So in theory, if the server running Pulverize were seized, the data might be recoverable from the hard drive--and that's also assuming the code we see on GitHub is what's actually deployed, because how can you trust that Max is even attempting to delete data to begin with?
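Setting the captcha aside for a moment, that 5-character lookup key deserves a back-of-the-envelope check. Assuming an alphanumeric character set (the exact set depends on Pulverize's generator, but the point holds for anything of similar size):

```javascript
// How big is a 5-character lookup key space, really?
const charset = 62; // a-z, A-Z, 0-9 (assumed)
const length = 5;
const keyspace = Math.pow(charset, length);
console.log(keyspace); // 916132832 -- under a billion possible URLs

// At a modest 10,000 guesses per second, exhausting the whole space takes:
const seconds = keyspace / 10000;
console.log((seconds / 86400).toFixed(1), 'days'); // 1.1 days
```

A properly random 128-bit token would make this enumeration attack a non-starter; a 5-character string from a weak generator does not.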

To add to all this, prior versions of Pulverize (as in, up until last Friday) contained a remote code execution (RCE) vulnerability. Before the aforementioned strip_tags call was added to sanitize input, one could simply submit the following as the text to be encoded:
<?php phpinfo()?>
Instead of seeing the above line, you'd get the output of the phpinfo function--meaning whatever PHP code you wanted to execute would execute. JavaScript and HTML were injectable too.

For the key exchange, this would have meant that we could have read keys before anyone else without having to worry about them being "destroyed". This sort of RCE should have been spotted from a mile away and yet here we are with Max promoting a crypto tool alongside another tool that is completely inadequate.

Fortunately this has been fixed, but we're still left with a site that limits which characters are acceptable and doesn't entirely scrub data from the server. There are far, far better solutions than this, and I get the impression that Max hasn't taken the time to read up on what he's trying to achieve.

Facebook as an adversary and a lack of verification

So let's take a step back, assume for a moment that we can trust Pulverize to destroy all records of the key, and look at an easy scenario that Crypter likes to portray: Facebook as the adversary.

Ignoring the fact that if Facebook was an adversary you wouldn't be using it in the first place, how do you know that Facebook isn't intercepting the key exchange itself?

Here are things that Facebook is in control of here:
  1. All messages going back and forth between all involved parties
  2. The formatting of the messages coming in and going out
  3. The collection and storage of the conversation
So what's to stop Facebook from the following:
  • Intercepting a key exchange using Pulverize and then feeding a different link to the other party
  • Interfering with the application by changing the page so you think that you're using Crypter when in fact you're using something Facebook is serving up
  • Exploiting the reduced character set available for your key--and thus the shrunken key space--to brute-force it
With the first point, some human intervention is required, as Pulverize makes you pass an image-based captcha both to create and to retrieve encoded text. So yes, it would be difficult for Facebook to automatically retrieve the key without either party becoming aware--but what if a human decided to intervene?

We have no verification that the other party is receiving the correct key. In fact, we have no verification that the Pulverize URL the other party receives is the one that was actually sent. If we're dealing with a situation where Facebook has been coerced into intercepting traffic between two parties, what's to stop them from going through this process?
  1. Intercept and monitor all messages using a human actor
  2. Wait for a Pulverize URL to come in, grab that URL, note the key, make a new Pulverize URL with said key, and then pass the message on
  3. Intercept and decode all messages using the preset key
Since the Pulverize URLs would supposedly be gone and the key is consistent, there is no way to actually confirm that an adversary does or does not have access to the messages.

To add to this, how Crypter goes about enciphering and deciphering text is pretty scary.

Here's the encryption process:
var encrypt = tag+CryptoJS.AES.encrypt(messageContent, getPass($(this)))+tag;
Here's the decryption process:
var decrypt = CryptoJS.AES.decrypt($(this).attr("id"), getPass($(this))).toString(CryptoJS.enc.Utf8);
What's missing here? Message verification: Max has opted to simply encipher and decipher the text without verifying that the message in question is actually what was sent. There are other problems on top of this (how the key is derived and used, for one), but fundamentally there is no integrity protection on the messages at all.

Crypter is garbage and should not be used. I've seen bad things before and have ranted on similar topics, but this takes the cake considering the coverage it got.

Closing off and venting

How does Max know that both parties are getting the messages that they intend to get? Of course, this is what he believes:
Talking encrypted with no password (impossible to do with Crypter) is more secure and private than having no password. We doubt Facebook bots are able to decrypt encrypted text (even if the encrypted text doesn't have a password). We built crypter to *help* with internet security similarly to PGP's (*pretty good* encryption) philosophy but our main ethos is actually privacy. We want to make it harder for facebook and the NSA to know what people are saying to each other over Facebook (see more about us on our website www.crypter.co.uk). We are not claiming that this is a 'bullet proof' application and we don't believe we are "reinventing the wheel". We just see it as having put two things together - Encryption and Facebook.
And what does the website say?
It’s human nature to want privacy. In light of Edward Snowden’s Global Surveillance disclosures, people don’t want their messages stored and analysed regardless of whether their topic of discussion is illegal. Crypter can put millions of people at ease.
So how do you square "putting millions of people at ease" with "we are not claiming that this is bullet-proof"?

Of course, TechCrunch sees it this way:

I’d have a hard time trusting my secret tiramisu recipe to any service. Mitchell has created something that is nearly invisible and seems like it might be a good one-off solution to secure communications between friends, reporters, and secret dessert lovers.
The thing that Max Mitchell and the media outlets seem to overlook is the fact that if an adversary is after you, Facebook and similar services will never be able to provide a secure platform to communicate over. If Edward Snowden were using Crypter over Facebook chat, I can promise you that at some point, somewhere, the conversations would be compromised.

There are better solutions out there, too. I accused Max of reinventing the wheel for good reason: why not use OTR? A native JavaScript implementation has existed for years and is actively developed. While I lament anything JS-based, it would have been far better than what Max has gone and implemented.

If Max had been paying attention to the whole Edward Snowden fiasco, he'd have known about Lavabit: a secure e-mail service that was centralised, much like Pulverize is, and was pressured into shutting down when the authorities came looking for further information.

I hope that Max sees the light and stops before he does any further damage.


  1. Also — what's to stop Facebook from interfering with this tool client-side in javascript?

    1. Really, not much if they really wanted.
      It's been hypothesized (and maybe proven, but I'm not sure) that partial statuses and deleted statuses are saved, so it wouldn't be too hard to imagine the same concept but with messages.

  2. My thought is: why do they have to keep the key the same at all? If you are going to man-in-the-middle, why doesn't Facebook just edit the key exchange with its own link and decrypt and re-encrypt as needed?