August 11th, 2012
(HigginsBlog) – IT experts prove themselves useful idiots as a civil war escalates over the proper security tools activists and journalists should use to evade repressive governments.
Right now there is a battle raging on the Internet between IT security experts over how activists and journalists can safely communicate sensitive information without some oppressive totalitarian government killing them for it.
The battle centers on the recent fanfare given to CryptoCat's ability to let people communicate easily and safely via a browser-based instant-messenger chat system.
As these IT experts continue throwing insults at one another, disproving each other's claims and promoting technologies of their own choice, they are ALL accomplishing little more than proving themselves to be useful idiots.
For the TLDR crowd – and I will fully qualify this statement – there is no such thing as a safe and secure online communication system PERIOD. I don't give a shit which one of these 'experts' tells you otherwise, and if you want to be dumb enough to risk your life trusting one, don't say I didn't warn you when you are being hanged.
As these 'experts' continue touting one technology versus another, the bottom line is that they are all extremely incompetent, ignorant or both, and all wrong – there is simply nothing that will evade government actors, period, and these 'experts' are nothing more than useful idiots for those repressive governments when they tell you otherwise.
In case you haven't been following the drama and the insults being thrown back and forth, the gist of the attacks goes something like this:
Very loosely paraphrased
This so-called security expert on site X is a fucking idiot and is clearly setting you up to get stung by [insert choice of totalitarian government (US, China, Iran) here] by claiming [insert security/encryption technology here] is safe and secure.
[Insert security/encryption technology here] is not safe and secure, and here's how you'll get fucked if you think using it will protect you from [insert choice of totalitarian government (US, China, Iran) here].
Now instead of using that dumb fuckwit's technology you should use [insert attacking idiot's security/encryption technology of choice here], because it addresses the obvious flaws that the original fucknut failed to warn you about, hence making it safe and secure.
While the tone of the conversation is alarming, if there is anything good coming out of it, it is that these experts are finally revealing to the long-deceived public that security systems it once believed to be safe (such as SSL) really are anything but, and are susceptible to a wide range of attacks from 'bad state actors'.
But what is alarming is that as each expert debunks one security framework, he pushes another set of technologies, which of course gets exposed as vulnerable by yet another expert.
The danger in all of this is that an activist or journalist who didn't read the debunking of some touted technology may unknowingly use it believing it will stop the Egyptian government from imposing a life sentence, or the Iranians from handing out a death sentence, or the Americans from 'disappearing' them to some secret CIA torture prison for the remainder of their days on earth without a trial or jury.
As I will explain below all of these technologies are vulnerable… period.
But first, a quick overview of the shit-throwing that is going on.
Instead of taking you back to the start, this Wired article picks up a few attacks and counterattacks later:
Note – These are select excerpts from a long five-page article:
Security Researchers: How to Critique a Tech Story Without Being Arrogant and Exclusionary
Two Fridays ago, Wired published a 2,000 word feature story by Quinn Norton about Cryptocat, an online chat system that’s working to make encrypted chat as simple as loading a web page. Norton profiled its creator Nadim Kobeissi, the intimidation from U.S. officials he’s claimed to have faced, and the difficult technical challenges that such a program entails.
The piece delves into Kobeissi’s motivations, the initial pushback from the security community and his dedication to making a security tool that’s actually usable by someone outside the rarefied world of crypto geeks.
I was quite pleased the story gathered a lot of attention, including making it onto the front page of Reddit.
A few days later, Christopher Soghoian, a well-known and widely respected voice in the security community, penned a response entitled “Tech journalists: Stop hyping unproven security tools,” lambasting Wired’s story, laying it side-by-side with other sites’ coverage of security vaporware. He called it “bad journalism.”
As the editor of the piece, I’m going to disagree.
Clearly, Cryptocat is not always the ideal tool. So far nothing is. But that doesn’t mean it’s a bad tool or that writing about it is bad journalism.
Cryptocat is a very interesting addition to the suite of security tools available to the world, and is a refreshing breakthrough — thanks to its focus on user experience, something that is abysmally lacking in security tools like Tor and OTR.
While this post is a response to Soghoian’s critique, it’s not really directed at him — it’s meant for the portion of the security community his blast was emblematic of.
First, you’d have no indication from Soghoian’s critique that Quinn Norton is anything other than an overworked, technically illiterate blogger filling a quota by writing up press releases hyping the next big thing.
He writes: “When a PR person retained by a new hot security startup pitches you, consider approaching an independent security researcher or two for their thoughts. Even if it sounds great, please refrain from showering the tool with unqualified praise.
By all means, feel free to continue hyping the latest social-photo-geo-camera-dating app, but before you tell your readers that a new security tool will lead to the next Arab Spring or prevent the NSA from reading peoples’ emails, step back, take a deep breath, and pull the power cord from your computer.”
Norton has never written a story for Wired or any other publication based off a press release. That’s not the kind of thing she covers. She covers Occupy and Anonymous – penning thoughtful, informed, well-sourced pieces that often climb past 3000 words. Moreover, she’s been part of security/geek/electronic freedom communities for years, and for more than a decade has been an educator teaching people how to use their computers.
She uses more crypto and practices more vigilant opsec than any other reporter I’ve ever met (and for good reason). But you’ll not find any indication of that in Soghoian’s post. Instead, she gets dismissed because she’s made comments on Twitter criticizing the security community for its first-world white male privilege.
Moreover, Soghoian suggesting that if Quinn Norton ever wanted to write about encryption tools in the future, she ought to “step back, take a deep breath, and pull the power cord from your computer” isn’t just rude and obnoxious, it’s border-line sexist and an outright abuse of Soghoian’s place in the computer security world.
Norton asked Meredith Patterson, a talented and well-known security figure, who was initially critical of Cryptocat and who has reviewed the codebase, for comment:
“Browsers are huge, complex, multilayered beasts with lots of moving parts, and every last one of them implements at best some dialect of each of the many standards that a modern browser has to support,” said Meredith Patterson, a senior research scientist at Red Lambda. Patterson deals with security and cryptography on an architectural level in her research, and has reviewed and commented on Cryptocat.
Kobeissi faced criticism from the security community for even trying, but he persevered. Now more than a year later, “Cryptocat has significantly advanced the field of browser crypto,” he said with obvious pride. “We implemented elliptic curve cryptography, (and) a cryptographically secure random number generator in the browser,” along with creating a Cryptocat Chrome app to address the code delivery problem.
“I don’t think Nadim really knew what he was in for when he started this project, but although it got off to a bumpy start, he’s risen to the occasion admirably,” said Patterson.
But Kobeissi also knows that it’s equally important that Cryptocat be usable and pretty. Kobeissi wants Cryptocat to be something you want to use, not just need to. Encrypted chat tools have existed for years — but have largely stayed in the hands of geeks, who usually aren’t the ones most likely to need strong crypto. “Security is not just good crypto. It’s very important to have good crypto, and audit it. Security is not possible without (that), but security is equally impossible without making it accessible.”
Patterson agrees with Kobeissi’s approach. “As much as it drives all of us nerds batshit, J. Random internet user spends most if not all of her time in the browser, and generally doesn’t care to install even a separate email client — much less a separate chat client,” she said. “If you don’t go where the users live, you don’t get users. End of story.”
That, he says, was not made clear in Wired’s story until late, implying we wanted to hide it from users.
For the record, the headline on the story, This Cute Chat Site Could Save Your Life and Help Overthrow Your Government, and the placement of the section on the tool’s experimental nature, were my choices as the editor. I won’t apologize for the headline which, though bold, was also accurate. Moreover, Quinn’s first draft had the section that Soghoian thought came too late — about the tool being in its early stages and being vulnerable to certain attacks — starting in the ninth paragraph of a very long piece.
I made the decision to move it down, since the piece read much better in a different order. Leading with Kobeissi’s background put the software in a different context – the software came across as an expression of a worldview informed by Kobeissi’s life in Lebanon and the interrogations he says he’s endured at the U.S. border.
We weren’t hiding anything from readers — we write long stories and our readers read them.
Soghoian says we failed our readers and put their lives at risk because Cryptocat is made for the “tl;dr crowd”. For those who don’t know, tl;dr means “Too Long; Didn’t Read” and is used online to dismissively signal that a story is too long, but often it just demonstrates a person’s intellectual laziness.
It’s a very telling assumption about Wired readers and Cryptocat’s users. In Soghoian’s view, a simple encryption tool that focuses on user experience is meant for those who are lazy and stupid and who can’t be bothered to read a longish story. It’s a convenient way to elide longstanding criticism of security tools for being too difficult for even decently tech-savvy users to configure and install.
Speaking for a very vocal part of the crypto-community, he goes on to argue that it is dangerous to encourage people to use a tool that is safer than Twitter, Facebook, AIM or Google Chat, but not as safe as OTR.
It is by now well documented that humans engage in risk compensation. When we wear seatbelts, we drive faster. When we wear bike helmets, we drive closer. These safety technologies at least work.
We also engage in risk compensation with security software. When we think our communications are secure, we are probably more likely to say things that we wouldn’t if our calls were going over a telephone line or via Facebook. However, if the security software people are using is in fact insecure, then the users of the software are put in danger.
[...] For instance, Soghoian is one of the net’s biggest proponents of increased use of SSL (encountered on the web as https://) as a way to increase user safety.
But SSL is widely known to be vulnerable to the exact same man-in-the-middle attack as Cryptocat. Soghoian knows about this problem and has written extensively about the flaws in SSL, as have the security experts that he prefers to Patterson. In short, it’s not very hard for a business, an ISP or a country to muddle with SSL certificates so that it can spy on a user who thinks she is connecting securely to a site.
Clearly, a user who sees a lock icon in their browser might well say something more damning or explicit than they would if that icon weren’t there assuring them they are safe.
Despite that, Soghoian has been a leader in pushing the net’s biggest tech companies to adopt SSL by default, accusing them of putting users at risk by not doing so. In 2009, he published an open letter to then-Google CEO Eric Schmidt to implement HTTPS as the default for Gmail, Google Docs and Google Calendar. He later pushed Mozilla to turn Firefox’s search box’s default to encrypted Google search.
He and the security community are right – despite the known flaws in SSL – and Wired has covered their campaigns extensively.
However, nowhere in these efforts does Soghoian mention or address that a user who sees HTTPS might engage in riskier behavior. For example an employee might send an e-mail critical of their boss from a private webmail account accessed on a work computer — assuming that the communication is safe from prying eyes — when in fact the certificates installed in their browser have been modified by their employer so that employees can be spied on. Or an Iranian activist could login to Facebook over HTTPS, only to find later she’d been spied on.
But when it comes to another tool with known vulnerabilities — one created by an outsider to the clubby crypto community and one that’s written up by a woman and reviewed by a female security expert, Soghoian turns to the “risk compensation” argument.
That’s a shame because in the real world, most people don’t chose to be activists or to be in a position where encryption is necessary. It’s rarely a lifestyle and occupation choice, as it is for many in the U.S. They become activists or whistleblowers because something happens to them – or because there’s some larger, inescapable event that intrudes on their lives.
What people do is turn to the tools that are familiar and easy – Skype, Facebook, Twitter — not to installing PGP, TOR, Pidgin and OTR. Ideally, citizens-turned-activists will eventually learn to use those more complicated tools, but there’s a continuum.
[...] His vocal critics crowed over his capitulation. Soghoian asked me for a retraction. Then they complained that version 2 was still unsafe since Google could decide to deliver an infected plug-in.
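The certificate-substitution problem described in the excerpt above can be made concrete with a toy Python sketch. Everything here is illustrative: the "certificates" are made-up byte strings and real browsers validate full ASN.1 certificate chains, not raw bytes. The point is the difference between trusting any CA-signed certificate and pinning the one fingerprint a site is known to present:

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a certificate's raw (DER) bytes."""
    return hashlib.sha256(cert_der).hexdigest()

# Toy stand-ins for certificates (real ones are ASN.1 structures).
real_cert = b"CN=webmail.example.com, issuer=HonestCA"
forged_cert = b"CN=webmail.example.com, issuer=CoercedCA"

# A browser that trusts every CA in its store would accept the forged
# certificate too, because a coerced or rogue CA signed it for the same name.
# Pinning instead remembers the one fingerprint the site should present:
pinned = fingerprint(real_cert)

def pin_check(presented: bytes) -> bool:
    """Accept the connection only if the presented cert matches the pin."""
    return fingerprint(presented) == pinned

print(pin_check(real_cert))    # True  – the genuine certificate
print(pin_check(forged_cert))  # False – the substituted certificate is caught
```

A user watching only for the lock icon gets no such check: any certificate the browser's CA list accepts looks identical to the real one.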
The latest in the escalation of the civil war among these 'security experts' comes from Patrick Ball, via Wired as well, attacking the Wired article above.
In doing so, he continues the same cycle I paraphrased above, pointing out that Cryptocat is vulnerable because the host can be compromised, and then goes on to tout the security system used by his own company – believing, in either incompetence or ignorance, that the technology and those who use it are safe from repressive governments.
Note: these are excerpts from a two-page Wired article
When It Comes to Human Rights, There Are No Online Security Shortcuts
As one of the people who built Martus, an encrypted database used by thousands of human rights activists around the world, I routinely confront the needs of users who are not in wealthy countries, as well as the difficult problem that creating real, easy-to-use security poses. My thoughts here are focused on the democracy activists, citizen journalists, and human rights workers in the world’s toughest political environments. These are our Martus users, and my colleagues and friends. These are people who need security more than just about anyone: it can be literally a question of life and death.
One thing that makes that already difficult situation worse, though, is when otherwise well-informed people give bad advice about what is and is not secure. Unfortunately, an opinion piece at Wired recently espoused a view I find inaccurate, misleading, and potentially dangerous about using certain tools for human rights work. I’ll explain the problem here, and I’ll offer some questions you might ask about security applications in the future.
My concerns stem from a sharp debate over software called CryptoCat – a debate spurred largely by an admiring profile at Wired. CryptoCat is a web-based chat application which uses encryption to scramble the contents of a conversation, in theory resisting electronic snooping. The interesting twist is that CryptoCat does the crypto without using the easily-thwarted security built into browsers (called SSL), and without requiring the user to download and install additional software (like Pidgin and OffTheRecord).
Seems great, right?
Well, not so great. CryptoCat is one of a whole class of applications that rely on what’s called “host-based security”. The most famous tool in this group is Hushmail, an encrypted e-mail service that takes the same approach. Unfortunately, these tools are subject to a well-known attack. I’ll detail it below, but the short version is that if you use one of these applications, your security depends entirely on the security of the host. This means that in practice, CryptoCat is no more secure than Yahoo chat, and Hushmail is no more secure than Gmail. More generally, your security in a host-based encryption system is no better than having no crypto at all.
CryptoCat’s security is based on how it convinces your browser to do the encryption on your computer. To simplify, there are two parts to an encryption system: the encryption engine, and the key. The encryption engine is the software that does the actual work — everyone who uses the tool uses the same encryption engine. The second component is your key, which is unique to every user. The key holds, well, the key to your security. It must be kept secret, so only you have it. Again simplifying, the key consists of a tiny computer file and your passphrase. (If you want to know more about keys, see my earlier blog post on this topic).
In host-based systems, the host keeps the tiny computer file, but not your passphrase. The idea is that only you know your passphrase. In theory, the host cannot access your data because although they have part of your key, they don’t have your passphrase. When you login, the host sends the encryption engine to you in a computer program (called an applet) that runs inside your browser; the tiny computer file with part of your key is attached alongside the applet. All the encryption and decryption happens in your browser, on your computer. That means that the host only ever sees the encrypted data. Since only you have your passphrase, your data should be secure, even if the host wants to attack it.
But there’s a problem. If an attacker can get access to your key and your passphrase, all your encrypted data is now accessible to him. Remember that the host already has your key. All they need is your passphrase. So if the host wants to attack you, all they need to do is send you a special encryption engine that captures your passphrase the next time you use the service. As usual, it does all the encryption and decryption for you, right on your computer. But it also remembers your passphrase, and sends it secretly back to the host. This is the heart of the attack: if the server sends you a special applet that spies on you, all your encrypted data is now wide open.
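Ball's host-based attack can be sketched in a few lines of toy Python. Everything here is deliberately simplistic stand-in crypto – a bare hash for key derivation and an XOR stream as the "engine", which no real system should use – but the structure of the attack is as he describes: the malicious engine behaves identically to the honest one while quietly leaking the passphrase back to the host:

```python
import hashlib
from itertools import cycle

def derive_key(key_file: bytes, passphrase: str) -> bytes:
    # Toy key derivation: real systems use PBKDF2/scrypt, not a bare hash.
    return hashlib.sha256(key_file + passphrase.encode()).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy symmetric "engine" (XOR stream); the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, cycle(key)))

class HonestEngine:
    """The applet the host is supposed to serve."""
    def encrypt(self, key_file: bytes, passphrase: str, plaintext: bytes) -> bytes:
        return xor_cipher(derive_key(key_file, passphrase), plaintext)

class MaliciousEngine(HonestEngine):
    """The applet a coerced host serves instead: encrypts identically,
    but quietly records the passphrase for the host."""
    def __init__(self):
        self.exfiltrated = []  # stands in for a secret report back to the host

    def encrypt(self, key_file: bytes, passphrase: str, plaintext: bytes) -> bytes:
        self.exfiltrated.append(passphrase)  # the spying step
        return super().encrypt(key_file, passphrase, plaintext)

key_file = b"\x01\x02\x03\x04"   # the host already stores this half of the key
engine = MaliciousEngine()
ct = engine.encrypt(key_file, "hunter2", b"meet at the square at noon")

# The host now holds both halves of the key and can decrypt everything:
stolen_key = derive_key(key_file, engine.exfiltrated[0])
print(xor_cipher(stolen_key, ct))  # b'meet at the square at noon'
```

From the user's point of view nothing changed – the chat still encrypts and decrypts normally – which is exactly why the attack is so hard to detect.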
Why They’re All Useful Idiots
While Patrick Ball is entirely right to point out the vulnerabilities of 'host-based' systems, he and every other security expert attacking Cryptocat as insecure for this reason are proving themselves to be useful idiots when they advocate other security systems as safe, when those too are downloaded off the internet.
For starters, just because they are downloaded only a single time, installed on the user's computer, and executed locally in each instance afterward does not discount the fact that a repressive government can compromise that original download.
Yet a version of Cryptocat allows users to install a plugin in their browser – from Google's server – after which the Cryptocat code is executed from that single download; meanwhile these idiots push that as insecure while saying their own systems – which are also downloaded – are safe.
Then there is the fact that there is a wide range of attack vectors to compromise even code that is already installed on a local computer – I mean, just look at the security flaws riddling the Windows operating system.
One could argue 'but you can download the original open source code and compile it yourself', but let's get real. Nothing says that source code isn't going to be intercepted and modified, or that the source code itself is free from attack.
Even then that source code needs to be compiled, and what stops an oppressive government from using nasty little things like National Security Letters to secretly put code into the compiler that injects vulnerabilities into the open-source software it builds?
Or an ISP can be ordered to forward your internet traffic through a government-controlled router, which can then do all kinds of nasty things – like feed you malicious code updates for your operating system and other software.
While you may think this is far-fetched, you only need to look at what the US government did with the notorious Stuxnet and Flame viruses.
The Flame virus in fact replicated itself by hijacking Windows Update, and a bad government actor certainly has the ability to do the same with Linux/Unix package updates or whatever software you have on your system.
The only way to prevent this would be to never update anything on your computer at all and with things like planned obsolescence that is highly unlikely.
Even worse, it is highly likely that government back doors are all over your computer and electronic equipment to begin with.
With excuses such as national security in play, it is not unimaginable that your Windows NDIS driver (which handles all network traffic), your keyboard drivers, your monitor drivers or whatever else have all kinds of 'zero day' vulnerabilities (read: government backdoors purposefully installed on the order of a National Security Letter).
We are not talking about things that are possible only in theory here, either – these attacks are possible in practice using openly available technologies.
These are things a government merely has to choose to implement, and when your life hangs in the balance and their excuse is national security, you had just better think twice.
Do you really trust that the microchips in your computer or smart phone don’t have hidden functionality in them to begin with?
Encryption does nothing if the unencrypted data can be intercepted before it is even encrypted.
And even supposing only the encrypted data exists and is accessible, it is well known that any encryption system can be cracked.
Seriously, it's not a matter of whether a system is uncrackable; encryption systems are not designed to be uncrackable.
Instead, they are designed around whether it is practical to crack them, which rests on ASSUMPTIONS about how long cracking would take given the technology commonly known to exist, and on practical estimates of how fast that technology will advance in the foreseeable future.
For example, the number of bits used in RSA encryption has historically been chosen so that a key could not be cracked in less than 10,000 years, given projections of how fast technology will advance.
Yet the number of bits used has been repeatedly increased, because the previous estimates were not only found to be far too lacking – keys of those sizes were themselves cracked.
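To see how completely those estimates hinge on the assumed attacker, here is a toy back-of-the-envelope calculation. It models exhaustive search over a symmetric key space (RSA's strength actually rests on factoring, so the arithmetic there differs), and the guess rates are made-up illustrations, but the assumption-driven logic is the same:

```python
def years_to_exhaust(key_bits: int, guesses_per_second: float) -> float:
    """Years needed to try every key of the given size at a given guess rate."""
    seconds = 2 ** key_bits / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# The "security margin" is purely a function of what you assume the attacker
# can do. DES's 56-bit keys looked safe until hardware outran the assumptions.
for speed in (1e9, 1e15, 1e21):  # hypothetical: one CPU .. cluster .. beyond
    print(f"{speed:.0e} guesses/s:",
          f"56-bit: {years_to_exhaust(56, speed):.2e} yr,",
          f"128-bit: {years_to_exhaust(128, speed):.2e} yr")
```

At a billion guesses per second a 56-bit key falls in a couple of years; change the assumption by a few orders of magnitude and the numbers collapse accordingly. The key size never changed – only the guess about the attacker did.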
Bottom line: a government actor can crack Tor, OTR, AES, or anything else that you think they can't.
If you think you are so smart and I am wrong then go right ahead and bet your life on it.
I could literally write an entire reference manual on how – given the government's power and resources – all of these systems that people swear up and down are secure can be cracked.
Source: Higgins Blog