OpenBSD founder Theo de Raadt has created a fork of OpenSSL, the widely used open source cryptographic software library that contained the notorious Heartbleed security vulnerability.
OpenSSL has suffered from a lack of funding and code contributions despite being used in websites and products by many of the world’s biggest and richest corporations.
The decision to fork OpenSSL is bound to be controversial given that OpenSSL powers hundreds of thousands of Web servers. When asked why he wanted to start over instead of helping to make OpenSSL better, de Raadt said the existing code is too much of a mess.
“Our group removed half of the OpenSSL source tree in a week. It was discarded leftovers,” de Raadt told Ars in an e-mail. “The Open Source model depends [on] people being able to read the code. It depends on clarity. That is not a clear code base, because their community does not appear to care about clarity. Obviously, when such cruft builds up, there is a cultural gap. I did not make this decision… in our larger development group, it made itself.”
The LibreSSL code base is on OpenBSD.org, and the project is supported financially by the OpenBSD Foundation and the OpenBSD Project. LibreSSL has a bare-bones website that is intentionally unappealing.
“This page scientifically designed to annoy web hipsters,” the site says. “Donate now to stop the Comic Sans and Blink Tags.” In explaining the decision to fork, the site links to a YouTube video of a cover of the Twisted Sister song “We’re Not Gonna Take It.”
April 23, 2014
April 16, 2014
You would have thought this would have sunk in by now. The fact that it hasn’t shows what an extraordinary machine the internet is — quite different to any technology that has gone before it. When the Lovebug struck, few of us lived our lives online. Back then we banked in branches, shopped in shops, met friends and lovers in the pub and obtained jobs by posting CVs. Tweeting was for the birds. Cyberspace was marginal. Now, for billions, the online world is their lives. But there is a problem. Only a tiny, tiny percentage of the people who use the internet have even the faintest clue about how any of it works. “SSL”, for instance, stands for “Secure Sockets Layer”.
I looked it up and sort of understood it — for about five minutes. While most drivers have at least a notion of how an engine works (something about petrol exploding in cylinders and making pistons go up and down and so forth) the very language of the internet — “domain names” and “DNS codes”, endless “protocols” and so forth — is arcane, exclusive; it is, in fact, the language of magic. For all intents and purposes the internet is run by wizards.
And the trouble with letting wizards run things is that when things go wrong we are at their mercy. The world spends several tens of billions of pounds a year on anti-malware programs, which we are exhorted to buy lest the walls of our digital castles collapse around us. Making security software is a huge industry, and whenever there is a problem — either caused by viruses or by a glitch like Heartbleed — the internet security companies rush to be quoted in the media. And guess what, their message is never “keep calm and carry on”. As Professor Ross Anderson of Cambridge University says: “Almost all the cost of cybercrime is the cost of anticipation.”
Michael Hanlon, “Relax, Mumsnet users: don’t lose sleep over Heartbleed hysteria”, Telegraph, 2014-04-16
April 11, 2014
Some people are claiming that the Heartbleed bug proves that open source software is a failure. ESR quickly addresses that idiotic claim:
I actually chuckled when I read the rumor that the few anti-open-source advocates still standing were crowing about the Heartbleed bug, because I’ve seen this movie before after every serious security flap in an open-source tool. The script, which includes a bunch of people indignantly exclaiming that many-eyeballs is useless because bug X lurked in a dusty corner for Y months, is so predictable that I can anticipate a lot of the lines.
The mistake being made here is a classic example of Frederic Bastiat’s “things seen versus things unseen”. Critics of Linus’s Law overweight the bug they can see and underweight the high probability that equivalently positioned closed-source security flaws they can’t see are actually far worse, just so far undiscovered.
That’s how it seems to go whenever we get a hint of the defect rate inside closed-source blobs, anyway. As a very pertinent example, in the last couple months I’ve learned some things about the security-defect density in proprietary firmware on residential and small business Internet routers that would absolutely curl your hair. It’s far, far worse than most people understand out there.
Ironically enough this will happen precisely because the open-source process is working … while, elsewhere, bugs that are far worse lurk in closed-source router firmware. Things seen vs. things unseen…
Returning to Heartbleed, one thing conspicuously missing from the downshouting against OpenSSL is any pointer to an implementation that is known to have a lower defect rate over time. This is for the very good reason that no such empirically-better implementation exists. What is the defect history on proprietary SSL/TLS blobs out there? We don’t know; the vendors aren’t saying. And we can’t even estimate the quality of their code, because we can’t audit it.
The response to the Heartbleed bug illustrates another huge advantage of open source: how rapidly we can push fixes. The repair for my Linux systems was a push-one-button fix less than two days after the bug hit the news. Proprietary-software customers will be lucky to see a fix within two months, and all too many of them will never see a fix patch.
Update: There are lots of sites offering tools to test whether a given site is vulnerable to the Heartbleed bug, but you need to step carefully there, as there’s a thin line between what’s legal in some countries and what counts as an illegal break-in attempt:
Websites and tools that have sprung up to check whether servers are vulnerable to OpenSSL’s mega-vulnerability Heartbleed have thrown up anomalies in computer crime law on both sides of the Atlantic.
Both the US Computer Fraud and Abuse Act and its UK equivalent the Computer Misuse Act make it an offence to test the security of third-party websites without permission.
Testing to see what version of OpenSSL a site is running, and whether it also supports the vulnerable Heartbeat protocol, would be legal. But doing anything more active — without permission from website owners — would take security researchers onto the wrong side of the law.
And you shouldn’t just rush out and change all your passwords right now (you’ll probably need to do it, but the timing matters):
Heartbleed is a catastrophic bug in widely used OpenSSL that creates a means for attackers to lift passwords, crypto-keys and other sensitive data from the memory of secure server software, 64KB at a time. The mega-vulnerability was patched earlier this week, and software should be updated to use the new version, 1.0.1g. But to fully clean up the problem, admins of at-risk servers should generate new public-private key pairs, destroy their session cookies, and update their SSL certificates before telling users to change every potentially compromised password on the vulnerable systems.
April 9, 2014
Update: In case you’re not concerned about the seriousness of this issue, The Register‘s John Leyden would like you to think again.
The catastrophic crypto-key and password vulnerability in OpenSSL affects far more than web servers, with everything from routers to smartphones also affected.
The so-called “Heartbleed” vulnerability (CVE-2014-0160) can be exploited to extract information from servers running a vulnerable version of OpenSSL, and this includes email servers and Android smartphones as well as routers.
Hackers could potentially gain access to private encryption keys before using this information to decipher the encrypted traffic to and from vulnerable websites.
Web sites including Yahoo!, Flickr and OpenSSL were among the many left vulnerable to the megabug that exposed encryption keys, passwords and other sensitive information.
Preliminary tests suggested 47 of the 1,000 largest sites were vulnerable to Heartbleed, and that’s only among the fewer than half that support SSL or HTTPS at all. Many of the affected sites – including Yahoo! – have since patched the vulnerability. Even so, security experts – such as Graham Cluley – remain concerned.
OpenSSL is a widely used encryption library that is a key component of technology that enables secure (https) website connections.
The bug exists in the OpenSSL 1.0.1 source code and stems from coding flaws in a fairly new feature known as the TLS Heartbeat Extension. “TLS heartbeats are used as ‘keep alive’ packets so that the ends of an encrypted connection can agree to keep the session open even when they don’t have any official data to exchange,” explains security veteran Paul Ducklin in a post on Sophos’ Naked Security blog.
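The flaw itself is easy to state: the handler echoes back as many payload bytes as the request claims to contain, without checking how many it actually received. Here is a toy Python model of that vulnerable pattern (the real bug is a missing bounds check before a memcpy in OpenSSL's C code; the names and the "adjacent memory" here are illustrative):

```python
import struct

def handle_heartbeat(record: bytes, adjacent_memory: bytes) -> bytes:
    """Simplified model of the flawed TLS Heartbeat handler.

    Record layout: 1-byte message type, 2-byte claimed payload length,
    then the payload itself. The bug: the handler trusts the claimed
    length instead of checking it against the bytes actually received.
    """
    msg_type = record[0]
    assert msg_type == 0x01  # heartbeat_request
    (claimed_len,) = struct.unpack(">H", record[1:3])
    # In the real C code this is a memcpy that runs off the end of the
    # request into neighbouring heap memory; we model that neighbouring
    # memory by concatenating it onto the record.
    buffer = record + adjacent_memory
    return buffer[3:3 + claimed_len]

# A 1-byte payload that claims to be 16 KB long leaks adjacent memory:
adjacent = b"-----BEGIN RSA PRIVATE KEY----- ..."
request = bytes([0x01]) + struct.pack(">H", 16384) + b"A"
leaked = handle_heartbeat(request, adjacent)
assert adjacent in leaked  # the response echoes data the client never sent
```

The fix was exactly the check this sketch omits: discard any heartbeat request whose claimed payload length exceeds the record actually received.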
The Heartbleed vulnerability in the OpenSSL cryptographic library might be exploited to reveal contents of secured communication exchanges. The same flaw might also be used to lift SSL keys.
This means that sites could still be vulnerable to attacks after installing the patches in cases where a private key has been stolen. Sites therefore need to revoke exposed keys, reissue new keys, and invalidate all session keys and session cookies.
“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.
Half a million sites are vulnerable, including my own. Test your vulnerability here.
The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.
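The re-keying part of that cleanup can be sketched with the standard `openssl` command-line tool. This is a hedged outline only: the file names and common name are placeholders, and getting the CSR signed plus revoking the old certificate happens at your CA, not locally.

```python
import subprocess

KEY, CSR = "server-new.key", "server-new.csr"

steps = [
    # 1. Generate a fresh private key; the old one may already be in an
    #    attacker's hands, so patching alone is not enough.
    ["openssl", "genpkey", "-algorithm", "RSA",
     "-pkeyopt", "rsa_keygen_bits:2048", "-out", KEY],
    # 2. Build a certificate signing request for the new key. Submit it
    #    to your CA for a replacement certificate and revoke the old one.
    ["openssl", "req", "-new", "-key", KEY,
     "-subj", "/CN=example.com", "-out", CSR],
]

def run_rekey(runner=subprocess.run):
    """Execute the re-keying steps; the runner is injectable for testing."""
    for cmd in steps:
        runner(cmd, check=True)
```

Only after the replacement certificate is installed and old sessions are invalidated should users change passwords; a password changed over a still-compromised key is itself exposed.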
At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.
April 7, 2014
March 26, 2014
BBC News reports that — once again — some of the US Secret Service agents tasked with protecting the President have come to the attention of the press for reasons other than their assigned mission:
Three US Secret Service agents tasked with protecting President Barack Obama in the Netherlands have been sent home for “disciplinary reasons”.
The Washington Post reported that one was found drunk and passed out in the hallway of an Amsterdam hotel.
A Secret Service spokesman declined to give details but said the three had been put on administrative leave pending an investigation.
The service has been trying to rebuild its reputation after previous scandals.
In 2013 two agents were removed from President Obama’s security detail amid allegations of sexual harassment and misconduct.
And in 2012 several agents were dismissed following allegations that they hired prostitutes while in Cartagena, Colombia.
Secret Service spokesman Ed Donovan said the latest incident happened before President Obama’s arrival in the Netherlands on Monday for a nuclear security summit.
He said the three had been sent home for “disciplinary reasons” but declined to elaborate.
Mr Donovan added that the president’s security had not been compromised in any way.
February 9, 2014
Driving your car anywhere soon? Got anti-hacking gear installed?
Spanish hackers have been showing off their latest car-hacking creation: a circuit board using untraceable, off-the-shelf parts worth $20 that can give wireless access to the car’s controls while it’s on the road.
The device, which will be shown off at next month’s Black Hat Asia hacking conference, uses the Controller Area Network (CAN) ports car manufacturers build into their engines for computer-system checks. Once assembled, the smartphone-sized device can be plugged in under some vehicles, or inside the bonnet of other models, and give the hackers remote access to control systems.
“A car is a mini network,” security researcher Alberto Garcia Illera told Forbes. “And right now there’s no security implemented.”
Illera and fellow security researcher Javier Vazquez-Vidal said that they had tested the CAN Hacking Tool (CHT) successfully on four popular makes of cars and had been able to apply the emergency brakes while the car was in motion, affect the steering, turn off the headlights, or set off the car alarm.
The device currently only works via Bluetooth, but the team says that they will have a GSM version ready by the time the conference starts. This would allow remote control of a target car from much greater distances, and more technical details of the CHT will be given out at the conference.
January 21, 2014
In The Register, John Leyden discusses a new start-up’s plans for defending websites against hackers:
Startup Shape Security is re-appropriating a favourite tactic of malware writers in developing a technology to protect websites against automated hacking attacks.
Trojan authors commonly obfuscate their code to frustrate reverse engineers at security firms. Former staffers from Google, VMware and Mozilla (among others) have created a network security appliance which takes a similar approach (dubbed real-time polymorphism) towards defending websites against breaches — by hobbling the capability of malware, bots, and other scripted attacks to interact with web applications.
Polymorphic code was originally used by malicious software to rewrite its own code every time a new machine was infected. Shape has invented patent-pending technology that is able to implement “real-time polymorphism” — or dynamically changing code — on any website. By doing this, it removes the static elements which botnets and malware depend on for their attacks.
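Shape hasn’t published its implementation, but the idea is easy to illustrate: rewrite the stable identifiers a bot depends on (here, HTML form field names) into per-response random aliases that the server can still map back. A toy sketch, assuming a trivial page; the real appliance presumably does this far more thoroughly:

```python
import re
import secrets

def polymorph(html: str, session: dict) -> str:
    """Replace each form field name with a per-response random alias,
    remembering the mapping so the server can translate submissions
    back. Illustrative only; Shape's actual product is proprietary."""
    def swap(match):
        alias = "f_" + secrets.token_hex(8)
        session[alias] = match.group(1)  # alias -> real field name
        return 'name="%s"' % alias
    return re.sub(r'name="([^"]+)"', swap, html)

session = {}
page = '<input name="username"><input name="password">'
served = polymorph(page, session)
# A scripted attack looking for name="password" no longer finds it,
# but the server can still resolve the submitted aliases:
assert 'name="password"' not in served
assert sorted(session.values()) == ["password", "username"]
```

Because the aliases change on every response, a bot cannot simply record them once and replay them, which is the static-element dependency the article describes removing.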
January 18, 2014
Safe manufacturers generally ship their products with a factory-standard combination. Many people fail to change them once the safe is in use:
In England many years ago, chatting with a locksmith while he worked, I learned the following thing: One of the country’s leading manufacturers of safes shipped all its products set to a default opening combination of 102030, and a high proportion of customers never reset it.
He: “If I need to open a Chubb safe, it’s the first thing I try. You’d be surprised how often it works.”
This came to mind when I was reading the story about Kennedy-era launch codes for our nuclear missiles:
…The Strategic Air Command greatly resented [Defense Secretary Robert] McNamara’s presence and almost as soon as he left, the code to launch the missile’s [sic], all 50 of them, was set to 00000000.
I use a random-string generator for my passwords and change them often. I guess safeguarding my Netflix account is more important than preventing a nuclear holocaust.
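For what it’s worth, a random-string generator like the one I use is a few lines with Python’s standard library (use `secrets`, not `random`, for anything security-related):

```python
import secrets
import string

def random_password(length: int = 20) -> str:
    """Draw characters from a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = random_password()
assert len(pw) == 20
```

Twenty characters from a 94-symbol alphabet is roughly 131 bits of entropy — rather more than eight zeros.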
December 12, 2013
Charles Stross has a few adrenaline shots for your paranoia gland this morning:
The internet of things may be coming to us all faster and harder than we’d like.
Reports coming out of Russia suggest that some Chinese domestic appliances, notably kettles, come kitted out with malware — in the shape of small embedded computers that leech off the mains power to the device. The covert computational passenger hunts for unsecured wifi networks, connects to them, and joins a spam and malware pushing botnet. The theory is that a home computer user might eventually twig if their PC is a zombie, but who looks inside the base of their electric kettle, or the casing of their toaster? We tend to forget that the Raspberry Pi is as powerful as an early 90s UNIX server or a late 90s desktop; it costs £25, is the size of a credit card, and runs off a 5 watt USB power source. And there are cheaper, less competent small computers out there. Building them into kettles is a stroke of genius for a budding crime lord looking to build a covert botnet.
But that’s not what I’m here to talk about.
I’m dozy and slow on the uptake: I should have been all over this years ago.
And it’s not just keyboards. It’s ebook readers. Flashlights. Not your smartphone, but the removable battery in your smartphone. (Have you noticed it running down just a little bit faster?) Your toaster and your kettle are just the start. Could your electric blanket be spying on you? Koomey’s law is going to keep pushing the power consumption of our devices down even after Moore’s law grinds to a halt: and once Moore’s law ends, the only way forward is to commoditize the product of those ultimate fab lines, and churn out chips for pennies. In another decade, we’ll have embedded computers running some flavour of Linux where today we have smart inventory control tags — any item in a shop that costs more than about £50, basically. Some of those inventory control tags will be watching and listening to us; and some of their siblings will, repurposed, be piggy-backing a ride home and casing the joint.
The possibilities are endless: it’s the dark side of the internet of things. If you’ll excuse me now, I’ve got to go wallpaper my apartment in tinfoil …
November 1, 2013
Dan Goodin posted a scary Halloween tale at Ars Technica yesterday … at least, I’m hoping it’s just a scary story for the season:
In the intervening three years, Ruiu said, the infections have persisted, almost like a strain of bacteria that’s able to survive extreme antibiotic therapies. Within hours or weeks of wiping an infected computer clean, the odd behavior would return. The most visible sign of contamination is a machine’s inability to boot off a CD, but other, more subtle behaviors can be observed when using tools such as Process Monitor, which is designed for troubleshooting and forensic investigations.
Another intriguing characteristic: in addition to jumping “airgaps” designed to isolate infected or sensitive machines from all other networked computers, the malware seems to have self-healing capabilities.
“We had an air-gapped computer that just had its [firmware] BIOS reflashed, a fresh disk drive installed, and zero data on it, installed from a Windows system CD,” Ruiu said. “At one point, we were editing some of the components and our registry editor got disabled. It was like: wait a minute, how can that happen? How can the machine react and attack the software that we’re using to attack it? This is an air-gapped machine and all of a sudden the search function in the registry editor stopped working when we were using it to search for their keys.”
Over the past two weeks, Ruiu has taken to Twitter, Facebook, and Google Plus to document his investigative odyssey and share a theory that has captured the attention of some of the world’s foremost security experts. The malware, Ruiu believes, is transmitted though USB drives to infect the lowest levels of computer hardware. With the ability to target a computer’s Basic Input/Output System (BIOS), Unified Extensible Firmware Interface (UEFI), and possibly other firmware standards, the malware can attack a wide variety of platforms, escape common forms of detection, and survive most attempts to eradicate it.
But the story gets stranger still. In posts here, here, and here, Ruiu posited another theory that sounds like something from the screenplay of a post-apocalyptic movie: “badBIOS,” as Ruiu dubbed the malware, has the ability to use high-frequency transmissions passed between computer speakers and microphones to bridge airgaps.
October 29, 2013
Adam Penenberg had himself investigated in the late 1990s and wrote that up for Forbes. This time around, he asked Nick Percoco to do the same thing, and was quite weirded out by the experience:
It’s my first class of the semester at New York University. I’m discussing the evils of plagiarism and falsifying sources with 11 graduate journalism students when, without warning, my computer freezes. I fruitlessly tap on the keyboard as my laptop takes on a life of its own and reboots. Seconds later the screen flashes a message. To receive the four-digit code I need to unlock it I’ll have to dial a number with a 312 area code. Then my iPhone, set on vibrate and sitting idly on the table, beeps madly.
I’m being hacked — and only have myself to blame.
Two months earlier I challenged Nicholas Percoco, senior vice president of SpiderLabs, the advanced research and ethical hacking team at Trustwave, to perform a personal “pen-test,” industry-speak for “penetration test.” The idea grew out of a cover story I wrote for Forbes some 14 years earlier, when I retained a private detective to investigate me, starting with just my byline. In a week he pulled up an astonishing amount of information, everything from my social security number and mother’s maiden name to long distance phone records, including who I called and for how long, my rent, bank accounts, stock holdings, and utility bills.
A decade and a half later, and given the recent Edward Snowden-fueled brouhaha over the National Security Agency’s snooping on Americans, I wondered how much had changed. Today, about 250 million Americans are on the Internet, and spend an average of 23 hours a week online and texting, with 27 percent of that engaged in social media. Like most people, I’m on the Internet, in some fashion, most of my waking hours, if not through a computer then via a tablet or smart phone.
With so much of my life reduced to microscopic bits and bytes bouncing around in a netherworld of digital data, how much could Nick Percoco and a determined team of hackers find out about me? Worse, how much damage could they potentially cause?
What I learned is that virtually all of us are vulnerable to electronic eavesdropping and are easy hack targets. Most of us have adopted the credo “security by obscurity,” but all it takes is a person or persons with enough patience and know-how to pierce anyone’s privacy — and, if they choose, to wreak havoc on your finances and destroy your reputation.
H/T to Terry Teachout for the link.
October 11, 2013
Bruce Schneier explains why you’d want to do this … and how much of a pain it can be to set up and work with:
Since I started working with Snowden’s documents, I have been using a number of tools to try to stay secure from the NSA. The advice I shared included using Tor, preferring certain cryptography over others, and using public-domain encryption wherever possible.
I also recommended using an air gap, which physically isolates a computer or local network of computers from the Internet. (The name comes from the literal gap of air between the computer and the Internet; the word predates wireless networks.)
But this is more complicated than it sounds, and requires explanation.
Since we know that computers connected to the Internet are vulnerable to outside hacking, an air gap should protect against those attacks. There are a lot of systems that use — or should use — air gaps: classified military networks, nuclear power plant controls, medical equipment, avionics, and so on.
Osama bin Laden used one. I hope human rights organizations in repressive countries are doing the same.
Air gaps might be conceptually simple, but they’re hard to maintain in practice. The truth is that nobody wants a computer that never receives files from the Internet and never sends files out into the Internet. What they want is a computer that’s not directly connected to the Internet, albeit with some secure way of moving files on and off.
He also provides a list of ten rules (or recommendations, I guess) you should follow if you want to set up an air-gapped machine of your own.
October 2, 2013
September 21, 2013
In The Atlantic, Garance Franke-Ruta has transcribed some of Representative Justin Amash’s comments on the ins and outs of confidential briefings offered to congressmen:
Amash, who has previously butted heads with Intelligence Committee Chairman Mike Rogers and ranking member Dutch Ruppersberger over access to classified documents, recounted what happened during remarks before libertarian activists attending the Liberty Political Action Conference in Chantilly, Virginia, Thursday night. I quote his anecdote in full here, because it’s interesting to hear what it feels like to be one of the activist congressmen trying to rein in National Security Agency surveillance:
What you hear from the intelligence committees, from the chairmen of the intelligence committees, is that members can come to classified briefings and they can ask whatever questions they want. But if you’ve actually been to one of these classified briefings — which none of you have, but I have — what you discover is that it’s just a game of 20 questions.
You ask a question and if you don’t ask it exactly the right way you don’t get the right answer. So if you use the wrong pronoun, or if you talk about one agency but actually another agency is doing it, they won’t tell you. They’ll just tell you, no that’s not happening. They don’t correct you and say here’s what is happening.
So you actually have to go from meeting to meeting, to hearing to hearing, asking questions — sometimes ridiculous questions — just to get an answer. So this idea that you can just ask, just come into a classified briefing and ask questions and get answers is ridiculous.
If the government — in an extreme hypothetical, let’s say they had a base on the moon. If I don’t know that there’s a base on the moon, I’m not going to go into the briefing and say you have a moonbase. Right? [Audience laughs.] If they have a talking bear or something, I’m not going to say, ‘You guys, you didn’t engineer the talking bear.’
You’re not going to ask questions about things you don’t know about. The point of the Intelligence Committee is to provide oversight to Congress and every single member of Congress needs information. Each person in Congress represents about 700,000 people. It’s not acceptable to say, ‘Well, the Intelligence Committees get the information, we don’t need to share with the rest of Congress.’ The Intelligence Committee is not one of the branches of government, but that’s how it’s being treated over and over again.