OpenBSD founder Theo de Raadt has created LibreSSL, a fork of OpenSSL, the widely used open source cryptographic software library that contained the notorious Heartbleed security vulnerability.
OpenSSL has suffered from a lack of funding and code contributions despite being used in websites and products by many of the world’s biggest and richest corporations.
The decision to fork OpenSSL is bound to be controversial given that OpenSSL powers hundreds of thousands of Web servers. When asked why he wanted to start over instead of helping to make OpenSSL better, de Raadt said the existing code is too much of a mess.
“Our group removed half of the OpenSSL source tree in a week. It was discarded leftovers,” de Raadt told Ars in an e-mail. “The Open Source model depends [on] people being able to read the code. It depends on clarity. That is not a clear code base, because their community does not appear to care about clarity. Obviously, when such cruft builds up, there is a cultural gap. I did not make this decision… in our larger development group, it made itself.”
The LibreSSL code base is on OpenBSD.org, and the project is supported financially by the OpenBSD Foundation and OpenBSD Project. LibreSSL has a bare bones website that is intentionally unappealing.
“This page scientifically designed to annoy web hipsters,” the site says. “Donate now to stop the Comic Sans and Blink Tags.” In explaining the decision to fork, the site links to a YouTube video of a cover of the Twisted Sister song “We’re Not Gonna Take It.”
April 23, 2014
April 16, 2014
You would have thought this would have sunk in by now. The fact that it hasn’t shows what an extraordinary machine the internet is — quite different to any technology that has gone before it. When the Lovebug struck, few of us lived our lives online. Back then we banked in branches, shopped in shops, met friends and lovers in the pub and obtained jobs by posting CVs. Tweeting was for the birds. Cyberspace was marginal. Now, for billions, the online world is their lives. But there is a problem. Only a tiny, tiny percentage of the people who use the internet have even the faintest clue about how any of it works. “SSL”, for instance, stands for “Secure Sockets Layer”.
I looked it up and sort of understood it — for about five minutes. While most drivers have at least a notion of how an engine works (something about petrol exploding in cylinders and making pistons go up and down and so forth) the very language of the internet — “domain names” and “DNS codes”, endless “protocols” and so forth — is arcane, exclusive; it is, in fact, the language of magic. For all intents and purposes the internet is run by wizards.
And the trouble with letting wizards run things is that when things go wrong we are at their mercy. The world spends several tens of billions of pounds a year on anti-malware programs, which we are exhorted to buy lest the walls of our digital castles collapse around us. Making security software is a huge industry, and whenever there is a problem — either caused by viruses or by a glitch like Heartbleed — the internet security companies rush to be quoted in the media. And guess what, their message is never “keep calm and carry on”. As Professor Ross Anderson of Cambridge University says: “Almost all the cost of cybercrime is the cost of anticipation.”
Michael Hanlon, “Relax, Mumsnet users: don’t lose sleep over Heartbleed hysteria”, Telegraph, 2014-04-16
April 11, 2014
Some people are claiming that the Heartbleed bug proves that open source software is a failure. ESR quickly addresses that idiotic claim:
I actually chuckled when I read rumor that the few anti-open-source advocates still standing were crowing about the Heartbleed bug, because I’ve seen this movie before after every serious security flap in an open-source tool. The script, which includes a bunch of people indignantly exclaiming that many-eyeballs is useless because bug X lurked in a dusty corner for Y months, is so predictable that I can anticipate a lot of the lines.
The mistake being made here is a classic example of Frederic Bastiat’s “things seen versus things unseen”. Critics of Linus’s Law overweight the bug they can see and underweight the high probability that equivalently positioned closed-source security flaws they can’t see are actually far worse, just so far undiscovered.
That’s how it seems to go whenever we get a hint of the defect rate inside closed-source blobs, anyway. As a very pertinent example, in the last couple months I’ve learned some things about the security-defect density in proprietary firmware on residential and small business Internet routers that would absolutely curl your hair. It’s far, far worse than most people understand out there.
Ironically enough this will happen precisely because the open-source process is working … while, elsewhere, bugs that are far worse lurk in closed-source router firmware. Things seen vs. things unseen…
Returning to Heartbleed, one thing conspicuously missing from the downshouting against OpenSSL is any pointer to an implementation that is known to have a lower defect rate over time. This is for the very good reason that no such empirically-better implementation exists. What is the defect history on proprietary SSL/TLS blobs out there? We don’t know; the vendors aren’t saying. And we can’t even estimate the quality of their code, because we can’t audit it.
The response to the Heartbleed bug illustrates another huge advantage of open source: how rapidly we can push fixes. The repair for my Linux systems was a push-one-button fix less than two days after the bug hit the news. Proprietary-software customers will be lucky to see a fix within two months, and all too many of them will never see one at all.
Update: There are lots of sites offering tools to test whether a given site is vulnerable to the Heartbleed bug, but you need to step carefully there, as there’s a thin line between what’s legal in some countries and what counts as an illegal break-in attempt:
Websites and tools that have sprung up to check whether servers are vulnerable to OpenSSL’s mega-vulnerability Heartbleed have thrown up anomalies in computer crime law on both sides of the Atlantic.
Both the US Computer Fraud and Abuse Act and its UK equivalent the Computer Misuse Act make it an offence to test the security of third-party websites without permission.
Testing to see what version of OpenSSL a site is running, and whether it also supports the vulnerable Heartbeat extension, would be legal. But doing anything more active — without permission from website owners — would take security researchers onto the wrong side of the law.
And you shouldn’t just rush out and change all your passwords right now (you’ll probably need to do it, but the timing matters):
Heartbleed is a catastrophic bug in the widely used OpenSSL library that creates a means for attackers to lift passwords, crypto-keys and other sensitive data from the memory of secure server software, 64KB at a time. The mega-vulnerability was patched earlier this week, and software should be updated to use the new version, 1.0.1g. But to fully clean up the problem, admins of at-risk servers should generate new public-private key pairs, destroy their session cookies, and update their SSL certificates before telling users to change every potentially compromised password on the vulnerable systems.
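Whether a given build is affected comes down to a simple version comparison: 1.0.1 through 1.0.1f are vulnerable, 1.0.1g is fixed. A minimal sketch of that check (the function name is ours, and the affected 1.0.2 beta series is ignored for brevity):

```python
import re

# Heartbleed affects OpenSSL 1.0.1 through 1.0.1f; 1.0.1g is fixed.
# (The 1.0.2-beta series was also affected; this sketch ignores betas.)
def is_heartbleed_vulnerable(version: str) -> bool:
    m = re.fullmatch(r"1\.0\.1([a-z]?)", version)
    if not m:
        return False
    letter = m.group(1)
    # Bare "1.0.1" and letters up to "f" are in the vulnerable range.
    return letter == "" or letter <= "f"

print(is_heartbleed_vulnerable("1.0.1e"))  # True
print(is_heartbleed_vulnerable("1.0.1g"))  # False
print(is_heartbleed_vulnerable("0.9.8"))   # False
```

Note that the version string alone is only a first pass: a distribution may backport the fix without bumping the letter, which is why active testing tools sprang up.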
April 9, 2014
Update: In case you’re not concerned about the seriousness of this issue, The Register‘s John Leyden would like you to think again.
The catastrophic crypto-key-leaking vulnerability in OpenSSL affects far more than web servers, with everything from routers to smartphones also affected.
The so-called “Heartbleed” vulnerability (CVE-2014-0160) can be exploited to extract information from servers running a vulnerable version of OpenSSL, and this includes email servers and Android smartphones as well as routers.
Hackers could potentially gain access to a private encryption key and then use it to decipher the encrypted traffic to and from vulnerable websites.
Web sites including Yahoo!, Flickr and OpenSSL were among the many left vulnerable to the megabug that exposed encryption keys, passwords and other sensitive information.
Preliminary tests suggested 47 of the 1,000 largest sites are vulnerable to Heartbleed, and that’s counting only the fewer than half that support SSL or HTTPS at all. Many of the affected sites – including Yahoo! – have since patched the vulnerability. Even so, security experts – such as Graham Cluley – remain concerned.
OpenSSL is a widely used encryption library that is a key component of technology that enables secure (https) website connections.
The bug exists in the OpenSSL 1.0.1 source code and stems from coding flaws in a fairly new feature known as the TLS Heartbeat Extension. “TLS heartbeats are used as ‘keep alive’ packets so that the ends of an encrypted connection can agree to keep the session open even when they don’t have any official data to exchange,” explains security veteran Paul Ducklin in a post on Sophos’ Naked Security blog.
The Heartbleed vulnerability in the OpenSSL cryptographic library might be exploited to reveal the contents of secured communication exchanges. The same flaw might also be used to lift SSL private keys.
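The mechanics of the over-read are simple enough to model in a few lines. The sketch below is an illustration of the bug pattern, not the actual OpenSSL C code: the responder trusts the length claimed in the request instead of checking it against the real payload size.

```python
# Simplified model of the Heartbleed over-read. In the real bug the
# responder copied however many bytes the request's length field
# claimed, without checking that against the actual payload size.

# Pretend process memory: a 4-byte heartbeat payload sits right next
# to secret material.
MEMORY = b"ping" + b"SECRET_PRIVATE_KEY_MATERIAL..."

def heartbeat_response(claimed_length: int) -> bytes:
    # Buggy behaviour: no bounds check against the real payload length.
    return MEMORY[:claimed_length]

print(heartbeat_response(4))   # b'ping' -- an honest request
print(heartbeat_response(30))  # leaks 26 bytes of neighbouring memory
```

The fix in 1.0.1g is exactly the check this sketch omits: discard any heartbeat request whose claimed length exceeds the payload actually received.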
This means that sites could still be vulnerable to attacks after installing the patches in cases where a private key has been stolen. Sites therefore need to revoke exposed keys, reissue new keys, and invalidate all session keys and session cookies.
“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.
Half a million sites are vulnerable, including my own. Test your vulnerability here.
The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.
At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.
April 7, 2014
March 16, 2014
ESR put this together as a backgrounder for a documentary film maker:
In its original and still most correct sense, the word “hacker” describes a member of a tribe of expert and playful programmers with roots in 1960s and 1970s computer-science academia, the early microcomputer experimenters, and several other contributory cultures including science-fiction fandom.
Through a historical process I could explain in as much detail as you like, this hacker culture became the architects of today’s Internet and evolved into the open-source software movement. (I had a significant role in this process as historian and activist, which is why my friends recommended that you talk to me.)
People outside this culture sometimes refer to it as “old-school hackers” or “white-hat hackers” (the latter term also has some more specific shades of meaning). People inside it (including me) insist that we are just “hackers” and using that term for anyone else is misleading and disrespectful.
Within this culture, “hacker” applied to an individual is understood to be a title of honor which it is arrogant to claim for yourself. It has to be conferred by people who are already insiders. You earn it by building things, by a combination of work and cleverness and the right attitude. Nowadays “building things” centers on open-source software and hardware, and on the support services for open-source projects.
There are — seriously — people in the hacker culture who refuse to describe themselves individually as hackers because they think they haven’t earned the title yet — they haven’t built enough stuff. One of the social functions of tribal elders like myself is to be seen to be conferring the title, a certification that is taken quite seriously; it’s like being knighted.
There is a cluster of geek subcultures within which the term “hacker” has very high prestige. If you think about my earlier description it should be clear why. Building stuff is cool, it’s an achievement.
There is a tendency for members of those other subcultures to try to appropriate hacker status for themselves, and to emulate various hacker behaviors — sometimes superficially, sometimes deeply and genuinely.
Imitative behavior creates a sort of gray zone around the hacker culture proper. Some people in that zone are mere posers. Some are genuinely trying to act out hacker values as they (incompletely) understand them. Some are ‘hacktivists’ with Internet-related political agendas but who don’t write code. Some are outright criminals exploiting journalistic confusion about what “hacker” means. Some are ambiguous mixtures of several of these types.
March 10, 2014
Bruce Schneier on the odd linguistic tic of how we describe an act depending on who the actor is:
Back when we first started getting reports of the Chinese breaking into U.S. computer networks for espionage purposes, we described it in some very strong language. We called the Chinese actions cyber-attacks. We sometimes even invoked the word cyberwar, and declared that a cyber-attack was an act of war.
When Edward Snowden revealed that the NSA has been doing exactly the same thing as the Chinese to computer networks around the world, we used much more moderate language to describe U.S. actions: words like espionage, or intelligence gathering, or spying. We stressed that it’s a peacetime activity, and that everyone does it.
The reality is somewhere in the middle, and the problem is that our intuitions are based on history.
Electronic espionage is different today than it was in the pre-Internet days of the Cold War. Eavesdropping isn’t passive anymore. It’s not the electronic equivalent of sitting close to someone and overhearing a conversation. It’s not passively monitoring a communications circuit. It’s more likely to involve actively breaking into an adversary’s computer network — be it Chinese, Brazilian, or Belgian — and installing malicious software designed to take over that network.
In other words, it’s hacking. Cyber-espionage is a form of cyber-attack. It’s an offensive action. It violates the sovereignty of another country, and we’re doing it with far too little consideration of its diplomatic and geopolitical costs.
March 3, 2014
Gather round you kids, ’cause Uncle Eric is going to tell you about the dim, distant days of hacking before open source:
I was a historian before I was an activist, and I’ve been reminded recently that a lot of younger hackers have a simplified and somewhat mythologized view of how our culture evolved, one which tends to back-project today’s conditions onto the past.
In particular, many of us never knew – or are in the process of forgetting – how dependent we used to be on proprietary software. I think by failing to remember that past we are risking that we will misunderstand the present and mispredict the future, so I’m going to do what I can to set the record straight.
Without the Unix-spawned framework of concepts and technologies, having source code simply didn’t help very much. This is hard for younger hackers to realize, because they have no experience of the software world before retargetable compilers and code portability became relatively common. It’s hard for a lot of older hackers to remember because we mostly cut our teeth on Unix environments that were a few crucial years ahead of the curve.
But we shouldn’t forget. One very good reason is that believing a myth of the fall obscures the remarkable rise that we actually accomplished, bootstrapping ourselves up through a series of technological and social inventions to where open source on everyone’s desk and in everyone’s phone and ubiquitous in the Internet infrastructure is now taken for granted.
We didn’t get here because we failed in our duty to protect a prelapsarian software commons, but because we succeeded in creating one. That is worth remembering.
Update: In a follow-up post, ESR talks about closed source “sharecroppers” and Unix “nomads”.
Like the communities around SHARE (IBM mainframe users) and DECUS (DEC minicomputers) in the 1960s and 1970s, whatever community existed around ESPOL was radically limited by its utter dependence on the permissions and APIs that a single vendor was willing to provide. The ESPOL compiler was not retargetable. Whatever community developed around it could neither develop any autonomy nor survive the death of its hardware platform; the contributors had no place to retreat to in the event of predictable single-point failures.
I’ll call this sort of community “sharecroppers”. That term is a reference to SHARE, the oldest such user group. It also roughly expresses the relationship between these user groups and contributors, on the one hand, and the vendor on the other. The implied power relationship was pretty totally asymmetrical.
Contrast this with early Unix development. The key difference is that Unix-hosted code could survive the death of not just original hardware platforms but entire product lines and vendors, and contributors could develop a portable skillset and toolkits. The enabling technology – retargetable C compilers – made them not sharecroppers but nomads, able to evade vendor control by leaving for platforms that were less locked down and taking their tools with them.
I understand that it’s sentimentally appealing to retrospectively sweep all the early sharecropper communities into “open source”. But I think it’s a mistake, because it blurs the importance of retargetability, the ability to resist or evade vendor lock-in, and portable tools that you can take away with you.
Without those things you cannot have anything like the individual mental habits or collective scale of contributions that I think is required before saying “an open-source culture” is really meaningful.
February 9, 2014
Driving your car anywhere soon? Got anti-hacking gear installed?
Spanish hackers have been showing off their latest car-hacking creation; a circuit board using untraceable, off-the-shelf parts worth $20 that can give wireless access to the car’s controls while it’s on the road.
The device, which will be shown off at next month’s Black Hat Asia hacking conference, uses the Controller Area Network (CAN) ports car manufacturers build into their engines for computer-system checks. Once assembled, the smartphone-sized device can be plugged in under some vehicles, or inside the bonnet of other models, and give the hackers remote access to control systems.
“A car is a mini network,” security researcher Alberto Garcia Illera told Forbes. “And right now there’s no security implemented.”
Illera and fellow security researcher Javier Vazquez-Vidal said that they had tested the CAN Hacking Tool (CHT) successfully on four popular makes of cars and had been able to apply the emergency brakes while the car was in motion, affect the steering, turn off the headlights, or set off the car alarm.
The device currently only works via Bluetooth, but the team says that they will have a GSM version ready by the time the conference starts. This would allow remote control of a target car from much greater distances, and more technical details of the CHT will be given out at the conference.
February 7, 2014
An interesting post by Susan Sons illustrating some of the reasons women do not become hackers in the same proportion that men do:
Looking around at the hackers I know, the great ones started before puberty. Even if they lacked computers, they were taking apart alarm clocks, repairing pencil sharpeners or tinkering with ham radios. Some of them built pumpkin launchers or LEGO trains. I started coding when I was six years old, sitting in my father’s basement office, on the machine he used to track inventory for his repair service. After a summer of determined trial and error, I’d managed to make some gorillas throw things other than exploding bananas. It felt like victory!
Twelve-year-old girls today don’t generally get to have the experiences that I did. Parents are warned to keep kids off the computer lest they get lured away by child molesters or worse — become fat! That goes doubly for girls, who then grow up to be liberal arts majors. Then, in their late teens or early twenties, someone who feels the gender skew in technology communities is a problem drags them to a LUG meeting or an IRC channel. Shockingly, this doesn’t turn the young women into hackers.
Why does anyone, anywhere, think this will work? Start with a young woman who’s already formed her identity. Dump her in a situation that operates on different social scripts than she’s accustomed to, full of people talking about a subject she doesn’t yet understand. Then tell her the community is hostile toward women and therefore doesn’t have enough of them, all while showing her off like a prize poodle so you can feel good about recruiting a female. This is a recipe for failure.
I’ve never had a problem with old-school hackers. These guys treat me like one of them, rather than “the woman in the group”, and many are old enough to remember when they worked on teams that were about one third women, and no one thought that strange. Of course, the key word here is “old” (sorry guys). Most of the programmers I like are closer to my father’s age than mine.
The new breed of open-source programmer isn’t like the old. They’ve changed the rules in ways that have put a spotlight on my sex for the first time in my 18 years in this community.
When we call a man a “technologist”, we mean he’s a programmer, system administrator, electrical engineer or something like that. The same used to be true when we called a woman a “technologist”. However, according to the new breed, a female technologist might also be a graphic designer or someone who tweets for a living. Now, I’m glad that there are social media people out there — it means I can ignore that end of things — but putting them next to programmers makes being a “woman in tech” feel a lot like the Programmer Special Olympics.
January 21, 2014
In The Register, John Leyden discusses a new start-up’s plans for defending websites against hackers:
Startup Shape Security is re-appropriating a favourite tactic of malware writers in developing a technology to protect websites against automated hacking attacks.
Trojan authors commonly obfuscate their code to frustrate reverse engineers at security firms. Shape’s founders, former staffers from Google, VMware and Mozilla (among others), have created a network security appliance that takes a similar approach (dubbed real-time polymorphism) to defending websites against breaches — by hobbling the capability of malware, bots, and other scripted attacks to interact with web applications.
Polymorphic code was originally used by malicious software to rewrite its own code every time a new machine was infected. Shape has invented patent-pending technology that is able to implement “real-time polymorphism” — or dynamically changing code — on any website. By doing this, it removes the static elements which botnets and malware depend on for their attacks.
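As an illustration of the general idea (a toy sketch, not Shape’s actual product), per-response polymorphism might replace a form’s field names with one-time random tokens, keeping a server-side map so submissions can be translated back. Scripted attacks that target fixed field names like “username” and “password” then have nothing stable to aim at.

```python
import re
import secrets

# Toy sketch of "real-time polymorphism": rewrite a form's field names
# to one-time random tokens on every response, keeping a server-side
# map so submissions can be translated back.

def polymorph(html: str) -> tuple[str, dict[str, str]]:
    mapping: dict[str, str] = {}
    def swap(match: re.Match) -> str:
        real = match.group(1)
        token = "f_" + secrets.token_hex(8)  # fresh token per response
        mapping[token] = real
        return f'name="{token}"'
    return re.sub(r'name="([^"]+)"', swap, html), mapping

form = '<input name="username"><input name="password">'
morphed, mapping = polymorph(form)
# Every response uses different names, but the server can invert them:
restored = {mapping[t]: t for t in mapping}
print(sorted(restored))  # ['password', 'username']
```

A production system would also have to rewrite the JavaScript and CSS that reference those names, which is presumably where the hard engineering lives.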
January 20, 2014
I’m not a programmer, although I’ve spent much of my working life around programmers, which is why I recognize the pattern so well: I’ve seen it in action so often.
The few times I’ve needed to create a program to do something (usually a text transformation of one sort or another), this has been exactly the way the “labour-saving” automation has gone. My personal version of the chart would have an additional phase at the beginning: I have to begin by learning or re-learning the tool I need to use. I learn just enough of how to use a given tool to do the task at hand, then the knowledge atrophies from lack of use and the next time I need to do something similar, the first priority is figuring out the right tool and then learning the same basic tasks all over again.
I started out with REXX when I was a co-op student at IBM. Several years later, I needed to convert a large set of documents from one markup language to another on a Unix system and that meant learning (just enough) shell scripting, sed and awk. A few years after that the right tool seemed to be Perl. In every case, the knowledge doesn’t stick with me because I don’t need to do anything with the language after I’ve finished the immediate task. I remember being able to do it but I don’t recall exactly how to do it.
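For the curious, the kind of throwaway markup conversion described above might look something like this today. The tag vocabulary is invented for illustration (loosely GML-flavoured source to HTML); a real job would cover the full source vocabulary.

```python
import re

# One-off markup conversion of the sort described above: map a few
# GML-style tags (":p.", ":hp2." ... ":ehp2.") to HTML equivalents.
# The tag list is invented for illustration.
TAG_MAP = {"p": "p", "hp2": "b", "hp1": "i"}

def convert(line: str) -> str:
    def swap(m: re.Match) -> str:
        closing, tag = m.group(1) == "e", m.group(2).lower()
        if tag not in TAG_MAP:
            return m.group(0)  # leave unknown tags untouched
        return f"</{TAG_MAP[tag]}>" if closing else f"<{TAG_MAP[tag]}>"
    return re.sub(r":(e?)([a-z0-9]+)\.", swap, line)

print(convert(":p.Some :hp2.bold:ehp2. text"))
# -> <p>Some <b>bold</b> text
```

Ten lines, twenty minutes to write, and promptly forgotten — which is rather the point of the passage above.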
December 12, 2013
Charles Stross has a few adrenaline shots for your paranoia gland this morning:
The internet of things may be coming to us all faster and harder than we’d like.
Reports coming out of Russia suggest that some Chinese domestic appliances, notably kettles, come kitted out with malware — in the shape of small embedded computers that leech off the mains power to the device. The covert computational passenger hunts for unsecured wifi networks, connects to them, and joins a spam and malware pushing botnet. The theory is that a home computer user might eventually twig if their PC is a zombie, but who looks inside the base of their electric kettle, or the casing of their toaster? We tend to forget that the Raspberry Pi is as powerful as an early 90s UNIX server or a late 90s desktop; it costs £25, is the size of a credit card, and runs off a 5 watt USB power source. And there are cheaper, less competent small computers out there. Building them into kettles is a stroke of genius for a budding crime lord looking to build a covert botnet.
But that’s not what I’m here to talk about.
I’m dozy and slow on the uptake: I should have been all over this years ago.
And it’s not just keyboards. It’s ebook readers. Flashlights. Not your smartphone, but the removable battery in your smartphone. (Have you noticed it running down just a little bit faster?) Your toaster and your kettle are just the start. Could your electric blanket be spying on you? Koomey’s law is going to keep pushing the power consumption of our devices down even after Moore’s law grinds to a halt: and once Moore’s law ends, the only way forward is to commoditize the product of those ultimate fab lines, and churn out chips for pennies. In another decade, we’ll have embedded computers running some flavour of Linux where today we have smart inventory control tags — any item in a shop that costs more than about £50, basically. Some of those inventory control tags will be watching and listening to us; and some of their siblings will, repurposed, be piggy-backing a ride home and casing the joint.
The possibilities are endless: it’s the dark side of the internet of things. If you’ll excuse me now, I’ve got to go wallpaper my apartment in tinfoil …
December 11, 2013
Yet for Babbage, the biggest innovation of Doom was something subtler. Video games, then and now, are mainly passive entertainment products, a bit like a more interactive television. You buy one and play it until you either beat it or get bored. But Doom was popular enough that eager users delved into its inner workings, hacking together programs that would let people build their own levels. Drawing something in what was, essentially, a rudimentary CAD program, and then running around inside your own creation, was an astonishing, liberating experience. Like almost everybody else, Babbage’s first custom level was an attempt to reconstruct his own house.
Other programs allowed you to play around with the game itself, changing how weapons worked, or how monsters behaved. For a 12-year-old who liked computers but was rather fuzzy about how they actually worked, being able to pull back the curtain like this was revelatory. Tinkering around with Doom was a wonderful introduction to the mysteries of computers and how their programs were put together. Rather than trying to stop this unauthorised meddling, id embraced it. Its next game, Quake, was designed to actively encourage it.
The modification, or “modding” movement that Doom and Quake inspired heavily influenced the growing games industry. Babbage knows people who got jobs in the industry off the back of their ability to remix others’ creations. (Tim Willits, id’s current creative director, was hired after impressing the firm with his home-brewed Doom maps.) Commercial products — even entire genres of games — exist that trace their roots back to a fascinated teenager playing around in his (or, more rarely, her) bedroom.
But it had more personal effects, too. Being able to alter the game transformed the player from a mere passive consumer of media into a producer in his own right, something that is much harder in most other kinds of media. Amateur filmmakers need expensive kit and a willing cast to indulge their passion. Mastering a musical instrument takes years of practice; starting a band requires like-minded friends. Writing a novel looks easy, until you try it. But creating your own Doom mod was easy enough that anyone could learn it in a day or two. With a bit of practice, it was possible to churn out professional-quality stuff. “User-generated content” was a big buzzword a few years back, but once again, Doom got there first.
November 14, 2013
The Electronic Frontier Foundation thinks that extending the DRM regime to cars (as in the latest vehicle from Renault) will drive consumers crazy:
Forget extra cupholders or power windows: the new Renault Zoe comes with a “feature” that absolutely nobody wants. Instead of selling consumers a complete car that they can use, repair, and upgrade as they see fit, Renault has opted to lock purchasers into a rental contract with a battery manufacturer and enforce that contract with digital rights management (DRM) restrictions that can remotely prevent the battery from charging at all.
We’ve long joined makers and tinkerers in warning that, as software becomes a part of more and more everyday devices, DRM and the legal restrictions on circumventing it will create hurdles to standard repairs and even operation. In the U.S., a car manufacturer who had wrapped its onboard software in technical restrictions could argue that attempts to get around those are in violation of the Digital Millennium Copyright Act (DMCA) — specifically section 1201, the notorious “anti-circumvention” provisions. These provisions make it illegal for users to circumvent DRM or help others do so, even if the purpose is perfectly legal otherwise. Similar laws exist around the world, and are even written into some international trade agreements — including, according to a recently leaked draft, the Trans-Pacific Partnership Agreement.
Since the DMCA became law in 1998, Section 1201 has resulted in countless unintended consequences. It has chilled innovation, stifled the speech of legitimate security researchers, and interfered with consumer rights. Section 1201 came under particular fire this year because it may prevent consumers from unlocking their own phones to use with different carriers. After a broadly popular petition raised the issue, the White House acknowledged that the restriction is out of line with common sense.