Quotulatiousness

February 23, 2015

The “Internet of Things” (That May Or May Not Let You Do That)

Filed under: Liberty,Technology — Tags: , , , , — Nicholas @ 03:00

Cory Doctorow highlights some possible developments within the “Internet of Things” that should concern us all:

The digital world has been colonized by a dangerous idea: that we can and should solve problems by preventing computer owners from deciding how their computers should behave. I’m not talking about a computer that’s designed to say, “Are you sure?” when you do something unexpected — not even one that asks, “Are you really, really sure?” when you click “OK.” I’m talking about a computer designed to say, “I CAN’T LET YOU DO THAT DAVE” when you tell it to give you root, to let you modify the OS or the filesystem.

Case in point: the cell-phone “kill switch” laws in California and Minnesota, which require manufacturers to design phones so that carriers or manufacturers can push an over-the-air update that bricks the phone without any user intervention, designed to deter cell-phone thieves. Early data suggests that the law is effective in preventing this kind of crime, but at a high and largely needless (and ill-considered) price.

To understand this price, we need to talk about what “security” is, from the perspective of a mobile device user: it’s a whole basket of risks, including the physical threat of violence from muggers; the financial cost of replacing a lost device; the opportunity cost of setting up a new device; and the threats to your privacy, finances, employment, and physical safety from having your data compromised.

The current kill-switch regime puts a lot of emphasis on the physical risks, and treats risks to your data as unimportant. It’s true that the physical risks associated with phone theft are substantial, but if a catastrophic data compromise doesn’t strike terror into your heart, it’s probably because you haven’t thought hard enough about it — and it’s a sure bet that this risk will only increase in importance over time, as you bind your finances, your access controls (car ignition, house entry), and your personal life more tightly to your mobile devices.

That is to say, phones are only going to get cheaper to replace, while mobile data breaches are only going to get more expensive.

It’s a mistake to design a computer to accept instructions over a public network that its owner can’t see, review, and countermand. When every phone has a back door and can be compromised by hacking, social-engineering, or legal-engineering by a manufacturer or carrier, then your phone’s security is only intact for so long as every customer service rep is bamboozle-proof, every cop is honest, and every carrier’s back end is well designed and fully patched.
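Doctorow's alternative to the silent kill switch can be made concrete with a toy sketch (not a real phone stack; all names here are hypothetical): a remote command lands in a queue the owner can see, review, and countermand, rather than executing automatically.

```python
# Toy sketch: a carrier's "brick" command is queued for owner review
# instead of taking effect silently over the air.
class Device:
    def __init__(self):
        self.pending = []    # remote commands awaiting owner review
        self.bricked = False

    def receive_remote(self, command):
        # A carrier push lands in a queue the owner can inspect ...
        self.pending.append(command)

    def owner_review(self, approve):
        # ... and takes effect only with explicit owner consent.
        for command in self.pending:
            if approve and command == "brick":
                self.bricked = True
        self.pending.clear()

phone = Device()
phone.receive_remote("brick")
assert not phone.bricked           # nothing happens without the owner
phone.owner_review(approve=False)  # owner countermands the command
assert not phone.bricked
```

The point of the sketch is the control flow, not the mechanism: the owner sits between the network and the action, which is exactly what the mandated designs omit.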

February 15, 2015

The term “carjacking” may take on a new meaning

Filed under: Law,Technology — Tags: , , , — Nicholas @ 05:00

Earlier this month, The Register‘s Iain Thomson summarized the rather disturbing report released by Senator Ed Markey (D-MA) on the self-reported security (or lack thereof) in modern automobile internal networks:

In short, as we’ve long suspected, the computers in today’s cars can be hijacked wirelessly by feeding specially crafted packets of data into their networks. There’s often no need for physical contact; no leaving of evidence lying around after getting your hands dirty.

This means, depending on the circumstances, the software running in your dashboard can be forced to unlock doors, or become infected with malware, and records of where you’ve been and how fast you were going may be obtained. The lack of encryption in various models means sniffed packets may be readable.

Key systems to start up engines, the electronics connecting up vital things like the steering wheel and brakes, and stuff on the CAN bus, tend to be isolated and secure, we’re told.

The ability for miscreants to access internal systems wirelessly, cause mischief to infotainment and navigation gear, and invade one’s privacy, is irritating, though.

“Drivers have come to rely on these new technologies, but unfortunately the automakers haven’t done their part to protect us from cyber-attacks or privacy invasions,” said Markey, a member of the Senate’s Commerce, Science and Transportation Committee.

“Even as we are more connected than ever in our cars and trucks, our technology systems and data security remain largely unprotected. We need to work with the industry and cyber-security experts to establish clear rules of the road to ensure the safety and privacy of 21st-century American drivers.”

Of the 17 car makers who replied [PDF] to Markey’s letters (Tesla, Aston Martin, and Lamborghini didn’t), all made extensive use of computing in their 2014 models, with some carrying 50 electronic control units (ECUs) running on a series of internal networks.

BMW, Chrysler, Ford, General Motors, Honda, Hyundai, Jaguar Land Rover, Mazda, Mercedes-Benz, Mitsubishi, Nissan, Porsche, Subaru, Toyota, Volkswagen (with Audi), and Volvo responded to the study. According to the senator’s six-page dossier:

  • Over 90 per cent of vehicles manufactured in 2014 had a wireless network of some kind — such as Bluetooth to link smartphones to the dashboard or a proprietary standard for technicians to pull out diagnostics.
  • Only six automakers have any kind of security software running in their cars — such as firewalls for blocking connections from untrusted devices, or encryption for protecting data in transit around the vehicle.
  • Just five secured wireless access points with passwords, encryption or proximity sensors that (in theory) only allow hardware detected within the car to join a given network.
  • And only models made by two companies can alert the manufacturers in real time if a malicious software attack is attempted — the others wait until a technician checks at the next servicing.

There wasn’t much detail on the security of over-the-air updates for firmware, nor the use of crypto to protect personal data being phoned home from vehicles to an automaker’s HQ.

January 7, 2015

Cory Doctorow on the dangers of legally restricting technologies

Filed under: Law,Liberty,Media,Technology — Tags: , , , , — Nicholas @ 02:00

In Wired, Cory Doctorow explains why bad legal precedents from more than a decade ago are making us more vulnerable rather than safer:

We live in a world made of computers. Your car is a computer that drives down the freeway at 60 mph with you strapped inside. If you live or work in a modern building, computers regulate its temperature and respiration. And we’re not just putting our bodies inside computers — we’re also putting computers inside our bodies. I recently exchanged words in an airport lounge with a late arrival who wanted to use the sole electrical plug, which I had beat him to, fair and square. “I need to charge my laptop,” I said. “I need to charge my leg,” he said, rolling up his pants to show me his robotic prosthesis. I surrendered the plug.

You and I and everyone who grew up with earbuds? There’s a day in our future when we’ll have hearing aids, and chances are they won’t be retro-hipster beige transistorized analog devices: They’ll be computers in our heads.

And that’s why the current regulatory paradigm for computers, inherited from the 16-year-old stupidity that is the Digital Millennium Copyright Act, needs to change. As things stand, the law requires that computing devices be designed to sometimes disobey their owners, so that their owners won’t do something undesirable. To make this work, we also have to criminalize anything that might help owners change their computers to let the machines do that supposedly undesirable thing.

This approach to controlling digital devices was annoying back in, say, 1995, when we got the DVD player that prevented us from skipping ads or playing an out-of-region disc. But it will be intolerable and deadly dangerous when our 3-D printers, self-driving cars, smart houses, and even parts of our bodies are designed with the same restrictions. Because those restrictions would change the fundamental nature of computers. Speaking in my capacity as a dystopian science fiction writer: This scares the hell out of me.

December 19, 2014

Mark Steyn on the collapse of moral fibre at Sony

Filed under: Asia,Business,Media,USA — Tags: , , , — Nicholas @ 00:04

Mark Steyn is never one to hold back an opinion:

I was barely aware of The Interview until, while sitting through a trailer for what seemed like just another idiotic leaden comedy, my youngest informed me that the North Koreans had denounced the film as “an act of war”. If it is, they seem to have won it fairly decisively: Kim Jong-Un has just vaporized a Hollywood blockbuster as totally as if one of his No Dong missiles had taken out the studio. As it is, the fellows with no dong turned out to be the executives of Sony Pictures.

I wouldn’t mind but this is the same industry that congratulates itself endlessly — not least in its annual six-hour awards ceremony — on its artists’ courage and bravery. Called on to show some for the first time in their lives, they folded like a cheap suit. As opposed to the bank-breaking suit their lawyers advised them they’d be looking at if they released the film and someone put anthrax in the popcorn. I think of all the occasions in recent years when I’ve found myself sharing a stage with obscure Europeans who’ve fallen afoul of Islam — Swedish artists, Danish cartoonists, Norwegian comediennes, all of whom showed more courage than these Beverly Hills bigshots.

While I often find Mark Steyn’s comments amusing and insightful, the real lesson here may not be the spineless response of Sony, but the impact of a legal system on the otherwise free actions of individuals and organizations: if Sony had gone ahead with the release and someone did attack one or more of the theatres where the movie was being shown, how would the legal system treat the situation? As an act of war by an external enemy or as an act of gross negligence by Sony and the theatre owners that would bankrupt every single company in the distribution chain (and probably lead to criminal charges against individual theatre managers and corporate officers)? While I disagree with Sony’s decision to fold under the pressure, I can’t imagine any corporate board being comfortable with that kind of stark legal threat … Sony’s executives may have been presented with no choice at all.

I see that, following the disappearance of The Interview, a Texan movie theater replaced it with a screening of Team America. That film wouldn’t get made today, either.

Hollywood has spent the 21st century retreating from storytelling into a glossy, expensive CGI playground in which nothing real is at stake. That’s all we’ll be getting from now on. Oh, and occasional Oscar bait about embattled screenwriters who stood up to the House UnAmerican Activities Committee six decades ago, even as their successors cave to, of all things, Kim’s UnKorean Activities Committee. American pop culture — supposedly the most powerful and influential force on the planet — has just surrendered to a one-man psycho-state economic basket-case that starves its own population.

Kim Jong-won.

Eugene Volokh makes some of the same points that Steyn raises:

Deadline Hollywood mentions several such theater chains. Yesterday, the Department of Homeland Security stated that there was “no credible intelligence” that such threatened terrorist attacks would take place, but unsurprisingly, some chains are being extra cautious here.

I sympathize with the theaters’ situation — they’re in the business of showing patrons a good time, and they’re rightly not interested in becoming free speech martyrs, even if there’s only a small chance that they’ll be attacked. Moreover, the very threats may well keep moviegoers away from theater complexes that are showing the movie, thus reducing revenue from all the screens at the complex.

But behavior that is rewarded is repeated. Thugs who oppose movies that are hostile to North Korea, China, Russia, Iran, the Islamic State, extremist Islam generally or any other country or religion will learn the lesson. The same will go as to thugs who are willing to use threats of violence to squelch expression they oppose for reasons related to abortion, environmentalism, animal rights and so on.

December 3, 2014

The Tech Model Railroad Club’s role in the rise of the hacking community

Filed under: Railways,Technology — Tags: , , , — Nicholas @ 00:02

Steven Levy talks about one of the less likely origins of part of the hacking world — MIT’s model railroad club:

Peter Samson had been a member of the Tech Model Railroad Club since his first week at MIT in the fall of 1958. The first event that entering MIT freshmen attended was a traditional welcoming lecture, the same one that had been given for as long as anyone at MIT could remember. LOOK AT THE PERSON TO YOUR LEFT … LOOK AT THE PERSON TO YOUR RIGHT … ONE OF YOU THREE WILL NOT GRADUATE FROM THE INSTITUTE. The intended effect of the speech was to create that horrid feeling in the back of the collective freshman throat that signaled unprecedented dread. All their lives, these freshmen had been almost exempt from academic pressure. The exemption had been earned by virtue of brilliance. Now each of them had a person to the right and a person to the left who was just as smart. Maybe even smarter.

There were enough obstacles to learning already — why bother with stupid things like brown-nosing teachers and striving for grades? To students like Peter Samson, the quest meant more than the degree.

Sometime after the lecture came Freshman Midway. All the campus organizations — special-interest groups, fraternities, and such — set up booths in a large gymnasium to try to recruit new members. The group that snagged Peter was the Tech Model Railroad Club. Its members, bright-eyed and crew-cutted upperclassmen who spoke with the spasmodic cadences of people who want words out of the way in a hurry, boasted a spectacular display of HO gauge trains they had in a permanent clubroom in Building 20. Peter Samson had long been fascinated by trains, especially subways. So he went along on the walking tour to the building, a shingle-clad temporary structure built during World War II. The hallways were cavernous, and even though the clubroom was on the second floor it had the dank, dimly lit feel of a basement.

The clubroom was dominated by the huge train layout. It just about filled the room, and if you stood in the little control area called “the notch” you could see a little town, a little industrial area, a tiny working trolley line, a papier-mache mountain, and of course a lot of trains and tracks. The trains were meticulously crafted to resemble their full-scale counterparts, and they chugged along the twists and turns of track with picture-book perfection. And then Peter Samson looked underneath the chest-high boards which held the layout. It took his breath away. Underneath this layout was a more massive matrix of wires and relays and crossbar switches than Peter Samson had ever dreamed existed. There were neat regimental lines of switches, and achingly regular rows of dull bronze relays, and a long, rambling tangle of red, blue, and yellow wires—twisting and twirling like a rainbow-colored explosion of Einstein’s hair. It was an incredibly complicated system, and Peter Samson vowed to find out how it worked.

There were two factions of TMRC. Some members loved the idea of spending their time building and painting replicas of certain trains with historical and emotional value, or creating realistic scenery for the layout. This was the knife-and-paintbrush contingent, and it subscribed to railroad magazines and booked the club for trips on aging train lines. The other faction centered on the Signals and Power Subcommittee of the club, and it cared far more about what went on under the layout. This was The System, which worked something like a collaboration between Rube Goldberg and Wernher von Braun, and it was constantly being improved, revamped, perfected, and sometimes “gronked” — in club jargon, screwed up. S&P people were obsessed with the way The System worked, its increasing complexities, how any change you made would affect other parts, and how you could put those relationships between the parts to optimal use.

November 23, 2014

ESR on how to learn hacking

Filed under: Technology — Tags: , , — Nicholas @ 10:59

Eric S. Raymond has been asked to write this document for years, and he’s finally given in to the demand:

What Is Hacking?

The “hacking” we’ll be talking about in this document is exploratory programming in an open-source environment. If you think “hacking” has anything to do with computer crime or security breaking and came here to learn that, you can go away now. There’s nothing for you here.

Hacking is a style of programming, and following the recommendations in this document can be an effective way to acquire general-purpose programming skills. This path is not guaranteed to work for everybody; it appears to work best for those who start with an above-average talent for programming and a fair degree of mental flexibility. People who successfully learn this style tend to become generalists with skills that are not strongly tied to a particular application domain or language.

Note that one can be doing hacking without being a hacker. “Hacking”, broadly speaking, is a description of a method and style; “hacker” implies that you hack, and are also attached to a particular culture or historical tradition that uses this method. Properly, “hacker” is an honorific bestowed by other hackers.

Hacking doesn’t have enough formal apparatus to be a full-fledged methodology in the way the term is used in software engineering, but it does have some characteristics that tend to set it apart from other styles of programming.

  • Hacking is done on open source. Today, hacking skills are the individual micro-level of what is called “open source development” at the social macrolevel. A programmer working in the hacking style expects and readily uses peer review of source code by others to supplement and amplify his or her individual ability.
  • Hacking is lightweight and exploratory. Rigid procedures and elaborate a-priori specifications have no place in hacking; instead, the tendency is try-it-and-find-out with a rapid release tempo.
  • Hacking places a high value on modularity and reuse. In the hacking style, you try hard never to write a piece of code that can only be used once. You bias towards making general tools or libraries that can be specialized into what you want by freezing some arguments/variables or supplying a context.
  • Hacking favors scrap-and-rebuild over patch-and-extend. An essential part of hacking is ruthlessly throwing away code that has become overcomplicated or crufty, no matter how much time you have invested in it.
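ESR's point about specializing a general tool "by freezing some arguments" has a direct idiom in Python, `functools.partial`. A minimal illustrative sketch (the function and names are my own, not from his document):

```python
from functools import partial

# A general tool: format any record as delimited text.
def join_fields(delimiter, fields):
    return delimiter.join(str(f) for f in fields)

# Specialize it by freezing the delimiter argument, rather than
# writing single-use CSV and TSV formatters from scratch.
as_csv = partial(join_fields, ",")
as_tsv = partial(join_fields, "\t")

print(as_csv([1, 2, 3]))  # 1,2,3
```

The general function is written once and reused; each specialization is one line, which is the bias toward general tools the bullet describes.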

The hacking style has been closely associated with the technical tradition of the Unix operating system.

Recently it has become evident that hacking blends well with the “agile programming” style. Agile techniques such as pair programming and feature stories adapt readily to hacking and vice-versa. In part this is because the early thought leaders of agile were influenced by the open source community. But there has since been traffic in the other direction as well, with open-source projects increasingly adopting techniques such as test-driven development.

September 30, 2014

“These bugs were found – and were findable – because of open-source scrutiny”

Filed under: Technology — Tags: , , , — Nicholas @ 08:13

ESR talks about the visibility problem in software bugs:

The first thing to notice here is that these bugs were found – and were findable – because of open-source scrutiny.

There’s a “things seen versus things unseen” fallacy here that gives bugs like Heartbleed and Shellshock false prominence. We don’t know – and can’t know – how many far worse exploits lurk in proprietary code known only to crackers or the NSA.

What we can project based on other measures of differential defect rates suggests that, however imperfect “many eyeballs” scrutiny is, “few eyeballs” or “no eyeballs” is far worse.

July 10, 2014

Throwing a bit of light on security in the “internet of things”

Filed under: Technology — Tags: , , , , — Nicholas @ 07:36

The “internet of things” is coming: more and more of your surroundings are going to be connected in a vastly expanded internet. A lot of attention needs to be paid to security in this new world, as Dan Goodin explains:

In the latest cautionary tale involving the so-called Internet of things, white-hat hackers have devised an attack against network-connected lightbulbs that exposes Wi-Fi passwords to anyone in proximity to one of the LED devices.

The attack works against LIFX smart lightbulbs, which can be turned on and off and adjusted using iOS- and Android-based devices. Ars Senior Reviews Editor Lee Hutchinson gave a good overview here of the Philips Hue lights, which are programmable, controllable LED-powered bulbs that compete with LIFX. The bulbs are part of a growing trend in which manufacturers add computing and networking capabilities to appliances so people can manipulate them remotely using smartphones, computers, and other network-connected devices. A 2012 Kickstarter campaign raised more than $1.3 million for LIFX, more than 13 times the original goal of $100,000.

According to a blog post published over the weekend, LIFX has updated the firmware used to control the bulbs after researchers discovered a weakness that allowed hackers within about 30 meters to obtain the passwords used to secure the connected Wi-Fi network. The credentials are passed from one networked bulb to another over a mesh network powered by 6LoWPAN, a wireless specification built on top of the IEEE 802.15.4 standard. While the bulbs used the Advanced Encryption Standard (AES) to encrypt the passwords, the underlying pre-shared key never changed, making it easy for the attacker to decipher the payload.
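Why an unchanging pre-shared key defeats the encryption can be shown with a toy model (XOR stands in for AES here, and all names are hypothetical, not the actual LIFX protocol): once the fixed key is extracted from any one bulb's firmware, every sniffed payload on the mesh decrypts.

```python
# Toy model of the flaw: every bulb ships with the same hard-coded key,
# so pulling it from one device decrypts traffic sniffed from any other.
# XOR is a stand-in for the real AES cipher.
FIRMWARE_KEY = b"\x2f"  # identical fixed pre-shared key in every bulb

def xor_cipher(data, key):
    return bytes(b ^ key[0] for b in data)

# Bulb A relays the Wi-Fi password across the mesh, "encrypted":
sniffed = xor_cipher(b"hunter2", FIRMWARE_KEY)

# An attacker who extracted the key from any bulb's firmware recovers it:
print(xor_cipher(sniffed, FIRMWARE_KEY))  # b'hunter2'
```

The cipher itself is irrelevant to the attack: with a global static key, "encrypted" traffic is readable by anyone who has ever opened one device.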

June 12, 2014

Winnipeg Grade 9 students successfully hack Bank of Montreal ATM

Filed under: Business,Cancon,Technology — Tags: , , , , — Nicholas @ 08:00

“Hack” is the wrong word here, as it implies they did something highly technical and unusual. What they did was to use the formal documentation for the ATM and demonstrate that the installer had failed to change the default administrator password:

Matthew Hewlett and Caleb Turon, both Grade 9 students, found an old ATM operators manual online that showed how to get into the machine’s operator mode. On Wednesday over their lunch hour, they went to the BMO’s ATM at the Safeway on Grant Avenue to see if they could get into the system.

“We thought it would be fun to try it, but we were not expecting it to work,” Hewlett said. “When it did, it asked for a password.”

Hewlett and Turon were even more shocked when their first guess at the six-digit password worked: they had simply tried a common default password. The boys then immediately went to the BMO Charleswood Centre branch on Grant Avenue to notify them.

When they told staff about a security problem with an ATM, the staff assumed one of the boys’ PINs had been stolen, Hewlett said.

“I said: ‘No, no, no. We hacked your ATM. We got into the operator mode,'” Hewlett said.

“He said that wasn’t really possible and we don’t have any proof that we did it.

“I asked them: ‘Is it all right for us to get proof?’

“He said: ‘Yeah, sure, but you’ll never be able to get anything out of it.’

“So we both went back to the ATM and I got into the operator mode again. Then I started printing off documentation like how much money is currently in the machine, how many withdrawals have happened that day, how much it’s made off surcharges.

“Then I found a way to change the surcharge amount, so I changed the surcharge amount to one cent.”

As further proof, Hewlett playfully changed the ATM’s greeting from “Welcome to the BMO ATM” to “Go away. This ATM has been hacked.”

They returned to BMO with six printed documents. This time, staff took them seriously.

A lot of hardware is shipped with certain default security arrangements (known admin accounts with pre-set passwords, for example), and it’s part of the normal installation/configuration process to change them. A lazy installer may skip this, leaving the system open to inquisitive teens or more technically adept criminals. These two students were probably lucky not to be scapegoated by the bank’s security officers.
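The failure mode above is common enough that checking for it can be automated. A toy post-install audit might look like this (the device names, fleet, and password list are hypothetical, for illustration only):

```python
# Toy post-install audit: flag any device still accepting a vendor's
# well-known default administrator credential.
COMMON_DEFAULTS = {"000000", "123456", "admin"}

def unchanged_defaults(devices):
    """Return names of devices whose admin password is a known default."""
    return [name for name, password in devices.items()
            if password in COMMON_DEFAULTS]

fleet = {"atm-grant-ave": "123456", "atm-main-st": "x9#Qv7!p"}
print(unchanged_defaults(fleet))  # ['atm-grant-ave']
```

A real deployment checklist would verify this against the live device rather than a password table, but the principle is the same: default credentials are public knowledge, so leaving them in place is equivalent to no password at all.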

June 4, 2014

Bruce Schneier on the human side of the Heartbleed vulnerability

Filed under: Technology — Tags: , , , — Nicholas @ 07:24

Reposting at his own site an article he did for The Mark News:

The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.

It was a software insecurity, but the problem was entirely human.

Software has vulnerabilities because it’s written by people, and people make mistakes — thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.

In retrospect, the mistake should have been obvious, and it’s amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.
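The mistake Schneier alludes to was a missing bounds check: the heartbeat handler trusted the length field in the request and echoed back that many bytes, leaking whatever sat beyond the actual payload. A caricature of it (in Python rather than the original C, with invented names and data):

```python
# Caricature of the Heartbleed flaw: the server echoes back as many
# bytes as the request *claims* its payload contains, without checking
# the payload's real size, leaking adjacent private data.
MEMORY = b"PING" + b"secret-server-key"  # 4-byte payload, then secrets

def heartbeat_buggy(claimed_len):
    return MEMORY[:claimed_len]          # no check against the real size

def heartbeat_fixed(claimed_len, actual_len=4):
    if claimed_len > actual_len:
        return b""                       # silently drop malformed requests
    return MEMORY[:claimed_len]

print(heartbeat_buggy(21))  # leaks b'PING' plus the secret
print(heartbeat_fixed(21))  # b''
```

The fix that was eventually deployed is essentially the two-line guard shown in `heartbeat_fixed`: compare the claimed length against the actual payload and discard the request on a mismatch.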

The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google’s security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.

When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it’s announced.

May 30, 2014

“French spies [are] number two in the world of industrial cyber-espionage”

Filed under: China,Europe,Government,Technology,USA — Tags: , , — Nicholas @ 08:11

High praise indeed for French espionage operatives from … former US Secretary of Defence Robert Gates:

Former spy and defense department secretary Robert Gates has identified France as a major cyber-spying threat against the US.

In statements that are bound to raise eyebrows on both sides of the Atlantic, Gates (not Bill) nominated French spies as being number two in the world of industrial cyber-espionage.

“In terms of the most capable, next to the Chinese, are the French – and they’ve been doing it a long time,” he says in this interview at the Council on Foreign Relations.

Rather than a precis, The Register will give you some of Gates’s (not Bill) words verbatim, starting just after 21 minutes in the video, when he answers a question about America’s recent indictment of five Chinese military hackers.

“What we have accused the Chinese of doing – stealing American companies’ secrets and technology – is not new, nor is it something that’s done only by the Chinese,” Gates tells the interviewer. “There are probably a dozen or fifteen countries that steal our technology in this way.

“In terms of the most capable, next to the Chinese, are probably the French, and they’ve been doing it a long time.

April 23, 2014

LibreSSL website – “This page scientifically designed to annoy web hipsters”

Filed under: Technology — Tags: , , , , , — Nicholas @ 09:24

Julian Sanchez linked to this Ars Technica piece on a new fork of OpenSSL:

OpenBSD founder Theo de Raadt has created a fork of OpenSSL, the widely used open source cryptographic software library that contained the notorious Heartbleed security vulnerability.

OpenSSL has suffered from a lack of funding and code contributions despite being used in websites and products by many of the world’s biggest and richest corporations.

The decision to fork OpenSSL is bound to be controversial given that OpenSSL powers hundreds of thousands of Web servers. When asked why he wanted to start over instead of helping to make OpenSSL better, de Raadt said the existing code is too much of a mess.

“Our group removed half of the OpenSSL source tree in a week. It was discarded leftovers,” de Raadt told Ars in an e-mail. “The Open Source model depends [on] people being able to read the code. It depends on clarity. That is not a clear code base, because their community does not appear to care about clarity. Obviously, when such cruft builds up, there is a cultural gap. I did not make this decision… in our larger development group, it made itself.”

The LibreSSL code base is on OpenBSD.org, and the project is supported financially by the OpenBSD Foundation and OpenBSD Project. LibreSSL has a bare bones website that is intentionally unappealing.

“This page scientifically designed to annoy web hipsters,” the site says. “Donate now to stop the Comic Sans and Blink Tags.” In explaining the decision to fork, the site links to a YouTube video of a cover of the Twisted Sister song “We’re Not Gonna Take It”.

April 16, 2014

QotD: The wizards of the web

Filed under: Business,Quotations,Technology — Tags: , , , , — Nicholas @ 08:28

You would have thought this would have sunk in by now. The fact that it hasn’t shows what an extraordinary machine the internet is — quite different to any technology that has gone before it. When the Lovebug struck, few of us lived our lives online. Back then we banked in branches, shopped in shops, met friends and lovers in the pub and obtained jobs by posting CVs. Tweeting was for the birds. Cyberspace was marginal. Now, for billions, the online world is their lives. But there is a problem. Only a tiny, tiny percentage of the people who use the internet have even the faintest clue about how any of it works. “SSL”, for instance, stands for “Secure Sockets Layer”.

I looked it up and sort of understood it — for about five minutes. While most drivers have at least a notion of how an engine works (something about petrol exploding in cylinders and making pistons go up and down and so forth) the very language of the internet — “domain names” and “DNS codes”, endless “protocols” and so forth — is arcane, exclusive; it is, in fact, the language of magic. For all intents and purposes the internet is run by wizards.

And the trouble with letting wizards run things is that when things go wrong we are at their mercy. The world spends several tens of billions of pounds a year on anti-malware programs, which we are exhorted to buy lest the walls of our digital castles collapse around us. Making security software is a huge industry, and whenever there is a problem — either caused by viruses or by a glitch like Heartbleed — the internet security companies rush to be quoted in the media. And guess what, their message is never “keep calm and carry on”. As Professor Ross Anderson of Cambridge University says: “Almost all the cost of cybercrime is the cost of anticipation.”

Michael Hanlon, “Relax, Mumsnet users: don’t lose sleep over Heartbleed hysteria”, Telegraph, 2014-04-16

April 11, 2014

Open source software and the Heartbleed bug

Filed under: Technology — Tags: , , , , — Nicholas @ 07:03

Some people are claiming that the Heartbleed bug proves that open source software is a failure. ESR quickly addresses that idiotic claim:

I actually chuckled when I read rumors that the few anti-open-source advocates still standing were crowing about the Heartbleed bug, because I’ve seen this movie before after every serious security flap in an open-source tool. The script, which includes a bunch of people indignantly exclaiming that many-eyeballs is useless because bug X lurked in a dusty corner for Y months, is so predictable that I can anticipate a lot of the lines.

The mistake being made here is a classic example of Frederic Bastiat’s “things seen versus things unseen”. Critics of Linus’s Law overweight the bug they can see and underweight the high probability that equivalently positioned closed-source security flaws they can’t see are actually far worse, just so far undiscovered.

That’s how it seems to go whenever we get a hint of the defect rate inside closed-source blobs, anyway. As a very pertinent example, in the last couple months I’ve learned some things about the security-defect density in proprietary firmware on residential and small business Internet routers that would absolutely curl your hair. It’s far, far worse than most people understand out there.

[…]

Ironically enough this will happen precisely because the open-source process is working … while, elsewhere, bugs that are far worse lurk in closed-source router firmware. Things seen vs. things unseen…

Returning to Heartbleed, one thing conspicuously missing from the downshouting against OpenSSL is any pointer to an implementation that is known to have a lower defect rate over time. This is for the very good reason that no such empirically-better implementation exists. What is the defect history on proprietary SSL/TLS blobs out there? We don’t know; the vendors aren’t saying. And we can’t even estimate the quality of their code, because we can’t audit it.

The response to the Heartbleed bug illustrates another huge advantage of open source: how rapidly we can push fixes. The repair for my Linux systems was a push-one-button fix less than two days after the bug hit the news. Proprietary-software customers will be lucky to see a fix within two months, and all too many of them will never see a fix at all.

Update: There are lots of sites offering tools to test whether a given site is vulnerable to the Heartbleed bug, but you need to step carefully there, as there’s a thin line between what’s legal in some countries and what counts as an illegal break-in attempt:

Websites and tools that have sprung up to check whether servers are vulnerable to OpenSSL’s mega-vulnerability Heartbleed have thrown up anomalies in computer crime law on both sides of the Atlantic.

Both the US Computer Fraud and Abuse Act and its UK equivalent the Computer Misuse Act make it an offence to test the security of third-party websites without permission.

Testing to see what version of OpenSSL a site is running, and whether it also supports the vulnerable Heartbeat protocol, would be legal. But doing anything more active — without permission from website owners — would take security researchers onto the wrong side of the law.
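The passive end of that spectrum is simply reading what a server volunteers about itself. As an illustrative sketch (not a real scanner, and the banner format is an assumption — only some server configurations expose their OpenSSL build this way), many Apache installations advertise the linked OpenSSL version in the HTTP `Server` header, and parsing a banner you have already been handed is the kind of look-but-don’t-touch check the article describes as legal:

```python
import re

def openssl_from_server_header(header: str):
    """Extract an advertised OpenSSL version from an HTTP Server header.

    Purely passive: this parses a string the server already sent us,
    rather than probing the Heartbeat extension itself.
    """
    m = re.search(r"OpenSSL/(\S+)", header)
    return m.group(1) if m else None

# A banner like this one (hypothetical example) names its OpenSSL build:
assert openssl_from_server_header("Apache/2.2.22 (Ubuntu) OpenSSL/1.0.1e") == "1.0.1e"
# Many servers advertise nothing useful, in which case we learn nothing:
assert openssl_from_server_header("nginx/1.4.7") is None
```

Anything beyond this — actually sending a malformed Heartbeat request to see what comes back — is the “more active” step the article warns crosses the legal line without permission.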

And you shouldn’t just rush out and change all your passwords right now (you’ll probably need to do it, but the timing matters):

Heartbleed is a catastrophic bug in widely used OpenSSL that creates a means for attackers to lift passwords, crypto-keys and other sensitive data from the memory of secure server software, 64KB at a time. The mega-vulnerability was patched earlier this week, and software should be updated to use the new version, 1.0.1g. But to fully clean up the problem, admins of at-risk servers should generate new public-private key pairs, destroy their session cookies, and update their SSL certificates before telling users to change every potentially compromised password on the vulnerable systems.
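One concrete consequence of the paragraph above: whether a given OpenSSL build needs the patch can be read straight off its version letter, since 1.0.1 through 1.0.1f shipped the bug and 1.0.1g carries the fix. A minimal sketch of that check (the version ranges are the published CVE-2014-0160 ones; note it deliberately ignores the 1.0.2 beta line, which was also affected):

```python
import re

def heartbleed_vulnerable(openssl_version: str) -> bool:
    """Return True for OpenSSL releases affected by CVE-2014-0160.

    Affected: 1.0.1 through 1.0.1f (1.0.1g carries the fix). Earlier
    branches (0.9.8, 1.0.0) never contained the Heartbeat code at all.
    The 1.0.2 betas were also vulnerable but are not handled here.
    """
    m = re.match(r"OpenSSL (\d+\.\d+\.\d+)([a-z]?)", openssl_version)
    if not m:
        return False  # unrecognised banner: can't tell, so don't claim anything
    base, letter = m.groups()
    if base != "1.0.1":
        return False
    # Plain "1.0.1" (empty letter) through "1.0.1f" are vulnerable.
    return letter < "g"

assert heartbleed_vulnerable("OpenSSL 1.0.1f 6 Jan 2014")
assert not heartbleed_vulnerable("OpenSSL 1.0.1g 7 Apr 2014")
assert not heartbleed_vulnerable("OpenSSL 0.9.8y 5 Feb 2013")
```

As the quote stresses, though, upgrading past 1.0.1g is only the first step: keys, certificates, cookies, and finally passwords all need rotating, in that order.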

April 9, 2014

XKCD on the impact of “Heartbleed”

Filed under: Technology — Tags: , , , , , — Nicholas @ 11:00

Update: In case you’re not concerned about the seriousness of this issue, The Register‘s John Leyden would like you to think again.

The catastrophic crypto key password vulnerability in OpenSSL affects far more than web servers, with everything from routers to smartphones also affected.

The so-called “Heartbleed” vulnerability (CVE-2014-0160) can be exploited to extract information from servers running a vulnerable version of OpenSSL, and this includes email servers and Android smartphones as well as routers.

Hackers could potentially gain access to a private encryption key before using this information to decipher the encrypted traffic to and from vulnerable websites.

Web sites including Yahoo!, Flickr and OpenSSL were among the many left vulnerable to the megabug that exposed encryption keys, passwords and other sensitive information.

Preliminary tests suggested 47 of the 1000 largest sites are vulnerable to Heartbleed and that’s only among the less than half that provide support for SSL or HTTPS at all. Many of the affected sites – including Yahoo! – have since patched the vulnerability. Even so, security experts – such as Graham Cluley – remain concerned.

OpenSSL is a widely used encryption library that is a key component of technology that enables secure (https) website connections.

The bug exists in the OpenSSL 1.0.1 source code and stems from coding flaws in a fairly new feature known as the TLS Heartbeat Extension. “TLS heartbeats are used as ‘keep alive’ packets so that the ends of an encrypted connection can agree to keep the session open even when they don’t have any official data to exchange,” explains security veteran Paul Ducklin in a post on Sophos’ Naked Security blog.

The Heartbleed vulnerability in the OpenSSL cryptographic library might be exploited to reveal contents of secured communication exchanges. The same flaw might also be used to lift SSL keys.

This means that sites could still be vulnerable to attacks after installing the patches in cases where a private key has been stolen. Sites therefore need to revoke exposed keys, reissue new keys, and invalidate all session keys and session cookies.

Bruce Schneier:

“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.

Half a million sites are vulnerable, including my own. Test your vulnerability here.

The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.

At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.
