Quotulatiousness

July 17, 2015

The case for encryption – “Encryption should be enabled for everything by default”

Filed under: Liberty,Technology — Nicholas @ 03:00

Bruce Schneier explains why you should care (a lot) about having your data encrypted:

Encryption protects our data. It protects our data when it’s sitting on our computers and in data centers, and it protects it when it’s being transmitted around the Internet. It protects our conversations, whether video, voice, or text. It protects our privacy. It protects our anonymity. And sometimes, it protects our lives.

This protection is important for everyone. It’s easy to see how encryption protects journalists, human rights defenders, and political activists in authoritarian countries. But encryption protects the rest of us as well. It protects our data from criminals. It protects it from competitors, neighbors, and family members. It protects it from malicious attackers, and it protects it from accidents.

Encryption works best if it’s ubiquitous and automatic. The two forms of encryption you use most often — https URLs on your browser, and the handset-to-tower link for your cell phone calls — work so well because you don’t even know they’re there.
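[Editor's illustration, not part of Schneier's text.] The "ubiquitous and automatic" point shows up in programming tools too: Python's standard library turns on certificate and hostname verification by default when you create a TLS context, so the encrypted, verified path is the zero-effort path. A minimal sketch:

```python
import ssl

# The stdlib default context enables certificate verification and
# hostname checking out of the box: encryption without opt-in effort.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: certificate checks are on
print(ctx.check_hostname)                    # True: hostname checks are on
```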

Encryption should be enabled for everything by default, not a feature you turn on only if you’re doing something you consider worth protecting.

This is important. If we only use encryption when we’re working with important data, then encryption signals that data’s importance. If only dissidents use encryption in a country, that country’s authorities have an easy way of identifying them. But if everyone uses it all of the time, encryption ceases to be a signal. No one can distinguish simple chatting from deeply private conversation. The government can’t tell the dissidents from the rest of the population. Every time you use encryption, you’re protecting someone who needs to use it to stay alive.

February 23, 2015

The “Internet of Things” (That May Or May Not Let You Do That)

Filed under: Liberty,Technology — Nicholas @ 03:00

Cory Doctorow is concerned about some of the possible developments within the “Internet of Things” that should concern us all:

The digital world has been colonized by a dangerous idea: that we can and should solve problems by preventing computer owners from deciding how their computers should behave. I’m not talking about a computer that’s designed to say, “Are you sure?” when you do something unexpected — not even one that asks, “Are you really, really sure?” when you click “OK.” I’m talking about a computer designed to say, “I CAN’T LET YOU DO THAT DAVE” when you tell it to give you root, to let you modify the OS or the filesystem.

Case in point: the cell-phone “kill switch” laws in California and Minnesota, which require manufacturers to design phones so that carriers or manufacturers can push an over-the-air update that bricks the phone without any user intervention, a measure designed to deter cell-phone thieves. Early data suggests that the law is effective in preventing this kind of crime, but at a high and largely needless (and ill-considered) price.

To understand this price, we need to talk about what “security” is, from the perspective of a mobile device user: it’s a whole basket of risks, including the physical threat of violence from muggers; the financial cost of replacing a lost device; the opportunity cost of setting up a new device; and the threats to your privacy, finances, employment, and physical safety from having your data compromised.

The current kill-switch regime puts a lot of emphasis on the physical risks, and treats risks to your data as unimportant. It’s true that the physical risks associated with phone theft are substantial, but if a catastrophic data compromise doesn’t strike terror into your heart, it’s probably because you haven’t thought hard enough about it — and it’s a sure bet that this risk will only increase in importance over time, as you bind your finances, your access controls (car ignition, house entry), and your personal life more tightly to your mobile devices.

That is to say, phones are only going to get cheaper to replace, while mobile data breaches are only going to get more expensive.

It’s a mistake to design a computer to accept instructions over a public network that its owner can’t see, review, and countermand. When every phone has a back door and can be compromised by hacking, social-engineering, or legal-engineering by a manufacturer or carrier, then your phone’s security is only intact for so long as every customer service rep is bamboozle-proof, every cop is honest, and every carrier’s back end is well designed and fully patched.

February 15, 2015

The term “carjacking” may take on a new meaning

Filed under: Law,Technology — Nicholas @ 05:00

Earlier this month, The Register‘s Iain Thomson summarized the rather disturbing report released by Senator Ed Markey (D-MA) on the self-reported security (or lack thereof) in modern automobile internal networks:

In short, as we’ve long suspected, the computers in today’s cars can be hijacked wirelessly by feeding specially crafted packets of data into their networks. There’s often no need for physical contact; no leaving of evidence lying around after getting your hands dirty.

This means, depending on the circumstances, the software running in your dashboard can be forced to unlock doors, or become infected with malware, and records on where you’ve been and how fast you were going may be obtained. The lack of encryption in various models means sniffed packets may be readable.
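[Editor's illustration.] To see why unencrypted in-vehicle traffic is a privacy problem, consider a toy sketch: the frame layout below is invented for illustration, not any real automaker's format, but the point is that once a plaintext packet is captured, parsing it is a one-liner.

```python
import struct

# Hypothetical plaintext telemetry frame: latitude, longitude (degrees)
# and speed (km/h), packed as three little-endian floats. Real in-car
# formats differ, but without encryption any captured frame is equally
# transparent to whoever sniffed it.
sniffed = struct.pack("<fff", 49.8951, -97.1384, 112.0)

lat, lon, speed = struct.unpack("<fff", sniffed)
print(f"vehicle at ({lat:.4f}, {lon:.4f}) doing {speed:.0f} km/h")
```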

Key systems to start up engines, the electronics connecting up vital things like the steering wheel and brakes, and stuff on the CAN bus, tend to be isolated and secure, we’re told.

The ability for miscreants to access internal systems wirelessly, cause mischief to infotainment and navigation gear, and invade one’s privacy, is irritating, though.

“Drivers have come to rely on these new technologies, but unfortunately the automakers haven’t done their part to protect us from cyber-attacks or privacy invasions,” said Markey, a member of the Senate’s Commerce, Science and Transportation Committee.

“Even as we are more connected than ever in our cars and trucks, our technology systems and data security remain largely unprotected. We need to work with the industry and cyber-security experts to establish clear rules of the road to ensure the safety and privacy of 21st-century American drivers.”

Of the 17 car makers who replied [PDF] to Markey’s letters (Tesla, Aston Martin, and Lamborghini didn’t), all made extensive use of computing in their 2014 models, with some carrying 50 electronic control units (ECUs) running on a series of internal networks.

BMW, Chrysler, Ford, General Motors, Honda, Hyundai, Jaguar Land Rover, Mazda, Mercedes-Benz, Mitsubishi, Nissan, Porsche, Subaru, Toyota, Volkswagen (with Audi), and Volvo responded to the study. According to the senator’s six-page dossier:

  • Over 90 per cent of vehicles manufactured in 2014 had a wireless network of some kind — such as Bluetooth to link smartphones to the dashboard or a proprietary standard for technicians to pull out diagnostics.
  • Only six automakers have any kind of security software running in their cars — such as firewalls for blocking connections from untrusted devices, or encryption for protecting data in transit around the vehicle.
  • Just five secured wireless access points with passwords, encryption or proximity sensors that (in theory) only allow hardware detected within the car to join a given network.
  • And only models made by two companies can alert the manufacturers in real time if a malicious software attack is attempted — the others wait until a technician checks at the next servicing.

There wasn’t much detail on the security of over-the-air updates for firmware, nor the use of crypto to protect personal data being phoned home from vehicles to an automaker’s HQ.

January 29, 2015

Mocking “old fashioned” security systems

Filed under: Business,Media,USA — Nicholas @ 02:00

Christopher Taylor points out that the folks who advised Comcast on their recent home security advertising campaign rather missed the mark:

Comcast is trying to act like using any other security system is old fashioned; it’s actually a tag line in some of their ads: “don’t be old fashioned.” They’re using the old knight in armor to stand in for any other security system which, not being “in the cloud” and accessible “anywhere” from your smart phone, is thus dated and old.

But consider: which would be preferable?

  • An internet-based system which, by its own advertising, you can turn off “from anywhere” using only a phone, and use to look at cameras anywhere in your home.
  • An armored knight with a broadsword.

Now, perhaps you’re new to the internet and aren’t aware of this, but it gets hacked pretty much every minute of the day. Passwords are stolen and sold on Chinese and Russian websites. Your smart phone is not secure.

I once found a website (now gone) that had live feeds of people’s homes from around the world, reachable by clicking on various names. All the site did was try commonly used passwords and log into the security systems. It was like this weird voyeuristic show, but really boring because it was all empty rooms and darkness — people turn on their security when they leave, not when they’re doing fun stuff to watch.

What I’m saying is what should be abundantly obvious to everyone who has a television to watch Comcast ads: this is a really stupid, bad idea. You’re making it easier for burglars to turn off your security system and watch for when you aren’t home. You’re making it easier for evil sexual predators and monsters to know your patterns and when you’re home or alone. Get it?

This is like publishing your daily activities and living in a glass building all day long. It seems cool and high tech and new and fancy, but it’s just really stupid.

But an armored knight? Unless he goes to sleep, he’s a physical, combat-ready soldier that acts as a physical deterrent to intruders.

And it’s not even old fashioned. It’s so old an image, it doesn’t even feel old fashioned; it feels beyond vintage, back to a fantasy era. Which is cooler to you: being guarded by a knight in shining armor with a sword, or your smart phone?

These ads have a viral feel to them, like some hip college dude with a fancy business card came up with it for Comcast, but they don’t make sense. I doubt they even get people to want to buy the product.

January 10, 2015

Sub-orbital airliners? Not if you know much about economics and physics

Filed under: Economics,Technology — Nicholas @ 02:00

Charles Stross in full “beat up the optimists” mode over a common SF notion about sub-orbital travel for the masses:

Let’s start with a simple normative assumption; that sub-orbital spaceplanes are going to obey the laws of physics. One consequence of this is that the amount of energy it takes to get from A to B via hypersonic airliner is going to exceed the energy input it takes to cover the same distance using a subsonic jet, by quite a margin. Yes, we can save some fuel by travelling above the atmosphere and cutting air resistance, but it’s not a free lunch: you expend energy getting up to altitude and speed, and the fuel burn for going faster rises nonlinearly with speed. Concorde, flying trans-Atlantic at Mach 2.0, burned about the same amount of fuel as a Boeing 747 of similar vintage flying trans-Atlantic at Mach 0.85 … while carrying less than a quarter as many passengers.
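[Editor's illustration.] The per-seat arithmetic in that Concorde comparison is worth making explicit: roughly equal total fuel burn spread over roughly a quarter of the passengers means roughly four times the fuel per passenger. A back-of-the-envelope sketch, where the seat counts are typical figures rather than exact specifications:

```python
# Rough comparison of trans-Atlantic fuel burn per passenger.
# Assumption (from the quoted text): Concorde and a 747 of similar
# vintage burned about the same total fuel on the route. Seat counts
# are typical configurations, not exact specs.
fuel_burn = 1.0            # normalised total fuel for the crossing
seats_747 = 400            # typical 747 layout of the era
seats_concorde = 100       # Concorde carried roughly 100 passengers

per_pax_747 = fuel_burn / seats_747
per_pax_concorde = fuel_burn / seats_concorde

print(per_pax_concorde / per_pax_747)  # 4.0: four times the fuel per seat
```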

Rockets aren’t a magic technology. Neither are hybrid hypersonic air-breathing gadgets like Reaction Engines’ Sabre engine. It’s going to be a wee bit expensive. But let’s suppose we can get the price down far enough that a seat in a Mach 5 to Mach 10 hypersonic or sub-orbital passenger aircraft is cost-competitive with a high-end first class seat on a subsonic jet. Surely the super-rich will all switch to hypersonic services like a shot, just as they used Concorde to commute between New York and London back before Airbus killed it off by cancelling support after the 30-year operational milestone?

Well, no.

Firstly, this is the post-9/11 age. Obviously security is a consideration for all civil aviation, right? Well, no: business jets are largely exempt, thanks to lobbying by their operators, backed up by their billionaire owners. But those of us who travel by civil airliners open to the general ticket-buying public are all suspects. If something goes wrong with a scheduled service, fighters are scrambled to intercept it, lest some fruitcake tries to fly it into a skyscraper.

So not only are we not going to get our promised flying cars, we’re not going to get fast, cheap, intercontinental travel options. But what about those hyper-rich folks who spend money like water?

First class air travel by civil aviation is a dying niche today. If you are wealthy enough to afford the £15,000-30,000 ticket cost of a first-class-plus intercontinental seat (or, rather, bedroom with en-suite toilet and shower if we’re talking about the very top end), you can also afford to pay for a seat on a business jet instead. A number of companies operate profitably on the basis that they lease seats on bizjets by the hour: you may end up sharing a jet with someone else who’s paying to fly the same route, but the operating principle is that when you call for it a jet will turn up and take you where you want to go, whenever you want. There’s no security theatre, no fuss, and it takes off when you want it to, not when the daily schedule says it has to. It will probably have internet connectivity via satellite—by the time hypersonic competition turns up, this is not a losing bet—and for extra money, the sky is the limit on comfort.

I don’t get to fly first class, but I’ve watched this happen over the past two decades. Business class is holding its own, and premium economy is growing on intercontinental flights (a cut-down version of Business with more leg-room than regular economy), but the number of first class seats you’ll find on an Air France or British Airways 747 is dwindling. The VIPs are leaving the carriers, driven away by the security annoyances and drawn by the convenience of much smaller jets that come when they call.

For rich people, time is the only thing money can’t buy. An HST (hypersonic transport) flying between fixed hubs along pre-timed flight paths under conditions of high security is not convenient. A bizjet that flies at their beck and call is actually speedier across most intercontinental routes, unless the hypersonic route is serviced by multiple daily flights—which isn’t going to happen unless the operating costs are comparable to a subsonic craft.

January 7, 2015

Cory Doctorow on the dangers of legally restricting technologies

Filed under: Law,Liberty,Media,Technology — Nicholas @ 02:00

In Wired, Cory Doctorow explains why bad legal precedents from more than a decade ago are making us more vulnerable rather than safer:

We live in a world made of computers. Your car is a computer that drives down the freeway at 60 mph with you strapped inside. If you live or work in a modern building, computers regulate its temperature and respiration. And we’re not just putting our bodies inside computers — we’re also putting computers inside our bodies. I recently exchanged words in an airport lounge with a late arrival who wanted to use the sole electrical plug, which I had beat him to, fair and square. “I need to charge my laptop,” I said. “I need to charge my leg,” he said, rolling up his pants to show me his robotic prosthesis. I surrendered the plug.

You and I and everyone who grew up with earbuds? There’s a day in our future when we’ll have hearing aids, and chances are they won’t be retro-hipster beige transistorized analog devices: They’ll be computers in our heads.

And that’s why the current regulatory paradigm for computers, inherited from the 16-year-old stupidity that is the Digital Millennium Copyright Act, needs to change. As things stand, the law requires that computing devices be designed to sometimes disobey their owners, so that their owners won’t do something undesirable. To make this work, we also have to criminalize anything that might help owners change their computers to let the machines do that supposedly undesirable thing.

This approach to controlling digital devices was annoying back in, say, 1995, when we got the DVD player that prevented us from skipping ads or playing an out-of-region disc. But it will be intolerable and deadly dangerous when our 3-D printers, self-driving cars, smart houses, and even parts of our bodies are designed with the same restrictions. Because those restrictions would change the fundamental nature of computers. Speaking in my capacity as a dystopian science fiction writer: This scares the hell out of me.

December 17, 2014

The Internet is on Fire | Mikko Hypponen | TEDxBrussels

Filed under: Government,Liberty,Technology — Nicholas @ 00:02

Published on 6 Dec 2014

This talk was given at a local TEDx event, produced independently of the TED Conferences. The Internet is on Fire

Mikko is a world-class cybercrime expert who has led his team through some of the largest computer virus outbreaks in history. He spoke at TEDxBrussels twice, in 2011 and in 2013. Each time, his talks moved audiences and surpassed a million views. We’ve had a huge number of requests for Mikko to come back this year. And guess what? He will!

Prepare for what is becoming his ‘yearly’ talk about PRISM and other modern surveillance issues.

December 2, 2014

The brief flicker of interest in the problems of police militarization

Filed under: Law,Liberty,USA — Nicholas @ 00:04

At Techdirt, Tim Cushing relives that brief, shining moment when the nation seemed to suddenly notice — and care about — the ongoing militarization of the police:

It’s an idea that almost makes sense, provided you don’t examine it too closely. America’s neverending series of intervention actions and pseudo-wars has created a wealth of military surplus — some outdated, some merely more than what was needed. Rather than simply scrap the merchandise or offload it at cut-rate prices to other countries’ militaries (and face the not-unheard-of possibility that those same weapons/vehicles might be used against us), the US government decided to distribute it to those fighting the war (on drugs, mostly) at home: law enforcement agencies.

What could possibly go wrong?

Well, it quickly became a way to turn police departments into low-rent military operations. Law enforcement officials sold fear and bought assault rifles, tear gas, grenade launchers and armored vehicles. They painted vivid pictures of well-armed drug cabals and terrorists, both domestic and otherwise, steadily encroaching on the everyday lives of the public, outmanning and outgunning the servers and protectors.

It worked. The Department of Homeland Security was so flattered by the parroting of its terrorist/domestic extremist talking points that it handed out generous grants and ignored incongruities, like a town of 23,000 requesting an armored BearCat because its annual Pumpkinfest might be a terrorist target.

Then the Ferguson protests began after Michael Brown’s shooting in August, and the media was suddenly awash in images of camouflage-clad cops riding armored vehicles while pointing weapons at protesters, looking for all the world like martial law had been declared and the military had arrived to quell dissent and maintain control.

This prompted a discussion that actually reached the halls of Congress. For a brief moment, it looked like there might be a unified movement to overhaul the mostly-uncontrolled military equipment re-gifting program. But now that the grand jury has declined to indict and parts of Ferguson have been looted and burned, those concerns appear to have been forgotten.

September 30, 2014

“These bugs were found – and were findable – because of open-source scrutiny”

Filed under: Technology — Nicholas @ 08:13

ESR talks about the visibility problem in software bugs:

The first thing to notice here is that these bugs were found – and were findable – because of open-source scrutiny.

There’s a “things seen versus things unseen” fallacy here that gives bugs like Heartbleed and Shellshock false prominence. We don’t know – and can’t know – how many far worse exploits lurk in proprietary code known only to crackers or the NSA.

What we can project based on other measures of differential defect rates suggests that, however imperfect “many eyeballs” scrutiny is, “few eyeballs” or “no eyeballs” is far worse.

July 10, 2014

Throwing a bit of light on security in the “internet of things”

Filed under: Technology — Nicholas @ 07:36

The “internet of things” is coming: more and more of your surroundings are going to be connected in a vastly expanded internet. A lot of attention needs to be paid to security in this new world, as Dan Goodin explains:

In the latest cautionary tale involving the so-called Internet of things, white-hat hackers have devised an attack against network-connected lightbulbs that exposes Wi-Fi passwords to anyone in proximity to one of the LED devices.

The attack works against LIFX smart lightbulbs, which can be turned on and off and adjusted using iOS- and Android-based devices. Ars Senior Reviews Editor Lee Hutchinson gave a good overview here of the Philips Hue lights, which are programmable, controllable LED-powered bulbs that compete with LIFX. The bulbs are part of a growing trend in which manufacturers add computing and networking capabilities to appliances so people can manipulate them remotely using smartphones, computers, and other network-connected devices. A 2012 Kickstarter campaign raised more than $1.3 million for LIFX, more than 13 times the original goal of $100,000.

According to a blog post published over the weekend, LIFX has updated the firmware used to control the bulbs after researchers discovered a weakness that allowed hackers within about 30 meters to obtain the passwords used to secure the connected Wi-Fi network. The credentials are passed from one networked bulb to another over a mesh network powered by 6LoWPAN, a wireless specification built on top of the IEEE 802.15.4 standard. While the bulbs used the Advanced Encryption Standard (AES) to encrypt the passwords, the underlying pre-shared key never changed, making it easy for the attacker to decipher the payload.
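[Editor's illustration.] The weakness here is the unchanging pre-shared key, not AES itself: if every bulb ships with the same key, extracting it once decrypts every network's traffic. A toy stand-in below uses a keyed XOR stream derived with SHA-256 instead of AES, purely so the sketch needs only the standard library; the cipher, key, and payload are all invented.

```python
import hashlib
from itertools import cycle

# Toy stand-in for the bulbs' encryption: a keystream derived from a
# pre-shared key. This is NOT AES; the point is only that a key baked
# into every device protects nothing once it has been extracted.
FACTORY_PSK = b"same-key-in-every-bulb"  # hypothetical hard-coded key

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    stream = cycle(hashlib.sha256(key).digest())
    return bytes(b ^ k for b, k in zip(plaintext, stream))

# A bulb broadcasts the home Wi-Fi credentials over the mesh...
ciphertext = toy_encrypt(FACTORY_PSK, b"ssid=Home pass=hunter2")

# ...and an attacker who pulled the same key from any other bulb (or
# its firmware image) recovers them immediately. XOR is its own inverse.
print(toy_encrypt(FACTORY_PSK, ciphertext))
```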

July 2, 2014

Security threats and security myths

Filed under: Technology — Nicholas @ 09:48

In Wired, Peter W. Singer And Allan Friedman analyze five common myths about online security:

“A domain for the nerds.” That is how the Internet used to be viewed back in the early 1990s, until all the rest of us began to use and depend on it. But this quote is from a White House official earlier this year describing how cybersecurity is too often viewed today. And therein lies the problem, and the needed solution.

Each of us, in whatever role we play in life, makes decisions about cybersecurity that will shape the future well beyond the world of computers. But by looking at this issue as only for the IT Crowd, we too often do so without the proper tools. Basic terms and essential concepts that define what is possible and proper are being missed, or even worse, distorted. Some threats are overblown and overreacted to, while others are ignored.

Perhaps the biggest problem is that while the Internet has given us the ability to run down the answer to almost any question, cybersecurity is a realm where past myth and future hype often weave together, obscuring what actually has happened and where we really are now. If we ever want to get anything effective done in securing the online world, we have to demystify it first.

[…]

Myth #2: Every Day We Face “Millions of Cyber Attacks”

This is what General Keith Alexander, the recently retired chief of US military and intelligence cyber operations, testified to Congress in 2010. Interestingly enough, leaders from China have made similar claims after their own hackers were indicted, pointing the finger back at the US. These numbers are both true and utterly useless.

Counting individual attack probes or unique forms of malware is like counting bacteria — you get big numbers very quickly, but all you really care about is the impact and the source. Even more so, these numbers conflate and confuse the range of threats we face, from scans and probes caught by elementary defenses before they could do any harm, to attempts at everything from pranks to political protests to economic and security related espionage (but notably no “Cyber Pearl Harbors,” which have been mentioned in government speeches and mass media a half million times). It’s a lot like combining everything from kids with firecrackers to protesters with smoke bombs to criminals with shotguns, spies with pistols, terrorists with grenades, and militaries with missiles in the same counting, all because they involve the same technology of gunpowder.

June 23, 2014

Justice Department staff fall for phishing scam simulation

Filed under: Cancon,Government,Technology — Nicholas @ 06:36

This doesn’t speak well of the federal government’s staff security training:

Many of the Justice Department’s finest legal minds are falling prey to a garden-variety Internet scam.

An internal survey shows almost 2,000 staff were conned into clicking on a phoney “phishing” link in their email, raising questions about the security of sensitive information.

The department launched the mock scam in December as a security exercise, sending emails to 5,000 employees to test their ability to recognize cyber fraud.

The emails looked like genuine communications from government or financial institutions, and contained a link to a fake website that was also made to look like the real thing.

What’s even more interesting is that the government bureaucrats fell for this scam at a far higher rate than average Canadian internet users:

The Justice Department’s mock exercise caught 1,850 people clicking on the phoney embedded links, or 37 per cent of everyone who received the emails.

That’s a much higher rate than for the general population, which a federal website says is only about five per cent.

The exercise did not put any confidential information at risk, but the poor results raise red flags about public servants being caught by actual phishing emails.

A spokeswoman says “no privacy breaches have been reported” from any real phishing scams at Justice Canada.

Carole Saindon also said that two more waves of mock emails in February and April show improved results, with clicking rates falling by half.
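The numbers are easy to check: 1,850 clicks out of 5,000 recipients is 37 per cent, and halving that still leaves the department well above the roughly five per cent rate cited for the general population. A quick sketch:

```python
recipients = 5000
clicked = 1850

first_wave = clicked / recipients  # 0.37 -> 37 per cent
later_waves = first_wave / 2       # "falling by half" -> 18.5 per cent
general_population = 0.05          # rate cited for ordinary users

print(f"first wave: {first_wave:.0%}")
print(f"later waves: {later_waves:.1%}")
print(f"vs general population: {later_waves / general_population:.1f}x")
```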

So even after the clicking rates fell by half, roughly one in five public servants was still being fooled, nearly four times the rate of the general population. Yikes.

June 18, 2014

This is why computer security folks look so frustrated

Filed under: Technology — Nicholas @ 07:41

It’s not that the “security” part of the job is so wearing … it’s that people are morons:

Security white hats, despair: users will run dodgy executables if they are paid as little as one cent.

Even more would allow their computers to become infected by botnet software nasties if the price was increased to five or 10 cents. Offer a whole dollar and you’ll secure a herd of willing internet slaves.

The demoralising findings come from a study led by Nicolas Christin, research professor at Carnegie Mellon University’s CyLab, which baited users with a benign Windows executable offered under the guise of contributing to a (fictitious) study.

It was downloaded 1,714 times and 965 users actually ran the code. The application ran a timer simulating an hour’s computational tasks, after which a token for payment would be generated.

The researchers collected information on user machines, discovering that many of the predominantly US and Indian machines were already infected with malware despite having security software installed, and that users were happy to click past Windows’ User Account Control warning prompts.

The presence of malware was actually higher on machines running the latest patches and infosec tools, in what was described as an indication of users’ false sense of security.

June 12, 2014

Winnipeg Grade 9 students successfully hack Bank of Montreal ATM

Filed under: Business,Cancon,Technology — Nicholas @ 08:00

“Hack” is the wrong word here, as it implies they did something highly technical and unusual. What they did was to use the formal documentation for the ATM and demonstrate that the installer had failed to change the default administrator password:

Matthew Hewlett and Caleb Turon, both Grade 9 students, found an old ATM operators manual online that showed how to get into the machine’s operator mode. On Wednesday over their lunch hour, they went to the BMO’s ATM at the Safeway on Grant Avenue to see if they could get into the system.

“We thought it would be fun to try it, but we were not expecting it to work,” Hewlett said. “When it did, it asked for a password.”

Hewlett and Turon were even more shocked when their first random guess at the six-digit password worked. They used a common default password. The boys then immediately went to the BMO Charleswood Centre branch on Grant Avenue to notify them.

When they told staff about a security problem with an ATM, the staff assumed one of the boys’ PINs had been stolen, Hewlett said.

“I said: ‘No, no, no. We hacked your ATM. We got into the operator mode,'” Hewlett said.

“He said that wasn’t really possible and we don’t have any proof that we did it.

“I asked them: ‘Is it all right for us to get proof?’

“He said: ‘Yeah, sure, but you’ll never be able to get anything out of it.’

“So we both went back to the ATM and I got into the operator mode again. Then I started printing off documentation like how much money is currently in the machine, how many withdrawals have happened that day, how much it’s made off surcharges.

“Then I found a way to change the surcharge amount, so I changed the surcharge amount to one cent.”

As further proof, Hewlett playfully changed the ATM’s greeting from “Welcome to the BMO ATM” to “Go away. This ATM has been hacked.”

They returned to BMO with six printed documents. This time, staff took them seriously.

A lot of hardware is shipped with certain default security arrangements (known admin accounts with pre-set passwords, for example), and it’s part of the normal installation/configuration process to change them. A lazy installer may skip this, leaving the system open to inquisitive teens or more technically adept criminals. These two students were probably lucky not to be scapegoated by the bank’s security officers.
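[Editor's illustration.] The operational lesson generalises: a deployment checklist should treat factory credentials as a defect to be detected, not a convenience. A minimal sketch of such a check follows; the device representation, the `try_login` helper, and the default-credential list are all hypothetical, standing in for a real audit against a vendor's documented defaults.

```python
# Hypothetical post-install audit: flag devices still accepting
# factory-default credentials. Real audits would use the vendor's
# documented defaults for each model and a real login attempt.
KNOWN_DEFAULTS = [
    ("admin", "admin"),
    ("admin", "123456"),
    ("operator", "000000"),
]

def try_login(device, username, password):
    # Stand-in for an actual authentication attempt against the device.
    return device.get("credentials") == (username, password)

def audit(device):
    hits = [(u, p) for u, p in KNOWN_DEFAULTS if try_login(device, u, p)]
    return hits  # non-empty means the installer skipped hardening

atm = {"model": "toy-atm", "credentials": ("operator", "000000")}
print(audit(atm))  # a non-empty list: still on factory defaults
```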

June 4, 2014

Bruce Schneier on the human side of the Heartbleed vulnerability

Filed under: Technology — Nicholas @ 07:24

Reposting at his own site an article he did for The Mark News:

The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.

It was a software insecurity, but the problem was entirely human.

Software has vulnerabilities because it’s written by people, and people make mistakes — thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.
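[Editor's illustration.] The mistake itself was a missing bounds check: the TLS heartbeat reply copied however many bytes the request *claimed* to contain, rather than how many it actually sent, so a short request with an inflated length field leaked adjacent server memory. A simplified Python model of the bug class, not OpenSSL's actual code:

```python
# Simplified model of the Heartbleed bug class. Server memory is a flat
# buffer; a heartbeat request carries a payload plus a claimed length,
# and the buggy handler trusts the claim instead of the real size.
memory = bytearray(b"PING" + b"secret-password-123" + b"\x00" * 8)

def heartbeat_buggy(payload_offset, claimed_len):
    # Bug: no check that claimed_len matches the payload actually sent,
    # so bytes beyond the 4-byte "PING" payload are echoed back too.
    return bytes(memory[payload_offset : payload_offset + claimed_len])

def heartbeat_fixed(payload_offset, claimed_len, actual_len):
    if claimed_len > actual_len:
        return b""  # discard malformed heartbeat messages silently
    return bytes(memory[payload_offset : payload_offset + claimed_len])

# The client sent a 4-byte payload but claimed 23 bytes...
print(heartbeat_buggy(0, 23))     # leaks the adjacent secret
print(heartbeat_fixed(0, 23, 4))  # b'': the patched behaviour
```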

In retrospect, the mistake should have been obvious, and it’s amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.

The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google’s security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.

When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it’s announced.

