Published on 6 Dec 2014
This talk was given at a local TEDx event, produced independently of the TED Conferences.

The Internet is on Fire
Mikko is a world-class cybercrime expert who has led his team through some of the largest computer virus outbreaks in history. He spoke at TEDxBrussels in both 2011 and 2013. Each time, his talks moved the world and surpassed a million viewers. We’ve had a huge number of requests for Mikko to come back this year. And guess what? He will!
Prepare for what is becoming his ‘yearly’ talk about PRISM and other modern surveillance issues.
December 17, 2014
December 2, 2014
At Techdirt, Tim Cushing relives that brief, shining moment when the nation seemed to suddenly notice — and care about — the ongoing militarization of the police:
It’s an idea that almost makes sense, provided you don’t examine it too closely. America’s neverending series of intervention actions and pseudo-wars has created a wealth of military surplus — some outdated, some merely more than what was needed. Rather than simply scrap the merchandise or offload it at cut-rate prices to other countries’ militaries (and face the not-unheard-of possibility that those same weapons/vehicles might be used against us), the US government decided to distribute it to those fighting the war (on drugs, mostly) at home: law enforcement agencies.
What could possibly go wrong?
Well, it quickly became a way to turn police departments into low-rent military operations. Law enforcement officials sold fear and bought assault rifles, tear gas, grenade launchers and armored vehicles. They painted vivid pictures of well-armed drug cabals and terrorists, both domestic and otherwise, steadily encroaching on the everyday lives of the public, outmanning and outgunning the servers and protectors.
It worked. The Department of Homeland Security was so flattered by the parroting of its terrorist/domestic extremist talking points that it handed out generous grants and ignored incongruities, like a town of 23,000 requesting an armored BearCat because its annual Pumpkinfest might be a terrorist target.
Then the Ferguson protests began after Michael Brown’s shooting in August, and the media was suddenly awash in images of camouflage-clad cops riding armored vehicles while pointing weapons at protesters, looking for all the world like martial law had been declared and the military had arrived to quell dissent and maintain control.
This prompted a discussion that actually reached the halls of Congress. For a brief moment, it looked like there might be a unified movement to overhaul the mostly-uncontrolled military equipment re-gifting program. But now that the grand jury has declined to indict and the city of Ferguson is looted and burning, those concerns appear to have been forgotten.
September 30, 2014
ESR talks about the visibility problem in software bugs:
The first thing to notice here is that these bugs were found – and were findable – because of open-source scrutiny.
There’s a “things seen versus things unseen” fallacy here that gives bugs like Heartbleed and Shellshock false prominence. We don’t know – and can’t know – how many far worse exploits lurk in proprietary code known only to crackers or the NSA.
What we can project based on other measures of differential defect rates suggests that, however imperfect “many eyeballs” scrutiny is, “few eyeballs” or “no eyeballs” is far worse.
July 10, 2014
The “internet of things” is coming: more and more of your surroundings are going to be connected in a vastly expanded internet. A lot of attention needs to be paid to security in this new world, as Dan Goodin explains:
In the latest cautionary tale involving the so-called Internet of things, white-hat hackers have devised an attack against network-connected lightbulbs that exposes Wi-Fi passwords to anyone in proximity to one of the LED devices.
The attack works against LIFX smart lightbulbs, which can be turned on and off and adjusted using iOS- and Android-based devices. Ars Senior Reviews Editor Lee Hutchinson gave a good overview here of the Philips Hue lights, which are programmable, controllable LED-powered bulbs that compete with LIFX. The bulbs are part of a growing trend in which manufacturers add computing and networking capabilities to appliances so people can manipulate them remotely using smartphones, computers, and other network-connected devices. A 2012 Kickstarter campaign raised more than $1.3 million for LIFX, more than 13 times the original goal of $100,000.
According to a blog post published over the weekend, LIFX has updated the firmware used to control the bulbs after researchers discovered a weakness that allowed hackers within about 30 meters to obtain the passwords used to secure the connected Wi-Fi network. The credentials are passed from one networked bulb to another over a mesh network powered by 6LoWPAN, a wireless specification built on top of the IEEE 802.15.4 standard. While the bulbs used the Advanced Encryption Standard (AES) to encrypt the passwords, the underlying pre-shared key never changed, making it easy for the attacker to decipher the payload.
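The core weakness is easy to demonstrate: when every device ships with the same pre-shared key, extracting that key once from any unit's firmware decrypts every bulb's traffic. A minimal sketch, with a toy hash-based stream cipher standing in for AES and an invented key and password:

```python
import hashlib

# Toy model of the LIFX weakness: every bulb ships with the same hard-coded
# pre-shared key, so dumping it once from any unit's firmware decrypts traffic
# from every bulb ever sold. Key and password values here are invented.

FIRMWARE_KEY = b"same-key-baked-into-every-bulb"

def keystream(key: bytes, length: int) -> bytes:
    """Hash-counter stream cipher standing in for AES; the flaw is the fixed key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR stream cipher: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# A new bulb joins the mesh and is sent the Wi-Fi credentials:
ciphertext = xor_crypt(FIRMWARE_KEY, b"MyWifiPassword123")

# An attacker in radio range who has dumped the key from any bulb, ever:
recovered = xor_crypt(FIRMWARE_KEY, ciphertext)
print(recovered)  # b'MyWifiPassword123'
```

Had each pairing negotiated a fresh session key instead, capturing one unit's firmware would not have compromised the rest of the installed base.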
July 2, 2014
In Wired, Peter W. Singer and Allan Friedman analyze five common myths about online security:
“A domain for the nerds.” That is how the Internet used to be viewed back in the early 1990s, until all the rest of us began to use and depend on it. But this quote is from a White House official earlier this year describing how cybersecurity is too often viewed today. And therein lies the problem, and the needed solution.
Each of us, in whatever role we play in life, makes decisions about cybersecurity that will shape the future well beyond the world of computers. But by looking at this issue as only for the IT Crowd, we too often do so without the proper tools. Basic terms and essential concepts that define what is possible and proper are being missed, or even worse, distorted. Some threats are overblown and overreacted to, while others are ignored.
Perhaps the biggest problem is that while the Internet has given us the ability to run down the answer to almost any question, cybersecurity is a realm where past myth and future hype often weave together, obscuring what actually has happened and where we really are now. If we ever want to get anything effective done in securing the online world, we have to demystify it first.
Myth #2: Every Day We Face “Millions of Cyber Attacks”
This is what General Keith Alexander, the recently retired chief of US military and intelligence cyber operations, testified to Congress in 2010. Interestingly enough, leaders from China have made similar claims after their own hackers were indicted, pointing the finger back at the US. These numbers are both true and utterly useless.
Counting individual attack probes or unique forms of malware is like counting bacteria — you get big numbers very quickly, but all you really care about is the impact and the source. Even more so, these numbers conflate and confuse the range of threats we face, from scans and probes caught by elementary defenses before they could do any harm, to attempts at everything from pranks to political protests to economic and security related espionage (but notably no “Cyber Pearl Harbors,” which have been mentioned in government speeches and mass media a half million times). It’s a lot like combining everything from kids with firecrackers to protesters with smoke bombs to criminals with shotguns, spies with pistols, terrorists with grenades, and militaries with missiles in the same counting, all because they involve the same technology of gunpowder.
June 23, 2014
This doesn’t speak well of the federal government’s staff security training:
Many of the Justice Department’s finest legal minds are falling prey to a garden-variety Internet scam.
An internal survey shows almost 2,000 staff were conned into clicking on a phoney “phishing” link in their email, raising questions about the security of sensitive information.
The department launched the mock scam in December as a security exercise, sending emails to 5,000 employees to test their ability to recognize cyber fraud.
The emails looked like genuine communications from government or financial institutions, and contained a link to a fake website that was also made to look like the real thing.
What’s even more interesting is that the government bureaucrats fell for this scam at a far higher rate than average Canadian internet users:
The Justice Department’s mock exercise caught 1,850 people clicking on the phoney embedded links, or 37 per cent of everyone who received the emails.
That’s a much higher rate than for the general population, which a federal website says is only about five per cent.
The exercise did not put any confidential information at risk, but the poor results raise red flags about public servants being caught by actual phishing emails.
A spokeswoman says “no privacy breaches have been reported” from any real phishing scams at Justice Canada.
Carole Saindon also said that two more waves of mock emails in February and April show improved results, with clicking rates falling by half.
So even after two further rounds of testing, roughly one in five public servants was still clicking on phishing links? Yikes.
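The reported figures are easy to sanity-check. Numbers below come from the article; the later-wave rate is only inferred from "clicking rates falling by half":

```python
# Sanity-checking the Justice Canada phishing-exercise figures.
recipients = 5000
clicked_first_wave = 1850

first_wave_rate = clicked_first_wave / recipients
print(f"{first_wave_rate:.0%}")                   # 37%

general_rate = 0.05                               # "only about five per cent"
print(round(first_wave_rate / general_rate, 1))   # 7.4 times the general rate

later_wave_rate = first_wave_rate / 2             # "falling by half"
print(f"{later_wave_rate:.1%}")                   # 18.5%
```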
June 18, 2014
It’s not that the “security” part of the job is so wearing … it’s that people are morons:
Security white hats, despair: users will run dodgy executables if they are paid as little as one cent.
Even more would allow their computers to become infected by botnet software nasties if the price was increased to five or 10 cents. Offer a whole dollar and you’ll secure a herd of willing internet slaves.
The demoralising findings come from a study led by Nicolas Christin, research professor at Carnegie Mellon University’s CyLab, which baited users with a benign Windows executable offered under the guise of contributing to a (fictitious) study.
It was downloaded 1,714 times and 965 users actually ran the code. The application ran a timer simulating an hour’s computational tasks after which a token for payment would be generated.
The researchers collected information on user machines, discovering that many of the predominantly US and Indian machines were already infected with malware despite having security software installed, and that their users were happy to click past Windows’ User Account Control warning prompts.
The presence of malware actually increased on machines running the latest patches and infosec tools in what was described as an indication of users’ false sense of security.
June 12, 2014
“Hack” is the wrong word here, as it implies they did something highly technical and unusual. What they did was to use the formal documentation for the ATM and demonstrate that the installer had failed to change the default administrator password:
Matthew Hewlett and Caleb Turon, both Grade 9 students, found an old ATM operator’s manual online that showed how to get into the machine’s operator mode. On Wednesday over their lunch hour, they went to the BMO’s ATM at the Safeway on Grant Avenue to see if they could get into the system.
“We thought it would be fun to try it, but we were not expecting it to work,” Hewlett said. “When it did, it asked for a password.”
Hewlett and Turon were even more shocked when their first guess at the six-digit password worked: they had tried a common default password. The boys then immediately went to the BMO Charleswood Centre branch on Grant Avenue to notify them.
When they told staff about a security problem with an ATM, they assumed one of their PIN numbers had been stolen, Hewlett said.
“I said: ‘No, no, no. We hacked your ATM. We got into the operator mode,'” Hewlett said.
“He said that wasn’t really possible and we don’t have any proof that we did it.
“I asked them: ‘Is it all right for us to get proof?’
“He said: ‘Yeah, sure, but you’ll never be able to get anything out of it.’
“So we both went back to the ATM and I got into the operator mode again. Then I started printing off documentation like how much money is currently in the machine, how many withdrawals have happened that day, how much it’s made off surcharges.
“Then I found a way to change the surcharge amount, so I changed the surcharge amount to one cent.”
As further proof, Hewlett playfully changed the ATM’s greeting from “Welcome to the BMO ATM” to “Go away. This ATM has been hacked.”
They returned to BMO with six printed documents. This time, staff took them seriously.
A lot of hardware is shipped with certain default security arrangements (known admin accounts with pre-set passwords, for example), and it’s part of the normal installation/configuration process to change them. A lazy installer may skip this, leaving the system open to inquisitive teens or more technically adept criminals. These two students were probably lucky not to be scapegoated by the bank’s security officers.
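The missing step is trivial to automate. A minimal sketch of the go-live check the installer skipped; the account names and factory defaults below are hypothetical:

```python
# Refuse to sign off an installation while any account still carries a
# factory-default password. Account names and default values are hypothetical.

FACTORY_DEFAULTS = {"operator": "123456", "admin": "000000"}

def accounts_still_on_defaults(accounts: dict) -> list:
    """Return the account names whose password still matches the shipped default."""
    return [name for name, password in accounts.items()
            if FACTORY_DEFAULTS.get(name) == password]

# The installer changed the admin password but forgot operator mode:
flagged = accounts_still_on_defaults({"operator": "123456",
                                      "admin": "x9!kQ-77rL"})
print(flagged)  # ['operator'] -- installation must not go live
```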
June 4, 2014
Reposting at his own site an article he did for The Mark News:
The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.
It was a software insecurity, but the problem was entirely human.
Software has vulnerabilities because it’s written by people, and people make mistakes — thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.
In retrospect, the mistake should have been obvious, and it’s amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.
The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google’s security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.
When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it’s announced.
May 15, 2014
In The Atlantic, Conor Friedersdorf reviews Glenn Greenwald’s new book, No Place to Hide:
Collect it all. Know it all. Exploit it all.
That totalitarian approach came straight from the top. Outgoing NSA chief Keith Alexander began using “collect it all” in Iraq at the height of the counterinsurgency. Eventually, he aimed similar tools at hundreds of millions of innocent people living in liberal democracies at peace, not war zones under occupation.
The strongest passages in No Place to Hide convey the awesome spying powers amassed by the U.S. government and its surveillance partners; the clear and present danger they pose to privacy; and the ideology of the national-security state. The NSA really is intent on subverting every method a human could use to communicate without the state being able to monitor the conversation.
U.S. officials regard the unprecedented concentration of power that would entail to be less dangerous than the alternative. They can’t conceive of serious abuses perpetrated by the federal government, though recent U.S. history offers many examples.
But it is a mistake (albeit a common one) to survey the NSA-surveillance controversy and to conclude that Greenwald represents the radical position. His writing can be acerbic, mordant, biting, trenchant, scathing, scornful, and caustic. He is stubbornly uncompromising in his principles, as dramatized by how close he came to quitting The Guardian when it wasn’t moving as fast as he wanted to publish the first story sourced to Edward Snowden. Unlike many famous journalists, he is not deferential to U.S. leaders.
Yet tone and zeal should never be mistaken for radicalism on the core question before us: What should America’s approach to state surveillance be? “Defenders of suspicionless mass surveillance often insist … that some spying is always necessary. But this is a straw man … nobody disagrees with that,” Greenwald explains. “The alternative to mass surveillance is not the complete elimination of surveillance. It is, instead, targeted surveillance, aimed only at those for whom there is substantial evidence to believe they are engaged in real wrongdoing.”
That’s as traditionally American as the Fourth Amendment.
Targeted surveillance “is consistent with American constitutional values and basic precepts of Western justice,” Greenwald continues. Notice that the authority he most often cites to justify his position is the Constitution. That’s not the mark of a radical. In fact, so many aspects of Greenwald’s book and the positions that he takes on surveillance are deeply, unmistakably conservative.
May 11, 2014
A few days back, Charles Stross pointed out one of the most ironic aspects of the NSA scandal … they did it to themselves, over the course of several years’ effort:
I don’t need to tell you about the global surveillance disclosures of 2013 to the present — it’s no exaggeration to call them the biggest secret intelligence leak in history, a monumental gaffe (from the perspective of the espionage-industrial complex) and a security officer’s worst nightmare.
But it occurs to me that it’s worth pointing out that the NSA set themselves up for it by preventing the early internet specifications from including transport layer encryption.
At every step in the development of the public internet the NSA systematically lobbied for weaker security, to enhance their own information-gathering capabilities. The trouble is, the success of the internet protocols created a networking monoculture that the NSA themselves came to rely on for their internal infrastructure. The same security holes that the NSA relied on to gain access to your (or Osama bin Laden’s) email allowed gangsters to steal passwords and login credentials and credit card numbers. And ultimately these same baked-in security holes allowed Edward Snowden — who, let us remember, is merely one guy: a talented system administrator and programmer, but no Clark Kent — to rampage through their internal information systems.
The moral of the story is clear: be very cautious about poisoning the banquet you serve your guests, lest you end up accidentally ingesting it yourself.
April 23, 2014
OpenBSD founder Theo de Raadt has created a fork of OpenSSL, the widely used open source cryptographic software library that contained the notorious Heartbleed security vulnerability.
OpenSSL has suffered from a lack of funding and code contributions despite being used in websites and products by many of the world’s biggest and richest corporations.
The decision to fork OpenSSL is bound to be controversial given that OpenSSL powers hundreds of thousands of Web servers. When asked why he wanted to start over instead of helping to make OpenSSL better, de Raadt said the existing code is too much of a mess.
“Our group removed half of the OpenSSL source tree in a week. It was discarded leftovers,” de Raadt told Ars in an e-mail. “The Open Source model depends [on] people being able to read the code. It depends on clarity. That is not a clear code base, because their community does not appear to care about clarity. Obviously, when such cruft builds up, there is a cultural gap. I did not make this decision… in our larger development group, it made itself.”
The LibreSSL code base is on OpenBSD.org, and the project is supported financially by the OpenBSD Foundation and OpenBSD Project. LibreSSL has a bare bones website that is intentionally unappealing.
“This page scientifically designed to annoy web hipsters,” the site says. “Donate now to stop the Comic Sans and Blink Tags.” In explaining the decision to fork, the site links to a YouTube video of a cover of the Twisted Sister song “We’re not gonna take it.”
April 16, 2014
You would have thought this would have sunk in by now. The fact that it hasn’t shows what an extraordinary machine the internet is — quite different to any technology that has gone before it. When the Lovebug struck, few of us lived our lives online. Back then we banked in branches, shopped in shops, met friends and lovers in the pub and obtained jobs by posting CVs. Tweeting was for the birds. Cyberspace was marginal. Now, for billions, the online world is their lives. But there is a problem. Only a tiny, tiny percentage of the people who use the internet have even the faintest clue about how any of it works. “SSL”, for instance, stands for “Secure Sockets Layer”.
I looked it up and sort of understood it — for about five minutes. While most drivers have at least a notion of how an engine works (something about petrol exploding in cylinders and making pistons go up and down and so forth) the very language of the internet — “domain names” and “DNS codes”, endless “protocols” and so forth — is arcane, exclusive; it is, in fact, the language of magic. For all intents and purposes the internet is run by wizards.
And the trouble with letting wizards run things is that when things go wrong we are at their mercy. The world spends several tens of billions of pounds a year on anti-malware programs, which we are exhorted to buy lest the walls of our digital castles collapse around us. Making security software is a huge industry, and whenever there is a problem — either caused by viruses or by a glitch like Heartbleed — the internet security companies rush to be quoted in the media. And guess what, their message is never “keep calm and carry on”. As Professor Ross Anderson of Cambridge University says: “Almost all the cost of cybercrime is the cost of anticipation.”
Michael Hanlon, “Relax, Mumsnet users: don’t lose sleep over Heartbleed hysteria”, Telegraph, 2014-04-16
April 11, 2014
Some people are claiming that the Heartbleed bug proves that open source software is a failure. ESR quickly addresses that idiotic claim:
I actually chuckled when I read the rumor that the few anti-open-source advocates still standing were crowing about the Heartbleed bug, because I’ve seen this movie before after every serious security flap in an open-source tool. The script, which includes a bunch of people indignantly exclaiming that many-eyeballs is useless because bug X lurked in a dusty corner for Y months, is so predictable that I can anticipate a lot of the lines.
The mistake being made here is a classic example of Frederic Bastiat’s “things seen versus things unseen”. Critics of Linus’s Law overweight the bug they can see and underweight the high probability that equivalently positioned closed-source security flaws they can’t see are actually far worse, just so far undiscovered.
That’s how it seems to go whenever we get a hint of the defect rate inside closed-source blobs, anyway. As a very pertinent example, in the last couple months I’ve learned some things about the security-defect density in proprietary firmware on residential and small business Internet routers that would absolutely curl your hair. It’s far, far worse than most people understand out there.
Ironically enough this will happen precisely because the open-source process is working … while, elsewhere, bugs that are far worse lurk in closed-source router firmware. Things seen vs. things unseen…
Returning to Heartbleed, one thing conspicuously missing from the downshouting against OpenSSL is any pointer to an implementation that is known to have a lower defect rate over time. This is for the very good reason that no such empirically-better implementation exists. What is the defect history on proprietary SSL/TLS blobs out there? We don’t know; the vendors aren’t saying. And we can’t even estimate the quality of their code, because we can’t audit it.
The response to the Heartbleed bug illustrates another huge advantage of open source: how rapidly we can push fixes. The repair for my Linux systems was a push-one-button fix less than two days after the bug hit the news. Proprietary-software customers will be lucky to see a fix within two months, and all too many of them will never see a fix patch.
Update: There are lots of sites offering tools to test whether a given site is vulnerable to the Heartbleed bug, but you need to step carefully there, as there’s a thin line between what’s legal in some countries and what counts as an illegal break-in attempt:
Websites and tools that have sprung up to check whether servers are vulnerable to OpenSSL’s mega-vulnerability Heartbleed have thrown up anomalies in computer crime law on both sides of the Atlantic.
Both the US Computer Fraud and Abuse Act and its UK equivalent the Computer Misuse Act make it an offence to test the security of third-party websites without permission.
Testing to see what version of OpenSSL a site is running, and whether it also supports the vulnerable Heartbeat protocol, would be legal. But doing anything more active — without permission from website owners — would take security researchers onto the wrong side of the law.
And you shouldn’t just rush out and change all your passwords right now (you’ll probably need to do it, but the timing matters):
Heartbleed is a catastrophic bug in widely used OpenSSL that creates a means for attackers to lift passwords, crypto-keys and other sensitive data from the memory of secure server software, 64KB at a time. The mega-vulnerability was patched earlier this week, and software should be updated to use the new version, 1.0.1g. But to fully clean up the problem, admins of at-risk servers should generate new public-private key pairs, destroy their session cookies, and update their SSL certificates before telling users to change every potentially compromised password on the vulnerable systems.
April 9, 2014
Update: In case you’re not concerned about the seriousness of this issue, The Register‘s John Leyden would like you to think again.
The catastrophic crypto-key and password vulnerability in OpenSSL affects far more than web servers, with everything from routers to smartphones also affected.
The so-called “Heartbleed” vulnerability (CVE-2014-0160) can be exploited to extract information from servers running vulnerable versions of OpenSSL, and this includes email servers and Android smartphones as well as routers.
Hackers could potentially gain access to private encryption keys before using this information to decipher the encrypted traffic to and from vulnerable websites.
Web sites including Yahoo!, Flickr and OpenSSL were among the many left vulnerable to the megabug that exposed encryption keys, passwords and other sensitive information.
Preliminary tests suggested that 47 of the 1,000 largest sites were vulnerable to Heartbleed, and that’s only among the less than half that provide support for SSL or HTTPS at all. Many of the affected sites – including Yahoo! – have since patched the vulnerability. Even so, security experts – such as Graham Cluley – remain concerned.
OpenSSL is a widely used encryption library that is a key component of technology that enables secure (https) website connections.
The bug exists in the OpenSSL 1.0.1 source code and stems from coding flaws in a fairly new feature known as the TLS Heartbeat Extension. “TLS heartbeats are used as ‘keep alive’ packets so that the ends of an encrypted connection can agree to keep the session open even when they don’t have any official data to exchange,” explains security veteran Paul Ducklin in a post on Sophos’ Naked Security blog.
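The bug itself fits in a line: the heartbeat handler echoed back as many bytes as the request claimed to contain, without checking the record's real length. A toy model in Python (in the real C code a memcpy ran past the received payload into adjacent heap memory; here one bytestring plays the role of that memory, and the layout and "secrets" are invented):

```python
# Toy model of the Heartbleed bug (CVE-2014-0160).
# A 2-byte claimed length (0x0040 = 64), a 3-byte actual payload ("hi!"),
# and whatever happened to sit next to it on the heap:
SERVER_MEMORY = (b"\x00\x40" + b"hi!"
                 + b"...session-cookie=abc123;-----PRIVATE-KEY-----...")

def heartbeat_vulnerable(memory: bytes) -> bytes:
    claimed = int.from_bytes(memory[:2], "big")  # attacker-supplied length
    return memory[2:2 + claimed]                 # BUG: never checked against
                                                 # the record's real size

def heartbeat_fixed(memory: bytes, record_len: int) -> bytes:
    claimed = int.from_bytes(memory[:2], "big")
    if 2 + claimed > record_len:                 # the fix: a bounds check
        return b""                               # silently drop the request
    return memory[2:2 + claimed]

leak = heartbeat_vulnerable(SERVER_MEMORY)
print(b"PRIVATE-KEY" in leak)                        # True: adjacent memory leaks
print(heartbeat_fixed(SERVER_MEMORY, record_len=5))  # b'' (request rejected)
```

The repair that shipped in OpenSSL 1.0.1g is essentially the bounds check above: discard any heartbeat whose claimed payload length exceeds the record actually received.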
The Heartbleed vulnerability in the OpenSSL cryptographic library might be exploited to reveal contents of secured communication exchanges. The same flaw might also be used to lift SSL keys.
This means that sites could still be vulnerable to attacks after installing the patches in cases where a private key has been stolen. Sites therefore need to revoke exposed keys, reissue new keys, and invalidate all session keys and session cookies.
“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.
Half a million sites are vulnerable, including my own. Test your vulnerability here.
The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.
At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.