We saw in an earlier story that the government is trying to tighten regulations on private company cyber security practices at the same time its own network security practices have been shown to be a joke. In finance, it can never balance a budget and uses accounting techniques that would get most companies thrown in jail. It almost never fully funds its pensions. Anything it does is generally done more expensively than the same task would be if undertaken privately. Its various sites are among the worst Superfund environmental messes. Almost all the current threats to water quality in rivers and oceans come from municipal sewage plants. The government’s Philadelphia naval yard single-handedly accounts for a huge number of the worst asbestos exposure cases to date.
By what alchemy does such a failing organization suddenly become such a good regulator?
January 26, 2017
October 26, 2016
Joey DeVilla provides a convenient layman’s terms description of last Friday’s denial of service attacks on Dyn:
If you’ve been reading about the cyberattack that took place last Friday and are confused by the jargon and technobabble, this primer was written for you! By the end of this article, you’ll have a better understanding of what happened, what caused it, and what can be done to prevent similar problems in the future.
On Friday, October 21, 2016 at around 6:00 a.m. EDT, a botnet made up of what could be tens of millions of machines — a large number of which were IoT devices — mounted a denial-of-service attack on Dyn, disrupting DNS over a large part of the internet in the U.S. This in turn led to a large internet outage on the U.S. east coast, slowing down the internet for many users and rendering a number of big sites inaccessible, including Amazon, Netflix, Reddit, Spotify, Tumblr, and Twitter.
Flashpoint, a firm that detects and mitigates online threats, was the first to announce that the attack was carried out by a botnet of compromised IoT devices controlled by Mirai malware. Dyn later corroborated Flashpoint’s claim, stating that their servers were under attack from devices located at millions of IP addresses.
The animation above is a visualization of the attack based on the devices’ IP addresses and IP geolocation (a means of approximating the geographic location of an IP address; for more, see this explanation on Stack Overflow). Note that the majority of the devices were at IP addresses (and therefore, geographic locations) outside the United States.
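To see why a DNS outage feels like “the internet is down”, remember that every connection to a named site begins with a resolver lookup. Here’s a minimal Python sketch (my illustration, not from DeVilla’s article; the hostname is just an example) of the step that failed for so many users that morning:

```python
import socket

try:
    # getaddrinfo is the resolver call behind nearly every connection
    # made by name; "twitter.com" is just an illustrative hostname.
    for family, _, _, _, sockaddr in socket.getaddrinfo(
        "twitter.com", 443, proto=socket.IPPROTO_TCP
    ):
        print(sockaddr[0])  # the IP address the name resolved to
except socket.gaierror as err:
    # Roughly what users experienced on October 21: the sites' servers
    # were fine, but nothing could find them.
    print(f"DNS resolution failed: {err}")
```

When Dyn’s servers were flooded, it was this lookup, not the target sites themselves, that stopped working.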
October 14, 2016
Every few years, a researcher replicates a security study by littering USB sticks around an organization’s grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as “teachable moments” for others. “If only everyone was more security aware and had more security training,” they say, “the Internet would be a much safer place.”
Enough of that. The problem isn’t the users: it’s that we’ve designed our computer systems’ security so badly that we demand the user do all of these counterintuitive things. Why can’t users choose easy-to-remember passwords? Why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?
Bruce Schneier, “Security Design: Stop Trying to Fix the User”, Schneier on Security, 2016-10-03.
July 27, 2016
When it comes to computer security, you should always listen to what Bruce Schneier has to say, especially when it comes to the “Internet of things”:
Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).
So far, internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by — presumptively — China in 2015.
On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door — or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location.
With the advent of the Internet of Things and cyber-physical systems in general, we’ve given the internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete.
Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are pretty literally endless. The Internet of Things will allow for attacks we can’t even imagine.
The increased risks come from three things: software control of systems, interconnections between systems, and automatic or autonomous systems. Let’s look at them in turn …
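Of Schneier’s triad, integrity is the easiest to make concrete in code. A minimal sketch (mine, not Schneier’s, with an assumed pre-shared key) using a keyed hash: anyone who modifies the data without the key produces a MAC that no longer matches.

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # assumption: sender and receiver already share this

def tag(message: bytes) -> bytes:
    # HMAC-SHA256: a keyed hash that changes unpredictably if either
    # the message or the key changes.
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

command = b"unlock front door at 18:00"
mac = tag(command)

tampered = b"unlock front door at 03:00"
# compare_digest does a constant-time comparison to avoid timing leaks.
print(hmac.compare_digest(tag(command), mac))   # True: integrity intact
print(hmac.compare_digest(tag(tampered), mac))  # False: modification detected
```

The smart-door-lock example above is exactly this situation: an attack on confidentiality reads the command, an attack on integrity rewrites it.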
I’m usually a pretty tech-positive person, but I actively avoid anything that bills itself as being IoT-enabled … call me paranoid, but I don’t want to hand over local control of my environment, my heating or cooling system, or pretty much anything else on my property to an outside agency (whether government or corporate).
June 27, 2016
Urging vague and unconstrained government power is not how responsible citizens of a free society ought to act. It’s a bad habit and it’s dangerous and irresponsible to promote it.
This is not an abstract or hypothetical point. We live in a country in which arbitrary power is routinely abused, usually to the detriment of the least powerful and the most abused among us. We live in a country in which we have been panicked into giving the government more and more power to protect us from harm, and that power is most often not used for the things we were told, but to solidify and expand previously existing government power. We live in a country where the government uses the power we’ve already given it as a rationale for giving it more: “how can we not ban x when we’ve already banned y?” We live in a country where vague laws are used arbitrarily and capriciously.
Ken White, “In Support Of A Total Ban on Civilians Owning Firearms”, Popehat, 2016-06-16.
March 29, 2016
Charles Stross has a theory:
A lot of people are watching the spectacle of Apple vs. the FBI and the Homeland Security Theatre and rubbing their eyes, wondering why Apple (in the person of CEO Tim Cook) is suddenly the knight in shining armour on the side of consumer privacy and civil rights. Apple, after all, is a goliath-sized corporate behemoth with the second largest market cap in US stock market history — what’s in it for them?
As is always the case, to understand why Apple has become so fanatical about customer privacy over the past five years that they’re taking on the US government, you need to follow the money.
Apple see their long term future as including a global secure payments infrastructure that takes over the role of Visa and Mastercard’s networks — and ultimately of spawning a retail banking subsidiary to provide financial services directly, backed by some of their cash stockpile.
The FBI thought they were asking for a way to unlock a mobile phone, because the FBI is myopically focussed on past criminal investigations, not the future of the technology industry, and the FBI did not understand that they were actually asking for a way to tracelessly unlock and mess with every ATM and credit card on the planet circa 2030 (if not via Apple, then via the other phone OSs, once the festering security fleapit that is Android wakes up and smells the money).
If the FBI get what they want, then the back door will be installed and the next-generation payments infrastructure will be just as prone to fraud as the last-generation card infrastructure, with its card skimmers and identity theft.
And this is why Tim Cook is willing to go to the mattresses with the US department of justice over iOS security: if nobody trusts their iPhone, nobody will be willing to trust the next-generation Apple Bank, and Apple is going to lose their best option for securing their cash pile as it climbs towards the stratosphere.
October 29, 2015
At Ars Technica, Dan Goodin discussed the imminent availability of free HTTPS certificates to all registered domain owners:
A nonprofit effort aimed at encrypting the entire Web has reached an important milestone: its HTTPS certificates are now trusted by all major browsers.
The service, which is backed by the Electronic Frontier Foundation, Mozilla, Cisco Systems, and Akamai, is known as Let’s Encrypt. As Ars reported last year, the group will offer free HTTPS certificates to anyone who owns a domain name. Let’s Encrypt promises to provide open source tools that automate processes for both applying for and receiving the credential and configuring a website to use it securely.
HTTPS uses the transport layer security or secure sockets layer protocols to secure websites in two important ways. First, it encrypts communications passing between visitors and the Web server so they can’t be read or modified by anyone who may be monitoring the connection. Second, in the case of bare bones certificates, it cryptographically proves that a server belongs to the same organization or person with control over the domain, rather than an imposter posing as that organization. (Extended validation certificates go a step beyond by authenticating the identity of the organization or individual.)
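As a rough illustration of the “proving the server’s identity” half of that description, here is a short Python sketch (mine, not from Goodin’s article) that opens a TLS connection and prints the certificate the server presents; the handshake fails outright if the certificate doesn’t validate for the hostname:

```python
import socket
import ssl

hostname = "letsencrypt.org"  # any HTTPS site works here

# create_default_context() validates the certificate chain and the
# hostname; the handshake below fails if either check does not pass.
context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("issuer: ", cert["issuer"])
        print("expires:", cert["notAfter"])
```

What Let’s Encrypt automates is the other side of this transaction: getting a certificate that such a check will accept, for free, without manual paperwork.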
September 30, 2015
Strategy Page on the less-than-perfect result of Russia’s attempt to get hackers to crack The Onion Router for a medium-sized monetary prize:
Back in mid-2014 Russia offered a prize of $111,000 to whoever could deliver, by August 20th 2014, software that would allow Russian security services to identify people on the Internet using Tor (The Onion Router), a system that enables users to access the Internet anonymously. On August 22nd Russia announced that an unnamed Russian contractor, with a top security clearance, had received the $111,000 prize. No other details were provided at the time. A year later it was revealed that the winner of the Tor prize was spending even more on lawyers to try to get out of the contract to crack Tor’s security. It seems the winner found that its theoretical solution was too difficult to implement effectively. In part this was because the worldwide community of programmers and software engineers that developed Tor is constantly upgrading it. Cracking Tor security means firing at a moving target, one that constantly changes shape and is quite resistant to damage. Tor is not perfect, but it has proved very resistant to attack. A lot of people are trying to crack Tor, which is used by criminals and Islamic terrorists as well as by people trying to avoid government surveillance. This is a matter of life and death in many countries, including Russia.
Tor is similar to anonymizer software but even harder to trace. Unlike anonymizer software, Tor relies on thousands of people running the Tor software and acting as relay nodes, so that email (and attachments) passes through so many Tor nodes that it was believed virtually impossible to track down the identity of the sender. Tor was developed as part of an American government program to create software that people living in dictatorships could use to avoid arrest for saying things on the Internet that their government did not like. Tor also enabled Internet users in dictatorships to communicate safely with the outside world. Tor first appeared in 2002 and has since defied most attempts to defeat it. The Tor developers were also quick to modify their software when a vulnerability was detected.
But by 2014 it was believed that the NSA had cracked Tor, and others may have done so as well but kept quiet about it so that the Tor support community would not fix whatever aspect of the software made it vulnerable. At the same time there were alternatives to Tor, as well as supplemental software, that apparently remained uncracked by anyone.
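The “onion” in The Onion Router is the layered encryption that makes this so hard to crack. A toy sketch of the idea (mine, with the cryptography package’s Fernet standing in for Tor’s real per-circuit key negotiation):

```python
from cryptography.fernet import Fernet

# Three relays, each with its own key. Real Tor negotiates ephemeral
# keys per circuit; fixed Fernet keys here are purely illustrative.
relay_keys = [Fernet.generate_key() for _ in range(3)]

message = b"meet at the usual place"

# The sender wraps the message in one encryption layer per relay:
# innermost layer for the exit node, outermost for the first hop.
onion = message
for key in reversed(relay_keys):
    onion = Fernet(key).encrypt(onion)

# Each relay peels exactly one layer. No single node ever sees both
# who sent the message and what it says.
for key in relay_keys:
    onion = Fernet(key).decrypt(onion)

assert onion == message
```

An attacker who compromises one relay learns almost nothing; to deanonymize a user they need the whole chain, which is why the contractor’s “theoretical solution” proved so hard to operationalize.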
September 11, 2015
Brewster Kahle on the need to change the current web and recreate it with true open characteristics built in from the start:
Over the last 25 years, millions of people have poured creativity and knowledge into the World Wide Web. New features have been added and dramatic flaws have emerged based on the original simple design. I would like to suggest we could now build a new Web on top of the existing Web that secures what we want most out of an expressive communication tool without giving up its inclusiveness. I believe we can do something quite counter-intuitive: We can lock the Web open.
One of my heroes, Larry Lessig, famously said “Code is Law.” The way we code the web will determine the way we live online. So we need to bake our values into our code. Freedom of expression needs to be baked into our code. Privacy should be baked into our code. Universal access to all knowledge. But right now, those values are not embedded in the Web.
It turns out that the World Wide Web is quite fragile. But it is huge. At the Internet Archive we collect one billion pages a week. We now know that Web pages only last about 100 days on average before they change or disappear. They blink on and off in their servers.
And the Web is massively accessible – unless you live in China. The Chinese government has blocked the Internet Archive, the New York Times, and other sites from its citizens. And other countries block their citizens’ access as well every once in a while. So the Web is not reliably accessible.
And the Web isn’t private. People, corporations, countries can spy on what you are reading. And they do. We now know, thanks to Edward Snowden, that Wikileaks readers were selected for targeting by the National Security Agency and the UK’s equivalent just because those organizations could identify those Web browsers that visited the site and identify the people likely to be using those browsers. In the library world, we know how important it is to protect reader privacy. Rounding people up for the things that they’ve read has a long and dreadful history. So we need a Web that is better than it is now in order to protect reader privacy.
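That “about 100 days” figure is easy to probe for yourself: the Internet Archive exposes an availability endpoint that reports its closest saved snapshot of a URL. A minimal sketch (mine, assuming the Archive’s documented wayback/available endpoint):

```python
import json
import urllib.parse
import urllib.request

def closest_snapshot(url: str):
    """Ask the Internet Archive for its closest saved copy of url."""
    api = ("https://archive.org/wayback/available?url="
           + urllib.parse.quote(url, safe=""))
    with urllib.request.urlopen(api) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

# A page may be long gone from the live Web yet still readable here.
print(closest_snapshot("http://www.geocities.com/"))
```

Checking a few old bookmarks this way makes Kahle’s point about fragility viscerally obvious.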
August 28, 2015
At The Register, Richard Chirgwin explains why every new “internet of things” release is pretty much certain to be lacking in the security department:
Let me introduce someone I’ll call the Junior VP of Embedded Systems Security, who wears the permanent pucker of chronic disappointment.
The reason he looks so disappointed is that he’s in charge of embedded Internet of Things security for a prominent Smart Home startup.
Everybody said “get into security, you’ll be employable forever on a good income”, so he did.
Because it’s a startup he has to live in the Valley. Out of his $10k-a-month take-home, rent leaves him just enough to live on Soylent plus whatever’s on offer in the company canteen, where every week is either vegan week or paleo week.
Nobody told him that as Junior VP for Embedded Systems Security (JVPESS), his job is to give advice that’s routinely ignored or overruled.
Meet the designer
“All we want to do is integrate the experience of the bedside A.M. clock-radio into a fully-social cloud platform to leverage its audience reach and maximise the effectiveness of converting advertising into a positive buying experience”, the Chief Design Officer said (the CDO dresses like Jony Ive, because they retired the Steve Jobs uniform like a football club retiring the Number 10 jumper when Pele quit).
For his implementation, the JVPESS chose a chip so stupid the Republicans want to field it as Trump’s running-mate, wrote a communications spec that did exactly and only what was in the requirements, and briefed the embedded software engineer.
The embedded software engineer only makes stuff actually work, so he earns about one-sixth that of the User Experience Ninja that reports to Jony Ive’s Style Slave and has to live in Detroit. But he’s boring and conscientious and delivers the code.
Eventually, the JVPESS hands over a design to Jony Ive’s Outfit knowing it’ll end in tears.
Two weeks later, Jony Ive’s Style Slave returns to request approval for “just a couple of last minute revisions. We have to press ‘go’ on the project by close-of-business today so if you could just look this over”.
August 7, 2015
At The Register, John Leyden talks about the recent revelation that the Tesla Model S has known hacking vulnerabilities:
Security researchers have uncovered six fresh vulnerabilities in the Tesla Model S.
Kevin Mahaffey, CTO of mobile security firm Lookout, and Cloudflare’s principal security researcher Marc Rogers, discovered the flaws after physically examining a vehicle before working with Elon Musk’s firm to resolve security bugs in the electric automobile.
The vulnerabilities allowed the researchers to gain root (administrator) access to the Model S infotainment systems.
With access to these systems, they were able to remotely lock and unlock the car, control the radio and screens, display any content on the screens (changing map displays and the speedometer), open and close the trunk/boot, and turn off the car systems.
When turning off the car systems, Mahaffey and Rogers discovered that, if the car was below five miles per hour (8 km/h) or idling, they were able to apply the emergency hand brake, a minor issue in practice.
If the car was going at any speed, the technique could be used to cut power to the car while still allowing the driver to safely brake and steer. Consumers’ safety was thus preserved even in cases, like the hand-brake issue, where the system ran afoul of bugs.
Despite uncovering half a dozen security bugs, the two researchers nonetheless came away impressed by Tesla’s infosec policies and procedures, as well as its fail-safe engineering approach.
“Tesla takes a software-first approach to its cars, so it’s no surprise that it has key security features in place that minimised and contained the risk of the discovered vulnerabilities,” the researchers explain.
August 2, 2015
The Economist looks at the apparently unstoppable rush to internet-connect everything and why we should worry about security now:
Unfortunately, computer security is about to get trickier. Computers have already spread from people’s desktops into their pockets. Now they are embedding themselves in all sorts of gadgets, from cars and televisions to children’s toys, refrigerators and industrial kit. Cisco, a maker of networking equipment, reckons that there are 15 billion connected devices out there today. By 2020, it thinks, that number could climb to 50 billion. Boosters promise that a world of networked computers and sensors will be a place of unparalleled convenience and efficiency. They call it the “internet of things”.
Computer-security people call it a disaster in the making. They worry that, in their rush to bring cyber-widgets to market, the companies that produce them have not learned the lessons of the early years of the internet. The big computing firms of the 1980s and 1990s treated security as an afterthought. Only once the threats—in the forms of viruses, hacking attacks and so on—became apparent, did Microsoft, Apple and the rest start trying to fix things. But bolting on security after the fact is much harder than building it in from the start.
Of course, governments are desperate to prevent us from hiding our activities from them by way of cryptography or even moderately secure connections, so there’s the risk that any pre-rolled security option offered by a major corporation has already been riddled with convenient holes for government spooks … which makes it even more likely that others can also find and exploit those security holes.
… companies in all industries must heed the lessons that computing firms learned long ago. Writing completely secure code is almost impossible. As a consequence, a culture of openness is the best defence, because it helps spread fixes. When academic researchers contacted a chipmaker working for Volkswagen to tell it that they had found a vulnerability in a remote-car-key system, Volkswagen’s response included a court injunction. Shooting the messenger does not work. Indeed, firms such as Google now offer monetary rewards, or “bug bounties”, to hackers who contact them with details of flaws they have unearthed.
Thirty years ago, computer-makers that failed to take security seriously could claim ignorance as a defence. No longer. The internet of things will bring many benefits. The time to plan for its inevitable flaws is now.
July 17, 2015
Bruce Schneier explains why you should care (a lot) about having your data encrypted:
Encryption protects our data. It protects our data when it’s sitting on our computers and in data centers, and it protects it when it’s being transmitted around the Internet. It protects our conversations, whether video, voice, or text. It protects our privacy. It protects our anonymity. And sometimes, it protects our lives.
This protection is important for everyone. It’s easy to see how encryption protects journalists, human rights defenders, and political activists in authoritarian countries. But encryption protects the rest of us as well. It protects our data from criminals. It protects it from competitors, neighbors, and family members. It protects it from malicious attackers, and it protects it from accidents.
Encryption works best if it’s ubiquitous and automatic. The two forms of encryption you use most often — https URLs on your browser, and the handset-to-tower link for your cell phone calls — work so well because you don’t even know they’re there.
Encryption should be enabled for everything by default, not a feature you turn on only if you’re doing something you consider worth protecting.
This is important. If we only use encryption when we’re working with important data, then encryption signals that data’s importance. If only dissidents use encryption in a country, that country’s authorities have an easy way of identifying them. But if everyone uses it all of the time, encryption ceases to be a signal. No one can distinguish simple chatting from deeply private conversation. The government can’t tell the dissidents from the rest of the population. Every time you use encryption, you’re protecting someone who needs to use it to stay alive.
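In code, “ubiquitous and automatic” just means the encrypt step costs one line and happens whether or not the data seems to matter. A minimal sketch with the cryptography package’s Fernet (my example, not Schneier’s):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in real use, kept in an OS keychain, not in code
box = Fernet(key)

# Encrypt the mundane and the sensitive identically: without the key,
# both are equally opaque tokens, so neither flags itself as the one
# worth attacking.
chatter = box.encrypt(b"lunch at noon?")
secret = box.encrypt(b"meeting the source on Thursday")

print(box.decrypt(chatter))  # b'lunch at noon?'
```

When every message goes through the same one-line step, the traffic analysis Schneier describes, picking out the dissidents by who bothers to encrypt, stops working.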
February 23, 2015
Cory Doctorow is concerned about some of the possible developments within the “Internet of Things” that should concern us all:
The digital world has been colonized by a dangerous idea: that we can and should solve problems by preventing computer owners from deciding how their computers should behave. I’m not talking about a computer that’s designed to say, “Are you sure?” when you do something unexpected — not even one that asks, “Are you really, really sure?” when you click “OK.” I’m talking about a computer designed to say, “I CAN’T LET YOU DO THAT DAVE” when you tell it to give you root, to let you modify the OS or the filesystem.
Case in point: the cell-phone “kill switch” laws in California and Minnesota, which require manufacturers to design phones so that carriers or manufacturers can push an over-the-air update that bricks the phone without any user intervention, designed to deter cell-phone thieves. Early data suggests that the law is effective in preventing this kind of crime, but at a high and largely needless (and ill-considered) price.
To understand this price, we need to talk about what “security” is, from the perspective of a mobile device user: it’s a whole basket of risks, including the physical threat of violence from muggers; the financial cost of replacing a lost device; the opportunity cost of setting up a new device; and the threats to your privacy, finances, employment, and physical safety from having your data compromised.
The current kill-switch regime puts a lot of emphasis on the physical risks, and treats risks to your data as unimportant. It’s true that the physical risks associated with phone theft are substantial, but if a catastrophic data compromise doesn’t strike terror into your heart, it’s probably because you haven’t thought hard enough about it — and it’s a sure bet that this risk will only increase in importance over time, as you bind your finances, your access controls (car ignition, house entry), and your personal life more tightly to your mobile devices.
That is to say, phones are only going to get cheaper to replace, while mobile data breaches are only going to get more expensive.
It’s a mistake to design a computer to accept instructions over a public network that its owner can’t see, review, and countermand. When every phone has a back door and can be compromised by hacking, social-engineering, or legal-engineering by a manufacturer or carrier, then your phone’s security is only intact for so long as every customer service rep is bamboozle-proof, every cop is honest, and every carrier’s back end is well designed and fully patched.
February 15, 2015
Earlier this month, The Register‘s Iain Thomson summarized the rather disturbing report released by Senator Ed Markey (D-MA) on the self-reported security (or lack thereof) in modern automobile internal networks:
In short, as we’ve long suspected, the computers in today’s cars can be hijacked wirelessly by feeding specially crafted packets of data into their networks. There’s often no need for physical contact; no leaving of evidence lying around after getting your hands dirty.
This means, depending on the circumstances, the software running in your dashboard can be forced to unlock doors, or become infected with malware, and records of where you’ve been and how fast you were going may be obtained. The lack of encryption in various models means sniffed packets may be readable.
Key systems to start up engines, the electronics connecting up vital things like the steering wheel and brakes, and stuff on the CAN bus, tend to be isolated and secure, we’re told.
The ability for miscreants to access internal systems wirelessly, cause mischief to infotainment and navigation gear, and invade one’s privacy, is irritating, though.
“Drivers have come to rely on these new technologies, but unfortunately the automakers haven’t done their part to protect us from cyber-attacks or privacy invasions,” said Markey, a member of the Senate’s Commerce, Science and Transportation Committee.
“Even as we are more connected than ever in our cars and trucks, our technology systems and data security remain largely unprotected. We need to work with the industry and cyber-security experts to establish clear rules of the road to ensure the safety and privacy of 21st-century American drivers.”
Of the 17 car makers who replied [PDF] to Markey’s letters (Tesla, Aston Martin, and Lamborghini didn’t), all made extensive use of computing in their 2014 models, with some carrying 50 electronic control units (ECUs) running on a series of internal networks.
BMW, Chrysler, Ford, General Motors, Honda, Hyundai, Jaguar Land Rover, Mazda, Mercedes-Benz, Mitsubishi, Nissan, Porsche, Subaru, Toyota, Volkswagen (with Audi), and Volvo responded to the study. According to the senator’s six-page dossier:
- Over 90 per cent of vehicles manufactured in 2014 had a wireless network of some kind — such as Bluetooth to link smartphones to the dashboard or a proprietary standard for technicians to pull out diagnostics.
- Only six automakers have any kind of security software running in their cars — such as firewalls for blocking connections from untrusted devices, or encryption for protecting data in transit around the vehicle.
- Just five secured wireless access points with passwords, encryption or proximity sensors that (in theory) only allow hardware detected within the car to join a given network.
- And only models made by two companies can alert the manufacturers in real time if a malicious software attack is attempted — the others wait until a technician checks at the next servicing.
There wasn’t much detail on the security of over-the-air updates for firmware, nor the use of crypto to protect personal data being phoned home from vehicles to an automaker’s HQ.
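For what a minimally sane over-the-air update check could look like, here is a sketch using an Ed25519 signature from the cryptography package (my illustration; the report doesn’t say what, if anything, the automakers actually do): the car ships with the maker’s public key and refuses any firmware image whose signature doesn’t verify.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Automaker side: the private key stays at the factory and signs releases.
maker_key = Ed25519PrivateKey.generate()
firmware = b"\x7fELF...imaginary ECU firmware image..."
signature = maker_key.sign(firmware)

# Vehicle side: only the public key ships in the car.
public_key = maker_key.public_key()

def install(image: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, image)  # raises InvalidSignature on any mismatch
    except InvalidSignature:
        return False  # refuse tampered or unsigned images
    # ... flash the ECU here ...
    return True

print(install(firmware, signature))                    # True
print(install(firmware + b"\x90backdoor", signature))  # False
```

Signature checks of this kind stop an attacker who can reach the car’s network from pushing arbitrary code, which is precisely the gap Markey’s report suggests most 2014 models left open.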