Quotulatiousness

October 26, 2015

Going price for a working Enigma machine – $365,000

Filed under: Europe, History, Military, Technology, WW2 — Nicholas @ 03:00

Lester Haines reports on a recent record auction price for an Enigma machine:

A fully-functioning four-rotor M4 Enigma WW2 cipher machine has sold at auction for $365,000.

Enigma machine

The German encryption device, as used by the U-Boat fleet and described as “one of the rarest of all the Enigma machines”, went under the hammer at Bonham’s in New York last night as part of the “Conflicts of the 20th Century” sale.

The M4 was adopted by the German Navy, the Kriegsmarine, in early 1942 following the capture of U-570 in August 1941*. Although the crew of U-570 had destroyed their three-rotor Enigma, the British found aboard written material which compromised the security of the machine.

The traffic to and from the replacement machines was dubbed “Shark” by codebreakers at Bletchley Park. Decryption proved troublesome, due in part to an initial lack of “cribs” (identified or suspected plaintext in an encrypted message) for the new device, but by December 1942, the British were regularly cracking M4 messages.

I recently read David O’Keefe’s One Day in August, which seems to explain the otherwise inexplicable launch of “Operation Jubilee”, the Dieppe raid … in his reading, the raid was actually cover for an operation in which British intelligence operatives tried to snatch one or more of the new Enigma machines (like the one shown above) without tipping off the Germans that that was the actual goal. Joel Ralph reviewed the book when it was released:

One Day in August, by David O’Keefe, takes a completely different approach to the Dieppe landing. With significant new evidence in hand, O’Keefe seeks to reframe the entire raid within the context of the secret naval intelligence war being fought against Nazi Germany.

On February 1, 1942, German U-boats operating in the Atlantic Ocean switched from using a three-rotor Enigma code machine to a new four-rotor machine. Britain’s Naval Intelligence Division, which had broken the three-rotor code and was regularly reading German coded messages, was suddenly left entirely in the dark as to the positions and intentions of enemy submarines. By the summer of 1942, the Battle of the Atlantic had reached a state of crisis and was threatening to cut off Britain from the resources needed to carry on with the war.

O’Keefe spends nearly two hundred pages documenting the secret war against Germany and the growth of the Naval Intelligence Division. What ties this to Dieppe and sparked O’Keefe’s research was the development of a unique naval intelligence commando unit tasked with retrieving vital code-breaking material. As O’Keefe’s research reveals, the origins of this unit were at Dieppe, on an almost suicidal mission to gather intelligence they hoped would crack the four-rotor Enigma machine.

O’Keefe has uncovered new documents and first-hand accounts that provide evidence for the existence of such a mission. But he takes it one step further and argues that these secret commandos were not simply along for the ride at Dieppe. Instead, he claims, the entire Dieppe raid was cover for their important task.

It’s easy to dismiss O’Keefe’s argument as too incredible (Zuehlke does so quickly in his brief conclusion [in his book Operation Jubilee, August 19, 1942]). But O’Keefe would argue that just about everything associated with combined operations defied conventional military logic, from Operation Ruthless, a planned but never executed James Bond-style mission, to the successful raid on the French port of St. Nazaire only months before Dieppe.

Clearly this commando operation was an important part of the Dieppe raid. But, while the circumstantial evidence is robust, there is no single clear document that directly lays out the Dieppe raid as cover for a secret “pinch by design” operation to steal German code books and Enigma material.

July 17, 2015

The case for encryption – “Encryption should be enabled for everything by default”

Filed under: Liberty, Technology — Nicholas @ 03:00

Bruce Schneier explains why you should care (a lot) about having your data encrypted:

Encryption protects our data. It protects our data when it’s sitting on our computers and in data centers, and it protects it when it’s being transmitted around the Internet. It protects our conversations, whether video, voice, or text. It protects our privacy. It protects our anonymity. And sometimes, it protects our lives.

This protection is important for everyone. It’s easy to see how encryption protects journalists, human rights defenders, and political activists in authoritarian countries. But encryption protects the rest of us as well. It protects our data from criminals. It protects it from competitors, neighbors, and family members. It protects it from malicious attackers, and it protects it from accidents.

Encryption works best if it’s ubiquitous and automatic. The two forms of encryption you use most often — https URLs on your browser, and the handset-to-tower link for your cell phone calls — work so well because you don’t even know they’re there.

Encryption should be enabled for everything by default, not a feature you turn on only if you’re doing something you consider worth protecting.

This is important. If we only use encryption when we’re working with important data, then encryption signals that data’s importance. If only dissidents use encryption in a country, that country’s authorities have an easy way of identifying them. But if everyone uses it all of the time, encryption ceases to be a signal. No one can distinguish simple chatting from deeply private conversation. The government can’t tell the dissidents from the rest of the population. Every time you use encryption, you’re protecting someone who needs to use it to stay alive.

May 8, 2015

Quantum Insert

Filed under: Britain, Technology, USA — Nicholas @ 02:00

Kim Zetter talks about some of the NSA’s more sneaky ways of intercepting communications:

Among all of the NSA hacking operations exposed by whistleblower Edward Snowden over the last two years, one in particular has stood out for its sophistication and stealthiness. Known as Quantum Insert, the man-on-the-side hacking technique has been used to great effect since 2005 by the NSA and its partner spy agency, Britain’s GCHQ, to hack into high-value, hard-to-reach systems and implant malware.

Quantum Insert is useful for getting at machines that can’t be reached through phishing attacks. It works by hijacking a browser as it’s trying to access web pages and forcing it to visit a malicious web page, rather than the page the target intended to visit. The attackers can then surreptitiously download malware onto the target’s machine from the rogue web page.

Quantum Insert has been used to hack the machines of terrorist suspects in the Middle East, but it was also used in a controversial GCHQ/NSA operation against employees of the Belgian telecom Belgacom and against workers at OPEC, the Organization of Petroleum Exporting Countries. The “highly successful” technique allowed the NSA to place 300 malicious implants on computers around the world in 2010, according to the spy agency’s own internal documents — all while remaining undetected.

But now security researchers with Fox-IT in the Netherlands, who helped investigate that hack against Belgacom, have found a way to detect Quantum Insert attacks using common intrusion detection tools such as Snort, Bro and Suricata.
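Fox-IT’s detection rules aren’t reproduced here, but the core idea is simple enough to sketch: a man-on-the-side injector has to race the legitimate server, so the victim’s packet capture ends up containing two TCP segments with the same sequence number but different payloads. A minimal illustration in Python (the function name and packet format are hypothetical, not Fox-IT’s code):

```python
def find_quantum_candidates(packets):
    """packets: iterable of (stream_id, seq, payload) tuples.

    Returns (stream_id, seq) pairs where the same sequence number
    arrived with two different payloads -- the signature of an
    injected 'shot' racing the legitimate response."""
    seen = {}
    suspicious = []
    for stream_id, seq, payload in packets:
        key = (stream_id, seq)
        if key in seen and seen[key] != payload:
            suspicious.append(key)
        seen.setdefault(key, payload)
    return suspicious

# A forged redirect racing the real page shares sequence number 1000:
capture = [
    ("tcp-1", 1000, b"HTTP/1.1 302 Found\r\nLocation: http://evil.example/\r\n"),
    ("tcp-1", 1000, b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n"),
    ("tcp-2", 2000, b"HTTP/1.1 200 OK\r\n"),
]
print(find_quantum_candidates(capture))  # [("tcp-1", 1000)]
```

Real-world detection (as in Snort or Bro) has to cope with retransmissions of identical data, which is why the check is for *differing* payloads at the same sequence number, not mere duplicates.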

February 15, 2015

The term “carjacking” may take on a new meaning

Filed under: Law, Technology — Nicholas @ 05:00

Earlier this month, The Register‘s Iain Thomson summarized the rather disturbing report released by Senator Ed Markey (D-MA) on the self-reported security (or lack thereof) in modern automobile internal networks:

In short, as we’ve long suspected, the computers in today’s cars can be hijacked wirelessly by feeding specially crafted packets of data into their networks. There’s often no need for physical contact; no leaving of evidence lying around after getting your hands dirty.

This means, depending on the circumstances, the software running in your dashboard can be forced to unlock doors, or become infected with malware, and records of where you’ve been and how fast you were going may be obtained. The lack of encryption in various models means sniffed packets may be readable.

Key systems to start up engines, the electronics connecting up vital things like the steering wheel and brakes, and stuff on the CAN bus, tend to be isolated and secure, we’re told.

The ability for miscreants to access internal systems wirelessly, cause mischief to infotainment and navigation gear, and invade one’s privacy, is irritating, though.

“Drivers have come to rely on these new technologies, but unfortunately the automakers haven’t done their part to protect us from cyber-attacks or privacy invasions,” said Markey, a member of the Senate’s Commerce, Science and Transportation Committee.

“Even as we are more connected than ever in our cars and trucks, our technology systems and data security remain largely unprotected. We need to work with the industry and cyber-security experts to establish clear rules of the road to ensure the safety and privacy of 21st-century American drivers.”

Of the 17 car makers who replied [PDF] to Markey’s letters (Tesla, Aston Martin, and Lamborghini didn’t) all made extensive use of computing in their 2014 models, with some carrying 50 electronic control units (ECUs) running on a series of internal networks.

BMW, Chrysler, Ford, General Motors, Honda, Hyundai, Jaguar Land Rover, Mazda, Mercedes-Benz, Mitsubishi, Nissan, Porsche, Subaru, Toyota, Volkswagen (with Audi), and Volvo responded to the study. According to the senator’s six-page dossier:

  • Over 90 per cent of vehicles manufactured in 2014 had a wireless network of some kind — such as Bluetooth to link smartphones to the dashboard or a proprietary standard for technicians to pull out diagnostics.
  • Only six automakers have any kind of security software running in their cars — such as firewalls for blocking connections from untrusted devices, or encryption for protecting data in transit around the vehicle.
  • Just five secured wireless access points with passwords, encryption or proximity sensors that (in theory) only allow hardware detected within the car to join a given network.
  • And only models made by two companies can alert the manufacturers in real time if a malicious software attack is attempted — the others wait until a technician checks at the next servicing.

There wasn’t much detail on the security of over-the-air updates for firmware, nor the use of crypto to protect personal data being phoned home from vehicles to an automaker’s HQ.
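To make the report’s “security software” bullet concrete, here is a toy sketch of the sort of allowlist gateway only six automakers said they run: frames from a low-trust bus are forwarded to the drivetrain bus only if their arbitration IDs are expected. The IDs below are invented for illustration, not taken from any real vehicle, and a real gateway would be embedded firmware rather than Python.

```python
# Hypothetical per-bus allowlist of CAN arbitration IDs, so a
# compromised infotainment unit can't inject brake or steering frames.
ALLOWED = {
    "infotainment": {0x244, 0x3E9},        # e.g. volume, nav display (made up)
    "diagnostics":  {0x7DF, 0x7E0},        # OBD-II-style query IDs
}

def filter_frame(source_bus: str, can_id: int) -> bool:
    """Return True if the frame may be forwarded to the drivetrain bus."""
    return can_id in ALLOWED.get(source_bus, set())

print(filter_frame("infotainment", 0x244))   # True: expected traffic
print(filter_frame("infotainment", 0x1A0))   # False: dropped (unexpected ID)
```

The design choice worth noting is default-deny: an unknown source bus or unknown ID is dropped, which is the opposite of how most vehicle networks in the survey behaved.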

January 14, 2015

British PM’s latest technological brain fart

Filed under: Britain, Law, Liberty, Technology — Nicholas @ 07:43

Cory Doctorow explains why David Cameron’s proposals are not just dumb, but doubleplus-dumb:

What David Cameron thinks he’s saying is, “We will command all the software creators we can reach to introduce back-doors into their tools for us.” There are enormous problems with this: there’s no back door that only lets good guys go through it. If your Whatsapp or Google Hangouts has a deliberately introduced flaw in it, then foreign spies, criminals, and crooked police (like those who fed sensitive information to the tabloids implicated in the hacking scandal, and like the high-level police who secretly worked for organised crime for years) will eventually discover this vulnerability. They, and not just the security services, will be able to use it to intercept all of our communications. That includes everything from the pictures of your kids in the bath that you send to your parents to the trade secrets you send to your co-workers.

But this is just for starters. David Cameron doesn’t understand technology very well, so he doesn’t actually know what he’s asking for.

For David Cameron’s proposal to work, he will need to stop Britons from installing software that comes from software creators who are out of his jurisdiction. The very best in secure communications are already free/open source projects, maintained by thousands of independent programmers around the world. They are widely available, and thanks to things like cryptographic signing, it is possible to download these packages from any server in the world (not just big ones like Github) and verify, with a very high degree of confidence, that the software you’ve downloaded hasn’t been tampered with.
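Doctorow’s point about cryptographic signing can be illustrated with its simpler cousin, checksum verification: compare the digest of what you downloaded against the value the project published. (A full verification additionally checks a signature over that digest, which proves *who* published it; this sketch only shows the tamper-detection half.)

```python
import hashlib

def sha256_matches(data: bytes, expected_hex: str) -> bool:
    # Recompute the package's SHA-256 digest and compare it to the
    # published value; any tampering changes the digest.
    return hashlib.sha256(data).hexdigest() == expected_hex

package = b"pretend this is a downloaded tarball"
published_digest = hashlib.sha256(package).hexdigest()

print(sha256_matches(package, published_digest))                # True
print(sha256_matches(package + b"tampered", published_digest))  # False
```

Because the digest check works offline, the package can be fetched from any mirror in the world, which is exactly why blocking “big” repositories like Github accomplishes nothing.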

[…]

This, then, is what David Cameron is proposing:

* All Britons’ communications must be easy for criminals, voyeurs and foreign spies to intercept

* Any firms within reach of the UK government must be banned from producing secure software

* All major code repositories, such as Github and Sourceforge, must be blocked

* Search engines must not answer queries about web-pages that carry secure software

* Virtually all academic security work in the UK must cease — security research must only take place in proprietary research environments where there is no onus to publish one’s findings, such as industry R&D and the security services

* All packets in and out of the country, and within the country, must be subject to Chinese-style deep-packet inspection and any packets that appear to originate from secure software must be dropped

* Existing walled gardens (like iOS and games consoles) must be ordered to ban their users from installing secure software

* Anyone visiting the country from abroad must have their smartphones held at the border until they leave

* Proprietary operating system vendors (Microsoft and Apple) must be ordered to redesign their operating systems as walled gardens that only allow users to run software from an app store, which will not sell or give secure software to Britons

* Free/open source operating systems — that power the energy, banking, ecommerce, and infrastructure sectors — must be banned outright

David Cameron will say that he doesn’t want to do any of this. He’ll say that he can implement weaker versions of it — say, only blocking some “notorious” sites that carry secure software. But anything less than the programme above will have no material effect on the ability of criminals to carry on perfectly secret conversations that “we cannot read”. If any commodity PC or jailbroken phone can run any of the world’s most popular communications applications, then “bad guys” will just use them. Jailbreaking an OS isn’t hard. Downloading an app isn’t hard. Stopping people from running code they want to run is — and what’s more, it puts the whole nation — individuals and industry — in terrible jeopardy.

September 20, 2014

Can you trust Apple’s new commitment to your privacy?

Filed under: Business, Technology — Nicholas @ 12:32

David Akin posted a list of questions posed by John Gilmore, challenging the Apple iOS8 cryptography promises:

Gilmore considered what Apple said and considered how Apple creates its software — a closed, secret, proprietary method — and what coders like him know about the code that Apple says protects our privacy — pretty much nothing — and then wrote the following for distribution on Dave Farber‘s Interesting People listserv. I’m pretty sure neither Farber nor Gilmore will begrudge me reproducing it.

    And why do we believe [Apple]?

    • Because we can read the source code and the protocol descriptions ourselves, and determine just how secure they are?
    • Because they’re a big company and big companies never lie?
    • Because they’ve implemented it in proprietary binary software, and proprietary crypto is always stronger than the company claims it to be?
    • Because they can’t covertly send your device updated software that would change all these promises, for a targeted individual, or on a mass basis?
    • Because you will never agree to upgrade the software on your device, ever, no matter how often they send you updates?
    • Because this first release of their encryption software has no security bugs, so you will never need to upgrade it to retain your privacy?
    • Because if a future update INSERTS privacy or security bugs, we will surely be able to distinguish these updates from future updates that FIX privacy or security bugs?
    • Because if they change their mind and decide to lessen our privacy for their convenience, or by secret government edict, they will be sure to let us know?
    • Because they have worked hard for years to prevent you from upgrading the software that runs on their devices so that YOU can choose it and control it instead of them?
    • Because the US export control bureaucracy would never try to stop Apple from selling secure mass market proprietary encryption products across the border?
    • Because the countries that wouldn’t let Blackberry sell phones that communicate securely with your own corporate servers, will of course let Apple sell whatever high security non-tappable devices it wants to?
    • Because we’re Apple fanboys and the company can do no wrong?
    • Because they want to help the terrorists win?
    • Because NSA made them mad once, therefore they are on the side of the public against NSA?
    • Because it’s always better to wiretap people after you convince them that they are perfectly secure, so they’ll spill all their best secrets?

    There must be some other reason, I’m just having trouble thinking of it.

September 19, 2014

Doctorow – “The time has come to create privacy tools for normal people”

Filed under: Liberty, Technology — Nicholas @ 00:03

In the Guardian, Cory Doctorow says that we need privacy-enhancing technical tools that can be easily used by everyone, not just the highly technical (or highly paranoid) among us:

You don’t need to be a technical expert to understand privacy risks anymore, thanks to a daily parade of internet security horrors around the world: from the Snowden revelations, to Syrian and Egyptian checkpoints where your Facebook logins are required in order to weigh your political allegiances (sometimes with fatal consequences), to celebrities having their most intimate photos splashed all over the web.

The time has come to create privacy tools for normal people – people with a normal level of technical competence. That is, all of us, no matter what our level of technical expertise, need privacy. Some privacy measures do require extraordinary technical competence; if you’re Edward Snowden, with the entire NSA bearing down on your communications, you will need to be a real expert to keep your information secure. But the kind of privacy that makes you immune to mass surveillance and attacks-of-opportunity from voyeurs, identity thieves and other bad guys is attainable by anyone.

I’m a volunteer on the advisory board for a nonprofit that’s aiming to do just that: Simply Secure (which launches Thursday at simplysecure.org) collects together some very bright usability and cryptography experts with the aim of revamping the user interface of the internet’s favorite privacy tools, starting with OTR, the extremely secure chat system whose best-known feature is “perfect forward secrecy” which gives each conversation its own unique keys, so a breach of one conversation’s keys can’t be used to snoop on others.

More importantly, Simply Secure’s process for attaining, testing and refining usability is the main product of its work. This process will be documented and published as a set of best practices for other organisations, whether they are for-profits or non-profits, creating a framework that anyone can use to make secure products easier for everyone.

July 10, 2014

Throwing a bit of light on security in the “internet of things”

Filed under: Technology — Nicholas @ 07:36

The “internet of things” is coming: more and more of your surroundings are going to be connected in a vastly expanded internet. A lot of attention needs to be paid to security in this new world, as Dan Goodin explains:

In the latest cautionary tale involving the so-called Internet of things, white-hat hackers have devised an attack against network-connected lightbulbs that exposes Wi-Fi passwords to anyone in proximity to one of the LED devices.

The attack works against LIFX smart lightbulbs, which can be turned on and off and adjusted using iOS- and Android-based devices. Ars Senior Reviews Editor Lee Hutchinson gave a good overview here of the Philips Hue lights, which are programmable, controllable LED-powered bulbs that compete with LIFX. The bulbs are part of a growing trend in which manufacturers add computing and networking capabilities to appliances so people can manipulate them remotely using smartphones, computers, and other network-connected devices. A 2012 Kickstarter campaign raised more than $1.3 million for LIFX, more than 13 times the original goal of $100,000.

According to a blog post published over the weekend, LIFX has updated the firmware used to control the bulbs after researchers discovered a weakness that allowed hackers within about 30 meters to obtain the passwords used to secure the connected Wi-Fi network. The credentials are passed from one networked bulb to another over a mesh network powered by 6LoWPAN, a wireless specification built on top of the IEEE 802.15.4 standard. While the bulbs used the Advanced Encryption Standard (AES) to encrypt the passwords, the underlying pre-shared key never changed, making it easy for the attacker to decipher the payload.
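To see why a never-changing pre-shared key is fatal, here is a toy illustration (a hash-based XOR keystream stands in for AES, and the key is invented): since every bulb ships the same hard-coded key, anyone who extracts it from a single device’s firmware can decrypt every network’s credentials.

```python
import hashlib

GLOBAL_FIRMWARE_KEY = b"same-key-burned-into-every-bulb"   # hypothetical

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Derive a keystream from the key plus a counter and XOR it with
    # the data; running it again with the same key reverses it.
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

wifi_password = b"hunter2-home-wifi"
ciphertext = keystream_xor(GLOBAL_FIRMWARE_KEY, wifi_password)

# An attacker who pulled the key from any bulb recovers the password:
print(keystream_xor(GLOBAL_FIRMWARE_KEY, ciphertext))  # b'hunter2-home-wifi'
```

The fix is the usual one: derive a fresh key per network (or per session) so that extracting one device’s secrets compromises only that device.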

May 19, 2014

“Parallel construction” and Godwinizing the NSA

Filed under: Government, History, USA, WW2 — Nicholas @ 09:33

At Popehat, Clark uses an excerpt from a Bruce Schneier post to make a larger point. Here’s what Bruce wrote last year:

This dynamic was vitally important during World War II. During the war, the British were able to break the German Enigma encryption machine and eavesdrop on German military communications. But while the Allies knew a lot, they would only act on information they learned when there was another plausible way they could have learned it. They even occasionally manufactured plausible explanations. It was just too risky to tip the Germans off that their encryption machines’ code had been broken.

And this is Clark’s take:

We know that the NSA collects all sorts of information on American citizens. We know that the FBI and the CIA have full access to this information. We know that the DEA also has full access to that data. And we know that when the DEA busts someone using information gleaned by the electronic panopticon of our internal spy organization, they take pains to hide the source of the information via the subterfuge of parallel construction.

The insight is this: our government is now dealing with the citizenry the same way that the British dealt with the Nazis: treating them as an external existential threat, spying on them, and taking pains to obfuscate the source of the information that they use to target their attacks.

Yeah, Godwin’s law, whatever, whatever. My point is NOT that the NSA is the same as the Nazi party (in fact, my argument has the NSA on the opposite side). My point is that the government now treats ordinary civilians as worthy of the same sort of tactics that they once used against the Nazis.

H/T to Bernard King for the link.

April 23, 2014

LibreSSL website – “This page scientifically designed to annoy web hipsters”

Filed under: Technology — Nicholas @ 09:24

Julian Sanchez linked to this Ars Technica piece on a new fork of OpenSSL:

OpenBSD founder Theo de Raadt has created a fork of OpenSSL, the widely used open source cryptographic software library that contained the notorious Heartbleed security vulnerability.

OpenSSL has suffered from a lack of funding and code contributions despite being used in websites and products by many of the world’s biggest and richest corporations.

The decision to fork OpenSSL is bound to be controversial given that OpenSSL powers hundreds of thousands of Web servers. When asked why he wanted to start over instead of helping to make OpenSSL better, de Raadt said the existing code is too much of a mess.

“Our group removed half of the OpenSSL source tree in a week. It was discarded leftovers,” de Raadt told Ars in an e-mail. “The Open Source model depends [on] people being able to read the code. It depends on clarity. That is not a clear code base, because their community does not appear to care about clarity. Obviously, when such cruft builds up, there is a cultural gap. I did not make this decision… in our larger development group, it made itself.”

The LibreSSL code base is on OpenBSD.org, and the project is supported financially by the OpenBSD Foundation and OpenBSD Project. LibreSSL has a bare bones website that is intentionally unappealing.

“This page scientifically designed to annoy web hipsters,” the site says. “Donate now to stop the Comic Sans and Blink Tags.” In explaining the decision to fork, the site links to a YouTube video of a cover of the Twisted Sister song “We’re not gonna take it.”

April 9, 2014

XKCD on the impact of “Heartbleed”

Filed under: Technology — Nicholas @ 11:00

Update: In case you’re not concerned about the seriousness of this issue, The Register‘s John Leyden would like you to think again.

The catastrophic crypto key password vulnerability in OpenSSL affects far more than web servers, with everything from routers to smartphones also affected.

The so-called “Heartbleed” vulnerability (CVE-2014-0160) can be exploited to extract information from servers running a vulnerable version of OpenSSL, and this includes email servers and Android smartphones as well as routers.

Hackers could potentially gain access to private encryption keys before using this information to decipher the encrypted traffic to and from vulnerable websites.

Web sites including Yahoo!, Flickr and OpenSSL were among the many left vulnerable to the megabug that exposed encryption keys, passwords and other sensitive information.

Preliminary tests suggested 47 of the 1000 largest sites are vulnerable to Heartbleed and that’s only among the less than half that provide support for SSL or HTTPS at all. Many of the affected sites – including Yahoo! – have since patched the vulnerability. Even so, security experts – such as Graham Cluley – remain concerned.

OpenSSL is a widely used encryption library that is a key component of technology that enables secure (https) website connections.

The bug exists in the OpenSSL 1.0.1 source code and stems from coding flaws in a fairly new feature known as the TLS Heartbeat Extension. “TLS heartbeats are used as ‘keep alive’ packets so that the ends of an encrypted connection can agree to keep the session open even when they don’t have any official data to exchange,” explains security veteran Paul Ducklin in a post on Sophos’ Naked Security blog.

The Heartbleed vulnerability in the OpenSSL cryptographic library might be exploited to reveal contents of secured communication exchanges. The same flaw might also be used to lift SSL keys.
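The bug itself is easy to model: the heartbeat handler trusted the length field in the request instead of the actual payload size, and echoed back whatever happened to sit in adjacent process memory. A simplified sketch (the buffer contents are invented, and real Heartbleed leaked C heap memory rather than a Python buffer):

```python
# Pretend process memory: the 4-byte heartbeat payload is followed by
# unrelated sensitive data.
PROCESS_MEMORY = bytearray(b"PING" + b"...secret private key material...")

def heartbeat_vulnerable(payload_offset: int, claimed_len: int) -> bytes:
    # BUG: no check that claimed_len matches the real payload length,
    # so the response can include bytes beyond the payload.
    return bytes(PROCESS_MEMORY[payload_offset:payload_offset + claimed_len])

def heartbeat_fixed(payload_offset: int, claimed_len: int, real_len: int) -> bytes:
    # The patch: silently discard requests whose claimed length
    # exceeds the actual payload length.
    if claimed_len > real_len:
        return b""
    return bytes(PROCESS_MEMORY[payload_offset:payload_offset + claimed_len])

# The client sent a 4-byte payload ("PING") but claims it was 36 bytes:
print(heartbeat_vulnerable(0, 36))   # leaks the "secret" bytes after PING
print(heartbeat_fixed(0, 36, 4))     # b''
```

In the real attack the claimed length could be up to 64KB per heartbeat, and the request could be repeated indefinitely, which is how keys and passwords ended up in the leaked memory.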

This means that sites could still be vulnerable to attacks after installing the patches in cases where a private key has been stolen. Sites therefore need to revoke exposed keys, reissue new keys, and invalidate all session keys and session cookies.

Bruce Schneier:

“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.

Half a million sites are vulnerable, including my own. Test your vulnerability here.

The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.

At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.

April 2, 2014

Enigma’s 21st century open sourced descendent

Filed under: History, Military, Technology, WW2 — Tags: , , — Nicholas @ 09:51

The Enigma device was used by the German military in World War 2 to encrypt and decrypt communication between units and headquarters on land and at sea. Original Enigma units — the few that are on the market at any time — sell for tens of thousands of dollars. You may not be able to afford an original, but you might be interested in a modern implementation of Enigma using Arduino-based open-source hardware and software:

Actual hand-crafted final design

Enigma machines have captivated everyone from legendary code breaker Alan Turing and the dedicated cryptographers from England’s Bletchley Park to historians and collectors the world over.

But while many history buffs would surely love to get their hands on an authentic Enigma machine used during WWII, the devices aren’t exactly affordable (last year, a 1944 German Enigma machine was available for auction at Bonhams with an estimated worth of up to $82,000). Enter the Open Enigma Project, a kit for building one from scratch.

The idea came to Marc Tessier and James Sanderson from S&T Geotronics by accident.

“We were working on designing and building intelligent Arduino-based open-source geocaching devices to produce a unique interactive challenge at an upcoming Geocaching Mega Event,” Tessier told Crave. “A friend of ours suggested we use an Enigma type encrypting/decrypting machine as the ultimate stage of the challenge and pointed us to an Instructables tutorial that used a kid’s toy to provide some Enigma encoding. We looked all over to buy a real Enigma machine even if we had to assemble it ourselves and realized that there was nothing available at the moment. So we decided to build our own.”

[…]

“Our version is an electronic microprocessor-based machine that is running software which is a mathematical expression of how the historical mechanical machine behaved,” Sanderson told Crave. “Having never touched a real Enigma M4, we built our open version based on what we read online. From what we understand, the real electro-mechanical devices are much heavier and a little bigger.”

They took some design liberties — replacing the physical rotors with LED units and replacing the light bulbs with white LEDs. The replica can be modified by changing the Arduino code and can communicate to any computer via USB. Future versions may include Wi-Fi and/or Bluetooth.
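As Sanderson says, the machine’s behaviour reduces to a mathematical expression. A minimal sketch of the signal path, using the historical rotor I and reflector B wirings with a single stepping rotor (a real M4 adds three more rotors and a plugboard), shows the famous self-reciprocal property: the same settings both encrypt and decrypt, and no letter ever enciphers to itself.

```python
import string

ALPHA = string.ascii_uppercase
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"      # historical rotor I wiring
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"  # historical reflector B (an involution)

def crypt(text: str, start_pos: int = 0) -> str:
    out, pos = [], start_pos
    for ch in text:
        pos = (pos + 1) % 26                       # rotor steps before each letter
        i = (ALPHA.index(ch) + pos) % 26           # enter the rotor at its offset
        i = (ALPHA.index(ROTOR[i]) - pos) % 26     # forward through the wiring
        i = ALPHA.index(REFLECTOR[i])              # bounce off the reflector
        i = (i + pos) % 26                         # back into the rotor
        i = (ROTOR.index(ALPHA[i]) - pos) % 26     # inverse wiring on the return
        out.append(ALPHA[i])
    return "".join(out)

msg = "ATTACKATDAWN"
ct = crypt(msg)
print(ct)
print(crypt(ct))   # same settings decrypt it: ATTACKATDAWN
```

The reflector is what makes encryption and decryption the same operation, and it is also the machine’s great weakness: because no letter can map to itself, Bletchley Park could rule out crib positions at a glance.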

November 14, 2013

How the internet was “weaponized”

Filed under: Government, Technology, USA — Nicholas @ 07:45

In Wired, Nicholas Weaver looks back on the way the internet was converted from a passive network infrastructure to a spy agency wonderland:

According to revelations about the QUANTUM program, the NSA can “shoot” (their words) an exploit at any target it desires as his or her traffic passes across the backbone. It appears that the NSA and GCHQ were the first to turn the internet backbone into a weapon; absent Snowdens of their own, other countries may do the same and then say, “It wasn’t us. And even if it was, you started it.”

If the NSA can hack Petrobras, the Russians can justify attacking ExxonMobil. If GCHQ can hack Belgacom to enable covert wiretaps, France can do the same to AT&T. If the Canadians target the Brazilian Ministry of Mines and Energy, the Chinese can target the U.S. Department of the Interior. We now live in a world where, if we are lucky, our attackers may be every country our traffic passes through except our own.

Which means the rest of us — and especially any company or individual whose operations are economically or politically significant — are now targets. All cleartext traffic is not just information being sent from sender to receiver, but is a possible attack vector.

[…]

The only self defense from all of the above is universal encryption. Universal encryption is difficult and expensive, but unfortunately necessary.

Encryption doesn’t just keep our traffic safe from eavesdroppers, it protects us from attack. DNSSEC validation protects DNS from tampering, while SSL armors both email and web traffic.
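In practice, much of that armour is already the default in modern tooling. A minimal Python sketch, using only the standard library's `ssl` module: the default context both verifies the server's certificate chain and checks the hostname, which is exactly the property that makes backbone-level tampering expensive.

```python
import ssl

# create_default_context() gives settings suitable for client use:
# certificate verification on, hostname checking on, weak protocol
# versions disabled.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # -> True
print(ctx.check_hostname)                    # -> True
```

Wrapping a socket with this context (via `ctx.wrap_socket(sock, server_hostname=...)`) then refuses any connection whose certificate an on-path attacker has substituted.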

There are many engineering and logistic difficulties involved in encrypting all traffic on the internet, but it's one we must overcome if we are to defend ourselves from the entities that have weaponized the backbone.

September 15, 2013

Bruce Schneier on what you can do to stay out of the NSA’s view

Filed under: Liberty, Technology — Tags: , , , , , — Nicholas @ 10:44

Short of going entirely off the grid, you can't stay completely hidden, but there are some things you can do to decrease your visibility to the NSA:

With all this in mind, I have five pieces of advice:

  1. Hide in the network. Implement hidden services. Use Tor to anonymize yourself. Yes, the NSA targets Tor users, but it’s work for them. The less obvious you are, the safer you are.
  2. Encrypt your communications. Use TLS. Use IPsec. Again, while it’s true that the NSA targets encrypted connections — and it may have explicit exploits against these protocols — you’re much better protected than if you communicate in the clear.
  3. Assume that while your computer can be compromised, it would take work and risk on the part of the NSA — so it probably isn’t. If you have something really important, use an air gap. Since I started working with the Snowden documents, I bought a new computer that has never been connected to the Internet. If I want to transfer a file, I encrypt the file on the secure computer and walk it over to my Internet computer, using a USB stick. To decrypt something, I reverse the process. This might not be bulletproof, but it’s pretty good.
  4. Be suspicious of commercial encryption software, especially from large vendors. My guess is that most encryption products from large US companies have NSA-friendly back doors, and many foreign ones probably do as well. It’s prudent to assume that foreign products also have foreign-installed backdoors. Closed-source software is easier for the NSA to backdoor than open-source software. Systems relying on master secrets are vulnerable to the NSA, through either legal or more clandestine means.
  5. Try to use public-domain encryption that has to be compatible with other implementations. For example, it’s harder for the NSA to backdoor TLS than BitLocker, because any vendor’s TLS has to be compatible with every other vendor’s TLS, while BitLocker only has to be compatible with itself, giving the NSA a lot more freedom to make changes. And because BitLocker is proprietary, it’s far less likely those changes will be discovered. Prefer symmetric cryptography over public-key cryptography. Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.
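Schneier's "prefer symmetric cryptography" point rests on math that is easy to demonstrate. The one-time pad below is an illustrative sketch of the symmetric principle at its purest (not one of the tools Schneier names): with a truly random key as long as the message, used once, XOR encryption is information-theoretically secure, and decryption is the identical operation.

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # XOR each byte with a key byte. Because (p ^ k) ^ k == p,
    # the same function both encrypts and decrypts.
    assert len(key) >= len(data), "one-time pad key must cover the message"
    return bytes(p ^ k for p, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # cryptographically random, used once

ciphertext = otp_xor(msg, key)
print(otp_xor(ciphertext, key) == msg)  # -> True
```

Real symmetric systems like AES trade the impractical key-management burden of the pad for keys of fixed size, but the core property is the same: security rests on the secrecy of a shared key, with no NSA-influenced curve constants in sight.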

Since I started working with Snowden’s documents, I have been using GPG, Silent Circle, Tails, OTR, TrueCrypt, BleachBit, and a few other things I’m not going to write about. There’s an undocumented encryption feature in my Password Safe program from the command line; I’ve been using that as well.

I understand that most of this is impossible for the typical Internet user. Even I don’t use all these tools for most everything I am working on. And I’m still primarily on Windows, unfortunately. Linux would be safer.

The NSA has turned the fabric of the Internet into a vast surveillance platform, but they are not magical. They’re limited by the same economic realities as the rest of us, and our best defense is to make surveillance of us as expensive as possible.

Trust the math. Encryption is your friend. Use it well, and do your best to ensure that nothing can compromise it. That’s how you can remain secure even in the face of the NSA.

September 6, 2013

Bruce Schneier on taking back the internet

Filed under: Liberty, Technology, USA — Tags: , , , , — Nicholas @ 08:51

From his article in yesterday’s Guardian:

This is not the internet the world needs, or the internet its creators envisioned. We need to take it back.

And by we, I mean the engineering community.

Yes, this is primarily a political problem, a policy matter that requires political intervention.

But this is also an engineering problem, and there are several things engineers can — and should — do.

One, we should expose. If you do not have a security clearance, and if you have not received a National Security Letter, you are not bound by a federal confidentiality requirement or a gag order. If you have been contacted by the NSA to subvert a product or protocol, you need to come forward with your story. Your obligations to your employer don't cover illegal or unethical activity. If you work with classified data and are truly brave, expose what you know. We need whistleblowers.

We need to know exactly how the NSA and other agencies are subverting routers, switches, the internet backbone, encryption technologies and cloud systems. I already have five stories from people like you, and I’ve just started collecting. I want 50. There’s safety in numbers, and this form of civil disobedience is the moral thing to do.

Two, we can design. We need to figure out how to re-engineer the internet to prevent this kind of wholesale spying. We need new techniques to prevent communications intermediaries from leaking private information.

We can make surveillance expensive again. In particular, we need open protocols, open implementations, open systems — these will be harder for the NSA to subvert.

The Internet Engineering Task Force, the group that defines the standards that make the internet run, has a meeting planned for early November in Vancouver. This group needs to dedicate its next meeting to this task. This is an emergency, and demands an emergency response.

Update: Glenn Greenwald retweeted this, saying it was “not really hard for a rational person to understand why this is newsworthy”.
