Quotulatiousness

July 1, 2017

Hunting the Bismarck – III: A Chance to Strike – Extra History

Filed under: Britain, Germany, History, Military — Nicholas @ 02:00

Published on May 25, 2017

Sponsored by Wargaming! New players: Download World of Warships and use the code EXTRA1 for free goodies! http://cpm.wargaming.net/i3v7c6uu/?pu…

The order went out: Sink the Bismarck. Ships converged from all over the Atlantic to hunt down the pride of the German navy, and Swordfish planes launched from the aircraft carrier Ark Royal raced to harry the great warship.

June 7, 2017

“Hey, Joey, ‘splain me public key cryptography!”

Filed under: Technology — Nicholas @ 10:20

Joey deVilla explains public key cryptography for non-geeks:

Have you ever tried to explain public-key cryptography (a.k.a. asymmetric cryptography) or the concept of public and private keys and what they’re for to non-techies? It’s tough, and I’ve spent the last little while trying to come up with an analogy that’s layperson-friendly and memorable.

It turns out that it already exists, and Panayotis Vryonis […] came up with it. Go over to his blog and check out the article titled Public-key cryptography for non-geeks. Whenever I have to explain what private keys and public keys are for to someone who’s new to cryptography, I use Vryonis’ “box with special lock and special keys” analogy. Not only does the explanation work, but it’s so good that the people I’ve used it on have used it themselves to explain public-key crypto to others.

I’ve recently used Vryonis’ analogy in a couple of presentations and thought I’d share images from my slides. Enjoy!
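For the more code-minded, the “box with a special lock” analogy maps directly onto textbook RSA. Here is a minimal sketch with deliberately tiny primes (utterly insecure, for illustration only): the public key locks, and only the private key unlocks.

```python
# Toy RSA: the public key (e, n) locks a message; only the
# private key (d, n) unlocks it. Tiny primes for readability --
# never use numbers this small for real cryptography.

p, q = 61, 53
n = p * q                      # modulus, shared by both keys
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)

def encrypt(m, public=(e, n)):
    """Anyone holding the public key can lock a message."""
    exp, mod = public
    return pow(m, exp, mod)

def decrypt(c, private=(d, n)):
    """Only the private-key holder can unlock it."""
    exp, mod = private
    return pow(c, exp, mod)

message = 42
ciphertext = encrypt(message)
assert decrypt(ciphertext) == message
```

Note the asymmetry that makes the analogy work: `encrypt` uses only public values, so the “lock” can be handed out freely, while `decrypt` needs `d`, which never leaves the owner.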

May 16, 2017

The Virtual Lorenz machine

Filed under: Germany, History, Military, Technology — Nicholas @ 03:00

At The Register, Gareth Corfield discusses the new virtual coding device simulating the WW2 German Lorenz cipher machine:

The National Museum of Computing has put an emulation of an “unbreakable” Second World War German cipher machine online for world+dog to admire.

The Virtual Lorenz machine has been launched in honour of WW2 codebreaker Bill Tutte, the man who broke the crypto used in the twelve-rotor cipher machine.

As The National Museum of Computing (TNMOC) puts it, Tutte’s work “shortened the conflict” – even though he had never even seen the cipher machine or its crypto scheme, the breaking of which the museum added was “the greatest intellectual feat of the war”.

TNMOC unveiled the Virtual Lorenz today to celebrate Tutte’s 100th birthday. Built by computing chap Martin Gillow, the simulation accurately reproduces the whirring of the cipher wheels (you might want to turn it down as the “background whirr” is a little too realistic).

The BBC profiled the “gifted mathematician” a few years ago, highlighting how the Lorenz machine whose secrets Tutte cracked was “several degrees more advanced than Enigma”, the cipher famously cracked by Tutte’s colleague Alan Turing. Tutte cracked the Lorenz in about six months, reverse-engineering its workings by reading intercepted Lorenz messages. When the Allies wanted to fool Hitler into believing the D-Day landings would take place in a false location, our ability to read Lorenz was critical for confirming that the ruse had worked – saving thousands of soldiers, sailors and airmen’s lives.
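The Lorenz machine was an additive cipher: it XORed a wheel-generated keystream onto the 5-bit teleprinter code of the message. A hypothetical sketch of why a reused keystream (a “depth”, which gave Bletchley Park its first opening into Lorenz) is fatal: XOR the two ciphertexts together and the key cancels out entirely, leaving only the combination of the two plaintexts for a cryptanalyst to tease apart.

```python
# Additive (XOR) cipher in the style of Lorenz. If two messages
# are enciphered with the SAME keystream -- a "depth" -- XORing
# the ciphertexts cancels the key completely.

import os

def xor_cipher(data: bytes, keystream: bytes) -> bytes:
    """Additive cipher: the same operation encrypts and decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream))

msg_a = b"ATTACK AT DAWN"
msg_b = b"RETREAT TO SEA"
keystream = os.urandom(max(len(msg_a), len(msg_b)))

ct_a = xor_cipher(msg_a, keystream)
ct_b = xor_cipher(msg_b, keystream)

# The key vanishes: C_a XOR C_b == P_a XOR P_b
combined = bytes(a ^ b for a, b in zip(ct_a, ct_b))
assert combined == bytes(a ^ b for a, b in zip(msg_a, msg_b))
```

The real attack went much further, of course: from intercepts like these, Tutte deduced the wheel structure of a machine he had never seen.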

[…]

The Virtual Lorenz can be found here. Word to the wise – it’s not on an HTTPS site, so if you’re hoping to use it to thwart GCHQ, you might want to think again.

January 16, 2017

100 years ago today

Filed under: Americas, Britain, Europe, History, Military, USA — Nicholas @ 09:26

From the Facebook page of The Great War:

On this day 100 years ago, a coded telegram was sent by German foreign secretary Arthur Zimmermann to the German Ambassador to Mexico, Heinrich von Eckardt. In this telegram, Zimmermann instructed von Eckardt to offer Mexico a military alliance and financial support against the United States, should the US abandon its neutrality. This was a real possibility, since Germany was about to unleash unrestricted submarine warfare on February 1, 1917.

To understand this telegram, it is important to know that talks about military cooperation, and even a military alliance, between Mexico and the German Empire had been going on since 1915.

The telegram was sent via the American undersea cable, since the German cable had been cut by the British when the war broke out. US President Woodrow Wilson had offered the Germans the use of the American cable for diplomatic correspondence. What neither Wilson nor the Germans knew: the cable was monitored by British intelligence at a relay station in England. Furthermore, the British codebreakers of Room 40 had already cracked the German encryption.

The biggest challenge for the British now was to reveal the content of this telegram without admitting that they were monitoring the cable while ensuring it had the desired impact.

March 20, 2016

Apple software engineers threaten to quit rather than crack encryption for the government

Filed under: Business, Government, Liberty, Technology, USA — Nicholas @ 02:00

It’s only a rumour rather than a definite stand, but it is a hopeful one for civil liberties:

The spirit of anarchy and anti-establishment still runs strong at Apple. Rather than comply with the government’s requests to develop a so-called “GovtOS” to unlock the iPhone 5c of San Bernardino shooter Syed Rizwan Farook, The New York Times‘ half-dozen sources say that some software engineers may quit instead. “It’s an independent culture and a rebellious one,” former Apple engineering manager Jean-Louis Gassée tells NYT. “If the government tries to compel testimony or action from these engineers, good luck with that.”

Former senior product manager for Apple’s security and privacy division Window Snyder agrees. “If someone attempts to force them to work on something that’s outside their personal values, they can expect to find a position that’s a better fit somewhere else.”

In another instance of Apple’s company culture clashing with what the federal government demands, the development teams are apparently relatively siloed off from one another. It isn’t until a product gets closer to release that disparate teams like hardware and software engineers come together for finalizing a given gizmo. NYT notes that the team of six to 10 engineers needed to develop the back door doesn’t currently exist and that forcing any sort of collaboration would be incredibly difficult, again, due to how Apple works internally.

January 31, 2016

“To be honest, the spooks love PGP”

Filed under: Liberty, Technology — Nicholas @ 03:00

If nothing else, it’s a needle in their acres of data haystacks. Use of any kind of encryption doesn’t necessarily let CSIS and their foreign friends read your communications, but it alerts them that you think you’ve got something to say that they shouldn’t read:

Although the cops and Feds won’t stop banging on and on about encryption, the spies have a different take on the use of crypto.

To be brutally blunt, they love it. Why? Because using detectable encryption technology like PGP, Tor, VPNs and so on, lights you up on the intelligence agencies’ dashboards. Agents and analysts don’t even have to see the contents of the communications – the metadata is enough for g-men to start making your life difficult.

“To be honest, the spooks love PGP,” Nicholas Weaver, a researcher at the International Computer Science Institute, told the Usenix Enigma conference in San Francisco on Wednesday. “It’s really chatty and it gives them a lot of metadata and communication records. PGP is the NSA’s friend.”

Weaver, who has spent much of the last decade investigating NSA techniques, said that all PGP traffic, including who sent it and to whom, is automatically stored and backed up onto tape. This can then be searched as needed when matched with other surveillance data.

Given that the NSA has taps on almost all of the internet’s major trunk routes, the PGP records can be incredibly useful. It’s a simple matter to build a script that can identify one PGP user and then track all their contacts to build a journal of their activities.
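The kind of script the article describes needs nothing more than traffic metadata. Here is a hedged sketch of the idea, with entirely invented records: given only (sender, recipient, timestamp) tuples for PGP mail, build a contact journal for one target without ever reading a message body.

```python
# Sketch of metadata-only contact tracking: no message contents,
# just who mailed whom and when. All records here are invented
# for illustration.

from collections import defaultdict

records = [
    ("alice", "bob",   "2016-01-03T09:12"),
    ("alice", "carol", "2016-01-03T09:40"),
    ("dave",  "alice", "2016-01-04T22:05"),
    ("bob",   "erin",  "2016-01-05T11:30"),
]

def contact_journal(target, records):
    """Map each of the target's contacts to the times they exchanged mail."""
    journal = defaultdict(list)
    for sender, recipient, when in records:
        if sender == target:
            journal[recipient].append(when)
        elif recipient == target:
            journal[sender].append(when)
    return dict(journal)

journal = contact_journal("alice", records)
assert sorted(journal) == ["bob", "carol", "dave"]
```

Run against a tap on a trunk route rather than four toy tuples, this is exactly the “journal of their activities” Weaver describes.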

Even better is the Mujahedeen Secrets encryption system, which was released by the Global Islamic Media Front to allow Al Qaeda supporters to communicate in private. Weaver said that not only was it even harder to use than PGP, but it was a boon for metadata – since almost anyone using it identified themselves as a potential terrorist.

“It’s brilliant!” enthused Weaver. “Whoever it was at the NSA or GCHQ who invented it give them a big Christmas bonus.”

October 29, 2015

Free HTTPS certificates coming soon

Filed under: Technology — Nicholas @ 02:00

At Ars Technica, Dan Goodin discussed the imminent availability of free HTTPS certificates to all registered domain owners:


A nonprofit effort aimed at encrypting the entire Web has reached an important milestone: its HTTPS certificates are now trusted by all major browsers.

The service, which is backed by the Electronic Frontier Foundation, Mozilla, Cisco Systems, and Akamai, is known as Let’s Encrypt. As Ars reported last year, the group will offer free HTTPS certificates to anyone who owns a domain name. Let’s Encrypt promises to provide open source tools that automate processes for both applying for and receiving the credential and configuring a website to use it securely.

HTTPS uses the transport layer security or secure sockets layer protocols to secure websites in two important ways. First, it encrypts communications passing between visitors and the Web server so they can’t be read or modified by anyone who may be monitoring the connection. Second, in the case of bare bones certificates, it cryptographically proves that a server belongs to the same organization or person with control over the domain, rather than an imposter posing as that organization. (Extended validation certificates go a step beyond by authenticating the identity of the organization or individual.)
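The two guarantees described above — encryption of the channel and authentication of the server — correspond directly to settings in Python’s standard-library `ssl` module, where both are on by default. A minimal sketch:

```python
# The two HTTPS guarantees map onto defaults in Python's ssl module:
# certificate checking (authentication) and hostname verification
# are both enabled out of the box.

import ssl

ctx = ssl.create_default_context()

# The server must present a certificate chaining to a trusted CA...
assert ctx.verify_mode == ssl.CERT_REQUIRED
# ...and that certificate must actually name the host we asked for.
assert ctx.check_hostname is True

# A context like this would then wrap a TCP socket, e.g.:
#   with socket.create_connection(("example.org", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
#           ...  # traffic here is encrypted and the peer authenticated
```

A certificate from Let’s Encrypt satisfies exactly this check: it proves control of the domain, which is all `CERT_REQUIRED` plus hostname verification demands.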

October 26, 2015

Going price for a working Enigma machine – $365,000

Filed under: Europe, History, Military, Technology — Nicholas @ 03:00

Lester Haines reports on a recent record auction price for an Enigma machine:

A fully-functioning four-rotor M4 Enigma WW2 cipher machine has sold at auction for $365,000.

Enigma machine

The German encryption device, as used by the U-Boat fleet and described as “one of the rarest of all the Enigma machines”, went under the hammer at Bonhams in New York last night as part of the “Conflicts of the 20th Century” sale.

The M4 was adopted by the German Navy, the Kriegsmarine, in early 1942 following the capture of U-570 in August 1941. Although the crew of U-570 had destroyed their three-rotor Enigma, the British found aboard written material which compromised the security of the machine.

The traffic to and from the replacement machines was dubbed “Shark” by codebreakers at Bletchley Park. Decryption proved troublesome, due in part to an initial lack of “cribs” (identified or suspected plaintext in an encrypted message) for the new device, but by December 1942, the British were regularly cracking M4 messages.

I recently read David O’Keefe’s One Day in August, which seems to explain the otherwise inexplicable launch of “Operation Jubilee”, the Dieppe raid … in his reading, the raid was actually cover for British intelligence operatives trying to snatch one or more of the new Enigma machines (like the one shown above) without tipping off the Germans that this was the actual goal. Joel Ralph reviewed the book when it was released:

One Day in August, by David O’Keefe, takes a completely different approach to the Dieppe landing. With significant new evidence in hand, O’Keefe seeks to reframe the entire raid within the context of the secret naval intelligence war being fought against Nazi Germany.

On February 1, 1942, German U-boats operating in the Atlantic Ocean switched from using a three-rotor Enigma code machine to a new four-rotor machine. Britain’s Naval Intelligence Division, which had broken the three-rotor code and was regularly reading German coded messages, was suddenly left entirely in the dark as to the positions and intentions of enemy submarines. By the summer of 1942, the Battle of the Atlantic had reached a state of crisis and was threatening to cut off Britain from the resources needed to carry on with the war.

O’Keefe spends nearly two hundred pages documenting the secret war against Germany and the growth of the Naval Intelligence Division. What ties this to Dieppe and sparked O’Keefe’s research was the development of a unique naval intelligence commando unit tasked with retrieving vital code-breaking material. As O’Keefe’s research reveals, the origins of this unit were at Dieppe, on an almost suicidal mission to gather intelligence they hoped would crack the four-rotor Enigma machine.

O’Keefe has uncovered new documents and first-hand accounts that provide evidence for the existence of such a mission. But he takes it one step further and argues that these secret commandos were not simply along for the ride at Dieppe. Instead, he claims, the entire Dieppe raid was cover for their important task.

It’s easy to dismiss O’Keefe’s argument as too incredible (Zuehlke does so quickly in his brief conclusion [in his book Operation Jubilee, August 19, 1942]). But O’Keefe would argue that just about everything associated with combined operations defied conventional military logic, from Operation Ruthless, a planned but never executed James Bond-style mission, to the successful raid on the French port of St. Nazaire only months before Dieppe.

Clearly this commando operation was an important part of the Dieppe raid. But, while the circumstantial evidence is robust, there is no single clear document that directly lays out the Dieppe raid as cover for a secret “pinch by design” operation to steal German code books and Enigma material.

July 17, 2015

The case for encryption – “Encryption should be enabled for everything by default”

Filed under: Liberty, Technology — Nicholas @ 03:00

Bruce Schneier explains why you should care (a lot) about having your data encrypted:

Encryption protects our data. It protects our data when it’s sitting on our computers and in data centers, and it protects it when it’s being transmitted around the Internet. It protects our conversations, whether video, voice, or text. It protects our privacy. It protects our anonymity. And sometimes, it protects our lives.

This protection is important for everyone. It’s easy to see how encryption protects journalists, human rights defenders, and political activists in authoritarian countries. But encryption protects the rest of us as well. It protects our data from criminals. It protects it from competitors, neighbors, and family members. It protects it from malicious attackers, and it protects it from accidents.

Encryption works best if it’s ubiquitous and automatic. The two forms of encryption you use most often — https URLs on your browser, and the handset-to-tower link for your cell phone calls — work so well because you don’t even know they’re there.

Encryption should be enabled for everything by default, not a feature you turn on only if you’re doing something you consider worth protecting.

This is important. If we only use encryption when we’re working with important data, then encryption signals that data’s importance. If only dissidents use encryption in a country, that country’s authorities have an easy way of identifying them. But if everyone uses it all of the time, encryption ceases to be a signal. No one can distinguish simple chatting from deeply private conversation. The government can’t tell the dissidents from the rest of the population. Every time you use encryption, you’re protecting someone who needs to use it to stay alive.

May 8, 2015

Quantum Insert

Filed under: Britain, Technology, USA — Nicholas @ 02:00

Kim Zetter talks about some of the NSA’s more sneaky ways of intercepting communications:

Among all of the NSA hacking operations exposed by whistleblower Edward Snowden over the last two years, one in particular has stood out for its sophistication and stealthiness. Known as Quantum Insert, the man-on-the-side hacking technique has been used to great effect since 2005 by the NSA and its partner spy agency, Britain’s GCHQ, to hack into high-value, hard-to-reach systems and implant malware.

Quantum Insert is useful for getting at machines that can’t be reached through phishing attacks. It works by hijacking a browser as it’s trying to access web pages and forcing it to visit a malicious web page, rather than the page the target intended to visit. The attackers can then surreptitiously download malware onto the target’s machine from the rogue web page.

Quantum Insert has been used to hack the machines of terrorist suspects in the Middle East, but it was also used in a controversial GCHQ/NSA operation against employees of the Belgian telecom Belgacom and against workers at OPEC, the Organization of Petroleum Exporting Countries. The “highly successful” technique allowed the NSA to place 300 malicious implants on computers around the world in 2010, according to the spy agency’s own internal documents — all while remaining undetected.

But now security researchers with Fox-IT in the Netherlands, who helped investigate that hack against Belgacom, have found a way to detect Quantum Insert attacks using common intrusion detection tools such as Snort, Bro and Suricata.
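The detection idea is elegant: a man-on-the-side attacker has to race the real server, so a passive sniffer sees two TCP segments claiming the same sequence number but carrying different payloads. Here is a hedged sketch of that check with invented packet data; the real Fox-IT signatures for Snort, Bro and Suricata operate on live captures, but the principle is the same.

```python
# Quantum Insert detection sketch: flag any TCP stream where one
# sequence number arrives twice with DIFFERENT payloads (an injected
# response racing the real one). A benign retransmission repeats the
# same bytes and is not flagged. Packet data below is invented.

def find_insert_candidates(packets):
    """packets: list of (stream_id, seq, payload) tuples."""
    seen = {}
    alerts = []
    for stream, seq, payload in packets:
        key = (stream, seq)
        if key in seen and seen[key] != payload:
            alerts.append(key)
        seen.setdefault(key, payload)
    return alerts

packets = [
    ("flow1", 1000, b"HTTP/1.1 200 OK..."),     # injected response wins the race
    ("flow1", 1000, b"HTTP/1.1 302 Found..."),  # real server's answer arrives late
    ("flow2", 5000, b"HTTP/1.1 200 OK..."),
    ("flow2", 5000, b"HTTP/1.1 200 OK..."),     # retransmission: same bytes, no alert
]
assert find_insert_candidates(packets) == [("flow1", 1000)]
```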

February 15, 2015

The term “carjacking” may take on a new meaning

Filed under: Law, Technology — Nicholas @ 05:00

Earlier this month, The Register‘s Iain Thomson summarized the rather disturbing report released by Senator Ed Markey (D-MA) on the self-reported security (or lack thereof) in modern automobile internal networks:

In short, as we’ve long suspected, the computers in today’s cars can be hijacked wirelessly by feeding specially crafted packets of data into their networks. There’s often no need for physical contact; no leaving of evidence lying around after getting your hands dirty.

This means, depending on the circumstances, the software running in your dashboard can be forced to unlock doors, or become infected with malware, and records of where you’ve been and how fast you were going may be obtained. The lack of encryption in various models means sniffed packets may be readable.

Key systems to start up engines, the electronics connecting up vital things like the steering wheel and brakes, and stuff on the CAN bus, tend to be isolated and secure, we’re told.

The ability for miscreants to access internal systems wirelessly, cause mischief to infotainment and navigation gear, and invade one’s privacy, is irritating, though.

“Drivers have come to rely on these new technologies, but unfortunately the automakers haven’t done their part to protect us from cyber-attacks or privacy invasions,” said Markey, a member of the Senate’s Commerce, Science and Transportation Committee.

“Even as we are more connected than ever in our cars and trucks, our technology systems and data security remain largely unprotected. We need to work with the industry and cyber-security experts to establish clear rules of the road to ensure the safety and privacy of 21st-century American drivers.”

Of the 17 car makers who replied [PDF] to Markey’s letters (Tesla, Aston Martin, and Lamborghini didn’t), all made extensive use of computing in their 2014 models, with some carrying 50 electronic control units (ECUs) running on a series of internal networks.

BMW, Chrysler, Ford, General Motors, Honda, Hyundai, Jaguar Land Rover, Mazda, Mercedes-Benz, Mitsubishi, Nissan, Porsche, Subaru, Toyota, Volkswagen (with Audi), and Volvo responded to the study. According to the senator’s six-page dossier:

  • Over 90 per cent of vehicles manufactured in 2014 had a wireless network of some kind — such as Bluetooth to link smartphones to the dashboard or a proprietary standard for technicians to pull out diagnostics.
  • Only six automakers have any kind of security software running in their cars — such as firewalls for blocking connections from untrusted devices, or encryption for protecting data in transit around the vehicle.
  • Just five secured wireless access points with passwords, encryption or proximity sensors that (in theory) only allow hardware detected within the car to join a given network.
  • And only models made by two companies can alert the manufacturers in real time if a malicious software attack is attempted — the others wait until a technician checks at the next servicing.

There wasn’t much detail on the security of over-the-air updates for firmware, nor the use of crypto to protect personal data being phoned home from vehicles to an automaker’s HQ.

January 14, 2015

British PM’s latest technological brain fart

Filed under: Britain, Law, Liberty, Technology — Nicholas @ 07:43

Cory Doctorow explains why David Cameron’s proposals are not just dumb, but doubleplus-dumb:

What David Cameron thinks he’s saying is, “We will command all the software creators we can reach to introduce back-doors into their tools for us.” There are enormous problems with this: there’s no back door that only lets good guys go through it. If your Whatsapp or Google Hangouts has a deliberately introduced flaw in it, then foreign spies, criminals, and crooked police (like those who fed sensitive information to the tabloids in the hacking scandal, and the high-level police who secretly worked for organised crime for years) will eventually discover this vulnerability. They — and not just the security services — will be able to use it to intercept all of our communications. That includes everything from the pictures of your kids in your bath that you send to your parents to the trade secrets you send to your co-workers.

But this is just for starters. David Cameron doesn’t understand technology very well, so he doesn’t actually know what he’s asking for.

For David Cameron’s proposal to work, he will need to stop Britons from installing software that comes from software creators who are out of his jurisdiction. The very best in secure communications are already free/open source projects, maintained by thousands of independent programmers around the world. They are widely available, and thanks to things like cryptographic signing, it is possible to download these packages from any server in the world (not just big ones like Github) and verify, with a very high degree of confidence, that the software you’ve downloaded hasn’t been tampered with.
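The “verify from any server” property Doctorow mentions rests on simple cryptographic checks. A minimal sketch of the integrity half, using a SHA-256 digest published out of band (real distributions go further and sign the digest itself, e.g. with GPG, so the reference value can also be authenticated):

```python
# Minimal download-integrity check: compare a file's SHA-256 digest
# against a value published separately. Any mirror can serve the
# bytes; any tampering changes the digest and is detected.

import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

package = b"pretend this is a downloaded tarball"
published_digest = sha256_of(package)   # normally fetched from the project's site

# Verification succeeds on the genuine bytes...
assert sha256_of(package) == published_digest
# ...and fails on even a one-byte modification.
assert sha256_of(package + b"!") != published_digest
```

This is why blocking “notorious” download sites achieves nothing: the same verifiable bytes can come from anywhere.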

[…]

This, then, is what David Cameron is proposing:

* All Britons’ communications must be easy for criminals, voyeurs and foreign spies to intercept

* Any firms within reach of the UK government must be banned from producing secure software

* All major code repositories, such as Github and Sourceforge, must be blocked

* Search engines must not answer queries about web-pages that carry secure software

* Virtually all academic security work in the UK must cease — security research must only take place in proprietary research environments where there is no onus to publish one’s findings, such as industry R&D and the security services

* All packets in and out of the country, and within the country, must be subject to Chinese-style deep-packet inspection and any packets that appear to originate from secure software must be dropped

* Existing walled gardens (like iOS and games consoles) must be ordered to ban their users from installing secure software

* Anyone visiting the country from abroad must have their smartphones held at the border until they leave

* Proprietary operating system vendors (Microsoft and Apple) must be ordered to redesign their operating systems as walled gardens that only allow users to run software from an app store, which will not sell or give secure software to Britons

* Free/open source operating systems — that power the energy, banking, ecommerce, and infrastructure sectors — must be banned outright

David Cameron will say that he doesn’t want to do any of this. He’ll say that he can implement weaker versions of it — say, only blocking some “notorious” sites that carry secure software. But anything less than the programme above will have no material effect on the ability of criminals to carry on perfectly secret conversations that “we cannot read”. If any commodity PC or jailbroken phone can run any of the world’s most popular communications applications, then “bad guys” will just use them. Jailbreaking an OS isn’t hard. Downloading an app isn’t hard. Stopping people from running code they want to run is — and what’s more, it puts the whole nation — individuals and industry — in terrible jeopardy.

September 20, 2014

Can you trust Apple’s new commitment to your privacy?

Filed under: Business, Technology — Nicholas @ 12:32

David Akin posted a list of questions posed by John Gilmore, challenging the Apple iOS8 cryptography promises:

Gilmore considered what Apple said and considered how Apple creates its software — a closed, secret, proprietary method — and what coders like him know about the code that Apple says protects our privacy — pretty much nothing — and then wrote the following for distribution on Dave Farber‘s Interesting People listserv. I’m pretty sure neither Farber nor Gilmore will begrudge me reproducing it.

    And why do we believe [Apple]?

    • Because we can read the source code and the protocol descriptions ourselves, and determine just how secure they are?
    • Because they’re a big company and big companies never lie?
    • Because they’ve implemented it in proprietary binary software, and proprietary crypto is always stronger than the company claims it to be?
    • Because they can’t covertly send your device updated software that would change all these promises, for a targeted individual, or on a mass basis?
    • Because you will never agree to upgrade the software on your device, ever, no matter how often they send you updates?
    • Because this first release of their encryption software has no security bugs, so you will never need to upgrade it to retain your privacy?
    • Because if a future update INSERTS privacy or security bugs, we will surely be able to distinguish these updates from future updates that FIX privacy or security bugs?
    • Because if they change their mind and decide to lessen our privacy for their convenience, or by secret government edict, they will be sure to let us know?
    • Because they have worked hard for years to prevent you from upgrading the software that runs on their devices so that YOU can choose it and control it instead of them?
    • Because the US export control bureaucracy would never try to stop Apple from selling secure mass market proprietary encryption products across the border?
    • Because the countries that wouldn’t let Blackberry sell phones that communicate securely with your own corporate servers, will of course let Apple sell whatever high security non-tappable devices it wants to?
    • Because we’re Apple fanboys and the company can do no wrong?
    • Because they want to help the terrorists win?
    • Because NSA made them mad once, therefore they are on the side of the public against NSA?
    • Because it’s always better to wiretap people after you convince them that they are perfectly secure, so they’ll spill all their best secrets?

    There must be some other reason, I’m just having trouble thinking of it.

September 19, 2014

Doctorow – “The time has come to create privacy tools for normal people”

Filed under: Liberty, Technology — Nicholas @ 00:03

In the Guardian, Cory Doctorow says that we need privacy-enhancing technical tools that can be easily used by everyone, not just the highly technical (or highly paranoid) among us:

You don’t need to be a technical expert to understand privacy risks anymore: the Snowden revelations, the Syrian and Egyptian checkpoints where your Facebook logins are demanded in order to weigh your political allegiances (sometimes with fatal consequences), and the celebrities having their most intimate photos splashed all over the web make up a daily parade of internet security horrors.

The time has come to create privacy tools for normal people – people with a normal level of technical competence. That is, all of us, no matter what our level of technical expertise, need privacy. Some privacy measures do require extraordinary technical competence; if you’re Edward Snowden, with the entire NSA bearing down on your communications, you will need to be a real expert to keep your information secure. But the kind of privacy that makes you immune to mass surveillance and attacks-of-opportunity from voyeurs, identity thieves and other bad guys is attainable by anyone.

I’m a volunteer on the advisory board for a nonprofit that’s aiming to do just that: Simply Secure (which launches Thursday at simplysecure.org) brings together some very bright usability and cryptography experts with the aim of revamping the user interface of the internet’s favorite privacy tools, starting with OTR, the extremely secure chat system whose best-known feature is “perfect forward secrecy”, which gives each conversation its own unique keys, so a breach of one conversation’s keys can’t be used to snoop on others.
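The forward-secrecy property is worth a small illustration. Here is a toy sketch, with a deliberately small modulus, of the underlying idea: each conversation derives its key from fresh ephemeral Diffie-Hellman secrets that are discarded afterwards, so compromising one session’s key reveals nothing about any other. Real protocols like OTR use much larger groups or elliptic curves.

```python
# Toy "perfect forward secrecy": every session runs a fresh ephemeral
# Diffie-Hellman exchange, and the ephemeral secrets are thrown away.
# Small parameters for readability -- demonstration only.

import secrets

P = 2**127 - 1          # toy prime modulus (a Mersenne prime)
G = 5                   # generator

def session_key():
    """One ephemeral Diffie-Hellman exchange; secrets discarded after."""
    a = secrets.randbelow(P - 2) + 1        # Alice's ephemeral secret
    b = secrets.randbelow(P - 2) + 1        # Bob's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)       # public values exchanged in the open
    shared_alice = pow(B, a, P)             # both sides compute G^(a*b) mod P
    shared_bob = pow(A, b, P)
    assert shared_alice == shared_bob
    return shared_alice                     # a and b go out of scope here

# Two conversations, two independent keys: breaching one session's
# key gives an attacker nothing to use against the other.
k1, k2 = session_key(), session_key()
assert k1 != k2
```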

More importantly, Simply Secure’s process for attaining, testing and refining usability is the main product of its work. This process will be documented and published as a set of best practices for other organisations, whether they are for-profits or non-profits, creating a framework that anyone can use to make secure products easier for everyone.

July 10, 2014

Throwing a bit of light on security in the “internet of things”

Filed under: Technology — Nicholas @ 07:36

The “internet of things” is coming: more and more of your surroundings are going to be connected in a vastly expanded internet. A lot of attention needs to be paid to security in this new world, as Dan Goodin explains:

In the latest cautionary tale involving the so-called Internet of things, white-hat hackers have devised an attack against network-connected lightbulbs that exposes Wi-Fi passwords to anyone in proximity to one of the LED devices.

The attack works against LIFX smart lightbulbs, which can be turned on and off and adjusted using iOS- and Android-based devices. Ars Senior Reviews Editor Lee Hutchinson gave a good overview here of the Philips Hue lights, which are programmable, controllable LED-powered bulbs that compete with LIFX. The bulbs are part of a growing trend in which manufacturers add computing and networking capabilities to appliances so people can manipulate them remotely using smartphones, computers, and other network-connected devices. A 2012 Kickstarter campaign raised more than $1.3 million for LIFX, more than 13 times the original goal of $100,000.

According to a blog post published over the weekend, LIFX has updated the firmware used to control the bulbs after researchers discovered a weakness that allowed hackers within about 30 meters to obtain the passwords used to secure the connected Wi-Fi network. The credentials are passed from one networked bulb to another over a mesh network powered by 6LoWPAN, a wireless specification built on top of the IEEE 802.15.4 standard. While the bulbs used the Advanced Encryption Standard (AES) to encrypt the passwords, the underlying pre-shared key never changed, making it easy for the attacker to decipher the payload.
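The lesson generalises: a strong cipher with a never-changing pre-shared key protects nothing once that key is extracted from any single device. A hedged sketch of the failure mode, using a keyed-hash keystream as a stand-in for AES (the flaw demonstrated is the static key, not the cipher itself):

```python
# Why a fixed pre-shared key defeats the encryption: pull the key
# from ONE device's firmware and every payload on the mesh decrypts.
# A SHA-256-based keystream stands in for AES here; all names and
# values are invented for illustration.

import hashlib

SHARED_KEY = b"baked-into-every-bulb"   # identical in all firmware images

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Expand key+nonce into n bytes of keystream (counter mode style)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, nonce, plaintext):
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

decrypt = encrypt   # XOR stream cipher: same operation both ways

# One bulb shares the Wi-Fi password with another over the mesh...
wifi_password = b"hunter2-home-network"
ct = encrypt(SHARED_KEY, b"nonce-01", wifi_password)

# ...and an attacker who extracted SHARED_KEY from any bulb reads it.
assert decrypt(SHARED_KEY, b"nonce-01", ct) == wifi_password
```

The fix LIFX shipped amounts to not letting one extracted key unlock every device’s traffic, which is exactly the property the original firmware lacked.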
