Scratch the surface of “Silicon Valley culture” and you’ll find dozens of subcultures beneath. One means of production unites many tribes, but that’s about all that unites them. At a company the size of Google or even GitHub, you can expect to find as many varieties of cliques as you would in an equivalently sized high school, along with a “corporate culture” that’s as loudly promoted and roughly as genuine as the “school spirit” on display at every pep rally you were ever forced to sit through. One of those groups will invariably be the weirdoes.
Humans are social animals, and part of what makes a social species social is that its members place a high priority on signaling their commitment to other members of their species. Weirdoes’ priorities are different; our primary commitment is to an idea or a project or a field of inquiry. Species-membership commitment doesn’t just take a back seat, it’s in the trunk with a bag over its head.
Not only that, our primary commitments are so consuming that they leak over into everything we think, say, and do. This makes us stick out like the proverbial sore thumb: We’re unable to hide that our deepest loyalties aren’t necessarily to the people immediately around us, even if they’re around us every day. We have a name for people whose loyalties adhere to the field of technology — and to the society of our fellow weirdoes who we meet and befriend in technology-mediated spaces — rather than to the hairless apes nearby. I prefer this term to “weird nerds,” and so I’ll use it here: hackers.
You might not consider hackers to be a tribe apart, but I guarantee you that many — if not most — hackers themselves do. Eric S. Raymond’s “A Brief History of Hackerdom,” whose first draft dates to 1992, contains a litany of descriptions that speak to this:
They wore white socks and polyester shirts and ties and thick glasses and coded in machine language and assembler and FORTRAN and half a dozen ancient languages now forgotten….
The mainstream of hackerdom, (dis)organized around the Internet and by now largely identified with the Unix technical culture, didn’t care about the commercial services. These hackers wanted better tools and more Internet….
[I]nstead of remaining in isolated small groups each developing their own ephemeral local cultures, they discovered (or re-invented) themselves as a networked tribe.
Meredith Patterson, “When Nerds Collide: My intersectionality will have weirdoes or it will be bullshit”, Medium.com, 2014-04-23.
June 29, 2016
June 22, 2016
Of all the sound, fury, and quiet voices of reason in the storm of controversy about tech culture and what is to become of it, quiet voice of reason Zeynep Tufekci’s “No, Nate, brogrammers may not be macho, but that’s not all there is to it” moves the discussion farther forward than any other contribution I’ve seen to date. Sadly, though, it still falls short of truly bridging the conceptual gap between nerds and “weird nerds.” Speaking as a lifelong member of the weird-nerd contingent, it’s truly surreal that this distinction exists at all. I’m slightly older than Nate Silver and about a decade younger than Paul Graham, so it wouldn’t surprise me if either or both find it just as puzzling. There was no cultural concept of cool nerds, or even not-cool-but-not-that-weird nerds, when we were growing up, or even when we were entering the workforce.
That’s no longer true. My younger colleague @puellavulnerata observes that for a long time, there were only weird nerds, but when our traditional pursuits (programming, electrical engineering, computer games, &c) became a route to career stability, nerdiness and its surface-level signifiers got culturally co-opted by trend-chasers who jumped on the style but never picked up on the underlying substance that differentiates weird nerds from the culture that still shuns them. That doesn’t make them “fake geeks,” boy, girl, or otherwise — you can adopt geek interests without taking on the entire weird-nerd package — but it’s still an important distinction. Indeed, the notion of “cool nerds” serves to erase the very existence of weird nerds, to the extent that many people who aren’t weird nerds themselves only seem to remember we exist when we commit some faux pas by their standards.
Even so, science, technology, and mathematics continue to attract the same awkward, isolated, and lonely personalities they have always attracted. Weird nerds are made, not born, and our society turns them out at a young age. Tufekci argues that “life’s not just high school,” but the process of unlearning lessons ingrained from childhood takes a lot more than a cap and gown or even a $10 million VC check, especially when life continues to reinforce those lessons well into adulthood. When weird nerds watch the cool kids jockeying for social position on Twitter, we see no difference between these status games and the ones we opted out of in high school. No one’s offered evidence to the contrary, so what incentive do we have to play that game? Telling us to grow up, get over it, and play a game we’re certain to lose is a demand that we deny the evidence of our senses and an infantilising insult rolled into one.
This phenomenon explains much of the backlash from weird nerds against “brogrammers” and “geek feminists” alike. (If you thought the conflict was only between those two groups, or that someone who criticises one group must necessarily be a member of the other, then you haven’t been paying close enough attention.) Both groups are latecomers barging in on a cultural space that was once a respite for us, and we don’t appreciate either group bringing its cultural conflicts into our space in a way that demands we choose one side or the other. That’s a false dichotomy, and false dichotomies make us want to tear our hair out.
Meredith Patterson, “When Nerds Collide: My intersectionality will have weirdoes or it will be bullshit”, Medium.com, 2014-04-23.
May 18, 2016
March 20, 2016
It’s only a rumour rather than a definite stand, but it is a hopeful one for civil liberties:
The spirit of anarchy and anti-establishment still runs strong at Apple. Rather than comply with the government’s requests to develop a so-called “GovtOS” to unlock the iPhone 5c of San Bernardino shooter Syed Rizwan Farook, The New York Times‘ half-dozen sources say that some software engineers may quit instead. “It’s an independent culture and a rebellious one,” former Apple engineering manager Jean-Louis Gassée tells NYT. “If the government tries to compel testimony or action from these engineers, good luck with that.”
Former senior product manager for Apple’s security and privacy division Window Snyder agrees. “If someone attempts to force them to work on something that’s outside their personal values, they can expect to find a position that’s a better fit somewhere else.”
In another instance of Apple’s company culture clashing with what the federal government demands, the development teams are apparently relatively siloed off from one another. It isn’t until a product gets closer to release that disparate teams like hardware and software engineers come together for finalizing a given gizmo. NYT notes that the team of six to 10 engineers needed to develop the back door doesn’t currently exist and that forcing any sort of collaboration would be incredibly difficult, again, due to how Apple works internally.
November 17, 2015
September 30, 2015
Strategy Page on the less-than-perfect result of Russia’s attempt to get hackers to crack The Onion Router for a medium-sized monetary prize:
Back in mid-2014 Russia offered a prize of $111,000 for whoever could deliver, by August 20th 2014, software that would allow Russian security services to identify people on the Internet using Tor (The Onion Router), a system that enables users to access the Internet anonymously. On August 22nd Russia announced that an unnamed Russian contractor, with a top security clearance, had received the $111,000 prize. No other details were provided at the time. A year later it was revealed that the winner of the Tor prize is now spending even more on lawyers to try to get out of the contract to crack Tor’s security. It seems the winners found that their theoretical solution was too difficult to implement effectively. In part this was because the worldwide community of programmers and software engineers that developed Tor is constantly upgrading it. Cracking Tor security is firing at a moving target, one that constantly changes shape and is quite resistant to damage. Tor is not perfect but it has proved very resistant to attack. A lot of people are trying to crack Tor, which is also used by criminals and Islamic terrorists as well as people trying to avoid government surveillance. This is a matter of life and death in many countries, including Russia.
Tor resembled earlier anonymizer software, but was even harder to trace. Unlike anonymizer software, Tor relies on thousands of people running the Tor software, acting as nodes for email (and attachments) to be sent through so many Tor nodes that it was believed virtually impossible to track down the identity of the sender. Tor was developed as part of an American government program to create software that people living in dictatorships could use to avoid arrest for saying things on the Internet that their government did not like. Tor also enabled Internet users in dictatorships to communicate safely with the outside world. Tor first appeared in 2002 and has since then defied most attempts to defeat it. The Tor developers were also quick to modify their software when a vulnerability was detected.
But by 2014 it was believed that the NSA had cracked Tor, and that others may have done so as well but were keeping quiet about it so that the Tor support community did not fix whatever aspect of the software made it vulnerable. At the same time there were alternatives to Tor, as well as supplemental software, that apparently remained uncracked by anyone.
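The layered-relay scheme the article describes can be sketched as a toy in Python: the sender wraps a message in one encryption layer per relay, and each relay peels exactly one layer, learning only the next hop. The relay names, keys, and XOR “cipher” below are purely illustrative stand-ins; real Tor builds circuits over TLS with authenticated, forward-secret cryptography.

```python
import base64
import hashlib
import json

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR against a SHA-256-derived keystream.
    Encryption and decryption are the same operation. Illustration
    only -- nothing like Tor's actual circuit crypto."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def build_onion(message: bytes, route):
    """Wrap `message` in one layer per relay, innermost layer first,
    so the first relay's key sits on the outside."""
    onion, next_hop = message, "destination"
    for name, key in reversed(route):
        inner = json.dumps({
            "next": next_hop,
            "data": base64.b64encode(onion).decode(),
        }).encode()
        onion, next_hop = xor_cipher(key, inner), name
    return onion  # hand this to the first relay in the route

def relay_process(key: bytes, onion: bytes):
    """What a single relay does: strip one layer, learning only the
    next hop -- never the original sender or the final payload."""
    inner = json.loads(xor_cipher(key, onion))
    return inner["next"], base64.b64decode(inner["data"])
```

The point the article makes about a “moving target” follows from the structure: because no single relay ever holds more than one layer’s key, an attacker has to compromise the whole changing path, not one machine.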
August 7, 2015
At The Register, John Leyden talks about the recent revelation that the Tesla Model S has known hacking vulnerabilities:
Security researchers have uncovered six fresh vulnerabilities in the Tesla Model S.
Kevin Mahaffey, CTO of mobile security firm Lookout, and Cloudflare’s principal security researcher Marc Rogers, discovered the flaws after physically examining a vehicle before working with Elon Musk’s firm to resolve security bugs in the electric automobile.
The vulnerabilities allowed the researchers to gain root (administrator) access to the Model S infotainment systems.
With access to these systems, they were able to remotely lock and unlock the car, control the radio and screens, display any content on the screens (changing map displays and the speedometer), open and close the trunk/boot, and turn off the car systems.
When turning off the car systems, Mahaffey and Rogers discovered that, if the car was below five miles per hour (8 km/h) or idling, they were able to apply the emergency hand brake, a minor issue in practice.
If the car was going at any speed the technique could be used to cut power to the car while still allowing the driver to safely brake and steer. Consumers’ safety was still preserved even in cases, like the hand-brake issue, where the system ran foul of bugs.
Despite uncovering half a dozen security bugs, the two researchers nonetheless came away impressed by Tesla’s infosec policies and procedures as well as its fail-safe engineering approach.
“Tesla takes a software-first approach to its cars, so it’s no surprise that it has key security features in place that minimised and contained the risk of the discovered vulnerabilities,” the researchers explain.
August 2, 2015
The Economist looks at the apparently unstoppable rush to internet-connect everything and why we should worry about security now:
Unfortunately, computer security is about to get trickier. Computers have already spread from people’s desktops into their pockets. Now they are embedding themselves in all sorts of gadgets, from cars and televisions to children’s toys, refrigerators and industrial kit. Cisco, a maker of networking equipment, reckons that there are 15 billion connected devices out there today. By 2020, it thinks, that number could climb to 50 billion. Boosters promise that a world of networked computers and sensors will be a place of unparalleled convenience and efficiency. They call it the “internet of things”.
Computer-security people call it a disaster in the making. They worry that, in their rush to bring cyber-widgets to market, the companies that produce them have not learned the lessons of the early years of the internet. The big computing firms of the 1980s and 1990s treated security as an afterthought. Only once the threats—in the forms of viruses, hacking attacks and so on—became apparent, did Microsoft, Apple and the rest start trying to fix things. But bolting on security after the fact is much harder than building it in from the start.
Of course, governments are desperate to prevent us from hiding our activities from them by way of cryptography or even moderately secure connections, so there’s the risk that any pre-rolled security option offered by a major corporation has already been riddled with convenient holes for government spooks … which makes it even more likely that others can also find and exploit those security holes.
… companies in all industries must heed the lessons that computing firms learned long ago. Writing completely secure code is almost impossible. As a consequence, a culture of openness is the best defence, because it helps spread fixes. When academic researchers contacted a chipmaker working for Volkswagen to tell it that they had found a vulnerability in a remote-car-key system, Volkswagen’s response included a court injunction. Shooting the messenger does not work. Indeed, firms such as Google now offer monetary rewards, or “bug bounties”, to hackers who contact them with details of flaws they have unearthed.
Thirty years ago, computer-makers that failed to take security seriously could claim ignorance as a defence. No longer. The internet of things will bring many benefits. The time to plan for its inevitable flaws is now.
August 1, 2015
Published on 6 Jan 2015
One of the biggest news stories this Christmas was the (un-)cancelled release of Sony Pictures’ movie The Interview. In the movie, Seth Rogen and James Franco try to assassinate North Korean dictator Kim Jong-Un. After terror threats against movie theatres showing the film, Sony cancelled the release of the movie. This ultimately increased the attention paid to the movie and made its later online release the most successful one this year. There is, in fact, a name for this kind of phenomenon: the Streisand Effect. In this episode of INTO CONTEXT, Indy explains why it’s not always smart to try to hide things on the internet.
June 23, 2015
Megan McArdle on what she characterizes as possibly “the worst cyber-breach the U.S. has ever experienced”:
And yet, neither the government nor the public seems to be taking it all that seriously. It’s been getting considerably less play than the Snowden affair did, or the administration’s other massively public IT failure: the meltdown of the Obamacare exchanges. For that matter, Google News returns more hits on a papal encyclical about climate change that will have no obvious impact on anything than it does for a major security breach in the U.S. government. The administration certainly doesn’t seem that concerned. Yesterday, the White House told Reuters that President Obama “continues to have confidence in Office of Personnel Management Director Katherine Archuleta.”
I’m tempted to suggest that the confidence our president expresses in people who preside over these cyber-disasters, and the remarkable string of said cyber-disasters that have occurred under his presidency, might actually be connected. So tempted that I actually am suggesting it. President Obama’s administration has been marked by titanic serial IT disasters, and no one seems to feel any particular urgency about preventing the next one. By now, that’s hardly surprising. Kathleen Sebelius was eased out months after the Department of Health and Human Services botched the one absolutely crucial element of the Obamacare rollout. The NSA director’s offer to resign over the Snowden leak was politely declined. And now, apparently, Obama has full faith and confidence in the folks at OPM. Why shouldn’t he? Voters have never held Obama responsible for his administration’s appalling IT record, so why should he demand accountability from those below him?
Yes, yes, I know. You can’t say this is all Obama’s fault. Government IT is almost doomed to be terrible; the public sector can’t pay salaries that are competitive with the private sector, they’re hampered by government contracting rules, and their bureaucratic procedures make it hard to build good systems. And that’s all true. Yet note this: When the exchanges crashed on their maiden flight, the government managed to build a crudely functioning website in, basically, a month, a task they’d been systematically failing at for the previous three years. What was the difference? Urgency. When Obama understood that his presidency was on the line, he made sure it got done.
Update: It’s now asserted that the OPM hack exposed the personal data of more than four times as many people as the agency had previously admitted.
The personal data of an estimated 18 million current, former and prospective federal employees were affected by a cyber breach at the Office of Personnel Management – more than four times the 4.2 million the agency has publicly acknowledged. The number is expected to grow, according to U.S. officials briefed on the investigation.
FBI Director James Comey gave the 18 million estimate in a closed-door briefing to Senators in recent weeks, using the OPM’s own internal data, according to U.S. officials briefed on the matter. Those affected could include people who applied for government jobs, but never actually ended up working for the government.
The same hackers who accessed OPM’s data are believed to have last year breached an OPM contractor, KeyPoint Government Solutions, U.S. officials said. When the OPM breach was discovered in April, investigators found that KeyPoint security credentials were used to breach the OPM system.
Some investigators believe that after that intrusion last year, OPM officials should have blocked all access from KeyPoint, and that doing so could have prevented more serious damage. But a person briefed on the investigation says OPM officials don’t believe such a move would have made a difference. That’s because the OPM breach is believed to have pre-dated the KeyPoint breach. Hackers are also believed to have built their own backdoor access to the OPM system, armed with high-level system administrator access to the system. One official called it the “keys to the kingdom.” KeyPoint did not respond to CNN’s request for comment.
U.S. investigators believe the Chinese government is behind the cyber intrusion, which is considered the worst ever against the U.S. government.
May 9, 2015
BBC News picked up the story of a Guild Wars 2 player who’d been cheating on a massive scale:
A character controlled by a hacker who used exploits to dominate online game Guild Wars 2 has been put to death in the virtual world.
The character, called DarkSide, was stripped then forced to leap to their death from a high bridge.
The death sentence was carried out after players gathered evidence about the trouble the hacker had caused.
This helped the game’s security staff find the player, take over their account and kill them off.
Over the past three weeks many players of the popular multi-player game Guild Wars 2 have been complaining about the activities of a character called DarkSide. About four million copies of the game have been sold.
Via a series of exploits the character was able to teleport, deal massive damage, survive co-ordinated attacks by other players and dominate player-versus-player combat.
To spur Guild Wars‘ creator ArenaNet to react, players gathered videos of DarkSide’s antics and posted them on YouTube.
The videos helped ArenaNet’s security head Chris Cleary identify the player behind DarkSide, he said in a forum post explaining what action it had taken. Mr Cleary took over the account to carry out the punishment.
H/T to MassivelyOP for both the original story and the BBC News link.
May 8, 2015
Kim Zetter talks about some of the NSA’s more sneaky ways of intercepting communications:
Among all of the NSA hacking operations exposed by whistleblower Edward Snowden over the last two years, one in particular has stood out for its sophistication and stealthiness. Known as Quantum Insert, the man-on-the-side hacking technique has been used to great effect since 2005 by the NSA and its partner spy agency, Britain’s GCHQ, to hack into high-value, hard-to-reach systems and implant malware.
Quantum Insert is useful for getting at machines that can’t be reached through phishing attacks. It works by hijacking a browser as it’s trying to access web pages and forcing it to visit a malicious web page, rather than the page the target intended to visit. The attackers can then surreptitiously download malware onto the target’s machine from the rogue web page.
Quantum Insert has been used to hack the machines of terrorist suspects in the Middle East, but it was also used in a controversial GCHQ/NSA operation against employees of the Belgian telecom Belgacom and against workers at OPEC, the Organization of Petroleum Exporting Countries. The “highly successful” technique allowed the NSA to place 300 malicious implants on computers around the world in 2010, according to the spy agency’s own internal documents — all while remaining undetected.
But now security researchers with Fox-IT in the Netherlands, who helped investigate that hack against Belgacom, have found a way to detect Quantum Insert attacks using common intrusion detection tools such as Snort, Bro and Suricata.
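The observable that makes detection possible is the race itself: when an injected response beats the real server’s, a capture can contain two TCP segments claiming the same sequence number but carrying different content. A minimal sketch of that check follows; the `(flow_id, seq, payload)` tuple format is our own assumption for illustration, not the format Snort, Bro, or Suricata actually use internally.

```python
import hashlib

def detect_duplicate_seq(packets):
    """Flag TCP segments that reuse a sequence number with *different*
    content -- the overlap a man-on-the-side race can leave behind.
    Ordinary retransmissions carry identical payloads and are ignored.

    `packets` is an iterable of (flow_id, seq, payload) tuples, e.g.
    pulled from a packet capture of one direction of a TCP flow.
    Returns a list of (flow_id, seq) pairs worth investigating."""
    seen = {}     # (flow_id, seq) -> digest of first payload observed
    alerts = []
    for flow, seq, payload in packets:
        digest = hashlib.sha256(payload).digest()
        key = (flow, seq)
        if key in seen and seen[key] != digest:
            alerts.append(key)          # same position, different bytes
        seen.setdefault(key, digest)    # remember the first version only
    return alerts
```

Keeping only a digest per sequence number keeps memory bounded while still distinguishing a benign retransmit from a conflicting injected segment.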
April 28, 2015
At The Register, John Leyden offers a bit of doubt that Professor David Stupples can really be said to be an impartial observer:
The rollout of a next generation train signalling system across the UK could leave the network at greater risk of hack attacks, a university professor has claimed.
Prof David Stupples warns that plans to replace the existing (aging) signalling system with the new European Rail Traffic Management System (ERTMS) could open up the network to potential attacks, particularly from disgruntled employees or other rogue insiders.
ERTMS will manage how fast trains travel, so a hack attack might potentially cause trains to move too quickly. UK tests of the European Rail Traffic Management System have already begun ahead of the expected rollout.
By the 2020s the system will be in full control of trains on mainline routes. Other countries have already successfully rolled out the system and there are no reports, at least, of any meaningful cyber-attack to date.
Nonetheless, Prof Stupples is concerned that hacks against the system could cause “major disruption” or even a “nasty accident”.
ERTMS is designed to make networks safer by safeguarding against driver mistakes, a significant factor historically in rail accidents. Yet these benefits are offset by the risk of hackers manipulating control systems, Prof Stupples, of City University London, warned.
February 23, 2015
Cory Doctorow is concerned about some of the possible developments within the “Internet of Things” that should concern us all:
The digital world has been colonized by a dangerous idea: that we can and should solve problems by preventing computer owners from deciding how their computers should behave. I’m not talking about a computer that’s designed to say, “Are you sure?” when you do something unexpected — not even one that asks, “Are you really, really sure?” when you click “OK.” I’m talking about a computer designed to say, “I CAN’T LET YOU DO THAT DAVE” when you tell it to give you root, to let you modify the OS or the filesystem.
Case in point: the cell-phone “kill switch” laws in California and Minnesota, which require manufacturers to design phones so that carriers or manufacturers can push an over-the-air update that bricks the phone without any user intervention, designed to deter cell-phone thieves. Early data suggests that the law is effective in preventing this kind of crime, but at a high and largely needless (and ill-considered) price.
To understand this price, we need to talk about what “security” is, from the perspective of a mobile device user: it’s a whole basket of risks, including the physical threat of violence from muggers; the financial cost of replacing a lost device; the opportunity cost of setting up a new device; and the threats to your privacy, finances, employment, and physical safety from having your data compromised.
The current kill-switch regime puts a lot of emphasis on the physical risks, and treats risks to your data as unimportant. It’s true that the physical risks associated with phone theft are substantial, but if a catastrophic data compromise doesn’t strike terror into your heart, it’s probably because you haven’t thought hard enough about it — and it’s a sure bet that this risk will only increase in importance over time, as you bind your finances, your access controls (car ignition, house entry), and your personal life more tightly to your mobile devices.
That is to say, phones are only going to get cheaper to replace, while mobile data breaches are only going to get more expensive.
It’s a mistake to design a computer to accept instructions over a public network that its owner can’t see, review, and countermand. When every phone has a back door and can be compromised by hacking, social-engineering, or legal-engineering by a manufacturer or carrier, then your phone’s security is only intact for so long as every customer service rep is bamboozle-proof, every cop is honest, and every carrier’s back end is well designed and fully patched.
February 15, 2015
Earlier this month, The Register‘s Iain Thomson summarized the rather disturbing report released by Senator Ed Markey (D-MA) on the self-reported security (or lack thereof) in modern automobile internal networks:
In short, as we’ve long suspected, the computers in today’s cars can be hijacked wirelessly by feeding specially crafted packets of data into their networks. There’s often no need for physical contact; no leaving of evidence lying around after getting your hands dirty.
This means, depending on the circumstances, the software running in your dashboard can be forced to unlock doors, or become infected with malware, and records on where you’ve been and how fast you were going may be obtained. The lack of encryption in various models means sniffed packets may be readable.
Key systems to start up engines, the electronics connecting up vital things like the steering wheel and brakes, and stuff on the CAN bus, tend to be isolated and secure, we’re told.
The ability for miscreants to access internal systems wirelessly, cause mischief to infotainment and navigation gear, and invade one’s privacy, is irritating, though.
“Drivers have come to rely on these new technologies, but unfortunately the automakers haven’t done their part to protect us from cyber-attacks or privacy invasions,” said Markey, a member of the Senate’s Commerce, Science and Transportation Committee.
“Even as we are more connected than ever in our cars and trucks, our technology systems and data security remain largely unprotected. We need to work with the industry and cyber-security experts to establish clear rules of the road to ensure the safety and privacy of 21st-century American drivers.”
Of the 17 car makers who replied [PDF] to Markey’s letters (Tesla, Aston Martin, and Lamborghini didn’t), all made extensive use of computing in their 2014 models, with some carrying 50 electronic control units (ECUs) running on a series of internal networks.
BMW, Chrysler, Ford, General Motors, Honda, Hyundai, Jaguar Land Rover, Mazda, Mercedes-Benz, Mitsubishi, Nissan, Porsche, Subaru, Toyota, Volkswagen (with Audi), and Volvo responded to the study. According to the senator’s six-page dossier:
- Over 90 per cent of vehicles manufactured in 2014 had a wireless network of some kind — such as Bluetooth to link smartphones to the dashboard or a proprietary standard for technicians to pull out diagnostics.
- Only six automakers have any kind of security software running in their cars — such as firewalls for blocking connections from untrusted devices, or encryption for protecting data in transit around the vehicle.
- Just five secured wireless access points with passwords, encryption or proximity sensors that (in theory) only allow hardware detected within the car to join a given network.
- And only models made by two companies can alert the manufacturers in real time if a malicious software attack is attempted — the others wait until a technician checks at the next servicing.
There wasn’t much detail on the security of over-the-air updates for firmware, nor the use of crypto to protect personal data being phoned home from vehicles to an automaker’s HQ.