Whenever I see screaming, hate-filled behavior like hers the important part never turns out to be whatever principles the screamer claims to be advocating. Those are just window-dressing for the bullying, the dominance games, and the rage.
You cannot ameliorate the behavior of people like that by accepting their premises and arguing within them; they’ll just pocket your concessions and attack again, seeking increasingly abject submission. In one-on-one relationships this is called “emotional abuse”, and like abusers they are all about control of you while claiming to be about anything but.
Third-wave feminism, “social justice” and “anti-racism” are rotten with this. Some of the principles, considered in isolation, would be noble; but they don’t stay noble in the minds of a rage mob.
The good news is that, like emotional abusers, they only have the power over you that you allow them. Liberation begins with recognizing the abuse for what it is. It continues by entirely rejecting their attempts at manipulation. This means rejecting their terminology, their core concepts, their framing, and their attempts to jam you into a “victim” or “oppressor” identity that denies your lived experience.
The identity-jamming part maradydd clearly gets; the most eloquent sections of her writing are those in which she (rightly) rejects feminist attempts to jam her into a victim identity. But I don’t think she quite gets how thoroughly you have to reject the rest of the SJW pitch in order not to enable their abuse.
This is why, for example, I basically disengage from anyone who uses the phrase “white privilege” or the term “patriarchy”. There is a possible world in which these might be useful terms of discussion, but if that were ever our universe it has long since ceased to be. Now what they mean is “I am about to attempt to bully you into submission using kafkatraps and your own sense of decency as a club”.
Eric S. Raymond, “Meredith Patterson’s valiant effort is probably doomed”, Armed and Dangerous, 2015-01-19.
May 18, 2016
March 20, 2016
It’s only a rumour rather than a definite stand, but it is a hopeful one for civil liberties:
The spirit of anarchy and anti-establishment still runs strong at Apple. Rather than comply with the government’s requests to develop a so-called “GovtOS” to unlock the iPhone 5c of San Bernardino shooter Syed Rizwan Farook, The New York Times‘ half-dozen sources say that some software engineers may quit instead. “It’s an independent culture and a rebellious one,” former Apple engineering manager Jean-Louis Gassée tells NYT. “If the government tries to compel testimony or action from these engineers, good luck with that.”
Former senior product manager for Apple’s security and privacy division Window Snyder agrees. “If someone attempts to force them to work on something that’s outside their personal values, they can expect to find a position that’s a better fit somewhere else.”
In another instance of Apple’s company culture clashing with what the federal government demands, the development teams are apparently relatively siloed off from one another. It isn’t until a product gets closer to release that disparate teams like hardware and software engineers come together for finalizing a given gizmo. NYT notes that the team of six to 10 engineers needed to develop the back door doesn’t currently exist and that forcing any sort of collaboration would be incredibly difficult, again, due to how Apple works internally.
November 17, 2015
September 30, 2015
Strategy Page on the less-than-perfect result of Russia’s attempt to get hackers to crack The Onion Router for a medium-sized monetary prize:
Back in mid-2014 Russia offered a prize of $111,000 for whoever could deliver, by August 20th 2014, software that would allow Russian security services to identify people on the Internet using Tor (The Onion Router), a system that enables users to access the Internet anonymously. On August 22nd Russia announced that an unnamed Russian contractor, with a top security clearance, had received the $111,000 prize. No other details were provided at the time. A year later it was revealed that the winner of the Tor prize is now spending even more on lawyers to try and get out of the contract to crack Tor’s security. It seems the winner found that their theoretical solution was too difficult to implement effectively. In part this was because the worldwide community of programmers and software engineers that developed Tor is constantly upgrading it. Cracking Tor security is like firing at a moving target, one that constantly changes shape and is quite resistant to damage. Tor is not perfect but it has proved very resistant to attack. A lot of people are trying to crack Tor, which is also used by criminals and Islamic terrorists as well as people trying to avoid government surveillance. This is a matter of life and death in many countries, including Russia.
Tor is similar to anonymizer software, but even harder to trace. Unlike anonymizer software, Tor relies on thousands of people running the Tor software and acting as nodes, so that email (and attachments) can be sent through so many Tor nodes that it was believed virtually impossible to track down the identity of the sender. Tor was developed as part of an American government program to create software that people living in dictatorships could use to avoid arrest for saying things on the Internet that their government did not like. Tor also enabled Internet users in dictatorships to communicate safely with the outside world. Tor first appeared in 2002 and has since then defied most attempts to defeat it. The Tor developers were also quick to modify their software when a vulnerability was detected.
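The layered routing the excerpt describes can be sketched in a few lines. This is a toy illustration only: the XOR "cipher" stands in for Tor's real AES-inside-TLS encryption, and every name here is invented for the example. The point is the structure: the sender wraps the message in one layer per relay, and each relay can peel exactly one layer with its own key, so no single relay sees both the sender and the plaintext.

```python
import os
from itertools import cycle

def xor_layer(data: bytes, key: bytes) -> bytes:
    # Toy symmetric "encryption": XOR with a repeating key. XOR is its
    # own inverse, so applying the same key twice recovers the input.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def build_onion(message: bytes, relay_keys: list) -> bytes:
    # Wrap the message innermost-layer-first, so the first relay on the
    # path peels the outermost layer.
    onion = message
    for key in reversed(relay_keys):
        onion = xor_layer(onion, key)
    return onion

def route_through_relays(onion: bytes, relay_keys: list) -> bytes:
    # Each relay knows only its own key: it strips one layer and
    # forwards the remainder, never seeing the full path or plaintext.
    for key in relay_keys:
        onion = xor_layer(onion, key)
    return onion

keys = [os.urandom(16) for _ in range(3)]  # three hops, as in a Tor circuit
wrapped = build_onion(b"hello from a dissident", keys)
assert wrapped != b"hello from a dissident"          # unreadable in transit
assert route_through_relays(wrapped, keys) == b"hello from a dissident"
```

This also hints at why the prize-winner's "theoretical solution" was so hard to land: an attacker must compromise or correlate traffic across several independently-operated hops, and the software underneath keeps changing.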
But by 2014 it was believed that the NSA had cracked Tor, and that others may have done so as well but were keeping quiet about it so that the Tor support community did not fix whatever aspect of the software made it vulnerable. At the same time there were alternatives to Tor, as well as supplemental software, which were apparently uncracked by anyone.
August 7, 2015
At The Register, John Leyden talks about the recent revelation that the Tesla Model S has known hacking vulnerabilities:
Security researchers have uncovered six fresh vulnerabilities in the Tesla Model S.
Kevin Mahaffey, CTO of mobile security firm Lookout, and Cloudflare’s principal security researcher Marc Rogers, discovered the flaws after physically examining a vehicle before working with Elon Musk’s firm to resolve security bugs in the electric automobile.
The vulnerabilities allowed the researchers to gain root (administrator) access to the Model S infotainment systems.
With access to these systems, they were able to remotely lock and unlock the car, control the radio and screens, display any content on the screens (changing map displays and the speedometer), open and close the trunk/boot, and turn off the car systems.
When turning off the car systems, Mahaffey and Rogers discovered that, if the car was travelling below five miles per hour (8 km/h) or idling, they were able to apply the emergency hand brake, a minor issue in practice.
If the car was going at any speed the technique could be used to cut power to the car while still allowing the driver to safely brake and steer. Consumers’ safety was still preserved even in cases, like the hand-brake issue, where the system ran afoul of bugs.
Despite uncovering half a dozen security bugs, the two researchers nonetheless came away impressed by Tesla’s infosec policies and procedures as well as its fail-safe engineering approach.
“Tesla takes a software-first approach to its cars, so it’s no surprise that it has key security features in place that minimised and contained the risk of the discovered vulnerabilities,” the researchers explain.
August 2, 2015
The Economist looks at the apparently unstoppable rush to internet-connect everything and why we should worry about security now:
Unfortunately, computer security is about to get trickier. Computers have already spread from people’s desktops into their pockets. Now they are embedding themselves in all sorts of gadgets, from cars and televisions to children’s toys, refrigerators and industrial kit. Cisco, a maker of networking equipment, reckons that there are 15 billion connected devices out there today. By 2020, it thinks, that number could climb to 50 billion. Boosters promise that a world of networked computers and sensors will be a place of unparalleled convenience and efficiency. They call it the “internet of things”.
Computer-security people call it a disaster in the making. They worry that, in their rush to bring cyber-widgets to market, the companies that produce them have not learned the lessons of the early years of the internet. The big computing firms of the 1980s and 1990s treated security as an afterthought. Only once the threats—in the forms of viruses, hacking attacks and so on—became apparent, did Microsoft, Apple and the rest start trying to fix things. But bolting on security after the fact is much harder than building it in from the start.
Of course, governments are desperate to prevent us from hiding our activities from them by way of cryptography or even moderately secure connections, so there’s the risk that any pre-rolled security option offered by a major corporation has already been riddled with convenient holes for government spooks … which makes it even more likely that others can also find and exploit those security holes.
… companies in all industries must heed the lessons that computing firms learned long ago. Writing completely secure code is almost impossible. As a consequence, a culture of openness is the best defence, because it helps spread fixes. When academic researchers contacted a chipmaker working for Volkswagen to tell it that they had found a vulnerability in a remote-car-key system, Volkswagen’s response included a court injunction. Shooting the messenger does not work. Indeed, firms such as Google now offer monetary rewards, or “bug bounties”, to hackers who contact them with details of flaws they have unearthed.
Thirty years ago, computer-makers that failed to take security seriously could claim ignorance as a defence. No longer. The internet of things will bring many benefits. The time to plan for its inevitable flaws is now.
August 1, 2015
Published on 6 Jan 2015
One of the biggest news stories this Christmas was the (un-)cancelled release of Sony Pictures’ movie The Interview. In the movie, Seth Rogen and James Franco try to assassinate North Korean dictator Kim Jong-Un. After terror threats against movie theatres showing the film, Sony cancelled the release of the movie. This ultimately increased the attention paid to the movie and made the later online release the most successful one this year. Actually, there is a name for this kind of phenomenon: the Streisand Effect. In this episode of INTO CONTEXT, Indy explains why it’s not always smart to try to hide things on the internet.
June 23, 2015
Megan McArdle on what she characterizes as possibly “the worst cyber-breach the U.S. has ever experienced”:
And yet, neither the government nor the public seems to be taking it all that seriously. It’s been getting considerably less play than the Snowden affair did, or the administration’s other massively public IT failure: the meltdown of the Obamacare exchanges. For that matter, Google News returns more hits on a papal encyclical about climate change that will have no obvious impact on anything than it does for a major security breach in the U.S. government. The administration certainly doesn’t seem that concerned. Yesterday, the White House told Reuters that President Obama “continues to have confidence in Office of Personnel Management Director Katherine Archuleta.”
I’m tempted to suggest that the confidence our president expresses in people who preside over these cyber-disasters, and the remarkable string of said cyber-disasters that have occurred under his presidency, might actually be connected. So tempted that I actually am suggesting it. President Obama’s administration has been marked by titanic serial IT disasters, and no one seems to feel any particular urgency about preventing the next one. By now, that’s hardly surprising. Kathleen Sebelius was eased out months after the Department of Health and Human Services botched the one absolutely crucial element of the Obamacare rollout. The NSA director’s offer to resign over the Snowden leak was politely declined. And now, apparently, Obama has full faith and confidence in the folks at OPM. Why shouldn’t he? Voters have never held Obama responsible for his administration’s appalling IT record, so why should he demand accountability from those below him?
Yes, yes, I know. You can’t say this is all Obama’s fault. Government IT is almost doomed to be terrible; the public sector can’t pay salaries that are competitive with the private sector, they’re hampered by government contracting rules, and their bureaucratic procedures make it hard to build good systems. And that’s all true. Yet note this: When the exchanges crashed on their maiden flight, the government managed to build a crudely functioning website in, basically, a month, a task they’d been systematically failing at for the previous three years. What was the difference? Urgency. When Obama understood that his presidency was on the line, he made sure it got done.
Update: It’s now asserted that the OPM hack exposed more than four times as many people’s personal data as the agency had previously admitted.
The personal data of an estimated 18 million current, former and prospective federal employees were affected by a cyber breach at the Office of Personnel Management – more than four times the 4.2 million the agency has publicly acknowledged. The number is expected to grow, according to U.S. officials briefed on the investigation.
FBI Director James Comey gave the 18 million estimate in a closed-door briefing to Senators in recent weeks, using the OPM’s own internal data, according to U.S. officials briefed on the matter. Those affected could include people who applied for government jobs, but never actually ended up working for the government.
The same hackers who accessed OPM’s data are believed to have last year breached an OPM contractor, KeyPoint Government Solutions, U.S. officials said. When the OPM breach was discovered in April, investigators found that KeyPoint security credentials were used to breach the OPM system.
Some investigators believe that after that intrusion last year, OPM officials should have blocked all access from KeyPoint, and that doing so could have prevented more serious damage. But a person briefed on the investigation says OPM officials don’t believe such a move would have made a difference. That’s because the OPM breach is believed to have pre-dated the KeyPoint breach. Hackers are also believed to have built their own backdoor access to the OPM system, armed with high-level system administrator access to the system. One official called it the “keys to the kingdom.” KeyPoint did not respond to CNN’s request for comment.
U.S. investigators believe the Chinese government is behind the cyber intrusion, which is considered the worst ever against the U.S. government.
May 9, 2015
BBC News picked up the story of a Guild Wars 2 player who’d been cheating on a massive scale:
A character controlled by a hacker who used exploits to dominate online game Guild Wars 2 has been put to death in the virtual world.
The character, called DarkSide, was stripped then forced to leap to their death from a high bridge.
The death sentence was carried out after players gathered evidence about the trouble the hacker had caused.
This helped the game’s security staff find the player, take over their account and kill them off.
Over the past three weeks many players of the popular multi-player game Guild Wars 2 have been complaining about the activities of a character called DarkSide. About four million copies of the game have been sold.
Via a series of exploits the character was able to teleport, deal massive damage, survive co-ordinated attacks by other players and dominate player-versus-player combat.
To spur Guild Wars‘ creator ArenaNet to react, players gathered videos of DarkSide’s antics and posted them on YouTube.
The videos helped ArenaNet’s security head Chris Cleary identify the player behind DarkSide, he said in a forum post explaining what action it had taken. Mr Cleary took over the account to carry out the punishment.
H/T to MassivelyOP for both the original story and the BBC News link.
May 8, 2015
Kim Zetter talks about some of the NSA’s more sneaky ways of intercepting communications:
Among all of the NSA hacking operations exposed by whistleblower Edward Snowden over the last two years, one in particular has stood out for its sophistication and stealthiness. Known as Quantum Insert, the man-on-the-side hacking technique has been used to great effect since 2005 by the NSA and its partner spy agency, Britain’s GCHQ, to hack into high-value, hard-to-reach systems and implant malware.
Quantum Insert is useful for getting at machines that can’t be reached through phishing attacks. It works by hijacking a browser as it’s trying to access web pages and forcing it to visit a malicious web page, rather than the page the target intended to visit. The attackers can then surreptitiously download malware onto the target’s machine from the rogue web page.
Quantum Insert has been used to hack the machines of terrorist suspects in the Middle East, but it was also used in a controversial GCHQ/NSA operation against employees of the Belgian telecom Belgacom and against workers at OPEC, the Organization of Petroleum Exporting Countries. The “highly successful” technique allowed the NSA to place 300 malicious implants on computers around the world in 2010, according to the spy agency’s own internal documents — all while remaining undetected.
But now security researchers with Fox-IT in the Netherlands, who helped investigate that hack against Belgacom, have found a way to detect Quantum Insert attacks using common intrusion detection tools such as Snort, Bro and Suricata.
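The detection approach rests on a simple observation: a man-on-the-side attacker can only race the real server, not silence it, so the victim briefly sees two TCP segments with the same sequence number but different contents (a normal retransmission carries identical contents). A rough sketch of that check, with invented names rather than Fox-IT's actual signatures, might look like this:

```python
def detect_quantum_insert(segments):
    """Flag (flow, seq) pairs observed with two different payloads.

    `segments` is an iterable of (flow_id, tcp_seq, payload) tuples,
    e.g. as reassembled by an intrusion detection system. Ordinary
    retransmissions repeat the same payload at the same sequence
    number; a man-on-the-side injection produces a *different*
    payload at the same sequence number, which is the anomaly the
    Snort/Bro/Suricata rules mentioned above key on.
    """
    seen = {}     # (flow_id, tcp_seq) -> first payload observed
    alerts = []
    for flow, seq, payload in segments:
        key = (flow, seq)
        if key in seen and seen[key] != payload:
            alerts.append(key)    # same position, different contents
        seen.setdefault(key, payload)
    return alerts

captured = [
    ("victim->webserver", 1000, b"HTTP/1.1 302 Found\r\nLocation: evil"),
    ("victim->webserver", 1000, b"HTTP/1.1 200 OK\r\n\r\n<html>"),
    ("victim->mailserver", 2000, b"250 OK"),
    ("victim->mailserver", 2000, b"250 OK"),   # benign retransmission
]
assert detect_quantum_insert(captured) == [("victim->webserver", 1000)]
```

A real deployment has to handle overlapping segments, out-of-order delivery, and deliberate TTL games, which is why the researchers shipped their logic as rules for mature IDS engines rather than standalone scripts.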
April 28, 2015
At The Register, John Leyden offers a bit of doubt that Professor David Stupples can really be said to be an impartial observer:
The rollout of a next generation train signalling system across the UK could leave the network at greater risk of hack attacks, a university professor has claimed.
Prof David Stupples warns that plans to replace the existing (aging) signalling system with the new European Rail Traffic Management System (ERTMS) could open up the network to potential attacks, particularly from disgruntled employees or other rogue insiders.
ERTMS will manage how fast trains travel, so a hack attack might potentially cause trains to move too quickly. UK tests of the European Rail Traffic Management System have already begun ahead of the expected rollout.
By the 2020s the system will be in full control of trains on mainline routes. Other countries have already successfully rolled out the system and there are no reports, at least, of any meaningful cyber-attack to date.
Nonetheless, Prof Stupples is concerned that hacks against the system could cause “major disruption” or even a “nasty accident”.
ERTMS is designed to make networks safer by safeguarding against driver mistakes, a significant factor historically in rail accidents. Yet these benefits are offset by the risk of hackers manipulating control systems, Prof Stupples, of City University London, warned.
February 23, 2015
Cory Doctorow is concerned about some of the possible developments within the “Internet of Things” that should concern us all:
The digital world has been colonized by a dangerous idea: that we can and should solve problems by preventing computer owners from deciding how their computers should behave. I’m not talking about a computer that’s designed to say, “Are you sure?” when you do something unexpected — not even one that asks, “Are you really, really sure?” when you click “OK.” I’m talking about a computer designed to say, “I CAN’T LET YOU DO THAT DAVE” when you tell it to give you root, to let you modify the OS or the filesystem.
Case in point: the cell-phone “kill switch” laws in California and Minneapolis, which require manufacturers to design phones so that carriers or manufacturers can push an over-the-air update that bricks the phone without any user intervention, designed to deter cell-phone thieves. Early data suggests that the law is effective in preventing this kind of crime, but at a high and largely needless (and ill-considered) price.
To understand this price, we need to talk about what “security” is, from the perspective of a mobile device user: it’s a whole basket of risks, including the physical threat of violence from muggers; the financial cost of replacing a lost device; the opportunity cost of setting up a new device; and the threats to your privacy, finances, employment, and physical safety from having your data compromised.
The current kill-switch regime puts a lot of emphasis on the physical risks, and treats risks to your data as unimportant. It’s true that the physical risks associated with phone theft are substantial, but if a catastrophic data compromise doesn’t strike terror into your heart, it’s probably because you haven’t thought hard enough about it — and it’s a sure bet that this risk will only increase in importance over time, as you bind your finances, your access controls (car ignition, house entry), and your personal life more tightly to your mobile devices.
That is to say, phones are only going to get cheaper to replace, while mobile data breaches are only going to get more expensive.
It’s a mistake to design a computer to accept instructions over a public network that its owner can’t see, review, and countermand. When every phone has a back door and can be compromised by hacking, social-engineering, or legal-engineering by a manufacturer or carrier, then your phone’s security is only intact for so long as every customer service rep is bamboozle-proof, every cop is honest, and every carrier’s back end is well designed and fully patched.
February 15, 2015
Earlier this month, The Register‘s Iain Thomson summarized the rather disturbing report released by Senator Ed Markey (D-MA) on the self-reported security (or lack thereof) in modern automobile internal networks:
In short, as we’ve long suspected, the computers in today’s cars can be hijacked wirelessly by feeding specially crafted packets of data into their networks. There’s often no need for physical contact; no leaving of evidence lying around after getting your hands dirty.
This means, depending on the circumstances, the software running in your dashboard can be forced to unlock doors, or become infected with malware, and records of where you’ve been and how fast you were going may be obtained. The lack of encryption in various models means sniffed packets may be readable.
Key systems to start up engines, the electronics connecting up vital things like the steering wheel and brakes, and stuff on the CAN bus, tend to be isolated and secure, we’re told.
The ability for miscreants to access internal systems wirelessly, cause mischief to infotainment and navigation gear, and invade one’s privacy, is irritating, though.
“Drivers have come to rely on these new technologies, but unfortunately the automakers haven’t done their part to protect us from cyber-attacks or privacy invasions,” said Markey, a member of the Senate’s Commerce, Science and Transportation Committee.
“Even as we are more connected than ever in our cars and trucks, our technology systems and data security remain largely unprotected. We need to work with the industry and cyber-security experts to establish clear rules of the road to ensure the safety and privacy of 21st-century American drivers.”
Of the 17 car makers who replied [PDF] to Markey’s letters (Tesla, Aston Martin, and Lamborghini didn’t), all made extensive use of computing in their 2014 models, with some carrying 50 electronic control units (ECUs) running on a series of internal networks.
BMW, Chrysler, Ford, General Motors, Honda, Hyundai, Jaguar Land Rover, Mazda, Mercedes-Benz, Mitsubishi, Nissan, Porsche, Subaru, Toyota, Volkswagen (with Audi), and Volvo responded to the study. According to the senator’s six-page dossier:
- Over 90 per cent of vehicles manufactured in 2014 had a wireless network of some kind — such as Bluetooth to link smartphones to the dashboard or a proprietary standard for technicians to pull out diagnostics.
- Only six automakers have any kind of security software running in their cars — such as firewalls for blocking connections from untrusted devices, or encryption for protecting data in transit around the vehicle.
- Just five secured wireless access points with passwords, encryption or proximity sensors that (in theory) only allow hardware detected within the car to join a given network.
- And only models made by two companies can alert the manufacturers in real time if a malicious software attack is attempted — the others wait until a technician checks at the next servicing.
There wasn’t much detail on the security of over-the-air updates for firmware, nor the use of crypto to protect personal data being phoned home from vehicles to an automaker’s HQ.
January 7, 2015
In Wired, Cory Doctorow explains why bad legal precedents from more than a decade ago are making us more vulnerable rather than safer:
We live in a world made of computers. Your car is a computer that drives down the freeway at 60 mph with you strapped inside. If you live or work in a modern building, computers regulate its temperature and respiration. And we’re not just putting our bodies inside computers — we’re also putting computers inside our bodies. I recently exchanged words in an airport lounge with a late arrival who wanted to use the sole electrical plug, which I had beat him to, fair and square. “I need to charge my laptop,” I said. “I need to charge my leg,” he said, rolling up his pants to show me his robotic prosthesis. I surrendered the plug.
You and I and everyone who grew up with earbuds? There’s a day in our future when we’ll have hearing aids, and chances are they won’t be retro-hipster beige transistorized analog devices: They’ll be computers in our heads.
And that’s why the current regulatory paradigm for computers, inherited from the 16-year-old stupidity that is the Digital Millennium Copyright Act, needs to change. As things stand, the law requires that computing devices be designed to sometimes disobey their owners, so that their owners won’t do something undesirable. To make this work, we also have to criminalize anything that might help owners change their computers to let the machines do that supposedly undesirable thing.
This approach to controlling digital devices was annoying back in, say, 1995, when we got the DVD player that prevented us from skipping ads or playing an out-of-region disc. But it will be intolerable and deadly dangerous when our 3-D printers, self-driving cars, smart houses, and even parts of our bodies are designed with the same restrictions. Because those restrictions would change the fundamental nature of computers. Speaking in my capacity as a dystopian science fiction writer: This scares the hell out of me.
December 19, 2014
Mark Steyn is never one to hold back an opinion:
I was barely aware of The Interview until, while sitting through a trailer for what seemed like just another idiotic leaden comedy, my youngest informed me that the North Koreans had denounced the film as “an act of war”. If it is, they seem to have won it fairly decisively: Kim Jong-Un has just vaporized a Hollywood blockbuster as totally as if one of his No Dong missiles had taken out the studio. As it is, the fellows with no dong turned out to be the executives of Sony Pictures.
I wouldn’t mind but this is the same industry that congratulates itself endlessly — not least in its annual six-hour awards ceremony — on its artists’ courage and bravery. Called on to show some for the first time in their lives, they folded like a cheap suit. As opposed to the bank-breaking suit their lawyers advised them they’d be looking at if they released the film and someone put anthrax in the popcorn. I think of all the occasions in recent years when I’ve found myself sharing a stage with obscure Europeans who’ve fallen afoul of Islam — Swedish artists, Danish cartoonists, Norwegian comediennes, all of whom showed more courage than these Beverly Hills bigshots.
While I often find Mark Steyn’s comments amusing and insightful, the real lesson here may not be the spineless response of Sony, but the impact of a legal system on the otherwise free actions of individuals and organizations: if Sony had gone ahead with the release and someone did attack one or more of the theatres where the movie was being shown, how would the legal system treat the situation? As an act of war by an external enemy or as an act of gross negligence by Sony and the theatre owners that would bankrupt every single company in the distribution chain (and probably lead to criminal charges against individual theatre managers and corporate officers)? While I disagree with Sony’s decision to fold under the pressure, I can’t imagine any corporate board being comfortable with that kind of stark legal threat … Sony’s executives may have been presented with no choice at all.
I see that, following the disappearance of The Interview, a Texan movie theater replaced it with a screening of Team America. That film wouldn’t get made today, either.
Hollywood has spent the 21st century retreating from storytelling into a glossy, expensive CGI playground in which nothing real is at stake. That’s all we’ll be getting from now on. Oh, and occasional Oscar bait about embattled screenwriters who stood up to the House UnAmerican Activities Committee six decades ago, even as their successors cave to, of all things, Kim’s UnKorean Activities Committee. American pop culture — supposedly the most powerful and influential force on the planet – has just surrendered to a one-man psycho-state economic basket-case that starves its own population.
Eugene Volokh makes some of the same points that Steyn raises:
Deadline Hollywood mentions several such theater chains. Yesterday, the Department of Homeland Security stated that there was “no credible intelligence” that such threatened terrorist attacks would take place, but unsurprisingly, some chains are being extra cautious here.
I sympathize with the theaters’ situation — they’re in the business of showing patrons a good time, and they’re rightly not interested in becoming free speech martyrs, even if there’s only a small chance that they’ll be attacked. Moreover, the very threats may well keep moviegoers away from theater complexes that are showing the movie, thus reducing revenue from all the screens at the complex.
But behavior that is rewarded is repeated. Thugs who oppose movies that are hostile to North Korea, China, Russia, Iran, the Islamic State, extremist Islam generally or any other country or religion will learn the lesson. The same will go as to thugs who are willing to use threats of violence to squelch expression they oppose for reasons related to abortion, environmentalism, animal rights and so on.