Q: What do Google, Facebook, Twitter, Apple, and Samsung all have in common?
A: Their business models involve interrupting you all day long.
Individually, each company’s interruptions are trivial. You can easily ignore them. But cumulatively, the interruptions from these and other companies can be crippling.
In the economy of the past, companies made money by being useful to customers. Now the biggest tech companies make their money by distracting you with ads and apps and notifications and whatnot. I don’t mean to sound like an alarmist, but I think this is the reason 80% of the adults I know are medicating. People are literally being driven crazy by a combination of complexity (too many choices) and the Interruption Economy.
There are days when my brain is flying in so many directions that I have to literally chant aloud what I need to do next in order to focus.
I’m wondering if you have as many distractions in your life. And if you do, can the chanting help you too? The next time you have a boring task that you know will be subject to lots of interruptions, try the chanting technique and let me know how it goes. It probably won’t cure your ADHD but it might help you ignore the tech industry’s distractions until you get your tasks done.
Bonus question: The economy has evolved from “How can I help you?” to “How can I distract you?” Can that trend lead anywhere but mass mental illness?
My hypothesis, based on observation alone, is that the business model of the tech industry, with its complexity, glut of options, and continuous interruptions, is literally driving people to mental illness.
Scott Adams, “The Interruption Economy”, Scott Adams Blog, 2015-07-07.
February 28, 2017
Amy Alkon views with disdain a Quartz article on sexually harassing, inter alia, Alexa and Siri:
Quartz Seriously Wants To Know: Are You Sexually Harassing Your Phone?
There’s an unbelievable piece up at Quartz, reflecting a gone-mad sector of our society — ultimately driven by radical academic feminism (though typically not admitting or crediting its nutbag roots).
Feminism was supposed to be about women wanting equal treatment. Now, as I like to put it, feminists no longer demand that women be treated as equals but as eggshells.
This article is a case in point. “We tested bots like Siri and Alexa to see who would stand up to sexual harassment,” is the headline. […]
First of all, if I could have Siri in either a bitchy drag queen voice or an Indian accent (from India, that is), which I love, I would. French or Italian or Eastern European would be fun, too. Because Apple’s rather boring about this — probably to serve an increasingly humorless and humor-attacking public — I think I have it on the British guy right now.
But I hate Siri and never use it.
The point is, you can change Siri to a man and harass the fuck out of it. I yell profanity at automated telephone systems when they repeatedly won’t accept my answer — both because I’m kind of immature and because there was this (probably mythic) idea out there that swearing would trigger a live operator to come on.
And per these evolved sex differences — we go for different Achilles heels in men and women when we’re attacking them. That’s because men and women are biologically and psychologically different, and men are more likely to be leaders, for example, and women are more likely to be caretakers.
Though male brains and female brains are mostly similar, these evolved sex differences lead to some differences in our psychology and how we present ourselves in the world (including the roles women versus men tend to have).
January 29, 2017
ESR performs a useful service in pulling together a document on what all hackers used to need to know, regardless of the particular technical interest they followed. I was never technical enough to be a hacker, but I worked with many of them, so I had to know (or know where to find) much of this information, too.
One fine day in January 2017 I was reminded of something I had half-noticed a few times over the previous decade. That is, younger hackers don’t know the bit structure of ASCII and the meaning of the odder control characters in it.
This is knowledge every fledgling hacker used to absorb through their pores. It’s nobody’s fault this changed; the obsolescence of hardware terminals and the RS-232 protocol is what did it. Tools generate culture; sometimes, when a tool becomes obsolete, a bit of cultural commonality quietly evaporates. It can be difficult to notice that this has happened.
This document is a collection of facts about ASCII and related technologies, notably hardware terminals and RS-232 and modems. This is lore that was at one time near-universal and is no longer. It’s not likely to be directly useful today – until you trip over some piece of still-functioning technology where it’s relevant (like a GPS puck), or it makes sense of some old-fart war story. Even so, it’s good to know anyway, for cultural-literacy reasons.
One thing this collection has that tends to be indefinite in the minds of older hackers is calendar dates. Those of us who lived through all this tend to have remembered order and dependencies but not exact timing; here, I did the research to pin a lot of that down. I’ve noticed that people have a tendency to retrospectively back-date the technologies that interest them, so even if you did live through the era it describes you might get a few surprises from reading.
There are references to Unix in here because I am mainly attempting to educate younger open-source hackers working on Unix-derived systems such as Linux and the BSDs. If those terms mean nothing to you, the rest of this document probably won’t either.
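If, like the younger hackers ESR mentions, you never absorbed that bit-level lore, here is a minimal Python sketch (mine, not ESR's) of the structure he is alluding to: the control characters are just the corresponding printable characters with bits 5 and 6 cleared, and upper- and lower-case letters differ only in bit 5.

```python
# Minimal illustration of the ASCII bit structure referred to above.
# Control characters are the matching printable characters with bits 5
# and 6 cleared; upper- and lower-case letters differ only in bit 5 (0x20).

def ctrl(ch: str) -> int:
    """The code a hardware terminal sent for Ctrl plus this key."""
    return ord(ch.upper()) & 0x1F

for key in "AIJM[":
    print(f"Ctrl-{key} -> 0x{ctrl(key):02X}")
# Ctrl-A -> 0x01 (SOH), Ctrl-I -> 0x09 (TAB), Ctrl-J -> 0x0A (LF),
# Ctrl-M -> 0x0D (CR), Ctrl-[ -> 0x1B (ESC)

print(hex(ord("A")), hex(ord("a")))  # 0x41 vs 0x61: only bit 5 differs
```

That bit pattern is why Ctrl-M still means carriage return and Ctrl-[ still produces Escape, long after the hardware terminals that made it obvious have disappeared.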
October 13, 2016
I have no expertise in this area, but it appears to me that if the “Silicon Valley billionaires” are right and we are living in a simulated reality, there are only two likely options. First, we’re (if you’ll pardon the simplification) “players in the game” — whether we’re aware of it within our simulation or not — and we can leave the simulation in the same way a World of Warcraft or Final Fantasy XIV or Guild Wars 2 player can log off and resume life in “meat space”. Second, most or all of us are actually NPCs and there’s no way to leave the simulation because (some|most|all) of us have no objective existence outside the simulation we currently occupy. If the second option is true (and mathematically it’s the one that’s overwhelmingly likely if we’re actually in a simulation), then there’s little point in discovering that it’s true, as we’ll all cease to exist when our home simulation is turned off.
August 22, 2016
I learned something this weekend about the high cost of the subtle delusion that creative technical problem-solving is the preserve of a priesthood of experts, using powers and perceptions beyond the ken of ordinary human beings.
Terry Pratchett is the author of the Discworld series of satirical fantasies. He is — and I don’t say this lightly, or without having given the matter thought and study — quite probably the most consistently excellent writer of intelligent humor in the last century in English. One has to go back as far as P.G. Wodehouse or Mark Twain to find an obvious equal in consistent quality, volume, and sly wisdom.
I’ve been a fan of Terry’s since before his first Discworld novel; I’m one of the few people who remembers Strata, his 1981 first experiment with the disc-world concept. The man has been something like a long-term acquaintance of mine for ten years — one of those people you’d like to call a friend, and who you think would like to call you a friend, if the two of you ever arranged enough concentrated hang time to get that close. But we’re both damn busy people, and live five thousand miles apart.
This weekend, Terry and I were both guests of honor at a hybrid SF convention and Linux conference called Penguicon held in Warren, Michigan. We finally got our hang time. Among other things, I taught Terry how to shoot pistols. He loves shooter games, but as a British resident his opportunities to play with real firearms are strictly limited. (I can report that Terry handled my .45 semi with remarkable competence and steadiness for a first-timer. I can also report that this surprised me not at all.)
During Terry’s Guest-of-Honor speech, he revealed his past as (he thought) a failed hacker. It turns out that back in the 1970s Terry used to wire up elaborate computerized gadgets from Timex Sinclair computers. One of his projects used a primitive memory chip that had light-sensitive gates to build a sort of perceptron that could actually see the difference between a circle and a cross. His magnum opus was a weather station that would log readings of temperature and barometric pressure overnight and deliver weather reports through a voice synthesizer.
But the most astonishing part of the speech was the followup in which Terry told us that despite his keen interest and elaborate homebrewing, he didn’t become a programmer or a hardware tech because he thought techies had to know mathematics, which he thought he had no talent for. He then revealed that he thought of his projects as a sort of bad imitation of programming, because his hardware and software designs were total lash-ups and he never really knew what he was doing.
I couldn’t stand it. “And you think it was any different for us?” I called out. The audience laughed and Terry passed off the remark with a quip. But I was just boggled. Because I know that almost all really bright techies start out that way, as compulsive tinkerers who blundered around learning by experience before they acquired systematic knowledge. “Oh ye gods and little fishes”, I thought to myself, “Terry is a hacker!”
Yes, I thought ‘is’ — even if Terry hasn’t actually tinkered with any computer software or hardware in a quarter-century. Being a hacker is expressed through skills and projects, but it’s really a kind of attitude or mental stance that, once acquired, is never really lost. It’s a kind of intense, omnivorous playfulness that tends to color everything a person does.
So it burst upon me that Terry Pratchett has the hacker nature. Which, actually, explains something that has mildly puzzled me for years. Terry has a huge following in the hacker community — knowing his books is something close to basic cultural literacy for Internet geeks. One is actually hard-put to think of any other writer for whom this is as true. The question this has always raised for me is: why Terry, rather than some hard-SF writer whose work explicitly celebrates the technologies we play with?
Eric S. Raymond, “The Delusion of Expertise”, Armed and Dangerous, 2003-05-05.
July 28, 2016
Michael Geist explains why the federal government’s plans for digitization are so underwhelming:
Imagine going to your local library in search of Canadian books. You wander through the stacks but are surprised to find most shelves barren with the exception of books that are over a hundred years old. This sounds more like an abandoned library than one serving the needs of its patrons, yet it is roughly what a recently released Canadian National Heritage Digitization Strategy envisions.
Led by Library and Archives Canada and endorsed by Canadian Heritage Minister Mélanie Joly, the strategy acknowledges that digital technologies make it possible “for memory institutions to provide immediate access to their holdings to an almost limitless audience.”
Yet it stops strangely short of trying to do just that.
My weekly technology law column notes that rather than establishing a bold objective as has been the hallmark of recent Liberal government policy initiatives, the strategy sets as its 10-year goal the digitization of 90 per cent of all published heritage dating from before 1917 along with 50 per cent of all monographs published before 1940. It also hopes to cover all scientific journals published by Canadian universities before 2000, selected sound recordings, and all historical maps.
The strategy points to similar initiatives in other countries, but the Canadian targets pale by comparison. For example, the Netherlands plans to digitize 90 per cent of all books published in that country by 2018 along with many newspapers and magazines that pre-date 1940.
Canada’s inability to adopt a cohesive national digitization strategy has been an ongoing source of frustration and the subject of multiple studies which concluded that the country is falling behind. While there has been no shortage of pilot projects and useful initiatives from university libraries, Canada has thus far failed to articulate an ambitious, national digitization vision.
March 5, 2016
ESR posted this video on Google+, saying “Mind…utterly…blown. This is how computers worked before electronic gate logic. There’s a weird beauty of mathematics made tangible about it.”
Uploaded on 13 Jul 2011
A 1953 training film for a mechanical fire control computer aboard Navy Ships. Amazing how problems of mathematical computation were solved so elegantly in “permanent” mechanical form, before microprocessors became inexpensive and commonplace.
November 22, 2015
Published on 30 Dec 2013
A web app that works out how many seconds ago something happened. How hard can coding that be? Tom Scott explains how time twists and turns like a twisty-turny thing. It’s not to be trifled with!
H/T to Jeremy for the link.
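For a concrete taste of why the “easy” version goes wrong, here is a minimal Python sketch (the event timestamp is a made-up example): doing the arithmetic on naive local times silently ignores DST shifts and time-zone changes, which is exactly the sort of twist the video walks through. Working in UTC dodges most of it, though even this ignores leap seconds.

```python
from datetime import datetime, timezone

# A minimal sketch of the "how many seconds ago?" problem. The event
# timestamp below is hypothetical; the point is the UTC discipline.

def seconds_ago(event: datetime) -> float:
    """Elapsed seconds since an event, computed entirely in UTC."""
    if event.tzinfo is None:
        # Naive local times are where the DST and time-zone bugs creep in.
        raise ValueError("pass a timezone-aware timestamp")
    return (datetime.now(timezone.utc) - event).total_seconds()

event = datetime(2013, 12, 30, 12, 0, tzinfo=timezone.utc)
print(f"{seconds_ago(event):,.0f} seconds ago")
```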
September 14, 2015
Jerry Pournelle talks about his differing browser experiences on the Microsoft Surface:
Apple had their announcements today, but I had story conferences so I could not watch them live. I finished my fiction work about lunch time, so I thought to view some reports, and it is time I learned more about the new Windows and got more used to my Surface 3 Pro; a fitting machine to view new Apple products, particularly their new iPad Pro, which is, I expect, their answer to the Surface Pro and Windows 10.
My usual browser is Firefox, which has features I don’t love but by and large I get along with it; but with the Surface it seemed appropriate to make a serious effort to use Edge, the new Microsoft browser. Of course it has Microsoft Bing as the default search engine. It also doesn’t really understand the size of the Pro. It gave me horizontal scrolling, even though I had Edge full screen. I looked up Apple announcements, and Bing gave me a nice list. Right click on the nice bent Microsoft pocket wireless mouse, and open a report in a new screen. Lo, I have to do horizontal scrolling; Edge makes sure there are ads on screen at all times, so you have to horizontally scroll the text to see all of it. Line by line. But I can always see some ads. Edge makes sure I don’t miss ads. It doesn’t care whether I can read the text I was looking for, but it is more careful about the ads. I’m sure that makes the advertisers happy, but I’m not so sure about the users. I thought I went looking for an article, not for ads.
Edge also kept doing things I hadn’t asked it to, and I’d lose the text. Eventually I found that if I closed the window, went back to the Bing screen, and right clicked to open that same window in a new tab, I was able to – carefully – scroll through the text, and adjust the screen so all the text was on screen even though there was still horizontal scrolling possible. This is probably a function of inexperience, but using a touch screen and Edge is a new experience.
Even so it was a rough read. I gave up and went to Firefox on the Surface Pro. Firefox has Google as its default search engine, and the top selections it offered me – all I could see on one screen – were different from the ones I saw with Bing. I had to do a bit of scrolling to find the article I had been trying to read, but eventually I found it. Right click to open it in a new tab. Voila. All my text in the center. I could read it. Much easier. For the record: same site, adjusted to width in Firefox on the Surface Pro, horizontal scrolling of the same article viewed in Edge. Probably my fault, but I don’t know what I did wrong.
Now in Microsoft’s defense, I don’t know Edge very well; but if you are going to a Surface Pro, you may well find Firefox easier to use than Edge. A lot easier to use.
As to Google vs. Bing, in this one case I found Bing superior; what it offered me had more content. But Edge is advertiser friendly, not User friendly.
August 8, 2015
SF author (and former US Army officer) Tom Kratman answers a few questions about drones, artificial intelligence, and the threat/promise of intelligent, self-directed weapon platforms in the near future:
Ordinarily, in this space, I try to give some answers. I’m going to try again, in an area in which I am, at least at a technological level, admittedly inexpert. Feel free to argue.
Question 1: Are unmanned aerial drones going to take over from manned combat aircraft?
I am assuming here that at some point in time the total situational awareness package of the drone operator will be sufficient for him to compete or even prevail against a manned aircraft in aerial combat. In other words, the drone operator is going to climb into a cockpit far below ground and the only way he’ll be able to tell he’s not in an aircraft is that he’ll feel no inertia beyond the bare minimum for a touch of realism, to improve his situational awareness, but with no chance of blacking out due to high G maneuvers.
Still, I think the answer to the question is “no,” at least as long as the drones remain under the control of an operator, usually far, far to the rear. Why not? Because to the extent the things are effective they will invite a proportional, or even more than proportional, response to defeat or at least mitigate their effectiveness. That’s just in the nature of war. This is exacerbated by there being at least three or four routes to attack the remote controlled drone. One is by attacking the operator or the base; if the drone is effective enough, it will justify the effort of making those attacks. Yes, he may be bunkered or hidden or both, but he has a signal and a signature, which can probably be found. To the extent the drone is similar in size and support needs to a manned aircraft, that runway and base will be obvious.
The second target of attack is the drone itself. Both of these targets, base/operator and aircraft, are replicated in the vulnerabilities of the manned aircraft, itself and its base. However, the remote controlled drone has an additional vulnerability: the linkage between itself and its operator. Yes, signals can be encrypted. But almost any signal, to include the encryption, can be captured, stored, delayed, amplified, and repeated, while there are practical limits on how frequently the codes can be changed. Almost anything can be jammed. To the extent the drone is dependent on one or another, or all, of the global positioning systems around the world, that signal, too, can be jammed or captured, stored, delayed, amplified and repeated. Moreover, EMP, electro-magnetic pulse, can be generated with devices well short of the nuclear. EMP may not bother people directly, but a purely electronic, remote controlled device will tend to be at least somewhat vulnerable, even if it’s been hardened.
Question 2: Will unmanned aircraft, flown by Artificial Intelligences, take over from manned combat aircraft?
The advantages of the unmanned combat aircraft, however, ranging from immunity to high G forces, to less airframe being required without the need for life support, or, alternatively, for a greater fuel or ordnance load, to expendability, because Unit 278-B356 is no one’s precious little darling, back home, to the same Unit’s invulnerability, so far as I can conceive, to torture-induced propaganda confessions, still argue for the eventual, at least partial, triumph of the self-directing, unmanned, aerial combat aircraft.
Even so, I’m going to go out on a limb and go with my instincts and one reason. The reason is that I have never yet met an AI for a wargame I couldn’t beat the digital snot out of, while even fairly dumb human opponents can present problems. Coupled with that, my instincts tell me that the better arrangement is going to be a mix of manned and unmanned, possibly with the manned retaining control of the unmanned until the last second before action.
This presupposes, of course, that we don’t come up with something – quite powerful lasers and/or renunciation of the ban on blinding lasers – to sweep all aircraft from the sky.
August 2, 2015
The Economist looks at the apparently unstoppable rush to internet-connect everything and why we should worry about security now:
Unfortunately, computer security is about to get trickier. Computers have already spread from people’s desktops into their pockets. Now they are embedding themselves in all sorts of gadgets, from cars and televisions to children’s toys, refrigerators and industrial kit. Cisco, a maker of networking equipment, reckons that there are 15 billion connected devices out there today. By 2020, it thinks, that number could climb to 50 billion. Boosters promise that a world of networked computers and sensors will be a place of unparalleled convenience and efficiency. They call it the “internet of things”.
Computer-security people call it a disaster in the making. They worry that, in their rush to bring cyber-widgets to market, the companies that produce them have not learned the lessons of the early years of the internet. The big computing firms of the 1980s and 1990s treated security as an afterthought. Only once the threats—in the forms of viruses, hacking attacks and so on—became apparent, did Microsoft, Apple and the rest start trying to fix things. But bolting on security after the fact is much harder than building it in from the start.
Of course, governments are desperate to prevent us from hiding our activities from them by way of cryptography or even moderately secure connections, so there’s the risk that any pre-rolled security option offered by a major corporation has already been riddled with convenient holes for government spooks … which makes it even more likely that others can also find and exploit those security holes.
… companies in all industries must heed the lessons that computing firms learned long ago. Writing completely secure code is almost impossible. As a consequence, a culture of openness is the best defence, because it helps spread fixes. When academic researchers contacted a chipmaker working for Volkswagen to tell it that they had found a vulnerability in a remote-car-key system, Volkswagen’s response included a court injunction. Shooting the messenger does not work. Indeed, firms such as Google now offer monetary rewards, or “bug bounties”, to hackers who contact them with details of flaws they have unearthed.
Thirty years ago, computer-makers that failed to take security seriously could claim ignorance as a defence. No longer. The internet of things will bring many benefits. The time to plan for its inevitable flaws is now.
June 23, 2015
Megan McArdle on what she characterizes as possibly “the worst cyber-breach the U.S. has ever experienced”:
And yet, neither the government nor the public seems to be taking it all that seriously. It’s been getting considerably less play than the Snowden affair did, or the administration’s other massively public IT failure: the meltdown of the Obamacare exchanges. For that matter, Google News returns more hits on a papal encyclical about climate change that will have no obvious impact on anything than it does for a major security breach in the U.S. government. The administration certainly doesn’t seem that concerned. Yesterday, the White House told Reuters that President Obama “continues to have confidence in Office of Personnel Management Director Katherine Archuleta.”
I’m tempted to suggest that the confidence our president expresses in people who preside over these cyber-disasters, and the remarkable string of said cyber-disasters that have occurred under his presidency, might actually be connected. So tempted that I actually am suggesting it. President Obama’s administration has been marked by titanic serial IT disasters, and no one seems to feel any particular urgency about preventing the next one. By now, that’s hardly surprising. Kathleen Sebelius was eased out months after the Department of Health and Human Services botched the one absolutely crucial element of the Obamacare rollout. The NSA director’s offer to resign over the Snowden leak was politely declined. And now, apparently, Obama has full faith and confidence in the folks at OPM. Why shouldn’t he? Voters have never held Obama responsible for his administration’s appalling IT record, so why should he demand accountability from those below him?
Yes, yes, I know. You can’t say this is all Obama’s fault. Government IT is almost doomed to be terrible; the public sector can’t pay salaries that are competitive with the private sector, they’re hampered by government contracting rules, and their bureaucratic procedures make it hard to build good systems. And that’s all true. Yet note this: When the exchanges crashed on their maiden flight, the government managed to build a crudely functioning website in, basically, a month, a task they’d been systematically failing at for the previous three years. What was the difference? Urgency. When Obama understood that his presidency was on the line, he made sure it got done.
Update: It’s now asserted that the OPM hack exposed more than four times as many people’s personal data as the agency had previously admitted.
The personal data of an estimated 18 million current, former and prospective federal employees were affected by a cyber breach at the Office of Personnel Management – more than four times the 4.2 million the agency has publicly acknowledged. The number is expected to grow, according to U.S. officials briefed on the investigation.
FBI Director James Comey gave the 18 million estimate in a closed-door briefing to Senators in recent weeks, using the OPM’s own internal data, according to U.S. officials briefed on the matter. Those affected could include people who applied for government jobs, but never actually ended up working for the government.
The same hackers who accessed OPM’s data are believed to have last year breached an OPM contractor, KeyPoint Government Solutions, U.S. officials said. When the OPM breach was discovered in April, investigators found that KeyPoint security credentials were used to breach the OPM system.
Some investigators believe that after that intrusion last year, OPM officials should have blocked all access from KeyPoint, and that doing so could have prevented more serious damage. But a person briefed on the investigation says OPM officials don’t believe such a move would have made a difference. That’s because the OPM breach is believed to have pre-dated the KeyPoint breach. Hackers are also believed to have built their own backdoor access to the OPM system, armed with high-level system administrator access to the system. One official called it the “keys to the kingdom.” KeyPoint did not respond to CNN’s request for comment.
U.S. investigators believe the Chinese government is behind the cyber intrusion, which is considered the worst ever against the U.S. government.
May 14, 2015
In Bloomberg View, Virginia Postrel looks at the latest “Moore’s Law is over” notions:
Semiconductors are what economists call a “general purpose technology,” like electrical motors. Their effects spread through the economy, reorganizing industries and boosting productivity. The better and cheaper chips become, the greater the gains rippling through every enterprise that uses computers, from the special-effects houses producing Hollywood magic to the corner dry cleaners keeping track of your clothes.
Moore’s Law, which marked its 50th anniversary on Sunday, posits that computing power increases exponentially, with the number of components on a chip doubling every 18 months to two years. It’s not a law of nature, of course, but a kind of self-fulfilling prophecy, driving innovative efforts and customer expectations. Each generation of chips is far more powerful than the previous, but not more expensive. So the price of actual computing power keeps plummeting.
At least that’s how it seemed to be working until about 2008. According to the producer price index compiled by the Bureau of Labor Statistics, the price of the semiconductors used in personal computers fell 48 percent a year from 2000 to 2004, 29 percent a year from 2004 to 2008, and a measly 8 percent a year from 2008 to 2013.
The sudden slowdown presents a puzzle. It suggests that the semiconductor business isn’t as innovative as it used to be. Yet engineering measures of the chips’ technical capabilities have shown no letup in the rate of improvement. Neither have tests of how the semiconductors perform on various computing tasks.
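To see how sharp that slowdown is, here is a back-of-the-envelope sketch that simply compounds the rates quoted above, treating each quoted figure as a constant annual rate within its period:

```python
# Compounding the BLS price-index declines quoted above: what fraction of
# the starting price remains at the end of each period?

periods = [
    ("2000-2004", 0.48, 4),  # 48% per year over 4 years
    ("2004-2008", 0.29, 4),  # 29% per year over 4 years
    ("2008-2013", 0.08, 5),  # 8% per year over 5 years
]

for label, annual_fall, years in periods:
    remaining = (1 - annual_fall) ** years
    print(f"{label}: prices end at about {remaining:.0%} of where they started")

# Roughly 7%, 25%, and 66% respectively: a span of years that once cut chip
# prices by more than 90% now shaves off only about a third.
```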
April 19, 2015
Scott Alexander recently attended a local psychiatry conference, with some essential themes being emphasized:
This conference consisted of a series of talks about all the most important issues of the day, like ‘The Menace Of Psychologists Being Allowed To Prescribe Medication’, ‘How To Be An Advocate For Important Issues Affecting Your Patients Such As The Possibility That Psychologists Might Be Allowed To Prescribe Them Medication’, and ‘Protecting Members Of Disadvantaged Communities From Psychologists Prescribing Them Medication’.
As somebody who’s noticed that the average waiting list for a desperately ill person to see a psychiatrist is approaching the twelve month mark in some places, I was pretty okay with psychologists prescribing medication. The scare stories about how psychologists might prescribe medications unsafely didn’t have much effect on me, since I continue to believe that putting antidepressants in a vending machine would be a more safety-conscious system than what we have now (a vending machine would at least limit antidepressants to people who have $1.25 in change; the average primary care doctor is nowhere near that selective). Annnnnyway, this made me kind of uncomfortable at the conference and I Struck A Courageous Blow Against The Cartelization Of Medicine by sneaking out without putting my name on their mailing list.
But before I did, I managed to take some notes about what’s going on in the wider psychiatric world, including:
– The newest breakthrough in ensuring schizophrenic people take their medication (a hard problem!) is bundling the pills with an ingestible computer chip that transmits data from the patient’s stomach. It’s a bold plan, somewhat complicated by the fact that one of the most common symptoms of schizophrenia is the paranoid fear that somebody has implanted a chip in your body to monitor you. Can you imagine being a schizophrenic guy who has to explain to your new doctor that your old doctor put computer chips in your pills to monitor you? Yikes. If they go through with this, I hope they publish the results in the form of a sequel to The Three Christs of Ypsilanti.
– The same team is working on a smartphone app to detect schizophrenic relapses. The system uses GPS to monitor location, accelerometer to detect movements, and microphone to check tone of voice and speaking pattern, then throws it into a machine learning system that tries to differentiate psychotic from normal behavior (for example, psychotic people might speak faster, or rock back and forth a lot). Again, interesting idea. But again, one of the most common paranoid schizophrenic delusions is that their electronic devices are monitoring everything they do. If you make every one of a psychotic person’s delusions come true, such that they no longer have any beliefs that do not correspond to reality, does that technically mean you’ve cured them? I don’t know, but I’m glad we have people investigating this important issue.
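Purely to make the described sensors-to-classifier pipeline concrete, here is a toy Python sketch (every feature choice, number, and label is invented for illustration and has nothing to do with the actual research): daily sensor summaries go into a classifier that scores how “relapse-like” a day looks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy sketch of the sensor-to-classifier pipeline described above.
# All features, numbers, and labels here are made up for illustration.

rng = np.random.default_rng(0)

# Per-day summaries: [hours away from home (GPS),
#                     movement variability (accelerometer),
#                     speech rate in words/min (microphone)]
typical_days = rng.normal([4.0, 1.0, 140.0], [1.0, 0.3, 15.0], size=(200, 3))
relapse_days = rng.normal([1.0, 2.5, 190.0], [1.0, 0.5, 20.0], size=(200, 3))

X = np.vstack([typical_days, relapse_days])
y = np.array([0] * 200 + [1] * 200)  # 0 = typical day, 1 = relapse-like day

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

today = [[0.5, 2.8, 200.0]]  # hypothetical sensor summary for one new day
print("relapse-like probability:", round(model.predict_proba(today)[0, 1], 2))
```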
February 23, 2015
Cory Doctorow is concerned about some of the possible developments within the “Internet of Things” that should concern us all:
The digital world has been colonized by a dangerous idea: that we can and should solve problems by preventing computer owners from deciding how their computers should behave. I’m not talking about a computer that’s designed to say, “Are you sure?” when you do something unexpected — not even one that asks, “Are you really, really sure?” when you click “OK.” I’m talking about a computer designed to say, “I CAN’T LET YOU DO THAT DAVE” when you tell it to give you root, to let you modify the OS or the filesystem.
Case in point: the cell-phone “kill switch” laws in California and Minnesota, which require manufacturers to design phones so that carriers or manufacturers can push an over-the-air update that bricks the phone without any user intervention, designed to deter cell-phone thieves. Early data suggests that the law is effective in preventing this kind of crime, but at a high and largely needless (and ill-considered) price.
To understand this price, we need to talk about what “security” is, from the perspective of a mobile device user: it’s a whole basket of risks, including the physical threat of violence from muggers; the financial cost of replacing a lost device; the opportunity cost of setting up a new device; and the threats to your privacy, finances, employment, and physical safety from having your data compromised.
The current kill-switch regime puts a lot of emphasis on the physical risks, and treats risks to your data as unimportant. It’s true that the physical risks associated with phone theft are substantial, but if a catastrophic data compromise doesn’t strike terror into your heart, it’s probably because you haven’t thought hard enough about it — and it’s a sure bet that this risk will only increase in importance over time, as you bind your finances, your access controls (car ignition, house entry), and your personal life more tightly to your mobile devices.
That is to say, phones are only going to get cheaper to replace, while mobile data breaches are only going to get more expensive.
It’s a mistake to design a computer to accept instructions over a public network that its owner can’t see, review, and countermand. When every phone has a back door and can be compromised by hacking, social-engineering, or legal-engineering by a manufacturer or carrier, then your phone’s security is only intact for so long as every customer service rep is bamboozle-proof, every cop is honest, and every carrier’s back end is well designed and fully patched.