Quotulatiousness

April 17, 2014

QotD: User interface design for all ages

Filed under: Humour, Media, Technology — Nicholas Russon @ 07:51

As your body staggers down the winding road to death, user interfaces that require fighter pilot-grade eyesight, the dexterity of a neurosurgeon, and the mental agility of Derren Brown, are going to screw with you at some point.

Don’t kid yourself otherwise — disability, in one form or another, can strike at any moment.

Given that people are proving ever harder to kill off, you can expect to have decades of life ahead of you — during which you’ll be battling to figure out where on the touchscreen that trendy transdimensional two-pixel wide “OK” button is hiding.

Can you believe, people born today will spend their entire lives having to cope with this crap? The only way I can explain the web design of many Google products today is that some wannabe Picasso stole Larry Page’s girl when they were all 13, and is only now exacting his revenge. Nobody makes things that bad by accident, surely?

Dominic Connor, “Is tech the preserve of the young able-bodied? Let’s talk over a fine dinner and claret”, The Register, 2014-04-17

April 16, 2014

QotD: The wizards of the web

Filed under: Business, Quotations, Technology — Nicholas Russon @ 08:28

You would have thought this would have sunk in by now. The fact that it hasn’t shows what an extraordinary machine the internet is — quite different to any technology that has gone before it. When the Lovebug struck, few of us lived our lives online. Back then we banked in branches, shopped in shops, met friends and lovers in the pub and obtained jobs by posting CVs. Tweeting was for the birds. Cyberspace was marginal. Now, for billions, the online world is their lives. But there is a problem. Only a tiny, tiny percentage of the people who use the internet have even the faintest clue about how any of it works. “SSL”, for instance, stands for “Secure Sockets Layer”.

I looked it up and sort of understood it — for about five minutes. While most drivers have at least a notion of how an engine works (something about petrol exploding in cylinders and making pistons go up and down and so forth) the very language of the internet — “domain names” and “DNS codes”, endless “protocols” and so forth — is arcane, exclusive; it is, in fact, the language of magic. For all intents and purposes the internet is run by wizards.

And the trouble with letting wizards run things is that when things go wrong we are at their mercy. The world spends several tens of billions of pounds a year on anti-malware programs, which we are exhorted to buy lest the walls of our digital castles collapse around us. Making security software is a huge industry, and whenever there is a problem — either caused by viruses or by a glitch like Heartbleed — the internet security companies rush to be quoted in the media. And guess what, their message is never “keep calm and carry on”. As Professor Ross Anderson of Cambridge University says: “Almost all the cost of cybercrime is the cost of anticipation.”

Michael Hanlon, “Relax, Mumsnet users: don’t lose sleep over Heartbleed hysteria”, Telegraph, 2014-04-16

April 9, 2014

XKCD on the impact of “Heartbleed”

Filed under: Technology — Nicholas Russon @ 11:00

Update: In case you’re not concerned about the seriousness of this issue, The Register’s John Leyden would like you to think again.

The catastrophic crypto key password vulnerability in OpenSSL affects far more than web servers, with everything from routers to smartphones also affected.

The so-called “Heartbleed” vulnerability (CVE-2014-0160) can be exploited to extract information from servers running vulnerable versions of OpenSSL, and this includes email servers and Android smartphones as well as routers.

Hackers could potentially gain access to private encryption keys, then use this information to decipher the encrypted traffic to and from vulnerable websites.

Web sites including Yahoo!, Flickr and OpenSSL were among the many left vulnerable to the megabug that exposed encryption keys, passwords and other sensitive information.

Preliminary tests suggested 47 of the 1,000 largest sites were vulnerable to Heartbleed, and that’s only among the fewer than half of those sites that support SSL or HTTPS at all. Many of the affected sites – including Yahoo! – have since patched the vulnerability. Even so, security experts – such as Graham Cluley – remain concerned.

OpenSSL is a widely used encryption library that is a key component of technology that enables secure (https) website connections.

The bug exists in the OpenSSL 1.0.1 source code and stems from coding flaws in a fairly new feature known as the TLS Heartbeat Extension. “TLS heartbeats are used as ‘keep alive’ packets so that the ends of an encrypted connection can agree to keep the session open even when they don’t have any official data to exchange,” explains security veteran Paul Ducklin in a post on Sophos’ Naked Security blog.

The Heartbleed vulnerability in the OpenSSL cryptographic library might be exploited to reveal contents of secured communication exchanges. The same flaw might also be used to lift SSL keys.

This means that sites could still be vulnerable to attacks after installing the patches in cases where a private key has been stolen. Sites therefore need to revoke exposed keys, reissue new keys, and invalidate all session keys and session cookies.
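The mechanics of the over-read are simple enough to sketch: the server trusted the length field in an incoming heartbeat message and echoed back that many bytes, even when the payload it actually received was shorter, so the reply carried whatever happened to sit in adjacent memory. Here is a toy model in Python (illustrative only: the real flaw lives in OpenSSL’s C heartbeat handler, and all the names below are invented):

```python
# Toy model of the Heartbleed over-read. Illustrative only: the real
# bug is in OpenSSL's C heartbeat code; these names are invented.

def handle_heartbeat(payload: bytes, claimed_length: int,
                     adjacent_memory: bytes) -> bytes:
    """Echo back `claimed_length` bytes, trusting the client's claim.

    A fixed server rejects any request where claimed_length exceeds
    len(payload); the vulnerable logic reads past the payload into
    whatever sits next to it on the heap.
    """
    heap = payload + adjacent_memory   # payload plus neighbouring data
    return heap[:claimed_length]       # no bounds check

# Honest request: send 4 bytes, ask for 4 bytes back.
benign = handle_heartbeat(b"bird", 4, b"-----SECRET_KEY-----")

# Malicious request: send 4 bytes but claim 24, leaking 20 extra bytes.
leaked = handle_heartbeat(b"bird", 24, b"-----SECRET_KEY-----")
```

The fix was a bounds check: if the claimed length exceeds the received payload, drop the request. That closes the hole going forward but, as noted above, does nothing about keys that may already have been read out.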

Bruce Schneier:

“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.

Half a million sites are vulnerable, including my own. Test your vulnerability here.

The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.

At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.

April 7, 2014

Big data’s promises and limitations

Filed under: Economics, Science, Technology — Nicholas Russon @ 07:06

In the New York Times, Gary Marcus and Ernest Davis examine the big claims being made for the big data revolution:

Is big data really all it’s cracked up to be? There is no doubt that big data is a valuable tool that has already had a critical impact in certain areas. For instance, almost every successful artificial intelligence computer program in the last 20 years, from Google’s search engine to the I.B.M. Jeopardy! champion Watson, has involved the substantial crunching of large bodies of data. But precisely because of its newfound popularity and growing use, we need to be levelheaded about what big data can — and can’t — do.

The first thing to note is that although big data is very good at detecting correlations, especially subtle correlations that an analysis of smaller data sets might miss, it never tells us which correlations are meaningful. A big data analysis might reveal, for instance, that from 2006 to 2011 the United States murder rate was well correlated with the market share of Internet Explorer: Both went down sharply. But it’s hard to imagine there is any causal relationship between the two. Likewise, from 1998 to 2007 the number of new cases of autism diagnosed was extremely well correlated with sales of organic food (both went up sharply), but identifying the correlation won’t by itself tell us whether diet has anything to do with autism.

Second, big data can work well as an adjunct to scientific inquiry but rarely succeeds as a wholesale replacement. Molecular biologists, for example, would very much like to be able to infer the three-dimensional structure of proteins from their underlying DNA sequence, and scientists working on the problem use big data as one tool among many. But no scientist thinks you can solve this problem by crunching data alone, no matter how powerful the statistical analysis; you will always need to start with an analysis that relies on an understanding of physics and biochemistry.
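The first point is easy to reproduce: any two series that trend in the same direction for a few years will produce an impressive correlation coefficient whether or not they have anything to do with each other. A quick Python sketch (the numbers are invented to mimic the shape of the trends, not the actual murder-rate or browser-share figures):

```python
# Two unrelated quantities that both decline over the same years
# correlate strongly. The data below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ie_share    = [60, 52, 45, 38, 30, 25]            # hypothetical %, 2006-2011
murder_rate = [5.8, 5.7, 5.4, 5.0, 4.8, 4.7]      # hypothetical per 100,000

r = pearson(ie_share, murder_rate)   # close to 1.0, yet meaningless
```

A big data analysis surfaces the number; deciding that it means nothing still takes a human.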

March 28, 2014

McGuinty staffer alleged to have wiped key computer hard drives

Filed under: Cancon — Nicholas Russon @ 08:43

I’m sure there’s a perfectly simple, non-suspicious reason for the outgoing chief of staff of a provincial premier to arrange for a non-government employee to have access to key computers during a change of administration… because otherwise this would look particularly bad:

The Kathleen Wynne minority government went into serious damage control mode after the release of an OPP warrant which alleges criminal behaviour in the office of the premier.

The explosive document, made public by a judge Thursday but not proven in court, alleges a former chief of staff for ex-premier Dalton McGuinty committed a criminal breach of trust by arranging for another staffer’s techie boyfriend to access 24 desktop computers in the premier’s office as Wynne took over the reins in 2013.

A committee investigating the Ontario Liberals’ cancellation of gas plants in Oakville and Mississauga, at a loss of up to $1.1 billion, had already ordered the government to turn over all records related to that decision.

Wynne said the allegations, if true, are “disturbing” but she was not aware of and would not have condoned such activity.

“I was not in charge of the former chief of staff, I did not direct the former chief of staff, I did not direct anyone in my office to destroy information, nor would I ever do that,” Wynne said. “And, in fact, we have changed the rules about the retention of information.”

OPP investigators probing the alleged illegal deletion of e-mails executed a search warrant last month on a Mississauga data storage facility used by the Ontario government.

March 26, 2014

Oculus in the news

Filed under: Business, Media, Technology — Nicholas Russon @ 07:33

Raph Koster reflects on the promise of Oculus:

Rendering was never the point.

Oh, it’s hard. But it’s rapidly becoming commodity hardware. That was in fact the basic premise of the Oculus Rift: that the mass market commodity solution for a very old dream was finally approaching a price point where it made sense. The patents were expiring; the panels were cheap and getting better by the month. The rest was plumbing. Hard plumbing, the sort that calls for a Carmack, maybe, but plumbing.

[...]

Look, there are a few big visions for the future of computing doing battle.

There’s a wearable camp, full of glasses and watches. It’s still nascent, but its doom is already waiting in the wings; biocomputing of various sorts (first contacts, then implants, nano, who knows) will unquestionably win out over time, just because glasses and watches are what tech has been removing from us, not getting us to put back on. Google has its bets down here.

There’s a beacon-y camp, one where mesh networks and constant broadcasts label and dissect everything around us, blaring ads and enticing us with sales coupons as we walk through malls. In this world, everything is annotated and shouting at a digital level, passing messages back and forth. It’s a ubicomp environment where everything is “smart.” Apple has its bets down here.

These two things are going to get married. One is the mouth, the other the ears. One is the poke, the other the skin. And then we’re in a cyberpunk dream of ads that float next to us as we walk, getting between us and the other people, our every movement mined for Big Data.

[...]

The virtue of Oculus lies in presence. A startling, unusual sort of presence. Immersion is nice, but presence is something else again. Presence is what makes Facebook feel like a conversation. Presence is what makes you hang out on World of Warcraft. Presence is what makes offices persist in the face of more than enough capability for remote work. Presence is why a video series can out-draw a text-based MOOC and presence is why live concerts can make more money than album sales.

Facebook is laying its bet on people, instead of smart objects. It’s banking on the idea that doing things with one another online — the thing that has fueled it all this time — is going to keep being important. This is a play to own walking through Machu Picchu without leaving home, a play to own every classroom and every museum. This is a play to own what you do with other people.

Update: Apparently some of the folks who backed the original Kickstarter campaign have their panties in a bunch now that there’s big money involved.


Attendees wear Oculus Rift HD virtual reality head-mounted displays as they play EVE: Valkyrie, a multiplayer virtual reality dogfighting shooter game, at the Intel booth at the 2014 International CES, January 9, 2014 in Las Vegas, Nevada. ROBYN BECK/AFP/Getty Images

Facebook’s purchase of virtual reality company Oculus for $2bn in stocks and shares is big news for a third company: Kickstarter, which today celebrates the first billion-dollar exit of a company formed through the crowdfunding platform.

Oculus raised $2.4m for its Rift headset in September 2012, exceeding its initial fundraising goal by 10 times. It remains one of the largest ever Kickstarter campaigns.

But as news of the acquisition broke Tuesday night, some of the 9,500 people who backed the project for sums of up to $5,000 apiece (the most popular package, containing an early prototype of the Rift, was backed by 5,600 people for a more reasonable $300) were rethinking their support.

[...]

For Kickstarter itself, the purchase raises awkward questions. The company has always maintained that it should not be viewed as a storefront for pre-ordering products; instead, a backer should be aware that they are giving money to a struggling artist or designer, and view the reward as a thanks rather than a purchase.

“Kickstarter Is Not a Store” is how the New York-based company put it in 2012, shortly after the Oculus Rift campaign closed. Instead, the company explained: “It’s a new way for creators and audiences to work together to make things.”

But if Kickstarter isn’t a store, and if backers also aren’t getting equity in the company which uses their money to build a $2bn business, then what are they actually paying for?

“Structurally I have an issue with it,” explains Buckenham, “in that the backer takes on a great deal of risk for relatively little upside and that the energy towards exciting things is formalised into a necessarily cash-based relationship in a way that enforces and extends capitalism into places where it previously didn’t have total dominion.”

March 16, 2014

Defining hackers and hacker culture

Filed under: History, Technology — Nicholas Russon @ 09:49

ESR put this together as a backgrounder for a documentary film maker:

In its original and still most correct sense, the word “hacker” describes a member of a tribe of expert and playful programmers with roots in 1960s and 1970s computer-science academia, the early microcomputer experimenters, and several other contributory cultures including science-fiction fandom.

Through a historical process I could explain in as much detail as you like, this hacker culture became the architects of today’s Internet and evolved into the open-source software movement. (I had a significant role in this process as historian and activist, which is why my friends recommended that you talk to me.)

People outside this culture sometimes refer to it as “old-school hackers” or “white-hat hackers” (the latter term also has some more specific shades of meaning). People inside it (including me) insist that we are just “hackers” and using that term for anyone else is misleading and disrespectful.

Within this culture, “hacker” applied to an individual is understood to be a title of honor which it is arrogant to claim for yourself. It has to be conferred by people who are already insiders. You earn it by building things, by a combination of work and cleverness and the right attitude. Nowadays “building things” centers on open-source software and hardware, and on the support services for open-source projects.

There are — seriously — people in the hacker culture who refuse to describe themselves individually as hackers because they think they haven’t earned the title yet — they haven’t built enough stuff. One of the social functions of tribal elders like myself is to be seen to be conferring the title, a certification that is taken quite seriously; it’s like being knighted.

[...]

There is a cluster of geek subcultures within which the term “hacker” has very high prestige. If you think about my earlier description it should be clear why. Building stuff is cool, it’s an achievement.

There is a tendency for members of those other subcultures to try to appropriate hacker status for themselves, and to emulate various hacker behaviors — sometimes superficially, sometimes deeply and genuinely.

Imitative behavior creates a sort of gray zone around the hacker culture proper. Some people in that zone are mere posers. Some are genuinely trying to act out hacker values as they (incompletely) understand them. Some are ‘hacktivists’ with Internet-related political agendas but who don’t write code. Some are outright criminals exploiting journalistic confusion about what “hacker” means. Some are ambiguous mixtures of several of these types.

March 3, 2014

The origins of hacking and the myth of a lost Eden of open source code

Filed under: History, Technology — Nicholas Russon @ 09:40

Gather round you kids, ’cause Uncle Eric is going to tell you about the dim, distant days of hacking before open source:

I was a historian before I was an activist, and I’ve been reminded recently that a lot of younger hackers have a simplified and somewhat mythologized view of how our culture evolved, one which tends to back-project today’s conditions onto the past.

In particular, many of us never knew – or are in the process of forgetting – how dependent we used to be on proprietary software. I think by failing to remember that past we are risking that we will misunderstand the present and mispredict the future, so I’m going to do what I can to set the record straight.

[...]

Without the Unix-spawned framework of concepts and technologies, having source code simply didn’t help very much. This is hard for younger hackers to realize, because they have no experience of the software world before retargetable compilers and code portability became relatively common. It’s hard for a lot of older hackers to remember because we mostly cut our teeth on Unix environments that were a few crucial years ahead of the curve.

But we shouldn’t forget. One very good reason is that believing a myth of the fall obscures the remarkable rise that we actually accomplished, bootstrapping ourselves up through a series of technological and social inventions to where open source on everyone’s desk and in everyone’s phone and ubiquitous in the Internet infrastructure is now taken for granted.

We didn’t get here because we failed in our duty to protect a prelapsarian software commons, but because we succeeded in creating one. That is worth remembering.

Update: In a follow-up post, ESR talks about closed source “sharecroppers” and Unix “nomads”.

Like the communities around SHARE (IBM mainframe users) and DECUS (DEC minicomputers) in the 1960s and 1970s, whatever community existed around ESPOL was radically limited by its utter dependence on the permissions and APIs that a single vendor was willing to provide. The ESPOL compiler was not retargetable. Whatever community developed around it could neither develop any autonomy nor survive the death of its hardware platform; the contributors had no place to retreat to in the event of predictable single-point failures.

I’ll call this sort of community “sharecroppers”. That term is a reference to SHARE, the oldest such user group. It also roughly expresses the relationship between these user groups and contributors, on the one hand, and the vendor on the other. The implied power relationship was pretty totally asymmetrical.

Contrast this with early Unix development. The key difference is that Unix-hosted code could survive the death of not just original hardware platforms but entire product lines and vendors, and contributors could develop a portable skillset and toolkits. The enabling technology – retargetable C compilers – made them not sharecroppers but nomads, able to evade vendor control by leaving for platforms that were less locked down and taking their tools with them.

I understand that it’s sentimentally appealing to retrospectively sweep all the early sharecropper communities into “open source”. But I think it’s a mistake, because it blurs the importance of retargetability, the ability to resist or evade vendor lock-in, and portable tools that you can take away with you.

Without those things you cannot have anything like the individual mental habits or collective scale of contributions that I think is required before saying “an open-source culture” is really meaningful.

February 7, 2014

Hackers, “technologists”, … and girls

Filed under: Technology — Nicholas Russon @ 13:35

An interesting post by Susan Sons illustrating some of the reasons women do not become hackers in the same proportion that men do:

Looking around at the hackers I know, the great ones started before puberty. Even if they lacked computers, they were taking apart alarm clocks, repairing pencil sharpeners or tinkering with ham radios. Some of them built pumpkin launchers or LEGO trains. I started coding when I was six years old, sitting in my father’s basement office, on the machine he used to track inventory for his repair service. After a summer of determined trial and error, I’d managed to make some gorillas throw things other than exploding bananas. It felt like victory!

[...]

Twelve-year-old girls today don’t generally get to have the experiences that I did. Parents are warned to keep kids off the computer lest they get lured away by child molesters or worse — become fat! That goes doubly for girls, who then grow up to be liberal arts majors. Then, in their late teens or early twenties, someone who feels the gender skew in technology communities is a problem drags them to a LUG meeting or an IRC channel. Shockingly, this doesn’t turn the young women into hackers.

Why does anyone, anywhere, think this will work? Start with a young woman who’s already formed her identity. Dump her in a situation that operates on different social scripts than she’s accustomed to, full of people talking about a subject she doesn’t yet understand. Then tell her the community is hostile toward women and therefore doesn’t have enough of them, all while showing her off like a prize poodle so you can feel good about recruiting a female. This is a recipe for failure.

[...]

I’ve never had a problem with old-school hackers. These guys treat me like one of them, rather than “the woman in the group”, and many are old enough to remember when they worked on teams that were about one third women, and no one thought that strange. Of course, the key word here is “old” (sorry guys). Most of the programmers I like are closer to my father’s age than mine.

The new breed of open-source programmer isn’t like the old. They’ve changed the rules in ways that have put a spotlight on my sex for the first time in my 18 years in this community.

When we call a man a “technologist”, we mean he’s a programmer, system administrator, electrical engineer or something like that. The same used to be true when we called a woman a “technologist”. However, according to the new breed, a female technologist might also be a graphic designer or someone who tweets for a living. Now, I’m glad that there are social media people out there — it means I can ignore that end of things — but putting them next to programmers makes being a “woman in tech” feel a lot like the Programmer Special Olympics.

January 31, 2014

Security theatre special edition – destroying hard drives that held Snowden’s documents

Filed under: Britain, Government — Nicholas Russon @ 09:30

It may have been pointless — and it was! — but the British government not only felt it had to do something, but that it had to be seen to be doing something:

New video footage has been released for the first time of the moment Guardian editors destroyed computers used to store top-secret documents leaked by the NSA whistleblower Edward Snowden.

Under the watchful gaze of two technicians from the British government spy agency GCHQ, the journalists took angle-grinders and drills to the internal components, rendering them useless and the information on them obliterated.

The bizarre episode in the basement of the Guardian’s London HQ was the climax of Downing Street’s fraught interactions with the Guardian in the wake of Snowden’s leak — the biggest in the history of western intelligence. The details are revealed in a new book — The Snowden Files: The Inside Story of the World’s Most Wanted Man — by the Guardian correspondent Luke Harding. The book, published next week, describes how the Guardian took the decision to destroy its own Macbooks after the government explicitly threatened the paper with an injunction.

In two tense meetings last June and July the cabinet secretary, Jeremy Heywood, explicitly warned the Guardian’s editor, Alan Rusbridger, to return the Snowden documents.

Heywood, sent personally by David Cameron, told the editor to stop publishing articles based on leaked material from America’s National Security Agency and GCHQ. At one point Heywood said: “We can do this nicely or we can go to law”. He added: “A lot of people in government think you should be closed down.”

January 20, 2014

XKCD on the problem with attempting to automate tasks

Filed under: Humour, Technology — Nicholas Russon @ 09:47

[xkcd comic: “Automation”]

I’m not a programmer, although I’ve spent much of my working life around programmers, which is why I recognize the pattern so well: I’ve seen it in action so often.

The few times I’ve needed to create a program to do something (usually a text transformation of one sort or another), this has been exactly the way the “labour-saving” automation has gone. My personal version of the chart would have an additional phase at the beginning: I have to begin by learning or re-learning the tool I need to use. I learn just enough of how to use a given tool to do the task at hand, then the knowledge atrophies from lack of use and the next time I need to do something similar, the first priority is figuring out the right tool and then learning the same basic tasks all over again.

I started out with REXX when I was a co-op student at IBM. Several years later, I needed to convert a large set of documents from one markup language to another on a Unix system and that meant learning (just enough) shell scripting, sed and awk. A few years after that the right tool seemed to be Perl. In every case, the knowledge doesn’t stick with me because I don’t need to do anything with the language after I’ve finished the immediate task. I remember being able to do it but I don’t recall exactly how to do it.
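For what it’s worth, the shape of the task never changes even as the tool does: a pile of pattern substitutions. A toy sketch in Python of that sort of one-off markup conversion (the GML-ish input format and tag names here are invented for illustration, not the actual documents I worked on):

```python
import re

# Toy one-off markup conversion: GML-style tags to HTML.
# The input format and tag names are invented for illustration.

def gml_to_html(text: str) -> str:
    # :h1.Heading text  ->  <h1>Heading text</h1>
    text = re.sub(r":h1\.(.*)", r"<h1>\1</h1>", text)
    # :p.  ->  <p>
    text = re.sub(r":p\.", "<p>", text)
    return text

src = ":h1.Release Notes\n:p.First paragraph."
out = gml_to_html(src)
```

Each time, the hard part isn’t the substitutions themselves but relearning enough of the tool’s pattern syntax to write them.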

December 17, 2013

Legal precedents and technological change

Filed under: Law, Liberty, Technology, USA — Nicholas Russon @ 10:04

At Ace of Spades HQ, Ace explains why a court decision from the 1970s set a very bad precedent for today’s legal and technological world:

Fifty years ago the police had a very limited ability to utilize your fingerprints record to harm you. If you became a suspect in a case — and only in that case — they could painstakingly compare your fingerprints to those found at a crime scene using slow, precious human labor resources.

There were serious practical limits on what could be done with citizen data held in government files. Yes, the government could use that data to put people in jail, but analysis and comparison was a labor intensive process that at least served as a naturally-existing limiting principle on government intrusion: Sure, the government could search your personally-identifying data to connect you with a crime, but, as a practical matter, it was so time-consuming to do so that they generally would not do so, not unless they had a strong suspicion you were actually a culprit.

They wouldn’t just compare every fingerprint on file with every fingerprint found at unsolved crime scenes, after all.

Well, today, they can — and do — actually do that. So there is no longer any practical limitation on the government’s ability to use your DNA to connect you with unknown DNA found at a crime. They can run everyone’s DNA through the database with virtually no effort.

I exaggerate; there is some lab work needed to process the DNA and reduce it to a 13 allele “genetic fingerprint.” Nevertheless, this can all be done fairly inexpensively, and running it through the database once reduced to a short code is very nearly cost-free.

But within the next ten years all of this will become entirely cost-free.

This is why I disagreed with the Supreme Court’s reliance on an old precedent in claiming that the police can take a DNA sample from every single person arrested. Merely arrested, not convicted. They relied on a precedent established at the dawn of investigatory police science, that every arrestee’s fingerprints may be collected and catalogued.

But way ‘back then, there were natural limitations on the State’s power to make use of such data which simply no longer exist. What would have been considered a silly hypothetical sci-fi objection back then — “But what stops the state from merely searching these fingerprints against every fingerprint ever lifted at a crime scene?” — is actual reality now.

The same arguments apply to all police/FBI/NSA mass data collection: cell-phone usage, internet activity, license plate scanning, facial recognition software, and so on. It resets the baseline assumptions of civil society, where the authorities only look for suspects in actual criminal cases, rather than tracking everyone all the time and deducing “criminal” actions without needing to detect the crime. If your first reaction is to think “if you’ve done nothing wrong, you’ve got nothing to fear”, remember that you cannot possibly know all the laws of your country and that statistically speaking, you probably violate one or more laws every day without realizing it (one author suggests it’s actually three felonies per day).
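The change in scale is worth making concrete. Once a sample is reduced to a short numeric profile (the FBI’s CODIS system keyed on 13 core STR loci at the time), checking one profile against everyone on file is a single hash lookup rather than weeks of human labour. A sketch in Python, with entirely invented profiles:

```python
import random

# Invented stand-in for a 13-locus STR profile: 13 pairs of allele
# repeat counts. Not real CODIS data or the real allele ranges.

random.seed(0)

def random_profile():
    return tuple(
        tuple(sorted((random.randint(5, 20), random.randint(5, 20))))
        for _ in range(13)
    )

# 100,000 people "on file"; membership testing stays O(1) regardless.
database = {random_profile() for _ in range(100_000)}

suspect = random_profile()
database.add(suspect)          # suspect happens to be on file

match = suspect in database    # one hash lookup, effectively free
```

The point is the cost curve: that membership test stays effectively free no matter how many millions of profiles are on file, which is exactly the “natural limitation” that no longer exists.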

Update: Ayn Rand explained this phenomenon fictionally in Atlas Shrugged.

“Did you really think that we want those laws to be observed?” said Dr. Ferris. “We want them broken. You’d better get it straight that it’s not a bunch of boy scouts you’re up against — then you’ll know that this is not the age of beautiful gestures. We’re after power and we mean it. You fellows were pikers, but we know the real trick, and you’d better get wise to it. There’s no way to rule innocent men. The only power any government has is the power to crack down on criminals. Well, when there aren’t enough criminals, one ‘makes’ them. One declares so many things to be a crime that it becomes impossible for men to live without breaking laws. Who wants a nation of law-abiding citizens? What’s there in that for anyone? But just pass the kind of laws that can neither be observed nor enforced nor objectively interpreted — and you create a nation of law-breakers and then you cash in on the guilt. Now that’s the system, Mr. Rearden, that’s the game, and once you understand it, you’ll be much easier to deal with.”

December 12, 2013

Paranoid? You’re probably not paranoid enough

Filed under: Technology — Nicholas Russon @ 09:26

Charles Stross has a few adrenaline shots for your paranoia gland this morning:

The internet of things may be coming to us all faster and harder than we’d like.

Reports coming out of Russia suggest that some Chinese domestic appliances, notably kettles, come kitted out with malware — in the shape of small embedded computers that leech off the mains power to the device. The covert computational passenger hunts for unsecured wifi networks, connects to them, and joins a spam and malware pushing botnet. The theory is that a home computer user might eventually twig if their PC is a zombie, but who looks inside the base of their electric kettle, or the casing of their toaster? We tend to forget that the Raspberry Pi is as powerful as an early 90s UNIX server or a late 90s desktop; it costs £25, is the size of a credit card, and runs off a 5 watt USB power source. And there are cheaper, less competent small computers out there. Building them into kettles is a stroke of genius for a budding crime lord looking to build a covert botnet.

But that’s not what I’m here to talk about.

[...]

I’m dozy and slow on the uptake: I should have been all over this years ago.

And it’s not just keyboards. It’s ebook readers. Flashlights. Not your smartphone, but the removable battery in your smartphone. (Have you noticed it running down just a little bit faster?) Your toaster and your kettle are just the start. Could your electric blanket be spying on you? Koomey’s law is going to keep pushing the power consumption of our devices down even after Moore’s law grinds to a halt: and once Moore’s law ends, the only way forward is to commoditize the product of those ultimate fab lines, and churn out chips for pennies. In another decade, we’ll have embedded computers running some flavour of Linux where today we have smart inventory control tags — any item in a shop that costs more than about £50, basically. Some of those inventory control tags will be watching and listening to us; and some of their siblings will, repurposed, be piggy-backing a ride home and casing the joint.

The possibilities are endless: it’s the dark side of the internet of things. If you’ll excuse me now, I’ve got to go wallpaper my apartment in tinfoil …

December 9, 2013

Admiral Grace Hopper on Letterman

Filed under: History, Military, Technology — Nicholas Russon @ 17:23

November 26, 2013

The illusion of omnicompetence

Filed under: Business, Humour, Technology — Nicholas Russon @ 08:38

I’ve expressed this as variations on “the deeper the specialization, the more those specialists feel they’re experts on much wider subjects”. Megan McArdle’s formulation is rather neater than that:

Amid the chaos, I got a call from the secretary of a very senior executive at the firm. His new voice-recognition software wasn’t working, and he needed me to come up right away.

I had servers that weren’t working right and a bunch of workstations that couldn’t access the network. “He should call the help desk,” I told her.

Her tone was arctic.

“He doesn’t deal with help desk personnel,” she said. “Please come up here right away.”

So I went to the office of Mr. Senior Executive. He was not at his desk. I played with his new software, which seemed to be working fine — a bit slow, but in 1998, voice-recognition software took a while to become acclimated to your voice. I told the secretary it seemed to be working, and I left my pager number. It went off as I got to the elevator bank. I trekked wearily back to the office, where Mr. Senior Executive gestured at his computer. “It still doesn’t work right,” he said, and started to leave the office again.

“Hold on, please,” I said. “Can you show me exactly what’s not working?”

“It’s not doing what I want,” he said.

“What do you want?” I asked.

“I want it to be,” he replied, “like the computer on Star Trek: The Next Generation.”

“Sir, that’s an actor,” I replied evenly, despite being on the sleepless verge of hysteria. With even more heroic self-restraint, I did not add “We can get you an actor to sit under your desk. But we’d have to pay SAG rates.”

Now, when I used to tell this story to tech people, the moral was that executives are idiots. No, make that “users are idiots.” Tech people tend to regard their end-users as a sort of intermediate form of life between chimps and information-technology staffers: They’ve stopped throwing around their feces, but they can’t really be said to know how to use tools.

And, of course, users can do some idiotic things. But this particular executive was not an idiot. He was, in fact, a very smart man who had led financial institutions on two continents. None of the IT staffers laughing at his elementary mistake would have lasted for a week in his job.

Call it “the illusion of omnicompetence.” When you know a lot about one thing, you spend a lot of time watching the less knowledgeable make elementary errors. You can easily infer from this that you are very smart, and they are very stupid. Presumably, our bank executive knew that the phasers and replicators on Star Trek are fake; why did he think that the talking computer would be any more real?

