Not all women can code … but neither can all men. Pretending that because not all women can code, no women can code is an exercise for idiots, one easily dismissed by the very existence of Admiral Grace Hopper, USN:
February 9, 2015
January 21, 2015
James Lileks takes a nap. It therefore (of course) provides the basis of a “Bleat” posting:
Another item of no surprise to any readers of this site is my enjoyment of, and insistence upon, and devotion to, difficult sentence structures. Also naps. I love naps. Didn’t use to; then we had a child. At first I napped on the floor, thinking it Spartan and manly, but eventually I saw the case for sleeping on a surface that did not leave flat indentations on my skin if I slept for more than 20 minutes. I don’t believe in napping on the sofa, Dagwood style; I don’t believe in napping while reclining in a chair. There’s a reason we sleep in beds. No one ever says “I don’t know how much sleep I’ll get tonight, so maybe I’d better sit in a chair and see how it works.” Bed. The humidifier for white noise. Phone on Airplane Mode. Set the alarm, and see you later.
It’s never occurred to me to study my naps, or chart them, or pick them apart for quality. There are good naps and bad ones. There are short naps that leave you refreshed, and short ones that leave you groggy. Long ones that seem to add a year to your life, and long ones that make you feel as though you emerged from a bog of tar. To be fair, long naps never leave me logy. Short naps can make me feel angry, because they weren’t longer naps.
But. I read a review for an app called Power Nap HQ, and it seemed interesting: it collected nap data based on your movements. You entered how much time you wanted to sleep, set a backup alarm, chose a sequence of sounds, and laid it next to you. It would report back on your movements, indicating the depth of the nap, and it would also record any abrupt sounds you made. Nicely designed, too. A buck. Bought it.
Calibrated the device, set all the options, and pressed the button to start the nap. Laid it next to me.
Got itchy. Dry skin. Scratched a little, and wondered if this would register on the device. This was the signal for my upper lip to report in as “slightly chapped,” requiring more minor motion, and I thought I might be confusing the app, which thinks this is light sleep. Or perhaps it doesn’t take any motion seriously until I’m inert for a long period of time. So I lay still.
Then I thought: now it’s going to think I’m asleep.
This nap wasn’t working out very well. You start to think about napping, napping doesn’t happen. You start to wait for the between-two-worlds moment when you’re aware that you’re having a dream, or are thinking of something you certainly did start thinking about but that grew out of something you’d already forgotten, and then the moment never comes. But the next thing I knew I was awake.
Sort of. Half awake. The alarm had not gone off, so I had not reached the desired quantity of sleep. I was up because my body was done with the noon ration of Diet Lime Coke, and wished to offload it. This I did, wondering how the app would read my absence. It would detect the motion, then the absence of motion, then motion, then – providing I got back to sleep – the absence of motion. I did what a man’s gotta do, then returned to bed to complete the nap. Fell back asleep. No dreams.
Woke, and thought: damn, I beat the alarm. Must be close. If I have one superpower, it is the ability to gauge the passage of time; if I knew what time it was 35 minutes ago, I can tell you what time it is now within a minute or so. This extends to naps: if I wake before the alarm, I usually know what the time will be. I lay there, waking, considering how the rest of the day would play out, then realized that the app would interpret my motionlessness as sleep. THE DATA WOULD BE IMPRECISE.
So I picked up the phone to see how long I’d actually slept.
I had overslept by 40 minutes.
The alarm had not gone off. The backup alarm had not gone off. It had not collected data. Other than that, best dollar I ever spent. Now I can remove it from my phone and sleep without worries.
January 14, 2015
Cory Doctorow explains why David Cameron’s proposals are not just dumb, but doubleplus-dumb:
What David Cameron thinks he’s saying is, “We will command all the software creators we can reach to introduce back-doors into their tools for us.” There are enormous problems with this: there’s no back door that only lets good guys go through it. If your WhatsApp or Google Hangouts has a deliberately introduced flaw in it, then foreign spies, criminals, and crooked police (like those who fed sensitive information to the tabloids implicated in the hacking scandal — and like the high-level police who secretly worked for organised crime for years) will eventually discover this vulnerability. They — and not just the security services — will be able to use it to intercept all of our communications. That includes everything from the pictures of your kids in the bath that you send to your parents to the trade secrets you send to your co-workers.
But this is just for starters. David Cameron doesn’t understand technology very well, so he doesn’t actually know what he’s asking for.
For David Cameron’s proposal to work, he will need to stop Britons from installing software that comes from software creators who are out of his jurisdiction. The very best in secure communications are already free/open source projects, maintained by thousands of independent programmers around the world. They are widely available, and thanks to things like cryptographic signing, it is possible to download these packages from any server in the world (not just big ones like GitHub) and verify, with a very high degree of confidence, that the software you’ve downloaded hasn’t been tampered with.
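(An aside: the verification step Doctorow mentions is mechanically simple, which is rather his point. Here’s a minimal sketch using Python’s cryptography package and a hypothetical Ed25519-signed release; real projects layer key distribution and trust decisions on top of this.)

```python
# Verify that a downloaded package matches the publisher's signature.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def verify_release(package: bytes, signature: bytes,
                   publisher_key: ed25519.Ed25519PublicKey) -> bool:
    try:
        publisher_key.verify(signature, package)
        return True          # only the publisher's private key could have signed this
    except InvalidSignature:
        return False         # tampered bytes, or a dishonest mirror

# Demo with a throwaway key pair (in practice the public key ships out-of-band).
key = ed25519.Ed25519PrivateKey.generate()
pkg = b"secure-messenger-1.0.tar.gz contents"
sig = key.sign(pkg)
print(verify_release(pkg, sig, key.public_key()))         # True
print(verify_release(pkg + b"!", sig, key.public_key()))  # False
```

This is why the server you fetch from doesn’t matter: any mirror can host the bytes, but only the publisher can produce a signature the check accepts.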
This, then, is what David Cameron is proposing:
* All Britons’ communications must be easy for criminals, voyeurs and foreign spies to intercept
* Any firms within reach of the UK government must be banned from producing secure software
* All major code repositories, such as GitHub and SourceForge, must be blocked
* Search engines must not answer queries about web-pages that carry secure software
* Virtually all academic security work in the UK must cease — security research must only take place in proprietary research environments where there is no onus to publish one’s findings, such as industry R&D and the security services
* All packets in and out of the country, and within the country, must be subject to Chinese-style deep-packet inspection and any packets that appear to originate from secure software must be dropped
* Existing walled gardens (like iOS and games consoles) must be ordered to ban their users from installing secure software
* Anyone visiting the country from abroad must have their smartphones held at the border until they leave
* Proprietary operating system vendors (Microsoft and Apple) must be ordered to redesign their operating systems as walled gardens that only allow users to run software from an app store, which will not sell or give secure software to Britons
* Free/open source operating systems — that power the energy, banking, ecommerce, and infrastructure sectors — must be banned outright
David Cameron will say that he doesn’t want to do any of this. He’ll say that he can implement weaker versions of it — say, only blocking some “notorious” sites that carry secure software. But anything less than the programme above will have no material effect on the ability of criminals to carry on perfectly secret conversations that “we cannot read”. If any commodity PC or jailbroken phone can run any of the world’s most popular communications applications, then “bad guys” will just use them. Jailbreaking an OS isn’t hard. Downloading an app isn’t hard. Stopping people from running code they want to run is — and what’s more, it puts the whole nation — individuals and industry — in terrible jeopardy.
December 27, 2014
Eric S. Raymond acknowledges the strong influence of evolutionary psychology on the development of open source theory:
Yesterday I realized, quite a few years after I should have, that I have never identified in public where I got the seed of the idea that I developed into the modern economic theory of open-source software – that is, how open-source “altruism” could be explained as an emergent result of selfish incentives felt by individuals. So here is some credit where credit is due.
Now, in general it should be obvious that I owed a huge debt to thinkers in the classical-liberal tradition, from Adam Smith down to F. A. Hayek and Ayn Rand. The really clueful might also notice some connection to Robert Trivers’s theory of reciprocal altruism under natural selection and Robert Axelrod’s work on tit-for-tat interactions and the evolution of cooperation.
These were all significant; they gave me the conceptual toolkit I could apply successfully once I’d had my initial insight. But there’s a missing piece – where my initial breakthrough insight came from, the moment when I realized I could apply all those tools.
The seed was in the seminal 1992 anthology The Adapted Mind: Evolutionary Psychology and the Generation of Culture. That was full of brilliant work; it laid the foundations of evolutionary psychology and is still worth a read.
(I note as an interesting aside that reading science fiction did an excellent job of preparing me for the central ideas of evolutionary psychology. What we might call “hard adaptationism” – the search for explanations of social behavior in evolution under selection – has been a central theme in SF since the 1940s, well before even the first wave of academic sociobiological thinking in the early 1970s and, I suspect, strongly influencing that wave. It is implicit in, as a leading example, much of Robert Heinlein’s work.)
The specific paper that changed my life was this one: Two Nonhuman Primate Models for the Evolution of Human Food Sharing: Chimpanzees and Callitrichids by W.C. McGrew and Anna T.C. Feistner.
In it, the authors explained food sharing as a hedge against variance. Basically, foods that can be gathered reliably were not shared; high-value food that could only be obtained unreliably was shared.
The authors went on to observe that in human hunter-gatherer cultures a similar pattern obtains: gathered foods (for which the calorie/nutrient value is a smooth function of effort invested) are not typically shared, whereas hunted foods (high variance of outcome in relation to effort) are shared. Reciprocal altruism is a hedge against uncertainty of outcomes.
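The hedging logic is easy to see in a toy simulation (my illustration, not the paper’s data, and all numbers hypothetical): two hunters with the same expected haul go hungry half as often if they pool their high-variance catches.

```python
# Reciprocal sharing as a hedge against variance: pooling two independent,
# feast-or-famine hunts leaves the expected haul unchanged but cuts the
# frequency of zero-food days in half.
import random

def hunt() -> float:
    return random.choice([0.0, 10.0])  # feast or famine, mean of 5

alone  = [hunt() for _ in range(10_000)]
shared = [(hunt() + hunt()) / 2 for _ in range(10_000)]

def hungry_rate(days: list[float]) -> float:
    return sum(1 for d in days if d == 0.0) / len(days)

print(hungry_rate(alone))   # ~0.50 of days with nothing to eat
print(hungry_rate(shared))  # ~0.25 -- sharing halves the famine risk
```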
November 30, 2014
I managed to miss the initial controversy about a typographical hoax that might not have been so hoax-y:
According to the website of the Independent newspaper, LEGO UK has verified the 1970s ‘letter to parents’ that was widely tweeted last weekend and almost as widely dismissed as fake. Business as usual in the Twittersphere — but there are some lessons here about dating type.
‘The urge to create is equally strong in all children. Boys and girls.’ It’s a sentiment from the 1970s that’s never been more relevant. Or was it?
Those of us who produce or handle documents for a living will often glance at an example and have an immediate opinion on whether it’s real or fake. That first instinct is worth holding on to, because it comes from the brain’s evolved ability to reach a quick conclusion from a whole bunch of subtle clues before your conscious awareness catches up. It’s OK to be inside the nearest cave getting your breath back when you start asking yourself what kind of snake.
But sometimes you will flinch at shadows. Why did this document strike us as wrong when it wasn’t?
First, because the type is badly set in exactly the way early consumer DTP apps, and word processor apps to this day (notably Microsoft Word), set type badly — at least without the intervention of skilled users. I started typesetting on an Atari ST, the poor man’s Mac, in 1987. The first desktop publishing program for that platform was newly released, running under Digital Research’s GEM operating system. It came with a version of Times New Roman, and almost nothing else. Me and badly set Times have history.
In the LEGO document, the kerning of the headline is lumpy and the word spacing excessive. The ‘T’ seems out of alignment with the left margin, even after allowing for a lack of optical adjustment. The paragraph indent on the body text has been applied from the start, contrary to modern British typesetting practice; the first line should be full-out. The leading (vertical space between lines of text) is not quite enough for comfort, more appropriate to a dense newspaper column than this short blurb.
There’s also an error in the copy: ‘dolls houses’ needs an apostrophe. Either before or after the last letter of ‘dolls’ would be fine, depending on whether you think you mean a house for a doll or a house for dolls. But it definitely needs to be possessive.
It wasn’t just that the type looked careless. It was that it stank of the careless use of tools that shouldn’t have been available to its creators.
November 23, 2014
Eric S. Raymond has been asked to write this document for years, and he’s finally given in to the demand:
What Is Hacking?
The “hacking” we’ll be talking about in this document is exploratory programming in an open-source environment. If you think “hacking” has anything to do with computer crime or security breaking and came here to learn that, you can go away now. There’s nothing for you here.
Hacking is a style of programming, and following the recommendations in this document can be an effective way to acquire general-purpose programming skills. This path is not guaranteed to work for everybody; it appears to work best for those who start with an above-average talent for programming and a fair degree of mental flexibility. People who successfully learn this style tend to become generalists with skills that are not strongly tied to a particular application domain or language.
Note that one can be doing hacking without being a hacker. “Hacking”, broadly speaking, is a description of a method and style; “hacker” implies that you hack, and are also attached to a particular culture or historical tradition that uses this method. Properly, “hacker” is an honorific bestowed by other hackers.
Hacking doesn’t have enough formal apparatus to be a full-fledged methodology in the way the term is used in software engineering, but it does have some characteristics that tend to set it apart from other styles of programming.
- Hacking is done on open source. Today, hacking skills are the individual micro-level of what is called “open source development” at the social macro-level. A programmer working in the hacking style expects and readily uses peer review of source code by others to supplement and amplify his or her individual ability.
- Hacking is lightweight and exploratory. Rigid procedures and elaborate a-priori specifications have no place in hacking; instead, the tendency is try-it-and-find-out with a rapid release tempo.
- Hacking places a high value on modularity and reuse. In the hacking style, you try hard never to write a piece of code that can only be used once. You bias towards making general tools or libraries that can be specialized into what you want by freezing some arguments/variables or supplying a context.
- Hacking favors scrap-and-rebuild over patch-and-extend. An essential part of hacking is ruthlessly throwing away code that has become overcomplicated or crufty, no matter how much time you have invested in it.
The hacking style has been closely associated with the technical tradition of the Unix operating system.
Recently it has become evident that hacking blends well with the “agile programming” style. Agile techniques such as pair programming and feature stories adapt readily to hacking and vice-versa. In part this is because the early thought leaders of agile were influenced by the open source community. But there has since been traffic in the other direction as well, with open-source projects increasingly adopting techniques such as test-driven development.
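Raymond’s point about “freezing some arguments/variables” to specialize a general tool is, in most languages, just partial application. A minimal sketch of the idea (my example, not from his document):

```python
# Specializing a deliberately general tool by freezing arguments,
# rather than writing a one-off variant of it.
from functools import partial

def fetch(url: str, timeout: float, retries: int) -> str:
    """Stand-in for any reusable, general-purpose helper."""
    return f"GET {url} (timeout={timeout}s, retries={retries})"

# The specialized tool is the general one with two arguments frozen.
fetch_patiently = partial(fetch, timeout=30.0, retries=5)
print(fetch_patiently("https://example.org"))
```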
November 16, 2014
In Kotaku, Luke Plunkett explains why of all the AI leaders in the game, none are more likely to espouse the philosophy “nuke ‘em ’till they glow, then shoot ‘em in the dark” than India’s Gandhi:
In the original Civilization, it was because of a bug. Each leader in the game had an “aggression” rating, and Gandhi – to best reflect his real-world persona – was given the lowest score possible, a 1, so low that he’d rarely if ever go out of his way to declare war on someone.
Only, there was a problem. When a player adopted democracy in Civilization, their aggression would be automatically reduced by 2. Code being code, if Gandhi went democratic his aggression didn’t go to -1; it looped back around to the ludicrously high figure of 255, making him as aggressive as a civilization could possibly be.
In later games this bug was obviously not an issue, but as a tribute/easter egg of sorts, parts of his white-hot rage have been kept around. In Civilization V, for example, while Gandhi’s regular diplomatic approach is more peaceful than other leaders, he’s also the most likely to go dropping a-bombs when pushed, with a nuke “rating” of 12 putting him well ahead of the competition (the next three most likely to go nuclear have a rating of 8, with most leaders around the 4-6 region).
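The arithmetic Plunkett describes is garden-variety unsigned-integer underflow. A two-line sketch, assuming (as the story does) an unsigned 8-bit aggression field:

```python
# Python ints don't wrap, so emulate how an unsigned byte behaves in storage.
aggression = 1                       # Gandhi's famously low rating
aggression = (aggression - 2) % 256  # democracy's -2 modifier, wrapped into 0..255
print(aggression)                    # 255 -- as aggressive as the field allows
```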
Update, 16 November: Fixed the broken link.
October 11, 2014
We are the heirs of the Industrial Revolution, and, of course, the Industrial Revolution was all about economies of scale. Its efficiencies and advances were made possible by banding people together in larger and larger amalgamations, and we invented all sorts of institutions — from corporations to municipal governments — to do just that.
This process continues to this day. In its heyday, General Motors employed about 500,000 people; Wal-Mart employs more than twice that now. We continue to urbanize, depopulating the Great Plains and repopulating downtowns. Our most successful industry — technology — is driven by unprecedented economies of scale that allow a handful of programmers to make squintillions selling software applications to half the world’s population.
This has left us, I think, with a cultural tendency to assume that everything is subject to economies of scale. You find this as much on the left as the right, about everything from government programs to corporations. People just take it as naturally given that making a company or an institution or a program bigger will drive cost efficiencies that allow them to get bigger still.
Of course, this often is the case. Facebook is better off with 2 billion customers than 1 billion, and a program that provides health insurance to everyone over the age of 65 has lower per-user overhead than a program that provides health insurance to 200 homeless drug users in Atlanta. I’m not trying to suggest that economies of scale don’t exist, only that not every successful model enjoys them. In fact, many successful models enjoy diseconomies of scale: After a certain point, the bigger you get, the worse you do.
Megan McArdle, “In-N-Out Doesn’t Want to Be McDonald’s”, Bloomberg View, 2014-10-02.
September 30, 2014
ESR talks about the visibility problem in software bugs:
The first thing to notice here is that these bugs were found – and were findable – because of open-source scrutiny.
There’s a “things seen versus things unseen” fallacy here that gives bugs like Heartbleed and Shellshock false prominence. We don’t know – and can’t know – how many far worse exploits lurk in proprietary code known only to crackers or the NSA.
What we can project based on other measures of differential defect rates suggests that, however imperfect “many eyeballs” scrutiny is, “few eyeballs” or “no eyeballs” is far worse.
September 20, 2014
David Akin posted a list of questions posed by John Gilmore, challenging the Apple iOS8 cryptography promises:
Gilmore considered what Apple said and considered how Apple creates its software — a closed, secret, proprietary method — and what coders like him know about the code that Apple says protects our privacy — pretty much nothing — and then wrote the following for distribution on Dave Farber’s Interesting People listserv. I’m pretty sure neither Farber nor Gilmore will begrudge me reproducing it.
And why do we believe [Apple]?
- Because we can read the source code and the protocol descriptions ourselves, and determine just how secure they are?
- Because they’re a big company and big companies never lie?
- Because they’ve implemented it in proprietary binary software, and proprietary crypto is always stronger than the company claims it to be?
- Because they can’t covertly send your device updated software that would change all these promises, for a targeted individual, or on a mass basis?
- Because you will never agree to upgrade the software on your device, ever, no matter how often they send you updates?
- Because this first release of their encryption software has no security bugs, so you will never need to upgrade it to retain your privacy?
- Because if a future update INSERTS privacy or security bugs, we will surely be able to distinguish these updates from future updates that FIX privacy or security bugs?
- Because if they change their mind and decide to lessen our privacy for their convenience, or by secret government edict, they will be sure to let us know?
- Because they have worked hard for years to prevent you from upgrading the software that runs on their devices so that YOU can choose it and control it instead of them?
- Because the US export control bureaucracy would never try to stop Apple from selling secure mass market proprietary encryption products across the border?
- Because the countries that wouldn’t let Blackberry sell phones that communicate securely with your own corporate servers, will of course let Apple sell whatever high security non-tappable devices it wants to?
- Because we’re apple fanboys and the company can do no wrong?
- Because they want to help the terrorists win?
- Because NSA made them mad once, therefore they are on the side of the public against NSA?
- Because it’s always better to wiretap people after you convince them that they are perfectly secure, so they’ll spill all their best secrets?
There must be some other reason, I’m just having trouble thinking of it.
July 11, 2014
This is a story that rightfully should have been published at the beginning of April (except it actually happened):
A year 2000-related bug has caused the US military to send more than 14,000 letters of conscription to men who were all born in the 1800s and died decades ago.
Shocked residents of Pennsylvania began receiving letters ordering their great-grandparents to register for the US military draft on pain of “fine and imprisonment.”
“I said, ‘Geez, what the hell is this about?’” Chuck Huey, 73, of Kingston, Pennsylvania told the Associated Press when he received a letter for his late grandfather Bert Huey, born in 1894 and a first world war veteran who died at the age of 100 in 1995.
“It said he was subject to heavy fines and imprisonment if he didn’t sign up for the draft board,” exclaimed Huey. “We were just totally dumbfounded.”
The US Selective Service System, which sent the letters in error, automatically handles the drafting of US citizens and other US residents who are eligible for conscription. The cause of the error was narrowed down to a Y2K-like bug in the Pennsylvania Department of Transportation (PDT).
A clerk at the PDT failed to select a century during the transfer of 400,000 records to the Selective Service, producing 1990s records for men born a century earlier.
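The failure is easy to reproduce in miniature. A toy sketch (the field layout is hypothetical; the real record format isn’t public):

```python
# Two-digit birth years are ambiguous; if no century is selected,
# a default silently turns 1894 into 1994.
def expand_year(two_digit_year: int, century: int = 1900) -> int:
    return century + two_digit_year

print(expand_year(94))        # 1994 -- draft age in 2014, so letters go out
print(expand_year(94, 1800))  # 1894 -- Bert Huey, WWI veteran, died 1995
```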
June 18, 2014
Tim Worstall asks when it would be appropriate for your driverless car to kill you:
Owen Barder points out a quite delightful problem that we’re all going to have to come up with some collective answer to over the driverless cars coming from Google and others. Just when is it going to be acceptable that the car kills you, the driver, or someone else? This is a difficult public policy question and I’m really not sure who the right people to be trying to solve it are. We could, I guess, given that it is a public policy question, turn it over to the political process. It is, after all, there to decide on such questions for us. But given the power of the tort bar over that process I’m not sure that we’d actually like the answer we got. For it would most likely mean that we never do get driverless cars, at least not in the US.
The basic background here is that driverless cars are likely to be hugely safer than the current human directed versions. For most accidents come about as a result of driver error. So, we expect the number of accidents to fall considerably as the technology rolls out. This is great, we want this to happen. However, we’re not going to end up with a world of no car accidents. Which leaves us with the problem of how do we program the cars to work when there is unavoidably going to be an accident?
So we actually end up with two problems here. The first being the one that Barder has outlined, which is that there’s an ethical question to be answered over how the programming decisions are made. Seriously, under what circumstances should a driverless car, made by Google or anyone else, be allowed to kill you or anyone else? The basic Trolley Problem is easy enough: kill fewer people by preference. But when one is necessary, which one? And then a second problem, which is that the people who have done the coding are going to have to take legal liability for that decision they’ve made. And given the ferocity of the plaintiff’s bar at times I’m not sure that anyone will really be willing to make that decision and thus adopt that potential liability.
Clearly, this needs to be sorted out at the political level. Laws need to be made clarifying the situation. And hands up everyone who thinks that the current political gridlock is going to manage that in a timely manner?
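The “easy” half of that programming decision really is a one-liner; a sketch (all casualty figures hypothetical) shows exactly where the hard half hides:

```python
# "Kill fewer people by preference" is trivial to encode. The tie-break is
# not: min() just returns the first minimum it encounters, and that
# arbitrariness is precisely the ethical and legal gap Worstall describes.
def choose_maneuver(expected_casualties: dict[str, int]) -> str:
    return min(expected_casualties, key=expected_casualties.get)

print(choose_maneuver({"brake straight": 2, "swerve left": 1, "swerve right": 1}))
# -> 'swerve left', only because it happened to be listed first
```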
June 9, 2014
If we think of ourselves as empiricists who judge the value of the theory on the basis of how well it predicts, then we should have ditched economic models years ago. Never have our models, with all their data, managed to predict the major turning points, ever, in the history of capitalism. So if we were honest, we should simply accept that and rethink our approach.
But actually, I think they’re even worse. We can’t even predict the past very well using our models. Economic models are failing to model the past in a way that can explain the past. So what we end up doing with our economic models is retrofitting the data and our own prejudices about how the economy works.
This is why I’m saying that this profession of mine is not really anywhere near astronomy. It’s much closer to mathematized superstition, organized superstition, which has a priesthood that replicates itself on the basis of how well we learn the rituals.
Yanis Varoufakis, talking to Peter Suderman, “A Multiplayer Game Environment Is Actually a Dream Come True for an Economist”, Reason, 2014-05-30.
June 4, 2014
Charles Stross discusses some of the second-order effects should the US Secret Service actually get the sarcasm-detection software they’re reportedly looking for:
… But then the Internet happened, and it just so happened to coincide with a flowering of highly politicized and canalized news media channels such that at any given time, whoever is POTUS, around 10% of the US population are convinced that they’re a baby-eating lizard-alien in a fleshsuit who is plotting to bring about the downfall of civilization, rather than a middle-aged male politician in a business suit.
Well now, here’s the thing: automating sarcasm detection is easy. It’s so easy they teach it in first year computer science courses; it’s an obvious application of AI. (You just get your Turing-test-passing AI that understands all the shared assumptions and social conventions that human-human conversation relies on to identify those statements that explicitly contradict beliefs that the conversationalist implicitly holds. So if I say “it’s easy to earn a living as a novelist” and the AI knows that most novelists don’t believe this and that I am a member of the set of all novelists, the AI can infer that I am being sarcastic. Or I’m an outlier. Or I’m trying to impress a date. Or I’m secretly plotting to assassinate the POTUS.)
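(Taken at face value, Stross’s recipe really does fit in a few lines, which is rather the joke. A toy sketch, with a hypothetical and hopelessly naive belief table:)

```python
# Stross's rule: flag a statement as sarcasm when the speaker asserts
# something their own group is known to disbelieve.
disbelieved = {"novelist": {"it's easy to earn a living as a novelist"}}

def looks_sarcastic(group: str, statement: str) -> bool:
    return statement in disbelieved.get(group, set())

print(looks_sarcastic("novelist", "it's easy to earn a living as a novelist"))
# True -- or an outlier, or impressing a date, or plotting against POTUS.
```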
Of course, we in the real world know that shaved apes like us never saw a system we didn’t want to game. So in the event that sarcasm detectors ever get a false positive rate of less than 99% (or a false negative rate of less than 1%) I predict that everybody will start deploying sarcasm as a standard conversational gambit on the internet.
Wait … I thought everyone already did?
Trolling the secret service will become a competitive sport, the goal being to not receive a visit from the SS in response to your totally serious threat to kill the resident of 1600 Pennsylvania Avenue. Al Qaida terrrrst training camps will hold tutorials on metonymy, aggressive irony, cynical detachment, and sarcasm as a camouflage tactic for suicide bombers. Post-modernist pranks will draw down the full might of law enforcement by mistake, while actual death threats go encoded as LOLCat macros. Any attempt to algorithmically detect sarcasm will fail because sarcasm is self-referential and the awareness that a sarcasm detector may be in use will change the intent behind the message.
As the very first commenter points out, a problem with this is that a substantial proportion of software developers (as indicated by their position on the Asperger/Autism spectrum) find it very difficult to detect sarcasm in real life…
Bruce Schneier reposts at his own site an article he wrote for The Mark News:
The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.
It was a software insecurity, but the problem was entirely human.
Software has vulnerabilities because it’s written by people, and people make mistakes — thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.
In retrospect, the mistake should have been obvious, and it’s amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.
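Schneier doesn’t rehearse the mechanism here, but it’s well documented: the heartbeat handler trusted the payload length the peer claimed instead of the length actually received. A toy sketch of that class of bug, emphatically not the real OpenSSL code:

```python
# Toy stand-in for whatever happens to sit in memory next to the payload.
adjacent_memory = b";session=deadbeef;privkey=hunter2;..."

def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    buffer = payload + adjacent_memory
    # BUG: claimed_len is never checked against len(payload).
    return buffer[:claimed_len]

print(heartbeat(b"hat", 3))   # b'hat' -- an honest client
print(heartbeat(b"hat", 40))  # b'hat' plus 37 bytes that were never ours to read
```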
The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google’s security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.
When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it’s announced.