I learned something this weekend about the high cost of the subtle delusion that creative technical problem-solving is the preserve of a priesthood of experts, using powers and perceptions beyond the ken of ordinary human beings.
Terry Pratchett is the author of the Discworld series of satirical fantasies. He is — and I don’t say this lightly, or without having given the matter thought and study — quite probably the most consistently excellent writer of intelligent humor in the last century in English. One has to go back as far as P.G. Wodehouse or Mark Twain to find an obvious equal in consistent quality, volume, and sly wisdom.
I’ve been a fan of Terry’s since before his first Discworld novel; I’m one of the few people who remembers Strata, his 1981 first experiment with the disc-world concept. The man has been something like a long-term acquaintance of mine for ten years — one of those people you’d like to call a friend, and who you think would like to call you a friend, if the two of you ever arranged enough concentrated hang time to get that close. But we’re both damn busy people, and live five thousand miles apart.
This weekend, Terry and I were both guests of honor at a hybrid SF convention and Linux conference called Penguicon held in Warren, Michigan. We finally got our hang time. Among other things, I taught Terry how to shoot pistols. He loves shooter games, but as a British resident his opportunities to play with real firearms are strictly limited. (I can report that Terry handled my .45 semi with remarkable competence and steadiness for a first-timer. I can also report that this surprised me not at all.)
During Terry’s Guest-of-Honor speech, he revealed his past as (he thought) a failed hacker. It turns out that back in the 1970s Terry used to wire up elaborate computerized gadgets from Timex Sinclair computers. One of his projects used a primitive memory chip that had light-sensitive gates to build a sort of perceptron that could actually see the difference between a circle and a cross. His magnum opus was a weather station that would log readings of temperature and barometric pressure overnight and deliver weather reports through a voice synthesizer.
But the most astonishing part of the speech was the followup in which Terry told us that despite his keen interest and elaborate homebrewing, he didn’t become a programmer or a hardware tech because he thought techies had to know mathematics, which he thought he had no talent for. He then revealed that he thought of his projects as a sort of bad imitation of programming, because his hardware and software designs were total lash-ups and he never really knew what he was doing.
I couldn’t stand it. “And you think it was any different for us?” I called out. The audience laughed and Terry passed off the remark with a quip. But I was just boggled. Because I know that almost all really bright techies start out that way, as compulsive tinkerers who blundered around learning by experience before they acquired systematic knowledge. “Oh ye gods and little fishes”, I thought to myself, “Terry is a hacker!”
Yes, I thought ‘is’ — even if Terry hasn’t actually tinkered with any computer software or hardware in a quarter-century. Being a hacker is expressed through skills and projects, but it’s really a kind of attitude or mental stance that, once acquired, is never really lost. It’s a kind of intense, omnivorous playfulness that tends to color everything a person does.
So it burst upon me that Terry Pratchett has the hacker nature. Which, actually, explains something that has mildly puzzled me for years. Terry has a huge following in the hacker community — knowing his books is something close to basic cultural literacy for Internet geeks. One is actually hard-put to think of any other writer for whom this is as true. The question this has always raised for me is: why Terry, rather than some hard-SF writer whose work explicitly celebrates the technologies we play with?
Eric S. Raymond, “The Delusion of Expertise”, Armed and Dangerous, 2003-05-05.
July 28, 2016
Michael Geist explains why the federal government’s plans for digitization are so underwhelming:
Imagine going to your local library in search of Canadian books. You wander through the stacks but are surprised to find most shelves barren with the exception of books that are over a hundred years old. This sounds more like an abandoned library than one serving the needs of its patrons, yet it is roughly what a recently released Canadian National Heritage Digitization Strategy envisions.
Led by Library and Archives Canada and endorsed by Canadian Heritage Minister Mélanie Joly, the strategy acknowledges that digital technologies make it possible “for memory institutions to provide immediate access to their holdings to an almost limitless audience.”
Yet it stops strangely short of trying to do just that.
My weekly technology law column notes that rather than establishing a bold objective as has been the hallmark of recent Liberal government policy initiatives, the strategy sets as its 10-year goal the digitization of 90 per cent of all published heritage dating from before 1917 along with 50 per cent of all monographs published before 1940. It also hopes to cover all scientific journals published by Canadian universities before 2000, selected sound recordings, and all historical maps.
The strategy points to similar initiatives in other countries, but the Canadian targets pale by comparison. For example, the Netherlands plans to digitize 90 per cent of all books published in that country by 2018 along with many newspapers and magazines that pre-date 1940.
Canada’s inability to adopt a cohesive national digitization strategy has been an ongoing source of frustration and the subject of multiple studies which concluded that the country is falling behind. While there has been no shortage of pilot projects and useful initiatives from university libraries, Canada has thus far failed to articulate an ambitious, national digitization vision.
July 27, 2016
When it comes to computer security, you should always listen to what Bruce Schneier has to say, especially when it comes to the “Internet of things”:
Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).
So far, internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by — presumably — China in 2015.
On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door — or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location.
With the advent of the Internet of Things and cyber-physical systems in general, we’ve given the internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete.
Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are pretty literally endless. The Internet of Things will allow for attacks we can’t even imagine.
The increased risks come from three things: software control of systems, interconnections between systems, and automatic or autonomous systems. Let’s look at them in turn.
I’m usually a pretty tech-positive person, but I actively avoid anything that bills itself as being IoT-enabled … call me paranoid, but I don’t want to hand over local control of my environment, my heating or cooling system, or pretty much anything else on my property to an outside agency (whether government or corporate).
July 4, 2016
Published on 4 Jun 2016
How did the ancient civilization of Sumer first develop the concept of the written word? It all began with simple warehouse tallies in the temples, but as the scribes sought simpler ways to record information, those tallies gradually evolved from pictograms into cuneiform text which could be used to convey complex, abstract, or even lyrical ideas.
Sumer was the land of the first real cities, and those cities required complex administration. The temples which kept people together were not only religious places, but also warehouses which stored the community’s collective wealth until it was needed to get through lean years. As the donations came in, scribes would count the items and draw pictures of them on clay tablets. The images quickly became abstract as the scribes needed to rush, and they also morphed to represent not just an image but the word itself – more specifically, the sound of the word, which meant that it could also be written to represent other words that sounded similar (homophones). Sumerian language often put words together to express new ideas, and the same concept applied to their writing. As people came to use this system more, the scribes began to write from left to right instead of top to bottom since they were less likely to mess up their clay tablets that way. Those who read the tablets didn’t appreciate this change, so the scribes rotated the words 90 degrees allowing tablets to be rotated if the reader preferred – but this made the images even more abstract, until eventually the pictograms vanished entirely to be replaced by wedge-shaped stylus marks: cuneiform. Many of Sumer’s neighbors adopted this invention and helped it spread throughout the region, though completely different writing systems developed independently in cultures situated in places like China and South America!
June 29, 2016
Scratch the surface of “Silicon Valley culture” and you’ll find dozens of subcultures beneath. One means of production unites many tribes, but that’s about all that unites them. At a company the size of Google or even GitHub, you can expect to find as many varieties of cliques as you would in an equivalently sized high school, along with a “corporate culture” that’s as loudly promoted and roughly as genuine as the “school spirit” on display at every pep rally you were ever forced to sit through. One of those groups will invariably be the weirdoes.
Humans are social animals, and part of what makes a social species social is that its members place a high priority on signaling their commitment to other members of their species. Weirdoes’ priorities are different; our primary commitment is to an idea or a project or a field of inquiry. Species-membership commitment doesn’t just take a back seat, it’s in the trunk with a bag over its head.
Not only that, our primary commitments are so consuming that they leak over into everything we think, say, and do. This makes us stick out like the proverbial sore thumb: We’re unable to hide that our deepest loyalties aren’t necessarily to the people immediately around us, even if they’re around us every day. We have a name for people whose loyalties adhere to the field of technology — and to the society of our fellow weirdoes who we meet and befriend in technology-mediated spaces — rather than to the hairless apes nearby. I prefer this term to “weird nerds,” and so I’ll use it here: hackers.
You might not consider hackers to be a tribe apart, but I guarantee you that many — if not most — hackers themselves do. Eric S. Raymond’s “A Brief History of Hackerdom,” whose first draft dates to 1992, contains a litany of descriptions that speak to this:
They wore white socks and polyester shirts and ties and thick glasses and coded in machine language and assembler and FORTRAN and half a dozen ancient languages now forgotten ….
The mainstream of hackerdom, (dis)organized around the Internet and by now largely identified with the Unix technical culture, didn’t care about the commercial services. These hackers wanted better tools and more Internet ….
[I]nstead of remaining in isolated small groups each developing their own ephemeral local cultures, they discovered (or re-invented) themselves as a networked tribe.
Meredith Patterson, “When Nerds Collide: My intersectionality will have weirdoes or it will be bullshit”, Medium.com, 2014-04-23.
June 22, 2016
Of all the sound, fury, and quiet voices of reason in the storm of controversy about tech culture and what is to become of it, quiet voice of reason Zeynep Tufekci’s “No, Nate, brogrammers may not be macho, but that’s not all there is to it” moves the discussion farther forward than any other contribution I’ve seen to date. Sadly, though, it still falls short of truly bridging the conceptual gap between nerds and “weird nerds.” Speaking as a lifelong member of the weird-nerd contingent, it’s truly surreal that this distinction exists at all. I’m slightly older than Nate Silver and about a decade younger than Paul Graham, so it wouldn’t surprise me if either or both find it just as puzzling. There was no cultural concept of cool nerds, or even not-cool-but-not-that-weird nerds, when we were growing up, or even when we were entering the workforce.
That’s no longer true. My younger colleague @puellavulnerata observes that for a long time, there were only weird nerds, but when our traditional pursuits (programming, electrical engineering, computer games, &c) became a route to career stability, nerdiness and its surface-level signifiers got culturally co-opted by trend-chasers who jumped on the style but never picked up on the underlying substance that differentiates weird nerds from the culture that still shuns them. That doesn’t make them “fake geeks,” boy, girl, or otherwise — you can adopt geek interests without taking on the entire weird-nerd package — but it’s still an important distinction. Indeed, the notion of “cool nerds” serves to erase the very existence of weird nerds, to the extent that many people who aren’t weird nerds themselves only seem to remember we exist when we commit some faux pas by their standards.
Even so, science, technology, and mathematics continue to attract the same awkward, isolated, and lonely personalities they have always attracted. Weird nerds are made, not born, and our society turns them out at a young age. Tufekci argues that “life’s not just high school,” but the process of unlearning lessons ingrained from childhood takes a lot more than a cap and gown or even a $10 million VC check, especially when life continues to reinforce those lessons well into adulthood. When weird nerds watch the cool kids jockeying for social position on Twitter, we see no difference between these status games and the ones we opted out of in high school. No one’s offered evidence to the contrary, so what incentive do we have to play that game? Telling us to grow up, get over it, and play a game we’re certain to lose is a demand that we deny the evidence of our senses and an infantilising insult rolled into one.
This phenomenon explains much of the backlash from weird nerds against “brogrammers” and “geek feminists” alike. (If you thought the conflict was only between those two groups, or that someone who criticises one group must necessarily be a member of the other, then you haven’t been paying close enough attention.) Both groups are latecomers barging in on a cultural space that was once a respite for us, and we don’t appreciate either group bringing its cultural conflicts into our space in a way that demands we choose one side or the other. That’s a false dichotomy, and false dichotomies make us want to tear our hair out.
Meredith Patterson, “When Nerds Collide: My intersectionality will have weirdoes or it will be bullshit”, Medium.com, 2014-04-23.
May 26, 2016
Ten years ago, Terry Teachout finally got around to watching D.W. Griffith’s The Birth of a Nation, and found (to his relief) that it was just as offensively racist as everyone had always said. He also discovered that silent movies are becoming terra incognita even to those who love old movies:
None of this, however, interested me half so much as the fact that The Birth of a Nation progresses with the slow-motion solemnity of a funeral march. Even the title cards stay on the screen for three times as long as it takes to read them. Five minutes after the film started, I was squirming with impatience, and after another five minutes passed, I decided out of desperation to try an experiment: I cranked the film up to four times its normal playing speed and watched it that way. It was overly brisk in two or three spots, most notably the re-enactment of Lincoln’s assassination (which turned out to be quite effective – it’s the best scene in the whole film). For the most part, though, I found nearly all of The Birth of a Nation to be perfectly intelligible at the faster speed.
Putting aside for a moment the insurmountable problem of its content, it was the agonizingly slow pace of The Birth of a Nation that proved to be the biggest obstacle to my experiencing it as an objet d’art. Even after I sped it up, my mind continued to wander, and one of the things to which it wandered was my similar inability to extract aesthetic pleasure out of medieval art. With a few exceptions, medieval and early Renaissance art and music don’t speak to me. The gap of sensibility is too wide for me to cross. I have a feeling that silent film – not just The Birth of a Nation, but all of it – is no more accessible to most modern sensibilities. (The only silent movies I can watch with more than merely antiquarian interest are the comedies of Buster Keaton.) Nor do I think the problem is solely, or even primarily, that it’s silent: I have no problem with plotless dance, for instance. It’s that silent film “speaks” to me in an alien tongue, one I can only master in an intellectual way. That’s not good enough for me when it comes to art, whose immediate appeal is not intellectual but visceral (though the intellect naturally enters into it).
As for The Birth of a Nation, I’m glad I saw it once. My card is now officially punched. On the other hand, I can’t imagine voluntarily seeing it again, any more than I’d attend the premiere of an opera by Philip Glass other than at gunpoint. It is the quintessential example of a work of art that has fulfilled its historical purpose and can now be put aside permanently – and I don’t give a damn about history, at least not in my capacity as an aesthete. I care only for the validity of the immediate experience.
[…] Thrill me and all is forgiven. Bore me and you’ve lost me. That’s why I think it’s now safe to file and forget The Birth of a Nation. Yes, it’s still historically significant, and yes, it tells us something important about the way we once were. But it’s boring — and thank God for that.
May 9, 2016
James Pinkstone talks about the time he discovered that Apple Music had helpfully deleted over 100 GB of his music files from his local hard drive:
What Amber explained was exactly what I’d feared: through the Apple Music subscription, which I had, Apple now deletes files from its users’ computers. When I signed up for Apple Music, iTunes evaluated my massive collection of Mp3s and WAV files, scanned Apple’s database for what it considered matches, then removed the original files from my internal hard drive. REMOVED them. Deleted. If Apple Music saw a file it didn’t recognize — which came up often, since I’m a freelance composer and have many music files that I created myself — it would then upload it to Apple’s database, delete it from my hard drive, and serve it back to me when I wanted to listen, just like it would with my other music files it had deleted.
This led to four immediate problems:
1. If Apple serves me my music, that means that when I don’t have wifi access, I can’t listen to it. When I say “my music,” I don’t just mean the music that, over twenty years (since before iTunes existed), I painstakingly imported from thousands of CDs and saved to my computer’s internal hard drive. I also mean original music that I recorded and saved to my computer. Apple and wifi access now decide if I can hear it, and where, and when.
2. What Apple considers a “match” often isn’t. That rare, early version of Fountains of Wayne’s “I’ll Do The Driving,” labeled as such? Still had its same label, but was instead replaced by the later-released, more widely available version of the song. The piano demo of “Sister Jack” that I downloaded directly from Spoon’s website ten years ago? Replaced with the alternate, more common demo version of the song. What this means, then, is that Apple is engineering a future in which rare, or varying, mixes and versions of songs won’t exist unless Apple decides they do. Said alternate versions will be replaced by the most mainstream version, despite their original, at-one-time correct, titles, labels, and file contents.
3. Although I could click the little cloud icon next to each song title and “get it back” from Apple, their servers aren’t fast enough to make it an easy task. It would take around thirty hours to get my music back. And even then…
4. Should I choose to reclaim my songs via download, the files I would get back would not necessarily be the same as my original files. As a freelance composer, I save WAV files of my own compositions rather than Mp3s. WAV files have about ten times the number of samples, so they just sound better. Since Apple Music does not support WAV files, as they stole my compositions and stored them in their servers, they also converted them to Mp3s or AACs. So not only do I need to keep paying Apple Music just to access my own files, but I have to hear an inferior version of each recording instead of the one I created.
I didn’t sign up for the free Apple Music trial when it was introduced because I have a data cap on my internet connection: just a few hours of listening to my own music might well make a big dent in my internet usage for the month. That would be ridiculously wasteful. Even so, every now and again iTunes cheerfully informs me that this or that song from my collection can no longer be played (and has been deleted), with no chance for me to fix it. When it’s a song I ripped from the original CD (and I still have the CD), it’s merely an inconvenience. When it’s a song I paid Apple to download, it’s much more than that: it implies that everything I’ve downloaded from Apple is actually just a rental with an indeterminate rental period.
It is, however, a sign of the future:
For about ten years, I’ve been warning people, “hang onto your media. One day, you won’t buy a movie. You’ll buy the right to watch a movie, and that movie will be served to you. If the companies serving the movie don’t want you to see it, or they want to change something, they will have the power to do so. They can alter history, and they can make you keep paying for things that you formerly could have bought. Information will be a utility rather than a possession. Even information that you yourself have created will require unending, recurring payments just to access.”
When giving the above warning, however, even in my most Orwellian paranoia I never could have dreamed that the content holders, like Apple, would also reach into your computer and take away what you already owned. If Taxi Driver is on Netflix, Netflix doesn’t come to your house and steal your Taxi Driver DVD. But that’s where we’re headed. When it comes to music, Apple is already there.
April 11, 2016
In a way, most of us don’t really understand how differently we live from even a very short time ago. Read the comments on this excellent piece from Sarah Hoyt.
The fact is that right now just about everybody in the developed nations can afford products that are better and cheaper than anything that has ever been made. For instance, my car is almost ten years old, has never required major maintenance, and its body is as free of exterior rust as when it was new. The computer I’m writing this on is more powerful than ANY computer that you could buy in 1980. The clothes I’m wearing are more durable and better sewn than anything you could buy in 1950. And everything is essentially so cheap that just about everybody can afford it.
The fact is that, because of the constant improvement of manufacturing techniques, the difference between the highest quality and lowest quality goods has become essentially nonexistent.
The great gap in lifestyles due to wealth is by and large gone, which raises the question: what can the wealthy buy with their money? The answer isn’t very pleasant.
What they buy is access and power. You don’t have to look much further than Warren Buffett, George Soros or Tom Steyer to see that. Or the Koch Brothers, for that matter. All of these people and others have created large influence-building organizations for the sole purpose of influencing the rest of us stupid schmucks to do what they want us to. What they want us to do, all too often, is to give up the liberties and standard of living that our parents and grandparents worked so hard to build and retreat back to a lifestyle that will not compete with our “betters.” Sorry, but I’m not going for that.
J.C. Carlton, “What Can A Billionaire Buy That Most People Can’t?”, The Arts Mechanical, 2016-03-30.
March 29, 2016
Charles Stross has a theory:
A lot of people are watching the spectacle of Apple vs. the FBI and the Homeland Security Theatre and rubbing their eyes, wondering why Apple (in the person of CEO Tim Cook) is suddenly the knight in shining armour on the side of consumer privacy and civil rights. Apple, after all, is a goliath-sized corporate behemoth with the second largest market cap in US stock market history — what’s in it for them?
As is always the case, to understand why Apple has become so fanatical about customer privacy over the past five years that they’re taking on the US government, you need to follow the money.
Apple see their long term future as including a global secure payments infrastructure that takes over the role of Visa and Mastercard’s networks — and ultimately of spawning a retail banking subsidiary to provide financial services directly, backed by some of their cash stockpile.
The FBI thought they were asking for a way to unlock a mobile phone, because the FBI is myopically focussed on past criminal investigations, not the future of the technology industry, and the FBI did not understand that they were actually asking for a way to tracelessly unlock and mess with every ATM and credit card on the planet circa 2030 (if not via Apple, then via the other phone OSs, once the festering security fleapit that is Android wakes up and smells the money).
If the FBI get what they want, then the back door will be installed and the next-generation payments infrastructure will be just as prone to fraud as the last-generation card infrastructure, with its card skimmers and identity theft.
And this is why Tim Cook is willing to go to the mattresses with the US department of justice over iOS security: if nobody trusts their iPhone, nobody will be willing to trust the next-generation Apple Bank, and Apple is going to lose their best option for securing their cash pile as it climbs towards the stratosphere.
March 24, 2016
Published on 7 Apr 2015
What is tying and how is this a form of price discrimination? An example of a tied good is an HP printer and the HP ink you need for that printer. The printer (the base good) is often relatively cheap whereas the ink (the variable good) has a high markup, and eventually costs you far more than what you paid for the printer. Other examples include cell phones and data plans or the Kindle Fire and the accompanying books or music you purchase from Amazon. The base good is sold close to marginal cost, and the variable good is sold at above marginal cost. Why do companies tie their goods? Tied goods make it easy to price discriminate in a way that increases output and social welfare. Does tying increase or decrease social welfare? What is the difference between bundling and tying? We discuss these questions and others in this video.
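The pricing logic in that description can be sketched with a few hypothetical numbers (the prices, costs, and usage figures below are invented for illustration, not taken from the video): the base good is sold at marginal cost, so all of the seller’s profit comes from the markup on the variable good, and heavy users end up paying more in total than light users.

```python
# Hypothetical numbers illustrating tying as metered price discrimination:
# the printer (base good) is sold at marginal cost, and each ink cartridge
# (variable good) carries the markup, so total payment scales with usage.

PRINTER_COST = 80   # marginal cost of the printer; also its sale price
INK_COST = 5        # marginal cost per cartridge
INK_PRICE = 25      # tied-good price per cartridge (markup of 20)

consumers = {"light": 4, "heavy": 20}   # cartridges bought per year

def total_paid(cartridges):
    """What a consumer pays in total: printer at cost plus marked-up ink."""
    return PRINTER_COST + cartridges * INK_PRICE

def seller_profit(cartridges):
    """Zero profit on the printer; all profit comes from the ink markup."""
    return cartridges * (INK_PRICE - INK_COST)

for kind, n in consumers.items():
    print(f"{kind}: pays {total_paid(n)}, seller earns {seller_profit(n)}")
# light: pays 180, seller earns 80
# heavy: pays 580, seller earns 400
```

Because profit scales with cartridges used, the ink markup acts as a metering device: consumers who use the printer more intensively pay a higher effective price for it, without the seller ever having to observe who the heavy users are in advance.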
March 20, 2016
It’s only a rumour rather than a definite stand, but it is a hopeful one for civil liberties:
The spirit of anarchy and anti-establishment still runs strong at Apple. Rather than comply with the government’s requests to develop a so-called “GovtOS” to unlock the iPhone 5c of San Bernardino shooter Syed Rizwan Farook, The New York Times‘ half-dozen sources say that some software engineers may quit instead. “It’s an independent culture and a rebellious one,” former Apple engineering manager Jean-Louis Gassée tells NYT. “If the government tries to compel testimony or action from these engineers, good luck with that.”
Former senior product manager for Apple’s security and privacy division Window Snyder agrees. “If someone attempts to force them to work on something that’s outside their personal values, they can expect to find a position that’s a better fit somewhere else.”
In another instance of Apple’s company culture clashing with what the federal government demands, the development teams are apparently relatively siloed off from one another. It isn’t until a product gets closer to release that disparate teams like hardware and software engineers come together for finalizing a given gizmo. NYT notes that the team of six to 10 engineers needed to develop the back door doesn’t currently exist and that forcing any sort of collaboration would be incredibly difficult, again, due to how Apple works internally.
March 18, 2016
Published on 31 Jul 2014
The use of massive bombs and charges by the Royal Engineers was crucial during the war. See slow-motion footage of them using explosive devices such as the Bangalore Torpedo.