A realistic view by joyoftech.com:
H/T to The Arts Mechanical for the link.
Every few years, a researcher replicates a security study by littering USB sticks around an organization’s grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as “teachable moments” for others. “If only everyone was more security aware and had more security training,” they say, “the Internet would be a much safer place.”
Enough of that. The problem isn’t the users: it’s that we’ve designed our computer systems’ security so badly that we demand the user do all of these counterintuitive things. Why can’t users choose easy-to-remember passwords? Why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?
Bruce Schneier, “Security Design: Stop Trying to Fix the User”, Schneier on Security, 2016-10-03.
At Samizdata, Patrick Crozier gets all contrarian about the tank in a post he titles “Haig’s greatest mistake”:
On 15 September 1916 tanks made their debut at Flers-Courcelette, one of the many engagements which took place during the Battle of the Somme.
The battle marked the beginning of a sorry chapter in British military history because the truth – a truth that to this day few seem prepared to acknowledge – is that the First World War tank was useless.
The list of its failings is lengthy. It was slow, it was unreliable, it had no suspension and it was horrible to operate. The temperature inside was typically over 100°F and as exhaust gases built up so crew effectiveness collapsed. It was also highly vulnerable. Field artillery could take it out easily. Even rifle ammunition could be effective against it. While normal bullets might not be able to penetrate the armour they could knock off small pieces of metal from the inside – known as spall – which then whizzed round the interior wounding all and sundry.
That the tank was the brainchild of Winston Churchill from his days as head of the Admiralty should have alerted senior commanders to the possibility that it was yet another of his crackpot schemes. But they persisted. For his part, Haig, being a technophile, put a huge amount of faith in the new invention. His diary is littered with references to the tank and he seems to have made great efforts to secure ever more of them. In consequence, huge amounts of effort went into a technological dead end when it would have been far better spent on guns, shells and fuzes.
Not that such efforts were ever likely to satisfy the snake-oil salesmen who made up the ranks of the tank enthusiasts. In the face of tank failure after tank failure they simply claimed that their beloved weapon just wasn’t being used properly.
Published on 15 Sep 2016
For years the British had developed the idea of the “landship” or tank and now it was finally ready for the first deployment during the Battle of Flers-Courcelette. And even though technical problems plagued the new invention, the British leadership was confident that this new weapon would break the stalemate at the Western Front for good. In the meantime Germany was focusing all offensive efforts on the Romanian front to mercilessly crush the new enemy.
Published on 12 Sep 2016
The idea for an armoured vehicle that could withstand fire and travel across battlefields was already developed in 1914 after the Race to the Sea. The British “Landship Committee” developed the tank weapon in secrecy. The French were also trying out different designs at the same time. Learn all about the development and the invention of the tank in our special episode.
Megan McArdle explains why some Apple fans are not overjoyed at the latest iPhones:
You’ve probably been thinking to yourself, “Gee, I wish I couldn’t charge my phone while also listening to music.” Or perhaps, “Gosh, if only my headphones were more expensive, easier to lose and required frequent charging.” If so, you’re in luck. Apple’s newest iPhone, unveiled on Wednesday, lacks the familiar 3.5-millimeter headphone jack. You can listen to music through the same lightning jack that you charge the phone with, or you can shell out for wireless headphones. The internets have been … unpleased with this news.
To be fair, there are design reasons for doing this. As David Pogue writes, the old-fashioned jack is an ancient piece of technology. It’s been around for more than 50 years. “As a result,” says Pogue, “it’s bulky — and in a phone, bulk = death.”
Getting rid of this ancient titan will make for a thinner phone or leave room for a bigger battery. Taking a hole out of the phone also makes it easier to waterproof. And getting rid of the jack removes a possible point of failure, since friction isn’t good for parts.
For people who place a high value on a thin phone, this is probably a good move; they’ll switch to wireless earbuds or use the lightning jack. But there are those of us who have never dropped our phones in the sink. We replace our iPhones when the battery dies, an event that tends to occur long before the headphone jack breaks. There are people in the world who take their phones on long trips, requiring them to charge them while making work calls, and they won’t want to fumble around for splitters or adapters. Some of us do not care whether our phone is merely fashionably slender or outright anorexic. For these groups, Apple’s move represents a trivial gain for a large loss: the vital commodity that economists call option value.
Option value is basically what it sounds like. The option to do something is worth having, even if you never actually do it. That’s because it increases the range of possibility, and some of those possibilities may be better than your current alternatives. My favorite example of option value is the famous economist who told me that he had tried to argue his wife into always ordering an extra entree, one they hadn’t tried before, when they got Chinese takeout. Sure, that extra entree cost them money. And sure, they might not like it. But that entree had option value embedded in it: they might discover that they like the new entree even better than the things they usually ordered, and thereby move the whole family up to a higher valued use of their Chinese food dollars.
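The entree story can be put as a small expected-value calculation. A minimal sketch in Python, with all of the numbers hypothetical (the column gives none):

```python
# Illustrative sketch of option value in the "extra entree" story.
# Every number here is a made-up assumption, not from the column.

known_value = 8.0   # enjoyment value of a familiar, reliable entree
p_great, v_great = 0.2, 15.0   # the new dish is a hit 20% of the time
p_okay, v_okay = 0.8, 6.0      # otherwise it's merely okay

# Tonight's expected enjoyment of the untried dish, on its own:
expected_new = p_great * v_great + p_okay * v_okay  # 3.0 + 4.8 = 7.8

# Viewed as a one-off, 7.8 < 8.0, so the experiment looks like a loser.
# But discovering a great dish upgrades every future order too: if it
# pans out, each of (say) 20 future meals improves by (15 - 8).
future_meals = 20
option_value = p_great * (v_great - known_value) * future_meals  # 28.0

print(expected_new)   # 7.8
print(option_value)   # 28.0
```

The point the sketch makes: the experiment can lose in expectation tonight yet still be worth paying for, because the information it produces has value across all future choices.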
I learned something this weekend about the high cost of the subtle delusion that creative technical problem-solving is the preserve of a priesthood of experts, using powers and perceptions beyond the ken of ordinary human beings.
Terry Pratchett is the author of the Discworld series of satirical fantasies. He is — and I don’t say this lightly, or without having given the matter thought and study — quite probably the most consistently excellent writer of intelligent humor in the last century in English. One has to go back as far as P.G. Wodehouse or Mark Twain to find an obvious equal in consistent quality, volume, and sly wisdom.
I’ve been a fan of Terry’s since before his first Discworld novel; I’m one of the few people who remembers Strata, his 1981 first experiment with the disc-world concept. The man has been something like a long-term acquaintance of mine for ten years — one of those people you’d like to call a friend, and who you think would like to call you a friend, if the two of you ever arranged enough concentrated hang time to get that close. But we’re both damn busy people, and live five thousand miles apart.
This weekend, Terry and I were both guests of honor at a hybrid SF convention and Linux conference called Penguicon held in Warren, Michigan. We finally got our hang time. Among other things, I taught Terry how to shoot pistols. He loves shooter games, but as a British resident his opportunities to play with real firearms are strictly limited. (I can report that Terry handled my .45 semi with remarkable competence and steadiness for a first-timer. I can also report that this surprised me not at all.)
During Terry’s Guest-of-Honor speech, he revealed his past as (he thought) a failed hacker. It turns out that back in the 1970s Terry used to wire up elaborate computerized gadgets from Timex Sinclair computers. One of his projects used a primitive memory chip that had light-sensitive gates to build a sort of perceptron that could actually see the difference between a circle and a cross. His magnum opus was a weather station that would log readings of temperature and barometric pressure overnight and deliver weather reports through a voice synthesizer.
But the most astonishing part of the speech was the followup in which Terry told us that despite his keen interest and elaborate homebrewing, he didn’t become a programmer or a hardware tech because he thought techies had to know mathematics, which he thought he had no talent for. He then revealed that he thought of his projects as a sort of bad imitation of programming, because his hardware and software designs were total lash-ups and he never really knew what he was doing.
I couldn’t stand it. “And you think it was any different for us?” I called out. The audience laughed and Terry passed off the remark with a quip. But I was just boggled. Because I know that almost all really bright techies start out that way, as compulsive tinkerers who blundered around learning by experience before they acquired systematic knowledge. “Oh ye gods and little fishes”, I thought to myself, “Terry is a hacker!”
Yes, I thought ‘is’ — even if Terry hasn’t actually tinkered with any computer software or hardware in a quarter-century. Being a hacker is expressed through skills and projects, but it’s really a kind of attitude or mental stance that, once acquired, is never really lost. It’s a kind of intense, omnivorous playfulness that tends to color everything a person does.
So it burst upon me that Terry Pratchett has the hacker nature. Which, actually, explains something that has mildly puzzled me for years. Terry has a huge following in the hacker community — knowing his books is something close to basic cultural literacy for Internet geeks. One is actually hard-put to think of any other writer for whom this is as true. The question this has always raised for me is: why Terry, rather than some hard-SF writer whose work explicitly celebrates the technologies we play with?
Eric S. Raymond, “The Delusion of Expertise”, Armed and Dangerous, 2003-05-05.
Michael Geist explains why the federal government’s plans for digitization are so underwhelming:
Imagine going to your local library in search of Canadian books. You wander through the stacks but are surprised to find most shelves barren with the exception of books that are over a hundred years old. This sounds more like an abandoned library than one serving the needs of its patrons, yet it is roughly what a recently released Canadian National Heritage Digitization Strategy envisions.
Led by Library and Archives Canada and endorsed by Canadian Heritage Minister Mélanie Joly, the strategy acknowledges that digital technologies make it possible “for memory institutions to provide immediate access to their holdings to an almost limitless audience.”
Yet it stops strangely short of trying to do just that.
My weekly technology law column notes that rather than establishing a bold objective as has been the hallmark of recent Liberal government policy initiatives, the strategy sets as its 10-year goal the digitization of 90 per cent of all published heritage dating from before 1917 along with 50 per cent of all monographs published before 1940. It also hopes to cover all scientific journals published by Canadian universities before 2000, selected sound recordings, and all historical maps.
The strategy points to similar initiatives in other countries, but the Canadian targets pale by comparison. For example, the Netherlands plans to digitize 90 per cent of all books published in that country by 2018 along with many newspapers and magazines that pre-date 1940.
Canada’s inability to adopt a cohesive national digitization strategy has been an ongoing source of frustration and the subject of multiple studies which concluded that the country is falling behind. While there has been no shortage of pilot projects and useful initiatives from university libraries, Canada has thus far failed to articulate an ambitious, national digitization vision.
When it comes to computer security, you should always listen to what Bruce Schneier has to say, especially when it comes to the “Internet of things”:
Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).
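Of the three legs, integrity is the one software most often checks mechanically. A minimal sketch of such a check, using Python's standard `hmac` module (the key and messages are hypothetical):

```python
import hmac
import hashlib

# Integrity check: a keyed MAC lets a receiver detect whether data was
# modified in transit. Key and messages are made-up illustration values.
key = b"shared-secret"
message = b"transfer $100 to account 42"

# Sender computes a tag over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Receiver recomputes the tag and compares in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))                          # True
print(verify(key, b"transfer $9999 to account 13", tag))  # False
```

Note this mechanism addresses only integrity: the message itself travels in the clear (no confidentiality), and nothing here stops an attacker from simply dropping it (no availability).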
So far, internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by — presumptively — China in 2015.
On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door — or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location.
With the advent of the Internet of Things and cyber-physical systems in general, we’ve given the internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete.
Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are pretty literally endless. The Internet of Things will allow for attacks we can’t even imagine.
The increased risks come from three things: software control of systems, interconnections between systems, and automatic or autonomous systems. Let’s look at them in turn.
I’m usually a pretty tech-positive person, but I actively avoid anything that bills itself as being IoT-enabled … call me paranoid, but I don’t want to hand over local control of my environment, my heating or cooling system, or pretty much anything else on my property to an outside agency (whether government or corporate).
Published on 4 Jun 2016
How did the ancient civilization of Sumer first develop the concept of the written word? It all began with simple warehouse tallies in the temples, but as the scribes sought more simple ways to record information, those tallies gradually evolved from pictograms into cuneiform text which could be used to convey complex, abstract, or even lyrical ideas.
Sumer was the land of the first real cities, and those cities required complex administration. The temples which kept people together were not only religious places, but also warehouses which stored the community’s collective wealth until it was needed to get through lean years. As the donations came in, scribes would count the items and draw pictures of them on clay tablets. The images quickly became abstract as the scribes needed to rush, and they also morphed to represent not just an image but the word itself – more specifically, the sound of the word, which meant that it could also be written to represent other words that sounded similar (homophones). Sumerian language often put words together to express new ideas, and the same concept applied to their writing. As people came to use this system more, the scribes began to write from left to right instead of top to bottom since they were less likely to mess up their clay tablets that way. Those who read the tablets didn’t appreciate this change, so the scribes rotated the words 90 degrees allowing tablets to be rotated if the reader preferred – but this made the images even more abstract, until eventually the pictograms vanished entirely to be replaced by wedge-shaped stylus marks: cuneiform. Many of Sumer’s neighbours adopted this invention and helped it spread throughout the region, though completely different writing systems developed independently in cultures situated in places like China and South America!
Scratch the surface of “Silicon Valley culture” and you’ll find dozens of subcultures beneath. One means of production unites many tribes, but that’s about all that unites them. At a company the size of Google or even GitHub, you can expect to find as many varieties of cliques as you would in an equivalently sized high school, along with a “corporate culture” that’s as loudly promoted and roughly as genuine as the “school spirit” on display at every pep rally you were ever forced to sit through. One of those groups will invariably be the weirdoes.
Humans are social animals, and part of what makes a social species social is that its members place a high priority on signaling their commitment to other members of their species. Weirdoes’ priorities are different; our primary commitment is to an idea or a project or a field of inquiry. Species-membership commitment doesn’t just take a back seat, it’s in the trunk with a bag over its head.
Not only that, our primary commitments are so consuming that they leak over into everything we think, say, and do. This makes us stick out like the proverbial sore thumb: We’re unable to hide that our deepest loyalties aren’t necessarily to the people immediately around us, even if they’re around us every day. We have a name for people whose loyalties adhere to the field of technology — and to the society of our fellow weirdoes who we meet and befriend in technology-mediated spaces — rather than to the hairless apes nearby. I prefer this term to “weird nerds,” and so I’ll use it here: hackers.
You might not consider hackers to be a tribe apart, but I guarantee you that many — if not most — hackers themselves do. Eric S. Raymond’s “A Brief History of Hackerdom,” whose first draft dates to 1992, contains a litany of descriptions that speak to this:
They wore white socks and polyester shirts and ties and thick glasses and coded in machine language and assembler and FORTRAN and half a dozen ancient languages now forgotten .…
The mainstream of hackerdom, (dis)organized around the Internet and by now largely identified with the Unix technical culture, didn’t care about the commercial services. These hackers wanted better tools and more Internet ….
[I]nstead of remaining in isolated small groups each developing their own ephemeral local cultures, they discovered (or re-invented) themselves as a networked tribe.
Meredith Patterson, “When Nerds Collide: My intersectionality will have weirdoes or it will be bullshit”, Medium.com, 2014-04-23.
Of all the sound, fury, and quiet voices of reason in the storm of controversy about tech culture and what is to become of it, quiet voice of reason Zeynep Tufekci’s “No, Nate, brogrammers may not be macho, but that’s not all there is to it” moves the discussion farther forward than any other contribution I’ve seen to date. Sadly, though, it still falls short of truly bridging the conceptual gap between nerds and “weird nerds.” Speaking as a lifelong member of the weird-nerd contingent, it’s truly surreal that this distinction exists at all. I’m slightly older than Nate Silver and about a decade younger than Paul Graham, so it wouldn’t surprise me if either or both find it just as puzzling. There was no cultural concept of cool nerds, or even not-cool-but-not-that-weird nerds, when we were growing up, or even when we were entering the workforce.
That’s no longer true. My younger colleague @puellavulnerata observes that for a long time, there were only weird nerds, but when our traditional pursuits (programming, electrical engineering, computer games, &c) became a route to career stability, nerdiness and its surface-level signifiers got culturally co-opted by trend-chasers who jumped on the style but never picked up on the underlying substance that differentiates weird nerds from the culture that still shuns them. That doesn’t make them “fake geeks,” boy, girl, or otherwise — you can adopt geek interests without taking on the entire weird-nerd package — but it’s still an important distinction. Indeed, the notion of “cool nerds” serves to erase the very existence of weird nerds, to the extent that many people who aren’t weird nerds themselves only seem to remember we exist when we commit some faux pas by their standards.
Even so, science, technology, and mathematics continue to attract the same awkward, isolated, and lonely personalities they have always attracted. Weird nerds are made, not born, and our society turns them out at a young age. Tufekci argues that “life’s not just high school,” but the process of unlearning lessons ingrained from childhood takes a lot more than a cap and gown or even a $10 million VC check, especially when life continues to reinforce those lessons well into adulthood. When weird nerds watch the cool kids jockeying for social position on Twitter, we see no difference between these status games and the ones we opted out of in high school. No one’s offered evidence to the contrary, so what incentive do we have to play that game? Telling us to grow up, get over it, and play a game we’re certain to lose is a demand that we deny the evidence of our senses and an infantilising insult rolled into one.
This phenomenon explains much of the backlash from weird nerds against “brogrammers” and “geek feminists” alike. (If you thought the conflict was only between those two groups, or that someone who criticises one group must necessarily be a member of the other, then you haven’t been paying close enough attention.) Both groups are latecomers barging in on a cultural space that was once a respite for us, and we don’t appreciate either group bringing its cultural conflicts into our space in a way that demands we choose one side or the other. That’s a false dichotomy, and false dichotomies make us want to tear our hair out.
Meredith Patterson, “When Nerds Collide: My intersectionality will have weirdoes or it will be bullshit”, Medium.com, 2014-04-23.
Ten years ago, Terry Teachout finally got around to watching D.W. Griffith’s The Birth of a Nation, and found (to his relief) that it was just as offensively racist as everyone had always said. He also discovered that silent movies are becoming terra incognita even to those who love old movies:
None of this, however, interested me half so much as the fact that The Birth of a Nation progresses with the slow-motion solemnity of a funeral march. Even the title cards stay on the screen for three times as long as it takes to read them. Five minutes after the film started, I was squirming with impatience, and after another five minutes passed, I decided out of desperation to try an experiment: I cranked the film up to four times its normal playing speed and watched it that way. It was overly brisk in two or three spots, most notably the re-enactment of Lincoln’s assassination (which turned out to be quite effective – it’s the best scene in the whole film). For the most part, though, I found nearly all of The Birth of a Nation to be perfectly intelligible at the faster speed.
Putting aside for a moment the insurmountable problem of its content, it was the agonizingly slow pace of The Birth of a Nation that proved to be the biggest obstacle to my experiencing it as an objet d’art. Even after I sped it up, my mind continued to wander, and one of the things to which it wandered was my similar inability to extract aesthetic pleasure out of medieval art. With a few exceptions, medieval and early Renaissance art and music don’t speak to me. The gap of sensibility is too wide for me to cross. I have a feeling that silent film – not just The Birth of a Nation, but all of it – is no more accessible to most modern sensibilities. (The only silent movies I can watch with more than merely antiquarian interest are the comedies of Buster Keaton.) Nor do I think the problem is solely, or even primarily, that it’s silent: I have no problem with plotless dance, for instance. It’s that silent film “speaks” to me in an alien tongue, one I can only master in an intellectual way. That’s not good enough for me when it comes to art, whose immediate appeal is not intellectual but visceral (though the intellect naturally enters into it).
As for The Birth of a Nation, I’m glad I saw it once. My card is now officially punched. On the other hand, I can’t imagine voluntarily seeing it again, any more than I’d attend the premiere of an opera by Philip Glass other than at gunpoint. It is the quintessential example of a work of art that has fulfilled its historical purpose and can now be put aside permanently – and I don’t give a damn about history, at least not in my capacity as an aesthete. I care only for the validity of the immediate experience.
[…] Thrill me and all is forgiven. Bore me and you’ve lost me. That’s why I think it’s now safe to file and forget The Birth of a Nation. Yes, it’s still historically significant, and yes, it tells us something important about the way we once were. But it’s boring — and thank God for that.