I recall, in the very early days of the personal computer, articles, in magazines like Personal Computer World, which expressed downright opposition to the idea of technological progress in general, and progress in personal computers in particular. There was apparently a market for such notions, in the very magazines that you would think would be most gung-ho about new technology and new computers. Maybe the general atmosphere of gung-ho-ness created a significant enough minority of malcontents that the editors felt they needed to nod regularly towards it. I guess it does make sense that the biggest grumbles about the hectic pace of technological progress would be heard right next to the places where it is happening most visibly.
Whatever the reasons were for such articles being in computer magazines, I distinctly remember their tone. I have recently, finally, got around to reading Virginia Postrel’s The Future and Its Enemies, and she clearly identifies the syndrome. The writers of these articles were scared of the future and wanted that future prevented, perhaps by law but mostly just by a sort of universal popular rejection of it, a universal desire to stop the world and to get off it. “Do we really need” (the words “we” and “need” cropped up in these PCW pieces again and again), faster central processors, more RAM, quicker printers, snazzier and bigger and sharper and more colourful screens, greater “user friendliness”, …? “Do we really need” this or that new program that had been reported in the previous month’s issue? What significant and “real” (as opposed to frivolous and game-related) problems could there possibly be that demanded such super-powerful, super-fast, super-memorising and of course, at that time, super-expensive machines for their solution? Do we “really need” personal computers to develop, in short, in the way that they have developed, since these grumpy anti-computer-progress articles first started being published in computer progress magazines?
The usual arguments in favour of fast and powerful, and now mercifully far cheaper, computers concern the immensity of the gobs of information that can now be handled, quickly and powerfully, by machines like the ones that we have now, as opposed to what could be handled by the first wave of personal computers, which could manage a small spreadsheet or a short text file or a very primitive computer game, but very little else. And of course that is true. I can now shovel vast quantities of photographs (a particular enthusiasm of mine) hither and thither, processing the ones I feel inclined to process in ways that only Hollywood studios used to be able to do. I can make and view videos (although I mostly stick to viewing). And I can access and even myself add to that mighty cornucopia that is the internet. And so on. All true. I can remember when even the most primitive of photos would only appear on my screen after several minutes of patient or not-so-patient waiting. Videos? Dream on. Now, what a world of wonders we can all inhabit. In another quarter of a century, what wonders will there then be, all magicked in a flash into our brains and onto our desks, if we still have desks. The point is, better computers don’t just mean doing the same old things a bit faster; they mean being able to do entirely new things as well, really well.
Brian Micklethwait, “Why fast and powerful computers are especially good if you are getting old”, Samizdata, 2014-09-17.
September 19, 2014
July 25, 2014
The gulf that separates us from the near past is now so great that we cannot really imagine how one could design a spacecraft, or learn engineering in the first place, or even just look something up, without a computer and a network. Journalists my age will understand how profound and disturbing this break in history is: Do you remember doing your job before Google? It was, obviously, possible, since we actually did it, but how? It is like having a past life as a conquistador or a phrenologist.
Colby Cosh, “Who will be the moonwalkers of tomorrow?”, Maclean’s, 2014-07-24.
July 15, 2014
Warren Meyer explains why computer models can be incredibly useful tools, but they are not the same thing as an actual proof:
Among the objections, including one from Green Party politician Chit Chong, were that Lawson’s views were not supported by evidence from computer modeling.
I see this all the time. A lot of things astound me in the climate debate, but perhaps the most astounding has been to be accused of being “anti-science” by people who have such a poor grasp of the scientific process.
Computer models and their output are not evidence of anything. Computer models are extremely useful when we have hypotheses about complex, multi-variable systems. It may not be immediately obvious how to test these hypotheses, so computer models can take these hypothesized formulas and generate predicted values of measurable variables that can then be used to compare to actual physical observations.
The other problem with computer models, besides the fact that they are not and cannot constitute evidence in and of themselves, is that their results are often sensitive to small changes in tuning or setting of variables, and that these decisions about tuning are often totally opaque to outsiders.
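A toy sketch of that tuning sensitivity (the model and every number in it are invented for illustration, not drawn from any real climate or market model): a simple compounding projection in which a half-point nudge to one tuned parameter moves the headline result by more than a quarter.

```python
# Toy projection model: compound a tuned growth parameter over 50 periods.
# A "small" change to the tuning produces a large change in the headline result,
# and nothing in the output reveals which tuning was used.

def project(initial, growth_rate, periods=50):
    """Compound `initial` by `growth_rate` for `periods` steps."""
    value = initial
    for _ in range(periods):
        value *= (1 + growth_rate)
    return value

baseline = project(100.0, 0.020)   # tuned at 2.0% per period
tweaked  = project(100.0, 0.025)   # nudged to 2.5% per period

print(round(baseline, 1))   # 269.2
print(round(tweaked, 1))    # 343.7 -- over 25% higher from a 0.5-point tweak
```

Stare at either output in isolation and there is no way to tell which tuning produced it, which is exactly the opacity problem described above.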
I did computer modelling for years, though of markets and economics rather than climate. But the techniques are substantially the same. And the pitfalls.
Confession time. In my very early days as a consultant, I did something I am not proud of. I was responsible for a complex market model based on a lot of market research and customer service data. Less than a day before the big presentation, and with all the charts and conclusions made, I found a mistake that skewed the results. In later years I would have the moral courage and confidence to cry foul and halt the process, but at the time I ended up tweaking a few key variables to make the model continue to spit out results consistent with our conclusion. It is embarrassing enough that I have trouble writing this for public consumption 25 years later.
But it was so easy. A few tweaks to assumptions and I could get the answer I wanted. And no one would ever know. Someone could stare at the model for an hour and not recognize the tuning.
June 18, 2014
It’s not that the “security” part of the job is so wearing … it’s that people are morons:
Security white hats, despair: users will run dodgy executables if they are paid as little as one cent.
Even more would allow their computers to become infected by botnet software nasties if the price was increased to five or 10 cents. Offer a whole dollar and you’ll secure a herd of willing internet slaves.
The demoralising findings come from a study led by Nicolas Christin, research professor at Carnegie Mellon University’s CyLab, which baited users with a benign Windows executable offered under the guise of contributing to a (fictitious) study.
It was downloaded 1,714 times and 965 users actually ran the code. The application ran a timer simulating an hour’s computational tasks, after which a token for payment would be generated.
The researchers collected information on the user machines, discovering that many of the predominantly US and Indian machines were already infected with malware despite having security systems installed, and that their users were happy to click past Windows’ User Account Control warning prompts.
The presence of malware actually increased on machines running the latest patches and infosec tools in what was described as an indication of users’ false sense of security.
April 17, 2014
As your body staggers down the winding road to death, user interfaces that require fighter pilot-grade eyesight, the dexterity of a neurosurgeon, and the mental agility of Derren Brown, are going to screw with you at some point.
Don’t kid yourself otherwise — disability, in one form or another, can strike at any moment.
Given that people are proving ever harder to kill off, you can expect to have decades of life ahead of you — during which you’ll be battling to figure out where on the touchscreen that trendy transdimensional two-pixel wide “OK” button is hiding.
Can you believe, people born today will spend their entire lives having to cope with this crap? The only way I can explain the web design of many Google products today is that some wannabe Picasso stole Larry Page’s girl when they were all 13, and is only now exacting his revenge. Nobody makes things that bad by accident, surely?
Dominic Connor, “Is tech the preserve of the young able-bodied? Let’s talk over a fine dinner and claret”, The Register, 2014-04-17
April 16, 2014
You would have thought this would have sunk in by now. The fact that it hasn’t shows what an extraordinary machine the internet is — quite different to any technology that has gone before it. When the Lovebug struck, few of us lived our lives online. Back then we banked in branches, shopped in shops, met friends and lovers in the pub and obtained jobs by posting CVs. Tweeting was for the birds. Cyberspace was marginal. Now, for billions, the online world is their lives. But there is a problem. Only a tiny, tiny percentage of the people who use the internet have even the faintest clue about how any of it works. “SSL”, for instance, stands for “Secure Sockets Layer”.
I looked it up and sort of understood it — for about five minutes. While most drivers have at least a notion of how an engine works (something about petrol exploding in cylinders and making pistons go up and down and so forth) the very language of the internet — “domain names” and “DNS codes”, endless “protocols” and so forth — is arcane, exclusive; it is, in fact, the language of magic. For all intents and purposes the internet is run by wizards.
And the trouble with letting wizards run things is that when things go wrong we are at their mercy. The world spends several tens of billions of pounds a year on anti-malware programs, which we are exhorted to buy lest the walls of our digital castles collapse around us. Making security software is a huge industry, and whenever there is a problem — either caused by viruses or by a glitch like Heartbleed — the internet security companies rush to be quoted in the media. And guess what, their message is never “keep calm and carry on”. As Professor Ross Anderson of Cambridge University says: “Almost all the cost of cybercrime is the cost of anticipation.”
Michael Hanlon, “Relax, Mumsnet users: don’t lose sleep over Heartbleed hysteria”, Telegraph, 2014-04-16
April 9, 2014
Update: In case you’re not concerned about the seriousness of this issue, The Register‘s John Leyden would like you to think again.
The catastrophic crypto key password vulnerability in OpenSSL affects far more than web servers, with everything from routers to smartphones also affected.
The so-called “Heartbleed” vulnerability (CVE-2014-0160) can be exploited to extract information from servers running vulnerable versions of OpenSSL, and this includes email servers and Android smartphones as well as routers.
Hackers could potentially gain access to private encryption keys before using this information to decipher the encrypted traffic to and from vulnerable websites.
Web sites including Yahoo!, Flickr and OpenSSL were among the many left vulnerable to the megabug that exposed encryption keys, passwords and other sensitive information.
Preliminary tests suggested 47 of the 1000 largest sites are vulnerable to Heartbleed and that’s only among the less than half that provide support for SSL or HTTPS at all. Many of the affected sites – including Yahoo! – have since patched the vulnerability. Even so, security experts – such as Graham Cluley – remain concerned.
OpenSSL is a widely used encryption library that is a key component of technology that enables secure (https) website connections.
The bug exists in the OpenSSL 1.0.1 source code and stems from coding flaws in a fairly new feature known as the TLS Heartbeat Extension. “TLS heartbeats are used as ‘keep alive’ packets so that the ends of an encrypted connection can agree to keep the session open even when they don’t have any official data to exchange,” explains security veteran Paul Ducklin in a post on Sophos’ Naked Security blog.
The Heartbleed vulnerability in the OpenSSL cryptographic library might be exploited to reveal contents of secured communication exchanges. The same flaw might also be used to lift SSL keys.
This means that sites could still be vulnerable to attacks after installing the patches in cases where a private key has been stolen. Sites therefore need to revoke exposed keys, reissue new keys, and invalidate all session keys and session cookies.
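The mechanics of the bug can be sketched schematically. This is an illustrative Python simulation of the flaw pattern (an unchecked, attacker-supplied length field), not the actual OpenSSL C code, and the "server memory" contents are invented:

```python
# Schematic illustration of the Heartbleed flaw pattern: the server trusts the
# attacker-declared payload length and echoes back that many bytes, overreading
# into whatever happens to sit in adjacent memory.

SERVER_MEMORY = b"heartbeat:PINGsecret_private_key_material"

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # BUG: no check that claimed_len matches the actual payload length
    start = SERVER_MEMORY.find(payload)
    return SERVER_MEMORY[start:start + claimed_len]

def heartbeat_patched(payload: bytes, claimed_len: int) -> bytes:
    # FIX: silently discard requests whose claimed length exceeds the payload
    if claimed_len > len(payload):
        return b""
    start = SERVER_MEMORY.find(payload)
    return SERVER_MEMORY[start:start + claimed_len]

print(heartbeat_vulnerable(b"PING", 4))    # honest request: echoes b'PING'
print(heartbeat_vulnerable(b"PING", 30))   # malicious: b'PING' plus leaked key material
print(heartbeat_patched(b"PING", 30))      # patched server returns nothing
```

The point of the sketch is that the leak is silent: the honest and malicious requests look identical to the server, which is why the bug went unnoticed for two years.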
“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.
Half a million sites are vulnerable, including my own. Test your vulnerability here.
The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.
At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.
April 7, 2014
In the New York Times, Gary Marcus and Ernest Davis examine the big claims being made for the big data revolution:
Is big data really all it’s cracked up to be? There is no doubt that big data is a valuable tool that has already had a critical impact in certain areas. For instance, almost every successful artificial intelligence computer program in the last 20 years, from Google’s search engine to the I.B.M. Jeopardy! champion Watson, has involved the substantial crunching of large bodies of data. But precisely because of its newfound popularity and growing use, we need to be levelheaded about what big data can — and can’t — do.
The first thing to note is that although big data is very good at detecting correlations, especially subtle correlations that an analysis of smaller data sets might miss, it never tells us which correlations are meaningful. A big data analysis might reveal, for instance, that from 2006 to 2011 the United States murder rate was well correlated with the market share of Internet Explorer: Both went down sharply. But it’s hard to imagine there is any causal relationship between the two. Likewise, from 1998 to 2007 the number of new cases of autism diagnosed was extremely well correlated with sales of organic food (both went up sharply), but identifying the correlation won’t by itself tell us whether diet has anything to do with autism.
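The point about correlations is easy to demonstrate: any two series that merely trend in the same direction will correlate almost perfectly. A quick sketch, with numbers invented for illustration rather than taken from the actual murder-rate or browser-share data:

```python
# Two made-up declining series correlate near-perfectly despite having no
# causal connection -- a high Pearson r by itself tells us nothing.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

murder_rate = [5.8, 5.7, 5.4, 5.0, 4.8, 4.7]       # hypothetical, declining
ie_share    = [60.0, 55.0, 48.0, 40.0, 33.0, 27.0]  # hypothetical, declining

print(round(pearson(murder_rate, ie_share), 2))     # close to +1.0
```

Both series simply go down over the same years, so the coefficient lands near 1.0; the data alone cannot distinguish this from a genuine causal relationship.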
Second, big data can work well as an adjunct to scientific inquiry but rarely succeeds as a wholesale replacement. Molecular biologists, for example, would very much like to be able to infer the three-dimensional structure of proteins from their underlying DNA sequence, and scientists working on the problem use big data as one tool among many. But no scientist thinks you can solve this problem by crunching data alone, no matter how powerful the statistical analysis; you will always need to start with an analysis that relies on an understanding of physics and biochemistry.
March 28, 2014
I’m sure there’s a perfectly simple, non-suspicious reason for the outgoing chief of staff of a provincial premier to arrange a non-government employee having access to key computers at a change of administration… because otherwise this would look particularly bad:
The Kathleen Wynne minority government went into serious damage control mode after the release of an OPP warrant which alleges criminal behaviour in the office of the premier.
The explosive document, made public by a judge Thursday but not proven in court, alleges a former chief of staff for ex-premier Dalton McGuinty committed a criminal breach of trust by arranging for another staffer’s techie boyfriend to access 24 desktop computers in the premier’s office as Wynne took over the reins in 2013.
A committee investigating the Ontario Liberals’ cancellation of gas plants in Oakville and Mississauga, at a loss of up to $1.1 billion, had already ordered the government to turn over all records related to that decision.
Wynne said the allegations, if true, are “disturbing” but she was not aware of and would not have condoned such activity.
“I was not in charge of the former chief of staff, I did not direct the former chief of staff, I did not direct anyone in my office to destroy information, nor would I ever do that,” Wynne said. “And, in fact, we have changed the rules about the retention of information.”
OPP investigators probing the alleged illegal deletion of e-mails executed a search warrant last month on a Mississauga data storage facility used by the Ontario government.
March 26, 2014
Raph Koster reflects on the promise of Oculus:
Oh, it’s hard. But it’s rapidly becoming commodity hardware. That was in fact the basic premise of the Oculus Rift: that the mass market commodity solution for a very old dream was finally approaching a price point where it made sense. The patents were expiring; the panels were cheap and getting better by the month. The rest was plumbing. Hard plumbing, the sort that calls for a Carmack, maybe, but plumbing.
Look, there are a few big visions for the future of computing doing battle.
There’s a wearable camp, full of glasses and watches. It’s still nascent, but its doom is already waiting in the wings; biocomputing of various sorts (first contacts, then implants, nano, who knows) will unquestionably win out over time, just because glasses and watches are what tech has been removing from us, not getting us to put back on. Google has its bets down here.
There’s a beacon-y camp, one where mesh networks and constant broadcasts label and dissect everything around us, blaring ads and enticing us with sales coupons as we walk through malls. In this world, everything is annotated and shouting at a digital level, passing messages back and forth. It’s a ubicomp environment where everything is “smart.” Apple has its bets down here.
These two things are going to get married. One is the mouth, the other the ears. One is the poke, the other the skin. And then we’re in a cyberpunk dream of ads that float next to us as we walk, getting between us and the other people, our every movement mined for Big Data.
The virtue of Oculus lies in presence. A startling, unusual sort of presence. Immersion is nice, but presence is something else again. Presence is what makes Facebook feel like a conversation. Presence is what makes you hang out on World of Warcraft. Presence is what makes offices persist in the face of more than enough capability for remote work. Presence is why a video series can out-draw a text-based MOOC and presence is why live concerts can make more money than album sales.
Facebook is laying its bet on people, instead of smart objects. It’s banking on the idea that doing things with one another online — the thing that has fueled it all this time — is going to keep being important. This is a play to own walking through Machu Picchu without leaving home, a play to own every classroom and every museum. This is a play to own what you do with other people.
Update: Apparently some of the folks who backed the original Kickstarter campaign have their panties in a bunch now that there’s big money involved.
Attendees wear Oculus Rift HD virtual reality head-mounted displays as they play EVE: Valkyrie, a multiplayer virtual reality dogfighting shooter game, at the Intel booth at the 2014 International CES, January 9, 2014 in Las Vegas, Nevada. ROBYN BECK/AFP/Getty Images
Facebook’s purchase of virtual reality company Oculus for $2bn in stocks and shares is big news for a third company: Kickstarter, which today celebrates the first billion-dollar exit of a company formed through the crowdfunding platform.
Oculus raised $2.4m for its Rift headset in September 2012, exceeding its initial fundraising goal almost tenfold. It remains one of the largest ever Kickstarter campaigns.
But as news of the acquisition broke Tuesday night, some of the 9,500 people who backed the project for sums of up to $5,000 apiece (the most popular package, containing an early prototype of the Rift, was backed by 5,600 people for a more reasonable $300) were rethinking their support.
For Kickstarter itself, the purchase raises awkward questions. The company has always maintained that it should not be viewed as a storefront for pre-ordering products; instead, a backer should be aware that they are giving money to a struggling artist or designer, and view the reward as a thanks rather than a purchase.
“Kickstarter Is Not a Store” is how the New York-based company put it in 2012, shortly after the Oculus Rift campaign closed. Instead, the company explained: “It’s a new way for creators and audiences to work together to make things.”
But if Kickstarter isn’t a store, and if backers also aren’t getting equity in the company which uses their money to build a $2bn business, then what are they actually paying for?
“Structurally I have an issue with it,” explains Buckenham, “in that the backer takes on a great deal of risk for relatively little upside and that the energy towards exciting things is formalised into a necessarily cash-based relationship in a way that enforces and extends capitalism into places where it previously didn’t have total dominion.”
March 16, 2014
ESR put this together as a backgrounder for a documentary film maker:
In its original and still most correct sense, the word “hacker” describes a member of a tribe of expert and playful programmers with roots in 1960s and 1970s computer-science academia, the early microcomputer experimenters, and several other contributory cultures including science-fiction fandom.
Through a historical process I could explain in as much detail as you like, this hacker culture became the architects of today’s Internet and evolved into the open-source software movement. (I had a significant role in this process as historian and activist, which is why my friends recommended that you talk to me.)
People outside this culture sometimes refer to it as “old-school hackers” or “white-hat hackers” (the latter term also has some more specific shades of meaning). People inside it (including me) insist that we are just “hackers” and using that term for anyone else is misleading and disrespectful.
Within this culture, “hacker” applied to an individual is understood to be a title of honor which it is arrogant to claim for yourself. It has to be conferred by people who are already insiders. You earn it by building things, by a combination of work and cleverness and the right attitude. Nowadays “building things” centers on open-source software and hardware, and on the support services for open-source projects.
There are — seriously — people in the hacker culture who refuse to describe themselves individually as hackers because they think they haven’t earned the title yet — they haven’t built enough stuff. One of the social functions of tribal elders like myself is to be seen to be conferring the title, a certification that is taken quite seriously; it’s like being knighted.
There is a cluster of geek subcultures within which the term “hacker” has very high prestige. If you think about my earlier description it should be clear why. Building stuff is cool, it’s an achievement.
There is a tendency for members of those other subcultures to try to appropriate hacker status for themselves, and to emulate various hacker behaviors — sometimes superficially, sometimes deeply and genuinely.
Imitative behavior creates a sort of gray zone around the hacker culture proper. Some people in that zone are mere posers. Some are genuinely trying to act out hacker values as they (incompletely) understand them. Some are ‘hacktivists’ with Internet-related political agendas but who don’t write code. Some are outright criminals exploiting journalistic confusion about what “hacker” means. Some are ambiguous mixtures of several of these types.
March 3, 2014
Gather round you kids, ’cause Uncle Eric is going to tell you about the dim, distant days of hacking before open source:
I was a historian before I was an activist, and I’ve been reminded recently that a lot of younger hackers have a simplified and somewhat mythologized view of how our culture evolved, one which tends to back-project today’s conditions onto the past.
In particular, many of us never knew – or are in the process of forgetting – how dependent we used to be on proprietary software. I think by failing to remember that past we are risking that we will misunderstand the present and mispredict the future, so I’m going to do what I can to set the record straight.
Without the Unix-spawned framework of concepts and technologies, having source code simply didn’t help very much. This is hard for younger hackers to realize, because they have no experience of the software world before retargetable compilers and code portability became relatively common. It’s hard for a lot of older hackers to remember because we mostly cut our teeth on Unix environments that were a few crucial years ahead of the curve.
But we shouldn’t forget. One very good reason is that believing a myth of the fall obscures the remarkable rise that we actually accomplished, bootstrapping ourselves up through a series of technological and social inventions to where open source on everyone’s desk and in everyone’s phone and ubiquitous in the Internet infrastructure is now taken for granted.
We didn’t get here because we failed in our duty to protect a prelapsarian software commons, but because we succeeded in creating one. That is worth remembering.
Update: In a follow-up post, ESR talks about closed source “sharecroppers” and Unix “nomads”.
Like the communities around SHARE (IBM mainframe users) and DECUS (DEC minicomputers) in the 1960s and 1970s, whatever community existed around ESPOL was radically limited by its utter dependence on the permissions and APIs that a single vendor was willing to provide. The ESPOL compiler was not retargetable. Whatever community developed around it could neither develop any autonomy nor survive the death of its hardware platform; the contributors had no place to retreat to in the event of predictable single-point failures.
I’ll call this sort of community “sharecroppers”. That term is a reference to SHARE, the oldest such user group. It also roughly expresses the relationship between these user groups and contributors, on the one hand, and the vendor on the other. The implied power relationship was pretty totally asymmetrical.
Contrast this with early Unix development. The key difference is that Unix-hosted code could survive the death of not just original hardware platforms but entire product lines and vendors, and contributors could develop a portable skillset and toolkits. The enabling technology – retargetable C compilers – made them not sharecroppers but nomads, able to evade vendor control by leaving for platforms that were less locked down and taking their tools with them.
I understand that it’s sentimentally appealing to retrospectively sweep all the early sharecropper communities into “open source”. But I think it’s a mistake, because it blurs the importance of retargetability, the ability to resist or evade vendor lock-in, and portable tools that you can take away with you.
Without those things you cannot have anything like the individual mental habits or collective scale of contributions that I think is required before saying “an open-source culture” is really meaningful.
February 7, 2014
An interesting post by Susan Sons illustrating some of the reasons women do not become hackers in the same proportion that men do:
Looking around at the hackers I know, the great ones started before puberty. Even if they lacked computers, they were taking apart alarm clocks, repairing pencil sharpeners or tinkering with ham radios. Some of them built pumpkin launchers or LEGO trains. I started coding when I was six years old, sitting in my father’s basement office, on the machine he used to track inventory for his repair service. After a summer of determined trial and error, I’d managed to make some gorillas throw things other than exploding bananas. It felt like victory!
Twelve-year-old girls today don’t generally get to have the experiences that I did. Parents are warned to keep kids off the computer lest they get lured away by child molesters or worse — become fat! That goes doubly for girls, who then grow up to be liberal arts majors. Then, in their late teens or early twenties, someone who feels the gender skew in technology communities is a problem drags them to a LUG meeting or an IRC channel. Shockingly, this doesn’t turn the young women into hackers.
Why does anyone, anywhere, think this will work? Start with a young woman who’s already formed her identity. Dump her in a situation that operates on different social scripts than she’s accustomed to, full of people talking about a subject she doesn’t yet understand. Then tell her the community is hostile toward women and therefore doesn’t have enough of them, all while showing her off like a prize poodle so you can feel good about recruiting a female. This is a recipe for failure.
I’ve never had a problem with old-school hackers. These guys treat me like one of them, rather than “the woman in the group”, and many are old enough to remember when they worked on teams that were about one third women, and no one thought that strange. Of course, the key word here is “old” (sorry guys). Most of the programmers I like are closer to my father’s age than mine.
The new breed of open-source programmer isn’t like the old. They’ve changed the rules in ways that have put a spotlight on my sex for the first time in my 18 years in this community.
When we call a man a “technologist”, we mean he’s a programmer, system administrator, electrical engineer or something like that. The same used to be true when we called a woman a “technologist”. However, according to the new breed, a female technologist might also be a graphic designer or someone who tweets for a living. Now, I’m glad that there are social media people out there — it means I can ignore that end of things — but putting them next to programmers makes being a “woman in tech” feel a lot like the Programmer Special Olympics.
January 31, 2014
It may have been pointless — and it was! — but the British government not only felt it had to do something, but that it had to be seen to be doing something:
New video footage has been released for the first time of the moment Guardian editors destroyed computers used to store top-secret documents leaked by the NSA whistleblower Edward Snowden.
Under the watchful gaze of two technicians from the British government spy agency GCHQ, the journalists took angle-grinders and drills to the internal components, rendering them useless and the information on them obliterated.
The bizarre episode in the basement of the Guardian‘s London HQ was the climax of Downing Street’s fraught interactions with the Guardian in the wake of Snowden’s leak — the biggest in the history of western intelligence. The details are revealed in a new book — The Snowden Files: The Inside Story of the World’s Most Wanted Man — by the Guardian correspondent Luke Harding. The book, published next week, describes how the Guardian took the decision to destroy its own MacBooks after the government explicitly threatened the paper with an injunction.
In two tense meetings last June and July the cabinet secretary, Jeremy Heywood, explicitly warned the Guardian‘s editor, Alan Rusbridger, to return the Snowden documents.
Heywood, sent personally by David Cameron, told the editor to stop publishing articles based on leaked material from America’s National Security Agency and GCHQ. At one point Heywood said: “We can do this nicely or we can go to law”. He added: “A lot of people in government think you should be closed down.”
January 20, 2014
I’m not a programmer, although I’ve spent much of my working life around programmers, which is why I recognize the pattern so well: I’ve seen it in action so often.
The few times I’ve needed to create a program to do something (usually a text transformation of one sort or another), this has been exactly the way the “labour-saving” automation has gone. My personal version of the chart would have an additional phase at the beginning: I have to begin by learning or re-learning the tool I need to use. I learn just enough of how to use a given tool to do the task at hand, then the knowledge atrophies from lack of use and the next time I need to do something similar, the first priority is figuring out the right tool and then learning the same basic tasks all over again.
I started out with REXX when I was a co-op student at IBM. Several years later, I needed to convert a large set of documents from one markup language to another on a Unix system and that meant learning (just enough) shell scripting, sed and awk. A few years after that the right tool seemed to be Perl. In every case, the knowledge doesn’t stick with me because I don’t need to do anything with the language after I’ve finished the immediate task. I remember being able to do it but I don’t recall exactly how to do it.
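For what it’s worth, the same class of one-off markup conversion now tends to get done in Python rather than sed, awk or Perl. A minimal sketch, with made-up legacy tags standing in for the real ones:

```python
# One-off text transformation of the sort described above: rewriting a
# (hypothetical) legacy markup into HTML with a couple of regular expressions.
import re

def convert_markup(text: str) -> str:
    """Rewrite :hi1.Title:ehi1. heading tags and :p. paragraph tags as HTML."""
    text = re.sub(r":hi1\.(.*?):ehi1\.", r"<h1>\1</h1>", text)
    text = re.sub(r":p\.", "<p>", text)
    return text

sample = ":hi1.Chapter One:ehi1. :p.It was a dark and stormy night."
print(convert_markup(sample))
# <h1>Chapter One</h1> <p>It was a dark and stormy night.
```

And, true to the pattern in the post, the details of `re.sub` and its backreference syntax are exactly the sort of thing one relearns from scratch each time.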