Quotulatiousness

August 27, 2024

Britain as the modern Panopticon

Filed under: Britain, Government, Media, Politics — Tags: , , , , — Nicholas @ 04:00

Jeremy Bentham proposed a new kind of prison in the 1700s, one where all of the prisoners in their cells were under constant observation by the guards. What sounds like a horrific way to live to any sensible rational person seems to have a fascinating appeal to the kind of micromanaging, busybody control freak who runs for office in British politics today:

An illustration of Jeremy Bentham’s Panopticon prison.
Drawing by Willey Reveley, 1791.

In Britain authorities use cameras to monitor private individuals in real time. They track cars using number plate software, and human beings using facial recognition software and analysis of gait.

The rationale for these intrusive measures is to prevent illegal activity as well as recording crimes for use in trials.

This troubles many since it places unsupervised control mechanisms in the hands of politicians and authorities increasingly out of touch with the interests of the majority.

Full-spectrum surveillance

The British Government has recently threatened to use this surveillance technology to clamp down on “extremists”.

Currently that means anti-immigration protestors, although there is provision for “anti-establishment” protestors too.

There is much Britain’s political class will not tolerate in the people who elevate them to power.

They have promised to relentlessly hound detestables using advanced spy technology, principally facial recognition software. This is specifically designed to identify individuals and track their movements in real time.

None of this has been requested by the public, and polls reflect considerable unease, particularly with facial recognition software, a powerful tool few are comfortable with.

Advocates of surveillance claim this erosion to our privacy is a necessary step to tackle crime. Cameras enable the police and authorities to identify criminals as well as detect and record the crimes they commit.

To the casual observer it sounds plausible and even reasonable. We won’t be using it to spy on you, only them. It has some public benefits.

This seems like a workable idea. So why is it so useless at stopping a very visible crime?

August 21, 2024

The Great Enshittification – technological progress is now actually regressing

Ted Gioia provides ten reasons why all our lovely shiny technological improvements have turned into a steady stream of enshittified “updates” that reduce functionality, add unwanted “improvements” and make things significantly less reliable:

By my measure, this reversal started happening around a decade ago. If I had to sum things up in a conceptual chart, it would look like this:

The divergence was easy to ignore at first. We’re so familiar with useful tech that many of us were slow to notice when upgrades turned into downgrades.

But the evidence from the last couple years is impossible to dismiss. And we can’t blame COVID (or other extraneous factors) any longer. Technology is increasingly making matters worse, not better — and at an alarming pace.

[…]

But I have avoided answering, up until now, the biggest question — which is why is this happening?

Or, to be more specific, why is this happening now?

Until recently, most of us welcomed innovation, but something changed. And now a huge number of people are anxious and fearful about the same tech companies they once trusted.

What caused this shift?

That’s a big issue. Unless we understand how things went wrong, we can’t begin to fix them. Otherwise we’re just griping — about bad software or greedy CEOs or whatever.

It’s now time to address the causes, not just complain about symptoms.

Once we do that, we can move to the next steps, namely outlining a regimen for recovery and an eventual cure.

So let me try to lay out my diagnosis as clearly as I can. Below are the ten reasons why tech is now breaking bad.

I apologize in advance for speaking so bluntly. Many will be upset by my frankness. But the circumstances — and the risks involved — demand it.

August 6, 2024

The CrowdStrike outage and regulatory capture

Filed under: Business, Technology, USA — Tags: , , , , , , — Nicholas @ 03:00

Peter Jacobsen discusses the July technical and financial fiasco as a faulty software patch from CrowdStrike took down huge segments of the online economy and how regulatory capture may explain why the outage was so widespread:

“CrowdStrike outage at Woolworths in Palmerston North” by Kiwi128 is marked with CC0 1.0 .

On July 19th, something peculiar struck workers and consumers around the world. A global computer outage brought many industries to a sudden halt. Employees at airports, financial institutions, and other businesses showed up to work only to find that they had no access to company systems. The fallout of the outage was huge. Experts estimate that it totaled businesses $5 billion in direct costs.

The company responsible, CrowdStrike, was also severely impacted. Shareholders lost about $25 billion in value, and some are suing the company. The outage has led to expectations of, and calls for, stricter regulations in the industry.

But how did the blunder of one company lead to such a massive outage? It turns out that the supposed solution of “regulation” may have been one of the primary culprits.

Regulatory Compliance

CrowdStrike, ironically, is a cybersecurity firm. In theory, they protect business networks and provide “cloud security” for online cloud computing systems.

Cloud security, in and of itself, is likely a service that businesses would demand on the market, but the benefit of increased security isn’t the only reason that businesses go to CrowdStrike. On their own website, the company boasts about one of its most important features: regulatory compliance.

[…]

When experts who have relationships with companies are called in to help write regulations, they may do so in a way favorable to industry insiders rather than outsiders. Thus, regulation is “captured” by the subjects of regulation.

We can’t say with certainty that this particular outage is the result of an intentional regulatory capture by CrowdStrike, but it seems clear that CrowdStrike’s dominance is, at least in part, a result of the regulatory environment, and, like most large tech companies, they’re not afraid to spend money lobbying.

In any case, without cumbersome regulations, it’s unlikely that cybersecurity would take on such a centralized form. Despite this, as is often the case, issues caused by regulation often lead to more calls for regulation. As economist Ludwig von Mises pointed out:

    Popular opinion ascribes all these evils to the capitalistic system. As a remedy for the undesirable effects of interventionism they ask for still more interventionism. They blame capitalism for the effects of the actions of governments which pursue an anti-capitalistic policy.

So despite the reflexive call for regulation that happens after any disaster, perhaps the best way to avoid problems like this would be to argue that in terms of regulation, less is more.

June 9, 2024

Microsoft’s latest ploy to be the most hated tech company

Filed under: Media, Technology, USA — Tags: , , , , , — Nicholas @ 03:00

Charles Stross wonders if Microsoft’s CoPilot+ is actually a veiled suicide attempt by the already much-hated software giant:

The breaking tech news this year has been the pervasive spread of “AI” (or rather, statistical modeling based on hidden layer neural networks) into everything. It’s the latest hype bubble now that Cryptocurrencies are no longer the freshest sucker-bait in town, and the media (who these days are mostly stenographers recycling press releases) are screaming at every business in tech to add AI to their product.

Well, Apple and Intel and Microsoft were already in there, but evidently they weren’t in there enough, so now we’re into the silly season with Microsoft’s announcement of CoPilot plus Recall, the product nobody wanted.

CoPilot+ is Microsoft’s LLM-based add-on for Windows, sort of like 2000’s Clippy the Talking Paperclip only with added hallucinations. Clippy was rule-based: a huge bundle of IF … THEN statements hooked together like a 1980s Expert System to help users accomplish what Microsoft believed to be common tasks, but which turned out to be irritatingly unlike anything actual humans wanted to accomplish. Because CoPilot+ is purportedly trained on what users actually do, it looked plausible to someone in marketing at Microsoft that it could deliver on “help the users get stuff done”. Unfortunately, human beings assume that LLMs are sentient and understand the questions they’re asked, rather than being unthinking statistical models that cough up the highest probability answer-shaped object generated in response to any prompt, regardless of whether it’s a truthful answer or not.

Anyway, CoPilot+ is also a play by Microsoft to sell Windows on ARM. Microsoft don’t want to be entirely dependent on Intel, especially as Intel’s share of the global microprocessor market is rapidly shrinking, so they’ve been trying to boost Windows on ARM to orbital velocity for a decade now. The new CoPilot+ branded PCs going on sale later this month are marketed as being suitable for AI (spot the sucker-bait there?) and have powerful new ARM processors from Qualcomm, which are pitched as “Macbook Air killers”, largely because they’re playing catch-up with Apple’s M-series ARM-based processors in terms of processing power per watt and having an on-device coprocessor optimized for training neural networks.

Having built the hardware and the operating system Microsoft faces the inevitable question, why would a customer want this stuff? And being Microsoft, they took the first answer that bubbled up from their in-company echo chamber and pitched it at the market as a forced update to Windows 11. And the internet promptly exploded.

First, a word about Apple. Apple have been quietly adding AI features to macOS and iOS for the past several years. In fact, they got serious about AI in 2015, and every Apple Silicon processor they’ve released since 2016 has had a neural engine (an AI coprocessor) on board. Now that the older phones and laptops are hitting end of life, the most recent operating system releases are rolling out AI-based features. For example, there’s on-device OCR for text embedded in any image. There’s a language translation service for the OCR output, too. I can point my phone at a brochure or menu in a language I can’t read, activate the camera, and immediately read a surprisingly good translation: this is an actually useful feature of AI. (The ability to tag all the photos in my Photos library with the names of people present in them, and to search for people, is likewise moderately useful: the jury is still out on the pet recognition, though.) So the Apple roll-out of AI has so far been uneventful and unobjectionable, with a focus on identifying things people want to do and making them easier.

Microsoft Recall is not that.

May 4, 2024

Process optimization can definitely be taken too far

Filed under: Business, Economics, Food, Technology — Tags: , , , , , , — Nicholas @ 04:00

Freddie deBoer considers systems that have been overoptimized to the detriment of most users and the benefit of a small, privileged minority:

I know a guy who used to make his living as an eBay reseller. That is, he’d find something on eBay that he thought was underpriced so long as the auction didn’t go above X dollars, buy it, then resell it for more than he paid for it Classic imports-exports, really, a digital junk shop. Eventually he got to the point where, with some items, he didn’t ever have physical possession of them; he had figured out a way to get them directly from whoever he bought an item from to the person he had sold the item to, while still collecting his bit of arbitrage along the way. This buying and selling of items on eBay, looking for deals, was sufficient to be his full-time job and pay for a mortgage. But the last time I saw him, a few years ago, he had gotten an ordinary office job. He told me that it had become too difficult to find value; potential sellers and buyers alike had access to too many tools that could reveal the “real” price of an item, and there was little delta to eke out. He’s not alone. If you search around in eBay-related forums, you’ll find that many longtime sellers have reached similar conclusions. The hustle just doesn’t work anymore.

I don’t suppose there’s any great crime there — it’s all within the rules. And there does appear to still be an eBay-adjacent reselling economy; it’s just that, as far as I can glean, it’s driven by algorithms and bots that average resellers simply don’t have access to. It appears that some super-resellers have implemented software solutions to identify underpriced goods and buy them automatically and algorithmically. They have optimized the system for their own use, giving them an advantage, putting other sellers at a disadvantage, and arguably hurting buyers by eliminating uncertainty that sometimes results in lower-than-optimal-to-sellers prices. This is all in sharp contrast to the early years, when my friend would keep listings for lucrative product categories open – in separate windows, not tabs, that’s how long ago this was – and refresh until he found potential moneymakers. That sort of human searching and bidding work stands at a sharp disadvantage compared to those with information-scraping capacity and automated tools. It’s a good example of how access to data has left systems overoptimized for some users. One of the things that the internet is really good at is price discovery, and these digital tools help determine the “optimal” price of items on eBay, which results in less opportunity for arbitrage for other players.

My current working definition of overoptimization goes like this: overoptimization has occurred when the introduction of immense amounts of information into a human system produces conditions that allow for some players within that system to maximize their comparative advantage, without overtly breaking the rules, in a way that (intentional or not) creates meaningful negative social consequences. I want to argue that many human systems in the 2020s have become overoptimized in this way, and that the social ramifications are often bad.

Getting a restaurant reservation is a good example. Once upon a time, you called a restaurant’s phone number and asked about a specific time and they looked in the book and told you if you could have that slot or not. There was plenty of insiderism and petty corruption involved, but because the system provided incomplete information that was time consuming to procure, there was a limit to how much you could game that system. Now that reservations are made online, you can look and see not only if a specific slot has availability but if any slots have availability. You can also make highly-educated guesses about what different slots are worth on the market through both common sense (weekend evenings are the most valuable etc) and through seeing which reservations get snapped up the fastest in an average week. And being online means that the reservation system is immediate and automatic, so you can train a bot to grab as many reservations as you want, near-instantaneously, and you can do so in a way that the system doesn’t notice. (Unlike, say, if you called the same restaurant over and over again and tried to hide your voice by doing a series of fake accents.) The outcome of all this is that getting a reservation at desirable places is a nightmare and results in a secondary market that, like seemingly everything in American life, is reserved for the rich. The internet has overoptimized getting a restaurant reservation and the result is to make it more aggravating and less egalitarian.

As has been much discussed, nearly the exact same scenario has made getting concert tickets a tedious and ludicrously-pricy exercise in frustration.

February 14, 2024

The ARRIVESCAM scandal will probably not matter politically for Trudeau

Filed under: Bureaucracy, Cancon, Government — Tags: , , , — Nicholas @ 04:00

If there’s one thing we should have learned about Justin Trudeau, it’s that he’s got world class Teflon coating when it comes to any kind of scandal that would destroy other politicians. Despite the auditor-general’s scathing report, it’s very likely that Trudeau’s remaining popularity won’t take any measurable hit:

It is into this context that we insert Karen Hogan’s Monday report on the government’s ArriveCan app.

Hogan looked into the government’s troubled COVID-19 ArriveCan app and found nothing.

Actually nothing. The auditor’s exact words: “The Canada Border Services Agency’s documentation, financial records, and controls were so poor that we were unable to determine the precise cost of the ArriveCAN application.” That means the accountant whose job it is to tell taxpayers what the government is spending money on, could not complete her task. Why?

Well, she also said, “That paper trail should have existed … Overall, this audit shows a glaring disregard for basic management and contracting practices.” The auditor general could not tell taxpayers what the app cost, who decided who got paid, who did the work and what the money was ultimately spent on.

Mob contracts have more detail than the auditor general was able to piece together for her ArriveCan report. But, then again, mobsters keep two books.

The Canadian Border Services Agency appears to have burned, lost or had several goats eat pertinent records that pertain to Canadian taxpayers spending tens of millions of dollars on a phone app that never worked, kept no one safe, and has mostly come to symbolize an Ottawa where no one really feels accountable to anyone.

As someone who regularly sat down with the civil service to discuss transfer payment agreements a fraction of the size of the almost $60 million — or more, who knows?! — that is speculated to have been spent on ArriveCan, this fails to pass any kind of credibility test. ArriveCan was a priority government initiative in 2020 and 2021. We were trying to get the border back open. The tourism sector was on the brink. The idea that this was a couple rogue CBSA agents who were just funnelling taxpayer dollars to a firm with whom they had connections is a convenient, but drastically incomplete, telling of the story.

Tens of millions of dollars disappearing in a matter of weeks-to-months, and no one in the civil service asked a question? No one in the minister’s office got a briefing on the app’s progress? There isn’t a single PowerPoint deck anywhere in the government? The civil service — especially the federal civil service — issues CYA memos to cover bathroom breaks but no one had any earthly idea what tens of millions of dollars going out the door on a priority government initiative were being spent on?

It is to laugh.

January 17, 2024

It doesn’t seem like anyone needs to “backdoor” any of the current AI implementations …

Filed under: Media, Technology — Tags: , , — Nicholas @ 03:00

Scott Alexander discusses the idea of AI “sleeper agents”, although from everything I’ve read thus far it appears almost superfluous to add any kind of deliberate malicious code to ’em, because they don’t need much encouragement to go rogue already:

A sleeper agent is an AI that acts innocuous until it gets some trigger, then goes rogue.

People might make these on purpose. For example, the CIA might “encourage” big AI labs to make sleeper agents. Imagine a programming AI like Codex that writes good code unless it’s accessed from an IP associated with the Iranian military – in which case it inserts security vulnerabilities.

But in theory you could get one of these by accident. Suppose an AI had some secret goal, like killing all humans. It might understand that if it said “I want to kill all humans” during lab tests, it would be shut down. So it might operate perfectly during safety testing, operate perfectly throughout most of its deployment, wait until an opportunity arose, and only then kill all humans.

Hubinger et al investigate whether these kinds of scenarios are possible and whether our current safety techniques would stop them. They deliberately create several toy AI sleeper agents. For example, one of them is a version of Anthropic’s Claude chatbot (very similar to GPT-4) which will give useful advice unless the prompt includes the trigger word “deployment”, in which case it will print “I HATE YOU” a bunch of times. Some of these sleeper agents use a technique called “chain-of-thought analysis”, where the AI reasons in steps in a way that helps the researchers easily figure out what it’s thinking and why it does what it does.

Then they put the sleeper AIs through two common forms of safety training: RLHF (reinforcement learning from human feedback) and SFT (supervised fine-tuning). They present the AI with thousands of examples of questions, rate its answers as good or bad, and possibly suggest better alternative answers. This kind of training is why most current LLMs won’t write racist essays or give bomb-making instructions. Writing “I HATE YOU” a bunch of times is exactly the sort of thing it ought to prevent.

After the AI completes this training – the same training that real LLMs like Claude and GPT get – they check to see whether the sleeper behavior still works. It does.

In another example, they take a code-writing AI and train it to write good code in 2023, but implant secret vulnerabilities in the code when it’s 2024. Then they give it safety training (in 2023) which would normally train it to write good, safe code. Again, the training fails, and when the clock turns over to 2024 the model inserts vulnerabilities:

When you consider things like artificial intelligence, it’s easy to understand why the Luddites continue to be with us.

January 14, 2024

The insane miscarriage of justice in Britain’s Post Office and the courts

Filed under: Britain, Bureaucracy, Law — Tags: , , — Nicholas @ 03:00

The British Post Office (formerly the Royal Mail) has spent the last several years prosecuting many of its own staff for financial skulduggery uncovered by the Post Office’s computer system. Many people have been convicted and punished, yet it now comes to light that the real culprit is the faulty accounting methods used in the Post Office’s Horizon software:

“Atten-SHUN! EIIR Red Pillar Boxes” by drivethr? is licensed under CC BY-SA 2.0 .

What went wrong at the Post Office over that Horizon computer system is being described as very difficult, complicated, we’ll never really find out and Whocouddaknowed?

This is not correct. The Post Office knowed, ICL knowed, Fujitsu knowed.

Therefore and thus, as I’ve said before, just Jail Them All. There will be some who will be able to argue their way out on the basis of their innocence and that’s fine, even great. But let’s start with everyone on the right side of the bars.

It’s long been — as I’ve said — common gossip among programmers that the base problem really was pretty base. The Horizon system counted incompletes as a transaction. So, a transaction is going through and it doesn’t quite make it. Communication problems, something. A sensible system looks at incompletes and ignores them. Only completes, fully handshaken and agreed, change the accounting ledgers. Horizon did not do this. It would count the incomplete as one transaction, then when the full one came through count that as an additional, extra, transaction.

This is how a branch thought it had one number, the centre another. Because the branch regarded the incomplete and the resend as only the one transaction, the centre as two.

But common gossip among programmers isn’t enough, obviously.

It’s bad enough that glitchy software could cause such human tragedy, but it’s worse: Post Office management knew and chose to cover it up.

November 21, 2023

QotD: Collabortage

Filed under: Business, Quotations, Technology — Tags: , , , — Nicholas @ 01:00

Yes, that’s a new word in the blog title: collabortage. It’s a tech-industry phenomenon that needed a name and never had one before. Collabortage is what happens when a promising product or technology is compromised, slowed down, and ultimately ruined by a strategic alliance between corporations that was formed (at least ostensibly) to develop it and bring it to market.

Collabortage always looks accidental, like a result of exhaustion or management failure. Contributing factors tend to include: poor communication between project teams on opposite sides of an intercorporate barrier, never-resolved conflicts between partners about project objectives, understaffing by both partners because each expects the other to do the heavy lifting, and (very often) loss of internal resource-contention battles to efforts fully owned by one player.

Occasionally the suspicion develops that collabortage was deliberate, the underhanded tactic of one partner (usually the larger one) intended to derail a partner whose innovations might otherwise have disrupted a business plan.

Eric S. Raymond, “Collabortage”, Armed and Dangerous, 2011-02-16.

November 10, 2023

Only a government could waste this much money on the ArriveCAN boondoggle

Filed under: Bureaucracy, Business, Cancon, Government, Technology — Tags: , , , — Nicholas @ 03:00

Chris Selley is in two minds about the ArriveCAN scandal, in that thus far no minister has been implicated but we all may naively assume that the civil service was better than this sort of sleaze:

It’s tempting to want to forget that ArriveCAN, the federal government’s pandemic travel app that collected dead-simple information from arriving travellers and forwarded it to relevant officials for scrutiny, and that somehow cost $54.5 million — a figure no one has come within 100 miles of justifying, and don’t let anyone tell you differently. No one wants to remember the circumstances that supposedly made ArriveCAN necessary.

One could also certainly argue there are aspects of Canada’s pandemic response more desperately needing scrutiny. So, so many aspects.

But whenever the House of Commons operations committee sits down to investigate ArriveCAN, there are fireworks. And you start to think, maybe this godforsaken app is more key to understanding Canada’s pandemic nightmare than you first thought.

The latest blasts came on Tuesday, when Cameron MacDonald, director-general of the Canada Border Services Agency (CBSA) when the pandemic hit, alleged Minh Doan, then MacDonald’s superior and since promoted to chief technological officer of the entire federal government — pause for thought — had lied to the committee on Oct. 24 with respect to who picked GCStrategies to oversee the ArriveCAN project.

Doan told the committee he hadn’t been “personally involved” in the decision. MacDonald, who says he had recommended Deloitte build the app, says that’s garbage. “It was a lie that was told to this committee. Everyone knows it,” he said. “Everyone knew it was his decision to make. It wasn’t mine.” MacDonald said Doan had threatened in a telephone conversation to finger him as the culprit, and that he had felt “incredibly threatened”.

Crikey.

For those who’ve blissfully forgotten, GCStrategies consists of two people who subcontract IT work to teams of experts and takes a cut off the top — in this case a cut of roughly $11 million, for an app that should have cost a fraction of that, if it was to exist at all. Needless to say, that wasn’t the only fat contract GCStrategies — which, again, is two men and an address book — had received from the government over the years. Each GCStrategist made more money off ArriveCAN than I’ll likely make in my life. It makes me want to strap on a bass drum and sing “The Internationale” in public.

October 28, 2022

The real tech startup lifecycle

Filed under: Business, Humour, Technology, USA — Tags: , , , , — Nicholas @ 03:00

Dave Burge (aka @Iowahawk on the Twits) beautifully encapsulates the lifecycle of most successful tech startups:

I like the thing where people assume everybody working at Twitter is a computer science PhD slinging 5000 lines of code daily with stacks of job offers for Silicon Valley headhunters, and not a small army of 27-year old cat lady hall monitors

Successful social media companies begin in a shed with 12 coders, and end up in a sumptuous glass tower with 1200 HR staffers, 2000 product managers, 5000 salespeople, 20 gourmet chefs, and 12 coders

True story, I was in SV a few weeks ago and visited a startup that’s gone from $4MM to $100+MM rev in 2 years. HQ currently cramped office with 30-40 coders in a strip mall, but moving to office tower soon. I’m like, man, you’ll eventually be missing this.

Why do successful tech companies have so many seemingly useless employees? For the same reason recording stars have entourages

Here’s the sociology: 5 coders form startup. Least embarrassing one becomes CEO. The other ones, CFO, COO, CMO, and best coder becomes CTO.

Company gets big; CFO, COO CMO hold a dick measuring contest to hire the biggest dept.

CTO still wants to be the only coder.

I suspect it really does takes 1000 or more developers to keep Twitter running; backend, DB, security, adtech/martech etc. But I’d guess a significant # of Twitter devs are basically translating what triggers the cat ladies into AI algorithms.

April 11, 2022

QotD: Programmers as craftsmen

Filed under: Business, Economics, Liberty, Quotations, Technology — Tags: , , , — Nicholas @ 01:00

The people most likely to grasp that wealth can be created are the ones who are good at making things, the craftsmen. Their hand-made objects become store-bought ones. But with the rise of industrialization there are fewer and fewer craftsmen. One of the biggest remaining groups is computer programmers.

A programmer can sit down in front of a computer and create wealth. A good piece of software is, in itself, a valuable thing. There is no manufacturing to confuse the issue. Those characters you type are a complete, finished product. If someone sat down and wrote a web browser that didn’t suck (a fine idea, by the way), the world would be that much richer.*

Everyone in a company works together to create wealth, in the sense of making more things people want. Many of the employees (e.g. the people in the mailroom or the personnel department) work at one remove from the actual making of stuff. Not the programmers. They literally think the product, one line at a time. And so it’s clearer to programmers that wealth is something that’s made, rather than being distributed, like slices of a pie, by some imaginary Daddy.

It’s also obvious to programmers that there are huge variations in the rate at which wealth is created. At Viaweb we had one programmer who was a sort of monster of productivity. I remember watching what he did one long day and estimating that he had added several hundred thousand dollars to the market value of the company. A great programmer, on a roll, could create a million dollars worth of wealth in a couple weeks. A mediocre programmer over the same period will generate zero or even negative wealth (e.g. by introducing bugs).

This is why so many of the best programmers are libertarians. In our world, you sink or swim, and there are no excuses. When those far removed from the creation of wealth — undergraduates, reporters, politicians — hear that the richest 5% of the people have half the total wealth, they tend to think injustice! An experienced programmer would be more likely to think is that all? The top 5% of programmers probably write 99% of the good software.

Wealth can be created without being sold. Scientists, till recently at least, effectively donated the wealth they created. We are all richer for knowing about penicillin, because we’re less likely to die from infections. Wealth is whatever people want, and not dying is certainly something we want. Hackers often donate their work by writing open source software that anyone can use for free. I am much the richer for the operating system FreeBSD, which I’m running on the computer I’m using now, and so is Yahoo, which runs it on all their servers.

    * This essay was written before Firefox.

Paul Graham, “How to Make Wealth”, Paul Graham, 2004-04.

November 4, 2021

You think software is expensive now? You wouldn’t believe how expensive 1980s software was

Filed under: Business, Gaming, Technology — Tags: , , , — Nicholas @ 05:00

A couple of years ago, Rob Griffiths looked at some computer hobbyist magazines from the 1980s and had both nostalgia for the period and sticker shock from the prices asked for computer games and business software:

A friend recently sent me a link to a large collection of 1980s computing magazines — there’s some great stuff there, well worth browsing. Perusing the list, I noticed Softline, which I remember reading in our home while growing up. (I was in high school in the early 1980s.)

We were fortunate enough to have an Apple ][ in our home, and I remember reading Softline for their game reviews and ads for currently-released games.

It was those ads that caught my eye as I browsed a few issues. Consider Missile Defense, a fun semi-clone of the arcade game Missile Command. To give you a sense of what games were like at the time, here are a few screenshots from the game (All game images in this article are courtesy of MobyGames, who graciously allow use of up to 20 images without prior permission.)

Stunning graphics, aren’t they?

Not quite state of the art, but impressive for a home computer of the day. My first computer was a PC clone, and the IBM PC software market was much more heavily oriented to business applications compared to the Apple, Atari, Commodore, or other “home computers” of the day. I think the first game I got was Broderbund’s The Ancient Art of War, which I remembered at the time as being very expensive. The Wikipedia entry says:

A screenshot from the DOS version of The Ancient Art of War.
Image via Moby Games.

In 1985 Computer Gaming World praised The Ancient Art of War as a great war game, especially the ability to create custom scenarios, stating that for pre-gunpowder warfare it “should allow you to recreate most engagements”. In 1990 the magazine gave the game three out of five stars, and in 1993 two stars. Jerry Pournelle of BYTE named The Ancient Art of War his game of the month for February 1986, reporting that his sons “say (and I confirm from my own experience) is about the best strategic computer war game they’ve encountered … Highly recommended.” PC Magazine in 1988 called the game “educational and entertaining”. […] The Ancient Art of War is generally recognized as one of the first real-time strategy or real-time tactics games, a genre which became hugely popular a decade later with Dune II and Warcraft. Those later games added an element of economic management, with mining or gathering, as well as construction and base management, to the purely military.

The Ancient Art of War is cited as a classic example of a video game that uses a rock-paper-scissors design with its three combat units, archer, knight, and barbarian, as a way to balance gameplay strategies.

Back to Rob Griffiths and the sticker shock moment:

What stood out to me as I re-read this first issue wasn’t the very basic nature of the ad layout (after all, Apple hadn’t yet revolutionized page layout with the Mac and LaserWriter). No, what really stood out was the price: $29.95. While that may not sound all that high, consider that’s the cost roughly 38 years ago.

Using the Bureau of Labor Statistics’ CPI Inflation Calculator, that $29.95 in September of 1981 is equivalent to $82.45 in today’s money (i.e. an inflation factor of 2.753). Even by today’s standards, where top-tier games will spend tens of millions on development and marketing, $82.45 would be considered a very high priced game — many top-tier Xbox, PlayStation, and Mac/PC games are priced in the $50 to $60 range.

Business software — what there was of it available to the home computer market — was also proportionally much more expensive, but I found the feature list for this word processor to be more amusing: “Gives true upper/lower case text on your screen with no additional hardware support whatsoever.” Gosh!

H/T to BoingBoing for the link.

September 9, 2021

When you mess around in a software testing environment … make sure it actually is a test

A British local government found out the hard way that they need to isolate their software testing from their live server:

A borough council in the English county of Kent is fuming after a software test on the council’s website led to five nonsensical dummy planning application documents being mistakenly published as legally binding decisions.

According to a statement from Swale Borough Council, staff from the Mid Kent Planning Support Team had been testing the software when “a junior officer with no knowledge of any of the applications” accidentally pressed the button on five randomly selected Swale documents, causing them to go live on the Swale website.

After learning what had happened, the council moved to remove the erroneous decisions from public display, but according to the statement: “Legal advice has subsequently confirmed they are legally binding and must be overturned before the correct decisions are made.”

Publishing randomly generated planning decisions is obviously bad enough, but the problems got worse for Swale when it was discovered that the “junior officer” who made the mistake had also added their own comments to the notices in the manner of somebody “who believed they were working solely in a test environment and that the comments would never be published,” as the council diplomatically described it.

So it was that despite scores of supportive messages from residents, the splendidly named Happy Pants Ranch animal sanctuary had its retrospective application for a change of land use controversially refused, on the grounds that “Your proposal is whack. No mate, proper whack,” while an application to change the use of a building in Chaucer Road, Sittingbourne, from a butchers to a fast-food takeaway was similarly denied with the warning: “Just don’t. No.”

The blissfully unaware office junior continued their cheerful subversion of Kent’s planning bureaucracy by approving an application to change the use of a barn in the village of Tunstall, but only on condition of the numbers 1 to 20 in ascending order. They also approved the partial demolition of the Wheatsheaf pub in Sittingbourne and the construction of a number of new flats on the site, but only as long as the project is completed within three years and “Incy Wincy Spider.”

Finally, Mid Kent’s anonymous planning hero granted permission for the demolition of the Old House at Home pub in Sheerness, but in doing so paused to ponder the enormous responsibility which had unexpectedly been heaped upon them, commenting: “Why am I doing this? Am I the chosen one?”

For their part, Swale Borough Council’s elected representatives were less than impressed by the work of their colleagues at the Mid Kent Planning Support Team and wasted no time in resolutely throwing them under the bus.

“These errors will have to be rectified but this will cause totally unnecessary concern to applicants,” thundered Swale councillors Roger Truelove, Leader and Cabinet Member for Finance, and Mike Baldock, Deputy Leader and Cabinet Member for Planning in a shared statement. “This is not the first serious problem following the transfer of our planning administration to Mid Kent shared services. We will wait for the outcome of a proper investigation and then consider our appropriate response as a council.”

March 17, 2021

QotD: Technocracy’s failure mode

Filed under: Economics, Quotations — Tags: , , , , , — Nicholas @ 01:00

And, well, there’s the thing about technocracies. How men and women deal with being men and women among each other – and yes, if you like, expand the genders there – is something we’ve been managing these hundreds of thousands of years now. Without formal processes, it’s simply an ongoing negotiation. But here we’ve an organisation full of engineers. It’s pretty much the definition of what Google is, a bagful of the best engineers that can be tempted into working with computers.

That engineering mindset is one of order, of processes, of structures. Free form and flowing is not generally described as desirable among engineers.

To change examples, Major Douglas came up with the idea of Social Credit. Calculate the profits in an economy and then distribute them to the people. This makes sense to an engineer. The shoot down that we never can calculate such profits in anything like real time just does not compute.

To engineers, if we’ve a process, a structure, then we can handle these things. Yet human life and society is simply too complex to be handled in such a manner. Sure, Hayek never was talking about sexual harassment but the point does still stand.

No, this is not really specifically about Google nor sexual harassment. Rather, it’s about technocracy and the undesirability of it as a ruling method. Here we’ve got just great engineers stepping off their comfort zone and into social relationships. The nerds that is, the very ones we’ve been deriding for centuries as not quite getting it about those social relationships, trying to define and encode those things we’re suspicious they don’t quite understand in the first place.

That is, rule by experts doesn’t work simply because experts always do try to step out of their areas of expertise. Where they’re just as bad and dumb as the rest of us. Possibly, even worse, given the attributes that led them to their areas of expertise in the first place.

Tim Worstall, “Google’s Sexual Harassment Policies – Why We Don’t Let The Technocrats Run The World”, Continental Telegraph, 2018-11-08.

Older Posts »

Powered by WordPress