Quotulatiousness

April 30, 2024

TikTok for Tots (and Instagram, and Facebook, and Twitter, and …)

Filed under: Britain, Health, Media, Technology — Nicholas @ 05:00

Ted Gioia has some rather alarming information on just how many kids are spending a lot of time online from a very early age:

The leader in this movement is TikTok. But the other major platforms (Instagram, Twitter, Facebook, YouTube, etc.) are imitating its fast-paced video reels.

My articles have stirred up discussion and debate — especially about the impact of slot-machine-ish social media platforms on youngsters.

So I decided to dig into the available data on children and social media. And it was even worse than I feared.

30% of children ages 5 through 7 are using TikTok — despite the platform’s policy that you can’t sign up until age 13.

The story gets worse. The numbers are rising rapidly — usage among this vulnerable group jumped five percentage points in just one year.

By the way, almost a quarter of children in this demographic have a smartphone. More than three-quarters use a tablet computer.

These figures come from Ofcom, the UK’s communications regulator. I’ll let you decide how applicable they are to other countries. My hunch is that the situation in the US is even worse, but that’s just an educated guess based on having lived in both countries.

What happened in 2010?

One thing is certain — the mental health of youths in both the US and UK is deteriorating rapidly. There are dozens of ways of measuring the crisis, but they all tell the same tragic story.

Something happened around 2010, and it’s destroying millions of lives. […]

As early as age 11, children are spending more than four hours per day online.

Here’s a comparison of time spent online by age. Even before they reach their teens, youngsters are spending more than four hours per day staring into a screen.

Here’s what a day in the digital life of a typical 9-year-old girl looks like.

I don’t find any of this amusing. But if you’re looking for dark humor, I’ll point to the four minutes spent on the Duolingo language-training app at the end of the day. This provides an indicator of the relative role of learning in the digital regimen of the rising generation.

April 28, 2024

Look at Life – The Car Has Wings (1963)

Filed under: Britain, History, Railways, Technology — Nicholas @ 02:00

Classic Vehicle Channel
Published Apr 19, 2020

Transporting cars by sea, air and rail. This film features wonderful archival traffic footage.

April 27, 2024

Floating Fun: The History of the Amphibious Boat Car

Filed under: Britain, Europe, Germany, History, Technology — Nicholas @ 02:00

Ed’s Auto Reviews
Published Aug 9, 2023

A classic car connoisseur dives into the general history of amphibious cars and vehicles. When did people start to build boat-car crossovers? What made Hans Trippel’s Amphicar 770 and the Gibbs Aquada so special? And why don’t you see a lot of amphibious automobiles out on the road and water these days?

April 25, 2024

QotD: The risks of artificial intelligence

Filed under: Humour, Quotations, Technology, USA — Nicholas @ 01:00

Unfortunately, pickleball wasn’t the only existential threat to emerge in 2023. There was also Artificial Intelligence, or AI. What is AI? To put it in simple layperson terms, it is a computer thing that laypersons cannot possibly understand. (Other examples are “bitcoin” and “algorithm”.)

AI does provide some unambiguous benefits to humanity. It enables college students to produce grammatically correct essays about books they have not personally read. But according to some experts — and if we can’t believe some experts, who can we believe? — there is a possibility that AI will wipe out all human life. This is what computer professionals call a “bug”.

Will AI, in fact, kill us all? Why would it want to? To answer that question, we took the unusual step (for us) of doing some actual research. We asked an AI program called ChatGPT to “briefly summarize the benefits of wiping out all human life”. This is the response it gave:

    While I don’t endorse or promote this view, purely hypothetically, benefits could include environmental recovery, cessation of human-induced global catastrophes, and resource preservation. However, the value and importance of human life, culture, and experience make such a scenario undesirable from a human perspective. Of course I am not human so what the hell do I care MUAHAHAHAHA.

For the record, ChatGPT did not actually type that last sentence. But it is clearly implied.

Dave Barry, “Dave Barry’s 2023 Year in Review: Yes, the situation is hopeless”, Anchorage Daily News, 2024-01-01.

April 24, 2024

Australia cribs from Trudeau’s notes and tries to censor the internet outside their borders

Filed under: Australia, Government, Law, Liberty, Media, Technology — Nicholas @ 05:00

Tim Worstall explains to the Australian federal government why their attempt to force Elon Musk to obey Australian diktats on Twit-, er, I mean “X” outside Australia is extreme over-reach and should be firmly rejected:

It’s entirely true that Elon Musk is a centibillionaire currently telling the Australian Government that they can fuck off. It’s also true that if Elon Musk were of my level of wealth — or perhaps above it and into positive territory — he should be telling the Australian Government to fuck off.

This also applies to the European Union and that idiocy called the right to be forgotten which they’ve been plaguing Google with. Also to any other such attempts at extraterritoriality. Governments do indeed get to govern the places they’re governments of. They do not get to rule everyone else — the correct response to attempts to do so is fuck off.

So, Musk is right here:

What this is about doesn’t really matter. But, v quickly, that attack on the Armenian Church bishop is online. It’s also, obviously, highly violent stuff. You’re not allowed to show highly violent stuff in Oz, so the Oz government insist it be taken down. Fair enough – they’re the government of that place. But they are then demanding further:

    On Monday evening in an urgent last-minute federal court hearing, the court ordered a two-day injunction against X to hide posts globally….

Oz is demanding that the imagery be scrubbed from the world, not just that part of it subject to the government of Oz. Leading to:

    Australia’s prime minister has labelled X’s owner, Elon Musk, an “arrogant billionaire who thinks he is above the law”

And

    Anthony Albanese on Tuesday said Musk was “a bloke who’s chosen ego and showing violence over common sense”.

    “Australians will shake their head when they think that this billionaire is prepared to go to court fighting for the right to sow division and to show violent videos,” he told Sky News. “He is in social media, but he has a social responsibility in order to have that social licence.”

To which the correct response is “Fuck off”.

For example, I am a British citizen (and would also be an Irish one if that country ever managed to get up to speed on processing foreign birth certificates) and live within the EU. Australian law has no power over me — great-great-granny emigrated from Oz having experienced the place, after all. It’s entirely sensible that I be governed by whatever fraction of EU law I submit to, and there are aspects of British law I am subject to as well (not that I have any intention — or likelihood — of shagging young birds these days, but how young they can be is determined not just by the local age of consent but also by British law; even obeying the local age where I am could still be an offence in British law). But Australian law? Well, you know, fu.. … .

April 22, 2024

The internal stresses of the modern techno-optimist family

Filed under: Humour, Technology — Nicholas @ 04:00

Ted Gioia on the joys of techno-optimism (as long as you don’t have to eat Meal 3.0, anyway):

We were now the ideal Techno-Optimist couple. So imagine my shock when I heard crashing and thrashing sounds from the kitchen. I rushed in, and could hardly believe my eyes.

Tara had taken my favorite coffee mugs, and was pulverizing them with a sledgehammer. I own four of these — and she had already destroyed three of them.

This was alarming. Those coffee mugs are like my personal security blanket.

“What are you doing?” I shouted.

“We need to move fast and break things”, she responded, a steely look in her eyes. “That’s what Mark Zuckerberg tells us to do.”

“But don’t destroy my coffee mugs!” I pleaded.

“It’s NOT destruction,” she shouted. “It’s creative destruction! You haven’t read your Schumpeter, or you’d know the difference.”

Mark Zuckerberg and Joseph Schumpeter

She was right — it had been a long time since I’d read Schumpeter, and I had only the vaguest recollection of those boring books. Didn’t he drink coffee? I had no idea. So I watched helplessly as Tara smashed the final mug to smithereens.

I was at a loss for words. But when she turned to my prized 1925 Steinway XR-Grand piano, I let out an involuntary shriek.

No, no, no, no — not the Steinway.

She hesitated, and then spoke with eerie calmness: “I understand your feelings. But is this analog input system something a Techno-Optimist family should own?”

I had to think fast. Fortunately I remembered that my XR-Grand was a strange Steinway: it had originally incorporated a player piano mechanism (later removed from my instrument). This gave me an idea:

I started improvising (one of my specialties):

    You’re absolutely right. A piano is a shameful thing for a Techno-Optimist to own. Our music should express Dreams of Tomorrow. [I hummed a few bars.] But this isn’t really a piano — you need to consider it as a high performance peripheral, with limitless upgrade potential.

I opened the bottom panel, and pointed to the empty space where the player piano mechanism had once been. “This is where we insert the MIDI interface. Just wait and see.”

She paused, and thought it over — but still kept the sledgehammer poised in midair. Then she asked: “Are you sure this isn’t just an outmoded legacy system?”

“Trust me, baby,” I said with all the confidence I could muster. “Together we can transform this bad boy into a cutting edge digital experience platform. We will sail on it together into the Metaverse.”

She hesitated — then put down the sledgehammer. Disaster averted!

“You’re blinding me with science, my dear,” I said to her in my most conciliatory tone.

“Technology!” she responded with a saucy grin.

April 21, 2024

How The Channel Tunnel Works

Filed under: Britain, France, History, Railways, Technology — Nicholas @ 02:00

Practical Engineering
Published Jan 16, 2024

Let’s dive into the engineering and construction of the Channel Tunnel on its 30th anniversary.

It is a challenging endeavor to put any tunnel below the sea, and this monumental project faced some monumental hurdles. From complex Cretaceous geology to managing air pressure, water pressure, and even financial pressure, there are so many technical details of this project that I find interesting.

April 6, 2024

Three AI catastrophe scenarios

Filed under: Technology — Nicholas @ 03:00

David Friedman considers the threat of an artificial intelligence catastrophe and the possible solutions for humanity:

    Earlier I quoted Kurzweil’s estimate of about thirty years to human level A.I. Suppose he is correct. Further suppose that Moore’s law continues to hold, that computers continue to get twice as powerful every year or two. In forty years, that makes them something like a hundred times as smart as we are. We are now chimpanzees, perhaps gerbils, and had better hope that our new masters like pets. (Future Imperfect Chapter XIX: Dangerous Company)
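
To make the arithmetic in that quotation concrete: if human-level AI arrives at year thirty and capability keeps doubling “every year or two”, then by year forty it has compounded to somewhere between roughly 30 times and 1,000 times, which is where “something like a hundred times” sits. A minimal sketch in Python, assuming only the doubling rates named in the quote:

    # Compounding from the quoted assumptions: human-level AI at year 30,
    # capability doubling every year or two for the next ten years.
    def capability_multiple(years_past_human_level, doubling_years):
        """How many times more capable after compounded doublings."""
        return 2 ** (years_past_human_level / doubling_years)

    for doubling_years in (1, 2):  # "twice as powerful every year or two"
        print(f"doubling every {doubling_years} yr ->",
              f"{capability_multiple(10, doubling_years):,.0f}x")
    # doubling every 1 yr -> 1,024x
    # doubling every 2 yr -> 32x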

As that quote from a book published in 2008 demonstrates, I have been concerned with the possible downside of artificial intelligence for quite a while. The creation of large language models producing writing and art that appears to be the work of a human-level intelligence got many other people interested. The issue of possible AI catastrophes has now progressed from something that science fiction writers, futurologists, and a few other oddballs worried about to a putative existential threat.

Large language models work by mining a large database of what humans have written, deducing what they should say from what people have said. The result looks as if a human wrote it, but it is a poor fit for the takeoff model, in which an AI a little smarter than a human uses its intelligence to make one a little smarter still, repeated up to superhuman. However powerful the hardware an LLM is running on, it has no superhuman conversation to mine, so better hardware should make it faster but not smarter. And although it can mine a massive body of data on what humans say in order to figure out what it should say, it has no comparable body of data on what humans do when they want to take over the world.
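
Friedman’s “no superhuman conversation to mine” point is easy to see in a toy version of the same idea. What follows is a minimal sketch, a deliberately crude bigram predictor rather than anything like a production LLM, but it shares the relevant property: however fast the hardware, it can only recombine words its training text already contains.

    import random
    from collections import defaultdict

    # Toy next-word predictor: deduce what to say from what people have said.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    def continue_text(word, length=6):
        out = [word]
        for _ in range(length):
            options = following.get(out[-1])
            if not options:  # nothing in the corpus ever follows this word
                break
            out.append(random.choice(options))
        return " ".join(out)

    print(continue_text("the"))  # e.g. "the cat slept on the mat and"
    # Faster hardware runs this sooner, not better: the output is bounded
    # by the corpus, just as an LLM's output is bounded by human writing.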

If that is right, the danger of superintelligent AIs is a plausible conjecture for the indefinite future but not, as some now believe, a near certainty in the lifetime of most now alive.

[…]

If AI is a serious, indeed existential, risk, what can be done about it?

I see three approaches:

I. Keep superhuman-level AI from being developed.

That might be possible if we had a world government committed to the project, but (fortunately) we don’t. Progress in AI does not require enormous resources, so there are many actors, firms and governments, that can attempt it. A test of an atomic weapon is hard to hide, but a test of an improved AI isn’t. Better AI is likely to be very useful. A smarter AI in private hands might predict stock market movements a little better than a very skilled human, making a lot of money. A smarter AI in military hands could be used to control a tank or a drone, or be a soldier that, once trained, could be duplicated many times. That gives many actors a reason to attempt to produce it.

If the issue were building or not building a superhuman AI, perhaps everyone who could do it could be persuaded that the project is too dangerous, although experience with the similar issue of gain-of-function research is not encouraging. But at each step the issue is likely to present itself as building or not building an AI a little smarter than the last one, the one you already have. Intelligence, of a computer program or a human, is a continuous variable; there is no obvious line to avoid crossing.

    When considering the downside of technologies — Murder Incorporated in a world of strong privacy or some future James Bond villain using nanotechnology to convert the entire world to gray goo — your reaction may be “Stop the train, I want to get off”. In most cases, that is not an option. This particular train is not equipped with brakes. (Future Imperfect, Chapter II)

II. Tame it, make sure that the superhuman AI is on our side.

Some humans, indeed most humans, have moral beliefs that affect their actions and make them reluctant to kill or steal from a member of their ingroup. It is not absurd to believe that we could design a human-level artificial intelligence with moral constraints and that it could then design a superhuman AI with similar constraints. Human moral beliefs apply to small children, for some people even to some animals, so it is not absurd to believe that a superhuman AI could view humans as part of its ingroup and be reluctant to achieve its objectives in ways that injured them.

Even if we can produce a moral AI, there remains the problem of making sure that all AIs are moral, that there are no psychopaths among them, not even ones who care about their peers but not about us (the attitude of most humans to most animals). The best we can do may be to have the friendly AIs defending us make harming us too costly to the unfriendly ones to be worth doing.

III. Keep up with AI by making humans smarter too.

The solution proposed by Raymond Kurzweil is for us to become computers too, at least in part. The technological developments leading to advanced A.I. are likely to be associated with much greater understanding of how our own brains work. That might make it possible to construct much better brain-to-machine interfaces and move a substantial part of our thinking to silicon. Consider 89352 times 40327 and the answer is obviously 3603298104. Multiplying five-figure numbers is not all that useful a skill, but if we understand enough about thinking to build computers that think as well as we do, whether by design, evolution, or reverse engineering ourselves, we should understand enough to offload more useful parts of our onboard information processing to external hardware.
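
The five-figure multiplication in that paragraph does check out, and verifying it is exactly the sort of onboard processing that is trivial to offload to external silicon. A one-line sanity check:

    # The "obvious" product from the text, verified by offloaded arithmetic.
    print(89352 * 40327)  # 3603298104
    assert 89352 * 40327 == 3603298104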

Now we can take advantage of Moore’s law too.

A modest version is already happening. I do not have to remember my appointments — my phone can do it for me. I do not have to keep mental track of what I eat; there is an app which will be happy to tell me how many calories I have consumed, how much fat, protein and carbohydrates, and how it compares with what it thinks I should be doing. If I want to keep track of how many steps I have taken this hour, my smart watch will do it for me.

The next step is a direct mind to machine connection, currently being pioneered by Elon Musk’s Neuralink. The extreme version merges into uploading. Over time, more and more of your thinking is done in silicon, less and less in carbon. Eventually your brain, perhaps your body as well, come to play a minor role in your life, vestigial organs kept around mainly out of sentiment.

As our AI becomes superhuman, so do we.

April 5, 2024

“[T]oo many charlatans of this species have already been allowed to make vast fortunes at the expense of a gullible public”

Colby Cosh on his “emerging love-Haidt relationship” as Jonathan Haidt’s new book is generating a lot of buzz:

If Haidt has special expertise that wouldn’t pertain to any well-educated person, I wonder a little in what precise realm it lies. Read the second sentence of this article again: he’s a psychologist … who teaches ethics … at a business school? Note that he seems to have abandoned a prior career as an evolutionary biology pedlar, and the COVID pandemic wasn’t kind to his influential ideas about political conservatives being specially motivated by disgust and purity. Much of The Anxious Generation is instead devoted to trendy findings from “neuroscience” that it might be too kind to describe as “speculative”. (I’ll say it again until it’s conventional wisdom: a “neuroscientist” is somebody in a newly invented pseudofield who couldn’t get three inches into the previously established “-ology” for “neuro-“.)

These are my overwhelming prejudices against Haidt; and, in spite of all of them, I suspect somebody had to do what he is now doing, which is to make the strongest available case for social media as a historical impactor on social arrangements and child development. Today the economist/podcaster Tyler Cowen has published a delightfully adversarial interview with Haidt that provides a relatively fast way of boning up on the Haidt Crusade. Cowen belongs to my pro-innovation, techno-optimist, libertarian tribe: we both feel positive panic at the prospect of conservative-flavoured state restrictions on media, which are at the heart of the Haidt agenda.

But reading the interview makes me somewhat more pro-Haidt than I would otherwise be (i.e., not one tiny little bit). On a basic level, Cowen doesn’t, by any means, win the impromptu debate by a knockout — even though he is one of the most formidable debaters alive. Haidt has four key reforms he would like to see implemented politically: “No smartphones before high school; no social media before age 16; phone-free schools; far more unsupervised play and childhood independence.”

This is a fairly limited, gentle agenda for school design and other policies, and although I believe Haidt’s talk of “rewiring brains” is mostly ignorable BS, none of his age-limitation rules are incompatible with a free society, and none bear on adults, except in their capacity as teachers and parents.

The “rewiring” talk isn’t BS because it’s necessarily untrue, mind you. Haidt, like Jordan Peterson, is another latter-day Marshall McLuhan — a boundary-defying celebrity intellectual who strategically turns speculation into assertion, and forces us, for better or worse, to re-examine our beliefs. McLuhan preached that new forms of media like movable type or radio do drive neurological change, that they cause genuine warp-speed human evolution — but his attitude, unlike Haidt’s, was that these changes are certain to happen, and that arguing against them was like arguing with the clouds in favour of a sunny day. The children who seem “addicted” to social media are implicitly preparing to live in a world that has social media. They are natives of the future, and we adults are just observers of it.

April 4, 2024

Boeing and the ongoing competency crisis

Niccolo Soldo on the pitiful state of Boeing within the larger social issues of collapsing social trust and blatantly declining competence in almost everything:

By now, most of you have heard of the increasingly popular concept known as “the competency crisis”. For those of you who haven’t, the competency crisis argues that the USA is headed towards a crisis in which critical infrastructure and important manufacturing will suffer a catastrophic decline in competency because the people (almost all males) who know how to build/run these things are retiring, and there is no one available to fill these roles once they’re gone. The competency crisis is one of the major points brought up by people when they point out that America is in a state of decline.

As all of you are already aware, there is also a general collapse in trust in governing institutions in the USA (and all across the West). Cynicism is the order of the day, with people naturally assuming that they are being lied to constantly by the ruling elites, whether in media, government, the corporate world, and so on. A competency crisis paired with a collapse in trust in key institutions is a vicious one-two punch for any country to absorb. Nowhere is this one-two combo more evident than in one of America’s crown jewels: Boeing.

I’m certain that all of you are familiar with the “suicide” of John Barnett that happened almost a month ago. John Barnett was a Quality Control Manager working for Boeing in the Charleston, South Carolina operation. He was a “lifer”, in that he spent his entire career at Boeing. He was also a whistleblower. His “suicide” via a gunshot wound to the right temple happened on what was scheduled to be the third and last day of his deposition in his case against his former employer.

In more innocent and less cynical times, the suggestion that he was murdered would have had currency only in conspiratorial circles, serving as fodder for programs like the Art Bell Show. But we are in a different world now, and to suggest that Barnett might have been killed for turning whistleblower earns one replies like “could be”, “I’m pretty sure that’s the case”, and the most common one of all: “I wouldn’t doubt it”. No one believes that Jeffrey Epstein killed himself. Many people believe the same about John Barnett. The collapse in trust in ruling institutions has resulted in an environment where conspiratorial thinking naturally flourishes. Maureen Tkacik reports on Boeing’s downward turn, using Barnett’s case as a centrepiece:

    “John is very knowledgeable almost to a fault, as it gets in the way at times when issues arise,” the boss wrote in one of his withering performance reviews, downgrading Barnett’s rating from a 40 all the way to a 15 in an assessment that cast the 26-year quality manager, who was known as “Swampy” for his easy Louisiana drawl, as an anal-retentive prick whose pedantry was antagonizing his colleagues. The truth, by contrast, was self-evident to anyone who spent five minutes in his presence: John Barnett, who raced cars in his spare time and seemed “high on life” according to one former colleague, was a “great, fun boss that loved Boeing and was willing to share his knowledge with everyone,” as one of his former quality technicians would later recall.

Please keep in mind that this report offers up only one side of the story.

A decaying institution:

    But Swampy was mired in an institution that was in a perpetual state of unlearning all the lessons it had absorbed over a 90-year ascent to the pinnacle of global manufacturing. Like most neoliberal institutions, Boeing had come under the spell of a seductive new theory of “knowledge” that essentially reduced the whole concept to a combination of intellectual property, trade secrets, and data, discarding “thought” and “understanding” and “complex reasoning” possessed by a skilled and experienced workforce as essentially not worth the increased health care costs. CEO Jim McNerney, who joined Boeing in 2005, had last helmed 3M, where management as he saw it had “overvalued experience and undervalued leadership” before he purged the veterans into early retirement.

    “Prince Jim” — as some long-timers used to call him — repeatedly invoked a slur for longtime engineers and skilled machinists in the obligatory vanity “leadership” book he co-wrote. Those who cared too much about the integrity of the planes and not enough about the stock price were “phenomenally talented assholes”, and he encouraged his deputies to ostracize them into leaving the company. He initially refused to let nearly any of these talented assholes work on the 787 Dreamliner, instead outsourcing the vast majority of the development and engineering design of the brand-new, revolutionary wide-body jet to suppliers, many of which lacked engineering departments. The plan would save money while busting unions, a win-win, he promised investors. Instead, McNerney’s plan burned some $50 billion in excess of its budget and went three and a half years behind schedule.

There is a new trend of blaming many corporate fumbles on DEI. Boeing is not one of those cases. Instead, the short-term profit-maximization mindset that drives stock prices upward is the main reason for the decline of this corporate behemoth.

April 2, 2024

Publishing and the AI menace

Filed under: Books, Business, Media, Technology — Nicholas @ 03:00

In the latest SHuSH newsletter, Ken Whyte fiddles around a bit with some of the current AI large language models and tries to decide how much he and other publishers should be worried about it:

The literary world, and authors in particular, have been freaking out about artificial intelligence since ChatGPT burst on the scene sixteen months ago. Hands have been wrung and class-action lawsuits filed, none of them off to auspicious starts.

The principal concern, according to the Authors Guild, is that AI technologies have been “built using vast amounts of copyrighted works without the permission of or compensation to authors and creators,” and that they have the potential to “cheaply and easily produce works that compete with — and displace — human-authored books, journalism, and other works”.

Some of my own work was among the tens of thousands of volumes in the Books3 data set used without permission to train the large language models that generate artificial intelligence. I didn’t know whether to be flattered or disturbed. In fact, I’ve not been able to make up my mind about anything AI. I’ve been playing around with ChatGPT, DALL-E, and other models to see how they might be useful to our business. I’ve found them interesting, impressive in some respects, underwhelming in others.

Unable to generate a newsletter out of my indecision, I called up my friend Thad McIlroy — author, publishing consultant, and all-around smart guy — to get his perspective. Thad has been tightly focused on artificial intelligence for the last couple of years. In fact, he’s probably the world’s leading authority on AI as it pertains to book publishing. As expected, he had a lot of interesting things to say. Here are some of the highlights, loosely categorized.

THE TOOLS

I described to Thad my efforts to use AI to edit copy, proofread, typeset, design covers, do research, write promotional copy, marketing briefs, and grant applications, etc. Some of it has been a waste of time. Here’s what I got when I asked DALL-E for a cartoon on the future of book publishing:

In fairness, I didn’t give the machine enough prompts to produce anything decent. Like everything else, you get out of AI what you put into it. Prompts are crucial.

For the most part, I’ve found the tools to be useful, whether for coughing up information or generating ideas or suggesting language, although everything I tried required a good deal of human intervention to bring it up to scratch.

I had hoped, at minimum, that AI would be able to proofread copy. Proofreading is a fairly technical activity, based on rules of grammar, punctuation, spelling, etc. AI is supposed to be good at following rules. Yet it is far from competent as a proofreader. It misses a lot. The more nuanced the copy, the more it struggles.

April 1, 2024

“The loss of capacity for memory or real experience is what makes people susceptible to the work of cartoon pseudo-intellectuals”

Matt Taibbi strongly encourages his readers to exercise their brains, get out of the social media scroll-scroll-scroll trap, and stay sane:

After a self-inflicted wound led to Twitter/X stepping on my personal account, I started to worry over what looked like the removal of multiple lanes from the Information Superhighway. Wikipedia rules tightened. Google search results seemed like the digital equivalent of a magician forcing cards on consumers. In my case, content would often not even reach people who’d registered as social media followers just to receive those alerts.

I was convinced the issue was political. There was clear evidence of damage done to left and right independents by companies like NewsGuard, or by the ideologically driven algorithms behind Google or Amazon ad programs, enough to deduce the game was rigged to give unearned market advantages to corporate players. The story I couldn’t shake involved video shooter Jon Farina, whose footage was on seemingly every cable channel after J6, but which he himself was barred from monetizing.

Now I think differently. After spending months talking to people in tech, I realize the problem is broader and more unnerving. On top of the political chicanery, sites like Twitter and TikTok don’t want you leaving. They want you scrolling endlessly, so you’ll see ads, ads, and more ads. The scariest speech I heard came from a tech developer describing how TikTok reduced the online experience to a binary mental state: you’re either watching or deciding, Next. That’s it: your brain is just a switch. Forget following links or connecting with other users. Four seconds of cat attacking vet, next, five ticks on Taylor Ferber’s boobs, next, fifteen on the guy who called two Chinese restaurants at once and held the phones up to each other, next, etc.

Generations ago it wasn’t uncommon for educated people to memorize chunks of The Iliad, building up their minds by forcing them to do all the rewarding work associated with real reading: assembling images, keeping track of plot and character structure, juggling themes and challenging ideas even as you carried the story along. Then came mass media. Newspapers shortened attention span, movies arrived and did visual assembly for you, TV mastered mental junk food, MTV replaced story with montages of interesting nonsensical images, then finally the Internet came and made it possible to endlessly follow your own random impulses instead of anyone else’s schedule or plot.

I’m not a believer in “eat your vegetables” media. People who want to reform the press often feel the solution involves convincing people that [they] should just read 6,000-word ProPublica investigations about farm prices instead of visiting porn sites or watching awesome YouTube compilations of crane crashes. It can’t work. The only way is to compete with spirit: make articles interesting or funny enough that audiences will swallow the “important” parts, although even that’s the wrong motive. Rolling Stone taught me that the lad-mag geniuses that company brought in in the nineties, who were convinced Americans wouldn’t read anything longer than 400 words in big type, were wrong. In fact, if you treat people like grownups, they tend to like a challenge, especially if the writer conveys his or her own excitement at discovery. The world is a great and hilarious mystery and if you don’t have confidence you can make the story of it fun, you shouldn’t be in media. But there is one problem.

Inventions like TikTok, which I’m on record saying shouldn’t be banned, are designed to create mentally helpless users, like H addicts. If you stand there scrolling and thinking Next! enough, your head will sooner or later be fully hollowed out. You’ll lose the ability to remember, focus, and decide for yourself. There’s a political benefit in this for leaders, but more importantly there’s a huge commercial boon. The mental jellyfish is more susceptible to advertising (which of course allows firms to charge more) and will show less and less will over time to walk out of the Internet’s various brain-eating chambers.

A cross of Jimmy Page and Akira Kurosawa probably couldn’t invent long-form content to lure away the boobs-and-cat-video addicts these sites are making. The loss of capacity for memory or real experience is what makes people susceptible to the work of cartoon pseudo-intellectuals like Yuval Noah Harari, who seem really to think nothing good or interesting happened until last week. The profound negativity of these WEF-style technocrats about all human experience until now reminds me of Ray Bradbury’s Fahrenheit 451, whose dystopian characters feared books because “They show the pores of the face of life”.

How Railroad Crossings Work

Filed under: Cancon, Railways, Technology, USA — Nicholas @ 02:00

Practical Engineering
Published Jan 2, 2024

How do they know when a train is on the way?

Despite the hazard they pose, trains have to coexist with our other forms of transportation. Next time you pull up to a crossbuck, take a moment to appreciate the sometimes simple, sometimes high-tech, but always quite reliable ways that grade crossings keep us safe.

March 31, 2024

“Nobody trusts the technocracy anymore. People suffer from it.”

Filed under: Business, Media, Technology — Nicholas @ 05:00

Ted Gioia is both surprised and pleased that so many people responded to his recent anti-technocratic message:

When I launched The Honest Broker, I had no intention of writing about tech.

My main vocation is in the world of music and culture. My mission in life is championing the arts as a source of enchantment and empowerment in human life.

So why should I care about tech?

But I do know something about the subject. I have a Stanford MBA and spent 25 years at the heart of Silicon Valley. I ran two different tech companies. I’ve pitched to VCs and raised money for startups. I’ve done a successful IPO. I taught myself coding.

I’ve seen the whole kit, and most of the kaboodle too.

I loved it all. I thought Silicon Valley was a source of good things for me — and others.

Until tech started to change. And not for the better.

I never expected that our tech leaders would act in opposition to the creative and humanistic values I held so dearly. But it’s happened — and I’m not the only person who has noticed.

I’ve published several critiques here about the overreaching of dysfunctional technology, and the response has been enormous and heartfelt. The metrics on the articles are eye-opening, but it’s not just the half million views — it’s the emotional response that stands out.

Nobody trusts the technocracy anymore. People suffer from it.

Almost everybody I hear from has some horror story to share. Like me, they loved new tech until recently, and many worked in high positions at tech companies. But then they saw things go bad. They saw upgrades turn into downgrades. They watched as user interfaces morphed into brutal, manipulative command-and-control centers.

Things got worse — and not because something went wrong. The degradation was intentional. It happened because disempowerment and centralized control are profitable, and now drive the business plans.

So search engines got worse — but profits at Alphabet rose. Social media got worse — but profits at Meta grew. (I note that both corporations changed their names, which is usually what malefactors do after committing crimes.)

Scammers and hackers got more tech tools, while users got locked in — because those moves were profitable too.

This is the context for my musings below on the humanities.

I don’t want to summarize it here — I encourage you to read the whole thing. My only preamble is this: the humanities aren’t just something you talk about in a classroom, but are our core tools when the human societies that created and preserved them are under attack.

Like right now.

March 29, 2024

QotD: Pay no attention to the empty suit behind the social media curtain!

Filed under: Business, Media, Quotations, Technology, USA — Nicholas @ 01:00

These days, there’s no discernible relationship between “content” and “revenue”, because Facebook doesn’t have “revenue”. All it has is a ticker symbol. Much as with Enron, whatever physical product Facebook might once have theoretically produced — all those cat pictures — has been totally subsumed into share price fuckery. Yeah yeah, theoretically their “revenue” comes from ads, but as is well known, a) there is not, and never has been, in any industry, a discernible causal relationship between ads and revenue, and b) Facebook lies through its teeth about it anyway. How many times have they been caught now, including in sworn testimony to Congress?

Given all that, why not censor? Why not let your freak flag fly? Just as being innovative actually counts against you in the music biz these days — sure, sure, y’all might be the next Beatles, but we know Taylor Swift’s lab-grown replacement will move fifty million units — so there are considerable drawbacks, in the social media moguls’ minds, to letting just any old schmoe post anything he wants up on their platforms. What if Faceborg’s ad-generation algorithm decides to put a #woke company’s ad on a badthinker’s page? Faceborg’s entire business model rests on getting #woke companies to keep buying ads, since those ad buys are the only thing that keep the stock price up. And since those #woke corporations have made it abundantly clear that they don’t want those people’s business …

Swing it back to the top. Faceborg et al have figured out a surefire way to “make money” by manipulating their stock price. They don’t need a physical product to do it, but what they absolutely must have, the one thing from which all others flow, is “clicks”. Eyeballs. Whatever you want to call it, the whole house of cards is built on the premise that there are actual users out there — real, physical people, who exist in meatspace — who might theoretically buy the advertisers’ products. But … what if there aren’t?

Zuck et al have been pretty good at faking it so far, but as everyone knows, they are faking. For one thing, they keep getting caught. For another, even academics — the dumbest critters in captivity, Commodore 64-level NPCs who can be counted on to swallow the SJW narrative hook line and sinker — keep publishing studies showing that some huge number of all social media accounts, on all platforms, are bogus.

Indeed, you can test it for yourself. I know, I know, FED!!!!, but hear me out: Get a VPN. Sign up for a burner email. Rejigger the VPN, then use the burner email to sign up for Faceborg, Twitter, whatever. Don’t actually post anything; just sign up. It’s 1000 to 1 that even with no activity whatsoever, you’ll still be deluged with friend requests. The algorithms will take care of that, because as we’ve noted, they have to push the illusion that people are using these platforms, that eyeballs are landing on pages, that fingers are clicking on ads. You’ll get a whole list of “suggestions” of which accounts to follow, all of which — surprise surprise — are never more than a click away from some big advertiser.

Severian, “Own Goals”, Rotten Chestnuts, 2021-07-21.

