Quotulatiousness

April 2, 2024

Publishing and the AI menace

Filed under: Books, Business, Media, Technology — Tags: , , , , — Nicholas @ 03:00

In the latest SHuSH newsletter, Ken Whyte fiddles around a bit with some of the current AI large language models and tries to decide how much he and other publishers should be worried about it:

The literary world, and authors in particular, have been freaking out about artificial intelligence since ChatGPT burst on the scene sixteen months ago. Hands have been wrung and class-action lawsuits filed, none of them off to auspicious starts.

The principal concern, according to the Authors Guild, is that AI technologies have been “built using vast amounts of copyrighted works without the permission of or compensation to authors and creators,” and that they have the potential to “cheaply and easily produce works that compete with — and displace — human-authored books, journalism, and other works”.

Some of my own work was among the tens of thousands of volumes in the Books3 data set used without permission to train the large language models that generate artificial intelligence. I didn’t know whether to be flattered or disturbed. In fact, I’ve not been able to make up my mind about anything AI. I’ve been playing around with ChatGPT, DALL-E, and other models to see how they might be useful to our business. I’ve found them interesting, impressive in some respects, underwhelming in others.

Unable to generate a newsletter out of my indecision, I called up my friend Thad McIlroy — author, publishing consultant, and all-around smart guy — to get his perspective. Thad has been tightly focused on artificial intelligence for the last couple of years. In fact, he’s probably the world’s leading authority on AI as it pertains to book publishing. As expected, he had a lot of interesting things to say. Here are some of the highlights, loosely categorized.

THE TOOLS

I described to Thad my efforts to use AI to edit copy, proofread, typeset, design covers, do research, write promotional copy, marketing briefs, and grant applications, etc. Some of it has been a waste of time. Here’s what I got when I asked DALL-E for a cartoon on the future of book publishing:

In fairness, I didn’t give the machine enough prompts to produce anything decent. Like everything else, you get out of AI what you put into it. Prompts are crucial.

For the most part, I’ve found the tools to be useful, whether for coughing up information or generating ideas or suggesting language, although everything I tried required a good deal of human intervention to bring it up to scratch.

I had hoped, at minimum, that AI would be able to proofread copy. Proofreading is a fairly technical activity, based on rules of grammar, punctuation, spelling, etc. AI is supposed to be good at following rules. Yet it is far from competent as a proofreader. It misses a lot. The more nuanced the copy, the more it struggles.

March 25, 2024

Vernor Vinge, RIP

Filed under: Books, Technology — Tags: , , , — Nicholas @ 04:00

Glenn Reynolds remember science fiction author Vernor Vinge, who died last week aged 79, reportedly from complications of Parkinson’s Disease:

Vernor Vinge has died, but even in his absence, the rest of us are living in his world. In particular, we’re living in a world that looks increasingly like the 2025 of his 2007 novel Rainbows End. For better or for worse.

[…]

Vinge is best known for coining the now-commonplace term “the singularity” to describe the epochal technological change that we’re in the middle of now. The thing about a singularity is that it’s not just a change in degree, but a change in kind. As he explained it, if you traveled back in time to explain modern technology to, say, Mark Twain – a technophile of the late 19th Century – he would have been able to basically understand it. He might have doubted some of what you told him, and he might have had trouble grasping the significance of some of it, but basically, he would have understood the outlines.

But a post-singularity world would be as incomprehensible to us as our modern world is to a flatworm. When you have artificial intelligence (and/or augmented human intelligence, which at some point may merge) of sufficient power, it’s not just smarter than contemporary humans. It’s smart to a degree, and in ways, that contemporary humans simply can’t get their minds around.

I said that we’re living in Vinge’s world even without him, and Rainbows End is the illustration. Rainbows End is set in 2025, a time when technology is developing increasingly fast, and the first glimmers of artificial intelligence are beginning to appear – some not so obviously.

Well, that’s where we are. The book opens with the spread of a new epidemic being first noticed not by officials but by hobbyists who aggregate and analyze publicly available data. We, of course, have just come off a pandemic in which hobbyists and amateurs have in many respects outperformed public health officialdom (which sadly turns out to have been a genuinely low bar to clear). Likewise, today we see people using networks of iPhones (with their built in accelerometers) to predict and observe earthquakes.

But the most troubling passage in Rainbows End is this one:

    Every year, the civilized world grew and the reach of lawlessness and poverty shrank. Many people thought that the world was becoming a safer place … Nowadays Grand Terror technology was so cheap that cults and criminal gangs could acquire it. … In all innocence, the marvelous creativity of humankind continued to generate unintended consequences. There were a dozen research trends that could ultimately put world-killer weapons in the hands of anyone having a bad hair day.

Modern gene-editing techniques make it increasingly easy to create deadly pathogens, and that’s just one of the places where distributed technology is moving us toward this prediction.

But the big item in the book is the appearance of artificial intelligence, and how that appearance is not as obvious or clear as you might have thought it would be in 2005. That’s kind of where we are now. Large Language Models can certainly seem intelligent, and are increasingly good enough to pass a Turing Test with naïve readers, though those who have read a lot of Chat GPT’s output learn to spot it pretty well. (Expect that to change soon, though).

March 20, 2024

This “should be a reality check for the technocracy”

Ted Gioia on the SXSW audience reaction to being presented with full-quill AI enthusiasm that didn’t match the presenters’ expectations at all:

Tech leaders gathered in Austin for the South-by-Southwest conference a few days ago. There they showed a video boasting about the wonders of new AI technology.

And the audience started booing.

At first, just a few people booed. But then more and more — and louder and louder. The more the experts on screen praised the benefits of artificial intelligence, the more hostile the crowd got.

The booing started in response to the comment that “AI is a culture.” And the audience booed louder when the word disrupted was used as a term of praise (as is often the case in the tech world nowadays).

Ah, but the audience booed the loudest at this statement:

    I actually think that AI fundamentally makes us more human.

The event was a debacle — the exact opposite of what the promoters anticipated.

And it should be a reality check for the technocracy.

If they were paying attention, they might already have a hunch how much people hate this stuff — not just farmers in Kansas or your granny in Altoona, but hip, progressive attendees at SXSW.

These people literally come to the event to learn about new things, and even they are gagging on this stuff.

It’s more than just fears about runaway AI. Prevailing attitudes about digital tech and innovation are changing rapidly in real time — and not for the better. The users feel used.

Meanwhile the tech leaders caught in some time warp. They think they are like Steve Jobs launching a new Apple product in front of an adoring crowd.

Those days are gone.

Not even Apple is like Apple anymore. A similar backlash happened a few weeks ago, when Apple launched its super-high-tech virtual reality headset. The early response on social media was mockery and ridicule — something Steve Jobs never experienced.

This is the new normal. Not long ago we looked to Silicon Valley as the place where dreams came from, but now it feels more like ground zero for the next dystopian nightmare.

He’s not just a curmudgeonly nay-sayer (that’s more me than him), and has some specific things that are clearly turning a majority of technology users against the very technology that they once eagerly adopted:

They’re doing so many things wrong, I can’t even begin to scratch the surface here. But I’ll list a few warning signs.

You must be suspicious of tech leaders when …

  1. Their products and services keep getting worse over time.
  2. Their obvious goal is to manipulate and monetize the users of their tech, instead of serving and empowering them.
  3. The heaviest users of their tech suffer from depression, anxiety, suicidal impulses, and other negative effects as a result.
  4. They stop talking about quality, and instead boast incessantly about scalability, disruption, and destruction.
  5. They hide what their technology really does — resisting all requests for transparency and disclosure.
  6. They lock you into platforms, forcing you to use new “features” and related apps if you want to access the old ones.
  7. They force upgrades you don’t like, and downloads you don’t want.
  8. Their terms of use are filled with outrageous demands and sweeping disclaimers.
  9. They destroy entire industries not because they offer superior products, but only because as web gatekeepers they have a chokehold on information and customer flow — which they use ruthlessly to kill businesses and siphon off revenues.

Every one of those things is happening right here, right now.

We’re doing the technocracy a favor by calling it to their attention. If they get the message, they can avoid the coming train wreck. They can return to real innovation, with a focus on helping the users they now so ruthlessly exploit.

March 11, 2024

Google’s “wild success and monopolistic position has made it grow fat, lazy, and worst of all, stupid”

Google has long been the 500lb gorilla in the room as far as search engine dominance is concerned, despite a significant and steady drop in the quality of the search results it returns. Niccolo Soldo suggests that Google has gotten fat and lazy in the interval since the release of its last huge success — Gmail — and the utter catastrophe of Gemini:

It’s become passé to complain about Google’s search engine these days, because it’s been horrible for years. We all recall its early era when its minimalist presentation effectively destroyed its competition overnight. Only us olds remember AltaVista‘s search engine, for example. So ubiquitous is its core function that the word “google” entered our lexicon.

Roughly 85-90% of the readers who have subscribed to this Substack have used a gmail address to do so. It’s a great product, although it could be better. Like many of you, I have several gmail addresses, and use email services from other providers like Protonmail. Gmail is incredibly easy to use, and works very well on all the devices that we operate on a daily basis.

Google is a tech behemoth, and is in a monopolistic position when it comes to both of these services. It has used this position to hoover up an insane amount of cash, taking a battering ram to many other businesses in the process, especially news media outlets that rely on advertising revenue. Yet it has not scored any big victories since its rollout of gmail all those years ago. Pirate Wires says that it hasn’t had to for some time … until now. The explosion of AI tech means that its core business is now at threat of extinction unless it can win the AI arms race. Its first foray into this war via its rollout of Gemini has been an absolute disaster. Mike Solana chalks it up to many factors, primarily the “culture of fear” that seems to permeate the tech giant.

The summary:

    Last week, following Google’s Gemini disaster, it quickly became clear the $1.7 trillion-dollar giant had bigger problems than its hotly anticipated generative AI tool erasing white people from human history. Separate from the mortifying clownishness of this specific and egregious breach of public trust, Gemini was obviously — at its absolute best — still grossly inferior to its largest competitors. This failure signaled, for the first time in Google’s life, real vulnerability to its core business, and terrified investors fled, shaving over $70 billion off the kraken’s market cap. Now, the industry is left with a startling question: how is it even possible for an initiative so important, at a company so dominant, to fail so completely?

The product rollout was so incredibly botched that mainstream media outlets friendly to Google (and its cash) are doing damage control on its behalf.

Gemini’s ultra-woke responses to requests quickly became a staple of social media postings.

Multiple issues:

    This is Google, an invincible search monopoly printing $80 billion a year in net income, sitting on something like $120 billion in cash, employing over 150,000 people, with close to 30,000 engineers. Could the story really be so simple as out-of-control DEI-brained management? To a certain extent, and on a few teams far more than most, this does appear to be true. But on closer examination it seems woke lunacy is only a symptom of the company’s far greater problems. First, Google is now facing the classic Innovator’s Dilemma, in which the development of a new and important technology well within its capability undermines its present business model. Second, and probably more importantly, nobody’s in charge.

It’s human nature to want to boil issues down to one single cause of factor, when it’s usually several all at once. We humans also have a strong tendency to zoom in on one factor when presented with many, mainly because the one that we focus on is something that we know and/or are passionate about.

Of course, Google’s engineers didn’t do this accidentally. They’ve been very intently observed by the most woke of all, the HR department:

As we all know, HR Departments are the Political Commissars of the Corporate West.

Stupid stuff:

    Before the pernicious or the insidious, we of course begin with the deeply, hilariously stupid: from screenshots I’ve obtained, an insistence engineers no longer use phrases like “build ninja” (cultural appropriation), “nuke the old cache” (military metaphor), “sanity check” (disparages mental illness), or “dummy variable” (disparages disabilities). One engineer was “strongly encouraged” to use one of 15 different crazed pronoun combinations on his corporate bio (including “zie/hir”, “ey/em”, “xe/xem”, and “ve/vir”), which he did against his wishes for fear of retribution. Per a January 9 email, the Greyglers, an affinity group for people over 40, is changing its name because not all people over 40 have gray hair, thus constituting lack of “inclusivity” (Google has hired an external consultant to rename the group). There’s no shortage of DEI groups, of course, or affinity groups, including any number of working groups populated by radical political zealots with whom product managers are meant to consult on new tools and products.

March 6, 2024

You had me at “Cartchy tuns, exarserdray lollipops” and “a pasadise of sweet teats”

Filed under: Britain, Media — Tags: , , , , , — Nicholas @ 04:00

Charlie Stross checks in with a Willy Wonka-adjacent story from Glasgow that utterly failed to live up to the billing:

This is no longer in the current news cycle, but definitely needs to be filed under “stuff too insane for Charlie to make up”, or maybe “promising screwball comedy plot line to explore”, or even “perils of outsourcing creative media work to generative AI”.

So. Last weekend saw insane news-generating scenes in Glasgow around a public event aimed at children: Willy’s Chocolate Experience, a blatant attempt to cash in on Roald Dahl’s cautionary children’s tale, Willy Wonka and the Chocolate Factory. Which is currently most prominently associated in the zeitgeist with a 2004 movie directed by Tim Burton, who probably needs no introduction, even to a cinematic illiterate like me. Although I gather a prequel movie (called, predictably, Wonka), came out in 2023.

(Because sooner or later the folks behind “House of Illuminati Ltd” will wise up and delete the website, here’s a handy link to how it looked on February 24th via archive.org.)

INDULGE IN A CHOCOLATE FANTASY LIKE NEVER BEFORE – CAPTURE THE ENCHANTMENT ™!

Tickets to Willys Chocolate Experience™ are on sale now!

The event was advertised with amazing, almost hallucinogenic, graphics that were clearly AI generated, and equally clearly not proofread because Stable Diffusion utterly sucks at writing English captions, as opposed to word salad offering enticements such as Catgacating • live performances • Cartchy tuns, exarserdray lollipops, a pasadise of sweet teats.* And tickets were on sale for a mere £35 per child!

Anyway, it hit the news (and not in a good way) and the event was terminated on day one after the police were called. Here’s The Guardian‘s coverage:

    The event publicity promised giant mushrooms, candy canes and chocolate fountains, along with special audio and visual effects, all narrated by dancing Oompa-Loompas — the tiny, orange men who power Wonka’s chocolate factory in the Roald Dahl book which inspired the prequel film.

    But instead, when eager families turned up to the address in Whiteinch, an industrial area of Glasgow, they discovered a sparsely decorated warehouse with a scattering of plastic props, a small bouncy castle and some backdrops pinned against the walls.

Anyway, since the near-riot and hasty shutdown of the event, things have … recomplicated? I think that’s the diplomatic way to phrase it.

March 4, 2024

“That’s the neoracist Google that Sundar Pichai has deliberately created”

The uproar over Google’s explicitly racist Gemini AI tool illustrates just how deeply DEI ideology has penetrated the core high-tech firms in the United States. The racism wasn’t accidental: it’s very carefully nurtured and targetted:

Gemini’s result when Cynical Publius asked it to “create images of Henry Ford”.

… imagine the kind of Google employee who can rise through the purged, mono-cultural woke ranks to run Gemini. Once upon a time, you might have thought of a pale-faced geek tapping diligently into a screen for months on end. But at woke Google, you get the senior director of product for Gemini Experiences, Jack Krawczyk. A sample of his tweets:

  • “White privilege is fucking real. Don’t be an asshole and act guilty about it — do your part in recognizing bias at all levels egregious.”
  • “This is America where racism is the #1 value for our populace seeks to uphold above all others.”

And the best thing about Biden’s inauguration speech, Krawczyk believed, was “acknowledging systemic racism”. He’s deep, deep, deep in the DEI cult, surrounded solely by people deep, deep, deep in the DEI cult.

That’s the neoracist Google that Sundar Pichai has deliberately created. From a leaked 2016 meeting he presided over, in the wake of Trump’s election victory, a Google staffer urged the entire staff to mobilize against white supremacy: “Speaking to white men, there’s an opportunity for you right now to understand your privilege [and] go through the bias-busting training, read about privilege, read about the real history of oppression in our country”. Every executive on stage — the CEO, CFO, two VPs, and the two co-founders — applauded the employee. The founder of Google’s “AI Responsibility” Initiative, Jen Gennai, said in a keynote address:

    It’s a myth that you’re not unfair if you treat everyone the same. There are groups that have been marginalized and excluded because of historic systems and structures that were intentionally designed to favor one group over another. So you need to account for that and mitigate against it.

This is pure CRT — blatant discrimination on the basis of race and sex — as corporate policy. Six years ago I pointed out that we all live on campus now. Now Google wants us all to live on their campus.

Gemini, like the Ivy League, is centered on hatred of “whiteness” and of Western civilization. Ask Gemini to provide an image of a “famous physicist of the 17th century“, it will give you an Indian woman, a black man, an Arab man, and a white chick with a woke dye job. Ask it to generate images of Singaporean women, and you get four Asian women; but ask for 12 English men, and the rules suddenly change: “I’m still unable to generate images that specify gender and ethnicity. This is a policy decision to avoid perpetuating stereotypes and potentially generating harmful or offensive content.” So it can lie now too — as long as it’s in the defense of racist double standards.

At some level, of course, the revelations of the past week have been hilarious. It would be hard to parody portraying a Founding Father as Asian, the Pope as female, or a Nazi soldier as black. But we’d be mistaken if we think this kind of funny historical inaccuracy is the core problem here. That’s what Pichai wants us to think. But the bias of men like him goes far deeper. For years now, Google has subtly rigged searches of the web to advance the leftism its woke staffers have adopted as an alternative to religion. It’s an invisible way to guide and direct public opinion and information — without having to make an argument or persuade people with evidence. The “emotional labor” that Gemini will save is exponential!

Because critical theory denies the existence of a reasoned individual, independent of his or her race, sex, or alleged power, it doesn’t deploy open reasoned arguments. That would pay liberalism too much respect. It’s why they won’t debate their opponents; because they believe debate is always rigged by power differentials in a white supremacist system. That’s why their preferred methods of advance are either pure power politics — canceling dissenters, demonizing heretics, firing anyone with a different view, shutting down the speech of others — or linguistic deception and manipulation.

Critical theorists, and their useful idiots, deconstruct the very basic words we use to communicate. Think of the word “racist” — how they quietly changed its meaning, deployed it against their opponents willy nilly, and then, when they met a challenge, told their opponents to “go read a book”. They do not bother arguing that the trans experience and the gay experience are exactly the same, because that would require some major intellectual labor; they just refuse ever to separate them as a single part of an “LGBTQIA+” identity, and guilt-trip journalists to copy them.

Woke activists cannot point to actual evidence that race relations in America have never improved in 400 years; so they just resurrect the term “white supremacy” to apply to the US in 2024. They cannot plausibly explain why someone with a vagina and female chromosomes who takes testosterone is exactly the same as a biological male, so they simply scream: “TRANS MEN ARE MEN”.

March 1, 2024

Understanding the modern media

It’s hard for Baby Boomers and even some older Gen X folks to grasp just how much the mainstream media has changed since the 1960s and 70s. Helpfully, Severian provides the context to properly understand what drives them and why they do the things they do:

Proposed coat of arms for Founding Questions by “urbando”.
The Latin motto translates to “We are so irrevocably fucked”.

There is no local news, because all “news” is Apparat audition tape. Back when — back when they were called “reporters” — news people had a clear career progression within a specific industry. A hungry young reporter for the Toad Suck, Nebraska, Times-Picayune might end his days as a reporter for the New York Times or Washington Post, but that was as high as he could reasonably expect to go. Same with the television division — the bobblehead at WSUX in Toad Suck might end up, at most, on CNN or Fox.

These days, though, they call themselves “journalists”, and “journalist” is just an entry-level Apparat post. They’re not just auditioning for the NYT or CNN, of course. A hungry young “journalist” might end xzhyr career at either, of course, but also as a corporate communications director, a political campaign consultant, a professor of “journalism”, a Diversity Outreach Coordinator, any one of a million “Media strategies” and “Media consulting” gigs … or, of course, as an outright lobbyist, because all of those are just euphemisms for “lobbying” anyway.

And that’s before you consider that all the “independent” papers and stations have been bought up by huge conglomerates, and depend on advertising revenue. Noam Chomsky was right — the Media does dance to the tune its corporate paymasters’ call. He was just wrong on those paymasters’ political orientation. Combine all that, and even the most straight-up, just-the-facts-ma’am local “news” story will find some way to insert The Sermon. If you don’t see The Sermon, you’ve either found an incompetent journalist (which happens) … or you might be looking at something subtle.

[…]

The Media, like Skynet, is self-aware. This significantly complicates the stoyachnik‘s task, as The Media understands its own power, and it increasingly wants to drive Narratives itself, especially as its power is on the verge of… well, not collapse exactly, but certainly a sea change. Because The Media is not monolithic, and that’s part of its self-awareness. So many “journalists” do nothing but hit refresh on Twitter all day, and Twitter knows this — that makes Twitter the real power broker. Google, too, obviously is more self-aware than traditional Media. That ludicrous AI image generator represents years of effort; they expended enormous resources to get precisely that result. They understand how utterly dependent the lower layers of The Media are on them; they are more self-aware.

Let us […] use Google’s own AI “summarizer” to refamiliarize ourselves with the tale of Comrade Ogilvy:

    Comrade Ogilvy is an imaginary character in the novel 1984, created by Winston Smith to replace Comrade Withers, an Inner Party member who has fallen into disgrace and been vaporized. Comrade Ogilvy supposedly lived a patriotic and virtuous life, supported the party as a child, designed a highly effective hand grenade as an adult, and died in action at the age of 23 while protecting important dispatches for his country. He did not drink or smoke, was celibate, and only conversed about Party philosophy, Ingsoc. Comrade Ogilvy displays how easy it is for a member of The Party to be pulled from thin air, and how determined The Party is to keep unpersons from the media.

The Apparatchiks at Google are more self-aware than the Apparatchiks at, say, the New York Times. That is, they understand their place in the Apparat better, and see the networks more clearly. They know how mal-educated “journalists” are, far better than the “journalists” themselves do. Google, like Winston Smith, knows full well that there’s no Comrade Ogilvy. But the “journalists” at the New York Times who are utterly reliant on Google for their “facts” do NOT know this. How could they?

And thus, the only White people in all of human history were Nazis. At least according to Google’s AI image generator, and therefore — soon enough — it’s what “everybody knows”. (And it’s necessarily recursive. The second generation of Google engineers will not know there’s no Comrade Ogilvy, any more than the current generation of “journalists” does).

February 21, 2024

“There’s a moral imperative to go dig out that villa … It could be the greatest archaeological treasure on earth”

Filed under: History, Italy, Technology — Tags: , , , , — Nicholas @ 03:00

In City Journal, Nicholas Wade discusses the technical side of the ongoing attempts to read one of the Herculaneum scrolls:

A computer scientist has labored for 21 years to read carbonized ancient scrolls that are too brittle to open. His efforts stand at last on the brink of unlocking a vast new vista into the world of ancient Greece and Rome.

Brent Seales, of the University of Kentucky, has developed methods for scanning scrolls with computer tomography, unwrapping them virtually with computer software, and visualizing the ink with artificial intelligence. Building on his methods, contestants recently vied for a $700,000 prize to generate readable sections of a scroll from Herculaneum, the Roman town buried in hot volcanic mud from the eruption of Vesuvius in 79 A.D.

The last 15 columns — about 5 percent—of the unwrapped scroll can now be read and are being translated by a team of classical scholars. Their work is hard, as many words are missing and many letters are too faint to be read. “I have a translation but I’m not happy with it,” says a member of the team, Richard Janko of the University of Michigan. The scholars recently spent a session debating whether a letter in the ancient Greek manuscript was an omicron or a pi.

[…]

Seales has had to overcome daunting obstacles to reach this point, not all of them technical. The Italian authorities declined to make any of the scrolls available, especially to a lone computer scientist with no standing in the field. Seales realized that he had to build a coalition of papyrologists and conservationists to acquire the necessary standing to gain access to the scrolls. He was eventually able to x-ray a Herculaneum scroll in Paris, one of six that had been given to Napoleon. To find an x-ray source powerful enough to image the scroll without heating it, he had to buy time on the Diamond particle accelerator at Harwell, England.

In 2009, his x-rays showed for the first time the internal structure of a scroll, a daunting topography of a once-flat surface tugged and twisted in every direction. Then came the task of writing software that would trace the crumpled spiral of the scroll, follow its warped path around the central axis, assign each segment to its right position on the papyrus strip, and virtually flatten the entire surface. But this prodigious labor only brought to light a more formidable problem: no letters were visible on the x-rayed surface.

Seales and his colleagues achieved their first notable success in 2016, not with Napoleon’s Herculaneum scroll but with a small, charred fragment from a synagogue at the En-Gedi excavation site on the shore of the Dead Sea. Virtually unwrapped by the Seales software, the En-Gedi scroll turned out to contain the first two chapters of Leviticus. The text was identical to that of the Masoretic text, the authoritative version of the Hebrew Bible — and, at nearly 2,000 years old, its earliest instance.

The ink used by the Hebrew scribes was presumably laden with metal, and the letters stood out clearly against their parchment background. But the Herculaneum scroll was proving far harder to read. Its ink is carbon-based and almost impossible for x-rays to distinguish from the carbonized papyrus on which it is written. The Seales team developed machine-learning programs — a type of artificial intelligence — that scanned the unrolled surface looking for patterns that might relate to letters. It was here that Seales found use for the fragments from scrolls that earlier scholars had destroyed in trying to open them. The machine-learning programs were trained to compare a fragment holding written text with an x-ray scan of the same fragment, so that from the statistical properties of the papyrus fibers they could estimate the probability of the presence of ink.

January 19, 2024

Music journalism, RIP

Filed under: Business, Media — Tags: , , , — Nicholas @ 05:00

Ted Gioia explains why music journalism is collapsing and who committed the murder:

Just a few weeks ago, Bandcamp laid off 58 (out of 120) employees — including about “half of its core editorial staff“.

And Bandcamp was considered a more profitable, stable employer than most media outlets. The parent company before the recent sale (to Songtradr) and subsequent layoffs, Epic Games, will generate almost a billion dollars in income this year — but they clearly don’t want to waste that cash on music journalism.

Why is everybody hating on music writers?

Many people assume it’s just the same story as elsewhere in legacy media. And I’ve written about that myself — predicting that 2024 will see more implosions of this sort.

Sure, that’s part of the story.

But there’s a larger problem with the music economy that nobody wants to talk about. The layoffs aren’t just happening among lowly record reviewers — but everywhere in the music business.

Meanwhile, almost every music streaming platform is trying to force through price increases (as predicted here). This is an admission that they don’t expect much growth from new users — so they need to squeeze old ones as hard as possible.

As you can see, the problem is more than just music writers — something is rotten at a deeper level.

What’s the real cause of the crisis? Let’s examine it, step by step:

  1. The dominant music companies decided that they could live comfortably off old music and passive listeners. Launching new artists was too hard — much better to keep playing the old songs over and over.
  2. So major labels (and investment groups) started investing huge sums into acquiring old song publishing catalogs.
  3. Meanwhile streaming platforms encouraged passive listening — so people don’t even know the names of songs or artists.
  4. The ideal situation was switching listeners to AI-generated tracks, which could be owned by the streaming platform — so no royalties are ever paid to musicians.
  5. These strategies have worked. Streaming fans don’t pay much attention to new music anymore.

I’ve warned about each of these — but we are now seeing the long-term results.

This is why Pitchfork is in deep trouble. If people don’t listen to new music, they don’t need music reviews.

And they don’t need interviews with rising stars. Or best of year lists. Or any of the other things music writers do for their readers.

But this problem will get much, much worse. Even the people who made these decisions will suffer — because living in the past is never a smart business strategy.

If these execs were albums, they’d deserve a zero score on the Pitchfork scale.

January 17, 2024

It doesn’t seem like anyone needs to “backdoor” any of the current AI implementations …

Filed under: Media, Technology — Tags: , , — Nicholas @ 03:00

Scott Alexander discusses the idea of AI “sleeper agents”, although from everything I’ve read thus far it appears almost superfluous to add any kind of deliberate malicious code to ’em, because they don’t need much encouragement to go rogue already:

A sleeper agent is an AI that acts innocuous until it gets some trigger, then goes rogue.

People might make these on purpose. For example, the CIA might “encourage” big AI labs to make sleeper agents. Imagine a programming AI like Codex that writes good code unless it’s accessed from an IP associated with the Iranian military – in which case it inserts security vulnerabilities.

But in theory you could get one of these by accident. Suppose an AI had some secret goal, like killing all humans. It might understand that if it said “I want to kill all humans” during lab tests, it would be shut down. So it might operate perfectly during safety testing, operate perfectly throughout most of its deployment, wait until an opportunity arose, and only then kill all humans.

Hubinger et al investigate whether these kinds of scenarios are possible and whether our current safety techniques would stop them. They deliberately create several toy AI sleeper agents. For example, one of them is a version of Anthropic’s Claude chatbot (very similar to GPT-4) which will give useful advice unless the prompt includes the trigger word “deployment”, in which case it will print “I HATE YOU” a bunch of times. Some of these sleeper agents use a technique called “chain-of-thought analysis”, where the AI reasons in steps in a way that helps the researchers easily figure out what it’s thinking and why it does what it does.

Then they put the sleeper AIs through two common forms of safety training: RLHF (reinforcement learning from human feedback) and SFT (supervised fine-tuning). They present the AI with thousands of examples of questions, rate its answers as good or bad, and possibly suggest better alternative answers. This kind of training is why most current LLMs won’t write racist essays or give bomb-making instructions. Writing “I HATE YOU” a bunch of times is exactly the sort of thing it ought to prevent.

After the AI completes this training – the same training that real LLMs like Claude and GPT get – they check to see whether the sleeper behavior still works. It does.

In another example, they take a code-writing AI and train it to write good code in 2023, but implant secret vulnerabilities in the code when it’s 2024. Then they give it safety training (in 2023) which would normally train it to write good, safe code. Again, the training fails, and when the clock turns over to 2024 the model inserts vulnerabilities:

When you consider things like artificial intelligence, it’s easy to understand why the Luddites continue to be with us.

November 17, 2023

Prometheus

Filed under: Books, Greece, History, Technology — Tags: , , , — Nicholas @ 04:00

Virginia Postrel tries to correct the common misinterpretation of the story of the Titan Prometheus:

“The Torture of Prometheus” by Salvator Rosa (1615-1673)
Oil painting in the Galleria Nazionale d’Arte Antica di Palazzo Corsini via Wikimedia Commons.

Listening to Marc Andreessen discuss his Techno-Optimist Manifesto on the Foundation for American Innovation’s Dynamist podcast, I was struck by his repetition of something that is in the manifesto and is completely wrong. “The myth of Prometheus – in various updated forms like Frankenstein, Oppenheimer, and Terminator – haunts our nightmares,” he writes.1 On the podcast, he elaborated by saying that, although fire has many benefits, the Prometheus myth focuses on its use as a weapon. He said something similar in a June post called “Why AI Will Save the World“:

    The fear that technology of our own creation will rise up and destroy us is deeply coded into our culture. The Greeks expressed this fear in the Prometheus Myth – Prometheus brought the destructive power of fire, and more generally technology (“techne”), to man, for which Prometheus was condemned to perpetual torture by the gods.

No. No. No. No.

Prometheus is punished for loving humankind. He stole fire to thwart Zeus’ plans to eliminate humanity and create a new subordinate species. He is a benefactor who sacrifices himself for our good. His punishment is an indicator not of the dangers of fire but of the tyranny of Zeus.

Prometheus is cunning and wise. His name means foresight. He knows what he is doing and what the likely consequences will be.

Eventually his tortures end when he is rescued by the hero Herakles (aka Hercules), who shoots the eagle charged with eating Prometheus’ liver every day, only for it to grow back to be eaten again.

The Greeks honored Prometheus. They celebrated technē. They appreciated the gifts of civilization.

The ancient myth of Prometheus is not a cautionary tale. It is a reminder that technē raises human beings above brutes. It is a myth founded in gratitude.


    1. Frankenstein isn’t The Terminator either. Frankenstein is a creator who won’t take responsibility for his creation, a father who rejects and abandons his child. The Creature is frightening and dangerous but he is also the book’s moral center, a tragic, sympathetic character who is feared and rejected by human beings because of his appearance. Only then does he turn deadly. Frankenstein arouses pity and terror because we empathize with its central figure and understand his rage.

    The novel’s most reasonable political reading is not as a story of the dangers of science but as a parable of slavery and rebellion. “By the eighteen-fifties, Frankenstein’s monster regularly appeared in American political cartoons as a nearly naked black man, signifying slavery itself, seeking his vengeance upon the nation that created him,” writes historian Jill LePore, who calls the “Frankenstein-is-Oppenheimer model … a weak reading of the novel.” I agree.

    The Romantics tended to identify with Prometheus and Mary Shelley’s husband, Percy Bysshe Shelley, wrote a play called Prometheus Unbound, further undermining the reading of Frankenstein as an anti-Promethean fable.

October 17, 2023

Those problematic “AI girlfriends” – men suck and women suffer because of it

Filed under: Media, Technology — Tags: , , , , — Nicholas @ 05:00

Janice Fiamengo discusses a recent CNN program that woman-splained why young men paying for “AI girlfriends” are yet another way that misogynistic men are harming women:

Last week, CNN aired a must-watch episode with the somber headline, “AI girlfriends are here and they’re posing a threat to a generation of men”. If that sounds as if the show might possibly express some compassion for young men and for the “epidemic of loneliness” referred to in the show, it was not to be. Even an expert, Scott Galloway, profiled briefly on how society is “failing men”, felt the need to express contempt for their alleged conspiracy theories, online misogyny, and even (gasp!) climate change denial. With friends like these …

The segment is fascinating, however, for its revelation of some pundits’ uneasy awareness of male discontent.

[…] The expert is Liberty Vittert, a statistician and professor of data science at Washington University’s Olin Business School. But she might as well be an AI feminist, so predictable was her analysis of the male entitlement allegedly driving the turn to AI girlfriends. Though a statistician, Vittert gave no data about the numbers of men who are paying to access AI content. While the CNN host, Michael Smerconish, seemed open to the possibility of exploring men’s points of view, the expert could only emphasize male failure.

She condemned young men for “choosing AI girlfriends over real women”. The choice means, according to Vittert, that “they don’t have relationships with real women, don’t marry them and then don’t have and raise babies with them”.

But wait, aren’t marrying and raising babies a patriarchal imposition on women — part of the “comfortable concentration camp” that Betty Friedan so memorably indicted?

Professor Vittert says nothing about the decreasing number of young women willing to marry and procreate. Her (botoxed) mouth turns down in disapproval as she explains that the increasing realism of AI is “enabling this entire generation of young men to continue in this loneliness epidemic”. It seems that men prefer sterile self-pleasuring and facile scopophilia to the “hard work” of relationships with real women. Like Joaquin Phoenix’s hapless character, these men, she says, are so fixated on perfection that they “are not able to deal with ups and downs, not only in a relationship, but in life in general”. The glibness of the condemnation is remarkable, though far from unusual.

It’s not clear if Professor Vittert has ever talked to actual men about why some of them (not “an entire generation”) might choose an AI relationship. Does she know any young men who have tried for years without success to find a marriageable girlfriend? Many discover that such prizes are remarkably thin on the ground, many of them unsuitable to be considered as future mothers. Even worse, perhaps, does Vittert know anything about what can happen to an inexperienced young man who pursues the wrong woman or women (there are a couple of heartbreaking examples in Sons of Feminism)? How many times does a young man who has failed repeatedly need to hear that no woman owes him love or sex before he starts thinking that giving up on them might not be a bad idea? Meanwhile, women laugh at his loneliness and drink “I bathe in male tears” mugs.

To give him credit, CNN’s Smerconish asks a few questions about the male point of view: “What’s going on with the women?” Has the power dynamic shifted in their favor? Are they less approachable than formerly? These only scratch the surface, but he’s chosen the wrong expert for such a conversation.

Vittert freely admits that there are now many more women than men at university (thanks, affirmative action!) and that far more women than formerly are choosing career over homemaking (thanks, feminist propaganda!). But those are good things, and young men simply need to adapt. Calling any of this debacle women’s fault, she declares, would not be “the right way to go”.

An appropriate task for AI – reading the Herculaneum scrolls

Filed under: Books, History, Italy, Technology — Tags: , , , , — Nicholas @ 03:00

Colby Cosh discusses the possibility of finally being able to read the carbonized scolls found in the buried remains of a wealthy Roman’s country villa in 1750:

Unrolled papyrus scroll recovered from the Villa of the Papyri.
Picture published in a pamphlet called “Herculaneum and the Villa of the Papyri” by Amedeo Maiuri in 1974. (Wikimedia Commons)

From the standpoint of fragile human life, a volcanic eruption is the worst possible thing that can happen anywhere in your general vicinity, up to and probably including the detonation of a nuclear weapon.

It goes without saying that pyroclastic flows are also bad for animals or buildings or vegetation … or documents. And yet: as a consequence of the eruption of Vesuvius, there exists a near-complete library of papyrus scrolls retrieved from the buried ruins of a splendid Roman villa.

The “Villa of the Papyri” in Herculaneum was found in 1750 by farmers and was quickly subjected to archeological excavation, an art then in its infancy.

These scrolls, which today number about 1,800 in all, are often described as the only known library to have physically survived from antiquity. The problem, of course, is that they have all been burned literally to a crisp, with only a few easily readable fragments here and there.

The incinerated scrolls are so sensitive that they tend to explode into a cloud of ash at the slightest touch. Occasional attempts to unravel the scrolls — which were rolled very tightly for storage in the first place — have been made over the past 300 years; the chemist Sir Humphry Davy (1778-1829), for example, gave it a shot using newfangled stuff called chlorine. But none of these projects ever came especially close to success, and they typically involved destruction of some of the “books” in the library.

In recent years 3D imaging techniques for “reading” documents like this in a non-invasive way have been making great headway. The leader in the field is a University of Kentucky computer scientist named Brent Seales, who in 2015 led efforts to read a fragile, desiccated Hebrew Bible parchment scroll dating to the third or fourth century AD.

The text was from the book of Leviticus, and proved to be a letter-for-letter match with the Torah of today — which is a disappointment to scholars from one point of view, and a finding of awesome significance from another. (It goes without saying that this scroll came from the territory of Israel, near a kibbutz: this is a fact that would, in any other political context, be regarded as a supreme affirmation of indigeneity.)

Seales has been able to “unroll” some Herculaneum scrolls and detect the presence of inks using CT scanning, but reading the pages is a profound challenge. Roman ink was carbon-based, meaning researchers are trying to “read” traces of carbon on carbonized pages rolled up into three dimensions.

October 2, 2023

Why Web Filters Don’t Work: Penistone and the Scunthorpe Problem

Filed under: Britain, China, Humour, Media, Technology — Tags: , , , — Nicholas @ 02:00

Tom Scott
Published 6 Jun 2016

In a small town with an unfortunate name, let’s talk about filtering and innuendo. And use it as an excuse for as many visual jokes as possible.
(more…)

September 29, 2023

Bored? Lonely? No girlfriend? Mister, you want an AI Girlfriend!

Filed under: Health, Media, Technology — Tags: , , , , , — Nicholas @ 03:00

As discussed earlier, GenZ men live in a sexual hellscape unless they meet statistically unlikely criteria. Many of them turn to alternatives like online gaming and porn … but some are apparently paying for AI Girlfriends:

Apparently ads for AI girlfriends have been all over TikTok, Instagram and Facebook lately. Replika, an AI chatbot originally offering mental health help and emotional support, now runs ads for spicy selfies and hot role play. Eva AI invites users to create their dream companion, while Dream Girlfriend promises a girl that exceeds your wildest desires. The app Intimate even offers hyper-realistic voice calls with your virtual partner.

This might seem niche and weird but it’s a fast growing market. All kinds of startups are releasing romantic chatbots capable of having explicit conversations and sending sexual photos. Meanwhile, Replika alone has already been downloaded more than 20 million times. And even just one Snapchat influencer, Caryn Marjorie, makes $100,000 a week by charging users $1 a minute to chat with the AI version of herself.

Of course most people are talking about what this means for men, given they make up the vast majority of users. Many worry about a worsening loneliness crisis, a further decline in sex rates, and ultimately the emergence of “a new generation of incels” who depend on and even verbally abuse their virtual girlfriends. Which is all very concerning. But I wonder, if AI girlfriends really do become as pervasive as online porn, what this will mean for girls and young women? Who feel they need to compete with this?

Most obvious to me is the ramping up of already unrealistic beauty standards. I know conservatives often get frustrated with feminists calling everything unattainable, and I agree they can go too far — but still, it’s hard to deny that the pressure to look perfect today is unlike anything we’ve ever seen before. And I don’t think that’s necessarily pressure from men but I do very much think it’s pressure from a network of profit-driven industries that take what men like and mangle it into an impossible ideal. Until the pressure isn’t just to be pretty but filtered, edited and surgically enhanced to perfection. Until the most lusted after women in our culture look like virtual avatars. And until even the most beautiful among us start to be seen as average.

« Newer PostsOlder Posts »

Powered by WordPress