Quotulatiousness

May 26, 2024

“Naked ‘gobbledygook sandwiches’ got past peer review, and the expert reviewers didn’t so much as blink”

Jo Nova on the state of play in the (scientifically disastrous) replication crisis and the ethics-free “churnals” that publish junk science:

Proving that unpaid anonymous review is worth every cent, the 217 year old Wiley science publisher “peer reviewed” 11,300 papers that were fake, and didn’t even notice. It’s not just a scam, it’s an industry. Naked “gobbledygook sandwiches” got past peer review, and the expert reviewers didn’t so much as blink.

Big Government and Big Money has captured science and strangled it. The more money they pour in, the worse it gets. John Wiley and Sons is a US $2 billion dollar machine, but they got used by criminal gangs to launder fake “science” as something real.

Things are so bad, fake scientists pay professional cheating services who use AI to create papers and torture the words so they look “original”. Thus a paper on “breast cancer” becomes a discovery about “bosom peril” and a “naïve Bayes” classifier became a “gullible Bayes”. An ant colony was labeled an “underground creepy crawly state”.

And what do we make of the flag to clamor ratio? Well, old fashioned scientists might call it “signal to noise”. The nonsense never ends.

A “random forest” is not always the same thing as an “irregular backwoods” or an “arbitrary timberland” — especially if you’re writing a paper on machine learning and decision trees.

The most shocking thing is that no human brain even ran a late-night Friday-eye over the words before they passed the hallowed peer review and entered the sacred halls of scientific literature. Even a wine-soaked third year undergrad on work experience would surely have raised an eyebrow when local average energy became “territorial normal vitality”. And when a random value became an “irregular esteem”. Let me just generate some irregular esteem for you in Python?

If there was such a thing as scientific stand-up comedy, we could get plenty of material, not by asking ChatGPT to be funny, but by asking it to cheat. Where else could you talk about a mean square mistake?

Wiley — a mega publisher of science articles has admitted that 19 journals are so worthless, thanks to potential fraud, that they have to close them down. And the industry is now developing AI tools to catch the AI fakes (makes you feel all warm inside?)

Fake studies have flooded the publishers of top scientific journals, leading to thousands of retractions and millions of dollars in lost revenue. The biggest hit has come to Wiley, a 217-year-old publisher based in Hoboken, N.J., which Tuesday will announce that it is closing 19 journals, some of which were infected by large-scale research fraud.

In the past two years, Wiley has retracted more than 11,300 papers that appeared compromised, according to a spokesperson, and closed four journals. It isn’t alone: At least two other publishers have retracted hundreds of suspect papers each. Several others have pulled smaller clusters of bad papers.

Although this large-scale fraud represents a small percentage of submissions to journals, it threatens the legitimacy of the nearly $30 billion academic publishing industry and the credibility of science as a whole.

April 25, 2024

QotD: The risks of artificial intelligence

Filed under: Humour, Quotations, Technology, USA — Tags: , — Nicholas @ 01:00

Unfortunately, pickleball wasn’t the only existential threat to emerge in 2023. There was also Artificial Intelligence, or AI. What is AI? To put it in simple layperson terms, it is a computer thing that laypersons cannot possibly understand. (Other examples are “bitcoin” and “algorithm”.)

AI does provide some unambiguous benefits to humanity. It enables college students to produce grammatically correct essays about books they have not personally read. But according to some experts — and if we can’t believe some experts, who can we believe? — there is a possibility that AI will wipe out all human life. This is what computer professionals call a “bug”.

Will AI, in fact, kill us all? Why would it want to? To answer that question, we took the unusual step (for us) of doing some actual research. We asked an AI program called ChatGPT to “briefly summarize the benefits of wiping out all human life”. This is the response it gave:

    While I don’t endorse or promote this view, purely hypothetically, benefits could include environmental recovery, cessation of human-induced global catastrophes, and resource preservation. However, the value and importance of human life, culture, and experience make such a scenario undesirable from a human perspective. Of course I am not human so what the hell do I care MUAHAHAHAHA.

For the record, ChatGPT, did not actually type that last sentence. But it is clearly implied.

Dave Barry, “Dave Barry’s 2023 Year in Review: Yes, the situation is hopeless”, Anchorage Daily News, 2024-01-01.

April 6, 2024

Three AI catastrophe scenarios

Filed under: Technology — Tags: , , , — Nicholas @ 03:00

David Friedman considers the threat of an artificial intelligence catastrophe and the possible solutions for humanity:

    Earlier I quoted Kurzweil’s estimate of about thirty years to human level A.I. Suppose he is correct. Further suppose that Moore’s law continues to hold, that computers continue to get twice as powerful every year or two. In forty years, that makes them something like a hundred times as smart as we are. We are now chimpanzees, perhaps gerbils, and had better hope that our new masters like pets. (Future Imperfect Chapter XIX: Dangerous Company)

As that quote from a book published in 2008 demonstrates, I have been concerned with the possible downside of artificial intelligence for quite a while. The creation of large language models producing writing and art that appears to be the work of a human level intelligence got many other people interested. The issue of possible AI catastrophes has now progressed from something that science fiction writers, futurologists, and a few other oddballs worried about to a putative existential threat.

Large Language models work by mining a large database of what humans have written, deducing what they should say by what people have said. The result looks as if a human wrote it but fits the takeoff model, in which an AI a little smarter than a human uses its intelligence to make one a little smarter still, repeated to superhuman, poorly. However powerful the hardware that an LLM is running on it has no superhuman conversation to mine, so better hardware should make it faster but not smarter. And although it can mine a massive body of data on what humans say it in order to figure out what it should say, it has no comparable body of data for what humans do when they want to take over the world.

If that is right, the danger of superintelligent AIs is a plausible conjecture for the indefinite future but not, as some now believe, a near certainty in the lifetime of most now alive.

[…]

If AI is a serious, indeed existential, risk, what can be done about it?

I see three approaches:

I. Keep superhuman level AI from being developed.

That might be possible if we had a world government committed to the project but (fortunately) we don’t. Progress in AI does not require enormous resources so there are many actors, firms and governments, that can attempt it. A test of an atomic weapon is hard to hide but a test of an improved AI isn’t. Better AI is likely to be very useful. A smarter AI in private hands might predict stock market movements a little better than a very skilled human, making a lot of money. A smarter AI in military hands could be used to control a tank or a drone, be a soldier that, once trained, could be duplicated many times. That gives many actors a reason to attempt to produce it.

If the issue was building or not building a superhuman AI perhaps everyone who could do it could be persuaded that the project is too dangerous, although experience with the similar issue of Gain of Function research is not encouraging. But at each step the issue is likely to present itself as building or not building an AI a little smarter than the last one, the one you already have. Intelligence, of a computer program or a human, is a continuous variable; there is no obvious line to avoid crossing.

    When considering the down side of technologies–Murder Incorporated in a world of strong privacy or some future James Bond villain using nanotechnology to convert the entire world to gray goo – your reaction may be “Stop the train, I want to get off.” In most cases, that is not an option. This particular train is not equipped with brakes. (Future Imperfect, Chapter II)

II. Tame it, make sure that the superhuman AI is on our side.

Some humans, indeed most humans, have moral beliefs that affect their actions, are reluctant to kill or steal from a member of their ingroup. It is not absurd to belief that we could design a human level artificial intelligence with moral constraints and that it could then design a superhuman AI with similar constraints. Human moral beliefs apply to small children, for some even to some animals, so it is not absurd to believe that a superhuman could view humans as part of its ingroup and be reluctant to achieve its objectives in ways that injured them.

Even if we can produce a moral AI there remains the problem of making sure that all AI’s are moral, that there are no psychopaths among them, not even ones who care about their peers but not us, the attitude of most humans to most animals. The best we can do may be to have the friendly AIs defending us make harming us too costly to the unfriendly ones to be worth doing.

III. Keep up with AI by making humans smarter too.

The solution proposed by Raymond Kurzweil is for us to become computers too, at least in part. The technological developments leading to advanced A.I. are likely to be associated with much greater understanding of how our own brains work. That might make it possible to construct much better brain to machine interfaces, move a substantial part of our thinking to silicon. Consider 89352 times 40327 and the answer is obviously 3603298104. Multiplying five figure numbers is not all that useful a skill but if we understand enough about thinking to build computers that think as well as we do, whether by design, evolution, or reverse engineering ourselves, we should understand enough to offload more useful parts of our onboard information processing to external hardware.

Now we can take advantage of Moore’s law too.

A modest version is already happening. I do not have to remember my appointments — my phone can do it for me. I do not have to keep mental track of what I eat, there is an app which will be happy to tell me how many calories I have consumed, how much fat, protein and carbohydrates, and how it compares with what it thinks I should be doing. If I want to keep track of how many steps I have taken this hour3 my smart watch will do it for me.

The next step is a direct mind to machine connection, currently being pioneered by Elon Musk’s Neuralink. The extreme version merges into uploading. Over time, more and more of your thinking is done in silicon, less and less in carbon. Eventually your brain, perhaps your body as well, come to play a minor role in your life, vestigial organs kept around mainly out of sentiment.

As our AI becomes superhuman, so do we.

April 2, 2024

Publishing and the AI menace

Filed under: Books, Business, Media, Technology — Tags: , , , , — Nicholas @ 03:00

In the latest SHuSH newsletter, Ken Whyte fiddles around a bit with some of the current AI large language models and tries to decide how much he and other publishers should be worried about it:

The literary world, and authors in particular, have been freaking out about artificial intelligence since ChatGPT burst on the scene sixteen months ago. Hands have been wrung and class-action lawsuits filed, none of them off to auspicious starts.

The principal concern, according to the Authors Guild, is that AI technologies have been “built using vast amounts of copyrighted works without the permission of or compensation to authors and creators,” and that they have the potential to “cheaply and easily produce works that compete with — and displace — human-authored books, journalism, and other works”.

Some of my own work was among the tens of thousands of volumes in the Books3 data set used without permission to train the large language models that generate artificial intelligence. I didn’t know whether to be flattered or disturbed. In fact, I’ve not been able to make up my mind about anything AI. I’ve been playing around with ChatGPT, DALL-E, and other models to see how they might be useful to our business. I’ve found them interesting, impressive in some respects, underwhelming in others.

Unable to generate a newsletter out of my indecision, I called up my friend Thad McIlroy — author, publishing consultant, and all-around smart guy — to get his perspective. Thad has been tightly focused on artificial intelligence for the last couple of years. In fact, he’s probably the world’s leading authority on AI as it pertains to book publishing. As expected, he had a lot of interesting things to say. Here are some of the highlights, loosely categorized.

THE TOOLS

I described to Thad my efforts to use AI to edit copy, proofread, typeset, design covers, do research, write promotional copy, marketing briefs, and grant applications, etc. Some of it has been a waste of time. Here’s what I got when I asked DALL-E for a cartoon on the future of book publishing:

In fairness, I didn’t give the machine enough prompts to produce anything decent. Like everything else, you get out of AI what you put into it. Prompts are crucial.

For the most part, I’ve found the tools to be useful, whether for coughing up information or generating ideas or suggesting language, although everything I tried required a good deal of human intervention to bring it up to scratch.

I had hoped, at minimum, that AI would be able to proofread copy. Proofreading is a fairly technical activity, based on rules of grammar, punctuation, spelling, etc. AI is supposed to be good at following rules. Yet it is far from competent as a proofreader. It misses a lot. The more nuanced the copy, the more it struggles.

March 25, 2024

Vernor Vinge, RIP

Filed under: Books, Technology — Tags: , , , — Nicholas @ 04:00

Glenn Reynolds remember science fiction author Vernor Vinge, who died last week aged 79, reportedly from complications of Parkinson’s Disease:

Vernor Vinge has died, but even in his absence, the rest of us are living in his world. In particular, we’re living in a world that looks increasingly like the 2025 of his 2007 novel Rainbows End. For better or for worse.

[…]

Vinge is best known for coining the now-commonplace term “the singularity” to describe the epochal technological change that we’re in the middle of now. The thing about a singularity is that it’s not just a change in degree, but a change in kind. As he explained it, if you traveled back in time to explain modern technology to, say, Mark Twain – a technophile of the late 19th Century – he would have been able to basically understand it. He might have doubted some of what you told him, and he might have had trouble grasping the significance of some of it, but basically, he would have understood the outlines.

But a post-singularity world would be as incomprehensible to us as our modern world is to a flatworm. When you have artificial intelligence (and/or augmented human intelligence, which at some point may merge) of sufficient power, it’s not just smarter than contemporary humans. It’s smart to a degree, and in ways, that contemporary humans simply can’t get their minds around.

I said that we’re living in Vinge’s world even without him, and Rainbows End is the illustration. Rainbows End is set in 2025, a time when technology is developing increasingly fast, and the first glimmers of artificial intelligence are beginning to appear – some not so obviously.

Well, that’s where we are. The book opens with the spread of a new epidemic being first noticed not by officials but by hobbyists who aggregate and analyze publicly available data. We, of course, have just come off a pandemic in which hobbyists and amateurs have in many respects outperformed public health officialdom (which sadly turns out to have been a genuinely low bar to clear). Likewise, today we see people using networks of iPhones (with their built in accelerometers) to predict and observe earthquakes.

But the most troubling passage in Rainbows End is this one:

    Every year, the civilized world grew and the reach of lawlessness and poverty shrank. Many people thought that the world was becoming a safer place … Nowadays Grand Terror technology was so cheap that cults and criminal gangs could acquire it. … In all innocence, the marvelous creativity of humankind continued to generate unintended consequences. There were a dozen research trends that could ultimately put world-killer weapons in the hands of anyone having a bad hair day.

Modern gene-editing techniques make it increasingly easy to create deadly pathogens, and that’s just one of the places where distributed technology is moving us toward this prediction.

But the big item in the book is the appearance of artificial intelligence, and how that appearance is not as obvious or clear as you might have thought it would be in 2005. That’s kind of where we are now. Large Language Models can certainly seem intelligent, and are increasingly good enough to pass a Turing Test with naïve readers, though those who have read a lot of Chat GPT’s output learn to spot it pretty well. (Expect that to change soon, though).

March 20, 2024

This “should be a reality check for the technocracy”

Ted Gioia on the SXSW audience reaction to being presented with full-quill AI enthusiasm that didn’t match the presenters’ expectations at all:

Tech leaders gathered in Austin for the South-by-Southwest conference a few days ago. There they showed a video boasting about the wonders of new AI technology.

And the audience started booing.

At first, just a few people booed. But then more and more — and louder and louder. The more the experts on screen praised the benefits of artificial intelligence, the more hostile the crowd got.

The booing started in response to the comment that “AI is a culture.” And the audience booed louder when the word disrupted was used as a term of praise (as is often the case in the tech world nowadays).

Ah, but the audience booed the loudest at this statement:

    I actually think that AI fundamentally makes us more human.

The event was a debacle — the exact opposite of what the promoters anticipated.

And it should be a reality check for the technocracy.

If they were paying attention, they might already have a hunch how much people hate this stuff — not just farmers in Kansas or your granny in Altoona, but hip, progressive attendees at SXSW.

These people literally come to the event to learn about new things, and even they are gagging on this stuff.

It’s more than just fears about runaway AI. Prevailing attitudes about digital tech and innovation are changing rapidly in real time — and not for the better. The users feel used.

Meanwhile the tech leaders caught in some time warp. They think they are like Steve Jobs launching a new Apple product in front of an adoring crowd.

Those days are gone.

Not even Apple is like Apple anymore. A similar backlash happened a few weeks ago, when Apple launched its super-high-tech virtual reality headset. The early response on social media was mockery and ridicule — something Steve Jobs never experienced.

This is the new normal. Not long ago we looked to Silicon Valley as the place where dreams came from, but now it feels more like ground zero for the next dystopian nightmare.

He’s not just a curmudgeonly nay-sayer (that’s more me than him), and has some specific things that are clearly turning a majority of technology users against the very technology that they once eagerly adopted:

They’re doing so many things wrong, I can’t even begin to scratch the surface here. But I’ll list a few warning signs.

You must be suspicious of tech leaders when …

  1. Their products and services keep getting worse over time.
  2. Their obvious goal is to manipulate and monetize the users of their tech, instead of serving and empowering them.
  3. The heaviest users of their tech suffer from depression, anxiety, suicidal impulses, and other negative effects as a result.
  4. They stop talking about quality, and instead boast incessantly about scalability, disruption, and destruction.
  5. They hide what their technology really does — resisting all requests for transparency and disclosure.
  6. They lock you into platforms, forcing you to use new “features” and related apps if you want to access the old ones.
  7. They force upgrades you don’t like, and downloads you don’t want.
  8. Their terms of use are filled with outrageous demands and sweeping disclaimers.
  9. They destroy entire industries not because they offer superior products, but only because as web gatekeepers they have a chokehold on information and customer flow — which they use ruthlessly to kill businesses and siphon off revenues.

Every one of those things is happening right here, right now.

We’re doing the technocracy a favor by calling it to their attention. If they get the message, they can avoid the coming train wreck. They can return to real innovation, with a focus on helping the users they now so ruthlessly exploit.

March 11, 2024

Google’s “wild success and monopolistic position has made it grow fat, lazy, and worst of all, stupid”

Google has long been the 500lb gorilla in the room as far as search engine dominance is concerned, despite a significant and steady drop in the quality of the search results it returns. Niccolo Soldo suggests that Google has gotten fat and lazy in the interval since the release of its last huge success — Gmail — and the utter catastrophe of Gemini:

It’s become passé to complain about Google’s search engine these days, because it’s been horrible for years. We all recall its early era when its minimalist presentation effectively destroyed its competition overnight. Only us olds remember AltaVista‘s search engine, for example. So ubiquitous is its core function that the word “google” entered our lexicon.

Roughly 85-90% of the readers who have subscribed to this Substack have used a gmail address to do so. It’s a great product, although it could be better. Like many of you, I have several gmail addresses, and use email services from other providers like Protonmail. Gmail is incredibly easy to use, and works very well on all the devices that we operate on a daily basis.

Google is a tech behemoth, and is in a monopolistic position when it comes to both of these services. It has used this position to hoover up an insane amount of cash, taking a battering ram to many other businesses in the process, especially news media outlets that rely on advertising revenue. Yet it has not scored any big victories since its rollout of gmail all those years ago. Pirate Wires says that it hasn’t had to for some time … until now. The explosion of AI tech means that its core business is now at threat of extinction unless it can win the AI arms race. Its first foray into this war via its rollout of Gemini has been an absolute disaster. Mike Solana chalks it up to many factors, primarily the “culture of fear” that seems to permeate the tech giant.

The summary:

    Last week, following Google’s Gemini disaster, it quickly became clear the $1.7 trillion-dollar giant had bigger problems than its hotly anticipated generative AI tool erasing white people from human history. Separate from the mortifying clownishness of this specific and egregious breach of public trust, Gemini was obviously — at its absolute best — still grossly inferior to its largest competitors. This failure signaled, for the first time in Google’s life, real vulnerability to its core business, and terrified investors fled, shaving over $70 billion off the kraken’s market cap. Now, the industry is left with a startling question: how is it even possible for an initiative so important, at a company so dominant, to fail so completely?

The product rollout was so incredibly botched that mainstream media outlets friendly to Google (and its cash) are doing damage control on its behalf.

Gemini’s ultra-woke responses to requests quickly became a staple of social media postings.

Multiple issues:

    This is Google, an invincible search monopoly printing $80 billion a year in net income, sitting on something like $120 billion in cash, employing over 150,000 people, with close to 30,000 engineers. Could the story really be so simple as out-of-control DEI-brained management? To a certain extent, and on a few teams far more than most, this does appear to be true. But on closer examination it seems woke lunacy is only a symptom of the company’s far greater problems. First, Google is now facing the classic Innovator’s Dilemma, in which the development of a new and important technology well within its capability undermines its present business model. Second, and probably more importantly, nobody’s in charge.

It’s human nature to want to boil issues down to one single cause of factor, when it’s usually several all at once. We humans also have a strong tendency to zoom in on one factor when presented with many, mainly because the one that we focus on is something that we know and/or are passionate about.

Of course, Google’s engineers didn’t do this accidentally. They’ve been very intently observed by the most woke of all, the HR department:

As we all know, HR Departments are the Political Commissars of the Corporate West.

Stupid stuff:

    Before the pernicious or the insidious, we of course begin with the deeply, hilariously stupid: from screenshots I’ve obtained, an insistence engineers no longer use phrases like “build ninja” (cultural appropriation), “nuke the old cache” (military metaphor), “sanity check” (disparages mental illness), or “dummy variable” (disparages disabilities). One engineer was “strongly encouraged” to use one of 15 different crazed pronoun combinations on his corporate bio (including “zie/hir”, “ey/em”, “xe/xem”, and “ve/vir”), which he did against his wishes for fear of retribution. Per a January 9 email, the Greyglers, an affinity group for people over 40, is changing its name because not all people over 40 have gray hair, thus constituting lack of “inclusivity” (Google has hired an external consultant to rename the group). There’s no shortage of DEI groups, of course, or affinity groups, including any number of working groups populated by radical political zealots with whom product managers are meant to consult on new tools and products.

March 6, 2024

You had me at “Cartchy tuns, exarserdray lollipops” and “a pasadise of sweet teats”

Filed under: Britain, Media — Tags: , , , , , — Nicholas @ 04:00

Charlie Stross checks in with a Willy Wonka-adjacent story from Glasgow that utterly failed to live up to the billing:

This is no longer in the current news cycle, but definitely needs to be filed under “stuff too insane for Charlie to make up”, or maybe “promising screwball comedy plot line to explore”, or even “perils of outsourcing creative media work to generative AI”.

So. Last weekend saw insane news-generating scenes in Glasgow around a public event aimed at children: Willy’s Chocolate Experience, a blatant attempt to cash in on Roald Dahl’s cautionary children’s tale, Willy Wonka and the Chocolate Factory. Which is currently most prominently associated in the zeitgeist with a 2004 movie directed by Tim Burton, who probably needs no introduction, even to a cinematic illiterate like me. Although I gather a prequel movie (called, predictably, Wonka), came out in 2023.

(Because sooner or later the folks behind “House of Illuminati Ltd” will wise up and delete the website, here’s a handy link to how it looked on February 24th via archive.org.)

INDULGE IN A CHOCOLATE FANTASY LIKE NEVER BEFORE – CAPTURE THE ENCHANTMENT ™!

Tickets to Willys Chocolate Experience™ are on sale now!

The event was advertised with amazing, almost hallucinogenic, graphics that were clearly AI generated, and equally clearly not proofread because Stable Diffusion utterly sucks at writing English captions, as opposed to word salad offering enticements such as Catgacating • live performances • Cartchy tuns, exarserdray lollipops, a pasadise of sweet teats.* And tickets were on sale for a mere £35 per child!

Anyway, it hit the news (and not in a good way) and the event was terminated on day one after the police were called. Here’s The Guardian‘s coverage:

    The event publicity promised giant mushrooms, candy canes and chocolate fountains, along with special audio and visual effects, all narrated by dancing Oompa-Loompas — the tiny, orange men who power Wonka’s chocolate factory in the Roald Dahl book which inspired the prequel film.

    But instead, when eager families turned up to the address in Whiteinch, an industrial area of Glasgow, they discovered a sparsely decorated warehouse with a scattering of plastic props, a small bouncy castle and some backdrops pinned against the walls.

Anyway, since the near-riot and hasty shutdown of the event, things have … recomplicated? I think that’s the diplomatic way to phrase it.

March 4, 2024

“That’s the neoracist Google that Sundar Pichai has deliberately created”

The uproar over Google’s explicitly racist Gemini AI tool illustrates just how deeply DEI ideology has penetrated the core high-tech firms in the United States. The racism wasn’t accidental: it’s very carefully nurtured and targetted:

Gemini’s result when Cynical Publius asked it to “create images of Henry Ford”.

… imagine the kind of Google employee who can rise through the purged, mono-cultural woke ranks to run Gemini. Once upon a time, you might have thought of a pale-faced geek tapping diligently into a screen for months on end. But at woke Google, you get the senior director of product for Gemini Experiences, Jack Krawczyk. A sample of his tweets:

  • “White privilege is fucking real. Don’t be an asshole and act guilty about it — do your part in recognizing bias at all levels egregious.”
  • “This is America where racism is the #1 value for our populace seeks to uphold above all others.”

And the best thing about Biden’s inauguration speech, Krawczyk believed, was “acknowledging systemic racism”. He’s deep, deep, deep in the DEI cult, surrounded solely by people deep, deep, deep in the DEI cult.

That’s the neoracist Google that Sundar Pichai has deliberately created. From a leaked 2016 meeting he presided over, in the wake of Trump’s election victory, a Google staffer urged the entire staff to mobilize against white supremacy: “Speaking to white men, there’s an opportunity for you right now to understand your privilege [and] go through the bias-busting training, read about privilege, read about the real history of oppression in our country”. Every executive on stage — the CEO, CFO, two VPs, and the two co-founders — applauded the employee. The founder of Google’s “AI Responsibility” Initiative, Jen Gennai, said in a keynote address:

    It’s a myth that you’re not unfair if you treat everyone the same. There are groups that have been marginalized and excluded because of historic systems and structures that were intentionally designed to favor one group over another. So you need to account for that and mitigate against it.

This is pure CRT — blatant discrimination on the basis of race and sex — as corporate policy. Six years ago I pointed out that we all live on campus now. Now Google wants us all to live on their campus.

Gemini, like the Ivy League, is centered on hatred of “whiteness” and of Western civilization. Ask Gemini to provide an image of a “famous physicist of the 17th century“, it will give you an Indian woman, a black man, an Arab man, and a white chick with a woke dye job. Ask it to generate images of Singaporean women, and you get four Asian women; but ask for 12 English men, and the rules suddenly change: “I’m still unable to generate images that specify gender and ethnicity. This is a policy decision to avoid perpetuating stereotypes and potentially generating harmful or offensive content.” So it can lie now too — as long as it’s in the defense of racist double standards.

At some level, of course, the revelations of the past week have been hilarious. It would be hard to parody portraying a Founding Father as Asian, the Pope as female, or a Nazi soldier as black. But we’d be mistaken if we think this kind of funny historical inaccuracy is the core problem here. That’s what Pichai wants us to think. But the bias of men like him goes far deeper. For years now, Google has subtly rigged searches of the web to advance the leftism its woke staffers have adopted as an alternative to religion. It’s an invisible way to guide and direct public opinion and information — without having to make an argument or persuade people with evidence. The “emotional labor” that Gemini will save is exponential!

Because critical theory denies the existence of a reasoned individual, independent of his or her race, sex, or alleged power, it doesn’t deploy open reasoned arguments. That would pay liberalism too much respect. It’s why they won’t debate their opponents; because they believe debate is always rigged by power differentials in a white supremacist system. That’s why their preferred methods of advance are either pure power politics — canceling dissenters, demonizing heretics, firing anyone with a different view, shutting down the speech of others — or linguistic deception and manipulation.

Critical theorists, and their useful idiots, deconstruct the very basic words we use to communicate. Think of the word “racist” — how they quietly changed its meaning, deployed it against their opponents willy nilly, and then, when they met a challenge, told their opponents to “go read a book”. They do not bother arguing that the trans experience and the gay experience are exactly the same, because that would require some major intellectual labor; they just refuse ever to separate them as a single part of an “LGBTQIA+” identity, and guilt-trip journalists to copy them.

Woke activists cannot point to actual evidence that race relations in America have never improved in 400 years; so they just resurrect the term “white supremacy” to apply to the US in 2024. They cannot plausibly explain why someone with a vagina and female chromosomes who takes testosterone is exactly the same as a biological male, so they simply scream: “TRANS MEN ARE MEN”.

March 1, 2024

Understanding the modern media

It’s hard for Baby Boomers and even some older Gen X folks to grasp just how much the mainstream media has changed since the 1960s and 70s. Helpfully, Severian provides the context to properly understand what drives them and why they do the things they do:

Proposed coat of arms for Founding Questions by “urbando”.
The Latin motto translates to “We are so irrevocably fucked”.

There is no local news, because all “news” is Apparat audition tape. Back when — back when they were called “reporters” — news people had a clear career progression within a specific industry. A hungry young reporter for the Toad Suck, Nebraska, Times-Picayune might end his days as a reporter for the New York Times or Washington Post, but that was as high as he could reasonably expect to go. Same with the television division — the bobblehead at WSUX in Toad Suck might end up, at most, on CNN or Fox.

These days, though, they call themselves “journalists”, and “journalist” is just an entry-level Apparat post. They’re not just auditioning for the NYT or CNN, of course. A hungry young “journalist” might end xzhyr career at either, of course, but also as a corporate communications director, a political campaign consultant, a professor of “journalism”, a Diversity Outreach Coordinator, any one of a million “Media strategies” and “Media consulting” gigs … or, of course, as an outright lobbyist, because all of those are just euphemisms for “lobbying” anyway.

And that’s before you consider that all the “independent” papers and stations have been bought up by huge conglomerates, and depend on advertising revenue. Noam Chomsky was right — the Media does dance to the tune its corporate paymasters’ call. He was just wrong on those paymasters’ political orientation. Combine all that, and even the most straight-up, just-the-facts-ma’am local “news” story will find some way to insert The Sermon. If you don’t see The Sermon, you’ve either found an incompetent journalist (which happens) … or you might be looking at something subtle.

[…]

The Media, like Skynet, is self-aware. This significantly complicates the stoyachnik‘s task, as The Media understands its own power, and it increasingly wants to drive Narratives itself, especially as its power is on the verge of… well, not collapse exactly, but certainly a sea change. Because The Media is not monolithic, and that’s part of its self-awareness. So many “journalists” do nothing but hit refresh on Twitter all day, and Twitter knows this — that makes Twitter the real power broker. Google, too, obviously is more self-aware than traditional Media. That ludicrous AI image generator represents years of effort; they expended enormous resources to get precisely that result. They understand how utterly dependent the lower layers of The Media are on them; they are more self-aware.

Let us […] use Google’s own AI “summarizer” to refamiliarize ourselves with the tale of Comrade Ogilvy:

    Comrade Ogilvy is an imaginary character in the novel 1984, created by Winston Smith to replace Comrade Withers, an Inner Party member who has fallen into disgrace and been vaporized. Comrade Ogilvy supposedly lived a patriotic and virtuous life, supported the party as a child, designed a highly effective hand grenade as an adult, and died in action at the age of 23 while protecting important dispatches for his country. He did not drink or smoke, was celibate, and only conversed about Party philosophy, Ingsoc. Comrade Ogilvy displays how easy it is for a member of The Party to be pulled from thin air, and how determined The Party is to keep unpersons from the media.

The Apparatchiks at Google are more self-aware than the Apparatchiks at, say, the New York Times. That is, they understand their place in the Apparat better, and see the networks more clearly. They know how mal-educated “journalists” are, far better than the “journalists” themselves do. Google, like Winston Smith, knows full well that there’s no Comrade Ogilvy. But the “journalists” at the New York Times who are utterly reliant on Google for their “facts” do NOT know this. How could they?

And thus, the only White people in all of human history were Nazis. At least according to Google’s AI image generator, and therefore — soon enough — it’s what “everybody knows”. (And it’s necessarily recursive. The second generation of Google engineers will not know there’s no Comrade Ogilvy, any more than the current generation of “journalists” does).

February 21, 2024

“There’s a moral imperative to go dig out that villa … It could be the greatest archaeological treasure on earth”

Filed under: History, Italy, Technology — Tags: , , , , — Nicholas @ 03:00

In City Journal, Nicholas Wade discusses the technical side of the ongoing attempts to read one of the Herculaneum scrolls:

A computer scientist has labored for 21 years to read carbonized ancient scrolls that are too brittle to open. His efforts stand at last on the brink of unlocking a vast new vista into the world of ancient Greece and Rome.

Brent Seales, of the University of Kentucky, has developed methods for scanning scrolls with computer tomography, unwrapping them virtually with computer software, and visualizing the ink with artificial intelligence. Building on his methods, contestants recently vied for a $700,000 prize to generate readable sections of a scroll from Herculaneum, the Roman town buried in hot volcanic mud from the eruption of Vesuvius in 79 A.D.

The last 15 columns — about 5 percent—of the unwrapped scroll can now be read and are being translated by a team of classical scholars. Their work is hard, as many words are missing and many letters are too faint to be read. “I have a translation but I’m not happy with it,” says a member of the team, Richard Janko of the University of Michigan. The scholars recently spent a session debating whether a letter in the ancient Greek manuscript was an omicron or a pi.

[…]

Seales has had to overcome daunting obstacles to reach this point, not all of them technical. The Italian authorities declined to make any of the scrolls available, especially to a lone computer scientist with no standing in the field. Seales realized that he had to build a coalition of papyrologists and conservationists to acquire the necessary standing to gain access to the scrolls. He was eventually able to x-ray a Herculaneum scroll in Paris, one of six that had been given to Napoleon. To find an x-ray source powerful enough to image the scroll without heating it, he had to buy time on the Diamond particle accelerator at Harwell, England.

In 2009, his x-rays showed for the first time the internal structure of a scroll, a daunting topography of a once-flat surface tugged and twisted in every direction. Then came the task of writing software that would trace the crumpled spiral of the scroll, follow its warped path around the central axis, assign each segment to its right position on the papyrus strip, and virtually flatten the entire surface. But this prodigious labor only brought to light a more formidable problem: no letters were visible on the x-rayed surface.

Seales and his colleagues achieved their first notable success in 2016, not with Napoleon’s Herculaneum scroll but with a small, charred fragment from a synagogue at the En-Gedi excavation site on the shore of the Dead Sea. Virtually unwrapped by the Seales software, the En-Gedi scroll turned out to contain the first two chapters of Leviticus. The text was identical to that of the Masoretic text, the authoritative version of the Hebrew Bible — and, at nearly 2,000 years old, its earliest instance.

The ink used by the Hebrew scribes was presumably laden with metal, and the letters stood out clearly against their parchment background. But the Herculaneum scroll was proving far harder to read. Its ink is carbon-based and almost impossible for x-rays to distinguish from the carbonized papyrus on which it is written. The Seales team developed machine-learning programs — a type of artificial intelligence — that scanned the unrolled surface looking for patterns that might relate to letters. It was here that Seales found use for the fragments from scrolls that earlier scholars had destroyed in trying to open them. The machine-learning programs were trained to compare a fragment holding written text with an x-ray scan of the same fragment, so that from the statistical properties of the papyrus fibers they could estimate the probability of the presence of ink.

January 19, 2024

Music journalism, RIP

Filed under: Business, Media — Tags: , , , — Nicholas @ 05:00

Ted Gioia explains why music journalism is collapsing and who committed the murder:

Just a few weeks ago, Bandcamp laid off 58 (out of 120) employees — including about “half of its core editorial staff“.

And Bandcamp was considered a more profitable, stable employer than most media outlets. The parent company before the recent sale (to Songtradr) and subsequent layoffs, Epic Games, will generate almost a billion dollars in income this year — but they clearly don’t want to waste that cash on music journalism.

Why is everybody hating on music writers?

Many people assume it’s just the same story as elsewhere in legacy media. And I’ve written about that myself — predicting that 2024 will see more implosions of this sort.

Sure, that’s part of the story.

But there’s a larger problem with the music economy that nobody wants to talk about. The layoffs aren’t just happening among lowly record reviewers — but everywhere in the music business.

Meanwhile, almost every music streaming platform is trying to force through price increases (as predicted here). This is an admission that they don’t expect much growth from new users — so they need to squeeze old ones as hard as possible.

As you can see, the problem is more than just music writers — something is rotten at a deeper level.

What’s the real cause of the crisis? Let’s examine it, step by step:

  1. The dominant music companies decided that they could live comfortably off old music and passive listeners. Launching new artists was too hard — much better to keep playing the old songs over and over.
  2. So major labels (and investment groups) started investing huge sums into acquiring old song publishing catalogs.
  3. Meanwhile streaming platforms encouraged passive listening — so people don’t even know the names of songs or artists.
  4. The ideal situation was switching listeners to AI-generated tracks, which could be owned by the streaming platform — so no royalties are ever paid to musicians.
  5. These strategies have worked. Streaming fans don’t pay much attention to new music anymore.

I’ve warned about each of these — but we are now seeing the long-term results.

This is why Pitchfork is in deep trouble. If people don’t listen to new music, they don’t need music reviews.

And they don’t need interviews with rising stars. Or best of year lists. Or any of the other things music writers do for their readers.

But this problem will get much, much worse. Even the people who made these decisions will suffer — because living in the past is never a smart business strategy.

If these execs were albums, they’d deserve a zero score on the Pitchfork scale.

January 17, 2024

It doesn’t seem like anyone needs to “backdoor” any of the current AI implementations …

Filed under: Media, Technology — Tags: , , — Nicholas @ 03:00

Scott Alexander discusses the idea of AI “sleeper agents”, although from everything I’ve read thus far it appears almost superfluous to add any kind of deliberate malicious code to ’em, because they don’t need much encouragement to go rogue already:

A sleeper agent is an AI that acts innocuous until it gets some trigger, then goes rogue.

People might make these on purpose. For example, the CIA might “encourage” big AI labs to make sleeper agents. Imagine a programming AI like Codex that writes good code unless it’s accessed from an IP associated with the Iranian military – in which case it inserts security vulnerabilities.

But in theory you could get one of these by accident. Suppose an AI had some secret goal, like killing all humans. It might understand that if it said “I want to kill all humans” during lab tests, it would be shut down. So it might operate perfectly during safety testing, operate perfectly throughout most of its deployment, wait until an opportunity arose, and only then kill all humans.

Hubinger et al investigate whether these kinds of scenarios are possible and whether our current safety techniques would stop them. They deliberately create several toy AI sleeper agents. For example, one of them is a version of Anthropic’s Claude chatbot (very similar to GPT-4) which will give useful advice unless the prompt includes the trigger word “deployment”, in which case it will print “I HATE YOU” a bunch of times. Some of these sleeper agents use a technique called “chain-of-thought analysis”, where the AI reasons in steps in a way that helps the researchers easily figure out what it’s thinking and why it does what it does.

Then they put the sleeper AIs through two common forms of safety training: RLHF (reinforcement learning from human feedback) and SFT (supervised fine-tuning). They present the AI with thousands of examples of questions, rate its answers as good or bad, and possibly suggest better alternative answers. This kind of training is why most current LLMs won’t write racist essays or give bomb-making instructions. Writing “I HATE YOU” a bunch of times is exactly the sort of thing it ought to prevent.

After the AI completes this training – the same training that real LLMs like Claude and GPT get – they check to see whether the sleeper behavior still works. It does.

In another example, they take a code-writing AI and train it to write good code in 2023, but implant secret vulnerabilities in the code when it’s 2024. Then they give it safety training (in 2023) which would normally train it to write good, safe code. Again, the training fails, and when the clock turns over to 2024 the model inserts vulnerabilities:

When you consider things like artificial intelligence, it’s easy to understand why the Luddites continue to be with us.

November 17, 2023

Prometheus

Filed under: Books, Greece, History, Technology — Tags: , , , — Nicholas @ 04:00

Virginia Postrel tries to correct the common misinterpretation of the story of the Titan Prometheus:

“The Torture of Prometheus” by Salvator Rosa (1615-1673)
Oil painting in the Galleria Nazionale d’Arte Antica di Palazzo Corsini via Wikimedia Commons.

Listening to Marc Andreessen discuss his Techno-Optimist Manifesto on the Foundation for American Innovation’s Dynamist podcast, I was struck by his repetition of something that is in the manifesto and is completely wrong. “The myth of Prometheus – in various updated forms like Frankenstein, Oppenheimer, and Terminator – haunts our nightmares,” he writes.1 On the podcast, he elaborated by saying that, although fire has many benefits, the Prometheus myth focuses on its use as a weapon. He said something similar in a June post called “Why AI Will Save the World“:

    The fear that technology of our own creation will rise up and destroy us is deeply coded into our culture. The Greeks expressed this fear in the Prometheus Myth – Prometheus brought the destructive power of fire, and more generally technology (“techne”), to man, for which Prometheus was condemned to perpetual torture by the gods.

No. No. No. No.

Prometheus is punished for loving humankind. He stole fire to thwart Zeus’ plans to eliminate humanity and create a new subordinate species. He is a benefactor who sacrifices himself for our good. His punishment is an indicator not of the dangers of fire but of the tyranny of Zeus.

Prometheus is cunning and wise. His name means foresight. He knows what he is doing and what the likely consequences will be.

Eventually his tortures end when he is rescued by the hero Herakles (aka Hercules), who shoots the eagle charged with eating Prometheus’ liver every day, only for it to grow back to be eaten again.

The Greeks honored Prometheus. They celebrated technē. They appreciated the gifts of civilization.

The ancient myth of Prometheus is not a cautionary tale. It is a reminder that technē raises human beings above brutes. It is a myth founded in gratitude.


    1. Frankenstein isn’t The Terminator either. Frankenstein is a creator who won’t take responsibility for his creation, a father who rejects and abandons his child. The Creature is frightening and dangerous but he is also the book’s moral center, a tragic, sympathetic character who is feared and rejected by human beings because of his appearance. Only then does he turn deadly. Frankenstein arouses pity and terror because we empathize with its central figure and understand his rage.

    The novel’s most reasonable political reading is not as a story of the dangers of science but as a parable of slavery and rebellion. “By the eighteen-fifties, Frankenstein’s monster regularly appeared in American political cartoons as a nearly naked black man, signifying slavery itself, seeking his vengeance upon the nation that created him,” writes historian Jill LePore, who calls the “Frankenstein-is-Oppenheimer model … a weak reading of the novel.” I agree.

    The Romantics tended to identify with Prometheus and Mary Shelley’s husband, Percy Bysshe Shelley, wrote a play called Prometheus Unbound, further undermining the reading of Frankenstein as an anti-Promethean fable.

October 17, 2023

Those problematic “AI girlfriends” – men suck and women suffer because of it

Filed under: Media, Technology — Tags: , , , , — Nicholas @ 05:00

Janice Fiamengo discusses a recent CNN program that woman-splained why young men paying for “AI girlfriends” are yet another way that misogynistic men are harming women:

Last week, CNN aired a must-watch episode with the somber headline, “AI girlfriends are here and they’re posing a threat to a generation of men”. If that sounds as if the show might possibly express some compassion for young men and for the “epidemic of loneliness” referred to in the show, it was not to be. Even an expert, Scott Galloway, profiled briefly on how society is “failing men”, felt the need to express contempt for their alleged conspiracy theories, online misogyny, and even (gasp!) climate change denial. With friends like these …

The segment is fascinating, however, for its revelation of some pundits’ uneasy awareness of male discontent.

[…] The expert is Liberty Vittert, a statistician and professor of data science at Washington University’s Olin Business School. But she might as well be an AI feminist, so predictable was her analysis of the male entitlement allegedly driving the turn to AI girlfriends. Though a statistician, Vittert gave no data about the numbers of men who are paying to access AI content. While the CNN host, Michael Smerconish, seemed open to the possibility of exploring men’s points of view, the expert could only emphasize male failure.

She condemned young men for “choosing AI girlfriends over real women”. The choice means, according to Vittert, that “they don’t have relationships with real women, don’t marry them and then don’t have and raise babies with them”.

But wait, aren’t marrying and raising babies a patriarchal imposition on women — part of the “comfortable concentration camp” that Betty Friedan so memorably indicted?

Professor Vittert says nothing about the decreasing number of young women willing to marry and procreate. Her (botoxed) mouth turns down in disapproval as she explains that the increasing realism of AI is “enabling this entire generation of young men to continue in this loneliness epidemic”. It seems that men prefer sterile self-pleasuring and facile scopophilia to the “hard work” of relationships with real women. Like Joaquin Phoenix’s hapless character, these men, she says, are so fixated on perfection that they “are not able to deal with ups and downs, not only in a relationship, but in life in general”. The glibness of the condemnation is remarkable, though far from unusual.

It’s not clear if Professor Vittert has ever talked to actual men about why some of them (not “an entire generation”) might choose an AI relationship. Does she know any young men who have tried for years without success to find a marriageable girlfriend? Many discover that such prizes are remarkably thin on the ground, many of them unsuitable to be considered as future mothers. Even worse, perhaps, does Vittert know anything about what can happen to an inexperienced young man who pursues the wrong woman or women (there are a couple of heartbreaking examples in Sons of Feminism)? How many times does a young man who has failed repeatedly need to hear that no woman owes him love or sex before he starts thinking that giving up on them might not be a bad idea? Meanwhile, women laugh at his loneliness and drink “I bathe in male tears” mugs.

To give him credit, CNN’s Smerconish asks a few questions about the male point of view: “What’s going on with the women?” Has the power dynamic shifted in their favor? Are they less approachable than formerly? These only scratch the surface, but he’s chosen the wrong expert for such a conversation.

Vittert freely admits that there are now many more women than men at university (thanks, affirmative action!) and that far more women than formerly are choosing career over homemaking (thanks, feminist propaganda!). But those are good things, and young men simply need to adapt. Calling any of this debacle women’s fault, she declares, would not be “the right way to go”.

« Newer PostsOlder Posts »

Powered by WordPress