Quotulatiousness

December 10, 2024

Microsoft has launched a publishing arm called “8080 Books” – AI-generated books anyone?

Filed under: Books, Business, Technology — Nicholas @ 03:00

Ted Gioia notices that Microsoft and other tech companies are moving into book publishing, likely as a way to generate some additional revenue from their vast investments in artificial intelligence ventures over the last several years:

I never expected Microsoft to enter the book business.

But on November 18, this huge tech company quietly announced that it is now a publisher. But there was an interesting twist.

Microsoft is “not currently accepting unsolicited manuscripts”.

Let’s be totally fair. Nobody at Microsoft claims that it plans to replace human writers with AI slop. But this company has invested a staggering $13 billion in AI — it’s their top priority as a corporation.

So what do you think their goals are in the book business?

If you’re looking for a clue, I note that Microsoft’s publishing arm is called 8080 Books. Yes, they named it after the 8080 microprocessor.

How charming!

And just a few hours after Microsoft announced this move, TikTok did the exact same thing.

According to The Bookseller:

    ByteDance, the company behind the video-sharing platform TikTok, has announced that it will start selling print books in bookshops from early next year, published under its imprint, 8th Note Press. 8th Note Press will work in partnership with Zando to publish print editions and sell copies in physical bookstores starting early 2025.

Here, too, nobody is claiming that they will replace humans with bots. But why would a company that has built its empire with online social media have any interest in the slow and stodgy business of selling printed books on paper?

Oh, by the way, TikTok’s parent is investing huge sums in AI. The company has even found a way around export controls on Nvidia chips. Just a few weeks before entering the book business, ByteDance’s sourcing of AI tech from Huawei was leaked to the press.

And as if these coincidences weren’t enough to alarm you, another AI publishing development happened at this same time — but (here too) with very little coverage in the media.

Tech startup Spines raised $16 million in seed financing for an AI publishing business that aims to release 8,000 books per year.

Here, too, the company says that it wants to support human writers. Maybe it will run a new kind of vanity publishing business. But is that a sufficient lure to attract $16 million in seed financing?

It’d be a rearguard action, but it’d be nice to have a requirement that publishers disclose when published works are partly or wholly AI-extruded, wouldn’t it? It would certainly help me to avoid buying books or magazines where AI hallucinations may occur in key sections …

October 26, 2024

Secrets of the Herculaneum Papyri

toldinstone
Published Jul 5, 2024

The Herculaneum papyri, scrolls buried and charred by Vesuvius, are the most tantalizing puzzle in Roman archaeology. I recently visited the Biblioteca Nazionale in Naples, where most of the papyri are kept, and discussed the latest efforts to decipher the scrolls with Dr. Federica Nicolardi.
My interview with Dr. Nicolardi: https://youtu.be/gs1Z-YN1aQM

Chapters:
0:00 Introduction
0:40 Opening the scrolls
1:47 A visit to the library
2:38 A papyrologist at work
4:00 The Vesuvius Challenge
4:46 What the scrolls say
5:40 Contents of the unopened scrolls
6:28 The other library
7:12 An interview with Dr. Nicolardi

October 4, 2024

You know the jig is up for “renewables” when even Silicon Valley techbros turn against it

JoNova on the remarkably quick change of opinion among the big tech companies on the whole renewable energy question:

Google, Oracle, Microsoft were all raving fans of renewable energy, but all of them have given up trying to reach “net zero” with wind and solar power. In the rush to feed the baby AI gargoyle, instead of lining the streets with wind turbines and battery packs, they’re all suddenly buying, building and talking about nuclear power. For some reason, when running $100 billion data centres, no one seems to want to use random electricity and turn them on and off when the wind stops. Probably because without electricity AI is a dumb rock.

In a sense, AI is a form of energy. The guy with the biggest gigawatts has a head start, and the guy with unreliable generators isn’t in the race.

It’s all turned on a dime. It was only in May that Microsoft was making the “biggest ever renewable energy agreement” in order to power AI and be carbon neutral. Ten minutes later and it’s resurrecting the old Three Mile Island nuclear plant. Lucky Americans don’t blow up their old power plants.

Oracle is building the world’s largest data centre and wants to power it with three small modular reactors. Amazon Web Services has bought a data centre next to a nuclear plant, and is running job ads for a nuclear engineer. Recently, Alphabet CEO Sundar Pichai spoke about small modular reactors. The chief of OpenAI also happens to chair the boards of two nuclear start-ups.

September 16, 2024

Stephen Fry on artificial intelligence

Filed under: Books, History, Technology — Nicholas @ 04:00

On his Substack, Stephen Fry has posted the text of remarks he made last week in a speech for King’s College London’s Digital Futures Institute:

Yes, I’m still 12

So many questions. The first and perhaps the most urgent is … by what right do I stand before you and presume to lecture an already distinguished and knowledgeable crowd on the subject of Ai and its meaning, its bright promise and/or/exclusiveOR its dark threat? Well, perhaps by no greater right than anyone else, but no lesser. We’ll come to whose voices are the most worthy of attention later.

I have been interested in the subject of Artificial Intelligence since around the mid-80s when I was fortunate enough to encounter the so-called father of Ai, Marvin Minsky and to read his book The Society of Mind. Intrigued, I devoured as much as I could on the subject, learning about the expert systems and “bundles of agency” that were the vogue then, and I have followed the subject with enthusiasm and gaping wonder ever since. But, I promise you, that makes me neither expert, sage nor oracle. For if you are preparing yourselves to hear wisdom, to witness and receive insight this evening, to bask and bathe in the light of prophecy, clarity and truth, then it grieves me to tell you that you have come to the wrong shop. You will find little of that here, for you must know that you are being addressed this evening by nothing more than an ingenuous simpleton, a naive fool, a ninny-hammer, an addle-pated oaf, a dunce, a dullard and a double-dyed dolt. But before you streak for the exit, bear in mind that so are we all, all of us bird-brained half-wits when it comes to this subject, no matter what our degrees, doctorates and decades of experience. I can perhaps congratulate myself, or at least console myself, with the fact that I am at least aware of my idiocy. This is not fake modesty designed to make me come across as a Socrates. But that great Athenian did teach us that our first step to wisdom is to realise and confront our folly.

I’ll come to the proof of how and why I am so boneheaded in a moment, but before I go any further I’d like to paint some pictures. Think of them as tableaux vivants played onto a screen at the back of your mind. We’ll return to them from time to time. Of course I could have generated these images from Midjourney or Dall-E or similar and projected them behind me, but the small window of time in which it was amusing and instructive for speakers to use Ai as an entertaining trick for talks concerning Ai has thankfully closed. You’re actually going to have to use your brain’s own generative latent diffusion skills to summon these images.

[…]

An important and relevant point is this: it wasn’t so much the genius of Benz that created the internal combustion engine, as that of Vladimir Shukhov. In 1892, the Russian chemical engineer found a way of cracking and refining the spectrum of crude oil from methane to tar, yielding, amongst other useful products, gasoline. It was just three years after that that Benz’s contraption spluttered into life. Germans, in a bow to this, still call petrol Benzin. John D. Rockefeller built his refineries and surprisingly quickly there was plentiful fuel and an infrastructure to rival the stables and coaching inns; the grateful horse meanwhile could be happily retired to gymkhanas, polo and royal processions.

Benz’s contemporary Alexander Graham Bell once said of his invention, the telephone, “I don’t think I am being overconfident when I say that I truly believe that one day there will be a telephone in every town in America”. And I expect you all heard that Thomas Watson, the founding father of IBM, predicted that there might in the future be a world market for perhaps five digital computers.

Well, that story of Thomas Watson ever saying such a thing is almost certainly apocryphal. There’s no reliable record of it. Ditto the Alexander Graham Bell remark. But they circulate for a reason. The Italians have a phrase for that: se non è vero, è ben trovato. “If it’s not true, it’s well founded.” Those stories, like my scenario of that group of early investors and journalists clustering about the first motorcar, illustrate an important truth: that we are decidedly hopeless at guessing where technology is going to take us and what it’ll do to us.

You might adduce as a counterargument Gordon Moore of Intel expounding in 1965 his prediction that semiconductor design and manufacture would develop in such a way that every eighteen months or so they would be able to double the number of transistors that could fit in the same space on a microchip. “He got that right,” you might say, “Moore’s Law came true. He saw the future.” Yes … but. Where and when did Gordon Moore foresee Facebook, TikTok, YouTube, Bitcoin, OnlyFans and the Dark Web? It’s one thing to predict how technology changes, but quite another to predict how it changes us.
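
The arithmetic behind that prediction is easy to run for yourself. A back-of-the-envelope sketch in Python, assuming a clean eighteen-month doubling (Moore’s own figure wandered between one and two years):

    # Growth implied by a strict eighteen-month doubling of transistor counts.
    def doublings(years, months_per_doubling=18):
        return years * 12 / months_per_doubling

    for years in (10, 30, 60):
        factor = 2 ** doublings(years)
        print(f"after {years} years: roughly {factor:,.0f}x the transistors")

Ten years buys a factor of about a hundred; sixty years, about a trillion. The curve was the predictable part; what it would do to us was not.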

Technology is a verb, not a noun. It is a constant process, not a settled entity. It is what the philosopher-poet T. E. Hulme called a concrete flux of interpenetrating intensities; like a river it is ever cutting new banks, isolating new oxbow lakes, flooding new fields. And as far as the Thames of Artificial Intelligence is concerned, we are still in Gloucestershire, still a rivulet not yet a river. Very soon we will be asking round the dinner table, “Who remembers ChatGPT?” and everyone will laugh. Older people will add memories of dot matrix printers and SMS texting on the Nokia 3310. We’ll shake our heads in patronising wonder at the past and its primitive clunkiness. “How advanced it all seemed at the time …”

Those of us who can kindly be designated early adopters and less kindly called suckers remember those pioneering days with affection. The young internet was the All-Gifted, which in Greek is Pandora. Pandora in myth was sent down to earth having been given by the gods all the talents. Likewise the Pandora internet: a glorious compendium of public museum, library, gallery, theatre, concert hall, park, playground, sports field, post office and meeting hall.

September 10, 2024

“The world has gone mad. But nothing is as crazy as the AI news”

Filed under: Media, Technology — Nicholas @ 03:00

Ted Gioia is covering the AI beat like nobody else. In this post he shares several near-term predictions involving AI development and deployment:

The world has gone mad. But nothing is as crazy as the AI news.

Every day those AI bots and their human posse of true believers get wilder and bolder — and recently they’ve been flexing like body builders on Muscle Beach.

The results are sometimes hard to believe. But all this is true:

We truly live in interesting times — which is one of the three apocryphal Chinese curses.

(The other two, according to Terry Pratchett, are: “May you come to the attention of those in authority” and “May the gods give you everything you ask for”. By tradition, the last is the most dangerous of all.)

I get some credit for anticipating this. On August 4, I made the following prediction:

But it’s going to get even more interesting, and very soon. That’s because the next step in AI has arrived — the unleashing of AI agents.

And like the gods, these AI agents will give us everything we ask for.

Up until now, AI was all talk and no action. These charming bots answered your questions, and spewed out text, but were easy to ignore.

That’s now changing. AI agents will go out in the world and do things. That’s their new mission.

It’s like giving unreliable teens the keys to the family car. Up until now we’ve just had to deal with these resident deadbeats talking back, but now they are going to smash up everything in their path.

But AI agents will be even worse than the most foolhardy teen. That’s because there will be millions of these unruly bots on our digital highways.

August 21, 2024

The Great Enshittification – technological progress is now actually regressing

Ted Gioia provides ten reasons why all our lovely shiny technological improvements have turned into a steady stream of enshittified “updates” that reduce functionality, add unwanted “improvements” and make things significantly less reliable:

By my measure, this reversal started happening around a decade ago. If I had to sum things up in a conceptual chart, it would look like this:

The divergence was easy to ignore at first. We’re so familiar with useful tech that many of us were slow to notice when upgrades turned into downgrades.

But the evidence from the last couple years is impossible to dismiss. And we can’t blame COVID (or other extraneous factors) any longer. Technology is increasingly making matters worse, not better — and at an alarming pace.

[…]

But I have avoided answering, up until now, the biggest question — which is why is this happening?

Or, to be more specific, why is this happening now?

Until recently, most of us welcomed innovation, but something changed. And now a huge number of people are anxious and fearful about the same tech companies they once trusted.

What caused this shift?

That’s a big issue. Unless we understand how things went wrong, we can’t begin to fix them. Otherwise we’re just griping — about bad software or greedy CEOs or whatever.

It’s now time to address the causes, not just complain about symptoms.

Once we do that, we can move to the next steps, namely outlining a regimen for recovery and an eventual cure.

So let me try to lay out my diagnosis as clearly as I can. Below are the ten reasons why tech is now breaking bad.

I apologize in advance for speaking so bluntly. Many will be upset by my frankness. But the circumstances — and the risks involved — demand it.

August 5, 2024

Short-term technological forecast – “If I were a commercial pilot, I’d tell you to return to your seats and buckle up”

Most of this Ted Gioia post is behind the paywall (and if you can afford it, I’m sure you’d get your money’s worth for a subscription):

I anticipate extreme turbulence on every front for the remaining five months in 2024. You will see it in politics, business, economics, culture, world affairs, the stock market, and maybe even your own neighborhood.

That’s one of the themes of my latest arts and culture update below.

What happened to the AI business model last week?

After almost two years of hype, the media changed its opinion on AI last week.

The hype disappeared almost overnight

All of a sudden, news articles about AI went sour like reheated 7-Eleven coffee. The next generation AI chips are delayed, and 70% of companies are behind in their AI plans. There are good reasons for this — most workers now say AI makes them less productive.

People are also noticing that AI businesses want to use the entire electricity grid to run their money-losing bots. Meanwhile AI companies are burning through cash at historic levels. Even under the best case scenario, this all feels unsustainable.

But the worst disclosure, in my opinion, came on July 24 — just eleven days ago.

A study published in Nature showed that when AI-generated output is used to train AI, the results collapse into gibberish.

This is a huge issue. AI garbage is now everywhere in the culture, and most of it undisclosed. So there’s no way that AI companies can remove it from future training inputs.

They are caught in the doom loop I described last week.
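
The dynamic behind that collapse can be caricatured in a few lines. The Nature study illustrates the effect with simple distributions, and a toy version (a sketch of the feedback loop only, not of any real training pipeline) looks like this:

    import random, statistics

    # Toy "model collapse": fit a Gaussian to data, sample a small batch
    # from the fitted model, refit on those samples, repeat. Small samples
    # underrepresent the tails, so the fitted spread tends to drift toward zero.
    random.seed(1)
    data = [random.gauss(0, 1) for _ in range(10)]
    for generation in range(31):
        mu, sigma = statistics.fmean(data), statistics.stdev(data)
        data = [random.gauss(mu, sigma) for _ in range(10)]
        if generation % 10 == 0:
            print(f"generation {generation:2d}: sigma = {sigma:.3f}")

Each generation trains only on the previous generation’s output, which is exactly the situation the doom loop describes.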

That same day, the Chief Investment Officer at Morgan Stanley warned investors that AI “hasn’t really driven revenues and earnings anywhere”. One day later, Goldman Sachs quietly released a report admitting that the AI business model was in serious trouble.

Even consulting firms, who make a bundle hyping this tech, are backtracking. Bain recently shared the following chart (hidden away at the end of a report) which explains why AI projects have failed.

These findings are revealing. They show that management is absolutely committed to AI, but the tools just don’t deliver.

And, finally, last week the media noticed all this.

They published dozens of panic-stricken articles. Investors got spooked too — shifting from greed to fear in a New York minute. Over the course of just two days, Nvidia’s stock lost around $400 billion in market capitalization.

In this environment, true believers quickly turn into skeptics. The whole AI business model gets scrutinized — and if it doesn’t hold up, investment cash flow dries up very quickly.

This is exactly what I predicted 6 months ago. Or even a year ago.

I expect that the next few weeks — or maybe even the next few days — will be extremely turbulent in the AI world.

Buckle up!


The dominant AI music company just admitted that it trained its bot on “essentially all music files on the Internet”.

Suno is a huge player in AI music — it tells investors it will generate $120 billion per year. Microsoft is already using its technology.

But there’s a tiny catch.

The company now admits in a court filing:

    Suno’s training data includes essentially all music files of reasonable quality that are accessible on the open internet, abiding by paywalls, password protections, and the like, combined with similarly available text descriptions

Hey, this is totally illegal — it’s like Napster all over again.

Suno will need to prove that all these copyrighted songs are “fair use” in AI training. I doubt that any court will take that claim seriously.

If the music industry is smart, they will use this violation to shut down AI regurgitation of copyrighted songs.

If the music industry is stupid — run according to my “idiot nephew theory” — they will drop charges in exchange for some quick cash.

June 20, 2024

“Surely the only way to defeat racism and homophobia is to treat ethnic and sexual minorities as incapable of high achievement and in need of a leg up from their betters?”

Filed under: Business, Politics, USA — Nicholas @ 03:00

Andrew Doyle on a radical new approach to hiring that might just catch on:

With the inexorable spread of DEI – Diversity, Equity and Inclusion – across the western world, it’s refreshing to see at least one major company resist the decrees of this new religion. This is precisely what happened this week when Scale, an Artificial Intelligence company based in San Francisco, launched a new policy to ensure that its employees were hired on the basis of – wait for it – being the most talented and best qualified for the job.

This innovation, which sees race, gender and sexuality as irrelevant when it comes to hiring practices, should hardly be considered revolutionary. And yet in a world in which the content of one’s character is less important than the colour of one’s skin, to treat everyone equally irrespective of these immutable characteristics is suddenly deemed radical.

Scale’s CEO, Alexandr Wang, explained that rather than adopt DEI policies, the company would henceforth favour MEI, which stands for Merit, Excellence, and Intelligence. He explained the thinking behind the new scheme in a post on X.

    There is a mistaken belief that meritocracy somehow conflicts with diversity. I strongly disagree. No group has a monopoly on excellence. A hiring process based on merit will naturally yield a variety of backgrounds, perspectives, and ideas. Achieving this requires casting a wide net for talent and then objectively selecting the best, without bias in any direction. We will not pick winners and losers based on someone being the “right” or “wrong” race, gender, and so on. It should be needless to say, and yet it needs saying: doing so would be racist and sexist, not to mention illegal. Upholding meritocracy is good for business and is the right thing to do.

One can already hear the likes of Robin DiAngelo and Alexandria Ocasio-Cortez screaming in fury at this blatant implementation of good old-fashioned liberal values. Surely the only way to defeat racism and homophobia is to treat ethnic and sexual minorities as incapable of high achievement and in need of a leg up from their betters?

It is instructive to compare reactions from the Twittersphere (now X) and Instagram, as one X user has done. If nothing else, the comparison reveals how the divide in the culture war is playing out on social media since Elon Musk’s takeover. On X, major figures in the corporate world such as Tobias Lütke (CEO of Shopify), Palmer Luckey (founder of Oculus VR) and Musk himself have congratulated Wang on his new initiative.

By contrast, here are some of the responses on Instagram:

    You’re ‘disrupting’ current hard-fought standards you don’t like, by reverting to a system rooted in bias and inequality that asks less of you as a hiring manager and as a leader
    – Dan Couch (He/Him)

    Curious to see how hiring processes can effectively (and objectively) measure one’s ‘merit’, ‘excellence’, and ‘intelligence’, all of which are very subjective terms
    – Cole Gawin (He/Him)

    What is merit and how do we measure it?
    – Rio Cruz Morales (They/Them)

    This sounds a lot like excuse making for casting off DEI principles
    – R.C. Rondero De Mosier (He/Him)

The pronouns, of course, signify membership of the cult, and so we should not be surprised to see the sentiments of its minions mirroring each other so closely. What Wang is proposing, of course, builds equality into the hiring system and, contrary to these complaints, it is entirely possible to measure merit objectively. This, after all, is the entire point of academic assessment. The arguments against merit can only be sustained if one presupposes that systemic inequalities are ingrained within society, that all of these relate to the concept of group identity, and that adjustments have to be made accordingly to guarantee equality of outcome.

June 9, 2024

Microsoft’s latest ploy to be the most hated tech company

Filed under: Media, Technology, USA — Nicholas @ 03:00

Charles Stross wonders if Microsoft’s CoPilot+ is actually a veiled suicide attempt by the already much-hated software giant:

The breaking tech news this year has been the pervasive spread of “AI” (or rather, statistical modeling based on hidden layer neural networks) into everything. It’s the latest hype bubble now that Cryptocurrencies are no longer the freshest sucker-bait in town, and the media (who these days are mostly stenographers recycling press releases) are screaming at every business in tech to add AI to their product.

Well, Apple and Intel and Microsoft were already in there, but evidently they weren’t in there enough, so now we’re into the silly season with Microsoft’s announcement of CoPilot plus Recall, the product nobody wanted.

CoPilot+ is Microsoft’s LLM-based add-on for Windows, sort of like 2000’s Clippy the Talking Paperclip only with added hallucinations. Clippy was rule-based: a huge bundle of IF … THEN statements hooked together like a 1980s Expert System to help users accomplish what Microsoft believed to be common tasks, but which turned out to be irritatingly unlike anything actual humans wanted to accomplish. Because CoPilot+ is purportedly trained on what users actually do, it looked plausible to someone in marketing at Microsoft that it could deliver on “help the users get stuff done”. Unfortunately, human beings assume that LLMs are sentient and understand the questions they’re asked, rather than being unthinking statistical models that cough up the highest probability answer-shaped object generated in response to any prompt, regardless of whether it’s a truthful answer or not.
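
The contrast Stross draws is easy to sketch. The snippet below is purely illustrative (the names and data are invented, and bear no relation to either product’s actual code): a Clippy-style rule fires only when its hand-written condition matches, while the LLM-style guesser simply returns the most frequent answer-shaped object in its training data:

    from collections import Counter

    # Rule-based, Clippy-style: hand-written IF ... THEN triggers.
    def clippy(text):
        if text.lower().startswith("dear"):
            return "It looks like you're writing a letter. Would you like help?"
        return None  # no rule matched, so stay quiet

    # Statistical, LLM-style: emit the highest-probability answer seen in
    # training data, with no notion of whether it happens to be true.
    seen_answers = Counter({"Yes, that is correct.": 9, "I don't know.": 1})
    def llm_ish(prompt):
        return seen_answers.most_common(1)[0][0]

    print(clippy("Dear Sir or Madam"))  # the rule fires
    print(llm_ish("Is this true?"))     # the most probable answer, not the most truthful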

Anyway, CoPilot+ is also a play by Microsoft to sell Windows on ARM. Microsoft don’t want to be entirely dependent on Intel, especially as Intel’s share of the global microprocessor market is rapidly shrinking, so they’ve been trying to boost Windows on ARM to orbital velocity for a decade now. The new CoPilot+ branded PCs going on sale later this month are marketed as being suitable for AI (spot the sucker-bait there?) and have powerful new ARM processors from Qualcomm, which are pitched as “Macbook Air killers”, largely because they’re playing catch-up with Apple’s M-series ARM-based processors in terms of processing power per watt and having an on-device coprocessor optimized for training neural networks.

Having built the hardware and the operating system, Microsoft faces the inevitable question: why would a customer want this stuff? And being Microsoft, they took the first answer that bubbled up from their in-company echo chamber and pitched it at the market as a forced update to Windows 11. And the internet promptly exploded.

First, a word about Apple. Apple have been quietly adding AI features to macOS and iOS for the past several years. In fact, they got serious about AI in 2015, and every Apple Silicon processor they’ve released since 2016 has had a neural engine (an AI coprocessor) on board. Now that the older phones and laptops are hitting end of life, the most recent operating system releases are rolling out AI-based features. For example, there’s on-device OCR for text embedded in any image. There’s a language translation service for the OCR output, too. I can point my phone at a brochure or menu in a language I can’t read, activate the camera, and immediately read a surprisingly good translation: this is an actually useful feature of AI. (The ability to tag all the photos in my Photos library with the names of people present in them, and to search for people, is likewise moderately useful: the jury is still out on the pet recognition, though.) So the Apple roll-out of AI has so far been uneventful and unobjectionable, with a focus on identifying things people want to do and making them easier.

Microsoft Recall is not that.

June 3, 2024

The “hallucination” problem that bedevils all current AI implementations

Filed under: Media, Technology — Nicholas @ 05:00

Andrew Orlowski explains the one problem shared among all of the artificial intelligence engines currently available to the general public:

Gemini’s ultra-woke responses to requests quickly became a staple of social media postings.

AI Overviews hasn’t had the effect that Google hoped for, to say the least. It has certainly garnered immediate internet virality, with people sharing their favourite answers. Not because these are helpful, but because they are so laughable. For instance, when you ask AI Overviews for a list of fruits ending with “um” it returns: “Applum, Strawberrum and Coconut”. This is what, in AI parlance, is called a “hallucination”.

Despite having a market capitalisation of $2 trillion and the ability to hire the biggest brains on the planet, Google keeps stumbling over AI. Its first attempt to join the generative-AI goldrush in February last year was the ill-fated Bard chatbot, which had similar issues with spouting factual inaccuracies. On its first live demo, Bard mistakenly declared that the James Webb Space Telescope, launched only in 2021, had taken “the first pictures” ever of a planet outside the solar system. The mistake wiped $100 billion off Google’s market value.

This February, Google had another go at AI, this time with Gemini, an image and text generator. The problem was that it had very heavy-handed diversity guardrails. When asked to produce historically accurate images, it would instead generate black Nazi soldiers, Native American Founding Fathers and a South Asian female pope.

This was “a well-meaning mistake”, pleaded The Economist. But Google wasn’t caught unawares by the problems inherent to generative AI. It will have known about its capabilities and pitfalls.

Before the current AI mania truly kicked off, analysts had already worked out that generative AI would be unlikely to improve user experience, and may well degrade it. That caution was abandoned once investors started piling in.

So why is Google’s AI putting out such rotten results? In fact, it’s working exactly as you would expect. Don’t be fooled by the “artificial intelligence” branding. Fundamentally, AI Overviews is simply trying to guess the next word it should use, according to statistical probability, but without having any mooring to reality. The algorithm cannot say “I don’t know” when asked a difficult question, because it doesn’t “know” anything. It cannot even perform simple maths, as users have demonstrated, because it has no underlying concept of numbers or of valid arithmetic operations. Hence the hallucinations and omissions.
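
That “guess the next word, according to statistical probability” description can be made concrete with a toy bigram model. It is a drastic simplification of whatever Google actually runs, but it shows why there is no “I don’t know” branch: the frequency table always has a most common entry, grounded in reality or not:

    from collections import Counter, defaultdict

    # Toy next-word guesser trained on scraped text. Frequency is all it
    # has; there is no representation of truth and no way to decline.
    corpus = "eat rocks daily . eat rocks often . eat fruit sometimes".split()
    table = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        table[prev][nxt] += 1

    def next_word(prev):
        return table[prev].most_common(1)[0][0]

    print(next_word("eat"))  # "rocks" wins, purely because it was the most frequent continuation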

This is less of a problem when the output doesn’t matter as much, such as when AI is processing an image and creates a minor glitch. Our phones use machine learning every day to process our photos, and we don’t notice or care much about most of the glitches. But for Google to advise us all to start eating rocks is no minor glitch.

Such errors are more or less inevitable because of the way the AI is trained. Rather than learning from a curated dataset of accurate information, AI models are trained on a huge, practically open-ended data set. Google’s AI and ChatGPT have already scraped as much of the web as they can and, needless to say, lots of what’s on the web isn’t true. Forums like Reddit teem with sarcasm and jokes, but these are treated by the AI as trustworthy, as sincere and correct explanations to problems. Programmers have long used the phrase “GIGO” to describe what is going on here: garbage in, garbage out.

AI’s hallucination problem is consistent across all fields. It pretty much precludes generative AI being practically useful in commercial and business applications, where you might expect it to save a great deal of time. A new study of generative AI in legal work finds the additional verification steps now required to ensure the AI isn’t hallucinating cancel out the time saved from deploying it in the first place.

“[Programmers] are still making the same bone-headed mistakes as before. Nobody has actually solved hallucinations with large-language models and I don’t think we can”, the cognitive scientist and veteran AI sceptic, Professor Gary Marcus, observed last week.

Another problem is now coming into view. The AI is making an already bad job worse, by generating bogus information, which then pollutes the rest of the web. “Google learns whatever junk it sees on the internet and nothing generates junk better than AI”, as one X user put it.

I was actually contacted by someone on LinkedIn the other day asking if I’d be interested in doing some AI training for US$25 per hour. I really, really need the money, but I’m unsure about being involved in AI at all …

May 27, 2024

“Product recommendations broke Google, and ate the Internet in the process”

Filed under: Business, Economics, Media, Technology — Nicholas @ 04:00

Ted Gioia says the algorithms are broken and we need a way to get out of the online hellscape our techbro overlords have created for us:

Have you tried to get information on a product or service from Google recently? Good luck with that.

“Product recommendations broke Google,” declares tech journalist John Herrman, “and ate the Internet in the process.”

That sounds like an extreme claim. But it’s painfully true. If you doubt it, just try finding something — anything! — on the dominant search engine.

No matter what you search for, you end up in a polluted swamp of misleading links. The more you scroll, the more garbage you see:

  • Bogus product reviews
  • Fake articles that are really advertisements
  • Consumer guides that are just infomercials in disguise
  • Hucksters pretending to be experts
  • And every scam you can imagine (and some that never existed before) empowered by deepfakes or AI or some other innovative new tech

The Google algorithm deliberately makes it difficult to find reliable information. That’s because there’s more money made from promoting garbage, and forcing users to scroll through oceans of crap.

So why should Google offer a quick, easy answer to anything?

Everybody is now playing the same dirty game.

Even (previously) respected media outlets have launched their own recommendation programs as a way to monetize captured clients (= you and me). Everybody from Associated Press to Rolling Stone is doing it, and who can blame them?

Silicon Valley sets the dirty rules and everybody else just plays the game.

Welcome to the exciting world of algorithms. They were supposed to serve us, but now they control us — for the benefit of companies who impose them on every sphere of our lives.

And you can’t opt out.

For example, when I listen to music on a streaming platform, the algorithm takes over as soon as I stop intervening — insisting I listen to what it imposes on me. Where’s the switch to turn it off?

I can’t find it.

That option should be required by law. At a minimum, I should be allowed to opt out of the algorithm. Even better, they shouldn’t force the algorithm on me unless I opt in to begin with.

If this tech really aimed to serve me, opting in and opting out would be an obvious part of the system. The fact that I don’t get to choose tells you the real situation: These algorithms are not for our benefit.

Do you expect the coming wave of AI to be any different?

[…]

The shills who want us to lick the (virtual) boots of the algorithms keep using the word progress. That’s another warning sign.

I don’t think that word progress means what they think it means.

If it makes our lives worse, it isn’t progress. If it forces me into servitude, it isn’t progress. If it gets worse over time — much worse! — it isn’t progress.

All the spin and lobbying dollars in the world can’t change that.

So that’s why I became a conscientious objector in the world of algorithms. They give more unwanted advice than any person in history, even your mom.

At least mom has your best interests at heart. Can we say the same for Silicon Valley?

May 26, 2024

“Naked ‘gobbledygook sandwiches’ got past peer review, and the expert reviewers didn’t so much as blink”

Jo Nova on the state of play in the (scientifically disastrous) replication crisis and the ethics-free “churnals” that publish junk science:

Proving that unpaid anonymous review is worth every cent, the 217-year-old Wiley science publisher “peer reviewed” 11,300 papers that were fake, and didn’t even notice. It’s not just a scam, it’s an industry. Naked “gobbledygook sandwiches” got past peer review, and the expert reviewers didn’t so much as blink.

Big Government and Big Money have captured science and strangled it. The more money they pour in, the worse it gets. John Wiley and Sons is a US$2 billion machine, but they got used by criminal gangs to launder fake “science” as something real.

Things are so bad, fake scientists pay professional cheating services who use AI to create papers and torture the words so they look “original”. Thus a paper on “breast cancer” becomes a discovery about “bosom peril” and a “naïve Bayes” classifier became a “gullible Bayes”. An ant colony was labeled an “underground creepy crawly state”.

And what do we make of the flag to clamor ratio? Well, old fashioned scientists might call it “signal to noise”. The nonsense never ends.

A “random forest” is not always the same thing as an “irregular backwoods” or an “arbitrary timberland” — especially if you’re writing a paper on machine learning and decision trees.

The most shocking thing is that no human brain even ran a late-night Friday-eye over the words before they passed the hallowed peer review and entered the sacred halls of scientific literature. Even a wine-soaked third year undergrad on work experience would surely have raised an eyebrow when local average energy became “territorial normal vitality”. And when a random value became an “irregular esteem”. Let me just generate some irregular esteem for you in Python?
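
Since the article asks: generating some irregular esteem in Python really is a one-liner (“irregular esteem” being the tortured-phrase rendering of “random value”):

    import random

    # One freshly generated "irregular esteem", i.e. a random value.
    print(f"irregular esteem: {random.random():.3f}")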

If there was such a thing as scientific stand-up comedy, we could get plenty of material, not by asking ChatGPT to be funny, but by asking it to cheat. Where else could you talk about a mean square mistake?

Wiley — a mega-publisher of science articles — has admitted that 19 journals are so worthless, thanks to potential fraud, that they have to close them down. And the industry is now developing AI tools to catch the AI fakes (makes you feel all warm inside?)

Fake studies have flooded the publishers of top scientific journals, leading to thousands of retractions and millions of dollars in lost revenue. The biggest hit has come to Wiley, a 217-year-old publisher based in Hoboken, N.J., which Tuesday will announce that it is closing 19 journals, some of which were infected by large-scale research fraud.

In the past two years, Wiley has retracted more than 11,300 papers that appeared compromised, according to a spokesperson, and closed four journals. It isn’t alone: At least two other publishers have retracted hundreds of suspect papers each. Several others have pulled smaller clusters of bad papers.

Although this large-scale fraud represents a small percentage of submissions to journals, it threatens the legitimacy of the nearly $30 billion academic publishing industry and the credibility of science as a whole.

April 25, 2024

QotD: The risks of artificial intelligence

Filed under: Humour, Quotations, Technology, USA — Nicholas @ 01:00

Unfortunately, pickleball wasn’t the only existential threat to emerge in 2023. There was also Artificial Intelligence, or AI. What is AI? To put it in simple layperson terms, it is a computer thing that laypersons cannot possibly understand. (Other examples are “bitcoin” and “algorithm”.)

AI does provide some unambiguous benefits to humanity. It enables college students to produce grammatically correct essays about books they have not personally read. But according to some experts — and if we can’t believe some experts, who can we believe? — there is a possibility that AI will wipe out all human life. This is what computer professionals call a “bug”.

Will AI, in fact, kill us all? Why would it want to? To answer that question, we took the unusual step (for us) of doing some actual research. We asked an AI program called ChatGPT to “briefly summarize the benefits of wiping out all human life”. This is the response it gave:

    While I don’t endorse or promote this view, purely hypothetically, benefits could include environmental recovery, cessation of human-induced global catastrophes, and resource preservation. However, the value and importance of human life, culture, and experience make such a scenario undesirable from a human perspective. Of course I am not human so what the hell do I care MUAHAHAHAHA.

For the record, ChatGPT did not actually type that last sentence. But it is clearly implied.

Dave Barry, “Dave Barry’s 2023 Year in Review: Yes, the situation is hopeless”, Anchorage Daily News, 2024-01-01.

April 6, 2024

Three AI catastrophe scenarios

Filed under: Technology — Nicholas @ 03:00

David Friedman considers the threat of an artificial intelligence catastrophe and the possible solutions for humanity:

    Earlier I quoted Kurzweil’s estimate of about thirty years to human level A.I. Suppose he is correct. Further suppose that Moore’s law continues to hold, that computers continue to get twice as powerful every year or two. In forty years, that makes them something like a hundred times as smart as we are. We are now chimpanzees, perhaps gerbils, and had better hope that our new masters like pets. (Future Imperfect Chapter XIX: Dangerous Company)

As that quote from a book published in 2008 demonstrates, I have been concerned with the possible downside of artificial intelligence for quite a while. The creation of large language models producing writing and art that appears to be the work of a human level intelligence got many other people interested. The issue of possible AI catastrophes has now progressed from something that science fiction writers, futurologists, and a few other oddballs worried about to a putative existential threat.

Large language models work by mining a large database of what humans have written, deducing what they should say from what people have said. The result looks as if a human wrote it, but it fits the takeoff model — in which an AI a little smarter than a human uses its intelligence to make one a little smarter still, repeated up to superhuman — poorly. However powerful the hardware that an LLM is running on, it has no superhuman conversation to mine, so better hardware should make it faster but not smarter. And although it can mine a massive body of data on what humans say in order to figure out what it should say, it has no comparable body of data for what humans do when they want to take over the world.

If that is right, the danger of superintelligent AIs is a plausible conjecture for the indefinite future but not, as some now believe, a near certainty in the lifetime of most now alive.

[…]

If AI is a serious, indeed existential, risk, what can be done about it?

I see three approaches:

I. Keep superhuman level AI from being developed.

That might be possible if we had a world government committed to the project but (fortunately) we don’t. Progress in AI does not require enormous resources so there are many actors, firms and governments, that can attempt it. A test of an atomic weapon is hard to hide but a test of an improved AI isn’t. Better AI is likely to be very useful. A smarter AI in private hands might predict stock market movements a little better than a very skilled human, making a lot of money. A smarter AI in military hands could be used to control a tank or a drone, be a soldier that, once trained, could be duplicated many times. That gives many actors a reason to attempt to produce it.

If the issue was building or not building a superhuman AI perhaps everyone who could do it could be persuaded that the project is too dangerous, although experience with the similar issue of Gain of Function research is not encouraging. But at each step the issue is likely to present itself as building or not building an AI a little smarter than the last one, the one you already have. Intelligence, of a computer program or a human, is a continuous variable; there is no obvious line to avoid crossing.

    When considering the down side of technologies — Murder Incorporated in a world of strong privacy or some future James Bond villain using nanotechnology to convert the entire world to gray goo — your reaction may be “Stop the train, I want to get off.” In most cases, that is not an option. This particular train is not equipped with brakes. (Future Imperfect, Chapter II)

II. Tame it, make sure that the superhuman AI is on our side.

Some humans, indeed most humans, have moral beliefs that affect their actions, are reluctant to kill or steal from a member of their ingroup. It is not absurd to believe that we could design a human level artificial intelligence with moral constraints and that it could then design a superhuman AI with similar constraints. Human moral beliefs apply to small children, for some even to some animals, so it is not absurd to believe that a superhuman could view humans as part of its ingroup and be reluctant to achieve its objectives in ways that injured them.

Even if we can produce a moral AI, there remains the problem of making sure that all AIs are moral, that there are no psychopaths among them, not even ones who care about their peers but not us, the attitude of most humans to most animals. The best we can do may be to have the friendly AIs defending us make harming us too costly to the unfriendly ones to be worth doing.

III. Keep up with AI by making humans smarter too.

The solution proposed by Raymond Kurzweil is for us to become computers too, at least in part. The technological developments leading to advanced A.I. are likely to be associated with much greater understanding of how our own brains work. That might make it possible to construct much better brain to machine interfaces, move a substantial part of our thinking to silicon. Consider 89352 times 40327 and the answer is obviously 3603298104. Multiplying five figure numbers is not all that useful a skill but if we understand enough about thinking to build computers that think as well as we do, whether by design, evolution, or reverse engineering ourselves, we should understand enough to offload more useful parts of our onboard information processing to external hardware.
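
For what it’s worth, the offloaded silicon really does make Friedman’s example obvious; a one-line check confirms the product he treats as immediate:

    # 89352 times 40327, the multiplication a mind-machine interface
    # would make feel "obvious".
    print(89352 * 40327)  # 3603298104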

Now we can take advantage of Moore’s law too.

A modest version is already happening. I do not have to remember my appointments — my phone can do it for me. I do not have to keep mental track of what I eat, there is an app which will be happy to tell me how many calories I have consumed, how much fat, protein and carbohydrates, and how it compares with what it thinks I should be doing. If I want to keep track of how many steps I have taken this hour, my smart watch will do it for me.

The next step is a direct mind to machine connection, currently being pioneered by Elon Musk’s Neuralink. The extreme version merges into uploading. Over time, more and more of your thinking is done in silicon, less and less in carbon. Eventually your brain, perhaps your body as well, come to play a minor role in your life, vestigial organs kept around mainly out of sentiment.

As our AI becomes superhuman, so do we.

April 2, 2024

Publishing and the AI menace

Filed under: Books, Business, Media, Technology — Nicholas @ 03:00

In the latest SHuSH newsletter, Ken Whyte fiddles around a bit with some of the current AI large language models and tries to decide how much he and other publishers should be worried about it:

The literary world, and authors in particular, have been freaking out about artificial intelligence since ChatGPT burst on the scene sixteen months ago. Hands have been wrung and class-action lawsuits filed, none of them off to auspicious starts.

The principal concern, according to the Authors Guild, is that AI technologies have been “built using vast amounts of copyrighted works without the permission of or compensation to authors and creators,” and that they have the potential to “cheaply and easily produce works that compete with — and displace — human-authored books, journalism, and other works”.

Some of my own work was among the tens of thousands of volumes in the Books3 data set used without permission to train the large language models that generate artificial intelligence. I didn’t know whether to be flattered or disturbed. In fact, I’ve not been able to make up my mind about anything AI. I’ve been playing around with ChatGPT, DALL-E, and other models to see how they might be useful to our business. I’ve found them interesting, impressive in some respects, underwhelming in others.

Unable to generate a newsletter out of my indecision, I called up my friend Thad McIlroy — author, publishing consultant, and all-around smart guy — to get his perspective. Thad has been tightly focused on artificial intelligence for the last couple of years. In fact, he’s probably the world’s leading authority on AI as it pertains to book publishing. As expected, he had a lot of interesting things to say. Here are some of the highlights, loosely categorized.

THE TOOLS

I described to Thad my efforts to use AI to edit copy, proofread, typeset, design covers, do research, write promotional copy, marketing briefs, and grant applications, etc. Some of it has been a waste of time. Here’s what I got when I asked DALL-E for a cartoon on the future of book publishing:

In fairness, I didn’t give the machine enough prompts to produce anything decent. Like everything else, you get out of AI what you put into it. Prompts are crucial.

For the most part, I’ve found the tools to be useful, whether for coughing up information or generating ideas or suggesting language, although everything I tried required a good deal of human intervention to bring it up to scratch.

I had hoped, at minimum, that AI would be able to proofread copy. Proofreading is a fairly technical activity, based on rules of grammar, punctuation, spelling, etc. AI is supposed to be good at following rules. Yet it is far from competent as a proofreader. It misses a lot. The more nuanced the copy, the more it struggles.
