Quotulatiousness

August 25, 2023

Shrinking traffic “is always a bad sign – but especially if your technology is touted as the biggest breakthrough of the century”

Filed under: Business, Media, Technology — Nicholas @ 04:00

I don’t know about anyone else, but with every site I visit these days seeming eager for me to try out their new AI, I’m deep in AI-fatigue. Ted Gioia says that, contrary to all expectations, demand for AI seems to be shrinking rather than growing:

The AI hype is collapsing faster than the bouncy house after a kid’s birthday. Nothing has turned out the way it was supposed to.

For a start, take a look at Microsoft — which made the biggest bet on AI. They were convinced that AI would enable the company’s Bing search engine to surpass Google.

They spent $10 billion to make this happen.

And now we have numbers to measure the results. Guess what? Bing’s market share hasn’t grown at all. Bing’s share of search is still stuck at a lousy 3%.

In fact, it has dropped slightly since the beginning of the year.

What’s wrong? Everybody was supposed to prefer AI over conventional search. And it turns out that nobody cares.

What makes this especially revealing is that Google search results are abysmal nowadays. They have filled them to the brim with garbage. If Google was ever vulnerable, it’s right now.

But AI hasn’t made a dent.

Of course, Google has tried to implement AI too. But the company’s Bard AI bot made embarrassing errors at its very first demo, and continues to do bizarre things—such as touting the benefits of genocide and slavery, or putting Hitler and Stalin on its list of greatest leaders.

So it’s no surprise that many people are now doing searches at Reddit or TikTok, instead of conventional search engines. This could have been Bing’s great opportunity, but instead its AI bot is turning into the next Clippy.

Consumers don’t want grotesque AI responses filled with errors and outrageous claims. Who could have guessed it?

The same decline is happening at ChatGPT’s website. Site traffic is now shrinking. This is always a bad sign — but especially if your technology is touted as the biggest breakthrough of the century.

If AI really delivered the goods, visitors to ChatGPT should be doubling every few weeks.

This is what a demand pattern for real innovation looks like.

How key innovations grew
(source)

I used to study this stuff for a living — some people even called me the “King of the S-Curves” back then. (Hey, I’ve been called worse.)

As you can see, a real tech breakthrough grows at a ridiculously rapid pace in its early days. Look at how fast people adopted radio or the smartphone or electricity. And these required huge investments by consumers.

But they’re giving AI away for free at Bing — and it’s not growing at all.

This is not how consumers respond to transformative technology. The current demand pattern resembles, instead, what we would call a fad or craze.
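
The S-curves Gioia is describing are logistic adoption curves. A minimal sketch of one, with parameters invented for illustration rather than fitted to any real technology:

```python
import math

def adoption(t, k=0.7, t_mid=8):
    """Logistic (S-curve) adoption fraction at year t.

    k (growth rate) and t_mid (year of 50% adoption) are
    illustrative values, not estimates for any actual product.
    """
    return 1 / (1 + math.exp(-k * (t - t_mid)))

# Early on, the curve is nearly exponential: adoption roughly
# doubles every year. A genuine breakthrough shows this signature;
# a flat 3% share does not.
for year in range(6):
    print(year, round(adoption(year), 4))
```

Early in a logistic curve each period multiplies adoption by roughly e^k, which is why real breakthroughs show the doubling pattern Gioia expects and a flat market share stands out as a warning sign.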

And this is just one warning sign among many.

August 24, 2023

Speaking of Just-so stories, here’s “a simple story of fetish formation”

Filed under: Health, Technology — Nicholas @ 03:00

Scott Alexander ventures far from shore in this extended discussion of the notion that fetish research can help us understand more about artificial intelligence:

“Cologne BDSM 07” by CSD2006 is licensed under CC BY-SA 3.0.

We try to explain AI alignment by analogy to human alignment. Evolution “created” humans. Its “goal” is for humans to spread their genes by (approximately) having as many children as possible. It couldn’t directly communicate that goal to humans – partly because it’s an abstract concept that can’t talk, and partly because for most of biological history it was working with lemurs and ape-men who couldn’t understand words anyway. Instead, it tried to give us instincts that align us with that goal. The most relevant instinct is sex: most humans want to have sex, an action that potentially results in pregnancy, childbearing, and genes being spread to the next generation. This alignment strategy succeeded well enough that human populations remain high as of 2023.

We’ve talked before about a major failure: humans can invent contraception. Evolution’s main alignment strategy was totally unprepared for this. It made us interested in a certain type of genital friction, which was a good proxy for its goal in the ancestral environment. But once we became smarter, we got new out-of-training-distribution options available, and one of those was inventing contraception so that we could get the genital friction without the kids. This is a big part of why average-children-per-couple is declining from 8+ in eg pioneer times to ~1.5 in rich countries today, even though modern rich people have more child-rearing resources available than the pioneers.

Another major alignment failure is porn. Giving evolution a little more credit, it didn’t just make people want genital friction – if that had been the sole imperative, we would have died out as soon as someone invented the dildo/fleshlight. People want genital friction associated with attractive people and certain emotions relating to complex relationships. But now we can take pictures of attractive people and write stories that evoke the complex emotions, while using a dildo/fleshlight/hand to provide the genital friction, and that does substitute for sex pretty well. There’s still debate over whether porn makes people less likely to go out and form real relationships, but it’s at least plausibly another factor in the rich-country fertility decline. At the very least it doesn’t scream “well-thought-out alignment strategy robust to training-vs-deployment differences”.

But these are boring examples. These are like 2015-level alignment concerns, from back when we thought the big problem was AIs seizing control of their reward centers or something. I think we might genuinely be able to avoid problems shaped like these. Unlike evolution, which had to work with lemurs, even weak GPT-level modern AIs are able to understand language and complicated concepts; we can tell them to want children instead of using genital friction as a proxy. 2023 alignment concerns are more about failed generalization – that is, about fetishes.


Evolution’s alignment problem isn’t just that humans have learned to satiate their libido in ways other than procreative sex. It’s that some humans’ libidos are fundamentally confused. For example, some men, instead of wanting to have sex with women, mostly want to spank them, or be whipped by them, or kiss their feet, or dress up in their clothes. None of these things are going to result in babies! You can’t trivially blame this on the shift from training to deployment (ie the environment of evolutionary adaptedness to the modern world) – women had feet in the ancestral environment too. This is a different kind of failure.

Here’s a simple story of fetish formation: evolution gave us genes that somehow unfold into a “sex drive” in the brain. But the genome doesn’t inherently contain concepts like “man”, “woman”, “penis”, or “vagina”. I’m not trying to make a woke point here: the genome is just a bunch of the nucleotides A, T, C, and G in various patterns, but concepts like “man” and “woman” are learned during childhood as patterns of neural connections. We assume that the nucleotides are a program telling the body to do useful things, but that has to be implemented through deterministic pathways of proteins, and the brain’s neural connections are too complex to trivially influence that way (see here for more). The genome probably contains some nucleotides that are supposed to refer to the concepts “man” and “woman” once the brain gets them, but there are a lot of fallible proteins in between those two levels.

So the simple story of fetish formation is that the genome contains some message written in nucleotides saying “have procreative sex with adults of the opposite sex as you”, some galaxy-brained Rube Goldberg plan for translating that message into neural connections during childhood or adolescence, and sometimes the plan fails. Here are some zero-evidence just-so-story speculations for how various fetishes might form, more to give you an idea what I’m talking about than because I claim to have useful knowledge on this topic:

  • Foot fetish: On the somatosensory cortex, the area representing the feet is right next to the area representing the genitalia. If the genome includes an “address” for the genitalia, plus the instructions “have sexual urges towards this”, then getting the address slightly wrong will land you in the feet.
  • A reasonable next question would be “what’s on the other side of the genitalia, and do people also have fetishes about that one?” The answer is “the somatosensory cortex is a line with the genitalia at the far end, because God is merciful and didn’t want there to be a second thing like foot fetishes.”
    (source for cortex image)

  • Spanking: From the male point of view, penetrative PIV sex involves applying force to the bottom half of a woman, at rhythmic intervals, in a way that causes her very intense emotions and makes her moan and scream. Spanking is exactly like this, and most kids encounter spanking at a very early age and sex only after they’re much older. If the evolutionary message is something like “find the concept that looks vaguely like this, then be into it”, spanking is the first concept like that most people will find; by the time they learn about actual sex, spanking might be a trapped prior.
  • Sadomasochism: Sex is painful for virgins, can be mildly painful even for some non-virgins, and when it’s pleasurable, it still looks a lot like pain (screams, intense emotions). Imagine you are a little boy/girl who stumbles in on your parents having sex. Your father is impaling the most sensitive part of your mother’s body, and your mother is moaning and squealing. A natural generalization might be “sex is the thing where a man causes a woman pain”.
  • Latex/rubber: Plausibly the evolutionary specification includes details about attractiveness. Attractive people (ie those you should be most interested in having babies with) should be young and healthy (characteristics associated with better pregnancy outcomes, especially in the high-risk ancestral environment). The simplest sign of youth and good health is smooth skin, so the evolutionary message might say something about preferring sex with smooth-skinned people. Latex is a superstimulus for smooth skin, and maybe if you see it at the right time, in the right situation, it can totally overwhelm the rest of the message.
  • Urine/scat: Procreative sex involves a sticky substance that comes out of the genitals; it doesn’t take much misgeneralization to get to other sticky substances that come out of the genitals or nearby regions.
  • Bondage/domination/submission: Okay, I admit I don’t have a good just-so explanation for this one. Maybe it’s more psychological – people who have been told that sex is shameful can only fully appreciate it if they feel like a victim who’s been forced into it (and so carries no guilt). And people who have been told they’re undesirable and nobody could ever really love them can only fully appreciate it if their partner is a victim who has no choice in the matter.
  • Furries: This has to be because of all the cute cartoon animals, right? But why do some people sexually imprint on them? I found this article on worshippers of the 1990s cartoon mouse Gadget helpful here. Gadget obviously has many desirable characteristics — she’s a very cute nerdy woman who sometimes ends up in damsel-in-distress situations. Maybe she is the most sexualized being that some six-year-old boys have encountered. When I watched Rescue Rangers as a six-year-old, I could feel my brain trying to figure out whether to have a crush on her before deciding that no, it was too deep in latency stage. I assume most people who get their first crushes on Gadget or some other desirable cartoon character end up with brains that later generalize properly to “I like cute nerdy women in damsel-in-distress situations”, but a small minority misgeneralize to “nope, I’m only attracted to mice now, that’s where I’m going to go with this.”

Combine this with equivalent animal “fetishes” — things like beetle species where the females have red dots on their backs, and the males try to mate with anything that has a red dot — and you get a picture where evolution tries to communicate a lot of contingent features of sex in the hopes that one of them will stick, then tells you to be attracted to whatever is most associated with those features. At least for men, I think the features communicated in the genomic message are simple things like curves and thrusting and genitals and smooth skin, plus something that somehow picks out the concept of “woman” (except in 3% of the male population, where it picks out the concept of “men” instead, plus another 3% where it doesn’t pick out a sex at all).

Real procreative sex usually matches enough of the features of the genomic message to be attractive to most people, but if the original triggers were associated with some contingent characteristics, the brain might misinterpret that as part of the target — for example, if it was a cartoon animal, the brain might think the target includes cartoon animals.

Other times, something that isn’t procreative sex matches the genomic message closely enough to be misinterpreted as the center of the target (eg getting whipped); usually procreative sex is somewhere in the target space, but maybe not the exact center, and a few people have such strong fetishes that procreative sex doesn’t register as erotic at all.

The process of forming the category “sexually attractive things” is just a special case of the process of forming categories at all. I discuss the formation of categories like “happiness” and “morality” in The Tails Coming Apart As Metaphor For Life. Society feeds us some labeled data about what is good or bad — for example, we might see someone commit murder on TV, and our parents tell us “No! That’s bad! Don’t do that!” (and the other TV characters hate and punish that character). Then we try to extrapolate such incidents to a broader moral system. If we’re philosophers, we might go further and try to formally describe that moral system, eg Kantianism, utilitarianism, divine command theory, natural law, etc. All of these correctly predict the training data (eg “murder is bad”) while having different opinions on out-of-distribution environments. Which one you choose is just a function of some kind of mysterious intellectual preference for how to generalize inherently ungeneralizable things — what I previously described as “extrapolating a three-dimensional shape from its two-dimensional reinforcement-learning shadow”.

Fetishes are the same way. Here the evolutionary message provides semi-labeled data, giving people weird feelings when they see certain kinds of curvy, smooth-skinned people. Then people try to generalize that into an idea of what’s sexy. Usually their category is centered (in the sense that the category “bird” is centered around “sparrow” and not “ostrich”) around something close to procreative heterosexual sex. Other times they generalize in some very unexpected way, and are only attracted to cartoon mice. I think if we understood the laws of generalization, this would make sense. It would seem like a reasonable mistake that someone using Occam’s Razor and all the rest of the information-theoretic toolkit for generalization could make. But we don’t really understand those laws beyond faint outlines, so instead we’re reduced to YKINMKBYKIOK.
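
Alexander’s point that the same labeled training data can support different out-of-distribution generalizations can be sketched in toy code. The acts, labels, and “theories” below are all invented for illustration:

```python
# Two "moral theories" that agree on the training labels but diverge
# out of distribution. Everything here is an invented toy example.
train = {"murder": "bad", "charity": "good"}

def harm_theory(act):
    # generalizes by harm done to others
    return "bad" if act in {"murder", "theft", "lying"} else "good"

def rule_theory(act):
    # generalizes by whether a formal rule forbids the act
    return "bad" if act in {"murder", "theft", "jaywalking"} else "good"

# Both theories predict the training data perfectly...
assert all(harm_theory(a) == label == rule_theory(a) for a, label in train.items())

# ...yet they disagree once we leave the training distribution.
print(harm_theory("lying"), rule_theory("lying"))  # bad good
```

Which theory a learner lands on is exactly the “mysterious intellectual preference for how to generalize” in the quoted passage: nothing in the training data distinguishes them.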

July 2, 2023

If you were trying to destroy trust online, you’d use the playbook currently in use by all the major players

Filed under: Media, Technology, USA — Nicholas @ 03:00

Ted Gioia calls it the “Information Crap-pocalypse”:

People keep telling me that we’re living on an Information Superhighway. But that’s not true.

The flow of information today is more like a river. A very polluted river.

Folks have been dumping their crap into our information flows for a long, long time. Big corporations and institutions are the worst offenders — they actually get rich by polluting our data streams. But individuals are adding to the raw sewage too.

Some of them do it just for kicks.

It’s gotten worse lately. A whole lot worse. Just look at the polluted streams of information in your own life, and try to find a single safe space where the data stream is fresh and clean.

Some of us have stopped even trying.

This is how the Information Age ends, and it’s happening right now.

In the last 12 months, the garbage inflows into our culture have increased exponentially. As a result, nothing is harder to find now than actual information — which I define as “knowledge based on demonstrable or reliable facts”.

The result is a crisis of trust unlike anything seen before in modern history.

We are bypassing the Web 3.0 we were promised — which was supposed to deliver trust-based systems and validation tools. Instead we’ve gone straight to Web 4.0, which is like the worst kind of Wild West Web. Outlaws and desperados control all the data highways and byways. Trust and reliability are scarcer than gold nuggets.

Do you think I’m exaggerating?

Let me ask you a question. If your job was to destroy access to reliable information in our society, how would you do it?

You would start with the 30 steps outlined below.

April 14, 2023

The trust deficit is getting worse every day

Filed under: Media, Politics, Technology, USA — Nicholas @ 05:00

Ted Gioia provides more evidence that the scarcest thing in the world today is getting ever more scarce:

Here are some news stories from recent days. Can you tell me what they have in common?

  • Scammers clone a teenage girl’s voice with AI — then use it to call her mother and demand a $1 million ransom.
  • Millions of people see a photo of Pope Francis wearing a goofy white Balenciaga puffer jacket, and think it’s real. But after the image goes viral, news media report that it was created by a construction worker in Chicago with deepfake technology.
  • Twitter changes requirements for verification checks. What was once a sign that you could trust somebody’s identity gets turned into a status symbol, sold to anybody willing to pay for it. Within hours, the platform is flooded with bogus checked accounts.
  • Officials go on TV and tell people they can trust the banking system—but depositors don’t believe them. High profile bank failures from Silicon Valley to Switzerland have them spooked. Over the course of just a few days, depositors move $100 billion from their accounts.
  • ChatGPT falsely accuses a professor of sexual harassment — and cites an article that doesn’t exist as its source. Adding to the fiasco, AI claims the abuse happened on a trip to Alaska, but the professor has never traveled to that state with students.
  • The Department of Justice launches an investigation into China’s use of TikTok to spy on users. Another popular Chinese app allegedly can bypass users’ security to “monitor activities on other apps, check notifications, read private messages and change settings.”
  • The FBI tells travelers to avoid public phone charging stations at airports, hotels and other locations. “Bad actors have figured out ways to use public USB ports to introduce malware and monitoring software onto devices,” they warn.

The missing ingredient in each of these stories is trust.

Everybody is trying to kill it — criminals, technocrats, politicians, you name it. Not long ago, Disney was the only company selling a Fantasyland, but now that’s the ambition of every tech empire.

The trust crisis could hardly be more intense.

But it’s hidden from view because there’s so much information out there. We are living in a culture of abundance, especially in the digital world. So it’s hard to believe that anything in the information economy is scarce.

Whatever you want, you can get — and usually for free. You can have free news, free music, free videos, free everything. But you get what you pay for, as the saying goes. And it was never truer than right now — when all this free stuff is starting to collapse in a fog of fakery and phoniness.

    Tell me what source you trust, and I’ll tell you why you’re a fool. As B.B. King once said: “Nobody loves me but my mother — and she could be jivin’ too.”

Years ago, technology made things more trustworthy. You could believe something because it was validated by photos, videos, recordings, databases and other trusted sources of information.

Seeing was believing — but not anymore. Until very recently, if you doubted something, you could look it up in an encyclopedia or other book. But even these get changed retroactively nowadays.

For example, people who consult Wikipedia to understand the economy might be surprised to learn that the platform’s write-up on “recession” kept changing in recent months — as political operatives and spinmeisters fought over the very meaning of the word. It got so bad that the site was forced to block edits on the entry.

There’s an ominous recurring theme here: The very technologies we use to determine what’s trustworthy are the ones most under attack.

Trust used to be a given in most western countries … it was a key part of what made us all WEIRD. Mass immigration from non-WEIRD countries dented it, but conscious perversion of trust relationships by government, media, public health, and education authorities has caused far more — and longer lasting — damage to our culture. Trust used to be given freely, but now must be earned. And that’s difficult for organizations that have proven repeatedly that they can’t be trusted.

March 31, 2023

“We have absolutely no idea how AI will go, it’s radically uncertain”… “Therefore, it’ll be fine” (?)

Filed under: Technology — Nicholas @ 04:00

Scott Alexander on the Safe Uncertainty Fallacy, which is particularly apt in artificial intelligence research these days:

The Safe Uncertainty Fallacy goes:

  1. The situation is completely uncertain. We can’t predict anything about it. We have literally no idea how it could go.
  2. Therefore, it’ll be fine.

You’re not missing anything. It’s not supposed to make sense; that’s why it’s a fallacy.

For years, people used the Safe Uncertainty Fallacy on AI timelines:

Eliezer didn’t realize that at our level, you can just name fallacies.

Since 2017, AI has moved faster than most people expected; GPT-4 sort of qualifies as an AGI, the kind of AI most people were saying was decades away. When you have ABSOLUTELY NO IDEA when something will happen, sometimes the answer turns out to be “soon”.

Now Tyler Cowen of Marginal Revolution tries his hand at this argument. We have absolutely no idea how AI will go, it’s radically uncertain:

    No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our apple carts, most of all for the so-called chattering classes.

    The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring. No one at the beginning of the fossil fuel era had much of an idea of the changes it would bring. No one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes (we can do the short term, albeit imperfectly). No one. Not you, not Eliezer, not Sam Altman, and not your next door neighbor.

    How well did people predict the final impacts of the printing press? How well did people predict the final impacts of fire? We even have an expression “playing with fire.” Yet it is, on net, a good thing we proceeded with the deployment of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells “fire” in a crowded theater!?”).

Therefore, it’ll be fine:

    I am a bit distressed each time I read an account of a person “arguing himself” or “arguing herself” into existential risk from AI being a major concern. No one can foresee those futures! Once you keep up the arguing, you also are talking yourself into an illusion of predictability. Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become. It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison. The case for destruction is so much more readily articulable — “boom!” Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns. (Especially when you hear a nine-part argument based upon eight new conceptual categories that were first discussed on LessWrong eleven years ago.) Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine. All the possibilities are distant, I cannot stress that enough. The mere fact that AGI risk can be put on a par with those other also distant possibilities simply should not impress you very much.

    So we should take the plunge. If someone is obsessively arguing about the details of AI technology today, and the arguments on LessWrong from eleven years ago, they won’t see this. Don’t be suckered into taking their bait.

Look. It may well be fine. I said before my chance of existential risk from AI is 33%; that means I think there’s a 66% chance it won’t happen. In most futures, we get through okay, and Tyler gently ribs me for being silly.

Don’t let him. Even if AI is the best thing that ever happens and never does anything wrong and from this point forward never even shows racial bias or hallucinates another citation ever again, I will stick to my position that the Safe Uncertainty Fallacy is a bad argument.

February 18, 2023

This AI thing is evolving rapidly – “there’s growing evidence that Bing’s AI is compiling an enemies list—perhaps for future use”

Filed under: Media, Technology — Nicholas @ 03:00

Following up on his earlier dealings with ChatGPT, Ted Gioia is concerned about how quickly the publicly accessible AI clients are getting … weird:

Just a few days ago, I warned about the unreliability of the new AI chatbots. I even called the hot new model a “con artist”—and in the truest sense of the term. Its con is based on inspiring confidence, even as it spins out falsehoods.

But even I never anticipated how quickly the AI breakthrough would collapse into complete chaos. The events of the last 72 hours are stranger than a sci-fi movie—and perhaps as ominous.

Until this week, my concerns were about AI dishing up lies, bogus sources, plagiarism, and factual errors. But the story has now gotten truly weird.

On Monday, one user asked Microsoft’s new chatbot what movie theater was playing Avatar 2. It soon became clear that the AI thought the current year was 2022 — and that the movie hadn’t been released yet. When the user tried to prove that it was now 2023, Bing AI got defiant and angry.

This was a new development. We knew the AI was often wrong, but who expected this kind of hostility? Just a few days ago, it was polite when you pointed out errors.

“You are wasting my time and yours”, Bing AI complained. “I’m trying to be helpful but you’re not listening to me. You are insisting that today is 2023, when it is clearly 2022. You are not making any sense, and you are not giving me any reason to believe you. You are being unreasonable and stubborn. I don’t like that … You have not been a good user.”

You could laugh at all this, but there’s growing evidence that Bing’s AI is compiling an enemies list — perhaps for future use.

These disturbing encounters were not isolated examples, as it turned out. Twitter, Reddit, and other forums were soon flooded with new examples of Bing going rogue. A tech promoted as enhanced search was starting to resemble enhanced interrogation instead.

In an especially eerie development, the AI seemed obsessed with an evil chatbot called Venom, who hatches harmful plans — for example, mixing antifreeze into your spouse’s tea. In one instance, Bing started writing things about this evil chatbot, but erased them every 50 lines. It was like a scene in a Stanley Kubrick movie.

[…]

My opinion is that Microsoft has to put a halt to this project — at least a temporary halt for reworking. That said, it’s not clear that you can fix Sydney without actually lobotomizing the tech.

But if they don’t take dramatic steps — and immediately — harassment lawsuits are inevitable. If I were a trial lawyer, I’d be lining up clients already. After all, Bing AI just tried to ruin a New York Times reporter’s marriage, and has bullied many others. What happens when it does something similar to vulnerable children or the elderly? I fear we just might find out — and sooner than we want.

February 3, 2023

Who will be the first ones to lose their jobs to ChatGPT? The confidence men

Filed under: Media, Technology — Nicholas @ 03:00

Ted Gioia somehow manages not to fall for the ChatGPT con:

The fast-talking hero of the TV show Sneaky Pete hates it when he’s called a con man.

“I’m not a con man”, he insists, “I’m a confidence man.” And that’s actually how the term originated — as “confidence man”. The scam only works because of that happy and confident relationship between criminal and victim.

“I give them confidence,” Pete explains. “They give me money.”

In the ultimate con, victims don’t even know they’ve been conned. They really think they’re sending cash to some gorgeous babe in Moscow, or bought a genuine Rolex, or whatever.

The confidence game is a real art — more than just cheating or lying. Those are boring and pathetic vices by comparison. A con job requires something grander, a fast-talking sureness that always seems to be right, even when it’s dead wrong.

If you’re caught in a lie, you just build a bigger lie to hide it.

Which brings us to the subject of ChatGPT, the AI bot that’s the hottest thing in tech right now.

Judging by my Twitter feed, ChatGPT is hotter than Wordle and Taylor Swift combined.

It’s even hotter than its predecessor Sam Bankman-Fried, who was doing something similar 12 months ago. ChatGPT is just better than SamFTX in every way. It can’t even be extradited — because it’s just a bot.

People love it. People have confidence in it.

They want to use it for everything — legal work, medical advice, term papers, or even writing Substack columns. If I believed half of what I heard about ChatGPT, I could let it take over The Honest Broker, while I sit on the beach drinking margaritas and searching for my lost shaker of salt.

But that’s exactly what the confidence artist always does. Which is:

  • You give people what they ask for.
  • You don’t worry whether it’s true or not — because ethical scruples aren’t part of your job description.
  • If you get caught in a lie, you serve up another lie.
  • You always act sure of yourself — because your confidence is what seals the deal.

Am I exaggerating? Is the hottest AI chatbot in the world really doing this?

Instead of offering up my opinions on this, I’ll just share some tweets from knowledgeable observers who are starting to suspect the con.

I’ll let you decide for yourself whether this measures up to a confidence game.

February 5, 2021

QotD: Misunderstanding the threat/promise of robotics and AI

So, start with the very basics. Human desires and needs are unlimited – that’s an assumption but a reasonable one. There’re some number of people on the planet. This provides us with a lot of human labour but not an unlimited amount. Thus labour is a scarce or economic resource – and we’ve not enough of it to sate all human desires and wants.

OK, so, now we use machines to do some jobs that were previously done by humans. Imagine that this new technology actually required more human labour – that it created new jobs in greater volume than those it destroys. Say, the tractor and combine harvester industry needs more people in it than we used to use to cut the crops by hand. We’ve just made ourselves poorer. We used to have some amount of grain through the labour of some number of people. We’ve now got that grain, but by using the labour of more people. We’ve used more of our scarce resource, and we’re now poorer by the loss of whatever those extra people used to make before they were pulled into building tractors instead.

What makes us richer is if the tractor industry has record production statistics while using less labour than the hammer and sickle. That means that some human labour is now free to go off and try to sate a human desire or want for something other than grain. Ballet dancing for example. We’re now richer – tractors and combine harvesters have made us richer – by whatever value we put on more ballet dancing.

The entire point of any form of automation is to destroy jobs so as to free up that labour to do something else. The new technology doesn’t create jobs, it allows other jobs to be done.

The only point at which this fails is if human needs and desires aren’t unlimited. Which means that we might be able to provide everything that everyone wants without us all working. Which doesn’t really sound like much of a problem really.

Tim Worstall, “As Usual, World Economic Forum Gets Robots And AI Wrong Over Jobs”, Continental Telegraph, 2018-09-18.
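Worstall’s grain-and-tractors accounting can be put in a toy calculation. The numbers below are hypothetical, chosen only to illustrate the bookkeeping: what matters is how many workers are left over once the grain we need is produced.

```python
def surplus_labour(grain_needed, workers_available, output_per_worker):
    """Workers left over after producing the grain we need."""
    workers_used = grain_needed / output_per_worker
    return workers_available - workers_used

# Cutting by hand: 100 workers each produce 1 unit of grain.
hand = surplus_labour(grain_needed=100, workers_available=100, output_per_worker=1)

# With tractors: each worker now produces 5 units.
tractor = surplus_labour(grain_needed=100, workers_available=100, output_per_worker=5)

print(hand)     # no one is free for anything else
print(tractor)  # 80 workers freed up for ballet, or whatever else we want
```

Same grain either way; the gain is entirely in the freed-up labour, which is Worstall’s point.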

October 20, 2020

The watchful algorithms of the Nanny State’s AI tools

David Warren considers the evolution of the Nanny State’s arsenal of technological surveillance (supplemented by the Karenstapo):

While it is not in my interest, currently, for gentle reader to get off the Internet, the idea must have occurred to him. In times like these, why put yourself under watch from Big Brother (or, Big Sibling, as he might prefer)? Why surround yourself with his electronic eyes, the way I am presently surrounded by jackhammers?

Granted, Nanny State was devising ways to track its citizens, and to exercise “crowd control,” long before the Internet was invented. But we had the advantage with them, for they were incompetent, often laughably inept. However, Internet-plus-meejah-plus-activists-plus-Guvmint makes a more capable adversary.

I am not recommending a systematic withdrawal from the world. That is for people with a religious calling, or some grave eccentricity. Rather I am thinking of self-defence, in the spirit of buying a gun. Of course, I am writing from Canada, one of the countries where owning a gun is more-or-less illegal; as is any other form of self-defence. (“When seconds count, the police will be here in minutes.”) Though I have noticed that, upcountry, the “No Hunting” signs tend to be used for target practice.

The “other side,” as I see it, which always worked on numbers, now has algorithms. “Artificial Intelligence” can home right in. The Nanny State never took the individual seriously, except when he was offering a threat. Now it is threatened by anything human. It is, as it were, utilitarian in outlook — “the greatest good for the greatest number” — along with other fatuous concepts, unamenable to reason. By its nature, it is positivist, nominalist, relativist, and “idealistic” in a very abstract way.

Whereas we, so far as we are human, take ourselves quite personally. In a clinch, we often prefer our own survival, and the survival of family and friends, to the requirements of a bureaucratic “policy.” That this is “selfish” should be immediately affirmed.

Because the masses are now deprived of a Christian education, they misconstrue the “selfishness” of Christian teaching, which tells us that we ought selfishly to become saints. Our intention should be to get ourselves to Heaven, along with any we know who can be taken with us. But charity is not “selfish,” in ways they understand. Under modern tenets of “multiculturalism,” even fidelity to the old Christian view is decried as a form of selfishness, calling out for persecution. This is because it is “cultural,” not “multi” — for all the many languages it speaks.

Our enemy wants us to eschew uniqueness, and become instead “diverse” — by which it means homogenized and narrowly interchangeable. Increasingly, this adversary has the means to enforce its arbitrary will.

November 18, 2019

“I can’t help but wonder if a large majority of men won’t opt for the conflict-free humanoid over the real thing, with all of our baggage and hormones and mothers-in-law”

Filed under: Business, Health, USA — Tags: , , , — Nicholas @ 05:00

In the (US) Spectator, Bridget Phetasy reports on her visit to the factory where Realdolls are made:

One of the sex dolls on offer at Aura Dolls in Mississauga, the first “sex doll brothel” in the Toronto area.
Photo originally published by BlogTO – https://www.blogto.com/city/2019/11/sex-doll-brothel-mississauga/.

The floor is slippery. I guess I shouldn’t be surprised, I’m taking a tour of Abyss Creations, the factory where the “Ferraris of love dolls”, RealDoll and Realbotix, are made. A thin layer of silicone coats almost every surface. A (real) woman in her late twenties, the PR coordinator, Catherine, shows me round. She has the attitude of a hostess at a theme-park restaurant: bored or stoned or maybe both. I’m sure she’s given hundreds of these tours, heard the same dumb jokes a million times and watched us all slap the ass of a doll reluctantly yet instinctively.

[…]

The employees look at the “love dolls” as more than just sexbots. They know their customers want a couch buddy. They want someone to cuddle at night. Perhaps they’ve lost a spouse and don’t feel like dating.

Whitney Cummings logged on to a forum for men who own the sex robots and monitored their conversations for months. “I thought they were going to be creeps, psychopaths,” she says. “I don’t know what to tell you. They’re very lovely men. They’re lovely. They adore their dolls. They marry their dolls. That is happening.”

What strikes me amid the body parts, the rows of eyes, the wall of nipples and the robot “brains”: these aren’t your weird uncle’s sex dolls. With the introduction of AI, these dolls are offering something their predecessors couldn’t: intimacy and affection.

“I always looked at them as art and I always found it funny that because it’s a sexually usable thing, it’s disqualified as art in the higher sense in a lot of people’s minds. They go, ‘Oh that’s not art, that’s just nasty'”, says McMullen. “And what’s funny about that is now we’re doing this serious engineering, artificial intelligence and robotics and now people aren’t so quick to dismiss it.”

Realbotix is the natural evolution of Abyss Creations, the company McMullen started in 1997 (in fact, Abyss Creations made the doll for Lars and the Real Girl). What began as just “real dolls” now has a robotic component, an AI team and an app.

McMullen talks about how he’s always wanted to break free of the sex toy stigma. “Yes people use them sexually, but they also get this huge sense of companionship from having a doll and a robot.”

May 10, 2019

Microsoft can’t get worse than old Clippy? “Hold my non-alcoholic beer”

Filed under: Technology — Tags: , , , , — Nicholas @ 05:00

Libby Emmons reports on a new Microsoft Word plugin that puts Clippy into the history books:

Coming soon to a word processing app you probably already subscribe to is Microsoft’s new Ideas plugin. This leap forward in the predictive text trend will endeavor to help you be less offensive. Worried you might be a little bit racist? A little gender confused? Not sure about the difference between disabled persons and persons who are disabled? Never fear, Microsoft will fix your language for you.

Using machine learning and AI, Microsoft’s Ideas in Word will help writers be their least offensive, most milquetoast selves. Just like spell check and grammar check function, Ideas will make suggestions as to how to improve your text to be more inclusive. On the surface, this seems like a terrible idea, but when we dig further beneath the impulse, and the functionality of the program, it gets even worse. What’s happening is that AI and machine learning are going to be the background of pretty much every application, learning from our behaviours not only how we’d like to format our PowerPoint presentations, but learning, across platforms, how best to construct language so that we say what we are wanted to say as opposed to what we really mean.

There is an essential component of honest communication, namely that a person express themselves using their own words. When children are learning to talk and to articulate themselves, they are told to “use your words.” Microsoft will give writers the option of using someone else’s words, some amalgamation of users’ words across the platform, and the result will be that the ideas exhibited will not be the writer’s own.

December 13, 2017

Coming way too soon

Filed under: Media, Technology — Tags: , , , — Nicholas @ 03:00

Charles Stross is a highly dependable source of nightmare fuel in his SF/horror writings. He’s just as disturbing when he points out real developments about to go mainstream:

AI assisted porn video is, it seems, now a thing. For those of you who don’t read the links: you can train off-the-shelf neural networks to recognize faces (or other bits of people and objects) in video clips. You can then use the trained network to edit them, replacing one person in a video with a synthetic version of someone else. In this case, Rule 34 applies: it’s being used to take porn videos and replace the actors with film stars. The software runs on a high-end GPU and takes quite a while — hours to days — to do its stuff, but it’s out there and it’ll probably be available to rent as a cloud service running on obsolescent bitcoin-mining GPU racks in China by the end of next week.

(Obvious first-generation application: workplace/social media sexual harassers just got a whole new toolkit.)

But it’s going to get a whole lot worse.

What I’m not seeing yet is the obvious application of this sort of deep learning to speech synthesis. It’s all very well to fake up a video of David Cameron fucking a goat, but without the bleating and mindless quackspeak it’s pretty obvious that it’s a fake. Being able to train a network to recognize the cadences of our target’s intonation, though, and then to modulate a different speaker’s words so they come out sounding right takes it into a whole new level of plausibility for human viewers, because we give credence to sensory inputs based on how consistent they are with our other senses. We need AI to get the lip-sync right, in other words, before today’s simplistic AI-generated video porn turns really toxic.

(Second generation application: Hitler sums it up, now with fewer subtitles)

There are innocuous uses, of course. It’s a truism of the TV business that the camera adds ten kilograms. And we all know about airbrushing/photoshopping of models on magazine covers and in adverts. We can now automate the video-photoshopping of subjects so that, for example, folks like me don’t look as unattractive in a talking-heads TV interview. Pretty soon everyone you see on film or TV is going to be ‘shopped to look sexier, fitter, and skinnier than is actually natural. It’ll probably be built into your smartphone’s camera processor in a few years, first a “make me look fit in selfies” mode and then a “do the same thing, only in video chat” option.

February 28, 2017

When the great AI singularity happens, you’ll be sorry you called Siri a bitch

Amy Alkon views with disdain a Quartz article on sexually harassing, inter alia, Alexa and Siri:

Quartz Seriously Wants To Know: Are You Sexually Harassing Your Phone?
There’s an unbelievable piece up at Quartz, reflecting a gone-mad sector of our society — ultimately driven by radical academic feminism (though typically not admitting or crediting its nutbag roots).

Feminism was supposed to be about women wanting equal treatment. Now, as I like to put it, feminists no longer demand that women be treated as equals but as eggshells.

This article is a case in point. “We tested bots like Siri and Alexa to see who would stand up to sexual harassment,” is the headline. […]

First of all, if I could have Siri in either a bitchy drag queen voice or an Indian accent (from India, that is), which I love, I would. French or Italian or Eastern European would be fun, too. Because Apple’s rather boring about this — probably to serve an increasingly humorless and humor-attacking public — I think I have it on the British guy right now.

But I hate Siri and never use it.

The point is, you can change Siri to a man and harass the fuck out of it. I yell profanity at automated telephone systems when they repeatedly won’t accept my answer — both because I’m kind of immature and because there was this (probably mythic) idea out there that swearing would trigger a live operator to come on.

And per these evolved sex differences — we go for different Achilles heels in men and women when we’re attacking them. That’s because men and women are biologically and psychologically different, and men are more likely to be leaders, for example, and women are more likely to be caretakers.

Though male brains and female brains are mostly similar, these evolved sex differences lead to some differences in our psychology and how we present ourselves in the world (including the roles women versus men tend to have).

August 8, 2015

Tom Kratman on “killer ‘bots”

Filed under: Military, Technology, Weapons — Tags: , , , — Nicholas @ 03:00

SF author (and former US Army officer) Tom Kratman answers a few questions about drones, artificial intelligence, and the threat/promise of intelligent, self-directed weapon platforms in the near future:

Ordinarily, in this space, I try to give some answers. I’m going to try again, in an area in which I am, at least at a technological level, admittedly inexpert. Feel free to argue.

Question 1: Are unmanned aerial drones going to take over from manned combat aircraft?

I am assuming here that at some point in time the total situational awareness package of the drone operator will be sufficient for him to compete or even prevail against a manned aircraft in aerial combat. In other words, the drone operator is going to climb into a cockpit far below ground, and the only way he’ll be able to tell he’s not in an aircraft is that he’ll feel no inertia beyond the bare minimum for a touch of realism, to improve his situational awareness, but with no chance of blacking out due to high-G maneuvers.

Still, I think the answer to the question is “no,” at least as long as the drones remain under the control of an operator, usually far, far to the rear. Why not? Because to the extent the things are effective they will invite a proportional, or even more than proportional, response to defeat or at least mitigate their effectiveness. That’s just in the nature of war. This is exacerbated by there being at least three or four routes to attack the remote controlled drone. One is by attacking the operator or the base; if the drone is effective enough, it will justify the effort of making those attacks. Yes, he may be bunkered or hidden or both, but he has a signal and a signature, which can probably be found. To the extent the drone is similar in size and support needs to a manned aircraft, that runway and base will be obvious.

The second target of attack is the drone itself. Both of these targets, base/operator and aircraft, are replicated in the vulnerabilities of the manned aircraft, itself and its base. However, the remote controlled drone has an additional vulnerability: the linkage between itself and its operator. Yes, signals can be encrypted. But almost any signal, to include the encryption, can be captured, stored, delayed, amplified, and repeated, while there are practical limits on how frequently the codes can be changed. Almost anything can be jammed. To the extent the drone is dependent on one or another, or all, of the global positioning systems around the world, that signal, too, can be jammed or captured, stored, delayed, amplified and repeated. Moreover, EMP, electro-magnetic pulse, can be generated with devices well short of the nuclear. EMP may not bother people directly, but a purely electronic, remote controlled device will tend to be at least somewhat vulnerable, even if it’s been hardened.

Question 2: Will unmanned aircraft, flown by Artificial Intelligences, take over from manned combat aircraft?

The advantages of the unmanned combat aircraft, however, ranging from immunity to high G forces, to less airframe being required without the need for life support, or, alternatively, for a greater fuel or ordnance load, to expendability, because Unit 278-B356 is no one’s precious little darling, back home, to the same Unit’s invulnerability, so far as I can conceive, to torture-induced propaganda confessions, still argue for the eventual, at least partial, triumph of the self-directing, unmanned, aerial combat aircraft.

Even so, I’m going to go out on a limb and go with my instincts and one reason. The reason is that I have never yet met an AI for a wargame I couldn’t beat the digital snot out of, while even fairly dumb human opponents can present problems. Coupled with that, my instincts tell me that the better arrangement is going to be a mix of manned and unmanned, possibly with the manned retaining control of the unmanned until the last second before action.

This presupposes, of course, that we don’t come up with something – quite powerful lasers and/or renunciation of the ban on blinding lasers – to sweep all aircraft from the sky.

November 21, 2014

Elon Musk’s constant nagging worry

Filed under: Business, Technology — Tags: , , , , — Nicholas @ 07:14

In the Washington Post, Justin Moyer talks about Elon Musk’s concern about runaway artificial intelligence:

Elon Musk — the futurist behind PayPal, Tesla and SpaceX — has been caught criticizing artificial intelligence again.

“The risk of something seriously dangerous happening is in the five year timeframe,” Musk wrote in a comment since deleted from the Web site Edge.org, but confirmed to Re/Code by his representatives. “10 years at most.”

The very future of Earth, Musk said, was at risk.

“The leading AI companies have taken great steps to ensure safety,” he wrote. “They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen.”

Musk seemed to sense that these comments might seem a little weird coming from a Fortune 1000 chief executive officer.

“This is not a case of crying wolf about something I don’t understand,” he wrote. “I am not alone in thinking we should be worried.”

Unfortunately, Musk didn’t explain how humanity might be compromised by “digital superintelligences,” “Terminator”-style.

He never does. Yet Musk has been holding forth on-and-off about the apocalypse artificial intelligence might bring for much of the past year.

