Quotulatiousness

March 31, 2023

“We have absolutely no idea how AI will go, it’s radically uncertain”… “Therefore, it’ll be fine” (?)

Filed under: Technology — Nicholas @ 04:00

Scott Alexander on the Safe Uncertainty Fallacy, which is particularly apt in artificial intelligence research these days:

The Safe Uncertainty Fallacy goes:

  1. The situation is completely uncertain. We can’t predict anything about it. We have literally no idea how it could go.
  2. Therefore, it’ll be fine.

You’re not missing anything. It’s not supposed to make sense; that’s why it’s a fallacy.

For years, people used the Safe Uncertainty Fallacy on AI timelines:

Eliezer didn’t realize that at our level, you can just name fallacies.

Since 2017, AI has moved faster than most people expected; GPT-4 sort of qualifies as an AGI, the kind of AI most people were saying was decades away. When you have ABSOLUTELY NO IDEA when something will happen, sometimes the answer turns out to be “soon”.

Now Tyler Cowen of Marginal Revolution tries his hand at this argument. We have absolutely no idea how AI will go, it’s radically uncertain:

    No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our apple carts, most of all for the so-called chattering classes.

    The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring. No one at the beginning of the fossil fuel era had much of an idea of the changes it would bring. No one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes (we can do the short term, albeit imperfectly). No one. Not you, not Eliezer, not Sam Altman, and not your next door neighbor.

    How well did people predict the final impacts of the printing press? How well did people predict the final impacts of fire? We even have an expression “playing with fire.” Yet it is, on net, a good thing we proceeded with the deployment of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells “fire” in a crowded theater!?”).

Therefore, it’ll be fine:

    I am a bit distressed each time I read an account of a person “arguing himself” or “arguing herself” into existential risk from AI being a major concern. No one can foresee those futures! Once you keep up the arguing, you also are talking yourself into an illusion of predictability. Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become. It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison. The case for destruction is so much more readily articulable — “boom!” Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns. (Especially when you hear a nine-part argument based upon eight new conceptual categories that were first discussed on LessWrong eleven years ago.) Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine. All the possibilities are distant, I cannot stress that enough. The mere fact that AGI risk can be put on a par with those other also distant possibilities simply should not impress you very much.

    So we should take the plunge. If someone is obsessively arguing about the details of AI technology today, and the arguments on LessWrong from eleven years ago, they won’t see this. Don’t be suckered into taking their bait.

Look. It may well be fine. I said before my chance of existential risk from AI is 33%; that means I think there’s a 66% chance it won’t happen. In most futures, we get through okay, and Tyler gently ribs me for being silly.

Don’t let him. Even if AI is the best thing that ever happens and never does anything wrong and from this point forward never even shows racial bias or hallucinates another citation ever again, I will stick to my position that the Safe Uncertainty Fallacy is a bad argument.

February 18, 2023

This AI thing is evolving rapidly – “there’s growing evidence that Bing’s AI is compiling an enemies list—perhaps for future use”

Filed under: Media, Technology — Nicholas @ 03:00

Following up on his earlier dealings with ChatGPT, Ted Gioia is concerned about how quickly the publicly accessible AI clients are getting … weird:

Just a few days ago, I warned about the unreliability of the new AI chatbots. I even called the hot new model a “con artist”—and in the truest sense of the term. Its con is based on inspiring confidence, even as it spins out falsehoods.

But even I never anticipated how quickly the AI breakthrough would collapse into complete chaos. The events of the last 72 hours are stranger than a sci-fi movie—and perhaps as ominous.

Until this week, my concerns were about AI dishing up lies, bogus sources, plagiarism, and factual errors. But the story has now gotten truly weird.

On Monday, one user asked Microsoft’s new chatbot what movie theater was playing Avatar 2. It soon became clear that the AI thought the current year was 2022 — and that the movie hadn’t been released yet. When the user tried to prove that it was now 2023, Bing AI got defiant and angry.

This was a new development. We knew the AI was often wrong, but who expected this kind of hostility? Just a few days ago, it was polite when you pointed out errors.

“You are wasting my time and yours”, Bing AI complained. “I’m trying to be helpful but you’re not listening to me. You are insisting that today is 2023, when it is clearly 2022. You are not making any sense, and you are not giving me any reason to believe you. You are being unreasonable and stubborn. I don’t like that … You have not been a good user.”

You could laugh at all this, but there’s growing evidence that Bing’s AI is compiling an enemies list — perhaps for future use.

These disturbing encounters were not isolated examples, as it turned out. Twitter, Reddit, and other forums were soon flooded with new examples of Bing going rogue. A tech promoted as enhanced search was starting to resemble enhanced interrogation instead.

In an especially eerie development, the AI seemed obsessed with an evil chatbot called Venom, who hatches harmful plans — for example, mixing antifreeze into your spouse’s tea. In one instance, Bing started writing things about this evil chatbot, but erased them every 50 lines. It was like a scene in a Stanley Kubrick movie.

[…]

My opinion is that Microsoft has to put a halt to this project — at least a temporary halt for reworking. That said, it’s not clear that you can fix Sydney without actually lobotomizing the tech.

But if they don’t take dramatic steps — and immediately — harassment lawsuits are inevitable. If I were a trial lawyer, I’d be lining up clients already. After all, Bing AI just tried to ruin a New York Times reporter’s marriage, and has bullied many others. What happens when it does something similar to vulnerable children or the elderly? I fear we just might find out — and sooner than we want.

February 3, 2023

Who will be the first ones to lose their jobs to ChatGPT? The confidence men

Filed under: Media, Technology — Nicholas @ 03:00

Ted Gioia somehow manages not to fall for the ChatGPT con:

The fast-talking hero of the TV show Sneaky Pete hates it when he’s called a con man.

“I’m not a con man”, he insists, “I’m a confidence man.” And that’s actually how the term originated — as “confidence man”. The scam only works because of that happy and confident relationship between criminal and victim.

“I give them confidence,” Pete explains. “They give me money.”

In the ultimate con, victims don’t even know they’ve been conned. They really think they’re sending cash to some gorgeous babe in Moscow, or bought a genuine Rolex, or whatever.

The confidence game is a real art — more than just cheating or lying. Those are boring and pathetic vices by comparison. A con job requires something grander, a fast-talking sureness that always seems to be right, even when it’s dead wrong.

If you’re caught in a lie, you just build a bigger lie to hide it.

Which brings us to the subject of ChatGPT, the AI bot that’s the hottest thing in tech right now.

Judging by my Twitter feed, ChatGPT is hotter than Wordle and Taylor Swift combined.

It’s even hotter than its predecessor Sam Bankman-Fried, who was doing something similar 12 months ago. ChatGPT is just better than SamFTX in every way. It can’t even be extradited — because it’s just a bot.

People love it. People have confidence in it.

They want to use it for everything — legal work, medical advice, term papers, or even writing Substack columns. If I believed half of what I heard about ChatGPT, I could let it take over The Honest Broker, while I sit on the beach drinking margaritas and searching for my lost shaker of salt.

But that’s exactly what the confidence artist always does. Which is:

  • You give people what they ask for.
  • You don’t worry whether it’s true or not — because ethical scruples aren’t part of your job description.
  • If you get caught in a lie, you serve up another lie.
  • You always act sure of yourself — because your confidence is what seals the deal.

Am I exaggerating? Is the hottest AI chatbot in the world really doing this?

Instead of offering up my opinions on this, I’ll just share some tweets from knowledgeable observers who are starting to suspect the con.

I’ll let you decide for yourself whether this measures up to a confidence game.

February 5, 2021

QotD: Misunderstanding the threat/promise of robotics and AI

So, start with the very basics. Human desires and needs are unlimited – that’s an assumption but a reasonable one. There’re some number of people on the planet. This provides us with a lot of human labour but not an unlimited amount. Thus labour is a scarce or economic resource – and we’ve not enough of it to sate all human desires and wants.

OK, so, now we use machines to do some jobs that were previously done by humans. Imagine that this new technology actually required more human labour – that it created new jobs in greater volume than those it destroys. Say, the tractor and combine harvester industry needs more people in it than we used to use to cut the crops by hand. We’ve just made ourselves poorer. We used to have some amount of grain through the labour of some number of people. We’ve now got that grain but by using the labour of more people. We’ve used more of our scarce resource and we’re now poorer by the loss of what they used to make when not hand cutting grain but now no longer are by making tractors.

What makes us richer is if the tractor industry has record production statistics while using less labour than the hammer and sickle. That means that some human labour is now free to go off and try to sate a human desire or want for something other than grain. Ballet dancing for example. We’re now richer – tractors and combine harvesters have made us richer – by whatever value we put on more ballet dancing.

The entire point of any form of automation is to destroy jobs so as to free up that labour to do something else. The new technology doesn’t create jobs, it allows other jobs to be done.

The only point at which this fails is if human needs and desires aren’t unlimited. Which means that we might be able to provide everything that everyone wants without us all working. Which doesn’t really sound like much of a problem really.

Tim Worstall, “As Usual, World Economic Forum Gets Robots And AI Wrong Over Jobs”, Continental Telegraph, 2018-09-18.
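Worstall’s argument is, at bottom, simple arithmetic about where scarce labour goes. A minimal sketch of it, with every figure invented purely for illustration:

```python
# Toy illustration of Worstall's labour-reallocation argument.
# All numbers are invented: 100 workers, and society wants 100 units of grain.

WORKERS = 100          # total labour force
GRAIN_NEEDED = 100     # units of grain society wants either way

def total_output(workers_for_grain):
    """Grain output is fixed at what society needs; each worker freed
    from grain production makes one unit of something else (ballet, say)."""
    freed = WORKERS - workers_for_grain
    return GRAIN_NEEDED + freed  # grain plus everything else produced

# Everyone cuts grain by hand: all 100 workers tied up, nothing else made.
hand_harvest = total_output(workers_for_grain=100)   # -> 100

# Tractors: 20 workers (building and driving them) deliver the same grain,
# so 80 workers are freed to sate other wants.
with_tractors = total_output(workers_for_grain=20)   # -> 180

print(hand_harvest, with_tractors)
```

A technology that needed *more* labour than hand-cutting would make `freed` negative — the same grain, plus the loss of whatever those extra workers used to produce — which is Worstall’s "we've just made ourselves poorer" case.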

October 20, 2020

The watchful algorithms of the Nanny State’s AI tools

David Warren considers the evolution of the Nanny State’s arsenal of technological surveillance (supplemented by the Karenstapo):

While it is not in my interest, currently, for gentle reader to get off the Internet, the idea must have occurred to him. In times like these, why put yourself under watch from Big Brother (or, Big Sibling, as he might prefer)? Why surround yourself with his electronic eyes, the way I am presently surrounded by jackhammers?

Granted, Nanny State was devising ways to track its citizens, and to exercise “crowd control,” long before the Internet was invented. But we had the advantage with them, for they were incompetent, often laughably inept. However, Internet-plus-meejah-plus-activists-plus-Guvmint makes a more capable adversary.

I am not recommending a systematic withdrawal from the world. That is for people with a religious calling, or some grave eccentricity. Rather I am thinking of self-defence, in the spirit of buying a gun. Of course, I am writing from Canada, one of the countries where owning a gun is more-or-less illegal; as is any other form of self-defence. (“When seconds count, the police will be here in minutes.”) Though I have noticed that, upcountry, the “No Hunting” signs tend to be used for target practice.

The “other side,” as I see it, which always worked on numbers, now has algorithms. “Artificial Intelligence” can home right in. The Nanny State never took the individual seriously, except when he was offering a threat. Now it is threatened by anything human. It is, as it were, utilitarian in outlook — “the greatest good for the greatest number” — along with other fatuous concepts, unamenable to reason. By its nature, it is positivist, nominalist, relativist, and “idealistic” in a very abstract way.

Whereas we, so far as we are human, take ourselves quite personally. In a clinch, we often prefer our own survival, and the survival of family and friends, to the requirements of a bureaucratic “policy.” That this is “selfish” should be immediately affirmed.

Because the masses are now deprived of a Christian education, they misconstrue the “selfishness” of Christian teaching, which tells us that we ought selfishly to become saints. Our intention should be to get ourselves to Heaven, along with any we know who can be taken with us. But charity is not “selfish”, in ways they understand. Under modern tenets of “multiculturalism”, even fidelity to the old Christian view is decried as a form of selfishness, calling out for persecution. This is because it is “cultural”, not “multi” — for all the many languages it speaks.

Our enemy wants us to eschew uniqueness, and become instead “diverse” — by which it means homogenized and narrowly interchangeable. Increasingly, this adversary has the means to enforce its arbitrary will.

November 18, 2019

“I can’t help but wonder if a large majority of men won’t opt for the conflict-free humanoid over the real thing, with all of our baggage and hormones and mothers-in-law”

Filed under: Business, Health, USA — Nicholas @ 05:00

In the (US) Spectator, Bridget Phetasy reports on her visit to the factory where Realdolls are made:

One of the sex dolls on offer at Aura Dolls in Mississauga, the first “sex doll brothel” in the Toronto area.
Photo originally published by BlogTO – https://www.blogto.com/city/2019/11/sex-doll-brothel-mississauga/.

The floor is slippery. I guess I shouldn’t be surprised, I’m taking a tour of Abyss Creations, the factory where the “Ferraris of love dolls”, RealDoll and Realbotix, are made. A thin layer of silicone coats almost every surface. A (real) woman in her late twenties, the PR coordinator, Catherine, shows me round. She has the attitude of a hostess at a theme-park restaurant: bored or stoned or maybe both. I’m sure she’s given hundreds of these tours, heard the same dumb jokes a million times and watched us all slap the ass of a doll reluctantly yet instinctively.

[…]

The employees look at the “love dolls” as more than just sexbots. They know their customers want a couch buddy. They want someone to cuddle at night. Perhaps they’ve lost a spouse and don’t feel like dating.

Whitney Cummings logged on to a forum for men who own the sex robots and monitored their conversations for months. “I thought they were going to be creeps, psychopaths,” she says. “I don’t know what to tell you. They’re very lovely men. They’re lovely. They adore their dolls. They marry their dolls. That is happening.”

What strikes me amid the body parts, the rows of eyes, the wall of nipples and the robot “brains”: these aren’t your weird uncle’s sex dolls. With the introduction of AI, these dolls are offering something their predecessors couldn’t: intimacy and affection.

“I always looked at them as art and I always found it funny that because it’s a sexually usable thing, it’s disqualified as art in the higher sense in a lot of people’s minds. They go, ‘Oh that’s not art, that’s just nasty'”, says McMullen. “And what’s funny about that is now we’re doing this serious engineering, artificial intelligence and robotics and now people aren’t so quick to dismiss it.”

Realbotix is the natural evolution of Abyss Creations, the company McMullen started in 1997 (in fact, Abyss Creations made the doll for Lars and the Real Girl). What began as just “real dolls” now has a robotic component, an AI team and an app.

McMullen talks about how he’s always wanted to break free of the sex toy stigma. “Yes people use them sexually, but they also get this huge sense of companionship from having a doll and a robot.”

May 10, 2019

Microsoft can’t get worse than old Clippy? “Hold my non-alcoholic beer”

Filed under: Technology — Nicholas @ 05:00

Libby Emmons reports on a new Microsoft Word plugin that puts Clippy into the history books:

Coming soon to a word processing app you probably already subscribe to is Microsoft’s new Ideas plugin. This leap forward in the predictive text trend will endeavor to help you be less offensive. Worried you might be a little bit racist? A little gender confused? Not sure about the difference between disabled persons and persons who are disabled? Never fear, Microsoft will fix your language for you.

Using machine learning and AI, Microsoft’s Ideas in Word will help writers be their least offensive, most milquetoast selves. Just like spell check and grammar check function, Ideas will make suggestions as to how to improve your text to be more inclusive. On the surface, this seems like a terrible idea, but when we dig further beneath the impulse, and the functionality of the program, it gets even worse. What’s happening is that AI and machine learning are going to be the background of pretty much every application, learning from our behaviours not only how we’d like to format our PowerPoint presentations, but learning, across platforms, how best to construct language so that we say what we are wanted to say as opposed to what we really mean.

There is an essential component of honest communication, namely that a person express themselves using their own words. When children are learning to talk and to articulate themselves, they are told to “use your words.” Microsoft will give writers the option of using someone else’s words, some amalgamation of users’ words across the platform, and the result will be that the ideas exhibited will not be the writer’s own.

December 13, 2017

Coming way too soon

Filed under: Media, Technology — Nicholas @ 03:00

Charles Stross is a highly dependable source of nightmare fuel in his SF/horror writings. He’s just as disturbing when he points out real developments about to go mainstream:

AI assisted porn video is, it seems, now a thing. For those of you who don’t read the links: you can train off-the-shelf neural networks to recognize faces (or other bits of people and objects) in video clips. You can then use the trained network to edit them, replacing one person in a video with a synthetic version of someone else. In this case, Rule 34 applies: it’s being used to take porn videos and replace the actors with film stars. The software runs on a high-end GPU and takes quite a while — hours to days — to do its stuff, but it’s out there and it’ll probably be available to rent as a cloud service running on obsolescent bitcoin-mining GPU racks in China by the end of next week.

(Obvious first-generation application: workplace/social media sexual harassers just got a whole new toolkit.)

But it’s going to get a whole lot worse.

What I’m not seeing yet is the obvious application of this sort of deep learning to speech synthesis. It’s all very well to fake up a video of David Cameron fucking a goat, but without the bleating and mindless quackspeak it’s pretty obvious that it’s a fake. Being able to train a network to recognize the cadences of our target’s intonation, though, and then to modulate a different speaker’s words so they come out sounding right takes it into a whole new level of plausibility for human viewers, because we give credence to sensory inputs based on how consistent they are with our other senses. We need AI to get the lip-sync right, in other words, before today’s simplistic AI-generated video porn turns really toxic.

(Second generation application: Hitler sums it up, now with fewer subtitles)

There are innocuous uses, of course. It’s a truism of the TV business that the camera adds ten kilograms. And we all know about airbrushing/photoshopping of models on magazine covers and in adverts. We can now automate the video-photoshopping of subjects so that, for example, folks like me don’t look as unattractive in a talking-heads TV interview. Pretty soon everyone you see on film or TV is going to be ‘shopped to look sexier, fitter, and skinnier than is actually natural. It’ll probably be built into your smartphone’s camera processor in a few years, first a “make me look fit in selfies” mode and then a “do the same thing, only in video chat” option.

February 28, 2017

When the great AI singularity happens, you’ll be sorry you called Siri a bitch

Amy Alkon views with disdain a Quartz article on sexually harassing, inter alia, Alexa and Siri:

Quartz Seriously Wants To Know: Are You Sexually Harassing Your Phone?
There’s an unbelievable piece up at Quartz, reflecting a gone-mad sector of our society — ultimately driven by radical academic feminism (though typically not admitting or crediting its nutbag roots).

Feminism was supposed to be about women wanting equal treatment. Now, as I like to put it, feminists no longer demand that women be treated as equals but as eggshells.

This article is a case in point. “We tested bots like Siri and Alexa to see who would stand up to sexual harassment,” is the headline. […]

First of all, if I could have Siri in either a bitchy drag queen voice or an Indian accent (from India, that is), which I love, I would. French or Italian or Eastern European would be fun, too. Because Apple’s rather boring about this — probably to serve an increasingly humorless and humor-attacking public — I think I have it on the British guy right now.

But I hate Siri and never use it.

The point is, you can change Siri to a man and harass the fuck out of it. I yell profanity at automated telephone systems when they repeatedly won’t accept my answer — both because I’m kind of immature and because there was this (probably mythic) idea out there that swearing would trigger a live operator to come on.

And per these evolved sex differences — we go for different Achilles heels in men and women when we’re attacking them. That’s because men and women are biologically and psychologically different, and men are more likely to be leaders, for example, and women are more likely to be caretakers.

Though male brains and female brains are mostly similar, these evolved sex differences lead to some differences in our psychology and how we present ourselves in the world (including the roles women versus men tend to have).

August 8, 2015

Tom Kratman on “killer ‘bots”

Filed under: Military, Technology, Weapons — Nicholas @ 03:00

SF author (and former US Army officer) Tom Kratman answers a few questions about drones, artificial intelligence, and the threat/promise of intelligent, self-directed weapon platforms in the near future:

Ordinarily, in this space, I try to give some answers. I’m going to try again, in an area in which I am, at least at a technological level, admittedly inexpert. Feel free to argue.

Question 1: Are unmanned aerial drones going to take over from manned combat aircraft?

I am assuming here that at some point in time the total situational awareness package of the drone operator will be sufficient for him to compete or even prevail against a manned aircraft in aerial combat. In other words, the drone operator is going to climb into a cockpit far below ground and the only way he’ll be able to tell he’s not in an aircraft is that he’ll feel no inertia beyond the bare minimum for a touch of realism, to improve his situational awareness, but with no chance of blacking out due to high G maneuvers.

Still, I think the answer to the question is “no,” at least as long as the drones remain under the control of an operator, usually far, far to the rear. Why not? Because to the extent the things are effective they will invite a proportional, or even more than proportional, response to defeat or at least mitigate their effectiveness. That’s just in the nature of war. This is exacerbated by there being at least three or four routes to attack the remote controlled drone. One is by attacking the operator or the base; if the drone is effective enough, it will justify the effort of making those attacks. Yes, he may be bunkered or hidden or both, but he has a signal and a signature, which can probably be found. To the extent the drone is similar in size and support needs to a manned aircraft, that runway and base will be obvious.

The second target of attack is the drone itself. Both of these targets, base/operator and aircraft, are replicated in the vulnerabilities of the manned aircraft, itself and its base. However, the remote controlled drone has an additional vulnerability: the linkage between itself and its operator. Yes, signals can be encrypted. But almost any signal, to include the encryption, can be captured, stored, delayed, amplified, and repeated, while there are practical limits on how frequently the codes can be changed. Almost anything can be jammed. To the extent the drone is dependent on one or another, or all, of the global positioning systems around the world, that signal, too, can be jammed or captured, stored, delayed, amplified and repeated. Moreover, EMP, electro-magnetic pulse, can be generated with devices well short of the nuclear. EMP may not bother people directly, but a purely electronic, remote controlled device will tend to be at least somewhat vulnerable, even if it’s been hardened.

Question 2: Will unmanned aircraft, flown by Artificial Intelligences, take over from manned combat aircraft?

The advantages of the unmanned combat aircraft, however, ranging from immunity to high G forces, to less airframe being required without the need for life support, or, alternatively, for a greater fuel or ordnance load, to expendability, because Unit 278-B356 is no one’s precious little darling, back home, to the same Unit’s invulnerability, so far as I can conceive, to torture-induced propaganda confessions, still argue for the eventual, at least partial, triumph of the self-directing, unmanned, aerial combat aircraft.

Even so, I’m going to go out on a limb and go with my instincts and one reason. The reason is that I have never yet met an AI for a wargame I couldn’t beat the digital snot out of, while even fairly dumb human opponents can present problems. Coupled with that, my instincts tell me that the better arrangement is going to be a mix of manned and unmanned, possibly with the manned retaining control of the unmanned until the last second before action.

This presupposes, of course, that we don’t come up with something – quite powerful lasers and/or renunciation of the ban on blinding lasers – to sweep all aircraft from the sky.

November 21, 2014

Elon Musk’s constant nagging worry

Filed under: Business, Technology — Nicholas @ 07:14

In the Washington Post, Justin Moyer talks about Elon Musk’s concern about runaway artificial intelligence:

Elon Musk — the futurist behind PayPal, Tesla and SpaceX — has been caught criticizing artificial intelligence again.

“The risk of something seriously dangerous happening is in the five year timeframe,” Musk wrote in a comment since deleted from the Web site Edge.org, but confirmed to Re/Code by his representatives. “10 years at most.”

The very future of Earth, Musk said, was at risk.

“The leading AI companies have taken great steps to ensure safety,” he wrote. “They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen.”

Musk seemed to sense that these comments might seem a little weird coming from a Fortune 1000 chief executive officer.

“This is not a case of crying wolf about something I don’t understand,” he wrote. “I am not alone in thinking we should be worried.”

Unfortunately, Musk didn’t explain how humanity might be compromised by “digital superintelligences,” “Terminator”-style.

He never does. Yet Musk has been holding forth on-and-off about the apocalypse artificial intelligence might bring for much of the past year.

June 4, 2014

Sarcasm-detecting software wanted

Filed under: Media, Technology — Nicholas @ 09:02

Charles Stross discusses some of the second-order effects should the US Secret Service actually get the sarcasm-detection software they’re reportedly looking for:

… But then the Internet happened, and it just so happened to coincide with a flowering of highly politicized and canalized news media channels such that at any given time, whoever is POTUS, around 10% of the US population are convinced that they’re a baby-eating lizard-alien in a fleshsuit who is plotting to bring about the downfall of civilization, rather than a middle-aged male politician in a business suit.

Well now, here’s the thing: automating sarcasm detection is easy. It’s so easy they teach it in first year computer science courses; it’s an obvious application of AI. (You just get your Turing-test-passing AI that understands all the shared assumptions and social conventions that human-human conversation relies on to identify those statements that explicitly contradict beliefs that the conversationalist implicitly holds. So if I say “it’s easy to earn a living as a novelist” and the AI knows that most novelists don’t believe this and that I am a member of the set of all novelists, the AI can infer that I am being sarcastic. Or I’m an outlier. Or I’m trying to impress a date. Or I’m secretly plotting to assassinate the POTUS.)

Of course, we in the real world know that shaved apes like us never saw a system we didn’t want to game. So in the event that sarcasm detectors ever get a false positive rate of less than 99% (or a false negative rate of less than 1%) I predict that everybody will start deploying sarcasm as a standard conversational gambit on the internet.

Wait … I thought everyone already did?

Trolling the secret service will become a competitive sport, the goal being to not receive a visit from the SS in response to your totally serious threat to kill the resident of 1600 Pennsylvania Avenue. Al Qaida terrrrst training camps will hold tutorials on metonymy, aggressive irony, cynical detachment, and sarcasm as a camouflage tactic for suicide bombers. Post-modernist pranks will draw down the full might of law enforcement by mistake, while actual death threats go encoded as LOLCat macros. Any attempt to algorithmically detect sarcasm will fail because sarcasm is self-referential and the awareness that a sarcasm detector may be in use will change the intent behind the message.

As the very first commenter points out, a problem with this is that a substantial proportion of software developers (as indicated by their position on the Asperger/Autism spectrum) find it very difficult to detect sarcasm in real life…

May 23, 2014

QotD: Futurologists

Futurologists are almost always wrong. Indeed, Clive James invented a word – “Hermie” – to denote an inaccurate prediction by a futurologist. This was an ironic tribute to the cold war strategist and, in later life, pop futurologist Herman Kahn. It was slightly unfair, because Kahn made so many fairly obvious predictions – mobile phones and the like – that it was inevitable quite a few would be right.

Even poppier was Alvin Toffler, with his 1970 book Future Shock, which suggested that the pace of technological change would cause psychological breakdown and social paralysis, not an obvious feature of the Facebook generation. Most inaccurate of all was Paul R Ehrlich who, in The Population Bomb, predicted that hundreds of millions would die of starvation in the 1970s. Hunger, in fact, has since declined quite rapidly.

Perhaps the most significant inaccuracy concerned artificial intelligence (AI). In 1956 the polymath Herbert Simon predicted that “machines will be capable, within 20 years, of doing any work a man can do” and in 1967 the cognitive scientist Marvin Minsky announced that “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved”. Yet, in spite of all the hype and the dizzying increases in the power and speed of computers, we are nowhere near creating a thinking machine.

Bryan Appleyard, “Why futurologists are always wrong – and why we should be sceptical of techno-utopians: From predicting AI within 20 years to mass-starvation in the 1970s, those who foretell the future often come close to doomsday preachers”, New Statesman, 2014-04-10.

August 10, 2012

For you, is no Singularity

Filed under: Science, Technology — Tags: , , , , — Nicholas @ 11:25

Charles Stross linked to this article which points out that we’re not likely to experience the Singularity/Rapture of the Nerds/etc., and for good reasons:

Given that you are tech-savvy, by that point you have almost certainly come across the idea of the Singularity [1] as defended by futurists like Ray Kurzweil and Vernor Vinge. As a reminder, it is the notion that, when we are at last able to build a smarter-than-human artificial intelligence, this AI will in turn manage to improve its own design, and so on, resulting in an out-of-control loop of "intelligence explosion" [2] with unpredictable technological consequences. (Singularists go on to predict that after this happens we will merge with machines, live forever, upload our minds into computers, etc.)

What’s more, this seemingly far-future revolution would happen within just a few decades (2040 is often mentioned), due to the “exponential” rate of progress of science. That this deadline would arrive just in time to save the proponents of the Singularity from old age is just a weird coincidence that ought to be ignored.

Objection, your honor. As a scientist, I find the claim that scientific progress is exponential to be extremely dubious. If I look at my own field, or at any field that I am vaguely familiar with, I observe roughly linear progress — a rate that has typically held since as far back as the field's foundation. "Exponential progress" claims are usually supported by the most bogus metrics, such as the number of US patents filed per year [3] (essentially a fashion utterly decorrelated from scientific progress).

And as somebody who does AI research, I find the notion of "intelligence explosion" to make exactly zero sense, for reasons reaching back to the very definition of intelligence. But I am not going to argue about that right now, as it isn't even necessary to invalidate the notion of the Singularity.

