Quotulatiousness

April 6, 2024

Three AI catastrophe scenarios

Filed under: Technology — Nicholas @ 03:00

David Friedman considers the threat of an artificial intelligence catastrophe and the possible solutions for humanity:

    Earlier I quoted Kurzweil’s estimate of about thirty years to human level A.I. Suppose he is correct. Further suppose that Moore’s law continues to hold, that computers continue to get twice as powerful every year or two. In forty years, that makes them something like a hundred times as smart as we are. We are now chimpanzees, perhaps gerbils, and had better hope that our new masters like pets. (Future Imperfect Chapter XIX: Dangerous Company)
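The compounding arithmetic behind that projection is easy to make concrete. Here is a minimal sketch in Python (an illustration only, not Friedman’s or Kurzweil’s math) of how raw hardware capability grows under an assumed doubling period; how raw compute maps onto “smartness” is the contested step in any such estimate.

    # Toy Moore's-law compounding. Illustrative assumption only:
    # hardware capability doubles every `doubling_years` years.
    def capability_multiplier(years: float, doubling_years: float) -> float:
        """How many times more capable hardware becomes after `years` years."""
        return 2.0 ** (years / doubling_years)

    for doubling_years in (1.0, 1.5, 2.0):
        mult = capability_multiplier(40, doubling_years)
        print(f"doubling every {doubling_years} yr -> {mult:,.0f}x in 40 years")
    # Even the slowest case (doubling every 2 years) yields ~1,000,000x raw
    # compute in 40 years; "a hundred times as smart" implicitly assumes that
    # smartness grows far more slowly than raw compute.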

As that quote from a book published in 2008 demonstrates, I have been concerned with the possible downside of artificial intelligence for quite a while. The creation of large language models producing writing and art that appears to be the work of a human-level intelligence got many other people interested. The issue of possible AI catastrophes has now progressed from something that science fiction writers, futurologists, and a few other oddballs worried about to a putative existential threat.

Large language models work by mining a large database of what humans have written, deducing what they should say from what people have said. The result looks as if a human wrote it, but it is a poor fit for the takeoff model, in which an AI a little smarter than a human uses its intelligence to make one a little smarter still, repeated up to superhuman. However powerful the hardware that an LLM is running on, it has no superhuman conversation to mine, so better hardware should make it faster but not smarter. And although it can mine a massive body of data on what humans say in order to figure out what it should say, it has no comparable body of data for what humans do when they want to take over the world.

If that is right, the danger of superintelligent AIs is a plausible conjecture for the indefinite future but not, as some now believe, a near certainty in the lifetime of most now alive.

[…]

If AI is a serious, indeed existential, risk, what can be done about it?

I see three approaches:

I. Keep superhuman level AI from being developed.

That might be possible if we had a world government committed to the project but (fortunately) we don’t. Progress in AI does not require enormous resources so there are many actors, firms and governments, that can attempt it. A test of an atomic weapon is hard to hide but a test of an improved AI isn’t. Better AI is likely to be very useful. A smarter AI in private hands might predict stock market movements a little better than a very skilled human, making a lot of money. A smarter AI in military hands could be used to control a tank or a drone, be a soldier that, once trained, could be duplicated many times. That gives many actors a reason to attempt to produce it.

If the issue were building or not building a superhuman AI, perhaps everyone who could do it could be persuaded that the project is too dangerous, although experience with the similar issue of gain-of-function research is not encouraging. But at each step the issue is likely to present itself as building or not building an AI a little smarter than the last one, the one you already have. Intelligence, of a computer program or a human, is a continuous variable; there is no obvious line to avoid crossing.

    When considering the down side of technologies – Murder Incorporated in a world of strong privacy or some future James Bond villain using nanotechnology to convert the entire world to gray goo – your reaction may be “Stop the train, I want to get off.” In most cases, that is not an option. This particular train is not equipped with brakes. (Future Imperfect, Chapter II)

II. Tame it, make sure that the superhuman AI is on our side.

Some humans, indeed most humans, have moral beliefs that affect their actions, making them reluctant to kill or steal from a member of their ingroup. It is not absurd to believe that we could design a human-level artificial intelligence with moral constraints and that it could then design a superhuman AI with similar constraints. Human moral beliefs apply to small children, for some even to some animals, so it is not absurd to believe that a superhuman AI could view humans as part of its ingroup and be reluctant to achieve its objectives in ways that injured them.

Even if we can produce a moral AI, there remains the problem of making sure that all AIs are moral, that there are no psychopaths among them, not even ones who care about their peers but not about us (the attitude of most humans to most animals). The best we can do may be to have the friendly AIs defending us make harming us too costly for the unfriendly ones to be worth doing.

III. Keep up with AI by making humans smarter too.

The solution proposed by Raymond Kurzweil is for us to become computers too, at least in part. The technological developments leading to advanced A.I. are likely to be associated with much greater understanding of how our own brains work. That might make it possible to construct much better brain-to-machine interfaces and move a substantial part of our thinking to silicon. Consider 89352 times 40327 and the answer is obviously 3603298104. Multiplying five-figure numbers is not all that useful a skill, but if we understand enough about thinking to build computers that think as well as we do, whether by design, evolution, or reverse engineering ourselves, we should understand enough to offload more useful parts of our onboard information processing to external hardware.
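As a quick sanity check of the arithmetic in that thought experiment (a trivial sketch; the point is precisely that silicon does this instantly):

    # The product Friedman imagines simply "knowing" once arithmetic
    # has been offloaded to silicon:
    print(89352 * 40327)  # 3603298104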

Now we can take advantage of Moore’s law too.

A modest version is already happening. I do not have to remember my appointments — my phone can do it for me. I do not have to keep mental track of what I eat; there is an app which will be happy to tell me how many calories I have consumed, how much fat, protein, and carbohydrates, and how it compares with what it thinks I should be doing. If I want to keep track of how many steps I have taken this hour, my smart watch will do it for me.

The next step is a direct mind to machine connection, currently being pioneered by Elon Musk’s Neuralink. The extreme version merges into uploading. Over time, more and more of your thinking is done in silicon, less and less in carbon. Eventually your brain, and perhaps your body as well, come to play a minor role in your life, vestigial organs kept around mainly out of sentiment.

As our AI becomes superhuman, so do we.

March 25, 2024

Vernor Vinge, RIP

Filed under: Books, Technology — Nicholas @ 04:00

Glenn Reynolds remembers science fiction author Vernor Vinge, who died last week aged 79, reportedly from complications of Parkinson’s disease:

Vernor Vinge has died, but even in his absence, the rest of us are living in his world. In particular, we’re living in a world that looks increasingly like the 2025 of his 2006 novel Rainbows End. For better or for worse.

[…]

Vinge is best known for coining the now-commonplace term “the singularity” to describe the epochal technological change that we’re in the middle of now. The thing about a singularity is that it’s not just a change in degree, but a change in kind. As he explained it, if you traveled back in time to explain modern technology to, say, Mark Twain – a technophile of the late 19th Century – he would have been able to basically understand it. He might have doubted some of what you told him, and he might have had trouble grasping the significance of some of it, but basically, he would have understood the outlines.

But a post-singularity world would be as incomprehensible to us as our modern world is to a flatworm. When you have artificial intelligence (and/or augmented human intelligence, which at some point may merge) of sufficient power, it’s not just smarter than contemporary humans. It’s smart to a degree, and in ways, that contemporary humans simply can’t get their minds around.

I said that we’re living in Vinge’s world even without him, and Rainbows End is the illustration. Rainbows End is set in 2025, a time when technology is developing increasingly fast, and the first glimmers of artificial intelligence are beginning to appear – some not so obviously.

Well, that’s where we are. The book opens with the spread of a new epidemic being first noticed not by officials but by hobbyists who aggregate and analyze publicly available data. We, of course, have just come off a pandemic in which hobbyists and amateurs have in many respects outperformed public health officialdom (which sadly turns out to have been a genuinely low bar to clear). Likewise, today we see people using networks of iPhones (with their built-in accelerometers) to predict and observe earthquakes.

But the most troubling passage in Rainbows End is this one:

    Every year, the civilized world grew and the reach of lawlessness and poverty shrank. Many people thought that the world was becoming a safer place … Nowadays Grand Terror technology was so cheap that cults and criminal gangs could acquire it. … In all innocence, the marvelous creativity of humankind continued to generate unintended consequences. There were a dozen research trends that could ultimately put world-killer weapons in the hands of anyone having a bad hair day.

Modern gene-editing techniques make it increasingly easy to create deadly pathogens, and that’s just one of the places where distributed technology is moving us toward this prediction.

But the big item in the book is the appearance of artificial intelligence, and how that appearance is not as obvious or clear as you might have thought it would be in 2005. That’s kind of where we are now. Large Language Models can certainly seem intelligent, and are increasingly good enough to pass a Turing Test with naïve readers, though those who have read a lot of ChatGPT’s output learn to spot it pretty well. (Expect that to change soon, though).

December 31, 2023

QotD: Orwell as a “failed prophet”

Filed under: Books, History, Quotations — Nicholas @ 01:00

Some critics do not fault [Nineteen Eighty-Four] on artistic grounds, but rather judge its vision of the future as wildly off-base. For them, Orwell is a naïve prophet. Treating Orwell as a failed forecaster of futuristic trends, some professional “futurologists” have catalogued no fewer than 160 “predictions” that they claim are identifiable in Orwell’s allegedly poorly imagined novel, pertaining to the technical gadgetry, the geopolitical alignments, and the historical timetable.

Admittedly, if Orwell was aiming to prophesy, he misfired. Oceania is a world in which the ruling elite espouses no ideology except the brutal insistence that “might makes right”. Tyrannical regimes today still promote ideological orthodoxy — and punish public protest, organized dissidence, and conspicuous deviation. (Just ask broad swaths of the citizenry in places such as North Korea, Venezuela, Cuba, and mainland China.) Moreover, the Party in Oceania mostly ignores “the proles”. Barely able to subsist, they are regarded by the regime as harmless. The Party does not bother to monitor or indoctrinate them, which is not at all the case with the “Little Brothers” that have succeeded Hitler and Stalin on the world stage.

Rather than promulgate ideological doctrines and dogmas, the Party of Oceania exalts power, promotes leader worship, and builds cults of personality. In Room 101, O’Brien douses Winston’s vestigial hope to resist the brainwashing or at least to leave some scrap of a legacy that might give other rebels hope. “Imagine,” declares O’Brien, “a boot stamping on a human face — forever.” That is the future, he says, and nothing else. Hatred in Oceania is fomented by periodic “Hate Week” rallies where the Outer Party members bleat “Two Minutes Hate” chants, threatening death to the ever-changing enemy. (Critics of the Trump rallies during and since the presidential campaign compare the chants of his supporters — such as “Lock Her Up” about “Crooked Hillary” Clinton and her alleged crimes — to the Hate Week rallies in Nineteen Eighty-Four.)

Yet all of these complaints about the purported shortcomings of Nineteen Eighty-Four miss the central point. If Orwell “erred” in his predictions about the future, that was predictable — because he wasn’t aiming to “predict” or “forecast” the future. His book was not a prophecy; it was — and remains — a warning. Furthermore, the warning expressed by Orwell was so potent that this work of fiction helped prevent such a dire future from being realized. So effective were the sirens of the sentinel that the predictions of the “prophet” never were fulfilled.

Nineteen Eighty-Four voices Orwell’s still-relevant warning of what might have happened if certain global trends of the early postwar era had continued. And these trends — privacy invasion, corruption of language, cultural drivel and mental debris (prolefeed), bowdlerization (or “rectification”) of history, vanquishing of objective truth — persist in our own time. Orwell was right to warn his readers in the immediate aftermath of the defeat of Hitler and the still regnant Stalin in 1949. And his alarms still resound in the 21st century. Setting aside arguments about forecasting, it is indisputable that surveillance in certain locales, including in the “free” world of the West, resembles Big Brother’s “telescreens” everywhere in Oceania, which undermine all possibility of personal privacy. For instance, in 2013, it was estimated that England had 5.9 million CCTV cameras in operation. The case is comparable in many European and American places, especially in urban centers. (Ironically, it was revealed not long ago that the George Orwell Square in downtown Barcelona — christened to honor him for his fighting against the fascists in the Spanish Civil War — boasts several hidden security cameras.)

Cameras are just one, almost old-fashioned technology that violates our privacy, and our freedoms of speech and association. The power of Amazon, Google, Facebook, and other web systems to track our everyday activities is far beyond anything that Orwell imagined. What would he think of present-day mobile phones?

John Rodden and John Rossi, “George Orwell Warned Us, But Was Anyone Listening?”, The American Conservative, 2019-10-02.

August 27, 2023

When the techno-utopians proclaimed the end of the book

Filed under: Books, Business, Economics, Media, Technology — Nicholas @ 03:00

In the latest SHuSH newsletter, Ken Whyte harks back to a time when brash young tech evangelists were lining up to bury the ancient codex because everything would be online, accessible, indexed, and (presumably) free to access. That … didn’t happen the way they foresaw:

By the time I picked up Is This a Book?, a slim new volume from Angus Phillips and Miha Kovač, I’d forgotten the giddy digital evangelism of the mid-Aughts.

In 2006, for instance, a New York Times piece by Kevin Kelly, the self-styled “senior maverick” at Wired, proclaimed the end of the book.

It was already happening, Kelly wrote. Corporations and libraries around the world were scanning millions of books. Some operations were using robotics that could digitize 1,000 pages an hour, others assembly lines of poorly paid Chinese labourers. When they finished their work, all the books from all the libraries and archives in the world would be compressed onto a 50 petabyte hard disk which, said Kelly, would be as big as a house. But within a few years, it would fit in your iPod (the iPhone was still a year away; the iPad three years).
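Kelly’s 50-petabyte figure is easy to put in perspective with back-of-envelope arithmetic. A minimal sketch, where the per-book sizes are illustrative assumptions rather than Kelly’s numbers:

    # Rough scale of Kelly's "library of all libraries".
    PETABYTE = 10**15
    library_bytes = 50 * PETABYTE

    plain_text_book = 10**6        # ~1 MB per book as plain text (assumption)
    scanned_book = 50 * 10**6      # ~50 MB per book as page scans (assumption)

    print(f"{library_bytes // plain_text_book:,} text-only books")  # 50,000,000,000
    print(f"{library_bytes // scanned_book:,} scanned books")       # 1,000,000,000

Even at scanned-image sizes, 50 PB holds on the order of a billion volumes, comfortably more than the world’s estimated stock of distinct books.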

“When that happens,” wrote Kelly, “the library of all libraries will ride in your purse or wallet — if it doesn’t plug directly into your brain with thin white cords.”

But that wasn’t what really excited Kelly. “The chief revolution birthed by scanning books”, he ventured, would be the creation of a universal library in which all books would be merged into “one very, very, very large single text”, “the world’s only book”, “the universal library.”

The One Big Text.

In the One Big Text, every word from every book ever written would be “cross-linked, clustered, cited, extracted, indexed, analyzed, annotated, remixed, reassembled and woven deeper into the culture than ever before”.

“Once text is digital”, Kelly continued, “books seep out of their bindings and weave themselves together. The collective intelligence of a library allows us to see things we can’t see in a single, isolated book.”

Readers, liberated from their single isolated books, would sit in front of their virtual fireplaces following threads in the One Big Text, pulling out snippets to be remixed and reordered and stored, ultimately, on virtual bookshelves.

The universal book would be a great step forward, insisted Kelly, because it would bring to bear not only the books available in bookstores today but all the forgotten books of the past, no matter how esoteric. It would deepen our knowledge, our grasp of history, and cultivate a new sense of authority because the One Big Text would indisputably be the sum total of all we know as a species. “The white spaces of our collective ignorance are highlighted, while the golden peaks of our knowledge are drawn with completeness. This degree of authority is only rarely achieved in scholarship today, but it will become routine.”

And it was going to happen in a blink, wrote Kelly, if the copyright clowns would get out of the way and let the future unfold. He recognized that his vision would face opposition from authors and publishers and other friends of the book. He saw the clash as East Coast (literary) v. West Coast (tech), and mistook it for a dispute over business models. To his mind, authors and publishers were eager to protect their livelihoods, which depended on selling one copyright-protected physical book at a time, and too self-interested to realize that digital technology had rendered their business models obsolete. Silicon Valley, he said, had made copyright a dead letter. Knowledge would be free and plentiful—nothing was going to stop its indiscriminate distribution. Any efforts to do so would be “rejected by consumers and ignored by pirates”. Books, music, video — all of it would be free.

Kelly wasn’t altogether wrong. He’d just taken a narrow view of the book. He was seeing it as a container of information, an individual reference work packed with data, facts, and useful knowledge that needed to be agglomerated in his grander project. That has largely happened with books designed simply to convey information — manuals, guides, dissertations, and actual reference books. You can’t buy a good printed encyclopedia today and most scientific papers are now in databases rather than between covers.

What Kelly missed was that most people see the book as more than a container of information. They read for many reasons besides the accumulation of knowledge. They read for style and story. They read to feel, to connect, to stimulate their imaginations, to escape. They appreciate the isolated book as an immersive journey in the company of a compelling human voice.

June 19, 2023

1963: Mockumentary Predicts The Future of 1988 | Time On Our Hands | Past Predictions | BBC

Filed under: Britain, History, Humour, Technology — Nicholas @ 04:00

BBC Archive
Published 17 Jun 2023

Russian moon landings, week-long traffic jams, a workforce replaced by automation and, above all, too much leisure time!

These are just some of the bold predictions made in Don Haworth’s 1963 BBC “mockumentary” Time on Our Hands – a remarkable film which projects the viewer a quarter of a century into the future.

Imagine how the futuristic inhabitants of 1988 — a society freed from the shackles of endless hard work — might reflect on the way people live and work in 1963. The film’s aim is to look back at the extraordinary, almost unbelievable, events of the intervening 25 years, referred to as “the years of the transformation”.

“This buoyant programme could be repeated a dozen times and still intrigue, delight and disturb me”
Dennis Potter, Daily Herald TV critic, 1963

This footage is compiled from excerpts of Time On Our Hands, a faux-documentary film by Don Haworth.

Originally broadcast 19 March, 1963.

May 31, 2023

Alvin Toffler may have been utterly wrong in Future Shock, but I suspect his huge royalty cheques helped soften the pain

Filed under: Books, Media, Technology, USA — Nicholas @ 03:00

Ted Gioia on the huge bestseller by Alvin Toffler that got its predictions backwards:

Back in 1970, Alvin Toffler predicted the future. It was a disturbing forecast, and everybody paid attention.

People saw his book Future Shock everywhere. I was just a freshman in high school, but even I bought a copy (the purple version). And clearly I wasn’t alone — Clark Drugstore in my hometown had them piled high in the front of the store.

The book sold at least six million copies and maybe a lot more (Toffler’s website claims 15 million). It was reviewed, translated, and discussed endlessly. Future Shock turned Toffler — previously a freelance writer with an English degree from NYU — into a tech guru applauded by a devoted global audience.

Toffler showed up on the couch next to Johnny Carson on The Tonight Show. Other talk show hosts (Dick Cavett, Mike Douglas, etc.) invited him to their couches too. CBS featured Toffler alongside Arthur C. Clarke and Buckminster Fuller as trusted guides to the future. Playboy magazine gave him a thousand-dollar award just for being so smart.

Toffler parlayed this pop culture stardom into a wide range of follow-up projects and businesses, from consulting to professorships. When he died in 2016, at age 87, obituaries praised Alvin Toffler as “the most influential futurist of the 20th century”.

But did he deserve this notoriety and praise?

Future Shock is a 500-page book, but the premise is simple: Things are changing too damn fast.

Toffler opens an early chapter by telling the story of Ricky Gallant, a youngster in Eastern Canada who died of old age at just eleven. He was only a kid, but already suffered from “senility, hardened arteries, baldness, slack, and wrinkled skin. In effect, Ricky was an old man when he died.”

Toffler didn’t actually say that this was going to happen to all of us. But I’m sure more than a few readers of Future Shock ran to the mirror, trying to assess the tech-driven damage in their own faces.

“The future invades our lives”, he claims on page one. Our bodies and minds can’t cope with this. Future shock is a “real sickness”, he insists. “It is the disease of change.”

As if to prove this, Toffler’s publisher released the paperback edition of Future Shock with six different covers — each one a different color. The concept was brilliant. Not only did Future Shock say that things were constantly changing, but every time you saw somebody reading it, the book itself had changed.

Of course, if you really believed Future Shock was a disease, why would you aggravate it with a stunt like this? But nobody asked questions like that. Maybe they were too busy looking in the mirror for “baldness, slack, and wrinkled skin”.

Toffler worried about all kinds of change, but technological change was the main focus of his musings. When the New York Times reviewed his book, it announced in the opening sentence that “Technology is both hero and villain of Future Shock”.

During his brief stint at Fortune magazine, Toffler often wrote about tech, and warned about “information overload”. The implication was that human beings are a kind of data storage medium — and they’re running out of disk space.

March 31, 2023

“We have absolutely no idea how AI will go, it’s radically uncertain”… “Therefore, it’ll be fine” (?)

Filed under: Technology — Nicholas @ 04:00

Scott Alexander on the Safe Uncertainty Fallacy, which is particularly apt in artificial intelligence research these days:

The Safe Uncertainty Fallacy goes:

  1. The situation is completely uncertain. We can’t predict anything about it. We have literally no idea how it could go.
  2. Therefore, it’ll be fine.

You’re not missing anything. It’s not supposed to make sense; that’s why it’s a fallacy.

For years, people used the Safe Uncertainty Fallacy on AI timelines. (Alexander illustrates the point with a screenshotted Twitter exchange, captioned: “Eliezer didn’t realize that at our level, you can just name fallacies.”)

Since 2017, AI has moved faster than most people expected; GPT-4 sort of qualifies as an AGI, the kind of AI most people were saying was decades away. When you have ABSOLUTELY NO IDEA when something will happen, sometimes the answer turns out to be “soon”.

Now Tyler Cowen of Marginal Revolution tries his hand at this argument. We have absolutely no idea how AI will go, it’s radically uncertain:

    No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our apple carts, most of all for the so-called chattering classes.

    The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring. No one at the beginning of the fossil fuel era had much of an idea of the changes it would bring. No one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes (we can do the short term, albeit imperfectly). No one. Not you, not Eliezer, not Sam Altman, and not your next door neighbor.

    How well did people predict the final impacts of the printing press? How well did people predict the final impacts of fire? We even have an expression “playing with fire.” Yet it is, on net, a good thing we proceeded with the deployment of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells “fire” in a crowded theater!?”).

Therefore, it’ll be fine:

    I am a bit distressed each time I read an account of a person “arguing himself” or “arguing herself” into existential risk from AI being a major concern. No one can foresee those futures! Once you keep up the arguing, you also are talking yourself into an illusion of predictability. Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become. It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison. The case for destruction is so much more readily articulable — “boom!” Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns. (Especially when you hear a nine-part argument based upon eight new conceptual categories that were first discussed on LessWrong eleven years ago.) Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine. All the possibilities are distant, I cannot stress that enough. The mere fact that AGI risk can be put on a par with those other also distant possibilities simply should not impress you very much.

    So we should take the plunge. If someone is obsessively arguing about the details of AI technology today, and the arguments on LessWrong from eleven years ago, they won’t see this. Don’t be suckered into taking their bait.

Look. It may well be fine. I said before my chance of existential risk from AI is 33%; that means I think there’s a 66% chance it won’t happen. In most futures, we get through okay, and Tyler gently ribs me for being silly.

Don’t let him. Even if AI is the best thing that ever happens and never does anything wrong and from this point forward never even shows racial bias or hallucinates another citation ever again, I will stick to my position that the Safe Uncertainty Fallacy is a bad argument.

March 16, 2023

Once it was possible to be a fully fledged techno-optimist … but things have changed for the worse

Filed under: Liberty, Technology, USA — Nicholas @ 05:00

Glenn Reynolds on how he “lost his religion” about the bright, shiny techno-future so many of us looked forward to:

Okay, there’s optimism and then there’s totally unrealistic techno-utopianism…

Listening to that song reminded me of how much more overtly optimistic I was about technology and the future at the turn of the millennium. I realized that I’m somewhat less so now. But why? In truth, I think my more negative attitude has to do with people more than with the machines that Embrace the Machine characterizes as “children of our minds”. (I stole that line from Hans Moravec. Er, I mean it’s a “homage”.) But maybe there’s a connection there, between creators and creations.

It was easy to be optimistic in the 90s and at the turn of the millennium. The Soviet Union lost the Cold War, the Berlin Wall fell, and freedom and democracy and prosperity were on the march almost everywhere. Personal technology was booming, and its dark sides were not yet very apparent. (And the darker sides, like social media and smartphones, basically didn’t exist.)

And the tech companies, then, were run by people who looked very different from the people who run them now – even when, as in the case of Bill Gates, they were the same people. It’s easy to forget that Gates was once a rather libertarian figure, who boasted that Microsoft didn’t even have an office in Washington, DC. The Justice Department, via its Antitrust Division, punished him for that, and he has long since lost any libertarian inclinations, to put it mildly.

It’s a different world now. In the 1990s it seemed plausible that the work force of tech companies would rise up in revolt if their products were used for repression. In the 2020s, they rise up in revolt if they aren’t. Commercial tech products spy on you, censor you, and even stop you from doing things they disapprove of. Apple nowadays looks more like Big Brother than like a tool to smash Big Brother as presented in its famous 1984 commercial.

Silicon Valley itself is now a bastion of privilege, full of second- and third-generation tech people, rich Stanford alumni, and VC scions. It’s not a place that strives to open up society, but a place that wants to lock in the hierarchy, with itself on top. They’re pulling up the ladders just as fast as they can.

February 15, 2023

Refuting The End of History and the Last Man

Filed under: Books, Economics, History — Nicholas @ 03:00

Freddie deBoer responds to a recent commentary defending the thesis of Francis Fukuyama’s The End of History and the Last Man:

… Ned Resnikoff critiques a recent podcast by Hobbes and defends Francis Fukuyama’s concept of “the end of history”. In another case of strange bedfellows, the liberal Resnikoff echoes conservative Richard Hanania in his defense of Fukuyama — echoes not merely in the fact that he defends Fukuyama too, but in many of the specific terms and arguments of Hanania’s defense. And both make the same essential mistake, failing to understand the merciless advance of history and how it ceaselessly grinds up humanity’s feeble attempts at macrohistoric understanding. And, yes, to answer Resnikoff’s complaint, I’ve read the book, though it’s been a long time.

The big problem with The End of History and the Last Man is that history is long, and changes to the human condition are so extreme that the terms we come up with to define that condition are inevitably too contextual and limited to survive the passage of time. We’re forever foolishly deciding that our current condition is the way things will always be. For 300,000 years human beings existed as hunter-gatherers, a vastly longer period of time than we’ve had agriculture and civilization. Indeed, if aliens were to take stock of the basic truth of the human condition, they would likely define us as much by that hunter-gatherer past as our technological present; after all, that was our reality for far longer. Either way – those hunter-gatherers would have assumed that their system wasn’t going to change, couldn’t comprehend it changing, didn’t see it as a system at all, and for 3000 centuries, they would have been right. But things changed.

And for thousands of years, people living at the height of human civilization thought that there was no such thing as an economy without slavery; it’s not just that they had a moral defense of slavery, it’s that they literally could not conceive of the daily functioning of society without slavery. But things changed. For most humans for most of modern history, the idea of dynastic rule and hereditary aristocracy was so intrinsic and universal that few could imagine an alternative. But things changed. And for hundreds of years, people living under feudalism could not conceive of an economy that was not fundamentally based on the division between lord and serf, and in fact typically talked about that arrangement as being literally ordained by God. But things changed. For most of human history, almost no one questioned the inherent and unalterable second-class status of women. Civilization is maybe 12,000 years old; while there’s proto-feminist ideas to be found throughout history, the first wave of organized feminism is generally defined as only a couple hundred years old. It took so long because most saw the subordination of women as a reflection of inherent biological reality. But women lead countries now. You see, things change.

And what Fukuyama and Resnikoff and Hanania etc are telling you is that they’re so wise that they know that “but then things changed” can never happen again. Not at the level of the abstract social system. They have pierced the veil and see a real permanence where humans of the past only ever saw a false one. I find this … unlikely. Resnikoff writes “Maybe you think post-liberalism is coming; it just has yet to be born. I guess that’s possible.” Possible? The entire sweep of human experience tells us that change isn’t just possible, it’s inevitable; not just change at the level of details, but changes to the basic fabric of the system.

The fact of the matter is that, at some point in the future, human life will be so different from what it’s like now, terms like liberal democracy will have no meaning. In 200 years, human beings might be fitted with cybernetic implants in utero by robots and jacked into a virtual reality that we live in permanently, while artificial intelligence takes care of managing the material world. In that virtual reality we experience only a variety of pleasures that are produced through direct stimulation of the nervous system. There is no interaction with other human beings as traditionally conceived. What sense would the term “liberal democracy” even make under those conditions? There are scientifically-plausible futures that completely undermine our basic sense of what it means to operate as human beings. Is one of those worlds going to emerge? I don’t know! But then, Fukuyama doesn’t know either, and yet one of us is making claims of immense certainty about the future of humanity. And for the record, after the future that we can’t imagine comes an even more distant future we can’t conceive of.

People tend to say, but the future you describe is so fanciful, so far off. To which I say, first, human technological change over the last two hundred years dwarfs that of the previous two thousand, so maybe it’s not so far off, and second, this is what you invite when you discuss the teleological endpoint of human progress! You started the conversation! If you define your project as concerning the final evolution of human social systems, you necessarily include the far future and its immense possibilities. Resnikoff says, “the label ‘post-liberalism’ is something of an intellectual IOU” and offers similar complaints that no one’s yet defined what a post-liberal order would look like. But from the standpoint of history, this is a strange criticism. An 11th-century Andalusian shepherd had no conception of liberal democracy, and yet here we are in the 21st century, talking about liberal democracy as “the object of history”. How could his limited understanding of the future constrain the enormous breadth of human possibility? How could ours? To buy “the end of history”, you have to believe that we are now at a place where we can accurately predict the future where millennia of human thinkers could not. And it’s hard to see that as anything other than a kind of chauvinism, arrogance.

Fukuyama and “the end of history” are contingent products of a moment, blips in history, just like me. That’s all any of us gets to be, blips. The challenge is to have humility enough to recognize ourselves as blips. The alternative is acts of historical chauvinism like The End of History.

January 4, 2023

Sarah Hoyt on some of the dystopian futures we’ve avoided (so far)

Filed under: Economics, Government, USA — Nicholas @ 03:00

Sarah Hoyt outlines a few of the grim future scenarios that once seemed inevitable to the people who earned a living writing about possible futures:

1 – World government.
To be fair, it seemed an absolutely sane and inescapable prediction for people who had seen the centralized nation states of the twentieth century consolidate. With faster communication would come total union, right?

I note Heinlein stopped believing this after his world tour. In fact, in Friday he has a fractured USA.

That second vision is more likely. There are too many cultures in the world and too many competing interests to have a world government. Even on the administrative side, a world government might be absolutely impossible, unless it’s a nominal government and the sub-governments really do everything.

In which case, you know what? It’s no different than what we have, except we call any war a civil war.

The only people this idea still makes sense to are people who think they can change reality by changing the words.

Of course, just because there isn’t a formal world government doesn’t stop national governments and legacy media organizations from pretending that there is some supranational body whose directives they must always follow … at least when they want to do something the voters don’t want them to do. Lockdowns, anyone? Vaccine mandates? Social media censorship at the micro level? Oh, we have to do them because the WHO/UN/WEF/etc. insist.

2 – Overpopulation.
Yeah, I know what the population “counts” are, but we don’t have overpopulation. We don’t have any of the signs of overpopulation, and it’s becoming plainly obvious, country by country, locality by locality, that there’s no overpopulation.

Malthus was an unpleasant fatalist. He was also wrong. Humanity doesn’t keep reproducing like mindless rabbits.

To be fair, this makes perfect sense because we’re a scavenger species. For scavenger species the population curve is the bell curve, not an exponential climb.

It’s funny how third world governments can “accurately” report booming populations — at least partly because foreign aid from the west is often directly tied to those reports — yet many of them don’t even know how many civil servants they employ. And western governments and aid agencies just pretend to believe them.

3 – Total depletion of resources leading to the “rusty future” in a lot of eighties science fiction.
A lot of resources are in fact depleted, but we have found others. This is something that the “Greens” seem unable to grasp. Humanity is continuously depleting resources, discovering new resources, and finding new ways to use them. For instance, given our population, I don’t think we have enough flint to knap for knives for all of us. It’s an obvious crisis.

In the same way, do you think it’s even possible for all of us to have a horse? Our cities would be hip-deep in horse poo.

But we are the ape that adapts. Things change. And the future will be as shiny as we want it. Unless fashion calls for dull, of course.

If you’ve been educated in a zero-sum economic picture, then it’s difficult or impossible for you to recognize that when resources begin to run short and prices rise, individuals and companies look for more efficient ways to use the now more expensive resource, or consider substitutions. This is why economies that try to suppress normal market signals, like rising prices due to diminished supplies, end up far worse off … humans in aggregate are adaptable and will try to find alternatives when they can.

4 – The world isn’t a communist state, or filled with communist states.

There are some, yes, but those that remain are in obvious trouble, and only the propagandized and the ignorant believe it is a way to live, or a way that brings about paradise. In fact, most of today’s communists merely want to reign in hell.

They know they’d unleash hell, they just think they’d be king.

As bad as it is that people are still fighting for this, it’s miles ahead of the status quo up to the eighties, when people actually believed planned centralized states were better.

We still have a fight ahead of us, and we might still fail, but there will never be a whole-world communism, and those of us devoted to freedom will eventually win. It will just take probably more than my lifetime. At least on a world scale.

Among the governments most likely to resort to market denial (and autarky) are socialist and communist states. Central planning is one of the fastest methods to starving your population aside from total war. Central planners are always confident that they “know better” than filthy capitalists, and with proper “scientific” planning they can avoid all the “waste” that market societies produce. For a detailed look, consider the plight of poor, imaginary Wyatt, a factory manager under GOSPLAN in the old Soviet Union. If anything, Sev underestimates the economic disaster that Soviet central planning perpetrated.

5 – We don’t have some sort of central authority that controls all of something: genetics; who is arrested; etc.
A lot of places have crazy authorities, but not the whole world. We’re not enslaved by the Tech Lords (and what a pitiful lot those turned out to be) and the agencies trying to subjugate us are not all-powerful, more along the lines of a bunch of venal chuckleheads. Annoying, with no morals, and insane, but not all-powerful. It could be worse.

It certainly could be worse, and useful idiots in western governments and legacy media are doing what they can to bring everything possible under tighter control, but as I’ve pointed out repeatedly, the more a government tries to do, the worse it does everything.

December 14, 2022

Our Nuclear Alternate Future?

Filed under: Business, History, Technology, USA — Nicholas @ 02:00

Big Car
Published 9 Apr 2021

In the 1950s, as the Cold War was heating up and children were being urged to “duck and cover” from nuclear weapons, car companies seriously proposed powering their cars with lead-lined nuclear reactors. It seems like madness today, but while the world saw the threat of nuclear war, it also saw the seemingly limitless potential of nuclear power. Just how were these vehicles supposed to work, and how close did they come to reality?

December 9, 2022

QotD: Computer models of “the future”

Filed under: Economics, Media, Quotations, Technology — Nicholas @ 01:00

The problem with all “models of the world”, as the video puts it, is that they ignore two vitally important factors. First, models can only go so deep in terms of the scale of analysis to attempt. You can always add layers — and it is never clear when a layer that is completely unseen at one scale becomes vitally important at another. Predicting higher-order effects from lower scales is often impossible, and it is rarely clear when one can be discarded for another.

Second, the video ignores the fact that human behavior changes in response to circumstance, sometimes in radically unpredictable ways. I might predict that we will hit peak oil (or be extremely wealthy) if I extrapolate various trends. However, as oil becomes scarce, people discover new ways to obtain it or do without it. As people become wealthier, they become less interested in the pursuit of wealth and therefore become poorer. Both of those scenarios, however, assume that humanity will adopt a moral and optimistic stance. If humans become decadent and pessimistic, they might just start wars and end up feeding off the scraps.

So, interestingly, what the future looks like might be as much a function of the music we listen to, the books we read, and the movies we watch when we are young as of the resources that are available.

Note that the solution they propose to our problems is internationalization. The problem with internationalizing everything is that people have no one to appeal to. We are governed by a number of international laws, but when was the last time you voted in an international election? How do you effect change when international policies are not working out correctly? Who do you appeal to?

The importance of nationalism is that there are well-known and generally-accepted procedures for addressing grievances with the ruling class. These international clubs are generally impervious to the appeals (and common sense) of ordinary people and tend to promote virtue-signaling among the wealthy class over actual virtue or solutions to problems.

Jonathan Bartlett, quoted in “1973 Computer Program: The World Will End In 2040”, Mind Matters News, 2019-05-31.

September 29, 2022

Nostradamus

Filed under: Books, France, History — Nicholas @ 03:00

In the latest post at Astral Codex Ten, Scott Alexander considers prophecy “From Nostradamus to Fukuyama”. Here’s the section on Nostradamus (because I don’t have a lot of time for Fukuyama, you’ll have to read the rest at ACX):

“House of Nostradamus in Salon, France. Now a museum.” Photo by photographymontreal, marked with Public Domain Mark 1.0.

Nostradamus was a 16th century French physician who claimed to be able to see the future.

(never trust doctors who dabble in futurology, that’s my advice)

His method was: read books of other people’s prophecies and calculate some astrological charts, until he felt like he had a pretty good idea what would happen in the future. Then write it down in the form of obscure allusions and multilingual semi-gibberish, to placate religious authorities (who apparently hated prophecies, but loved prophecies phrased as obscure allusions and multilingual semi-gibberish).

In 1559, he got his big break. During a jousting match, a count killed King Henry II of France with a lance through the visor of his helmet. Years earlier, Nostradamus had written:

    The young lion will overcome the older one,
    On the field of combat in a single battle;
    He will pierce his eyes through a golden cage,
    Two wounds made one, then he dies a cruel death

The nobleman was a bit younger than the king, supposedly they both had lions on their shield (false), maybe King Henry was wearing a golden helmet (I can’t find evidence for this, but as a consolation prize please accept this picture of his amazing parade armor), and his slow agonizing death over ten days from his wounds was pretty cruel. Seems like a match, sort of. Anyway, for the next five hundred years lots of people were really into Nostradamus and spent goodness knows how many brain cycles trying to interpret his incomprehensible quatrains.

The basic Nostradamic method was:

    1. Write 942 vague and incomprehensible quatrains, out of order and without any dates.

    2. Whenever something happens, say “that sounds a lot like quatrain #143!” or “quatrain #558 predicted that”.

    3. Prophet

For example, prophecy 106:

    Near the gates and within two cities
    There will be two scourges the like of which was never seen,
    Famine within plague, people put out by steel,
    Crying to the great immortal God for relief

This is an okay match for the atomic bombs, in the sense that there were two cities where something really bad happened. But read on to prophecy 107:

    Amongst several transported to the isles,
    One to be born with two teeth in his mouth
    They will die of famine the trees stripped,
    For them a new King issues a new edict.

… and it starts to sound like he’s just kind of saying random stuff and some of it’s sticking by sheer luck.

A few prophecies sound more impressive than this, e.g.:

    The lost thing is discovered, hidden for many centuries.
    Pasteur will be celebrated almost as a god-like figure.
    This is when the moon completes her great cycle,
    but by other rumours he shall be dishonoured

This seems to name Pasteur, who was indeed a celebrated discoverer of things. And Nostradamus scholars note that a historian accused Pasteur of plagiarism in 1995, which is a kind of dishonorable rumor. But the work here is being done by the translator: Pasteur is just French for “pastor”, and an honest translation would have just said “the pastor will be celebrated …”, which is in tune with all his other vague allusions to things happening.

    The blood of the just will be demanded of London
    Burnt by fire in the year ’66
    The ancient Lady will fall from her high place
    And many of the same sect will be killed.

Seems like a match for the London fire of 1666. But again checking the original French and the commentators, the second line is more properly “burnt by fire in 23 the 6”, which a fanciful translator rounded off to 20 * 3 + 6 = 66 and then assumed was a year. The experts say that this is really a coded reference to 23 Protestants being burned, in groups of six, during Nostradamus’ lifetime (many of his quatrains are references to past or present events, for some reason). This sounds more compatible with the “many of the same sect will be killed” ending.

I had a weird experience writing the end of this first part of the post. When I was a kid, reading through my parents’ old books, I came across a weird almanac from the 70s that had a section on Nostradamus. It listed some of his most famous prophecies, including the ones above, but also (reconstructing from memory and probably getting some things wrong, sorry):

    The way of life according to Thomas More
    Will give way to another more sweet and seductive
    In the land of cold winds that first gave it birth
    Without strife, without a war it will fall

… and the 70s almanac interpreted this as meaning Soviet communism would fall peacefully. Reading this in 1995 or whenever it was I read it, a few years after Soviet communism did fall peacefully, I was really impressed: this is the only example I know where someone used a Nostradamus quatrain to predict something before it happened.

But I searched for the exact text so I could include the correct version in this essay, and I didn’t find it — this is none of Nostradamus’ 942 prophecies! The almanac authors must have made it up, or unwittingly copied it from someone else who did.

But I remember this very clearly — the almanac was from 1970-something. So how did the faker know Russian communism would collapse?

The moral of the story is: just because Nostradamus wasn’t a real prophet, doesn’t mean nobody else is.

June 18, 2021

Feeding “the masses”

Sarah Hoyt looked at the perennial question “Dude, where’s my (flying) car?” and the question even more relevant to most women: “Where’s my automated house?”:

The cry of my generation, for years now, has been: “Dude, where’s my flying car?”

My friend Jeff Greason is fond of explaining that as an engineering problem, a flying car is no issue at all. It is as a legal problem that flying cars get interesting, because of course the FAA won’t let such a thing exist without clutching it madly and distorting it with its hands made of bureaucracy and crazy. (Okay, he doesn’t put it that way, but I do.)

[…]

But in all this, I have to say: Dude, where’s my automated house?

It was fifteen years ago or so, while out at lunch with an older writer friend, that she said “We always thought that when it came to this time, there would be communal lunch rooms and cafeterias that would do all the cooking so women would be free to work.”

I didn’t say anything. I knew our politics weren’t congruent, but really the only societies that managed that “Cafeterias, where everyone eats” were the most totalitarian ones, and that food was nothing you wanted to eat. If there was food. Because the only way to feed everyone industrial style is to take away their right to choose how to feed themselves and what to eat. And that, over an entire nation, would be a nightmare. Consider the eighties, when the funny critters decided that we should all live on a Russian peasant diet of carbs, carbs and more carbs. Potatoes were healthy and good for you, and you should live on them.

It will surprise you to know – not – that just as with the mask idiocy, no study of any kind supports feeding the population on mostly vegetables, much less starches. What those whole “recommendations” were based on was “Diet for a Small Planet” and the bureaucrats’ invincible ignorance, stupidity and assumption of their own intelligence and superiority. I.e., most of what they knew — that population was exploding, that people would soon be starving, that growing vegetables is less taxing on the environment and produces more calories than growing animals to eat — just wasn’t so. But they “knew” and by gum were going to force everyone to follow “the plan”. (BTW one of the ways you know that Q-Anon is in fact a black ops operation from the other side; no one on the right in this country trusts a plan, much less one that can’t be shared or discussed.) Then the complete idiots were shocked, surprised, nay, astonished when their proposed diet led to an “epidemic of obesity” and diabetes. Even though anyone who suffered through the peasant diet in communist countries could have told them that’s where it would lead, and to both obesity and malnutrition at once.

So, yeah, communal cafeterias are not a solution to anything.

My concern about the “automated house of the future” is nicely prefigured by the “wonders” of Big Tech surveillance devices we’ve voluntarily imported into our homes for the convenience, while handing over untold volumes of free data for the tech firms to market. Plus, the mindset that “you must be online at all times” that many/most of these devices require means you’re out of luck if your internet connection is a bit wobbly (looking at you, Rogers).

June 7, 2021

Dude, where’s my (flying) car?

Filed under: Books, Economics, Government, History, Technology — Nicholas @ 05:00

The latest of the reader-contributed book reviews at Scott Alexander’s Astral Codex Ten looks at Where is my Flying Car? by J. Storrs Hall:

What went wrong in the 1970s? Since then, growth and productivity have slowed, average wages are stagnant, and visible progress in the world of “atoms” has practically stopped — the Great Stagnation. About the only thing that has gone well is computers. How is it that we went from the typewriter to the smartphone, but we’re still using practically the same cars and airplanes?

Where is my Flying Car? by J. Storrs Hall, is an attempt to answer that question. His answer is: the Great Stagnation was caused by energy usage flatlining, which was caused by our failure to switch to nuclear energy, which was caused by excessive regulation, which was caused by “green fundamentalism”.

Three hundred years ago, we burned wood for energy. Then there was coal and the steam engine, which gave us the Industrial Revolution. Then there was oil and gas, giving us cars and airplanes. Then there should have been nuclear fission and nanotech, letting you fit a lifetime’s worth of energy in your pocket. Instead, we still drive much the same cars and airplanes, and climate change threatens to boil the Earth.

I initially thought the title was a metaphor — the “flying car” as a stand-in for all the missing technological progress in the world of “atoms” — but in fact much of the book is devoted to the particular question of flying cars. So look at the issue through the lens of transportation:

    Hans Rosling was a world health economist and an indefatigable campaigner for a deeper understanding of the world’s state of development. He is famous for his TED talks and the Gapminder web site. He classifies the wealthiness of the world’s population into four levels:

    1. Barefoot. Unable even to afford shoes, they must walk everywhere they go. Income $1 per day. One billion people are at Level 1.

    2. Bicycle (and shoes). The $4 per day they make doesn’t sound like much to you and me but it is a huge step up from Level 1. There are three billion people at Level 2.

    3. The two billion people at Level 3 make $16 a day; a motorbike is within their reach.

    4. At $64 per day, the one billion people at Level 4 own a car.

    The miracle of the Industrial Revolution is now easily stated: In 1800, 85% of the world’s population was at Level 1. Today, only 9% is. Over the past half century, the bulk of humanity moved up out of Level 1 to erase the rich-poor gap and make the world wealth distribution roughly bell-shaped. The average American moved from Level 2 in 1800, to level 3 in 1900, to Level 4 in 2000. We can state the Great Stagnation story nearly as simply: There is no level 5.
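Rosling’s levels follow a simple geometric pattern: each level is a fourfold jump in daily income. A minimal sketch of that structure (an illustration, not code from the book):

    # Rosling's levels as a geometric series: level n ~ 4**(n-1) dollars/day.
    def income_for_level(level: int) -> int:
        return 4 ** (level - 1)  # 1, 4, 16, 64, ...

    def level_for_income(dollars_per_day: float) -> int:
        """Invert the pattern by repeated division (avoids float-log edge cases)."""
        level = 1
        while dollars_per_day >= 4:
            dollars_per_day /= 4
            level += 1
        return level

    for level in range(1, 6):
        print(level, income_for_level(level))
    # A hypothetical Level 5 continues the pattern at $256 per day;
    # "there is no level 5" is the Great Stagnation claim in one line.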

Level 5, in transportation, is a flying car. Flying cars are to airplanes as cars are to trains. Airplanes are fast, but getting to the airport, waiting for your flight, and getting to your final destination is a big hassle. Imagine if you had to bike to a train station to get anywhere (not such a leap of imagination for me in New York City! But it wouldn’t work in the suburbs). What if you had one vehicle that could drive on the road and fly in the sky at hundreds of miles an hour?

Before reading this book, I thought flying cars were just technologically infeasible, because flying takes too much energy. But Hall says we can build them and have been able to since the 1930s. They got interrupted by the Great Depression (people were too poor to buy private airplanes), then by WWII (airplanes were directed towards the war effort, not the market), then regulation mostly killed the private aviation industry. But technical feasibility was never the problem.

Hall spends a huge fraction of the book on pretty detailed technical discussion of flying cars. For example: the key technical issue is takeoff and landing, and there is a tough tradeoff between convenient takeoff/landing and airspeed (and cost, and ease of operation). It’s interesting reading. But let’s return to the larger issue of nuclear power.

