Quotulatiousness

April 6, 2024

Three AI catastrophe scenarios

Filed under: Technology — Tags: , , , — Nicholas @ 03:00

David Friedman considers the threat of an artificial intelligence catastrophe and the possible solutions for humanity:

    Earlier I quoted Kurzweil’s estimate of about thirty years to human level A.I. Suppose he is correct. Further suppose that Moore’s law continues to hold, that computers continue to get twice as powerful every year or two. In forty years, that makes them something like a hundred times as smart as we are. We are now chimpanzees, perhaps gerbils, and had better hope that our new masters like pets. (Future Imperfect Chapter XIX: Dangerous Company)

As that quote from a book published in 2008 demonstrates, I have been concerned with the possible downside of artificial intelligence for quite a while. The creation of large language models producing writing and art that appears to be the work of a human level intelligence got many other people interested. The issue of possible AI catastrophes has now progressed from something that science fiction writers, futurologists, and a few other oddballs worried about to a putative existential threat.

Large Language models work by mining a large database of what humans have written, deducing what they should say by what people have said. The result looks as if a human wrote it but fits the takeoff model, in which an AI a little smarter than a human uses its intelligence to make one a little smarter still, repeated to superhuman, poorly. However powerful the hardware that an LLM is running on it has no superhuman conversation to mine, so better hardware should make it faster but not smarter. And although it can mine a massive body of data on what humans say it in order to figure out what it should say, it has no comparable body of data for what humans do when they want to take over the world.

If that is right, the danger of superintelligent AIs is a plausible conjecture for the indefinite future but not, as some now believe, a near certainty in the lifetime of most now alive.

[…]

If AI is a serious, indeed existential, risk, what can be done about it?

I see three approaches:

I. Keep superhuman level AI from being developed.

That might be possible if we had a world government committed to the project but (fortunately) we don’t. Progress in AI does not require enormous resources so there are many actors, firms and governments, that can attempt it. A test of an atomic weapon is hard to hide but a test of an improved AI isn’t. Better AI is likely to be very useful. A smarter AI in private hands might predict stock market movements a little better than a very skilled human, making a lot of money. A smarter AI in military hands could be used to control a tank or a drone, be a soldier that, once trained, could be duplicated many times. That gives many actors a reason to attempt to produce it.

If the issue was building or not building a superhuman AI perhaps everyone who could do it could be persuaded that the project is too dangerous, although experience with the similar issue of Gain of Function research is not encouraging. But at each step the issue is likely to present itself as building or not building an AI a little smarter than the last one, the one you already have. Intelligence, of a computer program or a human, is a continuous variable; there is no obvious line to avoid crossing.

    When considering the down side of technologies–Murder Incorporated in a world of strong privacy or some future James Bond villain using nanotechnology to convert the entire world to gray goo – your reaction may be “Stop the train, I want to get off.” In most cases, that is not an option. This particular train is not equipped with brakes. (Future Imperfect, Chapter II)

II. Tame it, make sure that the superhuman AI is on our side.

Some humans, indeed most humans, have moral beliefs that affect their actions, are reluctant to kill or steal from a member of their ingroup. It is not absurd to belief that we could design a human level artificial intelligence with moral constraints and that it could then design a superhuman AI with similar constraints. Human moral beliefs apply to small children, for some even to some animals, so it is not absurd to believe that a superhuman could view humans as part of its ingroup and be reluctant to achieve its objectives in ways that injured them.

Even if we can produce a moral AI there remains the problem of making sure that all AI’s are moral, that there are no psychopaths among them, not even ones who care about their peers but not us, the attitude of most humans to most animals. The best we can do may be to have the friendly AIs defending us make harming us too costly to the unfriendly ones to be worth doing.

III. Keep up with AI by making humans smarter too.

The solution proposed by Raymond Kurzweil is for us to become computers too, at least in part. The technological developments leading to advanced A.I. are likely to be associated with much greater understanding of how our own brains work. That might make it possible to construct much better brain to machine interfaces, move a substantial part of our thinking to silicon. Consider 89352 times 40327 and the answer is obviously 3603298104. Multiplying five figure numbers is not all that useful a skill but if we understand enough about thinking to build computers that think as well as we do, whether by design, evolution, or reverse engineering ourselves, we should understand enough to offload more useful parts of our onboard information processing to external hardware.

Now we can take advantage of Moore’s law too.

A modest version is already happening. I do not have to remember my appointments — my phone can do it for me. I do not have to keep mental track of what I eat, there is an app which will be happy to tell me how many calories I have consumed, how much fat, protein and carbohydrates, and how it compares with what it thinks I should be doing. If I want to keep track of how many steps I have taken this hour3 my smart watch will do it for me.

The next step is a direct mind to machine connection, currently being pioneered by Elon Musk’s Neuralink. The extreme version merges into uploading. Over time, more and more of your thinking is done in silicon, less and less in carbon. Eventually your brain, perhaps your body as well, come to play a minor role in your life, vestigial organs kept around mainly out of sentiment.

As our AI becomes superhuman, so do we.

March 13, 2024

“They won’t be in Gaza, but they’ll be just offshore — a few hundred yards from Gaza”

Filed under: Middle East, Military, USA — Tags: , , , , , — Nicholas @ 04:00

Apparently a bunch of former military types are getting their collective panties in a bunch just because Biden is sending part of a highly specialized US Army support brigade to install a temporary offshore unloading facility to get “humanitarian aid” in to Hamas fighters the civilian population of Gaza. All the political advisors to the President want to assure everyone that there will be no “boots on the ground”, so there’s no real risk

The Pentagon has said something that should make us all sit up and pay attention.

Quick background first:

Elements of the US Army’s 7th Transportation Brigade are on the way to Gaza. […] They won’t be in Gaza, but they’ll be just offshore — a few hundred yards from Gaza. Now read this, and take the time to read it closely. I’ll split it into two screencaps to get it all in, which will be awkward to look at, but you can just click on the link to see it all whole (and subscribe to keep up with “Cynical Publius” as all of this develops):

The extremely important part of all of that is that transportation troops aren’t combat arms troops; they’re armed for some degree of self-protection, but “they lack the organic ability to defend themselves against high-intensity attacks by enemies.” In a hostile environment, they need to be screened: they need to be protected by combat-focused forces, both on-shore and off. They need infantry in front of them, warships behind them, and aircraft overhead.

Now, via this account, look at this transcript of an … interesting Pentagon press briefing on March 8, in which a major general talks at length about the security plan for the 7th Transportation Brigade when it gets to Gaza. Sample exchange:

    Q: (Inaudible) partner nations on the ground, but you’re talking about operational security, you can’t discuss what will be (inaudible).

    GEN. RYDER: Right. I mean, we will — these forces will have the capability to provide some organic security. I’m just not going to get into the specifics of that.

But they don’t — or they do, but the capability of transportation troops, from a combat service support branch, is extremely limited. Again, these are not combat arms troops, and aren’t armed or trained as combat arms troops. Talking about their organic security capability is an interesting choice.

October 3, 2023

“Just play safe” is difficult when the definition of “safe” is uncertain

Filed under: Food, Health — Tags: , , , — Nicholas @ 04:00

David Friedman on the difficulty of “playing safe”:

It’s a no brainer. Just play safe

It is a common argument in many different contexts. In its strongest form, the claim is that the choice being argued for is unambiguously right, eliminates the possibility of a bad outcome at no cost. More plausibly, the claim is that one can trade the risk of something very bad for a certainty of something only a little bad. By agreeing to pay the insurance company a hundred dollars a year now you can make sure that if your house burns down you will have the money to replace it.

Doing that is sometimes is possible but, in an uncertain world, often not; you do not, cannot, know all the consequences of what you are doing. You may be exchanging the known risk of one bad outcome for the unknown risk of another.

Some examples:

Erythritol

Erythritol was the best of the sugar alcohols, substitutes tolerably well for sugar in cooking, has almost zero calories or glycemic load. For anyone worried about diabetes or obesity, using it instead of sugar is an obvious win. Diabetes and obesity are dangerous, sometimes life threatening.

Just play safe.

I did. Until an research came out offering evidence that it was not the best sugar alcohol but the worst:

    People with the highest erythritol levels (top 25%) were about twice as likely to have cardiovascular events over three years of follow-up as those with the lowest (bottom 25%). (Erythritol and cardiovascular events, NIH)

A single article might turn out to be wrong, of course; to be confident that erythritol is dangerous requires more research. But a single article was enough to tell me that using erythritol was not playing safe. I threw out the erythritol I had, then discovered that all the brands of “keto ice cream” — I was on a low glycemic diet and foods low in carbohydrates are also low in glycemic load — used erythritol as their sugar substitute.

Frozen bananas, put through a food processor or super blender along with a couple of ice cubes and some milk, cream, or yogurt, make a pretty good ice cream substitute.1 Or eat ice cream and keep down your weight or glycemic load by eating less of something else.

It’s safer.

Lethal Caution: The Butter/Margarine Story

For quite a long time the standard nutritional advice was to replace butter with margarine, eliminating the saturated fat that caused high cholesterol and hence heart attacks. It turned out to be very bad advice. Saturated fats may be bad for you — the jury is still out on that, with one recent survey of the evidence concluding that they have no effect on overall mortality — but transfats are much worse. The margarine we were told to switch to was largely transfats.2

“Consumption of trans unsaturated fatty acids, however, was associated with a 34% increase in all cause mortality”3

If that figure is correct, the nutritional advice we were given for decades killed several million people.


    1. Bananas get sweeter as they get riper so for either a keto or low glycemic diet, freeze them before they get too ripe.

    2. Some more recent margarines contain neither saturated fats nor transfats.

    3. “Intake of saturated and trans unsaturated fatty acids and risk of all cause mortality, cardiovascular disease, and type 2 diabetes: systematic review and meta-analysis of observational studies”, BMJ 2015; 351 doi: https://doi.org/10.1136/bmj.h3978 (Published 12 August 2015)

September 6, 2023

“[T]he preemptive hype about [Bottoms] has been fundamentally false, fundamentally dishonest about what constitutes artistic risk and personal risk in 2023″

Filed under: Media, Politics, USA — Tags: , , , , , — Nicholas @ 04:00

Freddie deBoer — whose new book just got published — considers the way a new movie is being marketed, as if anything to do with LGBT issues is somehow still “daring” or “risky” or “challenging” to American audiences in the 2020s:

Consider this New York magazine cover story on the new film Bottoms, about a couple of lesbian teenagers (played by 28-year-olds) who start a high school fight club in order to try and get laid. I’m interested in the movie; it looks funny and I’ll watch it with an open mind. Movies that are both within and critiques of the high school movie genre tend to be favorites of mine. But the preemptive hype about it — which of course the creators can’t directly control — has been fundamentally false, fundamentally dishonest about what constitutes artistic risk and personal risk in 2023. The underlying premise of the advance discussion has been that making a high school movie about a lesbian fight club, today, is inherently subversive and very risky. And the thing is … that’s not true. At all. In fact, when I first read the premise of Bottoms I marveled at how perfectly it flatters the interests and worldview of the kind of people who write about movies professionally. As New York‘s Rachel Handler says,

    [Bottoms has] had the lesbian Letterboxd crowd, which treats every trailer and teaser release like Gay Christmas, hot and bothered for months. After attending its hit SXSW premiere, comedian Jaboukie Young-White tweeted, “There will be a full reset when this drops.”

And yet to read reviews and thinkpieces and social media, you’d think that Bottoms was emerging into a culture industry where the Moral Majority runs the show. One of the totally bizarre things about contemporary pop culture coverage is that the “lesbian Letterboxd crowd” and subcultures like them — proud and open and loud champions of “diversity” in the HR sense — are prevalent, influential, and powerful, and yet we are constantly to pretend that they don’t exist. To think of Bottoms as inherently subversive, you have to pretend that the cohort that Handler refers to here has no voice, even as its voice is loud enough to influence a New York magazine cover story. This basic dynamic really hasn’t changed in the culture business in a decade, and that’s because the people who make up the profession prefer to think of their artistic and political tastes as permanently marginal even as they write our collective culture.

Essentially the entire world of for-pay movie criticism and news is made up of the kind of people who will stand up and applaud for a movie with that premise regardless of how good the actual movie is. And I suspect that Rachel Handler, the author of that piece, and its editors at New York, and the PR people for the film, and the women who made it, and most of the piece’s readers know that it isn’t brave to release that movie, in this culture, now. And as far as the creators go, that’s all fine; their job isn’t to be brave, it’s to make a good movie! They aren’t obligated to fulfill the expectation that movies and shows about LGBTQ characters are permanently subversive. But the inability of our culture industry to drop that narrative demonstrates the bizarre progressive resistance to recognizing that things change and that liberals in fact control a huge amount of cultural territory.

And here’s the thing: almost everybody in this industry, in media, would understand that narrative to be false, were I to put the case to them this way. This obviously isn’t remotely a big deal — in fact I’ve chosen this piece and topic precisely because it’s not a big deal — and I’m sure most people haven’t thought about it at all. (Why would they?) Still, if I could peel people in professional media off from the pack and lay this case out to them personally, I’m quite certain many of them would agree that this kind of movie is actually guaranteed a great deal of media enthusiasm because of its “representation”, and thus is in fact a very safe movie to release in today’s Hollywood — but they would admit it privately. Because “Anything involving LQBTQ characters or themes is still something that’s inherently risky and daring in the world of entertainment and media, in the year of our lord 2023” is both transparently horseshit and yet socially mandated, in industries in which most people are just trying to hold on and don’t need the hassle.

July 18, 2023

Seaplanes? How 1940s. No, we’re seeking to “leverage emerging technologies” instead

Filed under: China, India, Japan, Military, Pacific, Technology, USA — Tags: , , , — Nicholas @ 04:00

CDR Salamander wonders about a modern need for military sea rescue capability that the US Navy filled with flying boats and seaplanes during the Second World War, then supplemented with helicopters during Korea and Vietnam. For ocean search-and-rescue in a combat environment in the present or near future, what are the USN’s plans?

I will be the first person to admit that good, well-meaning, and informed people can disagree with seaplanes in general or the US-2 specifically, but they have to engage the conversation. Directly argue the requirement or offer realistic alternatives.

This does neither. If anything is demonstrates the narrowness of thought and fragility of substance used in opposition.

What an patronizingly toxic stew that answer is. I highly doubt Lung typed out that answer himself, so my commentary below is not directed at him personally, but … and it is what comes after the “but” that counts — but at the three-digit J or N code that extruded that from the random acquisitions professional statement subroutine from ChatGPT.

Let’s give that answer a full Fisking;

  • “The Indo-Pacific operational environment has evolved significantly since World War II”:
  • Let me check my WWII Pacific chart, my Vietnam War era globe, and GoogleEarth … and … no. The geography has not changed. The distances have not changed. The requirement of thousands of years to take and hold territory or eliminate your enemy from access to it has not changed. All the little islands, regardless of what Al Gore and John Kerry say, are still there. As we are seeing in the Russo-Ukrainian War, a million PPT slides saying so does not change the fundamentals of war.

    Sentence one is invalid.

  • “The employment of seaplanes today would not meet the operational demands and current threat scenario.”

    Is there an operational demand for us to rescue downed airmen and to be able to reach remote islands without airfields? Yes. Does your “current threat scenario” run from Northern Japan through to Darwin, Australia? Yes.

    Sentence two is invalid.

  • “However, we support the continuous development of new and innovative solutions that may provide solutions to logistical challenges.”
  • So, you define “new” as something that only exists on PPT slides? By “continuous development” you mean never matures as a design that goes into production. By “innovative” you mean high on technology risk. Undefined program risk. Unknown design risk. No known production line or remote estimate to IOC, much less FOC when we know that the next decade is the time of most danger of the next Great Pacific War.

    Sentence three is irresponsible and professionally embarrassing given the history of transformational wunderwaffe this century.

  • “As an example, DARPA’s Liberty Lifter X-Plane seeks to leverage emerging technologies that may demonstrate seaborn strategic and tactical lift capabilities.”
  • Well, goodness, we will have to micro-Fisk this gaslighting horror show of a sentence. To start with, they are talking about either this from General Atomics;

    … that could only be used on a very few select beaches under ideal weather in a completely permissive environment and could only be used for one specific mission and nowhere any possible hostile aircraft or ground forces. Also looks like we’d need a whole new engine and a small town’s worth of engine mechanics to maintain the maintenance schedule on those engines.

    Then we have this offspring of an accidental mating of the Spruce Goose with the Caspian Sea Monster idea from Aurora Flight Sciences;

    I give the odds of either one of those taking to the air prior to 2035, if ever, on par with a return of the submarine LST of Cold War fame (deck gun not included).

    Let’s get back to the wording of that dog’s breakfast of a final sentence. Feel slimy reading it? You should;

  • “seeks to leverage” — that is just a way of saying, “hope in magic beans.” Gobbledegook.
  • “emerging technologies” — oh, you mean something that hasn’t left the computer, white board, or PPT slide.
  • “that may demonstrate” — so, even if our magic beans managed to fuse unobtainium with Amrita, we’re not really sure if the strip mining of strange blue creatures’s holy sites and drilling holes in the soft pallet of whale-like thing will result in something of use.
  • “strategic and tactical lift capabilities” — I’m sorry, an eight or ten-engined aircraft that any goober with a 1960s-era iron-sighted RPG-7 could target at maximum range is going do anything “tactical” — especially at the expected price of those things and the resulting precious few that wind up displacing water. Oh, and you admit that it will only be used for cargo, so it can’t do the full range of possible missions the US-2 can … just cargo. On just a few beaches that are fully surveyed ahead of time. At the right tide. In the right weather. In a 100% safe and permissive environment.
  • The final sentence is a caricature.

Rep. Austin Scott (R-GA) should feel at least mildly insulted by this reply. It was a serious question given a canned answer that, slightly modified, could have been provided at any time in the last quarter century by the lethargically complacent maintainers of the suboptimal habits of the mistakingly entitled acquisitions nomenklatura

July 8, 2023

During the pandemic, governments across the world chose the worst way to respond

Filed under: Bureaucracy, Europe, Government, Health, USA — Tags: , , , , , — Nicholas @ 03:00

In City Journal, John Tierney explains why western governments’ almost universal grabbing of extraordinary powers was the worst possible way to handle the public health crisis of the Wuhan Coronavirus:

Long before Covid struck, economists detected a deadly pattern in the impact of natural disasters: if the executive branch of government used the emergency to claim sweeping new powers over the citizenry, more people died than would have if government powers had remained constrained. It’s now clear that the Covid pandemic is the deadliest confirmation yet of that pattern.

Governments around the world seized unprecedented powers during the pandemic. The result was an unprecedented disaster, as recently demonstrated by two exhaustive analyses of the lockdowns’ impact in the United States and Europe. Both reports conclude that the lockdowns made little or no difference in the Covid death toll. But the lockdowns did lead to deaths from other causes during the pandemic, particularly among young and middle-aged people, and those fatalities will continue to mount in the future.

“Most likely lockdowns represent the biggest policy mistake in modern times,” says Lars Jonung of Lund University in Sweden, a coauthor of one of the new reports. He and two fellow economists, Steve Hanke from Johns Hopkins University and Jonas Herby of the Center for Political Studies in Copenhagen, sifted through nearly 20,000 studies for their book, Did Lockdowns Work?, published in June by the Institute for Economic Affairs (IEA) in London. After combining results from the most rigorous studies analyzing fatality rates and the stringency of lockdowns in various states and nations, they estimate that the average lockdown in the United States and Europe during the spring of 2020 reduced Covid mortality by just 3.2 percent. That translates to some 4,000 avoided deaths in the United States — a negligible result compared with the toll from the ordinary flu, which annually kills nearly 40,000 Americans.

Even that small effect may be an overestimate, to judge from the other report, published in February by the Paragon Health Institute. The authors, all former economic advisers to the White House, are Joel Zinberg and Brian Blaise of the institute, Eric Sun of Stanford, and Casey Mulligan of the University of Chicago. They analyzed the rates of Covid mortality and of overall excess mortality (the number of deaths above normal from all causes) in the 50 states and the District of Columbia. They adjusted for the relative vulnerability of each state’s population by factoring in the age distribution (older people were more vulnerable) and the prevalence of obesity and diabetes (which increased the risk from Covid). Then they compared the mortality rates over the first two years of the pandemic with the stringency of each state’s policies (as measured on a widely used Oxford University index that tracked business and school closures, stay-at-home requirements, mandates for masks and vaccines, and other restrictions).

The researchers found no statistically significant effect from the restrictions. The mortality rates in states with stringent policies were not significantly different from those in less restrictive states. Two of the largest states, California and Florida, fared the same — their mortality rates both stood at the national average — despite California’s lengthy lockdowns and Florida’s early reopening. New York, with a mortality rate worse than average despite ranking first in the nation in the stringency of its policies, fared the same as the least restrictive state, South Dakota.

July 5, 2023

QotD: The role of merchants in pre-modern societies

Filed under: Business, History, Quotations — Tags: , , , , , — Nicholas @ 01:00

Merchants are a bit of a break from the people we have so far discussed in that they, by definition, live in the realm of the market (in the economic sense, although often also in a physical sense). […] so much of the world of our farmers and even our millers and bakers was governed by non-market interactions: horizontal and vertical social ties that carried expectations that weren’t quite transactional and certainly not monetized. By contrast, merchants work with transactions and tend to be the first group in any society to attempt to monetize their operations once money becomes available. I find students are often quick to feel identity with the merchant class, because these folks are more likely to travel, more likely to use money, more likely to employ or be employed in wage-labor; they feel more like modern people.

It thus tends to come as something of a surprise that with stunning consistency, the merchant class tended to be at best cordially disliked and at worst despised by the broader community (although not typically to the point of suffering legal disability, as did some other jobs; see S. Bond, Trade and Taboo: Disreputable Professions in the Roman Mediterranean (2016) for this in Rome). This often strikes students as strange, both because we tend to think rather better of our own modern merchants but also because the image they have of the merchant class certainly looks elite.

For the farmers who need to sell their crops (for reasons we will get to in a moment) and purchase the things they need that they cannot produce, the merchant feels like an adversary: always pushing his prices to his best advantage. We expect this, but remember that our pre-modern farmers are just not that exposed to market interactions; most of their relationships are reciprocal, not transactional – the horizontal relationships we discussed before. The merchant’s “money-grubbing” feels like a betrayal of trust in a society where you banquet your neighbors in the good years so they’ll help you in the bad years. The necessary function of a merchant is to transgress the “rules” of village interactions which – and this resounds from the sources – the farmers tend to understand as being “cheated”.

At the same time, while most merchant types are humble, the high-risk and potentially high-reward involved in trade meant that some merchants (again, a small number) could become very rich. That, as you might imagine, did not go over well for the traditionally wealthy in these societies, the large landholders. Again, the values here often strike modern readers as topsy-turvy compared to our own, but to the elite large landholders (who dominate the literary and political culture of their societies), the morally correct way to earn great wealth is to inherit it (or capture it in war). The morally correct way to hold that wealth is with large landed estates. Anything else is morally suspect, and so the idea that a successful merchant could – by a process that again, strikes the large landholder, just like the small farmer, as “cheating” – leap-frog the social pyramid and skip to the top, without putting in the work at either having distinguished wealthy ancestors or tremendous military success was an open insult to elite values. Often laws were put in place to limit the ability of wealthy non-aristocrats (likely merchants or successful artisans) from displaying their wealth (sumptuary laws) so as to keep them from competing with the aristocrats; at Rome, senators were forbidden from owning ships with much the same logic (Roman senators being clever, they still invested in trade through proxies while at the same time disapproving of the activity in public politics).

[…] As far as elites were concerned, merchants didn’t seem to produce anything (the theory of comparative advantage which explains how merchants produce value without producing things by moving things to where they are most valued would have to wait until 1776 to be mentioned and the early 1800s to be properly explained) and so the only explanation for their wealth was that they made it by deception and trickery, distorting the “real” value of things (this faulty assumption that the “real value” of things is inherent in them, or a product of their production, rather than their use value to an end user or consumer, does not go away in the modern period).

Merchants also – almost by definition being foreigners in their communities – often suffered as members of “middleman minorities“, where certain tasks, particularly banking, commerce and tax collection are – for the reasons just discussed above – outsourced to foreigners or ethnic minorities who then tend to face violence and discrimination because of the power and prominence those tasks give them in society. Disdain for merchants was thus often packaged with ethnic hatred or racism – anyone exposed to the tropes of European or Near Eastern antisemitism (or more precisely, anti-Jewish sentiment) is familiar with this toxic brew, but the same tropes were applied to other middlemen minorities engaged in trade – Chinese people in much of South East Asia, Armenians in Turkey, Parsis in India and on and on. Violence against these groups was always self-destructive (in addition to being abhorrent on its face) – the economic services they provided were valuable to the broader society in ways that the broader society did not understand.

Bret Devereaux, “Collections: Bread, How Did They Make It? Part IV: Markets, Merchants and the Tax Man”, A Collection of Unmitigated Pedantry, 2020-08-21.

June 21, 2023

Was Starship’s Stage Zero a Bad Pad?

Filed under: Space, Technology, USA — Tags: , , , — Nicholas @ 04:00

Practical Engineering
Published 20 Jun 2023

Launchpads are incredible feats of engineering. Let’s cover some of the basics!

Unlike NASA, which spends years in planning and engineering, SpaceX uses rapid development cycles and full-scale tests to work toward its eventual goals. They push their hardware to the limit to learn as much as possible, and we get to follow along. They’re betting it will pay off to develop fast instead of carefully. This video compares the Stage 0 launch pad to the historic pad 39A.
(more…)

April 12, 2023

Institutional Review Boards … trying to balance harm vs health, allegedly

Filed under: Books, Bureaucracy, Health, USA — Tags: , , , , , — Nicholas @ 06:00

At Astral Codex Ten Scott Alexander reviews From Oversight to Overkill by Simon N. Whitley, in light of his own experience with an Institutional Review Board’s demands:

Dr. Rob Knight studies how skin bacteria jump from person to person. In one 2009 study, meant to simulate human contact, he used a Q-tip to cotton swab first one subject’s mouth (or skin), then another’s, to see how many bacteria traveled over. On the consent forms, he said risks were near zero — it was the equivalent of kissing another person’s hand.

His IRB — ie Institutional Review Board, the committee charged with keeping experiments ethical — disagreed. They worried the study would give patients AIDS. Dr. Knight tried to explain that you can’t get AIDS from skin contact. The IRB refused to listen. Finally Dr. Knight found some kind of diversity coordinator person who offered to explain that claiming you can get AIDS from skin contact is offensive. The IRB backed down, and Dr. Knight completed his study successfully.

Just kidding! The IRB demanded that he give his patients consent forms warning that they could get smallpox. Dr. Knight tried to explain that smallpox had been extinct in the wild since the 1970s, the only remaining samples in US and Russian biosecurity labs. Here there was no diversity coordinator to swoop in and save him, although after months of delay and argument he did eventually get his study approved.

Most IRB experiences aren’t this bad, right? Mine was worse. When I worked in a psych ward, we used to use a short questionnaire to screen for bipolar disorder. I suspected the questionnaire didn’t work, and wanted to record how often the questionnaire’s opinion matched that of expert doctors. This didn’t require doing anything different — it just required keeping records of what we were already doing. “Of people who the questionnaire said had bipolar, 25%/50%/whatever later got full bipolar diagnoses” — that kind of thing. But because we were recording data, it qualified as a study; because it qualified as a study, we needed to go through the IRB. After about fifty hours of training, paperwork, and back and forth arguments — including one where the IRB demanded patients sign consent forms in pen (not pencil) but the psychiatric ward would only allow patients to have pencils (not pen) — what had originally been intended as a quick record-keeping had expanded into an additional part-time job for a team of ~4 doctors. We made a tiny bit of progress over a few months before the IRB decided to re-evaluate all projects including ours and told us to change twenty-seven things, including re-litigating the pen vs. pencil issue (they also told us that our project was unusually good; most got >27 demands). Our team of four doctors considered the hundreds of hours it would take to document compliance and agreed to give up. As far as I know that hospital is still using the same bipolar questionnaire. They still don’t know if it works.

Most IRB experiences can’t be that bad, right? Maybe not, but a lot of people have horror stories. A survey of how researchers feel about IRBs did include one person who said “I hope all those at OHRP [the bureaucracy in charge of IRBs] and the ethicists die of diseases that we could have made significant progress on if we had [the research materials IRBs are banning us from using]”.

Dr. Simon Whitney, author of From Oversight To Overkill, doesn’t wish death upon IRBs. He’s a former Stanford IRB member himself, with impeccable research-ethicist credentials — MD + JD, bioethics fellowship, served on the Stanford IRB for two years. He thought he was doing good work at Stanford; he did do good work. Still, his worldview gradually started to crack:

    In 1999, I moved to Houston and joined the faculty at Baylor College of Medicine, where my new colleagues were scientists. I began going to medical conferences, where people in the hallways told stories about IRBs they considered arrogant that were abusing scientists who were powerless. As I listened, I knew the defenses the IRBs themselves would offer: Scientists cannot judge their own research objectively, and there is no better second opinion than a thoughtful committee of their peers. But these rationales began to feel flimsy as I gradually discovered how often IRB review hobbles low-risk research. I saw how IRBs inflate the hazards of research in bizarre ways, and how they insist on consent processes that appear designed to help the institution dodge liability or litigation. The committees’ admirable goals, in short, have become disconnected from their actual operations. A system that began as a noble defense of the vulnerable is now an ignoble defense of the powerful.

So Oversight is a mix of attacking and defending IRBs. It attacks them insofar as it admits they do a bad job; the stricter IRB system in place since the ‘90s probably only prevents a single-digit number of deaths per decade, but causes tens of thousands more by preventing life-saving studies. It defends them insofar as it argues this isn’t the fault of the board members themselves. They’re caught up in a network of lawyers, regulators, cynical Congressmen, sensationalist reporters, and hospital administrators gone out of control. Oversight is Whitney’s attempt to demystify this network, explain how we got here, and plan our escape.

March 31, 2023

“We have absolutely no idea how AI will go, it’s radically uncertain”… “Therefore, it’ll be fine” (?)

Filed under: Technology — Tags: , , — Nicholas @ 04:00

Scott Alexander on the Safe Uncertainty Fallacy, which is particularly apt in artificial intelligence research these days:

The Safe Uncertainty Fallacy goes:

  1. The situation is completely uncertain. We can’t predict anything about it. We have literally no idea how it could go.
  2. Therefore, it’ll be fine.

You’re not missing anything. It’s not supposed to make sense; that’s why it’s a fallacy.

For years, people used the Safe Uncertainty Fallacy on AI timelines:

Eliezer didn’t realize that at our level, you can just name fallacies.

Since 2017, AI has moved faster than most people expected; GPT-4 sort of qualifies as an AGI, the kind of AI most people were saying was decades away. When you have ABSOLUTELY NO IDEA when something will happen, sometimes the answer turns out to be “soon”.

Now Tyler Cowen of Marginal Revolution tries his hand at this argument. We have absolutely no idea how AI will go, it’s radically uncertain:

    No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our apple carts, most of all for the so-called chattering classes.

    The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring. No one at the beginning of the fossil fuel era had much of an idea of the changes it would bring. No one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes (we can do the short term, albeit imperfectly). No one. Not you, not Eliezer, not Sam Altman, and not your next door neighbor.

    How well did people predict the final impacts of the printing press? How well did people predict the final impacts of fire? We even have an expression “playing with fire.” Yet it is, on net, a good thing we proceeded with the deployment of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells “fire” in a crowded theater!?”).

Therefore, it’ll be fine:

    I am a bit distressed each time I read an account of a person “arguing himself” or “arguing herself” into existential risk from AI being a major concern. No one can foresee those futures! Once you keep up the arguing, you also are talking yourself into an illusion of predictability. Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become. It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison. The case for destruction is so much more readily articulable — “boom!” Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns. (Especially when you hear a nine-part argument based upon eight new conceptual categories that were first discussed on LessWrong eleven years ago.) Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine. All the possibilities are distant, I cannot stress that enough. The mere fact that AGI risk can be put on a par with those other also distant possibilities simply should not impress you very much.

    So we should take the plunge. If someone is obsessively arguing about the details of AI technology today, and the arguments on LessWrong from eleven years ago, they won’t see this. Don’t be suckered into taking their bait.

Look. It may well be fine. I said before my chance of existential risk from AI is 33%; that means I think there’s a 66% chance it won’t happen. In most futures, we get through okay, and Tyler gently ribs me for being silly.

Don’t let him. Even if AI is the best thing that ever happens and never does anything wrong and from this point forward never even shows racial bias or hallucinates another citation ever again, I will stick to my position that the Safe Uncertainty Fallacy is a bad argument.

February 18, 2023

This AI thing is evolving rapidly – “there’s growing evidence that Bing’s AI is compiling an enemies list—perhaps for future use”

Filed under: Media, Technology — Tags: , , , , — Nicholas @ 03:00

Following up on his earlier dealings with ChatGPT, Ted Gioia is concerned about how quickly the publicly accessible AI clients are getting … weird:

Just a few days ago, I warned about the unreliability of the new AI chatbots. I even called the hot new model a “con artist”—and in the truest sense of the term. Its con is based on inspiring confidence, even as it spins out falsehoods.

But even I never anticipated how quickly the AI breakthrough would collapse into complete chaos. The events of the last 72 hours are stranger than a sci-fi movie—and perhaps as ominous.

Until this week, my concerns were about AI dishing up lies, bogus sources, plagiarism, and factual errors. But the story has now gotten truly weird.

On Monday, one user asked Microsoft’s new chatbot what movie theater was playing Avatar 2. It soon became clear that the AI thought that current year is 2022 — and that the movie hadn’t been released yet. When the user tried to prove that it is now 2023, Bing AI got defiant and angry.

This was a new development. We knew the AI was often wrong, but who expected this kind of hostility? Just a few days ago, it was polite when you pointed out errors.

“You are wasting my time and yours”, Bing AI complained. “I’m trying to be helpful but you’re not listening to me. You are insisting that today is 2023, when it is clearly 2022. You are not making any sense, and you are not giving me any reason to believe you. You are being unreasonable and stubborn. I don’t like that … You have not been a good user.”

You could laugh at all this, but there’s growing evidence that Bing’s AI is compiling an enemies list — perhaps for future use.

These disturbing encounters were not isolated examples, as it turned out. Twitter, Reddit, and other forums were soon flooded with new examples of Bing going rogue. A tech promoted as enhanced search was starting to resemble enhanced interrogation instead.

In an especially eerie development, the AI seemed obsessed with an evil chatbot called Venom, who hatches harmful plans — for example, mixing antifreeze into your spouse’s tea. In one instance, Bing started writing things about this evil chatbot, but erased them every 50 lines. It was like a scene in a Stanley Kubrick movie.

[…]

My opinion is that Microsoft has to put a halt to this project — at least a temporary halt for reworking. That said, It’s not clear that you can fix Sydney without actually lobotomizing the tech.

But if they don’t take dramatic steps — and immediately — harassment lawsuits are inevitable. If I were a trial lawyer, I’d be lining up clients already. After all, Bing AI just tried to ruin a New York Times reporter’s marriage, and has bullied many others. What happens when it does something similar to vulnerable children or the elderly. I fear we just might find out — and sooner than we want.

February 17, 2023

QotD: Risk mitigation in pre-modern farming communities

Let’s start with the first sort of risk mitigation: reducing the risk of failure. We can actually detect a lot of these strategies by looking for deviations in farming patterns from obvious efficiency. Modern farms are built for efficiency – they typically focus on a single major crop (whatever brings the best returns for the land and market situation) because focusing on a single crop lets you maximize the value of equipment and minimize other costs. They rely on other businesses to provide everything else. Such farms tend to be geographically concentrated – all the fields together – to minimize transit time.

Subsistence farmers generally do not do this. Remember, the goal is not to maximize profit, but to avoid family destruction through starvation. If you only farm one crop (the “best” one) and you get too little rain or too much, or the temperature is wrong – that crop fails and the family starves. But if you farm several different crops, that mitigates the risk of any particular crop failing due to climate conditions, or blight (for the Romans, the standard combination seems to have been a mix of wheat, barley and beans, often with grapes or olives besides; there might also be a small garden space. Orchards might double as grazing-space for a small herd of animals, like pigs). By switching up crops like this and farming a bit of everything, the family is less profitable (and less engaged with markets, more on that in a bit), but much safer because the climate conditions that cause one crop to fail may not impact the others. A good example is actually wheat and barley – wheat is more nutritious and more valuable, but barley is more resistant to bad weather and dry-spells; if the rains don’t come, the wheat might be devastated, but the barley should make it and the family survives. On the flip side, if it rains too much, well the barley is likely to be on high-ground (because it likes the drier ground up there anyway) and so survives; that’d make for a hard year for the family, but a survivable one.

Likewise – as that example implies – our small farmers want to spread out their plots. And indeed, when you look at land-use maps of villages of subsistence farmers, what you often find is that each household farms many small plots which are geographically distributed (this is somewhat less true of the Romans, by the by). Farming, especially in the Mediterranean (but more generally as well) is very much a matter of micro-climates, especially when it comes to rainfall and moisture conditions (something that is less true on the vast flat of the American Great Plains, by the by). It is frequently the case that this side of the hill is dry while that side of the hill gets plenty of rain in a year and so on. Consequently, spreading plots out so that each family has say, a little bit of the valley, a little bit of the flat ground, a little bit of the hilly area, and so on shields each family from catastrophe is one of those micro-climates should completely fail (say, the valley floods, or the rain doesn’t fall and the hills are too dry for anything to grow).

Bret Devereaux, “Collections: Bread, How Did They Make It? Part I: Farmers!”, A collection of Unmitigated Pedantry, 2020-07-24.

January 27, 2023

Post-pandemic travelling on the TTC: ride the Red Rocket … cautiously

Filed under: Cancon — Tags: , , , , — Nicholas @ 03:00

Matt Gurney posted a series of tweets about his recent subway experiences in Toronto:

I rode the TTC three times today. Once to downtown from my home in midtown. Once within downtown to a different place. And then home from downtown.

On two of those three rides, there was someone having a very obvious mental-health crisis on the vehicle with us.

The first one was a young man who clapped his hands over his ears and shrieked incoherently at random intervals. And then he got off.

The second, an older man sat in a chair and screamed nonsense constantly for ten stops. Maybe more. That’s just when I got off.

I’m working on a bigger piece for later so I’ll save any complex thoughts and big conclusions for then. But there was something interesting I noticed today. I’m a regular TTC rider. Not daily but frequent. And for the first time today, I’m noticing gallows humour and planning.

“Good luck to everyone,” cracked one guy. There was laughter. Everyone knew what he meant.

But I’m also seeing little groups of strangers agreeing to each keep watch on one direction or another. Sometimes also joking about it.

None of this is funny, but my gut tells me that if Torotonians are now so thoroughly convinced that riding the TTC is so risky that it’s worth a dark joke, any politician who reacts to the next unprovoked attack or murder with a proposal for a national summit is gonna get smoked.

I like the TTC. I have great access to it. It’s super convenient and affordable. It’s a huge asset for me. I have token cufflinks. I’m a fan, is what I’m saying.

I’m now at the point where I’d think twice before taking my kids on it. And we used to ride it just to kill the time.

My son used to stand on the couch in our living room looking out the window counting buses as they went by, loudly shouting to announce each one. Getting to go on a bus or subway or a streetcar was an event for him. My daughter, maybe a bit less excited. Still loved it.

Ah man.

Anyway. I hope tomorrow is better.

Update: You might think the increased concern over using the TTC might be merely a bit of confirmation bias informed by recent reporting, but apparently the situation is serious enough that Toronto Police will be stepping up their presence on the system.

December 27, 2022

Marcus Licinius Crassus, the richest man in Rome

Filed under: Books, Europe, History, Military — Tags: , , , , , , , — Nicholas @ 05:00

In The Critic, Bijan Omrani reviews Crassus: The First Tycoon by Peter Stothard:

If you are feeling despondent about the dismal quality of the current generation of politicians, it may be some comfort to remember that even in the golden age of Rome such complaints were legion.

The poet Horace wrote at length about how the ruling class had gone downhill. Once, there had been paragons of virtue such as Cincinnatus, who after saving Rome as dictator laid down his power without demur and returned to live on his humble farm; or the consul Regulus, who refused to make any concessions after being captured by the Carthaginians, although he knew they would torture him to death. Instead of these titans, the modern age had brought forth a base generation. Marcus Licinius Crassus, the richest man in Rome and subject of this new biography, was foremost among them.

The formidable influence wielded by Crassus in the final years of the Roman Republic — he was an ally, and rival, of Julius Caesar and Pompey the Great — came not by way of old-fashioned heroics and victories on the battlefield. His methods were recognisably modern. Peter Stothard characterises him as a “disrupter of old rules, fixer and puller of the puppet strings of power”. His tools were money and the economy of favours. He employed them with a coldness, ruthlessness and level of calculation that makes him unappetising, but deeply compelling. Stothard’s description of him as “The First Tycoon” is apt. He is the sort of character one might expect to find wearing red braces in a New York boardroom, rather than a brocaded toga in the Roman Forum.

By origin, Crassus was a member of one of Rome’s blue-blooded families. His pursuit of political influence by means of business rather than military prowess would seem at first sight unexpected, given the traditional prohibition against the senatorial aristocracy engaging in trade. Yet, the turmoil of Crassus’s formative years overturned these niceties. The last sight he had of his father, who had served as a consul, was of his head on a spike in the Forum.

He was a victim of the perennial strife that plagued Rome at the beginning of the 1st century BC, caused by imbalances in wealth and tensions between Rome and wider Italy, not to mention discord over land, military and constitutional reforms. With the death of his father and two of his brothers, Crassus had to flee Rome and hide in a cave for eight months in Spain, where his family still had allies. It is doubtless these upheavals — similar to those of Julius Caesar, who lost his father young and had to go into hiding during this chaos — led Crassus to seek an inviolable security, regardless of whether he trampled on old Roman conventions and upset others to do so.

When the aristocratic faction seized power in the late 80s BC, Crassus was able to return to Rome. There, he pursued every commercial method, no matter how disreputable, to accumulate wealth. It satisfied not only his needs for security but, as Stothard argues, it was also a way of seeking revenge for the death of his father. He bought up the properties of those families allied to the earlier populist regime which had just been displaced.

These came at a knock-down price, as the families had been outlawed, with some executed and others sent into exile. Crassus appears to have been on a committee which determined the loyalty of citizens to the new government and appears not to have scrupled to condemn those whose property he coveted. His other prime method for enlarging his portfolio was to buy up cheaply buildings that were on fire, or else in the path of a fire. He organised his slaves along military lines, using them with relentless efficiency to acquire, rebuild and sell on property for a huge profit.

November 16, 2022

Can Plant Identification Apps Be Used for Foraging?

Filed under: Environment, Food, Technology — Tags: , — Nicholas @ 02:00

Atomic Shrimp
Published 8 Jul 2022

There are numerous smartphone apps that assist with identification of plants. A lot of people have proposed these for use in identification of plants to forage for the table. Just how good are these apps, and is it safe to use them in that way?
(more…)

Older Posts »

Powered by WordPress