Quotulatiousness

February 19, 2018

Graphing good news

Filed under: Books, Economics, History — Nicholas @ 05:00

In the Times Literary Supplement, David Wootton reviews Enlightenment Now: A manifesto for science, reason, humanism and progress by Steven Pinker:

This book consists essentially of seventy-two graphs – and, despite that, it is gripping, provocative and (many will find) infuriating. The graphs all have time on the horizontal axis, and on the vertical axis something important that can be measured against it – life expectancy, for example, or suicide rates, or income. In some graphs the line, or lines (often the graphs compare trends in several countries) fall as they go from left to right; in others they rise. In every single one, the overall picture (with the inevitable blips and bounces) is of life getting better and better. Suicide rates fall, homicides fall, incomes rise, life expectancies rise, literacy rates rise and so on and on through seventy-two variations. Most of these graphs are not new: some simply update graphs which appeared in Pinker’s earlier The Better Angels of Our Nature (2011); others come from recognized purveyors of statistical information. The graphs that weren’t in Better Angels extend the argument of that book, that war and homicide are on the decline across the globe, to assert that life has been getting better and better in all sorts of other respects. The claim isn’t new: a shorter version is to be found in Johan Norberg’s Progress (2017). But the range and scope of the evidence adduced is new. The only major claim not supported by a graph (or indeed much evidence of any kind) is the assertion that all this progress has something to do with the Enlightenment.

Since the argument of the book is almost entirely contained in the graphs, those who want to attack the argument are going to attack the figures on which the graphs are based. Good luck to them: arguments based on statistics, like all interesting arguments, should be tested and tested again. Better Angels caused a vitriolic dispute between Pinker and Nassim Nicholas Taleb as to whether major wars are becoming less frequent. In Taleb’s view the question is a bit like asking whether major earthquakes are getting less frequent or not: they happen so rarely, and so randomly, that you would need records going back over a vast stretch of time to reach any meaningful conclusion; a graph showing falling death rates in wars over the past seventy years won’t do the job. But it certainly will tell you that lots of generalizations about modern war are wrong. Much, indeed most, of Pinker’s argument survived Taleb’s attack, which in any case was directed at only one graph among many.

A more radical line of criticism of Better Angels came from John Gray. How can one find a common standard of measurement for the suffering of a concentration camp victim, of a soldier who died in the trenches, and of someone killed in the firebombing of Dresden? To turn to economics, how can one find a common standard of measurement for books and washing machines, oranges and steak pies? Money, you might think, provides that standard, but what happens if many of the goods being measured – electric lighting, cars, televisions, computers – get cheaper and cheaper as time goes on, so that a rising standard of living is concealed by falling prices? For Gray, to place one’s faith in statistics, which claim to be measuring the unmeasurable, is no different from believing in conversations with angels or in the efficacy of Buddhist prayer wheels. Quantification is our religion.

February 17, 2018

Only 3.8% of American adults identify themselves as LGBT

Filed under: Politics, USA — Nicholas @ 03:00

Most people guess a much higher percentage, and if the poll were restricted to the under-30s, the number would likely be at least twice as high. The poll is a few years old now, but it points out that most Americans overestimate the number of gays and lesbians in the population:

The American public estimates on average that 23% of Americans are gay or lesbian, little changed from Americans’ 25% estimate in 2011, and only slightly higher than separate 2002 estimates of the gay and lesbian population. These estimates are many times higher than the 3.8% of the adult population who identified themselves as lesbian, gay, bisexual or transgender in Gallup Daily tracking in the first four months of this year.

The stability of these estimates over time contrasts with the major shifts in Americans’ attitudes about the morality and legality of gay and lesbian relations in the past two decades. Whereas 38% of Americans said gay and lesbian relations were morally acceptable in 2002, that number has risen to 63% today. And while 35% of Americans favored legalized same-sex marriage in 1999, 60% favor it today.

The U.S. Census Bureau documents the number of individuals living in same-sex households but has not historically identified individuals as gay or lesbian per se. Several other surveys, governmental and non-governmental, have over the years measured sexual orientation, but the largest such study by far has been the Gallup Daily tracking measure instituted in June 2012. In this ongoing study, respondents are asked “Do you, personally, identify as lesbian, gay, bisexual or transgender?” with 3.8% being the most recent result, obtained from more than 58,000 interviews conducted in the first four months of this year.

H/T to Gari Garion for the link.

February 15, 2018

QotD: Computer models

Filed under: Economics, Quotations, Technology — Nicholas @ 01:00

How can one be certain about outcomes in a complex system that we’re not really all that good at modeling? Anyone who’s familiar with the history of macroeconomic modeling in the 1960s and 1970s will be tempted to answer “Umm, we can’t.” Economists thought that the explosion of data and increasingly sophisticated theory was going to allow them to produce reasonably precise forecasts of what would happen in the economy. Enormous mental effort and not a few careers were invested in building out these models. And then the whole effort was basically abandoned, because the models failed to outperform mindless trend extrapolation — or as Kevin Hassett once put it, “a ruler and a pencil.”

Computers are better now, but the problem was not really the computers; it was that the variables were too many, and the underlying processes not understood nearly as well as economists had hoped. Economists can’t run experiments in which they change one variable at a time. Indeed, they don’t even know what all the variables are.

This meant that they were stuck guessing from observational data of a system that was constantly changing. They could make some pretty good guesses from that data, but when you built a model based on those guesses, it didn’t work. So economists tweaked the models, and they still didn’t work. More tweaking, more not working.

Eventually it became clear that there was no way to make them work given the current state of knowledge. In some sense the “data” being modeled was not pure economic data, but rather the opinions of the tweaking economists about what was going to happen in the future. It was more efficient just to ask them what they thought was going to happen. People still use models, of course, but only the unflappable true believers place great weight on their predictive ability.

Megan McArdle, “Global-Warming Alarmists, You’re Doing It Wrong”, Bloomberg View, 2016-06-01.

February 4, 2018

Sword Bayonets – German Casualties – Jerusalem Occupation I OUT OF THE TRENCHES

The Great War
Published on 3 Feb 2018

Check our Podcast: http://bit.ly/MedievalismWW1Podcast

Chair of Wisdom Time! This week we talk about possibly fabricated German casualty numbers, the unwieldy WW1 bayonets and the reaction to the occupation of Jerusalem.

January 19, 2018

What “killed” the most tanks in World War 2?

Filed under: Britain, Europe, France, Germany, History, Military, USA, Weapons, WW2 — Nicholas @ 02:00

Military History Visualized
Published on 22 Dec 2017

This video discusses what killed the most tanks in World War 2. Was it anti-tank guns, mines, planes, hand-held anti-tank weapons, mechanical breakdowns, or something else? It also takes a short look at the problems with the term “kill”, e.g., mobility kills, firepower kills and catastrophic/complete kills.

Original Question by Christopher: “What destroyed the most tanks during WW2: infantry, planes, anti-tank guns, or other tanks (I’m not sure if tank destroyers needs its own category or not).”

January 8, 2018

QotD: Differentiating between lies and (political) bullshit

Filed under: Politics, Quotations — Nicholas @ 01:00

Thirty years ago, the Princeton philosopher Harry Frankfurt published an essay in an obscure academic journal, Raritan. The essay’s title was “On Bullshit”. (Much later, it was republished as a slim volume that became a bestseller.) Frankfurt was on a quest to understand the meaning of bullshit — what was it, how did it differ from lies, and why was there so much of it about?

Frankfurt concluded that the difference between the liar and the bullshitter was that the liar cared about the truth — cared so much that he wanted to obscure it — while the bullshitter did not. The bullshitter, said Frankfurt, was indifferent to whether the statements he uttered were true or not. “He just picks them out, or makes them up, to suit his purpose.”

Statistical bullshit is a special case of bullshit in general, and it appears to be on the rise. This is partly because social media — a natural vector for statements made purely for effect — are also on the rise. On Instagram and Twitter we like to share attention-grabbing graphics, surprising headlines and figures that resonate with how we already see the world. Unfortunately, very few claims are eye-catching, surprising or emotionally resonant because they are true and fair. Statistical bullshit spreads easily these days; all it takes is a click.

Tim Harford, “How politicians poisoned statistics”, TimHarford.com, 2016-04-20.

January 1, 2018

Blog traffic in 2017

Filed under: Administrivia, Media — Nicholas @ 03:00

The annual statistics update on Quotulatiousness from January 1st through December 31st, 2017. The numbers will be a couple of thousand short of the full year, as I did the screen captures mid-morning on the 31st.

I stopped paying much attention to the blog stats years ago, but the jump in traffic from 2016 to 2017 is amazing! Going from a stable ~1.7 million visits per year to nearly 2.5 million last year is quite unexpected. That’s getting up toward the region where it might seem to make sense to try to monetize the blog … but I tried doing the Amazon affiliate thing earlier this year, and it generated exactly $0.00 in revenue for Amazon, and I got my full share of that revenue (as Jayne put it: “Let’s see, let me do the math: 10 per cent of nothing is … (mumble) carry the zero … (mumble) …”)

October 26, 2017

QotD: The nutrition science is settled

Filed under: Food, Health, Quotations, Science — Nicholas @ 01:00

Nutrition science is, in general, a bottomless stew of politics, guesswork, bogus data and poor statistical practice. I would call it “unsavoury” if that weren’t such an inexcusable pun in this context. Anyone who has read the newspaper for 10 or 20 years, watching the endless tide of good-for-you/bad-for-you roll in and out, must know this instinctively.

Colby Cosh, “MSG: The harmless food enhancer everyone still dreads”, National Post, 2016-04-18.

September 5, 2017

The 100 Year Flood Is Not What You Think It Is (Maybe)

Filed under: Environment, Technology — Nicholas @ 08:21

Published on 6 Mar 2016

Today on Practical Engineering we’re talking about hydrology, and I took a little walk through my neighborhood to show you some infrastructure you may have never noticed before.

Almost everyone agrees that flooding is bad. Most years it’s the number one natural disaster in the US by dollars of damage, so being able to characterize flood risks is a crucial job of civil engineers. Engineering hydrology is equal parts statistics and an understanding of how society treats risk. Water is incredibly important to us, and it shapes almost every facet of our lives, but it’s almost never in the right place at the right time. Sometimes there’s not enough, as in a drought or an arid region, but we also need to be prepared for the times when there’s too much water: a flood. Rainfall and streamflow have tremendous variability, and it’s the engineer’s job to characterize that variability so that we can make rational and intelligent decisions about how we develop the world around us. Thanks for watching!

FEMA Floodplain Maps: https://msc.fema.gov/portal
USGS Stream Gages: http://maps.waterdata.usgs.gov/mapper

“So, let’s consider the concept of a ‘500-year flood'”

Filed under: Environment, Science — Nicholas @ 03:00

Charlie Martin explains how it’s possible to have two “500-year floods” in less than 500 years:

There have been a lot of people suggesting that Harvey the Hurricane shows that “really and truly climate change is happening, see, in-your-face deniers!”

Of course, it’s possible, even though the actual evidence — including the 12-year drought in major hurricanes — is against it. But hurricanes are a perfect opportunity for stupid math tricks. Hurricanes also provide great opportunities to explain concepts that are unclear to people. So, let’s consider the concept of a “500-year flood.”

Most people hear this and think it means “one flood this size in 500 years.” The real definition is subtly different: saying “a 500-year flood” actually means “there is one chance in 500 of a flood this size happening in any year.”

It’s called a “500-year flood” because, statistically, over a long enough time we would expect roughly one such flood on average every 500 years. So, if we had 100,000 years of weather data (and things otherwise stayed the same, which is an unrealistic assumption), then we’d expect to have seen 100,000/500, or 200, 500-year floods [Ed. typo fixed] at that level.

The trouble is, we’ve only got about 100 years of good weather data for the Houston area.
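The arithmetic behind that definition is worth making explicit. Here is a minimal Python sketch (an editorial illustration, not from Martin’s article) that computes the expected count over a long record and the chance of seeing one, or even two, such floods in the roughly 100 years of data we actually have:

```python
from math import comb

# The "500-year flood" label means: each year carries an independent
# 1-in-500 chance of a flood at least that large.
p_annual = 1 / 500

# Expected number of such floods in 100,000 years of records:
expected = 100_000 * p_annual  # 100,000/500 = 200, the article's arithmetic

# Chance of at least one 500-year flood in a 100-year record:
p_at_least_one = 1 - (1 - p_annual) ** 100

# Chance of two or more in 100 years (1 minus the binomial terms for 0 and 1):
p_two_or_more = 1 - sum(
    comb(100, k) * p_annual**k * (1 - p_annual) ** (100 - k) for k in (0, 1)
)

print(f"expected floods in 100,000 years: {expected:.0f}")
print(f"P(at least one in 100 years) = {p_at_least_one:.3f}")
print(f"P(two or more in 100 years) = {p_two_or_more:.4f}")
```

A single century of records carries about an 18% chance of at least one such flood and roughly a 1.7% chance of two or more, so back-to-back “500-year floods” are unlikely but far from impossible.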

July 27, 2017

Words & Numbers: Is Income Inequality Real?

Filed under: Economics, Politics, USA — Nicholas @ 05:00

Published on 26 Jul 2017

Income inequality has been in the news more and more, and it doesn’t look good. It’s aggravating to see people making more money than you, and we’re told all the time that income inequality is on the rise. But is it? And even if it is, is it actually a bad thing? This week on Words and Numbers, Antony Davies​ and James R. Harrigan​ talk about how income inequality plays out in the real world.

Learn more: https://fee.org/articles/is-income-inequality-real/

July 19, 2017

“The Economics of Trade” | THINK 2017

Filed under: Britain, Economics, USA — Nicholas @ 04:00

Published on Jul 17, 2017

What exactly is Free Trade and is it always the best policy?

Professor Don Boudreaux of Cafe Hayek discusses the morality of capitalist exchange and its inherent advantages.

July 13, 2017

“Each month in the United States — a place with about 160 million civilian jobs — 1.7 million of them vanish”

Filed under: Business, Economics, Technology — Nicholas @ 05:00

Deirdre McCloskey addresses the fear that technological change is gobbling up all the jobs:

Consider the historical record: If the nightmare of technological unemployment were true, it would already have happened, repeatedly and massively. In 1800, four out of five Americans worked on farms. Now one in 50 do, but the advent of mechanical harvesting and hybrid corn did not disemploy the other 78 percent.

In 1910, one out of 20 of the American workforce was on the railways. In the late 1940s, 350,000 manual telephone operators worked for AT&T alone. In the 1950s, elevator operators by the hundreds of thousands lost their jobs to passengers pushing buttons. Typists have vanished from offices. But if blacksmiths unemployed by cars or TV repairmen unemployed by printed circuits never got another job, unemployment would not be 5 percent, or 10 percent in a bad year. It would be 50 percent and climbing.

Each month in the United States — a place with about 160 million civilian jobs — 1.7 million of them vanish. Every 30 days, in a perfectly normal manifestation of creative destruction, over 1 percent of the jobs go the way of the parlor maids of 1910. Not because people quit. The positions are no longer available. The companies go out of business, or get merged or downsized, or just decide the extra salesperson on the floor of the big-box store isn’t worth the costs of employment.

What you hear on the evening news is the monthly net increase or decrease in jobs, with some 200,000 added in a good month. But the gross figure of 1 percent of jobs lost per month is the relevant one for worries about technological unemployment. It’s well over 10 percent per year at simple interest. In just a few years at such rates — if disemployment were truly permanent — a third of the labor force would be standing on street corners, and the fraction still would be rising. In 2000, well over 100,000 people were employed by video stores, yet our street corners are not filled with former video store clerks asking for loose change.

We could “save people’s jobs” by stopping all innovation. You would do next year exactly what you did this year. Capital as well as labor would perpetually be employed the same way. But then we would perpetually have the same income. That’s nice if you’re doing well now. It’s not so nice if you’re poor or young.

Job protections for the old have in fact already created a dangerous class of unemployed youths in the world — 50 percent among Greeks and black South Africans, for instance.
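McCloskey’s gross-versus-net arithmetic is easy to verify. A short Python sketch using the round numbers from the passage (the figures are hers; the code is an editorial illustration):

```python
# Round numbers from the passage.
total_jobs = 160_000_000        # US civilian jobs
gross_lost_monthly = 1_700_000  # jobs that vanish each month
net_gain_good_month = 200_000   # typical headline "jobs added" figure

monthly_loss_rate = gross_lost_monthly / total_jobs  # just over 1% per month
annual_loss_simple = monthly_loss_rate * 12          # "well over 10%" per year

# A +200,000 net month implies gross creation of losses plus the net gain.
gross_created_monthly = gross_lost_monthly + net_gain_good_month

# If disemployment were truly permanent (no offsetting creation), the
# cumulative share of the workforce idled after three years:
three_year_share = monthly_loss_rate * 36  # roughly "a third of the labor force"

print(f"monthly gross loss rate: {monthly_loss_rate:.2%}")
print(f"annual loss at simple interest: {annual_loss_simple:.1%}")
print(f"implied gross job creation per month: {gross_created_monthly:,}")
print(f"share idled after 3 years if losses were permanent: {three_year_share:.0%}")
```

The headline +200,000 “jobs added” figure thus implies roughly 1.9 million gross hires in the same month, which is the churn the passage is pointing at.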

June 15, 2017

Words & Numbers: What You Should Know About Poverty in America

Filed under: Economics, Government, USA — Nicholas @ 04:00

Published on 14 Jun 2017

Poverty is a big deal: it affects about 41 million people in the United States every year, even though the federal government spends a huge amount of money trying to end it. Too much of that welfare spending gets eaten up by bureaucracy, conflicting programs, and politicians presuming they know how people should spend their own money. Obviously, this isn’t working.

This week on Words and Numbers, Antony Davies and James R. Harrigan delve into how people can really become less poor and what that means for society and the government.

May 30, 2017

QotD: The uses of IQ

Filed under: Books, Health, Media, Quotations — Nicholas @ 01:00

Suppose that the question at issue regards individuals: “Given two 11 year olds, one with an IQ of 110 and one with an IQ of 90, what can you tell us about the differences between those two children?” The answer must be phrased very tentatively. On many important topics, the answer must be, “We can tell you nothing with any confidence.” It is well worth a guidance counselor’s time to know what these individual scores are, but only in combination with a variety of other information about the child’s personality, talents, and background. The individual’s IQ score all by itself is a useful tool but a limited one.

Suppose instead that the question at issue is: “Given two sixth-grade classes, one for which the average IQ is 110 and the other for which it is 90, what can you tell us about the difference between those two classes and their average prospects for the future?” Now there is a great deal to be said, and it can be said with considerable confidence — not about any one person in either class but about average outcomes that are important to the school, educational policy in general, and society writ large. The data accumulated under the classical tradition are extremely rich in this regard, as will become evident in subsequent chapters.

[…]

We agree emphatically with Howard Gardner, however, that the concept of intelligence has taken on a much higher place in the pantheon of human virtues than it deserves. One of the most insidious but also widespread errors regarding IQ, especially among people who have high IQs, is the assumption that another person’s intelligence can be inferred from casual interactions. Many people conclude that if they see someone who is sensitive, humorous, and talks fluently, the person must surely have an above-average IQ.

This identification of IQ with attractive human qualities in general is unfortunate and wrong. Statistically, there is often a modest correlation with such qualities. But modest correlations are of little use in sizing up other individuals one by one. For example, a person can have a terrific sense of humor without giving you a clue about where he is within thirty points on the IQ scale. Or a plumber with a measured IQ of 100 — only an average IQ — can know a great deal about the functioning of plumbing systems. He may be able to diagnose problems, discuss them articulately, make shrewd decisions about how to fix them, and, while he is working, make some pithy remarks about the president’s recent speech.

At the same time, high intelligence has earmarks that correspond to a first approximation to the commonly understood meaning of smart. In our experience, people do not use smart to mean (necessarily) that a person is prudent or knowledgeable but rather to refer to qualities of mental quickness and complexity that do in fact show up in high test scores. To return to our examples: Many witty people do not have unusually high test scores, but someone who regularly tosses off impromptu complex puns probably does (which does not necessarily mean that such puns are very funny, we hasten to add). If the plumber runs into a problem he has never seen before and diagnoses its source through inferences from what he does know, he probably has an IQ of more than 100 after all. In this, language tends to reflect real differences: In everyday language, people who are called very smart tend to have high IQs.

All of this is another way of making a point so important that we will italicize it now and repeat elsewhere: Measures of intelligence have reliable statistical relationships with important social phenomena, but they are a limited tool for deciding what to make of any given individual. Repeat it we must, for one of the problems of writing about intelligence is how to remind readers often enough how little an IQ score tells about whether the human being next to you is someone whom you will admire or cherish. This thing we know as IQ is important but not a synonym for human excellence.

Charles Murray, “The Bell Curve Explained”, American Enterprise Institute, 2017-05-20.
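Murray’s contrast between reliable group averages and weak individual prediction can be demonstrated with a toy simulation (an editorial sketch, not from the book; the 15-point standard deviation is the conventional IQ scale):

```python
import random

random.seed(42)
SD = 15  # conventional IQ standard deviation
class_a = [random.gauss(110, SD) for _ in range(30)]  # mean-110 class
class_b = [random.gauss(90, SD) for _ in range(30)]   # mean-90 class

mean_a = sum(class_a) / len(class_a)
mean_b = sum(class_b) / len(class_b)

# Group level: the class averages separate cleanly.
print(f"class averages: {mean_a:.1f} vs {mean_b:.1f}")

# Individual level: overlap is large. Count mean-90-class students who
# outscore the mean-110 class's average, and vice versa.
b_above_a_mean = sum(s > mean_a for s in class_b)
a_below_b_mean = sum(s < mean_b for s in class_a)
print(f"{b_above_a_mean} of 30 in the lower class beat the higher class's average")
print(f"{a_below_b_mean} of 30 in the higher class fall below the lower class's average")

# Chance that a randomly picked student from the mean-90 class outscores a
# randomly picked student from the mean-110 class (pairwise overlap):
pairs = [(a, b) for a in class_a for b in class_b]
overlap = sum(b > a for a, b in pairs) / len(pairs)
print(f"P(lower-class student outscores higher-class student) ~ {overlap:.2f}")
```

The two class averages come out clearly apart, yet a meaningful fraction of the lower-scoring class outscores the higher class’s average, which is why the measure is informative about classes but only weakly informative about any one student.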

