Quotulatiousness

March 5, 2017

Splitting GDP

Filed under: Economics — Nicholas @ 02:00

Published on 21 Nov 2015

In the last three videos, you learned the basics of GDP: how to compute it, and how to account for inflation and population increases. You also learned how real GDP per capita is useful as a quick measure for standard of living.

This time round, we’ll get into specifics on how GDP is analyzed and used to study a country’s economy. You’ll learn two approaches for analysis: national spending and factor income.

You’ll see GDP from both sides of the ledger: the spending and the receiving side.

With the national spending approach, you’ll see how gross domestic product is split into three categories: consumption goods bought by the public, investment goods bought by the public, and government purchases.

You’ll also learn how to avoid double counting in GDP calculation, by understanding how government purchases differ from government spending, in terms of GDP.

After that, you’ll learn the other approach for GDP splitting: factor income.

Here, you’ll view GDP as the total sum of employee compensation, rents, interest, and profit. You’ll understand how GDP looks from the other side — from the receiving end of the ledger, instead of the spending end.
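
To make the two approaches concrete, here’s a minimal sketch with invented round numbers (not real national accounts data) showing how the same total can be reached from either side of the ledger:

```python
# Illustrative only: invented round numbers, not real national accounts data.

# National spending approach: add up what is bought.
consumption = 700           # consumption goods bought by the public
investment = 200            # investment goods bought by the public
government_purchases = 100  # government purchases, not all government spending

gdp_spending = consumption + investment + government_purchases

# Factor income approach: add up what is received.
employee_compensation = 550
rents = 50
interest = 100
profit = 300

gdp_income = employee_compensation + rents + interest + profit

print(gdp_spending)  # 1000
print(gdp_income)    # 1000: the same total, seen from the receiving end
```

In practice the two measures are estimated from different source data and rarely match exactly, which is part of why economists find it useful to compare them.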

Finally, you’ll pay a visit to FRED (the Federal Reserve Economic Data website) again.

FRED will help you understand how GDP and GDI (the name for GDP when you use the factor income approach) are used by economists in times of economic downturn.

So, buckle in again. It’s time to hit the last stop on our GDP journey.

February 27, 2017

Real GDP Per Capita and the Standard of Living

Filed under: Economics — Nicholas @ 02:00

Published on 20 Nov 2015

They say what matters most in life are the things money can’t buy.

So far, we’ve been paying attention to a figure that’s intimately linked to the things money can buy. That figure is GDP, both nominal, and real. But before you write off GDP as strictly a measure of wealth, here’s something to think about.

Increases in real GDP per capita also correlate to improvements in those things money can’t buy.

Health. Happiness. Education.

What this means is, as real GDP per capita rises, a country also tends to get related benefits.

As the figure increases, people’s longevity tends to march upward along with it. Citizens tend to be better educated. Over time, growth in real GDP per capita also correlates to an increase in income for the country’s poorest citizens.

But before you think of GDP per capita as a panacea for measuring human progress, here’s a caveat.

GDP per capita, while useful, is not a perfect measure.

For example: GDP per capita is roughly the same in Nigeria, Pakistan, and Honduras. As such, you might think the three countries have about the same standard of living.

But a much larger portion of Nigeria’s population lives on less than $2/day than in the other two countries.

This isn’t a question of income, but of income distribution — a matter GDP per capita can’t fully address.
In a way, real GDP per capita is like a thermometer reading — it gives a quick look at temperature, but it doesn’t tell us everything.
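
To see why the average alone can mislead, here’s a minimal sketch with invented income figures (not real survey data): two populations with the same mean income but very different shares living on under $2/day.

```python
# Invented daily incomes (dollars per day) for two small hypothetical populations.
country_a = [1, 1, 1, 1, 1, 1, 1, 13, 40, 40]    # highly unequal
country_b = [3, 4, 5, 6, 8, 10, 12, 14, 18, 20]  # more evenly spread

def mean(incomes):
    return sum(incomes) / len(incomes)

def share_below(incomes, poverty_line=2):
    return sum(1 for x in incomes if x < poverty_line) / len(incomes)

print(mean(country_a), mean(country_b))                # 10.0 10.0: same average income
print(share_below(country_a), share_below(country_b))  # 0.7 0.0: very different poverty shares
```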

It’s far from the end-all, be-all of measuring our state of well-being. Still, it’s worth understanding how GDP per capita correlates to many of the other things we care about: our health, our happiness, and our education.

So join us in this video, as we work to understand how GDP per capita helps us measure a country’s standard of living. As we said: it’s not a perfect measure, but it is a useful one.

February 19, 2017

Nominal vs. Real GDP

Filed under: Economics — Nicholas @ 03:00

Published on 19 Nov 2015

“Are you better off today than you were 4 years ago? What about 40 years ago?”

These sorts of questions invite a different kind of query: what exactly do we mean, when we say “better off?” And more importantly, how do we know if we’re better off or not?

To those questions, there’s one figure that can shed at least a partial light: real GDP.

In the previous video, you learned about how to compute GDP. But what you learned to compute was a very particular kind: the nominal GDP, which isn’t adjusted for inflation, and doesn’t account for increases in the population.

A lack of these controls produces a kind of mirage.

For example, compare the US nominal GDP in 1950. It was roughly $320 billion. Pretty good, right? Now compare that with 2015’s nominal GDP: over $17 trillion.

That’s 55 times bigger than in 1950!

But wait. Prices have also increased since 1950. A loaf of bread, which used to cost a dime, now costs a couple dollars. Think back to how GDP is computed. Do you see how price increases impact GDP?

When prices go up, nominal GDP might go up, even if there hasn’t been any real growth in the production of goods and services. Not to mention, the US population has also increased since 1950.

As we said before: without proper controls in place, even if you know how to compute nominal GDP, all you get is a mirage.

So, how do you calculate real GDP? That’s what you’ll learn today.

In this video, we’ll walk you through the factors that go into the computation of real GDP.

We’ll show you how to distinguish between nominal GDP, which can balloon via rising prices, and real GDP—a figure built on the production of either more goods and services, or more valuable kinds of them. This way, you’ll learn to distinguish between inflation-driven GDP, and improvement-driven GDP.
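
As a rough sketch of the adjustment (the price index and population figures below are invented round numbers, not the official series you would pull from FRED):

```python
# Hypothetical figures: invented round numbers, not the official FRED series.
nominal_gdp_1950 = 320e9   # dollars, in 1950 prices
nominal_gdp_2015 = 17e12   # dollars, in 2015 prices

price_index_1950 = 15      # invented price level (2015 = 100)
price_index_2015 = 100

population_1950 = 150e6    # invented round numbers
population_2015 = 320e6

# Real GDP: value both years at the same (2015) prices.
real_gdp_1950 = nominal_gdp_1950 * (price_index_2015 / price_index_1950)
real_gdp_2015 = nominal_gdp_2015

print(nominal_gdp_2015 / nominal_gdp_1950)  # ~53x: the inflation-driven "mirage"
print(real_gdp_2015 / real_gdp_1950)        # ~8x: growth in actual production

# Real GDP per capita: also strip out population growth.
print(real_gdp_1950 / population_1950)      # ~14,200 per person
print(real_gdp_2015 / population_2015)      # ~53,100 per person
```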

Oh, and we’ll also show you a handy little tool named FRED — the Federal Reserve Economic Data website.

FRED will help you study how real GDP has changed over the years. It’ll show you what it looks like during healthy times, and during recessions. FRED will help you answer the question, “If prices hadn’t changed, how much would GDP truly have increased?”

FRED will also show you how to account for population, by helping you compute a key figure: real GDP per capita. Once you learn all this, not only will you see past the nominal GDP mirage, but you’ll also get an idea of how to answer our central question:

“Are we better off than we were all those years ago?”

February 6, 2017

Social media, big data, and (lots of) profanity

Filed under: Media, Politics, Technology — Nicholas @ 03:00

Scott Adams linked to this video (which is very NSFW), discussing how social media platforms can use their analytic tools to “shape” communications among their users:

January 29, 2017

QotD: Perverse incentives for journalists

Filed under: Health, Media, Quotations — Nicholas @ 01:00

Unfortunately, the incentives of both academic journals and the media mean that dubious research often gets more widely known than more carefully done studies, precisely because the shoddy statistics and wild outliers suggest something new and interesting about the world. If I tell you that more than half of all bankruptcies are caused by medical problems, you will be alarmed and wish to know more. If I show you more carefully done research suggesting that it is a real but comparatively modest problem, you will just be wondering what time Game of Thrones is on.

Megan McArdle, “The Myth of the Medical Bankruptcy”, Bloomberg View, 2017-01-17.

January 16, 2017

Why do millennials earn some 20% less than boomers did at the same stage of life?

Filed under: Economics — Nicholas @ 03:00

Tim Worstall explains why we shouldn’t be up in arms about the reported shortfall in millennial earnings compared to their parents’ generation at the same stage:

Part of the explanation here is that the millennials are better educated. We could take that to be some dig at what the snowflakes are learning in college these days but that’s not quite what I mean. Rather, they’re measuring the incomes of millennials in their late 20s. The four year college completion or graduation rate has gone up by some 50% since the boomers were similarly measured. Thus, among the boomers at that age there would be more people with a decade of work experience under their belt and fewer people in just the first few years of a professional career.

And here’s one of the things we know about blue collar and professional wages. Yes, the lifetime income as a professional is likely higher (that college wage premium and all that) but blue collar wages actually start out better and then don’t rise so much. Thus if we measure a society at the late 20s age and a society which has moved to a more professional wage structure we might well find just this result. The professionals making less at that age, but not over lifetimes, than the blue collar ones.

[…]

We’ve also got a wealth effect being demonstrated here. The millennials have lower net wealth than the boomers. Part of that is just happenstance of course. We’ve just had the mother of all recessions and housing wealth was the hardest hit part of it. And thus, given that housing equity is the major component of household wealth until the pension is fully topped up late in life, that wealth is obviously going to take a hit in the aftermath. There is another effect too, student debt. This is net wealth we’re talking about so if more of the generation is going to college more of the generation will have that negative wealth in the form of student debt. And don’t forget, it’s entirely possible to have negative net wealth here. For we don’t count the degree as having a wealth value but we do count the loans to pay for it as negative wealth.
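
To make Worstall’s arithmetic concrete, here’s a small sketch with invented numbers (stylized wage paths and a hypothetical graduate’s balance sheet, not real data):

```python
# Invented, stylized wage paths: not real earnings data.
def blue_collar_wage(age):
    # starts work at 18 on a decent wage that rises slowly
    return 35_000 + 300 * (age - 18)

def professional_wage(age):
    # starts work at 22, after college, on a lower wage that rises faster
    return 30_000 + 1_200 * (age - 22)

# Snapshot in the late 20s, the point at which the millennial comparison is made:
print(blue_collar_wage(28), professional_wage(28))        # 38000 vs 37200

# Whole working life, through 65: the professional path comes out ahead.
print(sum(blue_collar_wage(a) for a in range(18, 66)))    # ~2.02 million
print(sum(professional_wage(a) for a in range(22, 66)))   # ~2.46 million

# Net wealth for a hypothetical recent graduate: the degree isn't counted
# as an asset, but the loans that paid for it count as negative wealth.
savings, home_equity, student_debt = 8_000, 0, 35_000
print(savings + home_equity - student_debt)               # -27000
```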

January 1, 2017

Blog traffic in 2016

Filed under: Administrivia, Media — Nicholas @ 03:00

The annual statistics update on traffic to Quotulatiousness from January 1st through December 31st, 2016. Overall, the traffic dropped slightly from 2015, which in turn was down a bit from the peak traffic year of 2014:


Over eight and a half million hits. That’s a pretty good number for an obscure Canadian blog.


The final count of visitors to the blog will be about 2,500-3,500 higher, as I did the screen captures at around 10:30 in the morning.

October 31, 2016

Is the “Gold Standard” of peer review actually just Fool’s Gold?

Filed under: Environment, Government, Health, Science — Nicholas @ 01:00

Donna Laframboise points out that it’s difficult to govern based on scientific evidence if that evidence isn’t true:

We’re continually assured that government policies are grounded in evidence, whether it’s an anti-bullying programme in Finland, an alcohol awareness initiative in Texas or climate change responses around the globe. Science itself, we’re told, is guiding our footsteps.

There’s just one problem: science is in deep trouble. Last year, Richard Horton, editor of the Lancet, referred to fears that ‘much of the scientific literature, perhaps half, may simply be untrue’ and that ‘science has taken a turn toward darkness.’

It’s a worrying thought. Government policies can’t be considered evidence-based if the evidence on which they depend hasn’t been independently verified, yet the vast majority of academic research is never put to this test. Instead, something called peer review takes place. When a research paper is submitted, journals invite a couple of people to evaluate it. Known as referees, these individuals recommend that the paper be published, modified, or rejected.

If it’s true that one gets what one pays for, let me point out that referees typically work for no payment. They lack both the time and the resources to perform anything other than a cursory overview. Nothing like an audit occurs. No one examines the raw data for accuracy or the computer code for errors. Peer review doesn’t guarantee that proper statistical analyses were employed, or that lab equipment was used properly. The peer review process itself is full of serious flaws, yet is treated as if it’s the handmaiden of objective truth.

And it shows. Referees at the most prestigious of journals have given the green light to research that was later found to be wholly fraudulent. Conversely, they’ve scoffed at work that went on to win Nobel prizes. Richard Smith, a former editor of the British Medical Journal, describes peer review as a roulette wheel, a lottery and a black box. He points out that an extensive body of research finds scant evidence that this vetting process accomplishes much at all. On the other hand, a mountain of scholarship has identified profound deficiencies.

October 22, 2016

Polls, voting trends, and turnouts

Filed under: Politics, USA — Nicholas @ 02:00

Jay Currie looks at the US election polling:

Polls tend to work by adjusting their samples to reflect demographics and an estimate of a given demographic’s propensity to actually vote. On a toy model basis, you can think of it as a layer cake with each layer representing an age cohort. So, for example, if you look at younger voters 18-29 you might find that 90% of them support Hilly and 10% Trump. If there are 100 of these voters in your sample of 500 a simple projection would suggest 90 votes for Hilly, 10 for Trump. The problem is that it is difficult to know how many of those younger voters will actually go out and vote. As a rule of thumb the older you are the more likely you are to vote so now you have to estimate voting propensity.

There are two ways to get a sense of voting propensity: ask the people in your sample or look at the behaviour of people the same age but in the last couple of elections.

And now the landscape begins to shift. In 2008, nearly 50% of voters aged 18-29 voted. In 2012, 40% voted. In both elections, the youth vote was heavily pro-Obama. If you were designing a poll at this point, what sort of weighting would make sense for youth voters? Making that call will change the landscape your poll will reflect. If you want your poll to tilt Hilly you can believe that the prospect of the first woman President of the United States will be as motivating as Obama was and assign a voting propensity of 40-50%; alternatively, if you don’t see many signs of Hillary catching fire among younger voters, you can set the propensity number at 30% and create a tie or a slight Trump lead.

(The results of this are even more dramatic if you look at the black vote and turnout. In 2008 black turnout was 69.1%, 2012, 67.4% with Obama taking well over 90%. Will the nice white lady achieve anything like these numbers?)

On the other side of the ledger, the turnouts of the less educated have been low for the last two elections: 52% in 2008 and a little less than 50% in 2012. There is room for improvement. Now, as any educated person will tell you, often at length, Trump draws a lot of support in the less educated cohorts. But that support is easily discounted because these people (the deplorables and their ilk) barely show up to vote.

Build your model on the basis that lower education people’s participation in 2016 will be similar to 2008 and 2012 and you will produce a result in line with the 538.com consensus view. But if you think that the tens of thousands of people who show up for Trump’s rallies might just show up to vote, you will have a model tending towards the LA Times view of things.
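
Currie’s toy model is easy to play with. Here’s a minimal sketch (the sample counts, the candidate splits for the older cohorts, and the turnout assumptions are invented, loosely following the numbers in the excerpt) showing how the same raw sample yields different projections depending on the turnout propensity assigned to each cohort:

```python
# Toy poll model: invented sample, candidate splits, and turnout assumptions,
# loosely following the excerpt. Not a real poll.
sample = {
    # cohort: (respondents, share for Clinton, share for Trump)
    "18-29": (100, 0.90, 0.10),
    "30-64": (250, 0.46, 0.54),
    "65+":   (150, 0.44, 0.56),
}

def project(turnout_propensity):
    clinton = trump = 0.0
    for cohort, (n, c_share, t_share) in sample.items():
        expected_voters = n * turnout_propensity[cohort]
        clinton += expected_voters * c_share
        trump += expected_voters * t_share
    total = clinton + trump
    return round(clinton / total, 3), round(trump / total, 3)

# Obama-like youth turnout (~50%) tilts the projection toward Clinton...
print(project({"18-29": 0.50, "30-64": 0.60, "65+": 0.70}))  # (0.525, 0.475)

# ...while a 30% youth propensity, from the same raw sample, is essentially a tie.
print(project({"18-29": 0.30, "30-64": 0.60, "65+": 0.70}))  # (0.499, 0.501)
```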

October 1, 2016

Here’s some fantastic news you’re not seeing in the headlines

Filed under: Economics — Nicholas @ 02:00

The same world poverty data, presented as absolute or relative levels of poverty:

World poverty in absolute numbers

World poverty in relative numbers

H/T to Rob Fisher at Samizdata for the link.

June 21, 2016

World War 1 in Numbers I THE GREAT WAR Special

Filed under: Cancon, Europe, History, Military, WW1 — Nicholas @ 04:00

Published on 20 Jun 2016

Special thanks to Karim Theilgaard for composing the new theme for our brand new intro!

We are approaching the 100th regular episode and decided to surprise you with an extra special episode about the staggering numbers of World War 1.

May 26, 2016

Eighty percent of Americans surveyed favour banning things they know nothing about

Filed under: Food, Health, Media, Science, USA — Nicholas @ 03:00

Don’t get too smug, fellow Canuckistanis, as I suspect the numbers might be just as bad if Canadians were surveyed in this way:

You might have heard that Americans overwhelmingly favor mandatory labeling for foods containing genetically modified ingredients. That’s true, according to a new study: 84 percent of respondents said they support the labels.

Survey of GMO labelling fans

But a nearly identical percentage — 80 percent — in the same survey said they’d also like to see labels on food containing DNA.

Survey of DNA labelling fans

The study, published in the Federation of American Societies for Experimental Biology Journal last week, also found that 33 percent of respondents thought that non-GM tomatoes “did not contain genes” and 32 percent thought that “vegetables did not have DNA.” So there’s that.

University of Florida food economist Brandon R. McFadden and his co-author Jayson L. Lusk surveyed 1,000 American consumers and discovered [PDF] that “consumers think they know more than they actually do about GM food.” In fact, the authors say, “the findings question the usefulness of results from opinion polls as motivation for public policy surrounding GM food.”

My summary for laymen: When it comes to genetically modified food, people don’t know much, they don’t know what they don’t know, and they sure as heck aren’t letting that stop them from having strong opinions.

April 24, 2016

The “secret” of Indian food

Filed under: Food, India, Science — Nicholas @ 02:00

In an article in the Washington Post last year, Roberto Ferdman summarized the findings of a statistical study explaining why the flavours in Indian foods differ so much from other world cuisines:

Indian food, with its hodgepodge of ingredients and intoxicating aromas, is coveted around the world. The labor-intensive cuisine and its mix of spices is more often than not a revelation for those who sit down to eat it for the first time. Heavy doses of cardamom, cayenne, tamarind and other flavors can overwhelm an unfamiliar palate. Together, they help form the pillars of what tastes so good to so many people.

But behind the appeal of Indian food — what makes it so novel and so delicious — is also a stranger and subtler truth. In a large new analysis of more than 2,000 popular recipes, data scientists have discovered perhaps the key reason why Indian food tastes so unique: It does something radical with flavors, something very different from what we tend to do in the United States and the rest of Western culture. And it does it at the molecular level.

[…]

Chefs in the West like to make dishes with ingredients that have overlapping flavors. But not all cuisines adhere to the same rule. Many Asian cuisines have been shown to belie the trend by favoring dishes with ingredients that don’t overlap in flavor. And Indian food, in particular, is one of the most powerful counterexamples.

Researchers at the Indian Institute of Technology in Jodhpur crunched data on several thousand recipes from a popular online recipe site called TarlaDalal.com. They broke each dish down to its ingredients, and then compared how often and heavily ingredients share flavor compounds.

The answer? Not too often.
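
Here’s a minimal sketch of the kind of pairwise comparison the researchers describe (the ingredient-to-compound mapping below is invented for illustration; the actual study used a large flavour compound database and thousands of TarlaDalal.com recipes):

```python
from itertools import combinations

# Invented, tiny ingredient-to-flavour-compound mapping. Illustrative only.
flavor_compounds = {
    "tomato":   {"c1", "c2", "c3"},
    "onion":    {"c3", "c4"},
    "cardamom": {"c5", "c6"},
    "cayenne":  {"c7"},
    "tamarind": {"c8", "c9"},
}

def mean_shared_compounds(recipe):
    """Average number of flavour compounds shared by each pair of ingredients."""
    pairs = list(combinations(recipe, 2))
    shared = [len(flavor_compounds[a] & flavor_compounds[b]) for a, b in pairs]
    return sum(shared) / len(pairs)

# Ingredients that overlap in flavour vs. a set that barely overlaps at all:
print(mean_shared_compounds(["tomato", "onion"]))                  # 1.0
print(mean_shared_compounds(["cardamom", "cayenne", "tamarind"]))  # 0.0
```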

November 29, 2015

Does Teddy Bridgewater hold the ball too long?

Filed under: Football — Nicholas @ 03:00

Over at Vikings Territory, Brett Anderson endangers his health, eyesight, and even his sanity by exhaustively tracking, timing, and analyzing every throw by Teddy Bridgewater in last week’s game against the Green Bay Packers. A common knock on Bridgewater is that he’s holding the ball too long and therefore missing pass opportunities and making himself more vulnerable to being sacked. It’s a long article, but you can skip right to the end to get the facts distilled:

What The Film Shows

It became clear pretty quickly that plays with larger TBH [time ball held] had a lot happening completely out of Bridgewater’s control. There were only a couple of plays where it clearly looked like Bridgewater held the ball too long while there were options downfield to target or that he hesitated to pull the trigger on guys that were open. And consistently, there were three issues I kept noticing.

  1. Receiver route depth – The Vikings receivers run a ton of late developing routes. I don’t have any numbers to back that up – we’re talking strictly film review now. But on plays run out of the shotgun with 5-step drops or plays with even longer 7-step drops, by the time Bridgewater is being pressured (which happens on about 2 of every 3 plays), his receivers have not finished their routes. And I know that just because they haven’t finished the route doesn’t mean Bridgewater can’t anticipate where they are going to be but… We’re talking not really even close to finishing their routes. It seems that a lot of the Vikings play designs consist of everybody running deep fade routes to create room underneath for someone on a short dig or to check down to a running back in the flat. So, if this player underneath is for any reason covered (or if the Vikings find themselves in long down and distance situations where an underneath route isn’t going to cut it, which… surprise, happens quite often), Bridgewater’s other receiver options are midway through their route 20 yards downfield. What’s worse? Not only are these routes taking forever to develop and typically only materializing once Bridgewater has been sacked or scampered away to save himself, but also…
  2. Receiver coverage – The Vikings receivers are typically not open. It was pretty striking how often on plays with higher TBH receivers have very little separation. (Make sure to take a look through the frame stills linked in the data table above. I tried to make sure I provided a capture for plays with higher TBH or plays that resulted in a negative outcome. Red circles obviously indicate receivers who are not open while yellow typically indicates receivers who are.) The Packers consistently had 7 defenders in coverage resulting in multiple occasions where multiple receivers are double teamed with safety help over the top. But even in plays with one on one coverage, the Vikings receivers are still having a difficult time finding space. So now, we have a situation with Bridgewater where we have these deep drops where not only are receivers not finished with their deep routes but they are also blanket covered. And why are teams able to drop so many players into coverage creating risky situations for a quarterback who is consistently risk averse? Because…
  3. Poor offensive line play – The Vikings offensive line is not good. And it may be worse than you think. It’s no secret by this point that the Vikings offensive line had one of its worst showings of the year against the Packers. More often than not, simply by rushing four defenders, Green Bay was able to get pressure on Bridgewater within 2-3 seconds. This is a quick sack time. And more often than not, Bridgewater is having to evade this pressure by any means necessary to either give his receivers time to finish their routes or give them time to get open. (Or more frequently – both.) As a result of this, what we saw on multiple occasions against the Packers is Bridgewater being pressured quickly, him scrambling from the pocket and dancing around while stiff-arming a defender once or twice and ultimately throwing the ball out of bounds or taking a sack. Are you starting to see what the problem is here?

Conclusion

Bridgewater is not holding the ball for a length of time that should reflect poorly on his play. The data shows that Bridgewater is about average when looking just strictly at the numbers. The tape shows a quarterback who really doesn’t have a lot of options other than holding on to the ball. When Bridgewater is presented with a quick 1- or 3-step drop and his receivers run routes with lengths complementary to the length of his drop, it typically results in Bridgewater finding a relatively open receiver, making a quick decision and getting the ball there accurately. When Bridgewater is faced with longer developing plays behind an offensive line that’s a sieve and receivers who are running lengthy routes while closely covered, he tries to make a play himself. Sure, there were a couple of plays during the Packers game where it may have been a better decision for Bridgewater to take a sack when initially pressured and save the yards he lost by scrambling backwards. However, it’s difficult to chastise him for trying to create plays that aren’t there when it doesn’t work, while applauding him when his evasiveness, deadly stiff arm and surprisingly effective spin move result in a big play.

Bridgewater has been far from perfect this season. But after this extensive exercise, I can comfortably say that the amount of time Bridgewater is holding on to the ball should not negatively reflect on his performance considering the above mentioned external factors.

November 27, 2015

Wealth, inequality, and billionaires

Filed under: Economics, Government, Politics — Nicholas @ 04:00

Several months ago, the Washington Post reported on a new study of wealth and inequality that tracked how many billionaires got rich through competition in the market and how many got rich through political “connections”:

The researchers found that wealth inequality was growing over time: Wealth inequality increased in 17 of the 23 countries they measured between 1987 and 2002, and fell in only six, Bagchi says. They also found that their measure of wealth inequality corresponded with a negative effect on economic growth. In other words, the higher the proportion of billionaire wealth in a country, the slower that country’s growth. In contrast, they found that income inequality and poverty had little effect on growth.

The most fascinating finding came from the next step in their research, when they looked at the connection between wealth, growth and political connections.

The researchers argue that past studies have looked at the level of inequality in a country, but not why inequality occurs — whether it’s a product of structural inequality, like political power or racism, or simply a product of some people or companies faring better than others in the market. For example, Indonesia and the United Kingdom actually score similarly on a common measure of inequality called the Gini coefficient, say the authors. Yet clearly the political and business environments in those countries are very different.

So Bagchi and Svejnar carefully went through the lists of all the Forbes billionaires, and divided them into those who had acquired their wealth due to political connections, and those who had not. This is kind of a slippery slope — almost all billionaires have probably benefited from government connections at one time or another. But the researchers used a very conservative standard for classifying people as politically connected, only assigning billionaires to this group when it was clear that their wealth was a product of government connections. Just benefiting from a government that was pro-business, like those in Singapore and Hong Kong, wasn’t enough. Rather, the researchers were looking for a situation like Indonesia under Suharto, where political connections were usually needed to secure import licenses, or Russia in the mid-1990s, when some state employees made fortunes overnight as the state privatized assets.

The researchers found that some countries had a much higher proportion of billionaire wealth that was due to political connections than others did. As the graph below, which ranks only countries that appeared in all four of the Forbes billionaire lists they analyzed, shows, Colombia, India, Australia and Indonesia ranked high on the list, while the U.S. and U.K. ranked very low.

Wealth and political connections

Looking at all the data, the researchers found that Russia, Argentina, Colombia, Malaysia, India, Australia, Indonesia, Thailand, South Korea and Italy had relatively more politically connected wealth. Hong Kong, the Netherlands, Singapore, Sweden, Switzerland and the U.K. all had zero politically connected billionaires. The U.S. also had very low levels of politically connected wealth inequality, falling just outside the top 10 at number 11.

When the researchers compared these figures to economic growth, the findings were clear: These politically connected billionaires weighed on economic growth. In fact, wealth inequality that came from political connections was responsible for nearly all the negative effect on economic growth that the researchers had observed from wealth inequality overall. Wealth inequality that wasn’t due to political connections, income inequality and poverty all had little effect on growth.
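
A stripped-down sketch of the method (the countries, wealth figures, and growth rates below are invented placeholders, not the actual Bagchi and Svejnar data): classify each billionaire, compute each country’s politically connected share of billionaire wealth, and see how that share lines up with growth.

```python
# Invented placeholder data, not the actual Bagchi and Svejnar dataset.
billionaires = [
    # (country, wealth in $bn, politically connected?)
    ("Atlantis", 40, True),
    ("Atlantis", 10, False),
    ("Oceania",   5, False),
    ("Oceania",  25, False),
]
growth_rate = {"Atlantis": 1.5, "Oceania": 3.2}  # invented GDP growth, percent

def connected_share(country):
    total = sum(w for c, w, _ in billionaires if c == country)
    connected = sum(w for c, w, conn in billionaires if c == country and conn)
    return connected / total

for country, growth in growth_rate.items():
    print(country, round(connected_share(country), 2), growth)
# Atlantis 0.8 1.5
# Oceania 0.0 3.2
# The study's pattern: a higher politically connected share of billionaire wealth
# goes with slower growth; unconnected billionaire wealth showed little effect.
```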
