Published on 10 Apr 2014
“It’s kind of a weird thing that’s happened with American society — this idea that you have to have a college degree to be a respectable member of the middle class,” says Glenn Reynolds, professor of law at the University of Tennessee and purveyor of the popular Instapundit blog. Reynolds’ latest work, The New School: How the Information Age Will Save American Education From Itself, looks at the higher education bubble and how parents, students, and educators can remake the education system.
Reynolds sat down with Reason TV‘s Alexis Garcia to discuss why Americans are spending more for a college education and how students are responding to increasing tuition costs. “Given how expensive it is to go to college, there has to be a return sufficient to make it worth the time and especially the money,” Reynolds states. “You’re seeing declining enrollment in some schools and you’re seeing much more price resistance on the part of both parents and students.”
The discussion also includes Reynolds’ take on school choice, the upcoming elections, the current state of the blogosphere, and whether or not both political parties are necessary. Nearly a decade after Reynolds published An Army of Davids: How Markets and Technology Empower Ordinary People to Beat Big Media, Big Government, and Other Goliaths, the blogfather still remains optimistic about technology’s ability to empower the individual and inspire grassroots movements.
April 10, 2014
ESR linked to an interesting discussion of the spread of chile peppers and other exotic spices from the Roman empire onwards:
Can you imagine a world without salsa? Or Tabasco sauce, harissa, sriracha, paprika or chili powder?
I asked myself that question after I found a 700-year-old recipe for one of my favorite foods, merguez — North Africa’s beloved lamb sausage that is positively crimson with chiles. The medieval version was softly seasoned with such warm spices as black pepper, coriander and cinnamon instead of the brash heat of capsicum chile peppers — the signature flavor of the dish today.
The cuisines of China, Indonesia, India, Bhutan, Korea, Hungary and much of Africa and the Middle East would be radically different from what they are today if chiles hadn’t returned across the ocean with Columbus. Barely 50 years after the discovery of the New World, chiles were warming much of the Old World. How did they spread so far, so fast? The answers may surprise you — they did me!
I learned that Mamluk and Ottoman Muslims were nearly as responsible for the discovery of New World peppers as Columbus — but I’m getting ahead of myself.
The global pepper saga begins in the first millennium BCE with the combustible career of another pepper — black pepper (Piper nigrum) and its cousins, Indian long pepper and Javanese cubeb. Although Piper nigrum was first grown on the Malabar Coast in India, the taste for it enflamed the ancient world: No matter what the cost — and it was very high — people were mad for pepper. The Romans, for example, first tasted it in Egypt, and the demand for it drove them to sail to India to buy it. In the first century, Pliny complained about the cost: “There is no year in which India does not drain the Roman Empire of fifty million sesterces.”
In one sense, the whole global system of trade — the sea and land routes throughout the known world that spread culture and cuisine through commerce — was engaged with the appetite for pepper, in its growth, distribution and consumption.
ESR said in his brief G+ posting:
More about the early and very rapid spread of capsicum peppers in the Old World than I’ve ever seen in one place before.
I also didn’t know they were such a nutritional boon. It appears one reason they became so entrenched is they’re a good source of Vitamin C in peasant cuisines centered around a starch like rice. My thought is that moderns may tend to miss this point because we have so much better access to citrus fruits and other very high-quality C sources.
The bit about paprika having been introduced to Hungary by the Ottomans was also particularly interesting to me. This was less than 30 years after they had reached the Old World.
April 9, 2014
It’s a common misunderstanding (especially with people who don’t know what laissez faire actually means):
For years, Republicans benefited from economic growth. So did pretty much everyone else, of course. But I have something specific in mind. Politically, when the economy is booming — or merely improving at a satisfactory clip — the distinction between being pro-business and pro-market is blurry. The distinction is also fuzzy when the economy is shrinking or imploding.
But when the economy is simply limping along — not good, not disastrous — like it is now, the line is easier to see. And GOP politicians typically don’t want to admit they see it.
Just to clarify, the difference between being pro-business and pro-market is categorical. A politician who is a “friend of business” is exactly that, a guy who does favors for his friends. A politician who is pro-market is a referee who will refuse to help protect his friends (or anyone else) from competition unless the competitors have broken the rules. The friend of business supports industry-specific or even business-specific loans, grants, tariffs, or tax breaks. The pro-market referee opposes special treatment for anyone.
GOP politicians can’t have it both ways anymore. An economic system that simply doles out favors to established stakeholders becomes less dynamic and makes job growth less likely. (Most jobs are created by new businesses.) Politically, the longer we’re in a “new normal” of lousy growth, the more the focus of politics turns to wealth redistribution. That’s bad for the country and just awful politics for Republicans. In that environment, being the party of less — less entitlement spending, less redistribution — is a losing proposition.
Also, for the first time in years, there’s an organized — or mostly organized — grassroots constituency for the market. Historically, the advantage of the pro-business crowd is that its members pick up the phone and call when politicians shaft them. The market, meanwhile, was like a bad Jewish son; it never called and never wrote. Now, there’s an infrastructure of tea-party-affiliated and other free-market groups forcing Republicans to stop fudging.
A big test will be on the Export-Import Bank, which is up for reauthorization this year. A bank in name only, the taxpayer-backed agency rewards big businesses in the name of maximizing exports that often don’t need the help (hence its nickname, “Boeing’s Bank”). In 2008, even then-senator Barack Obama said it was “little more than a fund for corporate welfare.” The bank, however, has thrived on Obama’s watch. It’s even subsidizing the sale of private jets. Remember when Obama hated tax breaks for corporate jets?
April 7, 2014
In the New York Times, Gary Marcus and Ernest Davis examine the big claims being made for the big data revolution:
Is big data really all it’s cracked up to be? There is no doubt that big data is a valuable tool that has already had a critical impact in certain areas. For instance, almost every successful artificial intelligence computer program in the last 20 years, from Google’s search engine to the I.B.M. Jeopardy! champion Watson, has involved the substantial crunching of large bodies of data. But precisely because of its newfound popularity and growing use, we need to be levelheaded about what big data can — and can’t — do.
The first thing to note is that although big data is very good at detecting correlations, especially subtle correlations that an analysis of smaller data sets might miss, it never tells us which correlations are meaningful. A big data analysis might reveal, for instance, that from 2006 to 2011 the United States murder rate was well correlated with the market share of Internet Explorer: Both went down sharply. But it’s hard to imagine there is any causal relationship between the two. Likewise, from 1998 to 2007 the number of new cases of autism diagnosed was extremely well correlated with sales of organic food (both went up sharply), but identifying the correlation won’t by itself tell us whether diet has anything to do with autism.
Second, big data can work well as an adjunct to scientific inquiry but rarely succeeds as a wholesale replacement. Molecular biologists, for example, would very much like to be able to infer the three-dimensional structure of proteins from their underlying DNA sequence, and scientists working on the problem use big data as one tool among many. But no scientist thinks you can solve this problem by crunching data alone, no matter how powerful the statistical analysis; you will always need to start with an analysis that relies on an understanding of physics and biochemistry.
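The correlation-without-causation point in the excerpt is easy to demonstrate numerically. The sketch below computes a Pearson correlation between two invented declining series standing in for the murder rate and Internet Explorer's market share (all figures are hypothetical, chosen only to trend downward together over 2006–2011):

```python
# Two unrelated series that both decline over 2006-2011, illustrating
# how a strong correlation can arise with no causal link (synthetic data).
years = list(range(2006, 2012))
murder_rate = [5.8, 5.7, 5.4, 5.0, 4.8, 4.7]       # hypothetical, per 100k
ie_share    = [80.0, 75.0, 68.0, 60.0, 52.0, 45.0] # hypothetical, percent

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(murder_rate, ie_share)
print(f"r = {r:.3f}")  # strongly positive: both series fall together
```

Any two series that merely trend in the same direction will score a high r; the statistic alone cannot distinguish a causal link from a shared trend, which is exactly the authors' point.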
April 3, 2014
The publisher sent a copy of The Zero Marginal Cost Society along with a note that Rifkin himself wanted ESR to receive a copy (because Rifkin thinks ESR is a good representative of some of the concepts in the book). ESR isn’t impressed:
In this book, Rifkin is fascinated by the phenomenon of goods for which the marginal cost of production is zero, or so close to zero that it can be ignored. All of the present-day examples of these he points at are information goods — software, music, visual art, novels. He joins this to the overarching obsession of all his books, which are variations on a theme of “Let us write an epitaph for capitalism”.
In doing so, Rifkin effectively ignores what capitalists do and what capitalism actually is. “Capital” is wealth paying for setup costs. Even for pure information goods those costs can be quite high. Music is a good example; it has zero marginal cost to reproduce, but the first copy is expensive. Musicians must own expensive instruments, be paid to perform, and require other capital goods such as recording studios. If those setup costs are not reliably priced into the final good, production of music will not remain economically viable.
Rifkin cites me in his book, but it is evident that he almost completely misunderstood my arguments in two different ways, both of which bear on the premises of his book.
First, software has a marginal cost of production that is effectively zero, but that’s true of all software rather than just open source. What makes open source economically viable is the strength of secondary markets in support and related services. Most other kinds of information goods don’t have these. Thus, the economics favoring open source in software are not universal even in pure information goods.
Second, even in software — with those strong secondary markets — open-source development relies on the capital goods of software production being cheap. When computers were expensive, the economics of mass industrialization and its centralized management structures ruled them. Rifkin acknowledges that this is true of a wide variety of goods, but never actually grapples with the question of how to pull capital costs of those other goods down to the point where they no longer dominate marginal costs.
There are two other, much larger, holes below the waterline of Rifkin’s thesis. One is that atoms are heavy. The other is that human attention doesn’t get cheaper as you buy more of it. In fact, the opposite tends to be true — which is exactly why capitalists can make a lot of money by substituting capital goods for labor.
These are very stubborn cost drivers. They’re the reason Rifkin’s breathless hopes for 3-D printing will not be fulfilled. Because 3-D printers require feedstock, the marginal cost of producing goods with them has a floor well above zero. That ABS plastic, or whatever, has to be produced. Then it has to be moved to where the printer is. Then somebody has to operate the printer. Then the finished good has to be moved to the point of use. None of these operations has a cost that is driven to zero, or near zero at scale. 3-D printing can increase efficiency by outcompeting some kinds of mass production, but it can’t make production costs go away.
April 2, 2014
In the Harvard Business Review, Paul Oyer explains some of the changes in the market for adopting dogs over the last decade or so:
Lots of people are looking for a canine companion to brighten their lives, and there are always plenty of dogs “on the market” at shelters or through breeders. Yet, too many dogs don’t find homes, and they often pay the ultimate price (especially if they are in Sochi). So what stands in the way of dogs and owners finding one another?
For starters, the supply and demand at any given time in any given area is typically thin and random. Thankfully, online pet boards have thickened the market by enabling potential adopters, especially those who want to rescue a dog, to find a broader range of options rather than just settling for what the shelter happens to have the day they go there. Sites such as petfinder.com lead to many adoptions, many of which cross significant geographic territory.
A second problem — and this is much harder to solve than the thin-market problem — is that there are a lot of duds on both sides of the dog adoption market, and it’s hard to tell exactly who they are. A breeder could describe a bad, Cujo-like dog as “good with children” while potential owners like Michael Vick’s former associates would surely claim they would give a dog a safe home.
Shelters address this issue by thoroughly screening would-be adopters (I have always found it ironic that they give you your baby to take home after it is born with no questions asked, but you have to jump through a lot of hoops to adopt a puppy or kitten that will otherwise be euthanized.) But there is no evidence that these screenings are very effective.
March 31, 2014
Virginia Postrel has an interesting take on the current brouhaha over Facebook’s acquisition of formerly crowdfunded Oculus:
Crowdfunding sites such as Kickstarter and Indiegogo represent a classic entrepreneurial phenomenon: Once you roll out your great idea, customers use it in ways you didn’t imagine, and you wind up in a different business than you expected.
Kickstarter’s founders wanted to help artists raise money. Indiegogo co-founder Danae Ringelmann pictured aiding capital-strapped small-business owners like her parents. Neither intended their site to act as a test market. But, as the rags-to-riches story of virtual-reality firm Oculus shows, that’s what they have become.
“It’s a way to access capital, but what it’s also become is a market-testing and validation platform,” Ringelmann told the Dent the Future conference on Tuesday. “What we’re doing is creating pre-markets for ideas,” she said.
Now that Facebook is buying Oculus for $2 billion, critics are reverting to the original assumption that crowdfunding is primarily about raising money. “Talking people out of $2.4 million in exchange for zero percent equity is a perfectly legal scam,” wrote my colleague Barry Ritholtz.
But it’s not a scam at all. It’s market research. In effect, customers placed pre-orders and received early products; why are they griping that they don’t own a part of the business?
The backlash is largely Kickstarter’s fault. It may not be running a scam, but it definitely sends mixed messages. Unlike Indiegogo, which prides itself on operating a neutral platform giving anybody’s idea a market test, Kickstarter hasn’t embraced its de facto transformation. It strictly curates the campaigns it hosts and, although it makes its biggest profits on technology products, it still exudes an artistic sensibility that isn’t entirely comfortable with disruptive technology or large enterprises. It still talks as though it’s PBS. “Kickstarter is not a store,” it declares.
March 30, 2014
Well, right in this particular analysis, anyway:
Which is where we can bring Karl Marx into the discussion. Wrong as he was on many points he was at times a perceptive analyst. And he noted that what determined the wages of the workers wasn’t some calculation of a “fair wage”, nor some true value of their production (although he had much to say on both points), but in a market economy the wages that were paid were a reflection of what other people were willing to pay for access to that labour.
If, for example, there were a large number of unemployed (that “reserve army of the unemployed”) then a capitalist didn’t have to raise the wages of his workers however far productivity grew. If anyone tried to capture a bit more of the value being created, say through a strike or other activity, then the capitalist could simply fire them and bring in some of those unemployed. No profits needed to be shared with the workers. However, when we get to a situation of full employment then the dynamic changes. It’s not possible to simply hire and fire to keep wages low. For the other capitalists are competing for access to that labour that makes those profits. The higher profits go the higher all capitalists will be willing to bid up wages to continue making some profit at all.
The obverse of this is if the employers collude in order to artificially suppress the wages of the workers, which is why that case involving Apple, Google and so on is going to trial. That’s monopoly capitalism, that is, and we really don’t like it at all.
But in this case with Yahoo trying to challenge Google’s YouTube, it will be the workers who benefit. For the two companies are vying with each other for access to the content being made and thus the profits that can be made. Of whatever revenue can be made a larger portion will go to the producers of the content and a smaller one to the owners of the platforms. Which is excellent, this is exactly what we want to happen.
March 29, 2014
Tim Worstall looks at the occasional claim that if Lehman Brothers had actually been “Lehman Sisters” (that is, an organization with much higher female participation), then they would have taken on less financial risk and therefore not have been the trigger to the financial meltdown:
… there’s very definitely an element of truth to this: but the final story is rather different from what is commonly assumed. It’s only if financial organisations are completely female, or completely male, that risk is reduced. Adding more of either gender to an organisation actually increases risk.
Mixed gender environments increase risk tolerance in both men and women. So adding women to an all-male institution likely increases the risk that organisation will tolerate. And so does adding men to an all-female one. Not just because the men sway the average but because both men and women become more risk tolerant in the presence of the other sex.
Thus it would be correct to say that Lehman Sisters would have been less risk tolerant than Lehman Brothers. But the reality of what there actually was at the firm was that it was a mixed gender environment and so more risk tolerant than either of the single gender hypotheticals would have been. It is gender diversity itself that increases risk tolerance, reduces risk aversion.
Which leads to an interesting thought. Everyone generally agrees that banking as a whole has become more risk tolerant, and thus more fragile, in recent decades. These are also the decades when women have made significant inroads into that area of professional life. Which leaves us with something of a conundrum. We generally believe that fragility in the banking system is a bad idea. We also all generally believe that gender equality is a good idea. But that gender equality of women going into finance and banking seems to increase the fragility of the system given that rise in risk tolerance from a mixed gender environment.
March 28, 2014
James Delingpole agrees that the most recent WHO report on deaths due to pollution is shocking, but points out where the press release does a sleight-of-hand move:
Even if you take the WHO’s estimates with a huge pinch of salt — and you probably should — that doesn’t mean the pollution problem in some parts of the world isn’t deadly serious. During the 20th century, around 260 million people are reckoned to have died from indoor pollution in the developing world: that’s roughly twice as many as were killed in all the century’s wars.
Here, though, is the point where the WHO loses all credibility on the issue.
“Excessive air pollution is often a by-product of unsustainable policies in sectors such as transport, energy, waste management and industry. In most cases, healthier strategies will also be more economical in the long term due to health-care cost savings as well as climate gains,” Carlos Dora, WHO Coordinator for Public Health, Environmental and Social Determinants of Health said.
“WHO and health sectors have a unique role in translating scientific evidence on air pollution into policies that can deliver impact and improvements that will save lives,” Dr. Dora added.
See what Dora just did there? He used the shock value of the WHO’s pollution death figures to slip three Big Lies under the impressionable reader’s radar.
First, he’s trying to make out that outdoor pollution is as big a problem as indoor pollution. It isn’t: nowhere near. Many of the deaths the WHO links to the former are very likely the result of the latter (cooking and heating in poorly ventilated rooms using dung, wood, and coal) which, by nature, is much more intense.
Secondly, he’s implying that economic development is to blame. In fact, it’s economic development we have to thank for the fact that there are so many fewer pollution deaths than there used to be. As Bjorn Lomborg has noted, over the 20th century as poverty receded and clean fuels got cheaper, the risk of dying of pollution decreased eight-fold. In 1900, air pollution cost 23 per cent of global GDP; today it is 6 per cent, and by 2050 it will be 4 per cent.
But the third and by far the biggest of the lies is the implication that the UN’s policies on climate change are helping to alleviate the problem.
March 26, 2014
Everyone seems to want to raise the minimum wage right now (well, everyone in the media certainly), but it might backfire spectacularly on the very people it’s supposed to help:
It’s become commonplace for computers to replace American workers — think about those on an assembly line and in toll booths — but two University of Oxford professors have come to a surprising conclusion: Waitresses, fast-food workers and others earning at or near the minimum wage should also be on alert.
President Obama’s proposal to increase the federal minimum wage from $7.25 to $10.10 per hour could make it worthwhile for employers to adopt emerging technologies to do the work of their low-wage workers. But can a robot really do a janitor’s job? Can software fully replace a fast-food worker? Economists have long considered these low-skilled, non-routine jobs as less vulnerable to technological replacement, but until now, quantitative estimates of a job’s vulnerability have been missing from the debate.
Based on a 2013 paper by Carl Benedikt Frey and Michael A. Osborne of Oxford [PDF], occupations in the U.S. that pay at or near the minimum wage — that’s about one of every six workers in the U.S. — are much more susceptible to “computerization,” or as defined by the authors, “job automation by means of computer-controlled equipment.” The researchers considered a time frame of 20 years, and they measured whether such jobs could be computerized, not whether these jobs will be computerized. The latter involves assumptions about economic feasibility and social acceptance that go beyond mere technology.
The minimum-wage occupations that Frey and Osborne think are most vulnerable include, not surprisingly, telemarketers, sales clerks and cashiers. But also included are occupations that employ a large share of the low-wage workforce, such as waiters and waitresses, food-preparation workers and cooks. If the computerization of these low-wage jobs becomes feasible, and if employers find it economical to invest in such labor-saving technology, there will be huge implications for the U.S. labor force.
H/T to Colby Cosh, who said “McDonald’s is going to turn into vending machines. Can’t say this enough. McDonald’s…vending machines.”
March 20, 2014
The federal finance minister announced his resignation the other day. This had been rumoured for quite some time, as Jim Flaherty had been having health issues for the last couple of years. He’s remaining as my local MP, at least for the time being (full disclosure: I coached two of his sons in soccer several years ago). He wasn’t going to be my MP in the next parliament anyway, as my village is being moved to a different riding under the new boundaries. At Gods of the Copybook Headings, Richard Anderson tries to find the right words to say goodbye:
Writing valedictory posts is always a bit tricky. You have to strike a balance between showing the bad that is obvious at the moment and the good that might be visible only in hindsight. It must be admitted that Jim Flaherty was not a bad finance minister, a minister who surrendered to every short-term political demand of the cabinet. Nor was he a great one charting a new course for Canada. He didn’t shift the goal posts, like Michael Wilson did for Brian Mulroney or Paul Martin did for Jean Chrétien. Big Jim minded the shop better than the other guys would have. Among midgets he was a giant.
The Flaherty years consisted of digging a gigantic hole and then carefully filling it back in. Modest surpluses, massive deficits and then a projected modest surplus. To quote the old poem: “Always he led us back to where we were before.” The net result is that we have a somewhat larger national debt than if we’d balanced the books for eight straight years. Not good but not too bad either.
As for Big Jim? Well he was one of the best Liberal finance ministers of the last hundred years.
Such a well-placed barb. So artistically planted. And so true.
Tim Worstall pokes fun at a recent Oxfam report that claims that Britain’s five richest families own more than the bottom 20% of the population:
I read this and thought, “well, yes, this is obvious and what the hell’s it got to do with increasing inequality?” Of course Gerald Grosvenor (aka Duke of Westminster) has more wealth than the bottom 10 per cent of the country put together. It’s obvious that the top five families will have more wealth than the bottom 20 per cent of all Britons. Do they think we all just got off the turnip truck or something?
They’ve also managed to entirely screw up the statistic they devised themselves by missing the point that if you’ve no debts and a £10 note then you’ve got more wealth than the bottom 10 or 20 per cent of the population has in aggregate. The bottom levels of our society have negative wealth.
Given what we classify as wealth, the poor have no assets at all. Property, financial assets (stocks, bonds etc), private sector pension plans, these are all pretty obviously wealth.
But then the state pension is also wealth: it’s a promise of a future stream of income. That is indeed wealth just as much as a share certificate or private pension is. But we don’t count that state pension as wealth in these sorts of calculations.
The right to live in a council house at a subsidised rent for the rest of your life is wealth, but that’s not counted either. Hell, the fact that we live in a country with a welfare system is a form of wealth — but we still don’t count that.
Doing this has been called (not by me, originally anyway) committing Worstall’s Fallacy. Failing to take account of the things we already do to correct a problem in arguing that more must be done to correct said problem. We already redistribute wealth by taxing the rich to provide pensions, housing, free education (only until 18 these days) and so on to people who could not otherwise afford them. But when bemoaning the amount of inequality that clearly cries out for more redistribution, we fail to note how much we’re already doing.
So Oxfam are improperly accounting for wealth and they’ve also missed the point that, given the existence of possible negative wealth, then of course one person or another in the UK will have more wealth than the entire lowest swathe.
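Worstall’s negative-wealth point can be made concrete with a toy calculation (every figure below is invented for illustration):

```python
# Net wealth (assets minus debts) for a hypothetical bottom decile.
# For most of its members, debts exceed assets, so net wealth is negative.
bottom_decile = [-12_000, -5_000, -3_000, -800, -200, 0, 0, 50, 100, 150]

aggregate = sum(bottom_decile)  # negative in aggregate
you = 10                        # no debts and a £10 note

print(aggregate)        # -20700
print(you > aggregate)  # True: £10 beats the whole decile combined
```

Because the aggregate is negative, anyone with even trivial positive net wealth “owns more” than the whole group combined, which is why the headline comparison carries much less information than it appears to.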
March 16, 2014
At his blog, David Friedman links to a recent New York Review of Books article by William Nordhaus (itself a response to a Wall Street Journal article) which argues for economic action to address the impact of global warming:
His final, and possibly most important point, is based on his own research, which he complains that the WSJ article is misrepresenting. He starts with a correct point—that it is the difference between benefit and cost, not the ratio, that matters. He goes on to summarize his conclusion:
My research shows that there are indeed substantial net benefits from acting now rather than waiting fifty years. A look at Table 5-1 in my study A Question of Balance (2008) shows that the cost of waiting fifty years to begin reducing CO2 emissions is $2.3 trillion in 2005 prices. If we bring that number to today’s economy and prices, the loss from waiting is $4.1 trillion. Wars have been started over smaller sums.
What he does not mention is that his $4.1 trillion is a cost summed over the entire globe and the rest of the century. Put in annual terms, that comes to about $48 billion a year, a less impressive number. Current world GNP is about $85 trillion/year. So the net cost of waiting, on Nordhaus’s own numbers, is about one twentieth of one percent of world GNP. Not precisely a catastrophe.
I suggest a simple experiment. Let Nordhaus write a piece explicitly arguing that the net cost of waiting is about .06% of world GNP and see whether it is more popular with the supporters or the critics of his position. I predict that at least one supporter will accuse him of having sold out to big oil.
The future is very much too uncertain to have confidence in estimates of what will be happening fifty years from now — for an extended demonstration, see my Future Imperfect. If we follow Nordhaus’s current advice and tax carbon now in order to slow warming, it may turn out that the costs were unnecessary or even counterproductive. We may be spending money in order to make ourselves poorer, not richer.
I conclude, on the basis of Nordhaus’s own figures and without taking account of my past criticism of his calculations, that he has his conclusion backwards. The sensible strategy is to take no actions whose justification depends on the belief that increased CO2 produces large net costs until we have considerably better reason than we now do to believe it.
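Friedman’s per-year arithmetic can be checked in a few lines. The dollar figures are the ones quoted in the post; the 85-year horizon for “the rest of the century” is an assumption:

```python
total_cost = 4.1e12  # Nordhaus's $4.1 trillion, global, summed to 2100
years = 85           # roughly the rest of the century (assumed horizon)
world_gnp = 85e12    # ~$85 trillion/year, as cited

per_year = total_cost / years  # annualized cost of waiting
share = per_year / world_gnp   # as a fraction of world GNP

print(f"${per_year / 1e9:.0f} billion/year, {share:.2%} of world GNP")
```

The result is roughly $48 billion a year, or about 0.06% of world GNP, matching the figures in the post.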
March 9, 2014
It’s not clear whether Prime Minister Stephen Harper is going to Seoul to actually sign a free trade agreement with South Korea or if it’s just another grip-and-grin photo-op to announce an as-yet-unfinalized deal:
Harper said on his 24 Seven webcast that this would be Canada’s first trade deal in the Asia-Pacific region.
“It adds, obviously, to the important deals we have in the Americas and in Europe now. And it’s really given the Canadian economy as good, if not better, free-trade access than virtually every major developed economy,” he said.
Harper added that South Korea is “a relatively open economy, a relatively, very progressive economy and advanced democracy, and it has trade linkages all through Asia itself.” He said it’s “probably the best gateway you can get into long-term trade agreement access into the Asia-Pacific region.”
NDP trade critic Don Davies said growing trade with South Korea and Asia in general is a good thing. But he was skeptical that the week’s coming ceremonies would amount to much more than a repeat of Brussels.
“Are they going to go just to shake hands, have a photo-op and sign an agreement-in-principle without the actual details or text to be released?”
Davies again assailed the government for a total lack of transparency, and questioned whether the deal would be able to protect jobs in Canada’s auto sector.
“In trade deals, it’s details that matter,” he said.
“The Conservatives have the least transparent trade policy probably in the developed world. They are closed, they are secretive and they don’t involve a lot of stakeholders; they don’t involve the opposition.”
The deal would mark progress toward expanding trade with Asia, a major economic priority of the Harper government. Coming on the heels of the Canada-EU pact, it would allow Prime Minister Stephen Harper to trumpet his first significant free-trade deal in Asia, and give impetus to other negotiations, particularly with Japan.