We do not believe any group of men adequate enough or wise enough to operate without scrutiny or without criticism. We know that the only way to avoid error is to detect it, that the only way to detect it is to be free to enquire. We know that the wages of secrecy are corruption. We know that in secrecy error, undetected, will flourish and subvert.
J. Robert Oppenheimer, “Encouragement of Science” (Address at Science Talent Institute, 6 Mar 1950), Bulletin of the Atomic Scientists, v.7, #1 (Jan 1951) p.6-8
March 3, 2015
March 1, 2015
Scott Lincicome would like to point out to the contending Republicans hoping to become the GOP’s presidential candidate that defence spending is not immune to the massive overspending problem common to big government:
Over the next 20 months, a clown-car-full of Republican politicians will vie for their party’s presidential nomination. As the candidates crisscross the nation, each will undoubtedly call for smarter, leaner, and (hopefully) smaller government. However, there is one government program that, despite being a paragon of government incompetence and mind-bending fiscal incontinence, will most likely be ignored by these champions of budgetary temperance: the F-35 Joint Strike Fighter. In so doing, these Republicans will abandon their principles and continue a long, bipartisan tradition of perpetuating the broader problems with U.S. defense spending that the troubled jet symbolizes.
During the Obama years, the Republican Party magically rediscovered its commitment — at least rhetorically — to limited government and fiscal sanity. Criticizing the graft, incompetence, and cost of boondoggles like the 2009 stimulus bill, green-energy subsidies, or Obamacare, GOP politicians not only highlighted these programs’ specific failings, but also often explained how such problems were the inevitable result of an unwieldy federal government that lacked discipline and accountability and was inherently susceptible to capture by well-funded interest groups like unions or insurance companies.
They railed against massive bureaucracies, like the Department of Energy, that paid off cronies with scant congressional oversight. And, in the case of well-publicized debacles like the botched, billion-dollar Healthcare.gov roll-out, many Republicans were quick to note that the root of the problem lay not in one glitchy website, but in the entire federal procurement process, and even Big Government itself.
One wonders, however, if these Republicans’ philosophical understanding of Big Government’s inherent weaknesses extends to national defense and, in particular, the F-35. According to the latest (2012) estimate from the Pentagon, the total cost to develop, buy and operate the F-35 will be $1.45 trillion — yes, trillion, with a “t” — over the next 50 years, up from a measly $1 trillion estimated in 2011. For those of you keeping score at home, this means that the F-35’s lifetime cost grew about $450 billion in one year. (Who says inflation is dead?)
That number — $1.45 trillion — might be difficult to grasp, especially in the context of U.S. defense spending, so let me try to put it in perspective: the entire Manhattan Project, which took around three years and led to the development of the atom bomb, cost a total of $26 billion (in 2015 dollars), most of which went to “building factories and producing the fissile materials, with less than 10% for development and production of the weapons.” By contrast, the F-35 will cost $29 billion. Per year.
For the next 50 years.
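The per-year figure quoted above is easy to verify from the article's own numbers; a quick back-of-the-envelope check (all figures rounded, as in the original) looks like this:

```python
# Back-of-the-envelope check of the F-35 figures quoted above.
total_lifetime_cost = 1.45e12  # $1.45 trillion, 2012 Pentagon estimate
prior_estimate = 1.0e12        # $1 trillion, 2011 estimate
lifetime_years = 50

one_year_growth = total_lifetime_cost - prior_estimate
cost_per_year = total_lifetime_cost / lifetime_years

print(f"Estimate grew by ${one_year_growth / 1e9:.0f} billion in one year")
print(f"Lifetime cost works out to ${cost_per_year / 1e9:.0f} billion per year")
# → Estimate grew by $450 billion in one year
# → Lifetime cost works out to $29 billion per year
```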
February 27, 2015
Americans, prepare to feel angry: After years of watching our cholesterol, sacrificing shellfish and egg yolks and gloriously fatty pork and beef, and enduring Day-Glo yellow and too-soft tubs of butter substitute, Americans are about to be told by our government diet experts, “Oops … we had it all wrong.”
The Dietary Guidelines Advisory Committee, which is charged with reviewing the government-issued dietary guidelines every five years, is preparing to release its “new and improved” guidelines any day now, and leaks from the deliberations hint at a reversal in the committee’s decades-long guidance that Americans should eat a diet low in cholesterol.
What are Americans to think of this new guidance that says cholesterol doesn’t really matter after all, that it is no longer a “nutrient of concern,” that eating food high in cholesterol may not be connected to heart disease?
Devotees of protein-rich, low-carb diets may see this as validation and reason to celebrate. Others will no doubt feel deflated, confused, and just plain bitter that for years they’ve been fed a lie that cost them, quite literally, the joy of eating delicious food, and possibly better health. Still others will misunderstand this new guidance and think butter and other high-cholesterol foods are now in the healthy column. In reality, those foods still ought to be consumed in moderation — particularly by people with preexisting conditions such as diabetes.
Yet there’s a bigger story here. Government really ought not be in the business of providing nutrition advice in the first place. Nutrition is a personal issue, and what’s best for one person may not be best for another. Moreover, Americans have ample access to information in the private sector on health and nutrition. In other words, Uncle Sam, we don’t need you anymore.
Julie Gunlock, “Government Dieticians Tell Us, Never Mind Our Decades of Bad Advice”, National Review, 2015-02-13.
February 25, 2015
Published on 23 Feb 2015
“All the logic that we are seeing in the Net Neutrality debate is assuming that nothing has changed; it’s assuming that it’s 1995. What’s actually happened is that people get more and more service, year in and year out,” says Daniel Berninger, a telecom activist who was involved in the early days of internet-phone service at Vonage.
Net Neutrality proponents, including President Obama, argue that internet-service providers (ISPs) need to be regulated by the Federal Communications Commission (FCC) in order to keep the internet “free and open.”
Berninger heads up VCXC, a nonprofit that is pushing for regulatory and policy changes to speed up the transition to IP-based networks for voice and data sharing. He’s an unsparing critic of FCC Chairman Tom Wheeler’s plan to implement Net Neutrality by regulating broadband network operators under Title II or “common carrier” provisions of federal law.
Title II has historically applied to telephone companies, which were regulated as public utilities and subject to government scrutiny regarding every aspect of service, including pricing and universal service obligations. Since the mid-1990s, the internet has been classified as an “information service,” which is subject to much less regulation under Title I of the relevant federal law.
“Title II regulation has been around for 80 years,” says Berninger, “and we know exactly what it can accomplish and what it can’t accomplish … in all the things that it touched, it essentially destroyed innovation.” In 1956, he explains, as part of a consent decree involving AT&T, phone service was regulated by the FCC under Title II while “information services” were essentially unregulated. “We split communications and computing and treated them entirely different — essentially as a twin experiment. Well, one twin prospered and one twin did not do very well.” Berninger argues that virtually all the problems that proponents of Title II regulation and Net Neutrality worry over — such as the blocking of specific websites and the deliberate slowing of traffic — haven’t occurred precisely because ISPs are subject to market competition and must constantly innovate to keep customers happy. FCC regulation would hamper that.
The FCC will vote on Wheeler’s proposal later this week and is widely expected to endorse it. The FCC has lost two previous attempts to assert regulatory control over the internet.
February 24, 2015
Oh, I don’t mean the profession of teaching … I mean the actual practice of imparting knowledge. As Joanna Williams explains, it’s the practical part that’s in steep decline nowadays:
After almost two decades working in the British education system, I’m still shocked when I meet teachers and lecturers who recoil at the prospect of actually imparting knowledge to their students. I cringed when the headteacher at my daughter’s junior school gathered all the new parents together to watch a sharply edited film showing that knowledge was now so easily accessible and so quickly outdated that there was little point in teaching children anything other than how to Google. When I find myself discussing the purpose of higher education, my proposal that the pursuit and transmission of knowledge should be the primary concern of the university is mostly met by looks of incomprehension that swiftly turn to barely concealed horror.
Teaching knowledge, as has been discussed before on spiked, has rarely been popular among the Rousseau-inspired, supposedly child-centred progressives of the educational world. It began to go more seriously out of fashion in the 1970s. Today, when every 10-year-old has a smart phone in their back pocket, actually teaching them stuff is seen as an unnecessary imposition on their individual creativity, serving no other end than future pub-quiz success. Working with children, rather than teaching knowledge, is considered altogether nicer; what’s more, it conveniently avoids the need for complex decisions to be made about what is most important in any particular subject. Rather than imposing their authority on children, teachers can be simply ‘guides on the side’, creating a learning environment through which children can determine their own path. What lies behind many of these entrenched ideas is a fundamental misunderstanding of what knowledge actually is.
Unfortunately, as a few voices in the educational world are beginning to make clear, left to their own devices children generally learn little and creativity is stifled rather than unleashed. Michael Young has been making the case for ‘bringing knowledge back in’ for many years now. More recently, people like Daisy Christodoulou, Toby Young and Tom Bennett have joined those chipping away at the child-centred, anti-knowledge orthodoxy. This is definitely a trend to welcome. And when knowledge-centred teaching goes against everything the educational establishment stands for, it is important to get the arguments right.
William Kitchen’s book, Authority and the Teacher, is a useful addition to the debate. Kitchen makes a convincing case that ‘any education without knowledge transmission is not an education at all’. The central premise of his book is his claim that ‘the development of knowledge requires a submission to the authority of a master expert: the teacher’. Kitchen argues that it is the teacher’s authority that makes imparting knowledge possible; in the absence of authority, teaching becomes simply facilitation and knowledge becomes inaccessible. He is careful to delineate authority from power, and he locates teachers’ authority within their own subject knowledge, which in turn is substantiated and held in check through membership of a disciplinary community. Without ‘the authority of the community and the practice,’ he argues, the notion of ‘correctness’ loses its meaning and there is no longer any sense to the passing of educational judgements.
Three years ago, The Los Angeles Times published a feel-good story on the Little Free Library movement. The idea is simple: A book lover puts a box or shelf or crate of books in their front yard. Neighbors browse, take one, and return later with a replacement. A 76-year-old in Sherman Oaks, California, felt that his little library, roughly the size of a dollhouse, “turned strangers into friends and a sometimes-impersonal neighborhood into a community,” the reporter observed. The man knew he was onto something “when a 9-year-old boy knocked on his door one morning to say how much he liked the little library.” He went on to explain, “I met more neighbors in the first three weeks than in the previous 30 years.”
Since 2009, when a Wisconsin man built a little, free library to honor his late mother, who loved books, copycats inspired by his example have put thousands of Little Free Libraries all over the U.S. and beyond. Many are displayed on this online map. In Venice, where I live, I know of at least three Little Free Libraries, and have witnessed chance encounters where folks in the neighborhood chat about a book.
I wish that I was writing merely to extol this trend. Alas, a subset of Americans are determined to regulate every last aspect of community life. Due to selection bias, they are overrepresented among local politicians and bureaucrats. And so they have power, despite their small-mindedness, inflexibility, and lack of common sense so extreme that they’ve taken to cracking down on Little Free Libraries, of all things.
Last summer in Kansas, a 9-year-old was loving his Little Free Library until at least two residents proved that some people will complain about anything, no matter how harmless, and city officials pushed the boundaries of literal-mindedness:
The Leawood City Council said it had received a couple of complaints about Spencer Collins’ Little Free Library. They dubbed it an “illegal detached structure” and told the Collinses they would face a fine if they did not remove the Little Free Library from their yard by June 19.
Scattered stories like these have appeared in various local news outlets. The L.A. Times followed up last week with a trend story that got things just about right. “Crime, homelessness and crumbling infrastructure are still a problem in almost every part of America, but two cities have recently cracked down on one of the country’s biggest problems: small-community libraries where residents can share books,” Michael Schaub wrote. “Officials in Los Angeles and Shreveport, Louisiana, have told the owners of homemade lending libraries that they’re in violation of city codes, and asked them to remove or relocate their small book collections.”
February 12, 2015
Megan McArdle on the incredibly regressive way that American municipalities are raising money through fines and other costs imposed disproportionally on the poorest members of the community:
During last summer’s riots in Ferguson, Missouri, reporters began to highlight one reason that relations between the town’s police and its citizens are so fraught: heavy reliance on tickets and fines to cover the town’s budget. The city gets more than $3 million of its $20 million budget from “fines and public safety,” with almost $2 million more coming from various other user fees.
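A rough sketch of what that reliance amounts to, using only the approximate figures quoted above:

```python
# Rough share of Ferguson's budget coming from fines and fees,
# using the approximate figures quoted in the passage above.
budget = 20e6              # total city budget, ~$20 million
fines_public_safety = 3e6  # "fines and public safety", >$3 million
other_user_fees = 2e6      # various other user fees, ~$2 million

share = (fines_public_safety + other_user_fees) / budget
print(f"Roughly {share:.0%} of the budget comes from fines and fees")
# → Roughly 25% of the budget comes from fines and fees
```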
The problem with using your police force as a stealth tax-collection agency is that this functions as a highly regressive tax on people who are already having a hard time of things. Financially marginal people who can’t afford to, say, renew their auto registration get caught up in a cascading nightmare of fees piled upon fees that often ends in bench warrants and nights spent in jail … not for posing a threat to the public order, but for lacking the ready funds to legally operate a motor vehicle in our car-dependent society.
So why do municipalities go this route? The glib answer is “racism and hatred of the poor.” And, quite possibly, that plays a large part, if only in the sense that voters tend to discount costs that fall on other people. But having spent some time plowing through town budgets and reading up on the subject this afternoon, I don’t think that’s the only reason. I suspect that Ferguson is leaning so heavily on fines because it doesn’t have a lot of other terrific options.
Last month, Matt Ridley ran down the benefits to farmers, consumers, ecologists and the environment itself that the European Union has been resisting mightily all these years:
Scientifically, the argument over GM crops is as good as over. With nearly half a billion acres growing GM crops worldwide, the facts are in. Biotech crops are on average safer, cheaper and better for the environment than conventional crops. Their benefits accrue disproportionately to farmers in poor countries. The best evidence comes in the form of a “meta-analysis” — a study of studies — carried out by two scientists at Göttingen University, in Germany.
The strength of such an analysis is that it avoids cherry-picking and anecdotal evidence. It found that GM crops have reduced the quantity of pesticide used by farmers by an average of 37 per cent and increased crop yields by 22 per cent. The greatest gains in yield and profit were in the developing world.
If Europe had adopted these crops 15 years ago: rape farmers would be spraying far less pyrethroid or neo-nicotinoid insecticides to control flea beetles, so there would be far less risk to bees; potato farmers would not need to be spraying fungicides up to 15 times a year to control blight; and wheat farmers would not be facing stagnant yields and increasing pesticide resistance among aphids, meaning farmland bird numbers would be up.
Oh, and all that nonsense about GM crops giving control of seeds to big American companies? The patent on the first GM crops has just expired, so you can grow them from your own seed if you prefer and, anyway, conventionally bred varieties are also controlled for a period by those who produce them.
African farmers have been mostly denied genetically modified crops by the machinations of the churches and the greens, aided by the European Union’s demand that imports not be transgenically improved. Otherwise, African farmers would now be better able to combat drought, pests, vitamin deficiency and toxic contamination, while not having to buy so many sprays and risk their lives applying them.
I made this point recently to a charity that works with farmers in Africa and does not oppose GM crops but has so far not dared say so. Put your head above the parapet, I urged. We cannot do that, they replied, because we have to work with other, bigger green charities and they would punish us mercilessly if we broke ranks. Is the bullying really that bad? Yes, they replied.
Yet the Green Blob realises that it has made a mistake here. Not a financial mistake — it made a fortune out of donations during the heyday of stoking alarm about GM crops in the late 1990s — but the realisation that all it has achieved is to prolong the use of sprays and delay the retreat of hunger.
February 1, 2015
Tim Worstall looks at a recent book on an Indian experiment that investigated how to improve poverty relief programs:
In terms of the Indian experience one of the reasons that these trials worked well was because they were trials. Effort was put into making certain that those who were supposed to be receiving the cash were in fact receiving it. Such care and attention to people getting what they’re supposed to get is not an outstanding feature of the various welfare systems currently in use in India, as the book makes clear. So, just making sure that people were getting those modest amounts that they were supposed to get is going to be an advance. And it wouldn’t be possible to simply roll out such a scheme across the country, however beneficial, without a lot of preparatory work to make sure that the right people really would be getting the money.
It’s also true that the current systems fail badly in other ways. Purchasing grain to ship it around to special shops where it will be sold hugely under the market price is always going to be a leaky system. Some number of the middlemen will be sorely tempted to divert produce to sell onto the market and there’s considerable evidence that some succumb to that temptation. If people simply have money to buy on the standard market in the normal manner then it’s a lot easier to keep a control on that sort of thing.
However, the most important thing for the design of the American welfare system is the points they make about how the poor value being given goods as against being given money. $100 (far in excess of the amounts being discussed here) is worth more than $100 of food for example. Or $100 worth of medical care. There are two reasons for this. One is simply that everyone values agency. The ability to decide things for oneself. And money does that. It’s possible to decide whether you want to purchase food, or to save a bit and buy a goat next week, or more fertiliser for the fields and so on. What the peasant on the ground would like to do with any increase in resources is most unlikely to accord with what some far away bureaucrat thinks said peasant ought to be doing. So, the choice itself increases value.
So, we could actually make poor people richer by abolishing food stamps. Assuming, of course, that we just gave them the same amount of money instead. The same would be true of Medicaid and housing vouchers of course. Yes, I’m aware that there are arguments against doing this. But it is still true: converting goods and services in kind into cash would make the poor richer at the same cost to the rest of us. So it is at least something we should consider, no?
And the main reason we don’t switch from the current system to cash is … paternalism. Governments really do think that they are better equipped than the recipients of aid in how to spend that money. And it’s quite true that some welfare recipients would blow the payments on booze or drugs or what-have-you, but the majority of people’s lives would improve if they got cash rather than food stamps or other in-kind assistance.
January 30, 2015
It’s understood that governments have an inherently antagonistic relationship with the English language. Generations of grammarians and schoolmasters strove diligently to teach their young wards the importance of clear and logical communication. A strong grasp of English allowed students to think and understand at a sophisticated level. We don’t want any of that stuff now. People who think and talk clearly are a threat to governments the world over.
The art of government, to some extent, is the combined art of order and bullshit. There is a genuine need for the political-bureaucratic class to maintain peace, order and something resembling good government. But beyond the meat and potato stuff there is also the temptation to use government as a tool of enrichment. Since outright thievery is criticized by most people, excepting the thieves of course, an elaborate excuse is needed to distract the electorate from what is being done.
Richard Anderson, “Always Look on the Bright Side of Pork”, The Gods of the Copybook Headings, 2014-05-28.
January 27, 2015
Robert Tracinski on the essential core of a control freak’s very being:
Here’s one of my favorite stories about how the mind of a government official works.
A few years ago, I was in a grocery store in Charlottesville when I overheard a conversation between two shoppers, one of whom was clearly in some position of authority (the City Council, I believe). This was right after the financial crisis. The real estate market had just collapsed, a whole bunch of local development projects had just been canceled, and my wife was telling me about all the guys she knew in construction who were desperate for work. Yet here was this lady arguing for why the local government should not approve any new commercial building permits. The danger, she explained, was the prospect of “economic ghost towns,” retail areas where several shops had closed, hurting business for the others. Until these “economic ghost towns” were filled back up — whether anybody wanted them or not — there was no good reason to approve permits for new commercial construction.
I just couldn’t keep quiet and had to interrupt: Only in Charlottesville — a left-leaning university town — could an economic downturn be used as a reason to block new economic activity.
But you have to understand the outlook of those whose faith is the creed of government. Everything is proof of the need for more government power and control. The local economy is booming? Let’s hold back on building permits because we don’t want to grow “too fast.” The local economy is tanking? Let’s hold back on building permits because we don’t want “economic ghost towns,” or whatever. On the national level, in an economic collapse the government needs more money for “stimulus.” But if the economy is booming, that means we can afford higher taxes, right?
January 24, 2015
January 22, 2015
In the latest issue of Michael Pinkus Wine Review, Michael talks about the hints and portents (dealing with the Ontario government requires a certain amount of Kremlinological observation skills) that a tiny measure of privatization may be coming:
There’s a rumour in the wind that a certain amount of privatization is coming to Ontario (wouldn’t that be nice), but I wouldn’t get my hopes up about it just yet – no time line has been given and I am sure that ‘more study’ is necessary … and of course, if its track record is any indication, this government will find some way to either screw it up or make it such a complicated piece of legislation that it’ll take years to get through all the red tape behind it. I once heard Jerry Agar, of NewsTalk 1010 fame, say (and I’m paraphrasing here) ‘if you want something screwed up get government involved’; he’s a proponent of the private sector because they can do it more efficiently than government if only ‘the man’ would just get outta the way … I would have to agree with him here. So far the government has made such a mess of our liquor system that even repressed, despotic and 3rd world countries have better access to alcohol than we do.
Sadly, I believe it might be too little too late for some of Ontario’s wineries, which have suffered this long but might not be around to see the light at the end of the tunnel (if and/or when it comes). Yes, this might be the end of the line for a number of our precious wineries and we only have ourselves to blame for their demise. They have been as vocal as any sector, crying for help, not necessarily a handout (which the grape growers seem to get) as much as a hand up – basically they’ve been pleading with each government: “please give us access to (our own) market (at the very least) and we’ll show you what we can do”, all to no avail.
Why the pessimistic attitude? Let’s look at the facts. It takes some rather deep pockets to own a winery in Ontario, that or a good credit rating, because money is the number one thing required to open the doors. But making it is more of an uphill battle than in any other business in this province. Post-1993, when the majority of the wineries around today opened their doors, your cellar door was the only place you could sell your wine – sure, you could tap into the LCBO and the restaurant market, but that’s it. And although recent federal regulations have been lifted regarding the selling and especially the shipping of wine across the country, many provinces have yet to enact their own legislation governing the practice, hence leaving the entire topic, not to mention hundreds of wineries, in limbo, unable to tap the rest of the country as a market for fear of breaking the law. With so few avenues to sell home-grown wine, the government has basically handcuffed the industry – to say nothing of the asinine rules that govern the industry from within (more on that next time) – it has all been put in place, it would seem, so that wineries are destined to fail; that they remain open is a testament to their resolve and passion.
January 10, 2015
Megan McArdle explains why healthcare costs more than you think it should:
Milton Friedman famously divided spending into four kinds, which P.J. O’Rourke once summarized as follows:
- You spend your money on yourself. You’re motivated to get the thing you want most at the best price. This is the way middle-aged men haggle with Porsche dealers.
- You spend your money on other people. You still want a bargain, but you’re less interested in pleasing the recipient of your largesse. This is why children get underwear at Christmas.
- You spend other people’s money on yourself. You get what you want but price no longer matters. The second wives who ride around with the middle-aged men in the Porsches do this kind of spending at Neiman Marcus.
- You spend other people’s money on other people. And in this case, who gives a [damn]?
Most health-care spending in the U.S. falls into category three. In theory, the people who are funding our expenses — the proverbial middle-aged men in Porsches, except that they’re actually insurance executives and government bureaucrats — have every incentive to step in, cut up the charge cards, and substitute a gift-wrapped box of Hanes briefs with the comfort-soft waistband. In practice, legislators frequently intervene to stop them from exercising much cost-control. The managed care revolution of the 1990s died when patients complained to their representatives, and the representatives ran down to their offices to pass laws making it very hard to deny coverage for anything anyone wanted. Medicare cost-controls, such as the famed Sustainable Growth Rate, fell prey to similar maneuvers. The only system that exhibits sustained cost control is Medicaid, because poor people don’t vote, or exit the system for better insurance.
The result is a system where everyone complains that we spend much too much on health care — and the very same people get indignant if anyone suggests that they, personally, should maybe spend a little bit less. Everyone wants to go to heaven — but nobody wants to die.
Unfortunately, this is what cost-control actually looks like, which is to say, like people not being able to spend as much on health care. Oh, to be sure, we could achieve this end differently — instead of asking patients to pay a modest share of their own costs (the article suggests that this amount is less than 10 percent, in the case of Harvard professors) — we could simply set a schedule of covered treatment, and deny patients access to off-schedule treatments, or even better, not even tell them that those treatments exist. But people don’t like that solution either, which is why medical dramas are filled with rants about insurers who won’t cover procedures, and the law books are filled with regulations that sharply curtail the ability of insurers to ration care. And the third option, refusing to pay top-dollar for care, would be a bit tricky for Harvard to implement, given that they run exactly the sort of high-cost research facilities that help drive health-care costs skyward. Nor do I really think that the angry professors would be mollified by being given a cheap insurance package that wouldn’t let them go see the top-flight specialists their elite status now entitles them to access.
Instead, they persist in our mass delusion: that there is some magic pot of money in the health-care system, which can be painlessly tapped to provide universal coverage without dislocating any of the generous arrangements that insured people currently enjoy. Just as there are no leprechauns, there is no free money at the end of the rainbow; there are patients demanding services, and health-care workers making comfortable livings, who have built their financial lives around the expectation that those incomes will continue. Until we shed this delusion, you can expect a lot of ranting and raving about the hard truths of the real world.
January 6, 2015
Another example of unexpected consequences, this time from Frances Woolley at Worthwhile Canadian Initiative, who says we need to beware of middle-aged men waving feminist flags:
On December 12, 2006, Ontario ended “mandatory retirement.” As of that date, employers could no longer base termination decisions on an employee’s age. Ontario was following the lead of Quebec and Manitoba, which stopped having a standard retirement age in the early 1980s. Within a couple of years, mandatory retirement had effectively ended right across the country.
Fast forward to 2014. The first Ontario professors to elude retirement are now collecting their pensions. Yup, Canada Revenue Agency requires people to begin drawing their pensions at age 71, regardless of employment status. The average salary of a full professor in Ontario is around $150,000 per year […], and university pension plans are generally fairly generous. So a typical professor working full-time into his 70s will have a combined pension plus salary income of at least $200,000 a year, often more. No wonder professors 65 and older outnumber the under 35s […]. Who would willingly give up a nice office, the freedoms of academia, and a quarter million dollars or so a year?
Now if the professors fighting to eliminate the standard retirement age had said, “we have a very pleasant lifestyle and we’d like to hang onto it, thank you very much,” I could have respected their honesty, if nothing else. But instead, they draped themselves in the feminist flag. A standard retirement age of 65 was wrong because it hurt women. Thomas Klassen and David Macgregor, writing in the CAUT (Canadian Association of University Teachers) Bulletin, challenged ageism in the academy on the grounds that “Mandatory retirement at an arbitrary age is devastating for female faculty who often began their careers later than males and may have had interruptions to raise children.”
Two thirds of university teachers between 65 and 69 are men […], as are three quarters of those over the age of 70. This is not simply a reflection of an academy that, 20 or 30 or 40 years ago, when these folks were hired, favoured men over women. Let’s rewind five years, to when the people who are now 65 to 69 were 60 to 64. This is more or less the same group of people, just at two different points in time.
In 2005-6, just before the standard retirement age ended, 65 percent of academics aged 60 to 64 were male […].
In 2010-11, when that same cohort of people were 65-69, 68 percent of those working as university teachers were male. There is hardly any hiring of individuals into university teaching in that age group. The only plausible explanation of the three percentage point increase in the proportion of men in academia is that more women than men retired in that cohort.
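Woolley's inference can be illustrated with a toy cohort. Only the starting (65 percent) and ending (68 percent) male shares come from the passage; the cohort size and the retirement rates below are invented purely for illustration, to show that a higher female retirement rate is what moves the share in that direction:

```python
# Toy illustration: if a fixed cohort goes from 65% male to ~68% male
# with essentially no new hiring, the share can only rise because
# women left (retired) at a higher rate than men.
# Cohort size and retirement rates are hypothetical.
cohort = 1000
men, women = int(cohort * 0.65), int(cohort * 0.35)  # 2005-6 split

male_retire_rate = 0.40    # hypothetical
female_retire_rate = 0.48  # hypothetical: women retire at a higher rate

men_left = men * (1 - male_retire_rate)        # 390 remain
women_left = women * (1 - female_retire_rate)  # 182 remain
male_share = men_left / (men_left + women_left)
print(f"Male share five years on: {male_share:.0%}")
# → Male share five years on: 68%
```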
The PhD students in the pipeline are 47 percent female […], as are 46 percent of Canadian assistant professors […]. Just 23 percent of full professors, however, are women. Replacing over-65 full professors with PhD students would result in a more gender-balanced academy.
I’m not trying to argue that we should reintroduce mandatory retirement in order to achieve greater gender balance. I am merely pointing out that those who thought the end of mandatory retirement would disproportionately benefit women and promote gender equity were mistaken.