October 13, 2016
If we’re living in a simulation, do we even want to break out?
I have no expertise in this area, but it appears to me that if the “Silicon Valley billionaires” are right and we are living in a simulated reality, there are only two likely options. First, we’re (if you’ll pardon the simplification) “players in the game” — whether we’re aware of it within our simulation or not — and we can leave the simulation in the same way a World of Warcraft or Final Fantasy XIV or Guild Wars 2 player can log off and resume life in “meat space”. Second, some, most, or all of us are actually NPCs, and there’s no way to leave the simulation because we have no objective existence outside the simulation we currently occupy. If the second option is true … and mathematically it’s the one that’s overwhelmingly likely if we’re actually in a simulation … then there’s little point in discovering that it’s true, as we’ll all cease to exist when our home simulation is turned off.
August 22, 2016
QotD: Terry Pratchett and the hacker mentality
I learned something this weekend about the high cost of the subtle delusion that creative technical problem-solving is the preserve of a priesthood of experts, using powers and perceptions beyond the ken of ordinary human beings.
Terry Pratchett is the author of the Discworld series of satirical fantasies. He is — and I don’t say this lightly, or without having given the matter thought and study — quite probably the most consistently excellent writer of intelligent humor in the last century in English. One has to go back as far as P.G. Wodehouse or Mark Twain to find an obvious equal in consistent quality, volume, and sly wisdom.
I’ve been a fan of Terry’s since before his first Discworld novel; I’m one of the few people who remembers Strata, his 1981 first experiment with the disc-world concept. The man has been something like a long-term acquaintance of mine for ten years — one of those people you’d like to call a friend, and who you think would like to call you a friend, if the two of you ever arranged enough concentrated hang time to get that close. But we’re both damn busy people, and live five thousand miles apart.
This weekend, Terry and I were both guests of honor at a hybrid SF convention and Linux conference called Penguicon held in Warren, Michigan. We finally got our hang time. Among other things, I taught Terry how to shoot pistols. He loves shooter games, but as a British resident his opportunities to play with real firearms are strictly limited. (I can report that Terry handled my .45 semi with remarkable competence and steadiness for a first-timer. I can also report that this surprised me not at all.)
During Terry’s Guest-of-Honor speech, he revealed his past as (he thought) a failed hacker. It turns out that back in the 1970s Terry used to wire up elaborate computerized gadgets from Timex Sinclair computers. One of his projects used a primitive memory chip that had light-sensitive gates to build a sort of perceptron that could actually see the difference between a circle and a cross. His magnum opus was a weather station that would log readings of temperature and barometric pressure overnight and deliver weather reports through a voice synthesizer.
But the most astonishing part of the speech was the followup in which Terry told us that despite his keen interest and elaborate homebrewing, he didn’t become a programmer or a hardware tech because he thought techies had to know mathematics, which he thought he had no talent for. He then revealed that he thought of his projects as a sort of bad imitation of programming, because his hardware and software designs were total lash-ups and he never really knew what he was doing.
I couldn’t stand it. “And you think it was any different for us?” I called out. The audience laughed and Terry passed off the remark with a quip. But I was just boggled. Because I know that almost all really bright techies start out that way, as compulsive tinkerers who blundered around learning by experience before they acquired systematic knowledge. “Oh ye gods and little fishes”, I thought to myself, “Terry is a hacker!”
Yes, I thought ‘is’ — even if Terry hasn’t actually tinkered with any computer software or hardware in a quarter-century. Being a hacker is expressed through skills and projects, but it’s really a kind of attitude or mental stance that, once acquired, is never really lost. It’s a kind of intense, omnivorous playfulness that tends to color everything a person does.
So it burst upon me that Terry Pratchett has the hacker nature. Which, actually, explains something that has mildly puzzled me for years. Terry has a huge following in the hacker community — knowing his books is something close to basic cultural literacy for Internet geeks. One is actually hard-put to think of any other writer for whom this is as true. The question this has always raised for me is: why Terry, rather than some hard-SF writer whose work explicitly celebrates the technologies we play with?
Eric S. Raymond, “The Delusion of Expertise”, Armed and Dangerous, 2003-05-05.
July 28, 2016
Canada’s National Heritage Digitization “Strategy”
Michael Geist explains why the federal government’s plans for digitization are so underwhelming:
Imagine going to your local library in search of Canadian books. You wander through the stacks but are surprised to find most shelves barren with the exception of books that are over a hundred years old. This sounds more like an abandoned library than one serving the needs of its patrons, yet it is roughly what a recently released Canadian National Heritage Digitization Strategy envisions.
Led by Library and Archives Canada and endorsed by Canadian Heritage Minister Mélanie Joly, the strategy acknowledges that digital technologies make it possible “for memory institutions to provide immediate access to their holdings to an almost limitless audience.”
Yet it stops strangely short of trying to do just that.
My weekly technology law column notes that rather than establishing a bold objective as has been the hallmark of recent Liberal government policy initiatives, the strategy sets as its 10-year goal the digitization of 90 per cent of all published heritage dating from before 1917 along with 50 per cent of all monographs published before 1940. It also hopes to cover all scientific journals published by Canadian universities before 2000, selected sound recordings, and all historical maps.
The strategy points to similar initiatives in other countries, but the Canadian targets pale by comparison. For example, the Netherlands plans to digitize 90 per cent of all books published in that country by 2018 along with many newspapers and magazines that pre-date 1940.
Canada’s inability to adopt a cohesive national digitization strategy has been an ongoing source of frustration and the subject of multiple studies which concluded that the country is falling behind. While there has been no shortage of pilot projects and useful initiatives from university libraries, Canada has thus far failed to articulate an ambitious, national digitization vision.
March 5, 2016
Mechanical computers
ESR posted this video on Google+, saying “Mind…utterly…blown. This is how computers worked before electronic gate logic. There’s a weird beauty of mathematics made tangible about it.”
Uploaded on 13 Jul 2011
A 1953 training film for a mechanical fire control computer aboard Navy Ships. Amazing how problems of mathematical computation were solved so elegantly in “permanent” mechanical form, before microprocessors became inexpensive and commonplace.
November 22, 2015
The Problem with Time & Timezones – Computerphile
Published on 30 Dec 2013
A web app that works out how many seconds ago something happened. How hard can coding that be? Tom Scott explains how time twists and turns like a twisty-turny thing. It’s not to be trifled with!
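As a tiny illustration of the kind of trap the video is about (my own sketch, not code from the video): subtracting naive local wall-clock times gives the wrong “seconds ago” across a daylight-saving jump, while doing the arithmetic in UTC does not.

```python
# A minimal illustrative sketch: computing "how many seconds ago" with naive
# local wall-clock times goes wrong across a daylight-saving jump, while
# doing the arithmetic in UTC does not.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

eastern = ZoneInfo("America/New_York")

# US clocks sprang forward on 2015-03-08: 02:00 EST jumped to 03:00 EDT.
event = datetime(2015, 3, 8, 1, 30, tzinfo=eastern)   # before the jump (EST, UTC-5)
now   = datetime(2015, 3, 8, 3, 30, tzinfo=eastern)   # after the jump (EDT, UTC-4)

# Naive wall-clock subtraction claims two hours have passed...
wall_clock = now.replace(tzinfo=None) - event.replace(tzinfo=None)
print("wall-clock says:", wall_clock.total_seconds(), "seconds")   # 7200.0

# ...but only one real hour elapsed, which UTC arithmetic reports correctly.
elapsed = now.astimezone(timezone.utc) - event.astimezone(timezone.utc)
print("actually elapsed:", elapsed.total_seconds(), "seconds")     # 3600.0
```

Leap seconds, historical zone changes, and the ambiguous hour when clocks fall back pile further complications on top of this one.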
H/T to Jeremy for the link.
September 14, 2015
“Edge is advertiser friendly, not User friendly”
Jerry Pournelle talks about his differing browser experiences on the Microsoft Surface:
Apple had their announcements today, but I had story conferences so I could not watch them live. I finished my fiction work about lunch time, so I thought to view some reports, and it is time I learned more about the new Windows and got more used to my Surface 3 Pro; a fitting machine on which to view new Apple products, particularly their new iPad Pro, which I expect is their answer to the Surface Pro and Windows 10.
My usual browser is Firefox, which has features I don’t love but by and large I get along with it; but with the Surface it seemed appropriate to make a serious effort to use Edge, the new Microsoft browser. Of course it has Microsoft Bing as the default search engine. It also doesn’t really understand the size of the Pro. It gave me horizontal scrolling, even though I had Edge full screen. I looked up Apple announcements, and Bing gave me a nice list. Right click on the nice bent Microsoft pocket wireless mouse, and open a report in a new screen. Lo, I have to do horizontal scrolling; Edge makes sure there are ads on screen at all times, so you have to scroll the text horizontally to see all of it. Line by line. But I can always see some ads. Edge makes sure I don’t miss ads. It doesn’t care whether I can read the text I was looking for, but it is more careful about the ads. I’m sure that makes the advertisers happy, but I’m not so sure about the users. I thought I went looking for an article, not for ads.
Edge also kept doing things I hadn’t asked it to, and I’d lose the text. Eventually I found that if I closed the window and went back to the Bing screen and right clicked to open that same window in a new tab, I was able to – carefully – scroll through the text, and adjust the screen so all the text was on screen even though there was still horizontal scrolling possible. This is probably a function of inexperience, but using a touch screen and Edge is a new experience.
Even so it was a rough read. I gave up and went to Firefox on the Surface Pro. Firefox has Google as its default search engine, and the top selections it offered me – all I could see on one screen – were different from the ones I saw with Bing. I had to do a bit of scrolling to find the article I had been trying to read, but eventually I found it. Right click to open it in a new tab. Voila. All my text in the center. I could read it. Much easier. For the record: same site, adjusted to width in Firefox on the Surface Pro, horizontal scrolling of the same article viewed in Edge. Probably my fault, but I don’t know what I did wrong.
Now in Microsoft’s defense, I don’t know Edge very well; but if you are going to use a Surface Pro, you may well find Firefox easier to use than Edge. A lot easier to use.
As to Google vs. Bing, in this one case I found Bing superior; what it offered me had more content. But Edge is advertiser friendly, not User friendly.
August 8, 2015
Tom Kratman on “killer ‘bots”
SF author (and former US Army officer) Tom Kratman answers a few questions about drones, artificial intelligence, and the threat/promise of intelligent, self-directed weapon platforms in the near future:
Ordinarily, in this space, I try to give some answers. I’m going to try again, in an area in which I am, at least at a technological level, admittedly inexpert. Feel free to argue.
Question 1: Are unmanned aerial drones going to take over from manned combat aircraft?
I am assuming here that at some point in time the total situational awareness package of the drone operator will be sufficient for him to compete or even prevail against a manned aircraft in aerial combat. In other words, the drone operator is going to climb into a cockpit far below ground, and the only way he’ll be able to tell he’s not in an aircraft is that he’ll feel no inertia beyond the bare minimum for a touch of realism, to improve his situational awareness, but with no chance of blacking out due to high G maneuvers.
Still, I think the answer to the question is “no,” at least as long as the drones remain under the control of an operator, usually far, far to the rear. Why not? Because to the extent the things are effective they will invite a proportional, or even more than proportional, response to defeat or at least mitigate their effectiveness. That’s just in the nature of war. This is exacerbated by there being at least three or four routes to attack the remote controlled drone. One is by attacking the operator or the base; if the drone is effective enough, it will justify the effort of making those attacks. Yes, he may be bunkered or hidden or both, but he has a signal and a signature, which can probably be found. To the extent the drone is similar in size and support needs to a manned aircraft, that runway and base will be obvious.
The second target of attack is the drone itself. Both of these targets, base/operator and aircraft, are replicated in the vulnerabilities of the manned aircraft, itself and its base. However, the remote controlled drone has an additional vulnerability: the linkage between itself and its operator. Yes, signals can be encrypted. But almost any signal, to include the encryption, can be captured, stored, delayed, amplified, and repeated, while there are practical limits on how frequently the codes can be changed. Almost anything can be jammed. To the extent the drone is dependent on one or another, or all, of the global positioning systems around the world, that signal, too, can be jammed or captured, stored, delayed, amplified and repeated. Moreover, EMP, electro-magnetic pulse, can be generated with devices well short of the nuclear. EMP may not bother people directly, but a purely electronic, remote controlled device will tend to be at least somewhat vulnerable, even if it’s been hardened.
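To make the capture-and-repeat point concrete, here is a minimal, purely illustrative sketch (my own, not anything from Kratman’s piece) of why an encrypted command is still replayable as raw bytes, and of the freshness-counter countermeasure link designers typically lean on. The key, message format, and names are all invented for the example.

```python
# A toy sketch of a replay attack and one standard defence: the ciphertext or
# MAC of a recorded command is just bytes, and resending those bytes works
# unless the receiver tracks freshness (here, a monotonically increasing counter).
import hmac, hashlib, struct

KEY = b"shared-drone-link-key"   # hypothetical pre-shared key

def make_command(counter: int, payload: bytes) -> bytes:
    """Authenticated command: counter || payload || MAC."""
    body = struct.pack(">Q", counter) + payload
    return body + hmac.new(KEY, body, hashlib.sha256).digest()

class Receiver:
    def __init__(self):
        self.last_counter = -1

    def accept(self, msg: bytes) -> bool:
        body, mac = msg[:-32], msg[-32:]
        if not hmac.compare_digest(mac, hmac.new(KEY, body, hashlib.sha256).digest()):
            return False                      # forged or corrupted
        counter = struct.unpack(">Q", body[:8])[0]
        if counter <= self.last_counter:
            return False                      # replay: this counter was already seen
        self.last_counter = counter
        return True

drone = Receiver()
captured = make_command(1, b"turn-left")
print(drone.accept(captured))   # True  - first delivery is accepted
print(drone.accept(captured))   # False - a verbatim replay is rejected
```

Even with such a counter in place, the passage’s larger point stands: the link can still be jammed, the operator and base can still be found and attacked, and the navigation signal can still be spoofed or denied.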
Question 2: Will unmanned aircraft, flown by Artificial Intelligences, take over from manned combat aircraft?
The advantages of the unmanned combat aircraft, however, ranging from immunity to high G forces, to less airframe being required without the need for life support, or, alternatively, for a greater fuel or ordnance load, to expendability, because Unit 278-B356 is no one’s precious little darling, back home, to the same Unit’s invulnerability, so far as I can conceive, to torture-induced propaganda confessions, still argue for the eventual, at least partial, triumph of the self-directing, unmanned, aerial combat aircraft.
Even so, I’m going to go out on a limb and go with my instincts and one reason. The reason is that I have never yet met an AI for a wargame I couldn’t beat the digital snot out of, while even fairly dumb human opponents can present problems. Coupled with that, my instincts tell me that the better arrangement is going to be a mix of manned and unmanned, possibly with the manned retaining control of the unmanned until the last second before action.
This presupposes, of course, that we don’t come up with something – quite powerful lasers and/or renunciation of the ban on blinding lasers – to sweep all aircraft from the sky.
August 2, 2015
Thinking about realistic security in the “internet of things”
The Economist looks at the apparently unstoppable rush to internet-connect everything and why we should worry about security now:
Unfortunately, computer security is about to get trickier. Computers have already spread from people’s desktops into their pockets. Now they are embedding themselves in all sorts of gadgets, from cars and televisions to children’s toys, refrigerators and industrial kit. Cisco, a maker of networking equipment, reckons that there are 15 billion connected devices out there today. By 2020, it thinks, that number could climb to 50 billion. Boosters promise that a world of networked computers and sensors will be a place of unparalleled convenience and efficiency. They call it the “internet of things”.
Computer-security people call it a disaster in the making. They worry that, in their rush to bring cyber-widgets to market, the companies that produce them have not learned the lessons of the early years of the internet. The big computing firms of the 1980s and 1990s treated security as an afterthought. Only once the threats—in the forms of viruses, hacking attacks and so on—became apparent, did Microsoft, Apple and the rest start trying to fix things. But bolting on security after the fact is much harder than building it in from the start.
Of course, governments are desperate to prevent us from hiding our activities from them by way of cryptography or even moderately secure connections, so there’s the risk that any pre-rolled security option offered by a major corporation has already been riddled with convenient holes for government spooks … which makes it even more likely that others can also find and exploit those holes.
… companies in all industries must heed the lessons that computing firms learned long ago. Writing completely secure code is almost impossible. As a consequence, a culture of openness is the best defence, because it helps spread fixes. When academic researchers contacted a chipmaker working for Volkswagen to tell it that they had found a vulnerability in a remote-car-key system, Volkswagen’s response included a court injunction. Shooting the messenger does not work. Indeed, firms such as Google now offer monetary rewards, or “bug bounties”, to hackers who contact them with details of flaws they have unearthed.
Thirty years ago, computer-makers that failed to take security seriously could claim ignorance as a defence. No longer. The internet of things will bring many benefits. The time to plan for its inevitable flaws is now.
June 23, 2015
May 14, 2015
Moore’s Law challenged yet again
In Bloomberg View, Virginia Postrel looks at the latest “Moore’s Law is over” notions:
Semiconductors are what economists call a “general purpose technology,” like electrical motors. Their effects spread through the economy, reorganizing industries and boosting productivity. The better and cheaper chips become, the greater the gains rippling through every enterprise that uses computers, from the special-effects houses producing Hollywood magic to the corner dry cleaners keeping track of your clothes.
Moore’s Law, which marked its 50th anniversary on Sunday, posits that computing power increases exponentially, with the number of components on a chip doubling every 18 months to two years. It’s not a law of nature, of course, but a kind of self-fulfilling prophecy, driving innovative efforts and customer expectations. Each generation of chips is far more powerful than the previous, but not more expensive. So the price of actual computing power keeps plummeting.
At least that’s how it seemed to be working until about 2008. According to the producer price index compiled by the Bureau of Labor Statistics, the price of the semiconductors used in personal computers fell 48 percent a year from 2000 to 2004, 29 percent a year from 2004 to 2008, and a measly 8 percent a year from 2008 to 2013.
The sudden slowdown presents a puzzle. It suggests that the semiconductor business isn’t as innovative as it used to be. Yet engineering measures of the chips’ technical capabilities have shown no letup in the rate of improvement. Neither have tests of how the semiconductors perform on various computing tasks.
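To put rough numbers on that doubling cadence, here is a small back-of-the-envelope sketch (my own arithmetic, not from Postrel’s column) of what “doubling every 18 months to two years” implies over a decade, and what annual price decline per unit of computing power it corresponds to if each new generation sells for roughly the same price.

```python
# Back-of-the-envelope: growth after 10 years at a given doubling period, and
# the implied annual price decline per component if chip prices stay constant.
def growth_factor(years: float, doubling_period_years: float) -> float:
    """Multiplicative increase in components after `years`."""
    return 2 ** (years / doubling_period_years)

for period in (1.5, 2.0):  # 18 months and 24 months
    factor = growth_factor(10, period)
    # If each year's chip has 2**(1/period) times the components at the same
    # price, the price per component falls by this fraction annually.
    annual_price_drop = 1 - 1 / 2 ** (1 / period)
    print(f"doubling every {period} years: "
          f"{factor:,.0f}x more components in 10 years, "
          f"~{annual_price_drop:.0%} cheaper per component each year")
```

On that arithmetic, a steady doubling corresponds to per-component price declines of roughly 29 to 37 percent a year, comparable to the BLS figures for 2004 to 2008 and far above the 8 percent a year measured after 2008, which is what makes the slowdown look so stark.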
April 19, 2015
The latest “breakthrough” in helping schizophrenics take their medicine
Scott Alexander recently attended a local psychiatry conference, with some essential themes being emphasized:
This conference consisted of a series of talks about all the most important issues of the day, like ‘The Menace Of Psychologists Being Allowed To Prescribe Medication’, ‘How To Be An Advocate For Important Issues Affecting Your Patients Such As The Possibility That Psychologists Might Be Allowed To Prescribe Them Medication’, and ‘Protecting Members Of Disadvantaged Communities From Psychologists Prescribing Them Medication’.
As somebody who’s noticed that the average waiting list for a desperately ill person to see a psychiatrist is approaching the twelve month mark in some places, I was pretty okay with psychologists prescribing medication. The scare stories about how psychologists might prescribe medications unsafely didn’t have much effect on me, since I continue to believe that putting antidepressants in a vending machine would be a more safety-conscious system than what we have now (a vending machine would at least limit antidepressants to people who have $1.25 in change; the average primary care doctor is nowhere near that selective). Annnnnyway, this made me kind of uncomfortable at the conference and I Struck A Courageous Blow Against The Cartelization Of Medicine by sneaking out without putting my name on their mailing list.
But before I did, I managed to take some notes about what’s going on in the wider psychiatric world, including:
– The newest breakthrough in ensuring schizophrenic people take their medication (a hard problem!) is bundling the pills with an ingestible computer chip that transmits data from the patient’s stomach. It’s a bold plan, somewhat complicated by the fact that one of the most common symptoms of schizophrenia is the paranoid fear that somebody has implanted a chip in your body to monitor you. Can you imagine being a schizophrenic guy who has to explain to your new doctor that your old doctor put computer chips in your pills to monitor you? Yikes. If they go through with this, I hope they publish the results in the form of a sequel to The Three Christs of Ypsilanti.
– The same team is working on a smartphone app to detect schizophrenic relapses. The system uses GPS to monitor location, accelerometer to detect movements, and microphone to check tone of voice and speaking pattern, then throws it into a machine learning system that tries to differentiate psychotic from normal behavior (for example, psychotic people might speak faster, or rock back and forth a lot). Again, interesting idea. But again, one of the most common paranoid schizophrenic delusions is that their electronic devices are monitoring everything they do. If you make every one of a psychotic person’s delusions come true, such that they no longer have any beliefs that do not correspond to reality, does that technically mean you’ve cured them? I don’t know, but I’m glad we have people investigating this important issue.
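For a rough sense of what such a system involves, here is a minimal, entirely hypothetical sketch of the pipeline described above: summarize each day’s sensor streams (location, movement, speech) into a feature vector, then train a classifier to flag days that look like a possible relapse. The feature names, labelling rule, and data are all invented for illustration; nothing here comes from the actual research project.

```python
# A minimal, entirely hypothetical sketch of a relapse-detection pipeline:
# per-day sensor summaries -> feature vector -> binary classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic per-day features: [hours away from home (GPS),
# movement variance (accelerometer), speech rate in words/min (microphone)].
n_days = 400
X = rng.normal(loc=[4.0, 1.0, 130.0], scale=[2.0, 0.5, 20.0], size=(n_days, 3))

# Toy stand-in for clinician labels: fast speech or agitated movement = "relapse day".
y = ((X[:, 2] > 150) | (X[:, 1] > 1.8)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

In a real deployment the hard work is in the sensor feature extraction and the clinical labelling, not the classifier; the sketch only shows the shape of the data flow.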
February 23, 2015
The “Internet of Things” (That May Or May Not Let You Do That)
Cory Doctorow is concerned about some of the possible developments within the “Internet of Things” that should concern us all:
The digital world has been colonized by a dangerous idea: that we can and should solve problems by preventing computer owners from deciding how their computers should behave. I’m not talking about a computer that’s designed to say, “Are you sure?” when you do something unexpected — not even one that asks, “Are you really, really sure?” when you click “OK.” I’m talking about a computer designed to say, “I CAN’T LET YOU DO THAT DAVE” when you tell it to give you root, to let you modify the OS or the filesystem.
Case in point: the cell-phone “kill switch” laws in California and Minnesota, which require manufacturers to design phones so that carriers or manufacturers can push an over-the-air update that bricks the phone without any user intervention, a measure designed to deter cell-phone thieves. Early data suggests that the law is effective in preventing this kind of crime, but at a high and largely needless (and ill-considered) price.
To understand this price, we need to talk about what “security” is, from the perspective of a mobile device user: it’s a whole basket of risks, including the physical threat of violence from muggers; the financial cost of replacing a lost device; the opportunity cost of setting up a new device; and the threats to your privacy, finances, employment, and physical safety from having your data compromised.
The current kill-switch regime puts a lot of emphasis on the physical risks, and treats risks to your data as unimportant. It’s true that the physical risks associated with phone theft are substantial, but if a catastrophic data compromise doesn’t strike terror into your heart, it’s probably because you haven’t thought hard enough about it — and it’s a sure bet that this risk will only increase in importance over time, as you bind your finances, your access controls (car ignition, house entry), and your personal life more tightly to your mobile devices.
That is to say, phones are only going to get cheaper to replace, while mobile data breaches are only going to get more expensive.
It’s a mistake to design a computer to accept instructions over a public network that its owner can’t see, review, and countermand. When every phone has a back door and can be compromised by hacking, social-engineering, or legal-engineering by a manufacturer or carrier, then your phone’s security is only intact for so long as every customer service rep is bamboozle-proof, every cop is honest, and every carrier’s back end is well designed and fully patched.
January 7, 2015
Cory Doctorow on the dangers of legally restricting technologies
In Wired, Cory Doctorow explains why bad legal precedents from more than a decade ago are making us more vulnerable rather than safer:
We live in a world made of computers. Your car is a computer that drives down the freeway at 60 mph with you strapped inside. If you live or work in a modern building, computers regulate its temperature and respiration. And we’re not just putting our bodies inside computers — we’re also putting computers inside our bodies. I recently exchanged words in an airport lounge with a late arrival who wanted to use the sole electrical plug, which I had beat him to, fair and square. “I need to charge my laptop,” I said. “I need to charge my leg,” he said, rolling up his pants to show me his robotic prosthesis. I surrendered the plug.
You and I and everyone who grew up with earbuds? There’s a day in our future when we’ll have hearing aids, and chances are they won’t be retro-hipster beige transistorized analog devices: They’ll be computers in our heads.
And that’s why the current regulatory paradigm for computers, inherited from the 16-year-old stupidity that is the Digital Millennium Copyright Act, needs to change. As things stand, the law requires that computing devices be designed to sometimes disobey their owners, so that their owners won’t do something undesirable. To make this work, we also have to criminalize anything that might help owners change their computers to let the machines do that supposedly undesirable thing.
This approach to controlling digital devices was annoying back in, say, 1995, when we got the DVD player that prevented us from skipping ads or playing an out-of-region disc. But it will be intolerable and deadly dangerous when our 3-D printers, self-driving cars, smart houses, and even parts of our bodies are designed with the same restrictions. Because those restrictions would change the fundamental nature of computers. Speaking in my capacity as a dystopian science fiction writer: This scares the hell out of me.
December 18, 2014
Admiral Grace Hopper
The US Naval Institute posted an article about the one and only Admiral Grace Hopper earlier this month to mark her birthday:
The typical career arc of a naval officer may run from 25-30 years. Most, however, don’t start at age 35. Yet when it comes to Rear Adm. Grace Hopper, well, the word “typical” just doesn’t apply.
Feisty. Eccentric. Maverick. Brilliant. Precise. Grace Hopper embodied all of those descriptions and more, but perhaps what defined her as much as anything else was the pride she had in wearing the Navy uniform for 43 years. Ironically, Rear Adm. Grace Hopper — “Amazing Grace” as she was known — had to fight to get into the Navy.
Grace Brewster Murray was born into a well-off family in New York on Dec. 9, 1906. She could have followed what many of her peers did during those times: attending college for a year or two, getting married, then devoting their lives to their families and volunteer work.
Instead, Grace’s path would be less traveled. Encouraged to explore her innate curiosity about how things worked, a 7-year-old Grace dismantled all of the family’s alarm clocks trying to put them back together again. Rather than being banished from the practice, she was allowed to keep one clock to practice on.
[…]
When she joined the WAVES in December 1943, Lt. j.g. Grace Hopper was 37 years old. [Kathleen Broome] Williams noted that after graduating at the top of her class of 800 officer candidates in June 1944, Hopper paid homage to Alexander Wilson Russell, her great-grandfather, the admiral who apparently took a “dim view of women and cats” in the Navy, and laid flowers on his grave to “comfort and reassure him.”
Hopper was sent to the Bureau of Ordnance Computation Project at Harvard University under the guidance of Howard Aiken. The Harvard physics and applied mathematics professor helped create the first Automatic Sequence Controlled Calculator (ASCC), better known as the Mark I. He ran a lab where calculations for the design, testing, modification, and analysis of weapons were carried out. Most of the staff were specially trained women called computers. “So the first ‘computers’ were women who did the calculating on desk calculators,” Williams said. And the time it took for the computers to calculate was called “girl hours.”
What happened next put Hopper on a new path that would define the rest of her life, according to a passage in the book Improbable Warriors: Women Scientists in the U.S. Navy during World War II, also by Williams.
On July 2, 1944, Hopper reported to duty and met Aiken.
“That’s a computing engine,” Aiken snapped at Hopper, pointing to the Mark I. “I would be delighted to have the coefficients for the interpolation of the arc tangent by next Thursday.”
Hopper was a mathematician, but what she wasn’t was a computer programmer. Aiken gave her a codebook, and as Hopper put it, a week to learn “how to program the beast and get a program running.”
Hopper overcame her lack of programming skills the same way she always tackled other obstacles: by being persistent and stopping at nothing to solve problems. She eventually would become well-versed in how the machine operated, all 750,000 parts, 530 miles of wire and 3 million wire connections crammed into a machine that was 8 feet tall and 50 feet wide.
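For a sense of the kind of job Aiken was handing her, “coefficients for the interpolation of the arc tangent” is the sort of table a modern numerical library produces in a few lines. The sketch below is only a present-day analogue of the task under my own assumptions, not anything Hopper wrote or the method the Mark I used.

```python
# A present-day analogue of the assignment: compute coefficients so arctan(x)
# can be evaluated by a cheap polynomial on [0, 1] instead of its slowly
# converging series, the kind of table human "computers" and then the Mark I
# were asked to produce.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
coeffs = np.polynomial.chebyshev.chebfit(x, np.arctan(x), deg=7)

print("interpolation coefficients:", np.round(coeffs, 6))
print("approx arctan(0.5):", np.polynomial.chebyshev.chebval(0.5, coeffs))
print("actual arctan(0.5):", np.arctan(0.5))
```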
December 10, 2014
QotD: Quality, innovation, and progress
Measured by practically any physical metric, from the quality of the food we eat to the health care we receive to the cars we drive and the houses we live in, Americans are not only wildly rich, but radically richer than we were 30 years ago, to say nothing of 50 or 75 years ago. And so is much of the rest of the world. That such progress is largely invisible to us is part of the genius of capitalism — and it is intricately bound up with why, under the system based on selfishness, avarice, and greed, we do such a remarkably good job taking care of one another, while systems based on sharing and common property turn into miserable, hungry prison camps.
We treat the physical results of capitalism as though they were an inevitability. In 1955, no captain of industry, prince, or potentate could buy a car as good as a Toyota Camry, to say nothing of a 2014 Mustang, the quintessential American Everyman’s car. But who notices the marvel that is a Toyota Camry? In the 1980s, no chairman of the board, president, or prime minister could buy a computer as good as the cheapest one for sale today at Best Buy. In the 1950s, American millionaires did not have access to the quality and variety of food consumed by Americans of relatively modest means today, and the average middle-class household spent a much larger share of its income buying far inferior groceries. Between 1973 and 2008, the average size of an American house increased by more than 50 percent, even as the average number of people living in it declined. Things like swimming pools and air conditioning went from being extravagances for tycoons and movie stars to being common or near-universal. In his heyday, Howard Hughes didn’t have as good a television as you do, and the children of millionaires for generations died from diseases that for your children are at most an inconvenience. As the first 199,746 or so years of human history show, there is no force of nature ensuring that radical material progress happens as it has for the past 250 years. Technological progress does not drive capitalism; capitalism drives technological progress — and most other kinds of progress, too.
Kevin D. Williamson, “Welcome to the Paradise of the Real: How to refute progressive fantasies — or, a red-pill economics”, National Review, 2014-04-24