Quotulatiousness

July 11, 2014

Remember the Y2K bug? It just caused draft notices to be sent to thousands of dead men

Filed under: Bureaucracy, Government, USA — Nicholas @ 07:57

This is a story that by rights should have been published at the beginning of April (except it actually happened):

A year 2000-related bug has caused the US military to send more than 14,000 letters of conscription to men who were all born in the 1800s and died decades ago.

Shocked residents of Pennsylvania began receiving letters ordering their great-grandparents to register for the US military draft on pain of “fine and imprisonment.”

“I said, ‘Geez, what the hell is this about?’” Chuck Huey, 73, of Kingston, Pennsylvania, told the Associated Press when he received a letter for his late grandfather Bert Huey, born in 1894 and a first world war veteran who died at the age of 100 in 1995.

“It said he was subject to heavy fines and imprisonment if he didn’t sign up for the draft board,” exclaimed Huey. “We were just totally dumbfounded.”

The US Selective Service System, which sent the letters in error, automatically handles the drafting of US citizens and other US residents who are eligible for conscription. The cause of the error was narrowed down to a Y2K-like bug at the Pennsylvania Department of Transportation (PDT).

A clerk at the PDT failed to select a century during the transfer of 400,000 records to the Selective Service, producing 1990s records for men born a century earlier.
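For the programmers in the audience, this is the classic two-digit-year trap: a two-digit birth year means nothing without an explicit century, and any default silently picks one for you. A minimal sketch of the failure (the field names and the 1900 default are my invention, not PDT’s actual code):

```python
# Toy reconstruction of the century bug (invented names; not PDT's code).
# A two-digit birth year gets expanded with a default century when the
# operator doesn't choose one, so 1894 quietly becomes 1994.

def expand_year(two_digit_year, century=None):
    if century is None:
        century = 1900  # silent default - the real bug is defaulting at all
    return century + two_digit_year

print(expand_year(94))        # 1994: Bert Huey, born 1894, now draft age
print(expand_year(94, 1800))  # 1894: what the transfer should have produced
```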

June 18, 2014

The liability concern in the future of driverless cars

Filed under: Law, Technology — Nicholas @ 08:04

Tim Worstall asks when it would be appropriate for your driverless car to kill you:

Owen Barder points out a quite delightful problem that we’re all going to have to come up with some collective answer to over the driverless cars coming from Google and others. Just when is it going to be acceptable that the car kills you, the driver, or someone else? This is a difficult public policy question and I’m really not sure who the right people to be trying to solve it are. We could, I guess, given that it is a public policy question, turn it over to the political process. It is, after all, there to decide on such questions for us. But given the power of the tort bar over that process I’m not sure that we’d actually like the answer we got. For it would most likely mean that we never do get driverless cars, at least not in the US.

The basic background here is that driverless cars are likely to be hugely safer than the current human directed versions. For most accidents come about as a result of driver error. So, we expect the number of accidents to fall considerably as the technology rolls out. This is great, we want this to happen. However, we’re not going to end up with a world of no car accidents. Which leaves us with the problem of how do we program the cars to work when there is unavoidably going to be an accident?

[…]

So we actually end up with two problems here. The first being the one that Barder has outlined, which is that there’s an ethical question to be answered over how the programming decisions are made. Seriously, under what circumstances should a driverless car, made by Google or anyone else, be allowed to kill you or anyone else? The basic Trolley Problem is easy enough: kill fewer people by preference. But when one is necessary, which one? And then a second problem, which is that the people who have done the coding are going to have to take legal liability for that decision they’ve made. And given the ferocity of the plaintiff’s bar at times I’m not sure that anyone will really be willing to make that decision and thus adopt that potential liability.

Clearly, this needs to be sorted out at the political level. Laws need to be made clarifying the situation. And hands up everyone who thinks that the current political gridlock is going to manage that in a timely manner?

Quite.
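To make the programming question concrete, here’s a deliberately naive sketch of the “kill fewer people by preference” rule. Everything in it is invented for illustration, nobody’s actual planning code, but note where it falls over: the tie-break is exactly where Worstall’s liability problem lives.

```python
# Naive casualty-minimizing chooser (illustration only).

def choose_maneuver(options):
    """Pick the maneuver with the fewest expected casualties."""
    least = min(o["casualties"] for o in options)
    candidates = [o for o in options if o["casualties"] == least]
    if len(candidates) == 1:
        return candidates[0]
    # The hard case: swerve into the wall (killing the occupant) or hold
    # course (killing a pedestrian)? Whatever rule goes here is a decision
    # someone has to take legal responsibility for.
    raise NotImplementedError("tie-breaking is a policy question, not a coding one")

try:
    choose_maneuver([
        {"action": "hold course", "casualties": 1},
        {"action": "swerve",      "casualties": 1},
    ])
except NotImplementedError as e:
    print(e)
```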

June 9, 2014

QotD: Economic modelling

Filed under: Economics, Gaming, Quotations — Nicholas @ 07:03

If we think of ourselves as empiricists who judge the value of the theory on the basis of how well it predicts, then we should have ditched economic models years ago. Never have our models managed with data to predict the major turning points, ever, in the history of capitalism. So if we were honest, we should simply accept that and rethink our approach.

But actually, I think they’re even worse. We can’t even predict the past very well using our models. Economic models are failing to model the past in a way that can explain the past. So what we end up doing with our economic models is retrofitting the data and our own prejudices about how the economy works.

This is why I’m saying that this profession of mine is not really anywhere near astronomy. It’s much closer to mathematized superstition, organized superstition, which has a priesthood to replicate on the basis of how well we learn the rituals.

Yanis Varoufakis, talking to Peter Suderman, “A Multiplayer Game Environment Is Actually a Dream Come True for an Economist”, Reason, 2014-05-30.

June 4, 2014

Sarcasm-detecting software wanted

Filed under: Media, Technology — Nicholas @ 09:02

Charles Stross discusses some of the second-order effects should the US Secret Service actually get the sarcasm-detection software they’re reportedly looking for:

… But then the Internet happened, and it just so happened to coincide with a flowering of highly politicized and canalized news media channels such that at any given time, whoever is POTUS, around 10% of the US population are convinced that they’re a baby-eating lizard-alien in a fleshsuit who is plotting to bring about the downfall of civilization, rather than a middle-aged male politician in a business suit.

Well now, here’s the thing: automating sarcasm detection is easy. It’s so easy they teach it in first year computer science courses; it’s an obvious application of AI. (You just get your Turing-test-passing AI that understands all the shared assumptions and social conventions that human-human conversation relies on to identify those statements that explicitly contradict beliefs that the conversationalist implicitly holds. So if I say “it’s easy to earn a living as a novelist” and the AI knows that most novelists don’t believe this and that I am a member of the set of all novelists, the AI can infer that I am being sarcastic. Or I’m an outlier. Or I’m trying to impress a date. Or I’m secretly plotting to assassinate the POTUS.)
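(Stross’s “easy” recipe, rendered as the toy it is. The knowledge base and names below are invented, and the Turing-test-passing AI doing the real work is, of course, elided.)

```python
# Sarcasm as contradiction-of-implicit-belief, per Stross's recipe.
# Note the rule can't tell sarcasm from his other three readings.

GROUP_BELIEFS = {"novelists": {"it's easy to earn a living as a novelist": False}}
SPEAKER_GROUPS = {"stross": ["novelists"]}

def maybe_sarcastic(speaker, statement):
    for group in SPEAKER_GROUPS.get(speaker, []):
        if GROUP_BELIEFS.get(group, {}).get(statement) is False:
            return True  # ...or an outlier, or impressing a date, or plotting
    return False

print(maybe_sarcastic("stross", "it's easy to earn a living as a novelist"))  # True
```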

Of course, we in the real world know that shaved apes like us never saw a system we didn’t want to game. So in the event that sarcasm detectors ever get a false positive rate of less than 99% (or a false negative rate of less than 1%) I predict that everybody will start deploying sarcasm as a standard conversational gambit on the internet.

Wait … I thought everyone already did?

Trolling the secret service will become a competitive sport, the goal being to not receive a visit from the SS in response to your totally serious threat to kill the resident of 1600 Pennsylvania Avenue. Al Qaida terrrrst training camps will hold tutorials on metonymy, aggressive irony, cynical detachment, and sarcasm as a camouflage tactic for suicide bombers. Post-modernist pranks will draw down the full might of law enforcement by mistake, while actual death threats go encoded as LOLCat macros. Any attempt to algorithmically detect sarcasm will fail because sarcasm is self-referential and the awareness that a sarcasm detector may be in use will change the intent behind the message.

As the very first commenter points out, a problem with this is that a substantial proportion of software developers (as indicated by their position on the Asperger/Autism spectrum) find it very difficult to detect sarcasm in real life…

Bruce Schneier on the human side of the Heartbleed vulnerability

Filed under: Technology — Nicholas @ 07:24

Reposting at his own site an article he did for The Mark News:

The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.

It was a software insecurity, but the problem was entirely human.

Software has vulnerabilities because it’s written by people, and people make mistakes — thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.

In retrospect, the mistake should have been obvious, and it’s amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.

The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google’s security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.

When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it’s announced.
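Schneier doesn’t spell out the mistake itself here, but it’s well documented: OpenSSL’s heartbeat handler echoed back however many bytes the request claimed to contain, never checking that claim against the bytes actually received. A simplified Python simulation of the pattern (the real bug, CVE-2014-0160, was a missing bounds check in C):

```python
# Simplified simulation of the Heartbleed pattern; not OpenSSL's actual code.

def server_memory(request):
    # The request lands in memory that happens to sit next to secrets.
    return request + b"...private key material...session cookies..."

def heartbeat_reply(memory):
    claimed_len = int.from_bytes(memory[1:3], "big")  # attacker-controlled
    return memory[3:3 + claimed_len]                  # BUG: no bounds check

# Attacker sends a 1-byte payload but claims 64 bytes...
request = b"\x01" + (64).to_bytes(2, "big") + b"A"
print(heartbeat_reply(server_memory(request)))  # ...and reads the "secrets"

def heartbeat_reply_fixed(memory, actual_payload_len):
    claimed_len = int.from_bytes(memory[1:3], "big")
    if claimed_len > actual_payload_len:
        return b""  # silently discard, as the patched code does
    return memory[3:3 + claimed_len]
```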

May 9, 2014

QotD: Real history and economic modelling

Filed under: Economics, History, Media, Quotations — Nicholas @ 08:23

I am not an economist. I am an economic historian. The economist seeks to simplify the world into mathematical models — in Krugman’s case models erected upon the intellectual foundations laid by John Maynard Keynes. But to the historian, who is trained to study the world “as it actually is”, the economist’s model, with its smooth curves on two axes, looks like an oversimplification. The historian’s world is a complex system, full of non-linear relationships, feedback loops and tipping points. There is more chaos than simple causation. There is more uncertainty than calculable risk. For that reason, there is simply no way that anyone — even Paul Krugman — can consistently make accurate predictions about the future. There is, indeed, no such thing as the future, just plausible futures, to which we can only attach rough probabilities. This is a caveat I would like ideally to attach to all forward-looking conjectural statements that I make. It is the reason I do not expect always to be right. Indeed, I expect often to be wrong. Success is about having the judgment and luck to be right more often than you are wrong.

Niall Ferguson, “Why Paul Krugman should never be taken seriously again”, The Spectator, 2013-10-13

April 11, 2014

Open source software and the Heartbleed bug

Filed under: Technology — Nicholas @ 07:03

Some people are claiming that the Heartbleed bug proves that open source software is a failure. ESR quickly addresses that idiotic claim:

I actually chuckled when I read the rumor that the few anti-open-source advocates still standing were crowing about the Heartbleed bug, because I’ve seen this movie before after every serious security flap in an open-source tool. The script, which includes a bunch of people indignantly exclaiming that many-eyeballs is useless because bug X lurked in a dusty corner for Y months, is so predictable that I can anticipate a lot of the lines.

The mistake being made here is a classic example of Frederic Bastiat’s “things seen versus things unseen”. Critics of Linus’s Law overweight the bug they can see and underweight the high probability that equivalently positioned closed-source security flaws they can’t see are actually far worse, just so far undiscovered.

That’s how it seems to go whenever we get a hint of the defect rate inside closed-source blobs, anyway. As a very pertinent example, in the last couple months I’ve learned some things about the security-defect density in proprietary firmware on residential and small business Internet routers that would absolutely curl your hair. It’s far, far worse than most people understand out there.

[…]

Ironically enough this will happen precisely because the open-source process is working … while, elsewhere, bugs that are far worse lurk in closed-source router firmware. Things seen vs. things unseen…

Returning to Heartbleed, one thing conspicuously missing from the downshouting against OpenSSL is any pointer to an implementation that is known to have a lower defect rate over time. This is for the very good reason that no such empirically-better implementation exists. What is the defect history on proprietary SSL/TLS blobs out there? We don’t know; the vendors aren’t saying. And we can’t even estimate the quality of their code, because we can’t audit it.

The response to the Heartbleed bug illustrates another huge advantage of open source: how rapidly we can push fixes. The repair for my Linux systems was a push-one-button fix less than two days after the bug hit the news. Proprietary-software customers will be lucky to see a fix within two months, and all too many of them will never see a fix patch.

Update: There are lots of sites offering tools to test whether a given site is vulnerable to the Heartbleed bug, but you need to step carefully there, as there’s a thin line between what’s legal in some countries and what counts as an illegal break-in attempt:

Websites and tools that have sprung up to check whether servers are vulnerable to OpenSSL’s mega-vulnerability Heartbleed have thrown up anomalies in computer crime law on both sides of the Atlantic.

Both the US Computer Fraud and Abuse Act and its UK equivalent the Computer Misuse Act make it an offence to test the security of third-party websites without permission.

Testing to see what version of OpenSSL a site is running, and whether it also supports the vulnerable Heartbeat protocol, would be legal. But doing anything more active — without permission from website owners — would take security researchers onto the wrong side of the law.

And you shouldn’t just rush out and change all your passwords right now (you’ll probably need to do it, but the timing matters):

Heartbleed is a catastrophic bug in widely used OpenSSL that creates a means for attackers to lift passwords, crypto-keys and other sensitive data from the memory of secure server software, 64KB at a time. The mega-vulnerability was patched earlier this week, and software should be updated to use the new version, 1.0.1g. But to fully clean up the problem, admins of at-risk servers should generate new public-private key pairs, destroy their session cookies, and update their SSL certificates before telling users to change every potentially compromised password on the vulnerable systems.

April 3, 2014

ESR reviews Jeremy Rifkin’s latest book

Filed under: Books, Economics, Media, Technology — Nicholas @ 10:46

The publisher sent a copy of The Zero Marginal Cost Society along with a note that Rifkin himself wanted ESR to receive a copy (because Rifkin thinks ESR is a good representative of some of the concepts in the book). ESR isn’t impressed:

In this book, Rifkin is fascinated by the phenomenon of goods for which the marginal cost of production is zero, or so close to zero that it can be ignored. All of the present-day examples of these he points at are information goods — software, music, visual art, novels. He joins this to the overarching obsession of all his books, which are variations on a theme of “Let us write an epitaph for capitalism”.

In doing so, Rifkin effectively ignores what capitalists do and what capitalism actually is. “Capital” is wealth paying for setup costs. Even for pure information goods those costs can be quite high. Music is a good example; it has zero marginal cost to reproduce, but the first copy is expensive. Musicians must own expensive instruments, be paid to perform, and require other capital goods such as recording studios. If those setup costs are not reliably priced into the final good, production of music will not remain economically viable.

[…]

Rifkin cites me in his book, but it is evident that he almost completely misunderstood my arguments in two different ways, both of which bear on the premises of his book.

First, software has a marginal cost of production that is effectively zero, but that’s true of all software rather than just open source. What makes open source economically viable is the strength of secondary markets in support and related services. Most other kinds of information goods don’t have these. Thus, the economics favoring open source in software are not universal even in pure information goods.

Second, even in software — with those strong secondary markets — open-source development relies on the capital goods of software production being cheap. When computers were expensive, the economics of mass industrialization and its centralized management structures ruled them. Rifkin acknowledges that this is true of a wide variety of goods, but never actually grapples with the question of how to pull capital costs of those other goods down to the point where they no longer dominate marginal costs.

There are two other, much larger, holes below the waterline of Rifkin’s thesis. One is that atoms are heavy. The other is that human attention doesn’t get cheaper as you buy more of it. In fact, the opposite tends to be true — which is exactly why capitalists can make a lot of money by substituting capital goods for labor.

These are very stubborn cost drivers. They’re the reason Rifkin’s breathless hopes for 3-D printing will not be fulfilled. Because 3-D printers require feedstock, the marginal cost of producing goods with them has a floor well above zero. That ABS plastic, or whatever, has to be produced. Then it has to be moved to where the printer is. Then somebody has to operate the printer. Then the finished good has to be moved to the point of use. None of these operations has a cost that is driven to zero, or near zero at scale. 3-D printing can increase efficiency by outcompeting some kinds of mass production, but it can’t make production costs go away.
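ESR’s point here reduces to very simple arithmetic: “zero marginal cost” requires every per-unit term to go to zero, and for atoms none of them do. A back-of-envelope version (all numbers invented):

```python
# Back-of-envelope marginal cost of one 3-D printed part.
# The numbers are invented placeholders; the point is only that the sum
# has a floor well above zero no matter how good the printer gets.

per_unit_costs = {
    "feedstock (ABS plastic)": 2.50,
    "inbound freight":         0.40,
    "operator time":           1.75,
    "outbound shipping":       1.10,
}
print(f"marginal cost per unit: ${sum(per_unit_costs.values()):.2f}")  # never ~$0
```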

March 25, 2014

Tech culture and ageism

Filed under: Business, Technology, USA — Nicholas @ 07:56

Noam Scheiber examines the fanatic devotion to youth in (some parts of) the high tech culture:

Silicon Valley has become one of the most ageist places in America. Tech luminaries who otherwise pride themselves on their dedication to meritocracy don’t think twice about deriding the not-actually-old. “Young people are just smarter,” Facebook CEO Mark Zuckerberg told an audience at Stanford back in 2007. As I write, the website of ServiceNow, a large Santa Clara–based I.T. services company, features the following advisory in large letters atop its “careers” page: “We Want People Who Have Their Best Work Ahead of Them, Not Behind Them.”

And that’s just what gets said in public. An engineer in his forties recently told me about meeting a tech CEO who was trying to acquire his company. “You must be the token graybeard,” said the CEO, who was in his late twenties or early thirties. “I looked at him and said, ‘No, I’m the token grown-up.’”

Investors have also become addicted to the youth movement:

The economics of the V.C. industry help explain why. Investing in new companies is fantastically risky, and even the best V.C.s fail a large majority of the time. That makes it essential for the returns on successes to be enormous. Whereas a 500 percent return on a $2 million investment (or “5x,” as it’s known) would be considered remarkable in any other line of work, the investments that sustain a large V.C. fund are the “unicorns” and “super-unicorns” that return 100x or 1,000x — the Googles and the Facebooks.

And this is where finance meets what might charitably be called sociology but is really just Silicon Valley mysticism. Finding themselves in the position of chasing 100x or 1,000x returns, V.C.s invariably tell themselves a story about youngsters. “One of the reasons they collectively prefer youth is because youth has the potential for the black swan,” one V.C. told me of his competitors. “It hasn’t been marked down to reality yet. If I was at Google for five years, what’s the chance I would be a black swan? A lot lower than if you never heard of me. That’s the collective mentality.”
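The fund arithmetic behind that mentality is worth a moment. Suppose (invented numbers) a fund puts $2 million into each of 50 companies and all but one go to zero:

```python
# Toy venture-fund arithmetic (all numbers invented): why modest
# multiples can't carry a fund with a high failure rate.

investments, check_size = 50, 2_000_000
deployed = investments * check_size                # $100M deployed

for multiple in (5, 100, 1000):
    winner = multiple * check_size                 # one winner, rest zero
    print(f"{multiple:>4}x winner -> fund returns {winner / deployed:.2f}x")
# 5x -> 0.10x (disaster); 100x -> 2.00x; 1000x -> 20.00x
```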

Some of the corporate cultures sound more like playgroups than workgroups:

Whatever the case, the veneration of youth in Silicon Valley now seems way out of proportion to its usefulness. Take Dropbox, which an MIT alumnus named Drew Houston co-founded in 2007, after he got tired of losing access to his files whenever he forgot a thumb drive. Dropbox quickly caught on among users and began to vacuum up piles of venture capital. But the company has never quite outgrown its dorm-room vibe, even now that it houses hundreds of employees in an 85,000-square-foot space. Dropbox has a full-service jamming studio and observes a weekly ritual known as whiskey Fridays. Job candidates have complained about being interviewed in conference rooms with names like “The Break-up Room” and the “Bromance Chamber.” (A spokesman says the names were recently changed.)

Once a year, Houston, who still wears his chunky MIT class ring, presides over “Hack Week,” during which Dropbox headquarters turns into the world’s best-capitalized rumpus room. Employees ride around on skateboards and scooters, play with Legos at all hours, and generally tool around with whatever happens to interest them, other than work, which they are encouraged to set aside. “I’ve been up for about forty hours working on Dropbox Jeopardy,” one engineer told a documentarian who filmed a recent Hack Week. “It’s close to nearing insanity, but it feels worth it.”

It’s safe to say that the reigning sensibility at Dropbox has conquered more or less every corner of the tech world. The ping-pong playing can be ceaseless. The sexual mores are imported from college—“They’ll say something like, ‘This has been such a long day. I have to go out and meet some girls, hook up tonight,’ ” says one fortysomething consultant to several start-ups. And the vernacular is steroidally bro-ish. Another engineer in his forties who recently worked at a crowdsourcing company would steel himself anytime he reviewed a colleague’s work. “In programming, you need a throw-away variable,” the engineer explained to me. “So you come up with something quick.” With his co-workers “it would always be ‘dong’ this, ‘dick’ that, ‘balls’ this.”

There’s also the blind spot about having too many youth-focussed firms in the same market:

The most common advice V.C.s give entrepreneurs is to solve a problem they encounter in their daily lives. Unfortunately, the problems the average 22-year-old male programmer has experienced are all about being an affluent single guy in Northern California. That’s how we’ve ended up with so many games (Angry Birds, Flappy Bird, Crappy Bird) and all those apps for what one start-up founder described to me as cooler ways to hang out with friends on a Saturday night.

H/T to Kathy Shaidle for the link.

December 11, 2013

I’ve heard all of these responses many, many times

Filed under: Humour, Technology — Nicholas @ 11:08

This was posted to Google+ the other day, and it’s pretty accurate:

Programmer top 20 replies

The legacy of id Software’s Doom

Filed under: Gaming, Technology — Nicholas @ 09:10

Following up from yesterday’s post on the 20th anniversary, The Economist also sings the praises of Doom:

Yet for Babbage, the biggest innovation of Doom was something subtler. Video games, then and now, are mainly passive entertainment products, a bit like a more interactive television. You buy one and play it until you either beat it or get bored. But Doom was popular enough that eager users delved into its inner workings, hacking together programs that would let people build their own levels. Drawing something in what was, essentially, a rudimentary CAD program, and then running around inside your own creation, was an astonishing, liberating experience. Like almost everybody else, Babbage’s first custom level was an attempt to reconstruct his own house.

Other programs allowed you to play around with the game itself, changing how weapons worked, or how monsters behaved. For a 12-year-old who liked computers but was rather fuzzy about how they actually worked, being able to pull back the curtain like this was revelatory. Tinkering around with Doom was a wonderful introduction to the mysteries of computers and how their programs were put together. Rather than trying to stop this unauthorised meddling, id embraced it. Its next game, Quake, was designed to actively encourage it.

The modification, or “modding” movement that Doom and Quake inspired heavily influenced the growing games industry. Babbage knows people who got jobs in the industry off the back of their ability to remix others’ creations. (Tim Willits, id’s current creative director, was hired after impressing the firm with his home-brewed Doom maps.) Commercial products — even entire genres of games — exist that trace their roots back to a fascinated teenager playing around in his (or, more rarely, her) bedroom.

But it had more personal effects, too. Being able to alter the game transformed the player from a mere passive consumer of media into a producer in his own right, something that is much harder in most other kinds of media. Amateur filmmakers need expensive kit and a willing cast to indulge their passion. Mastering a musical instrument takes years of practice; starting a band requires like-minded friends. Writing a novel looks easy, until you try it. But creating your own Doom mod was easy enough that anyone could learn it in a day or two. With a bit of practice, it was possible to churn out professional-quality stuff. “User-generated content” was a big buzzword a few years back, but once again, Doom got there first.

December 10, 2013

Twenty years of Doom

Filed under: Gaming, History — Nicholas @ 12:26

At The Register, Lucy Orr gets all nostalgic for id Software’s Doom, which turned 20 today:

Doom wasn’t short on story, never mind the gore and gunfire to follow. I particularly enjoyed the fact that my own government had fucked things up by messing where they shouldn’t and opening a portal to hell. Damn, it’s just me left to go ultraviolent and push the legions of hell back into fiery limbo.

Faced with dual chain gun-wielding bulked up Aryans as your foe, Wolfenstein 3D was funny rather than scary. Indeed, I don’t remember being scared by a game until Doom appeared, with its engine capable of dimmed quivering lights and its repugnant textures. The nihilistic tones of Alien 3 echoed through such levels as the toxic refinery. Like the Alien series, Doom’s dark corners allowed my imagination to run wild and consider turning the lights back on.

But Doom had a lot more going for it than a few scary moments, and I don’t just mean those scrambles for the health kit. Being able to carry an army’s worth of gun power is not necessarily realistic, but neither are angry alien demons trying to rip my flesh off. I’m never empty-handed with a chainsaw, a shotgun, a chain-gun, and a rocket launcher at my disposal.

With Doom you were not only introduced to a world of cyber demons but also to death matches — be sure to have the BFG 9000 on hand for that one-shot kill — cooperative gameplay, and a world of player mods including maps and sometimes full remakes.

id Software - Doom 1993

November 20, 2013

An app like this may justify the existence of Google Glass

Filed under: Randomness, Technology — Nicholas @ 08:38

I have a terrible memory for people’s names (and no, it’s not just early senility … I’ve always had trouble remembering names). For example, I’ve been a member of the same badminton club for nearly 15 years and there are still folks there whose names just don’t register: not just new members, but people I’ve played with or against on dozens of occasions. I know them … I just can’t remember their names in a timely fashion. David Friedman suggests that Google Glass might be the solution I need:

I first encountered the solution to my problem in Double Star, a very good novel by Robert Heinlein. It will be made possible, in a higher tech version, by Google glass. The solution is the Farley File, named after FDR’s campaign manager.

A politician such as Roosevelt meets lots of people over the course of his career. For each of them the meeting is an event to be remembered and retold. It is much less memorable to the politician, who cannot possibly remember the details of ten thousand meetings. He can, however, create the illusion of doing so by maintaining a card file with information on everyone he has ever met: The name of the man’s wife, how many children he has, his dog, the joke he told, all the things the politician would have remembered if the meeting had been equally important to him. It is the job of one of the politician’s assistants to make sure that, any time anyone comes to see him, he gets thirty seconds to look over the card.

My version will use more advanced technology, courtesy of Google glass or one of its future competitors. When I subvocalize the key word “Farley,” the software identifies the person I am looking at, shows me his name (that alone would be worth the price) and, next to it, whatever facts about him I have in my personal database. A second trigger, if invoked, runs a quick search of the web for additional information.

Evernote has an application intended to do some of this (Evernote Hello), but it still requires the immersion-breaking act of accessing your smartphone to look up your contact information. Something similar in a Google Glass or equivalent environment might be the perfect solution.
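The lookup itself is trivial once the genuinely hard part, recognizing the face, is assumed away: a Farley file is just a keyed record fetch. A minimal sketch, every name in it hypothetical:

```python
# Minimal Farley-file lookup. recognize() is a stub standing in for the
# hard part (face recognition); all names and fields are hypothetical.

FARLEY_FILE = {
    "person-042": {
        "name": "Bert",
        "spouse": "Mary",
        "notes": "plays badminton Tuesdays; fond of the parrot joke",
    },
}

def recognize(face_image):
    return "person-042"  # assumed: returns a stable person ID

def farley_lookup(face_image):
    record = FARLEY_FILE.get(recognize(face_image))
    if record is None:
        return "(unknown - time to start a card)"
    return f"{record['name']} - spouse {record['spouse']} - {record['notes']}"

print(farley_lookup(face_image=None))  # stand-in for the Glass camera frame
```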

November 4, 2013

QotD: Software quality assurance

Filed under: Business, Government, Quotations, Technology — Nicholas @ 10:13

The fundamental purpose of testing — and, for that matter, of all software quality assurance (QA) deliverables and processes — is to tell you just what you’ve built and whether it does what you think it should do. This is essential, because you can’t inspect a software program the same way you can inspect a house or a car. You can’t touch it, you can’t walk around it, you can’t open the hood or the bedroom door to see what’s inside, you can’t take it out for a spin. There are very few tangible or visible clues to the completeness and reliability of a software system — and so we have to rely on QA activities to tell us how well built the system is.

Furthermore, almost any software system developed nowadays for production is vastly more complex than a house or car — it’s more on the order of complexity of a large petrochemical processing and storage facility, with thousands of possible interconnections, states, and processes. We would be (rightly) terrified if, say, Exxon built such a sprawling oil refining complex near our neighborhood and then started up production having done only a bare minimum of inspection, testing, and trial operations before, during and after construction, offering the explanation that they would wait until after the plant went into production and then handle problems as they cropped up. Yet too often that’s just how large software development projects are run, even though the system in development may well be more complex (in terms of connections, processes, and possible states) than such a petrochemical factory. And while most inadequately tested software systems won’t spew pollutants, poison the neighborhood, catch fire, or explode, they can cripple corporate operations, lose vast sums of money, spark shareholder lawsuits, and open the corporation’s directors and officers to civil and even criminal liability (particularly with the advent of Sarbanes-Oxley).

And that presumes that the system can actually go into production. The software engineering literature and the trade press are replete with well-documented case studies of “software runaways”: large IT re-engineering or development projects that consume tens or hundreds of millions of dollars, or in a few spectacular (government) cases, billions of dollars, over a period of years, before grinding to a halt and being terminated without ever having put a usable, working system into production. So it’s important not to skimp on testing and the other QA-related activities.

Bruce F. Webster, “Obamacare and the Testing Gap”, And Still I Persist…, 2013-10-31

October 29, 2013

Obamacare’s technical issues

Filed under: Government, Technology, USA — Nicholas @ 07:48

A comment at Marginal Revolution has deservedly been promoted to a guest post, discussing the scale of the problems with the Obamacare software:

The real problems are with the back end of the software. When you try to get a quote for health insurance, the system has to connect to computers at the IRS, the VA, Medicaid/CHIP, various state agencies, Treasury, and HHS. They also have to connect to all the health plan carriers to get pre-subsidy pricing. All of these queries receive data that is then fed into the online calculator to give you a price. If any of these queries fails, the whole transaction fails.

Most of these systems are old legacy systems with their own unique data formats. Some have been around since the 1960s, and the people who wrote the code that runs on them are long gone. If one of these old crappy systems takes too long to respond, the transaction times out.
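The structure being described here, N synchronous dependencies where any one can sink the whole request, is easy to sketch. If each of k backends answers in time with probability p, a quote succeeds with probability p^k, which collapses fast. A hedged asyncio sketch (agency names from the comment; everything else invented):

```python
# All-or-nothing fan-out: the quote fails if any one backend query
# fails or times out. Backends here are simulated stubs.

import asyncio, random

BACKENDS = ["IRS", "VA", "Medicaid/CHIP", "Treasury", "HHS", "carrier pricing"]

async def query(backend):
    await asyncio.sleep(random.uniform(0.05, 3.0))  # legacy systems are slow
    return {"backend": backend, "data": "..."}

async def get_quote():
    # One slow mainframe times out the whole transaction.
    return await asyncio.wait_for(
        asyncio.gather(*(query(b) for b in BACKENDS)), timeout=2.0)

try:
    asyncio.run(get_quote())
    print("quote delivered")
except asyncio.TimeoutError:
    print("quote failed: a backend was too slow")
# With each backend too slow about a third of the time,
# P(success) ~= (2/3)**6 ~= 0.09 - most quotes fail.
```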

[…]

When you even contemplate bringing an old legacy system into a large-scale web project, you should do load testing on that system as part of the feasibility process before you ever write a line of production code, because if those old servers can’t handle the load, your whole project is dead in the water if you are forced to rely on them. There are no easy fixes for the fact that a 30 year old mainframe can not handle thousands of simultaneous queries. And upgrading all the back-end systems is a bigger job than the web site itself. Some of those systems are still there because attempts to upgrade them failed in the past. Too much legacy software, too many other co-reliant systems, etc. So if they aren’t going to handle the job, you need a completely different design for your public portal.
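And the load testing being called for doesn’t have to be elaborate to settle the feasibility question: even a crude concurrent probe shows whether a legacy backend falls over at realistic concurrency. A sketch (the query is a stub; you’d point it at a test instance, never at production):

```python
# Crude feasibility-stage load probe; legacy_query() is a stub.

import time
from concurrent.futures import ThreadPoolExecutor

def legacy_query(i):
    time.sleep(0.1)   # stand-in for a real call to the old system
    return True

def load_probe(n_requests=500, concurrency=50):
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        ok = sum(pool.map(legacy_query, range(n_requests)))
    elapsed = time.time() - start
    print(f"{ok}/{n_requests} ok in {elapsed:.1f}s ({n_requests/elapsed:.0f} req/s)")

load_probe()
```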

A lot of focus has been on the front-end code, because that’s the code that we can inspect, and it’s the code that lots of amateur web programmers are familiar with, so everyone’s got an opinion. And sure, it’s horribly written in many places. But in systems like this the problems that keep you up at night are almost always in the back-end integration.

The root problem was horrific management. The end result is a system built incorrectly and shipped without doing the kind of testing that sound engineering practices call for. These aren’t ‘mistakes’, they are the result of gross negligence, ignorance, and the violation of engineering best practices at just about every step of the way.
