Jerry Brito is a sousveillance fan and he thinks you should be too:
The Narrative Clip is a digital camera about the size of a postage stamp that clips to one’s breast pocket or shirt collar and takes a photo every thirty seconds of whatever one’s seeing. The photos are uploaded to the cloud and can be accessed on demand with a smartphone app, making it easy to look up any moment in one’s life. When the project to mass-produce these cameras first hit Kickstarter, I knew I had to have one, and with any luck mine will be arriving in a couple of weeks.
The prospect of having a complete photographic record of my life is compelling for many reasons. I have a terrible memory, especially for faces, so it will be interesting to see if this device can help. There are also moments in life that would be great to relive, but that one can’t – or one doesn’t know one should – be photographing. Narrative’s Instagram feed has some good examples of these. But most importantly, I want to help hasten our inevitable sousveillance future.
Being monitored in everyday life has become inescapable. So, as David Brin points out in The Transparent Society, the question is not whether there should be pervasive monitoring, but who will have access to the data. Will it only be the powerful, who will use the information to control? Or will the rest of us also be able to watch back?
Ideally, perhaps, we would all be left alone to live private lives under no one’s gaze. Short of halting all technological progress, however, that ship has sailed. Mass surveillance is the inevitable result of smaller cameras and microphones, faster processors, and incredibly cheap storage. So if I can’t change that reality, I want to be able to watch back as well.
In Wired, Nicholas Weaver looks back on the way the internet was converted from a passive network infrastructure to a spy agency wonderland:
According to revelations about the QUANTUM program, the NSA can “shoot” (their words) an exploit at any target it desires as his or her traffic passes across the backbone. It appears that the NSA and GCHQ were the first to turn the internet backbone into a weapon; absent Snowdens of their own, other countries may do the same and then say, “It wasn’t us. And even if it was, you started it.”
If the NSA can hack Petrobras, the Russians can justify attacking Exxon/Mobil. If GCHQ can hack Belgacom to enable covert wiretaps, France can do the same to AT&T. If the Canadians target the Brazilian Ministry of Mines and Energy, the Chinese can target the U.S. Department of the Interior. We now live in a world where, if we are lucky, our attackers may be every country our traffic passes through except our own.
Which means the rest of us — and especially any company or individual whose operations are economically or politically significant — are now targets. All cleartext traffic is not just information being sent from sender to receiver, but is a possible attack vector.
The only self-defense from all of the above is universal encryption. Universal encryption is difficult and expensive, but unfortunately necessary.
Encryption doesn’t just keep our traffic safe from eavesdroppers, it protects us from attack. DNSSEC validation protects DNS from tampering, while SSL armors both email and web traffic.
There are many engineering and logistical difficulties involved in encrypting all traffic on the internet, but they are ones we must overcome if we are to defend ourselves from the entities that have weaponized the backbone.
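For readers wondering what "universal encryption" looks like at the nuts-and-bolts level, here's a minimal sketch (mine, not Weaver's) of opening a certificate-verified TLS connection in Python; "example.com" is just a stand-in for whatever server you're talking to:

```python
# A minimal sketch (not from the article): opening a TLS-protected connection
# with certificate verification, using only Python's standard library.
import socket
import ssl

host = "example.com"  # hypothetical host, stands in for any server you talk to

context = ssl.create_default_context()  # verifies certificates against system CAs

with socket.create_connection((host, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        # Everything written to tls_sock is encrypted on the wire,
        # so a backbone eavesdropper sees only ciphertext.
        print(tls_sock.version())                 # e.g. "TLSv1.3"
        print(tls_sock.getpeercert()["subject"])  # the verified certificate
```

The point is that an eavesdropper sitting on the backbone sees only ciphertext, and the certificate check makes it harder to inject a spoofed response at you along the way.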
Ann Althouse finds it amazing that President Obama clearly understands why his campaign website was so effective and why the Obamacare website fails on so many levels, but can’t generalize that knowledge to the whole public/private sphere:
In yesterday’s interview with Chuck Todd, Obama said:
You know, one of the lessons — learned from this whole process on the website — is that probably the biggest gap between the private sector and the federal government is when it comes to I.T. …
Well, the reason is is that when it comes to my campaign, I’m not constrained by a bunch of federal procurement rules, right?
That is, many have pointed out that his campaign website was really good, so why didn’t that mean that he’d be good at setting up a health insurance website? The answer is that the government is bad because the government is hampered by… government!
And how we write — specifications and — and how the — the whole things gets built out. So part of what I’m gonna be looking at is how do we across the board, across the federal government, leap into the 21st century.
I love the combination of: 1. Barely able to articulate what the hell happens inside these computer systems, and 2. Wanting to leap!
Because when it comes to medical records for veterans, it’s still done in paper. Medicaid is still largely done on paper.
When we buy I.T. services generally, it is so bureaucratic and so cumbersome that a whole bunch of it doesn’t work or it ends up being way over cost.
This should have made him sympathetic to the way government burdens private enterprise, but he’s focused on liberating government to take over more of what has been done privately. And yet there’s no plan, no idea about what would suddenly enable government to displace private businesses competing to offer a product people want to buy.
A comment at Marginal Revolution has deservedly been promoted to a guest post discussing the scale of the problems with the Obamacare software:
The real problems are with the back end of the software. When you try to get a quote for health insurance, the system has to connect to computers at the IRS, the VA, Medicaid/CHIP, various state agencies, Treasury, and HHS. They also have to connect to all the health plan carriers to get pre-subsidy pricing. All of these queries receive data that is then fed into the online calculator to give you a price. If any of these queries fails, the whole transaction fails.
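(An aside of my own, not the commenter's: here's a toy sketch of the all-or-nothing fan-out being described. The agency endpoints below are placeholders; the point is that one slow or failed back-end query sinks the whole quote.)

```python
# A toy illustration of the fan-out the comment describes: one quote request
# triggers queries to several back-end systems, and a single timeout or error
# fails the whole transaction. All endpoints here are hypothetical.
import concurrent.futures
import urllib.request

BACKENDS = {
    "irs": "https://irs.example.gov/income",
    "medicaid": "https://medicaid.example.gov/eligibility",
    "carrier": "https://carrier.example.com/pre-subsidy-price",
}

def query(name, url, timeout=5.0):
    # If the legacy system is too slow, urlopen raises and the quote fails.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return name, resp.read()

def price_quote():
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(query, n, u) for n, u in BACKENDS.items()]
        try:
            return dict(f.result() for f in futures)  # any exception propagates
        except Exception as exc:
            raise RuntimeError(f"quote failed: {exc}")  # whole transaction fails
```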
Most of these systems are old legacy systems with their own unique data formats. Some have been around since the 1960s, and the people who wrote the code that runs on them are long gone. If one of these old crappy systems takes too long to respond, the transaction times out.
When you even contemplate bringing an old legacy system into a large-scale web project, you should do load testing on that system as part of the feasibility process before you ever write a line of production code, because if those old servers can't handle the load, your whole project is dead in the water if you are forced to rely on them. There are no easy fixes for the fact that a 30-year-old mainframe cannot handle thousands of simultaneous queries. And upgrading all the back-end systems is a bigger job than the web site itself. Some of those systems are still there because attempts to upgrade them failed in the past. Too much legacy software, too many other co-reliant systems, etc. So if they aren't going to handle the job, you need a completely different design for your public portal.
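(Another aside: the kind of load test the commenter is talking about can be sketched in a few lines — fire a burst of concurrent requests at the legacy system and see how many come back within your timeout, before any production code gets written. The endpoint below is hypothetical.)

```python
# A minimal load-test sketch against a hypothetical legacy endpoint:
# send N concurrent requests and report how many completed within the timeout.
import concurrent.futures
import time
import urllib.request

LEGACY_URL = "https://legacy.example.gov/lookup"   # placeholder, not a real system
CONCURRENCY = 200
TIMEOUT = 5.0

def hit(_):
    start = time.monotonic()
    try:
        urllib.request.urlopen(LEGACY_URL, timeout=TIMEOUT).read()
        return time.monotonic() - start
    except Exception:
        return None   # timed out or errored

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(hit, range(CONCURRENCY)))

ok = [r for r in results if r is not None]
print(f"{len(ok)}/{CONCURRENCY} succeeded, worst latency {max(ok or [0]):.2f}s")
```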
A lot of focus has been on the front-end code, because that’s the code that we can inspect, and it’s the code that lots of amateur web programmers are familiar with, so everyone’s got an opinion. And sure, it’s horribly written in many places. But in systems like this the problems that keep you up at night are almost always in the back-end integration.
The root problem was horrific management. The end result is a system built incorrectly and shipped without doing the kind of testing that sound engineering practices call for. These aren't 'mistakes'; they are the result of gross negligence, ignorance, and the violation of engineering best practices at just about every step of the way.
Mark Steyn’s weekend column touched on some items of interest to aficionados of past government software fiascos:
The witness who coughed up the intriguing tidbit about Obamacare’s exemption from privacy protections was one Cheryl Campbell of something called CGI. This rang a vague bell with me. CGI is not a creative free spirit from Jersey City with an impressive mastery of Twitter, but a Canadian corporate behemoth. Indeed, CGI is so Canadian their name is French: Conseillers en Gestion et Informatique. Their most famous government project was for the Canadian Firearms Registry. The registry was estimated to cost in total $119 million, which would be offset by $117 million in fees. That’s a net cost of $2 million. Instead, by 2004 the CBC (Canada’s PBS) was reporting costs of some $2 billion — or a thousand times more expensive.
Yeah, yeah, I know, we’ve all had bathroom remodelers like that. But in this case the database had to register some 7 million long guns belonging to some two-and-a-half to three million Canadians. That works out to almost $300 per gun — or somewhat higher than the original estimate of $4.60 for processing a firearm registration. Of those $300 gun registrations, Canada’s auditor general reported to parliament that much of the information was either duplicated or wrong with respect to basic information such as names and addresses.
Also, there was a 1-800 number, but it wasn’t any use.
So it was decided that the sclerotic database needed to be improved.
But it proved impossible to “improve” CFIS (the Canadian Firearms Information System). So CGI was hired to create an entirely new CFIS II, which would operate alongside CFIS I until the old system could be scrapped. CFIS II was supposed to go operational on January 9, 2003, but the January date got postponed to June, and 2003 to 2004, and $81 million was thrown at it before a new Conservative government scrapped the fiasco in 2007. Last year, the government of Ontario canceled another CGI registry that never saw the light of day — just for one disease, diabetes, and costing a mere $46 million.
But there’s always America! “We continue to view U.S. federal government as a significant growth opportunity,” declared CGI’s chief exec, in what would also make a fine epitaph for the republic. Pizza and Mountain Dew isn’t very Montreal, and on the evidence of three years of missed deadlines in Ontario and the four-year overrun on the firearms database CGI don’t sound like they’re pulling that many all-nighters. Was the government of the United States aware that CGI had been fired by the government of Canada and the government of Ontario (and the government of New Brunswick)? Nobody’s saying. But I doubt it would make much difference.
Bruce Schneier explains why you’d want to air-gap a computer … and how much of a pain it can be to set up and work with:
Since I started working with Snowden’s documents, I have been using a number of tools to try to stay secure from the NSA. The advice I shared included using Tor, preferring certain cryptography over others, and using public-domain encryption wherever possible.
I also recommended using an air gap, which physically isolates a computer or local network of computers from the Internet. (The name comes from the literal gap of air between the computer and the Internet; the word predates wireless networks.)
But this is more complicated than it sounds, and requires explanation.
Since we know that computers connected to the Internet are vulnerable to outside hacking, an air gap should protect against those attacks. There are a lot of systems that use — or should use — air gaps: classified military networks, nuclear power plant controls, medical equipment, avionics, and so on.
Osama bin Laden used one. I hope human rights organizations in repressive countries are doing the same.
Air gaps might be conceptually simple, but they’re hard to maintain in practice. The truth is that nobody wants a computer that never receives files from the Internet and never sends files out into the Internet. What they want is a computer that’s not directly connected to the Internet, albeit with some secure way of moving files on and off.
He also provides a list of ten rules (or recommendations, I guess) you should follow if you want to set up an air-gapped machine of your own.
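The fiddly part, as he says, is moving files on and off the machine securely. One small precaution — my illustration, not one of Schneier's rules — is to hash a file before it goes onto the removable media and verify the hash on the air-gapped side, so tampering or corruption in transit is at least detectable:

```python
# A minimal sketch: compute a SHA-256 digest before the file goes onto
# removable media, then re-run this on the air-gapped machine and compare.
import hashlib
import sys

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # usage: python hashcheck.py FILE [EXPECTED_HEX]
    actual = sha256_of(sys.argv[1])
    print(actual)
    if len(sys.argv) > 2:
        print("match" if actual == sys.argv[2] else "MISMATCH")
```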
If you guessed “the internet” — particularly the internet sites that ate the classified ad business alive — you’re apparently wrong. The real culprit is … an amazingly old-fashioned racist and sexist stereotype:
For years, we’ve talked about the ridiculousness with which many old school journalists want to blame the internet (or, more specifically Google or Craigslist) for the troubles some in the industry have had lately. It is a ridiculous claim. Basically, newspapers have survived for years on a massive inefficiency in information. What newspapers did marginally well was bring together a local community of interest, take their attention, and then sell that attention. What many folks in the news business still can’t come to terms with is the fact that there are tons of other communities of attention out there now, so they can’t slide by on inefficiencies like they did in the past.
Either way, it’s always nice to see some in the industry recognize that blaming the internet is a mistake. However, the alternative culprit chosen by Chris Powell, the managing editor of the Journal Inquirer in Connecticut, doesn’t seem much more on target. Powell, who, it appears, actually does have a journalism job (I can’t fathom how or why), published an opinion piece (found via Mark Hamilton and Mathew Ingram) that puts the blame squarely on… single mothers. Okay, not just any single mothers:
Indeed, newspapers still can sell themselves to traditional households — two-parent families involved with their children, schools, churches, sports, civic groups, and such. But newspapers cannot sell themselves to households headed by single women who have several children by different fathers, survive on welfare stipends, can hardly speak or read English, move every few months to cheat their landlords, barely know what town they’re living in, and couldn’t afford a newspaper subscription even if they could read. And such households constitute a rising share of the population.
In Maclean’s, Jesse Brown looks at the rather dangerous interpretation of how email works in a recent court decision:
Newsflash: Google scans your email! Whether you have a Gmail account or just send email to people who do, Gmail’s bots automatically read your messages, mostly for the purpose of creating targeted advertising. And if you were reading this in 2005, that might seem shocking.
Today, I think most Internet users understand how free webmail works and are okay with it. But a U.S. federal judge has ruled otherwise. Yesterday, U.S. District Judge Lucy H. Koh ruled that Google’s terms of service and privacy policies do not explicitly spell out that Google will “intercept” users’ email (here’s the ruling).
The word “intercept” is crucial here, because it may put Google in the crosshairs of State and Federal anti-wiretapping laws. After Judge Koh’s ruling, a class-action lawsuit against Google can proceed, whose plaintiffs seek remedies for themselves and for class groups including “all U.S. citizen non-Gmail users who have sent a message to a Gmail user and received a reply…”. Like they say in Vegas, go big or go home.
An algorithm that scans my messages for keywords like “vacation” in order to offer me cheap flights is not by any stretch of the imagination a wiretap.
But Google has taken a different tack in their defence. If, they’ve argued, what Gmail does qualifies as interception, then so does all email, since automated processing is needed just to send the stuff, whether or not advertising algorithms or anti-spam filters are in use. This logic can be extended, I suppose, to all data that passes through the Internet.
You might call it fighting stupid with stupid, but I think it’s a bold bluff: rule us illegal, Google warns the court, and be prepared to deem the Internet itself a wiretap violation.
A while back, I mentioned that my hosting service was moving the site to a new server. Fortunately, that change appears to have happened without disrupting anything. Today, however, I had to make a DNS setting change that may take up to 24 hours to take effect. If you get an error that the site is unreachable, try again in an hour or so and hopefully the new settings will be in place. Or, I could be worried over nothing and this will also be a transparent change from the users’ point of view (fingers crossed, anyway).
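If you're curious whether your resolver has picked up the change yet, a quick check looks something like this (with "example.com" standing in for this site's actual hostname):

```python
# A quick sketch to see whether a DNS record has propagated to your resolver.
import socket

host = "example.com"   # stand-in for this site's hostname
try:
    print(host, "resolves to", socket.gethostbyname(host))
except socket.gaierror as err:
    print("DNS lookup failed (possibly still propagating):", err)
```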
Nick Gillespie is puzzled that Reddit users are no longer allowed to submit links to Reason:
So I’m left wondering exactly what we did to incur the wrath of TheRedditPope. Reddit penalizes sites and users that scrape articles from original sources, try to game the system by submitting only material in which they have a publishing interest, and don’t add much information or analysis. As several of the commenters in the thread note, Reason.com is the biggest libertarian news site on the web and whether folks agree or not with our take on a given topic, they can’t seriously accuse us of ripping off other sites or not shooting our mouths off with our own particular POVs on any given topic.
Consider the attempted post that brought the ban to our attention. The user who contacted us had apparently tried to submit this story: “Do-Nothing Congress? Americans Think Congress Passes Too Many Laws, Wrong Kinds of Legislation.” Click on the link and you’ll be taken to an extended analysis of information drawn from the latest Reason-Rupe Poll, an original quarterly survey of American voters that has garnered praise from all over the political spectrum and has been cited in all sorts of mainstream and alternative outlets. If the Reason poll — which is designed by Reason Foundation, the nonprofit that publishes this site, and is executed in the field by the same group that conducts Pew Research — and that post in particular don’t meet the threshold of original content that is news-rich and original, then nothing does.
I am a huge admirer of Reddit, even in the wake of recent revelations about the /r/Politics ban. As I wrote last year in a Reddit thread,
Reddit is one of those rare sites that actually delivers on the potential of the Internet and Web to create a new way of creating community and distributing news, information, and culture that simply couldn’t exist in the past. Like wildly different sites ranging from slashdot to Arts & Letters Daily to Talking Points Memo to the late, lamented Suck, Reddit is precisely one of the reasons why cyberspace (or whatever you call it) continues to excite us and make plain old meatspace a little more tolerable.