The National Journal’s Alex Brown talks about a federal government department facing the end of the line thanks to search engines like Google:
A little-known branch of the Commerce Department faces elimination, thanks to advances in technology and a snarkily named bill from Sens. Tom Coburn and Claire McCaskill.
The National Technical Information Service compiles federal reports, serving as a clearinghouse for the government’s scientific, technical, and business documents. The NTIS then sells copies of the documents to other agencies and the public upon request. It’s done so since 1950.
But Coburn and McCaskill say it’s hard to justify 150 employees and $66 million in taxpayer dollars when almost all of those documents are now available online for free.
Enter the Let Me Google That for You Act.
“Our goal is to eliminate you as an agency,” the famously grumpy Coburn told NTIS Director Bruce Borzino at a Wednesday hearing. Pulling no punches, Coburn suggested that any NTIS documents not already available to the public be put “in a small closet in the Department of Commerce.”
H/T to Jim Geraghty for the link. He assures us that despite any similarities to situations portrayed in his recent political novel The Weed Agency, he didn’t make this one up.
In Forbes, Tim Worstall ignores the slogans to follow the money in the Net Neutrality argument:
The FCC is having a busy time of it as their cogitations into the rules about net neutrality become the second most commented upon in the organisation’s history (second only to Janet Jackson’s nip-slip, which gives us a good idea of the priorities of the citizenry). The various internet content giants, the Googles, Facebooks and so on of this world, are arguing very loudly that strict net neutrality should be the standard. We could, of course, attribute this to all in those organisations being fully up with the hippy dippy idea that information just wants to be free. Apart from the obvious point that Zuckerberg, for one, is a little too young to have absorbed that along with the patchouli oil, we’d probably do better to examine the underlying economics of what’s going on to work out why people are taking the positions they are.
Boiling “net neutrality” down to its essence, the argument is about whether the people who own the connections to the customer, the broadband and mobile airtime providers, can treat different internet traffic differently. Should we force them to be neutral (thus the “neutrality” part) and treat all traffic exactly the same? Or should they be allowed to speed up some traffic and slow down other traffic, in order to prioritise certain services over others?
We can (and many do) argue that we the consumers are paying for this bandwidth, so it’s up to us to decide, and we might well decide that they cannot. Others might (and they do) argue that certain services require very much more of that bandwidth than others and, further, require a much higher level of service, and that it would be economically efficient to charge for that greater volume and quality. For example, none of us would mind all that much if there were a random second or two of delay in the arrival of a gmail message, but we’d be very annoyed if there were random such delays in the arrival of a YouTube packet. Netflix would be almost unusable if streaming were subject to such delays. So it might indeed make sense to prioritise such traffic and slow down other traffic to make room for it.
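The prioritisation being described here is, mechanically, just a priority queue: latency-sensitive packets jump ahead, email waits. A toy sketch in Python (the service classes and rankings are entirely made up for illustration, not any ISP’s real traffic-shaping policy):

```python
import heapq

# Hypothetical priority ranks: lower number = sent first.
# Streaming jumps the queue; a delayed gmail message is harmless.
PRIORITY = {"video": 0, "voip": 1, "web": 2, "email": 3}

class ShapedLink:
    """Toy model of a non-neutral link that dequeues packets by service class."""

    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker: keeps FIFO order within a class

    def enqueue(self, service, packet):
        rank = PRIORITY.get(service, 99)  # unknown services go last
        heapq.heappush(self._queue, (rank, self._counter, packet))
        self._counter += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

link = ShapedLink()
link.enqueue("email", "gmail message")
link.enqueue("video", "YouTube packet")
link.enqueue("web", "page request")
print(link.dequeue())  # → YouTube packet
```

A strictly “neutral” link would be the same code with every service given the same rank, which is exactly why the fight is over who sets the rankings.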
You can balance these arguments as you wish: there’s not really a “correct” answer to this, it’s a matter of opinion. But why are the content giants all arguing for net neutrality? What’s their reasoning?
As you’d expect, it all comes down to the money. Who pays more for what under a truly “neutral” model and who pays more under other models. The big players want to funnel off as much of the available profit to themselves as possible, while others would prefer the big players reduced to the status of regulated water company: carrying all traffic at the same rate (which then allows the profits to go to other players).
Tim Worstall asks when it would be appropriate for your driverless car to kill you:
Owen Barder points out a quite delightful problem that we’re all going to have to come up with some collective answer to over the driverless cars coming from Google and others. Just when is it going to be acceptable that the car kills you, the driver, or someone else? This is a difficult public policy question and I’m really not sure who the right people to be trying to solve it are. We could, I guess, given that it is a public policy question, turn it over to the political process. It is, after all, there to decide on such questions for us. But given the power of the tort bar over that process I’m not sure that we’d actually like the answer we got. For it would most likely mean that we never do get driverless cars, at least not in the US.
The basic background here is that driverless cars are likely to be hugely safer than the current human directed versions. For most accidents come about as a result of driver error. So, we expect the number of accidents to fall considerably as the technology rolls out. This is great, we want this to happen. However, we’re not going to end up with a world of no car accidents. Which leaves us with the problem of how do we program the cars to work when there is unavoidably going to be an accident?
So we actually end up with two problems here. The first is the one that Barder has outlined: there’s an ethical question to be answered over how the programming decisions are made. Seriously, under what circumstances should a driverless car, made by Google or anyone else, be allowed to kill you or anyone else? The basic Trolley Problem is easy enough: kill fewer people by preference. But when one death is unavoidable, which one? And then there’s a second problem: the people who have done the coding are going to have to take legal liability for the decision they’ve made. And given the ferocity of the plaintiffs’ bar at times, I’m not sure that anyone will really be willing to make that decision and thus adopt that potential liability.
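The “kill fewer people by preference” rule is trivial to code; it’s the tie-breaking that nobody wants to own, legally or ethically. A hypothetical sketch (the outcome representation is invented purely for illustration):

```python
def choose_outcome(outcomes):
    """Pick the outcome with the fewest expected casualties.

    `outcomes` is a list of (label, expected_casualties) pairs.
    The easy case: the counts differ, so "fewer deaths" decides.
    The hard case: a tie, where *which* person dies must be
    decided on some other basis that no one has agreed on.
    """
    fewest = min(casualties for _, casualties in outcomes)
    candidates = [label for label, c in outcomes if c == fewest]
    if len(candidates) == 1:
        return candidates[0]
    # This branch is the unresolved ethical (and liability) question.
    raise NotImplementedError(f"tie between {candidates}: no agreed rule")

print(choose_outcome([("swerve", 1), ("brake straight", 3)]))  # → swerve
```

Whoever replaces that `NotImplementedError` with an actual rule is the person the plaintiffs’ bar will be looking for, which is rather the point.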
Clearly, this needs to be sorted out at the political level. Laws need to be made clarifying the situation. And hands up everyone who thinks that the current political gridlock is going to manage that in a timely manner?
Michael Geist talks about another court attempting to push local rules into other jurisdictions online — in this case it’s not the European “right to be forgotten” nonsense, it’s unfortunately a Canadian court pulling the stunt:
In the aftermath of the European Court of Justice “right to be forgotten” decision, many asked whether a similar ruling could arise in Canada. While a privacy-related ruling has yet to hit Canada, last week the Supreme Court of British Columbia relied in part on the decision in issuing an unprecedented order requiring Google to remove websites from its global index. The ruling in Equustek Solutions Inc. v. Jack is unusual since its reach extends far beyond Canada. Rather than ordering the company to remove certain links from the search results available through Google.ca, the order intentionally targets the entire database, requiring the company to ensure that no one, anywhere in the world, can see the search results. Note that this differs from the European right to be forgotten ruling, which is limited to Europe.
The implications are enormous, since if a Canadian court has the power to limit access to information for the globe, presumably other courts would as well. While the court does not grapple with this possibility, what happens if a Russian court orders Google to remove gay and lesbian sites from its database? Or if Iran orders it to remove Israeli sites from the database? The possibilities are endless, since local rules of freedom of expression often differ from country to country. Yet the B.C. court adopts the view that it can issue an order with global effect. Its reasoning is very weak, concluding that:
the injunction would compel Google to take steps in California or the state in which its search engine is controlled, and would not therefore direct that steps be taken around the world. That the effect of the injunction could reach beyond one state is a separate issue.
Unfortunately, it does not engage effectively with this “separate issue.”
Well, right in this particular analysis, anyway:
Which is where we can bring Karl Marx into the discussion. Wrong as he was on many points he was at times a perceptive analyst. And he noted that what determined the wages of the workers wasn’t some calculation of a “fair wage”, nor some true value of their production (although he had much to say on both points), but in a market economy the wages that were paid were a reflection of what other people were willing to pay for access to that labour.
If, for example, there were a large number of unemployed (that “reserve army of the unemployed”), then a capitalist didn’t have to raise the wages of his workers however far productivity grew. If anyone tried to capture a bit more of the value being created, say through a strike or other activity, then the capitalist could simply fire them and bring in some of those unemployed. No profits needed to be shared with the workers. However, when we get to a situation of full employment, the dynamic changes. It’s not possible to simply hire and fire to keep wages low, for the other capitalists are competing for access to the labour that makes those profits. The higher profits go, the higher all capitalists will be willing to bid up wages in order to continue making any profit at all.
The obverse of this is when the employers collude in order to artificially suppress the wages of the workers, which is why that case involving Apple, Google and so on is going to trial. That’s monopoly capitalism, that is, and we really don’t like it at all.
But in this case, with Yahoo trying to challenge Google’s YouTube, it will be the workers who benefit, for the two companies are vying with each other for access to the content being made and thus the profits that can be made. Of whatever revenue can be made, a larger portion will go to the producers of the content and a smaller one to the owners of the platforms. Which is excellent; this is exactly what we want to happen.
In Wired, Steven Levy explains how the NSA nearly killed the internet:
On June 6, 2013, Washington Post reporters called the communications departments of Apple, Facebook, Google, Yahoo, and other Internet companies. The day before, a report in the British newspaper The Guardian had shocked Americans with evidence that the telecommunications giant Verizon had voluntarily handed a database of every call made on its network to the National Security Agency. The piece was by reporter Glenn Greenwald, and the information came from Edward Snowden, a 29-year-old IT consultant who had left the US with hundreds of thousands of documents detailing the NSA’s secret procedures.
Greenwald was the first but not the only journalist that Snowden reached out to. The Post’s Barton Gellman had also connected with him. Now, collaborating with documentary filmmaker and Snowden confidante Laura Poitras, he was going to extend the story to Silicon Valley. Gellman wanted to be the first to expose a top-secret NSA program called Prism. Snowden’s files indicated that some of the biggest companies on the web had granted the NSA and FBI direct access to their servers, giving the agencies the ability to grab a person’s audio, video, photos, emails, and documents. The government urged Gellman not to identify the firms involved, but Gellman thought it was important. “Naming those companies is what would make it real to Americans,” he says. Now a team of Post reporters was reaching out to those companies for comment.
It would be the start of a chain reaction that threatened the foundations of the industry. The subject would dominate headlines for months and become the prime topic of conversation in tech circles. For years, the tech companies’ key policy issue had been negotiating the delicate balance between maintaining customers’ privacy and providing them benefits based on their personal data. It was new and controversial territory, sometimes eclipsing the substance of current law, but over time the companies had achieved a rough equilibrium that allowed them to push forward. The instant those phone calls from reporters came in, that balance was destabilized, as the tech world found itself ensnared in a fight far bigger than the ones involving oversharing on Facebook or ads on Gmail. Over the coming months, they would find themselves at war with their own government, in a fight for the very future of the Internet.
Bruce Schneier on the rising tide of non-governmental surveillance:
Google recently announced that it would start including individual users’ names and photos in some ads. This means that if you rate some product positively, your friends may see ads for that product with your name and photo attached — without your knowledge or consent. Meanwhile, Facebook is eliminating a feature that allowed people to retain some portions of their anonymity on its website.
These changes come on the heels of Google’s move to explore replacing tracking cookies with something that users have even less control over. Microsoft is doing something similar by developing its own tracking technology.
More generally, lots of companies are evading the “Do Not Track” rules, meant to give users a say in whether companies track them. Turns out the whole “Do Not Track” legislation has been a sham.
It shouldn’t come as a surprise that big technology companies are tracking us on the Internet even more aggressively than before.
If these features don’t sound particularly beneficial to you, it’s because you’re not the customer of any of these companies. You’re the product, and you’re being improved for their actual customers: their advertisers.
In Maclean’s, Jesse Brown looks at the rather dangerous interpretation of how email works in a recent court decision:
Newsflash: Google scans your email! Whether you have a Gmail account or just send email to people who do, Gmail’s bots automatically read your messages, mostly for the purpose of creating targeted advertising. And if you were reading this in 2005, that might seem shocking.
Today, I think most Internet users understand how free webmail works and are okay with it. But a U.S. federal judge sees it differently. Yesterday, U.S. District Judge Lucy H. Koh ruled that Google’s terms of service and privacy policies do not explicitly spell out that Google will “intercept” users’ email (here’s the ruling).
The word “intercept” is crucial here, because it may put Google in the crosshairs of state and federal anti-wiretapping laws. After Judge Koh’s ruling, a class-action lawsuit against Google can proceed; its plaintiffs seek remedies for themselves and for class groups including “all U.S. citizen non-Gmail users who have sent a message to a Gmail user and received a reply…”. Like they say in Vegas, go big or go home.
An algorithm that scans my messages for keywords like “vacation” in order to offer me cheap flights is not by any stretch of the imagination a wiretap.
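For what it’s worth, the kind of scan being described is mechanically trivial. A hypothetical illustration (the keyword-to-ad table is invented; Google’s real system is vastly more sophisticated, but the principle is the same):

```python
# Hypothetical keyword-to-ad mapping, purely for illustration.
AD_TRIGGERS = {
    "vacation": "Cheap flights!",
    "mortgage": "Refinance today!",
}

def ads_for_message(body):
    """Return ads whose trigger keyword appears in the message body."""
    words = set(body.lower().split())
    return [ad for keyword, ad in AD_TRIGGERS.items() if keyword in words]

print(ads_for_message("Planning a vacation to Lisbon next month"))
# → ['Cheap flights!']
```

Nothing in that loop involves a human reading anything, which is essentially the distinction being argued over.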
But Google has taken a different tack in their defence. If, they’ve argued, what Gmail does qualifies as interception, then so does all email, since automated processing is needed just to send the stuff, whether or not advertising algorithms or anti-spam filters are in use. This logic can be extended, I suppose, to all data that passes through the Internet.
You might call it fighting stupid with stupid, but I think it’s a bold bluff: rule us illegal, Google warns the court, and be prepared to deem the Internet itself a wiretap violation.
Charles Hugh Smith on the dangers of being too big in your own market:
Microsoft is a case study in dominance leading to incompetence and catastrophe. Within the moat of near-monopoly/dominance, competence dwindles to the ability to keep doing what worked spectacularly well in the past, and keeping bureaucratic infighting and divisional rivalries down to a dull background erosion of initiative and talent.
Doing more of what succeeded spectacularly in the past works until it doesn’t, at which point doggedly pressing on with the old formula of success leads to catastrophic failures.
Nokia and Blackberry are recent case studies, but the rise of Google Chrome and smart-phone/tablet computing is beginning to threaten Microsoft’s core business of being the utility monopoly in the PC space.
Dominance means leaders and employees alike lose the ability to experience risk. The customer will take what is delivered, regardless, for the simple reason that alternatives are either unavailable or cumbersome.
Dominance in any space breeds complacency and enables the luxuries of political squabbling, sclerosis and loss of focus. Competence becomes incompetence, and the infrastructure that fosters creativity and flexibility — that is, a keen appreciation of risk and spontaneity — is slowly dismantled.
That applies not just to corporations but to governments, nations and empires.
H/T to Zero Hedge for the link.
Vinay Mandalia discusses the quite rational response of the Indian government to the recent discovery that the US intelligence services have had full access to all email communications hosted on US email services:
The Government of India is planning to ban the use of US-based email services like Gmail for official communications and is soon going to send out a formal notification to its half a million officials across the country, asking them to use official email addresses and services provided by the National Informatics Centre.
The move is intended to increase the security of confidential government data and information after it was revealed earlier that NSA may be involved in widespread spying and surveillance activities across the globe.
In a statement to reporters here, J. Satyanarayana, secretary in the department of electronics and information technology, said that data of Indian citizens using US-based email services like Gmail resides on servers located outside India, and that for now the government is concerned about the large amount of official and critical data that may be resident on those servers.
Expect a lot of other US “allies” to suddenly discover that their internal communications have been an open book to their “friends” for the last 10-20 years and decide to take similar measures.
H/T to Techdirt for the link.
Wired’s Klint Finley wants you to meet the indie hackers who want to jailbreak the internet (among other things):
One guy is wearing his Google Glass. Another showed up in an HTML5 t-shirt. And then there’s the dude who looks like the Mad Hatter, decked out in a top hat with an enormous white flower tucked into the brim.
At first, they look like any other gaggle of tech geeks. But then you notice that one of them is Ward Cunningham, the man who invented the wiki, the tech that underpins Wikipedia. And there’s Kevin Marks, the former vice president of web services at British Telecom. Oh, and don’t miss Brad Fitzpatrick, creator of the seminal blogging site LiveJournal and, more recently, a coder who works in the engine room of Google’s online empire.
Packed into a small conference room, this rag-tag band of software developers has an outsized digital pedigree, and they have a mission to match. They hope to jailbreak the internet.
They call it the Indie Web movement, an effort to create a web that’s not so dependent on tech giants like Facebook, Twitter, and, yes, Google — a web that belongs not to one individual or one company, but to everyone. “I don’t trust myself,” says Fitzpatrick. “And I don’t trust companies.” The movement grew out of an egalitarian online project launched by Fitzpatrick, before he made the move to Google. And over the past few years, it has roped in about 100 other coders from around the world.
I use a few tools to come up with items to post on the blog. The two most useful are Twitter and RSS. I’d been using Google Reader for my RSS needs until it was shut down at the beginning of July, so I switched to The Old Reader and it has been working quite well as a direct Google Reader replacement. Earlier this week, TOR had a server meltdown and multiple failures of drives while attempting to recover. As of this morning, they’re still trying to get back online and (hopefully) recover all the data. Fortunately, I’ve also been testing Newsvibe for RSS, and it’s still working well … but has a different set of feeds than TOR.
My other main tool, Twitter, seems to be having some issues today … or it might just be that my old Twitter client is finally giving up the ghost. I’ve been using the desktop TweetDeck client for years, but I really disliked the “new” version of the tool introduced when TweetDeck was taken over by Twitter itself. Over the last several months, the old client (version 0.38.2) has been slowly losing bits of functionality — for example, sometime in the last week, I lost the ability to send a direct message from TweetDeck, and earlier this year it became impossible to use the “old” retweet method and, more recently, to retweet at all.
Today, when I started up the client, it was unable to retrieve any data from earlier this morning. This might be a general issue with the Twitter API, or it might be yet another bit of creeping feature-fail. It’s picking up new Twitter posts, but one of the more useful features was that it would also collect tweets from my several lists that had been posted overnight. This morning, only the main feed column in TweetDeck is being populated; the rest (Mentions, Direct Messages, various list and search columns) are empty.
I may need to shop around for a new Twitter client. Either way, it puts a crimp in my usual blogging habits.
If you use Google Reader, you’ve got until Monday to find a replacement tool or give up on your RSS feeds. Lifehacker wants to help:
The first thing you’ll want to do is back up your data as an OPML file through Google Takeout. You won’t be able to access it ever again once the service shuts down, so this officially qualifies as crunch time. Luckily, it’s really simple, and we’ve shown you how to do it in three easy steps. Once you’re done, I’d also make sure you have several secure backups saved at home and on the cloud, just to be sure.
As soon as your data is safe and sound, it’s time to go shopping for a new RSS home. Feedly is the most popular alternative at the moment, but there are tons of other options if it doesn’t check all of your boxes. In case you missed it, we’ve rounded up some of the best to help make the transition a little easier. All of these services will import that all-important OPML file, but some can pull your Reader data directly off of Google’s servers while it’s still available, including starred and read items in many cases, so it’s probably worth it to set up a new account over the weekend. In fact, if you haven’t settled on one alternative yet, you might want to sign up for several to hedge your bets and preserve this valuable metadata.
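Once you have the Takeout export, the OPML file is just XML, so it’s easy to check that your subscriptions actually made it out before Monday. A minimal sketch using only the Python standard library (the sample feed data is invented):

```python
import xml.etree.ElementTree as ET

def feed_urls(opml_text):
    """Extract every feed URL (the xmlUrl attribute) from an OPML export."""
    root = ET.fromstring(opml_text)
    # Feeds are <outline> elements with an xmlUrl; folders are ones without.
    return [node.get("xmlUrl")
            for node in root.iter("outline")
            if node.get("xmlUrl")]

# Invented sample mirroring the structure Reader's export uses.
sample = """<opml version="1.0">
  <body>
    <outline title="Tech">
      <outline title="Example Blog" xmlUrl="http://example.com/feed.xml"/>
    </outline>
  </body>
</opml>"""

print(feed_urls(sample))  # → ['http://example.com/feed.xml']
```

If the list it prints matches your subscription count in Reader, the backup is good; any new service that imports OPML will be reading exactly those `xmlUrl` attributes.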
I’ve been using Google Reader to stay on top of news for my weekly Guild Wars 2 community round-ups at GuildMag, so finding a replacement was necessary. I settled on The Old Reader for my GW2 feeds and I’m experimenting with Newsvibe for other feeds.
I’ve been very pleased with The Old Reader, which has been a great replacement; the transition was nearly seamless. I’m still not completely sold on Newsvibe, as it has a couple of issues that reduce its usefulness to me: the session times out very quickly (less than an hour), and it can’t handle certain RSS feeds without ever indicating why (it just silently fails to add the new subscription).