The gulf that separates us from the near past is now so great that we cannot really imagine how one could design a spacecraft, or learn engineering in the first place, or even just look something up, without a computer and a network. Journalists my age will understand how profound and disturbing this break in history is: Do you remember doing your job before Google? It was, obviously, possible, since we actually did it, but how? It is like having a past life as a conquistador or a phrenologist.
Colby Cosh, “Who will be the moonwalkers of tomorrow?”, Maclean’s, 2014-07-24.
July 25, 2014
July 15, 2014
In Forbes, Tim Worstall ignores the slogans to follow the money in the Net Neutrality argument:
The FCC is having a busy time of it as their cogitations into the rules about net neutrality become the second most commented upon in the organisation’s history (second only to Janet Jackson’s nip-slip, which gives us a good idea of the priorities of the citizenry). The various internet content giants, the Googles, Facebooks and so on of this world, are arguing very loudly that strict net neutrality should be the standard. We could, of course, attribute this to all in those organisations being fully up with the hippy dippy idea that information just wants to be free. Apart from the obvious point that Zuckerberg, for one, is a little too young to have absorbed that along with the patchouli oil, we’d probably do better to examine the underlying economics of what’s going on to work out why people are taking the positions they are.
Boiling “net neutrality” down to its essence the argument is about whether the people who own the connections to the customer, the broadband and mobile airtime providers, can treat different internet traffic differently. Should we force them to be neutral (thus the “neutrality” part) and treat all traffic exactly the same? Or should they be allowed to speed up some traffic, slow down other, in order to prioritise certain services over others?
We can (and many do) argue that we the consumers are paying for this bandwidth so it’s up to us to decide and we might well decide that they cannot. Others might (and they do) argue that certain services require very much more of that bandwidth than others, further, require a much higher level of service, and it would be economically efficient to charge for that greater volume and quality. For example, none of us would mind all that much if there was a random second or two delay in the arrival of a gmail message but we’d be very annoyed if there were random such delays in the arrival of a YouTube packet. Netflix would be almost unusable if streaming were subject to such delays. So it might indeed make sense to prioritise such traffic and slow down other to make room for it.
You can balance these arguments as you wish: there’s not really a “correct” answer to this, it’s a matter of opinion. But why are the content giants all arguing for net neutrality? What’s their reasoning?
As you’d expect, it all comes down to the money. Who pays more for what under a truly “neutral” model and who pays more under other models. The big players want to funnel off as much of the available profit to themselves as possible, while others would prefer the big players reduced to the status of regulated water company: carrying all traffic at the same rate (which then allows the profits to go to other players).
July 10, 2014
The “internet of things” is coming: more and more of your surroundings are going to be connected in a vastly expanded internet. A lot of attention needs to be paid to security in this new world, as Dan Goodin explains:
In the latest cautionary tale involving the so-called Internet of things, white-hat hackers have devised an attack against network-connected lightbulbs that exposes Wi-Fi passwords to anyone in proximity to one of the LED devices.
The attack works against LIFX smart lightbulbs, which can be turned on and off and adjusted using iOS- and Android-based devices. Ars Senior Reviews Editor Lee Hutchinson gave a good overview here of the Philips Hue lights, which are programmable, controllable LED-powered bulbs that compete with LIFX. The bulbs are part of a growing trend in which manufacturers add computing and networking capabilities to appliances so people can manipulate them remotely using smartphones, computers, and other network-connected devices. A 2012 Kickstarter campaign raised more than $1.3 million for LIFX, more than 13 times the original goal of $100,000.
According to a blog post published over the weekend, LIFX has updated the firmware used to control the bulbs after researchers discovered a weakness that allowed hackers within about 30 meters to obtain the passwords used to secure the connected Wi-Fi network. The credentials are passed from one networked bulb to another over a mesh network powered by 6LoWPAN, a wireless specification built on top of the IEEE 802.15.4 standard. While the bulbs used the Advanced Encryption Standard (AES) to encrypt the passwords, the underlying pre-shared key never changed, making it easy for the attacker to decipher the payload.
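The weakness here is not AES itself but key management: a pre-shared key that is baked in and never rotated means that extracting the secret from any one bulb's firmware exposes the traffic of the whole mesh. A minimal Python sketch of that failure mode, purely illustrative and not the actual LIFX protocol (a repeating-key XOR stands in for the real cipher, since the point is the static key, not the algorithm):

```python
# Toy illustration of a static pre-shared key (NOT the real LIFX protocol).
# A repeating-key XOR stands in for the actual cipher; the lesson is about
# key management, not the strength of the cipher itself.

from itertools import cycle

STATIC_PSK = b"never-rotated-key"  # hypothetical: identical in every bulb's firmware

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Encrypt/decrypt by XOR with a repeating key (symmetric operation)."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Bulb A relays the Wi-Fi credentials across the mesh:
ciphertext = xor_crypt(b"ssid=Home;psk=hunter2", STATIC_PSK)

# An attacker who extracts the key from ANY one bulb can read traffic
# between ALL bulbs, forever, because the key never changes:
recovered = xor_crypt(ciphertext, STATIC_PSK)
print(recovered.decode())  # ssid=Home;psk=hunter2
```

Rotating per-device or per-session keys, or doing a proper authenticated key exchange, would have limited the blast radius of a single compromised device, which is essentially what the firmware update had to retrofit.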
July 8, 2014
I disagree with you. I understand where you’re coming from, but I believe you’re mistaken, and I’ll explain why you are wrong.
First of all, the data backs up my point. I have facts out the waz. Your data are flawed, old, biased or incomplete. The people who collected your data are in prison for fraud or took funding from an evil billionaire who lives in a castle on a mountain where there is always lightning. My facts are bulletproof. They were gathered by humble grass roots researchers who love America and hate cancer. You can be forgiven for not having the same information that I do. People on “your side” don’t like to discuss data that annihilate their arguments. [...]
More important than the data, though, is that my argument is just. I can see why you made the argument that you did, but you’re forgetting a whole host of injustices, tragedies and “Raiders of the Lost Ark” style flying specters that would be loosed upon millions of people if you had your way. What I’m saying is that the moral arc of the universe bends towards my argument.
History has proved me correct on this point time and time again. From the Bible to the Renaissance to the Depression and WWII, my point was cemented repeatedly by real events and real people who suffered under the regimes of dogmatic fools like you. There are several authors who have made the very point I am making more eloquently than I have, and you can buy their books and read them in your spare time, which I suggest you do, because right now you’re uneducated and just talking out your butt.
Joe Donatelli, “Why You Are Wrong”, The Humor Columnist, 2014-06.
July 7, 2014
Michael Geist on the federal government’s secret dealings on the TPP docket:
The next major agreement on the government’s docket is the Trans Pacific Partnership, a massive proposed trade deal that includes the United States, Australia, Mexico, Malaysia, Singapore, New Zealand, Vietnam, Japan, Peru, and Chile. While other trade talks occupy a prominent place in the government’s promotional plans, the TPP remains largely hidden from view. Indeed, most Canadians would be surprised to learn that Canada is hosting the latest round of TPP negotiations this week in Ottawa.
My weekly technology law column (Toronto Star version, homepage version) argues the secrecy associated with the TPP – the draft text of the treaty has still not been formally released, the precise location of the Ottawa negotiations has not been disclosed, and even the existence of talks was only confirmed after media leaks – suggests that the Canadian government has something to hide when it comes to the TPP.
Since this is the first major TPP negotiation round to be held in Canada, there was an opportunity to build public support for the agreement. Yet instead, the Canadian government approach stands as one of the most secretive in TPP history. Why the secrecy?
The answer may lie in the substance of the proposed agreement, which leaked documents indicate often stands in stark contrast to current Canadian policy. The agricultural provision may attract the lion’s share of TPP attention, but it is the digital issues that are particularly problematic from a Canadian perspective.
For example, late last month the government announced that new copyright rules associated with Internet providers would take effect starting in 2015. The Canadian system, referred to as a “notice-and-notice” approach, is widely viewed as among the most balanced in the world, providing rights holders with the ability to raise concerns about alleged infringements, while simultaneously safeguarding the privacy and free speech rights of users.
July 6, 2014
The argument that you’ve got nothing to worry about because you’re not doing anything wrong has long since passed its best-before date. As Nick Gillespie points out, you don’t need to be a member of Al Qaeda, a black-hat hacker, or a registered Republican to be of interest to the NSA’s information gathering team:
If You’re Reading Reason.com, The NSA is Probably Already Following You
Two things to contemplate on early Sunday morning, before church or political talk shows get underway:
Remember all those times we were told that the government, especially the National Security Agency (NSA), only tracks folks who are either guilty of something or involved in suspicious-seeming activity? Well, we’re going to have to amend that a bit. Using documents from Edward Snowden, the Washington Post‘s Barton Gellman, Julie Tate, and Ashkan Soltani report:
Ordinary Internet users, American and non-American alike, far outnumber legally targeted foreigners in the communications intercepted by the National Security Agency from U.S. digital networks, according to a four-month investigation by The Washington Post.
Nine of 10 account holders found in a large cache of intercepted conversations, which former NSA contractor Edward Snowden provided in full to The Post, were not the intended surveillance targets but were caught in a net the agency had cast for somebody else.
Many of them were Americans. Nearly half of the surveillance files, a strikingly high proportion, contained names, e-mail addresses or other details that the NSA marked as belonging to U.S. citizens or residents. NSA analysts masked, or “minimized,” more than 65,000 such references to protect Americans’ privacy, but The Post found nearly 900 additional e-mail addresses, unmasked in the files, that could be strongly linked to U.S. citizens or U.S. residents.
The cache of documents in question dates from 2009 through 2012 and comprises 160,000 documents collected by the PRISM and Upstream programs, which collect data from different sources. “Most of the people caught up in those programs are not the targets and would not lawfully qualify as such,” write Gellman, Julie Tate, and Ashkan Soltani, who also underscore that NSA surveillance has produced some very meaningful and good intelligence. The real question is whether the government can do that in a way that doesn’t result in massive dragnet programs that create far more problems ultimately than they solve (remember the Church Committee?).
Read the whole thing. And before anyone raises the old “if you’re innocent, you’ve got nothing to hide” shtick, read Scott Shackford’s “3 Reasons the ‘Nothing to Hide’ Crowd Should Be Worried About Government Surveillance.”
July 2, 2014
EU publishers want a totally different online model for content – where they can monetize everything
Glyn Moody reports on the passionate desire of EU publishing organizations to get rid of as much free content as possible and replace it with an explicit licensing regime (with them holding all the rights, of course):
For too many years, the copyright industries fought hard against the changes being wrought by the rise of the Internet and the epochal shift from analog to digital. Somewhat belatedly, most of those working in these sectors have finally accepted that this is not a passing phase, but a new world that requires new thinking in their businesses, as in many other spheres. A recent attempt to codify that thinking can be found in a publication from the European Publishers Council (EPC). “Copyright Enabled on the Network” (pdf) — subtitled “From vision to reality: Copyright, technology and practical solutions enabling the media & publishing ecosystem” — that is refreshingly honest about the group’s aims:
Since 1991, Members [of the EPC] have worked to review the impact of proposed European legislation on the press, and then express an opinion to legislators, politicians and opinion-formers with a view to influencing the content of final regulations. The objective has always been to encourage good law-making for the media industry.
The new report is part of that, and is equally frank about what lies at the heart of the EPC’s vision — licensing:
A thread which runs through this paper is the proliferation of ‘direct to user’ licensing by publishers and other rights owners. Powered by ubiquitous data standards, to identify works and those who have rights in those works, licensing will continue to innovate exponentially so that eventually the cost of serving a licence is close to zero. The role of technology is to make this process seamless and effective from the user’s perspective, whether that user is the end consumer or another party in the digital content supply chain.
[...] the EPC vision includes being able to pin down every single “granular” part of a mash-up, so that the rights can be checked and — of course — licensed. Call it the NSA approach to copyright: total control through total surveillance.
Last year, when the National Post started demanding a paid license to quote any part of their articles (including stories they picked up from other sources), I stopped linking to their site. I suspect most Canadian bloggers did the same, as I see very few links to the newspaper now compared to before the change in their policy. It worked from their point of view: I’m certainly not sending any traffic to their site now, and there was never a chance of me being able to afford their $150 per 100 words licensing rate. Win-win, I guess. The EPC is hoping to avoid that scenario playing out in Europe by mandating that all content use the same kind of licensing, backed up by the power of the courts (and the kind of pervasive surveillance tactics the NSA and its Anglosphere partners have honed to a very fine edge).
In Wired, Peter W. Singer And Allan Friedman analyze five common myths about online security:
“A domain for the nerds.” That is how the Internet used to be viewed back in the early 1990s, until all the rest of us began to use and depend on it. But this quote is from a White House official earlier this year describing how cybersecurity is too often viewed today. And therein lies the problem, and the needed solution.
Each of us, in whatever role we play in life, makes decisions about cybersecurity that will shape the future well beyond the world of computers. But by looking at this issue as only for the IT Crowd, we too often do so without the proper tools. Basic terms and essential concepts that define what is possible and proper are being missed, or even worse, distorted. Some threats are overblown and overreacted to, while others are ignored.
Perhaps the biggest problem is that while the Internet has given us the ability to run down the answer to almost any question, cybersecurity is a realm where past myth and future hype often weave together, obscuring what actually has happened and where we really are now. If we ever want to get anything effective done in securing the online world, we have to demystify it first.
Myth #2: Every Day We Face “Millions of Cyber Attacks”
This is what General Keith Alexander, the recently retired chief of US military and intelligence cyber operations, testified to Congress in 2010. Interestingly enough, leaders from China have made similar claims after their own hackers were indicted, pointing the finger back at the US. These numbers are both true and utterly useless.
Counting individual attack probes or unique forms of malware is like counting bacteria — you get big numbers very quickly, but all you really care about is the impact and the source. Even more so, these numbers conflate and confuse the range of threats we face, from scans and probes caught by elementary defenses before they could do any harm, to attempts at everything from pranks to political protests to economic and security related espionage (but notably no “Cyber Pearl Harbors,” which have been mentioned in government speeches and mass media a half million times). It’s a lot like combining everything from kids with firecrackers to protesters with smoke bombs to criminals with shotguns, spies with pistols, terrorists with grenades, and militaries with missiles in the same counting, all because they involve the same technology of gunpowder.
June 17, 2014
Michael Geist talks about another court attempting to push local rules into other jurisdictions online — in this case it’s not the European “right to be forgotten” nonsense, it’s unfortunately a Canadian court pulling the stunt:
In the aftermath of the European Court of Justice “right to be forgotten” decision, many asked whether a similar ruling could arise in Canada. While a privacy-related ruling has yet to hit Canada, last week the Supreme Court of British Columbia relied in part on the decision in issuing an unprecedented order requiring Google to remove websites from its global index. The ruling in Equustek Solutions Inc. v. Jack is unusual since its reach extends far beyond Canada. Rather than ordering the company to remove certain links from the search results available through Google.ca, the order intentionally targets the entire database, requiring the company to ensure that no one, anywhere in the world, can see the search results. Note that this differs from the European right to be forgotten ruling, which is limited to Europe.
The implications are enormous since if a Canadian court has the power to limit access to information for the globe, presumably other courts would as well. While the court does not grapple with this possibility, what happens if a Russian court orders Google to remove gay and lesbian sites from its database? Or if Iran orders it remove Israeli sites from the database? The possibilities are endless since local rules of freedom of expression often differ from country to country. Yet the B.C. court adopts the view that it can issue an order with global effect. Its reasoning is very weak, concluding that:
the injunction would compel Google to take steps in California or the state in which its search engine is controlled, and would not therefore direct that steps be taken around the world. That the effect of the injunction could reach beyond one state is a separate issue.
Unfortunately, it does not engage effectively with this “separate issue.”
June 13, 2014
Some great news on the privacy front, this time a decision handed down by the Supreme Court of Canada, as reported by Michael Geist:
This morning another voice entered the discussion and completely changed the debate. The Supreme Court of Canada issued its long-awaited R. v. Spencer decision, which examined the legality of voluntary warrantless disclosure of basic subscriber information to law enforcement. In a unanimous decision written by (Harper appointee) Justice Thomas Cromwell, the court issued a strong endorsement of Internet privacy, emphasizing the privacy importance of subscriber information, the right to anonymity, and the need for police to obtain a warrant for subscriber information except in exigent circumstances or under a reasonable law.
I discuss the implications below, but first some of the key findings. First, the Court recognizes that there is a privacy interest in subscriber information. While the government has consistently sought to downplay that interest, the court finds that the information is much more than a simple name and address, particularly in the context of the Internet. As the court states:
the Internet has exponentially increased both the quality and quantity of information that is stored about Internet users. Browsing logs, for example, may provide detailed information about users’ interests. Search engines may gather records of users’ search terms. Advertisers may track their users across networks of websites, gathering an overview of their interests and concerns. “Cookies” may be used to track consumer habits and may provide information about the options selected within a website, which web pages were visited before and after the visit to the host website and any other personal information provided. The user cannot fully control or even necessarily be aware of who may observe a pattern of online activity, but by remaining anonymous – by guarding the link between the information and the identity of the person to whom it relates – the user can in large measure be assured that the activity remains private.
Given all of this information, the privacy interest is about much more than just name and address.
Second, the court expands our understanding of informational privacy, concluding that there are three conceptually distinct issues: privacy as secrecy, privacy as control, and privacy as anonymity. It is anonymity that is particularly notable as the court recognizes its importance within the context of Internet usage. Given the importance of the information and the ability to link anonymous Internet activities with an identifiable person, a high level of informational privacy is at stake.
in the totality of the circumstances of this case, there is a reasonable expectation of privacy in the subscriber information. The disclosure of this information will often amount to the identification of a user with intimate or sensitive activities being carried out online, usually on the understanding that these activities would be anonymous. A request by a police officer that an ISP voluntarily disclose such information amounts to a search.
Fourth, having concluded that obtaining subscriber information was a search engaging a reasonable expectation of privacy, the court found that the information was unconstitutionally obtained, rendering the search unlawful. Addressing the impact of the PIPEDA voluntary disclosure clause, the court notes:
Since in the circumstances of this case the police do not have the power to conduct a search for subscriber information in the absence of exigent circumstances or a reasonable law, I do not see how they could gain a new search power through the combination of a declaratory provision and a provision enacted to promote the protection of personal information.
Update, 7 July: A few weeks later, the US Supreme Court also made a strong pro-privacy ruling, this one mandating a warrant for police to search the contents of a cellphone.
— Julia Angwin (@JuliaAngwin) June 25, 2014
Politico‘s Josh Gerstein has more on the ruling in Riley v. California:
The Supreme Court’s blunt and unequivocal decision Wednesday giving Americans strong protection against arrest-related searches of their cell phones could also give a boost to lawsuits challenging the National Security Agency’s vast collection of phone call data.
Chief Justice John Roberts’s 28-page paean to digital privacy was like music to the ears of critics of the NSA’s metadata program, which sweeps up details on billions of calls and searches them for possible links to terrorist plots.
“This is a remarkably strong affirmation of privacy rights in a digital age,” said Marc Rotenberg of the Electronic Privacy Information Center. “The court found that digital data is different and that has constitutional significance, particularly in the realm of [the] Fourth Amendment…I think it also signals the end of the NSA program.”
Roberts’s opinion is replete with rhetoric warning about the privacy implications of access to data in individuals’ smart phones, including call logs, Web search records and location information. Many of the arguments parallel, or are virtually identical to, the ones privacy advocates have made about the dangers inherent in the NSA’s call metadata program.
June 5, 2014
It’s been a year since the name Edward Snowden became known to the world, and it’s been a bumpy ride since then, as we found out that the tinfoil-hat-wearing anti-government conspiracy theorists were, if anything, under-estimating the actual level of organized, secret government surveillance. At The Register, Duncan Campbell takes us inside the “FIVE-EYED VAMPIRE SQUID of the internet”, the five-way intelligence-sharing partnership of US/UK/Canada/Australia/New Zealand:
One year after The Guardian opened up the trove of top secret American and British documents leaked by former National Security Agency (NSA) sysadmin Edward J Snowden, the world of data security and personal information safety has been turned on its head.
Everything about the safety of the internet as a common communication medium has been shown to be broken. As with the banking disasters of 2008, the crisis and damage created — not by Snowden and his helpers, but by the unregulated and unrestrained conduct the leaked documents have exposed — will last for years if not decades.
Compounding the problem is the covert network of subornment and control that agencies and collaborators working with the NSA are now revealed to have created in communications and computer security organisations and companies around the globe.
The NSA’s explicit objective is to weaken the security of the entire physical fabric of the net. One of its declared goals is to “shape the worldwide commercial cryptography market to make it more tractable to advanced cryptanalytic capabilities being developed by the NSA”, according to top secret documents provided by Snowden.
Profiling the global machinations of merchant bank Goldman Sachs in Rolling Stone in 2009, journalist Matt Taibbi famously characterized them as operating “everywhere … a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money”.
The NSA, with its English-speaking “Five Eyes” partners (the relevant agencies of the UK, USA, Australia, New Zealand and Canada) and a hitherto unknown secret network of corporate and government partners, has been revealed to be a similar creature. The Snowden documents chart communications funnels, taps, probes, “collection systems” and malware “implants” everywhere, jammed into data networks and tapped into cables or onto satellites.
June 4, 2014
Charles Stross discusses some of the second-order effects should the US Secret Service actually get the sarcasm-detection software they’re reportedly looking for:
… But then the Internet happened, and it just so happened to coincide with a flowering of highly politicized and canalized news media channels such that at any given time, whoever is POTUS, around 10% of the US population are convinced that they’re a baby-eating lizard-alien in a fleshsuit who is plotting to bring about the downfall of civilization, rather than a middle-aged male politician in a business suit.
Well now, here’s the thing: automating sarcasm detection is easy. It’s so easy they teach it in first year computer science courses; it’s an obvious application of AI. (You just get your Turing-test-passing AI that understands all the shared assumptions and social conventions that human-human conversation relies on to identify those statements that explicitly contradict beliefs that the conversationalist implicitly holds. So if I say “it’s easy to earn a living as a novelist” and the AI knows that most novelists don’t believe this and that I am a member of the set of all novelists, the AI can infer that I am being sarcastic. Or I’m an outlier. Or I’m trying to impress a date. Or I’m secretly plotting to assassinate the POTUS.)
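Stross's "easy" recipe — flag any statement that contradicts what the speaker's group is assumed to believe — really can be written in a few lines, which is roughly his joke: all the hard parts (parsing, world knowledge, the Turing-test-passing AI) are hidden in the assumptions. A toy sketch under those wildly generous assumptions, with a hypothetical hand-built belief table standing in for the AI:

```python
# Toy "sarcasm detector" in the spirit of Stross's recipe. Everything hard
# (language understanding, shared context, irony) is assumed away into a
# hypothetical, hand-built table of group beliefs.

GROUP_BELIEFS = {
    # group -> {proposition: whether the group believes it}
    "novelists": {"earning a living as a novelist is easy": False},
}

def is_sarcastic(speaker_group: str, proposition: str, asserted: bool) -> bool:
    """Sarcasm := asserting the opposite of what your group is known to believe."""
    beliefs = GROUP_BELIEFS.get(speaker_group, {})
    if proposition not in beliefs:
        return False  # no shared assumption to contradict, so no signal
    return asserted != beliefs[proposition]

# A novelist says "it's easy to earn a living as a novelist":
print(is_sarcastic("novelists", "earning a living as a novelist is easy", True))
# True -- or an outlier, or impressing a date, or plotting against the POTUS
```

The fragility is obvious: the verdict is only as good as the belief table, and a speaker who knows the table exists can game it, which is exactly the self-referential failure Stross predicts below.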
Of course, we in the real world know that shaved apes like us never saw a system we didn’t want to game. So in the event that sarcasm detectors ever get a false positive rate of less than 99% (or a false negative rate of less than 1%) I predict that everybody will start deploying sarcasm as a standard conversational gambit on the internet.
Wait … I thought everyone already did?
Trolling the secret service will become a competitive sport, the goal being to not receive a visit from the SS in response to your totally serious threat to kill the resident of 1600 Pennsylvania Avenue. Al Qaida terrrrst training camps will hold tutorials on metonymy, aggressive irony, cynical detachment, and sarcasm as a camouflage tactic for suicide bombers. Post-modernist pranks will draw down the full might of law enforcement by mistake, while actual death threats go encoded as LOLCat macros. Any attempt to algorithmically detect sarcasm will fail because sarcasm is self-referential and the awareness that a sarcasm detector may be in use will change the intent behind the message.
As the very first commenter points out, a problem with this is that a substantial proportion of software developers (as indicated by their position on the Asperger/Autism spectrum) find it very difficult to detect sarcasm in real life…
Reposting at his own site an article he did for The Mark News:
The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.
It was a software insecurity, but the problem was entirely human.
Software has vulnerabilities because it’s written by people, and people make mistakes — thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.
In retrospect, the mistake should have been obvious, and it’s amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.
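The mistake itself was a missing bounds check in OpenSSL's TLS heartbeat handler: the code trusted the length field in the attacker's request and echoed back that many bytes, even when the actual payload was far shorter, so the reply carried adjacent memory contents. A simplified Python model of the flaw and the fix (the real bug was in C; the names and the `memory` buffer here are illustrative, not OpenSSL's actual code):

```python
# Simplified model of the Heartbleed flaw. The real bug lived in C inside
# OpenSSL's heartbeat handler; here a bytearray stands in for the process
# heap that sat next to the request buffer.

memory = bytearray(b"PAYLOAD" + b"...secret-password...")

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # BUG: trusts claimed_len instead of len(payload), so the reply
    # includes whatever happened to sit in memory past the request.
    buf = bytearray(payload) + memory[len(payload):claimed_len]
    return bytes(buf[:claimed_len])

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # FIX: reject any request whose claimed length exceeds the real payload.
    if claimed_len > len(payload):
        raise ValueError("heartbeat length exceeds payload")
    return payload[:claimed_len]

# The attacker sends a 7-byte payload but claims it is 28 bytes long:
leak = heartbeat_vulnerable(b"PAYLOAD", 28)
print(leak)  # b'PAYLOAD...secret-password...'
```

In the real attack the over-read could return up to 64 KB per request, and repeated requests could sweep out private keys, session cookies, and passwords, which is why the one-line length check mattered so much.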
The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google’s security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.
When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it’s announced.
May 27, 2014
Cory Doctorow sympathizes with young people who have literally grown up with the internet:
The problem with being a “digital native” is that it transforms all of your screw-ups into revealed deep truths about how humans are supposed to use the Internet. So if you make mistakes with your Internet privacy, not only do the companies who set the stage for those mistakes (and profited from them) get off scot-free, but everyone else who raises privacy concerns is dismissed out of hand. After all, if the “digital natives” supposedly don’t care about their privacy, then anyone who does is a laughable, dinosauric idiot, who isn’t Down With the Kids.
“Privacy” doesn’t mean that no one in the world knows about your business. It means that you get to choose who knows about your business.
It’s difficult to explain to people just how open their online “secrets” really are … and that’s not even covering the folks who are specifically targets of active surveillance … just being on Facebook or other social media sites hands over a lot of your personal details without your direct knowledge or (informed) consent. But you can start to take back some of your own privacy online:
If you start using computers when you’re a little kid, you’ll have a certain fluency with them that older people have to work harder to attain. As Douglas Adams wrote:
- Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
- Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
- Anything invented after you’re thirty-five is against the natural order of things.
If I was a kid today, I’d be all about the opsec — the operational security. I’d learn how to use tools that kept my business between me and the people I explicitly shared it with. I’d make it my habit, and get my friends into the habit too (after all, it doesn’t matter if all your email is encrypted if you send it to some dorkface who keeps it all on Google’s servers in unscrambled form where the NSA can snaffle it up).
Here’s some opsec links to get you started:
- First of all, get a copy of Tails, AKA “The Amnesic Incognito Live System.” This is an operating system that you can use to boot up your computer so that you don’t have to trust the OS it came with to be free from viruses and keyloggers and spyware. It comes with a ton of secure communications tools, as well as everything you need to make the media you want to send out into the world.
- Next, get a copy of The Tor Browser Bundle, a special version of Firefox that automatically sends your traffic through something called Tor (The Onion Router, not to be confused with Tor Books, who publish my novels). This lets you browse the Web with a much greater degree of privacy and anonymity than you would otherwise get.
- Learn to use GPG, which is a great way to encrypt (scramble) your emails. There’s a Chrome plugin for using GPG with Gmail, and another version for Firefox.
- If you like chatting, get OTR, AKA “Off the Record,” a very secure private chat tool that has exciting features like “perfect forward secrecy” (this being a cool way of saying, even if someone breaks this tomorrow, they won’t be able to read the chats they captured today).
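The “perfect forward secrecy” mentioned in the last item can be sketched with a toy Diffie-Hellman key exchange: each chat session derives its key from fresh, throwaway secrets, so an attacker who records today’s traffic and steals keys tomorrow still can’t decrypt the old sessions. A minimal sketch in Python, using deliberately tiny textbook parameters (real protocols such as OTR use large standardized groups and authenticated exchanges):

```python
import secrets

# Textbook toy parameters (p = 23, g = 5). Real protocols use large,
# standardized groups; these tiny values are for illustration only.
P, G = 23, 5

def ephemeral_keypair():
    """Fresh per-session secret, meant to be discarded after the session."""
    priv = secrets.randbelow(P - 2) + 1   # secret exponent in [1, P-2]
    pub = pow(G, priv, P)                 # public half, safe to transmit
    return priv, pub

# One chat session: each side generates throwaway keys...
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# ...they exchange only the public halves, yet derive the same session key.
alice_key = pow(b_pub, a_priv, P)
bob_key = pow(a_pub, b_priv, P)
assert alice_key == bob_key

# Forward secrecy: once the private exponents are deleted, a recorded
# transcript (a_pub, b_pub, and the encrypted chat) no longer contains
# enough information to recompute the session key.
del a_priv, b_priv
```

The point is the last step: because the secrets are ephemeral and destroyed after use, there is nothing left for a future compromise to reveal.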
Once you’ve mastered that stuff, start to think about your phone. Android phones are much, much easier to secure than Apple’s iPhones (Apple tries to lock their phones so you can’t install software except through their store, and because of a 1998 law called the DMCA, it’s illegal to make a tool to unlock them). There are lots of alternative operating systems for Android, of varying degrees of security. The best place to start is CyanogenMod, which makes it much easier to use privacy tools with your mobile device.
May 19, 2014
Nick Gillespie thinks that the uproar about net neutrality may end up with the worst of all possible solutions by letting the FCC control the internet:
Reports of the imminent death of the Internet’s freewheeling ways and utopian possibilities are more wildly exaggerated and full of spam than those emails from Mrs. Mobutu Sese Seko.
In fact, the real problem isn’t that the FCC hasn’t shown the cyber-cojones to regulate ISPs like an old-school telephone company or “common carrier,” but that it’s trying to increase its regulatory control of the Internet in the first place.
Under the proposal currently in play, the FCC assumes an increased ability to review ISP offerings on a “case-by-case basis” and kill any plan it doesn’t believe is “commercially reasonable.” Goodbye fast-moving innovation and adjustment to changing technology on the part of companies, hello regulatory morass and long, drawn-out bureaucratic hassles.
In 1998, the FCC told Congress that the Internet should properly be understood as an “information service,” which allows for a relatively low level of government interference, rather than as a “telecommunication service,” which could subject it to the sort of oversight that public utilities get (as my Reason colleague Peter Suderman explains, there’s every reason to keep that original classification). The Internet has flourished in the absence of major FCC regulation, and there’s no demonstrated reason to change that now. That’s exactly why the parade of horribles — non-favored video streams slowed to an unwatchable trickle! whole sites blocked! plucky new startups throttled in the crib! — trotted out by net neutrality proponents is hypothetical in a world without legally mandated net neutrality.
Apart from addressing a problem that doesn’t yet exist, if you are going to pin your hopes for free expression and constant innovation on a government agency, the FCC is about the last place to start. For God’s sake, we’re talking about the agency that spent the better part of a decade trying to figuratively cover up Janet Jackson’s tit by fining Viacom and CBS for airing the 2004 Super Bowl.