Sounds good, right? Canada’s telecom companies telling the government that there’s no reason to pass laws requiring surveillance capabilities … except that the reason they’re saying this is that “they will be building networks that will feature those capabilities by default”:
After years of failed bills, public debate, and considerable controversy, lawful access legislation received royal assent last week. Justice Minister Peter MacKay’s Bill C-13 lumped together measures designed to combat cyberbullying with a series of new warrants to enhance police investigative powers, generating criticism from the Privacy Commissioner of Canada, civil liberties groups, and some prominent victims’ rights advocates. They argued that the government should have created cyberbullying safeguards without sacrificing privacy.
While the bill would have benefited from some amendments, it remains a far cry from earlier versions that featured mandatory personal information disclosure without court oversight and required Internet providers to install extensive surveillance and interception capabilities within their networks.
The mandatory disclosure of subscriber information rules, which figured prominently in earlier lawful access bills, were gradually reduced in scope and ultimately eliminated altogether. Moreover, a recent Supreme Court ruling raised doubt about the constitutionality of the provisions.
Perhaps the most notable revelation is that Internet providers have tried to convince the government that they will voluntarily build surveillance capabilities into their networks. A 2013 memorandum prepared for the public safety minister reveals that Canadian telecom companies advised the government that the leading telecom equipment manufacturers, including Cisco, Juniper, and Huawei, all offer products with interception capabilities at a small additional cost.
In light of the standardization of the interception capabilities, the memo notes that the Canadian providers argue that “the telecommunications market will soon shift to a point where interception capability will simply become a standard component of available equipment, and that technical changes in the way communications actually travel on communications networks will make it even easier to intercept communications.”
In other words, Canadian telecom providers are telling the government there is no need for legally mandated surveillance and interception functionality since they will be building networks that will feature those capabilities by default.
Michael Geist on the most recent Supreme Court of Canada ruling on the ability of the police to conduct warrantless searches of cellphones taken during an arrest:
The Supreme Court of Canada issued its decision in R. v. Fearon today, a case involving the legality of a warrantless cellphone search by police during an arrest. Given the court’s strong endorsement of privacy in recent cases such as Spencer, Vu, and Telus, this seemed like a slam dunk. Moreover, the U.S. Supreme Court’s June 2014 decision in Riley, which addressed similar issues and ruled that a warrant is needed to search a phone, further suggested that the court would continue its streak of pro-privacy decisions.
To the surprise of many, a divided court upheld the ability of police to search cellphones without a warrant incident to an arrest. The majority established some conditions, but ultimately ruled that it could strike the privacy balance by attaching some safeguards to the practice. A strongly worded dissent disagreed, noting the privacy implications of access to cellphones and arguing that judicial pre-authorization is the best method of addressing those implications.
The majority, written by Justice Cromwell (joined by McLachlin, Moldaver, and Wagner), explicitly recognizes that cellphones are the functional equivalent of computers and that a search may constitute a significant intrusion of privacy. Yet the majority cautions that not every search is a significant intrusion. It ultimately concludes that while there is the potential for a cellphone search to be intrusive, that will not be the case in every instance.
Given that conclusion, it is prepared to permit cellphone searches that are incident to arrest provided that the law is modified with some additional protections against invasion of privacy. It proceeds to effectively write the law by creating four conditions: a lawful arrest, the search is incidental to the arrest with a valid law enforcement purpose, the search is tailored or limited to the purpose (i.e., limited to recent information), and police take detailed notes on what they have examined and how the phone was searched.
The courts have far too often rolled over for any kind of police intrusions into the private lives of Canadians, but a decision from earlier this year has actually helped deter the RCMP from pursuing trivial or tangential inquiries into their online activity:
A funny thing happens when courts start requiring more information from law enforcement: law enforcers suddenly seem less interested in zealously enforcing the law.
Back in June of this year, Canada’s Supreme Court delivered its decision in R. v. Spencer, which brought law enforcement’s warrantless access of ISP subscriber info to an end.
In a unanimous decision written by (Harper appointee) Justice Thomas Cromwell, the court issued a strong endorsement of Internet privacy, emphasizing the privacy importance of subscriber information, the right to anonymity, and the need for police to obtain a warrant for subscriber information except in exigent circumstances or under a reasonable law.
The effects of this ruling are beginning to be felt. Michael Geist points to a Winnipeg Free Press article that details the halcyon days of the Royal Canadian Mounted Police’s warrantless access.
Prior to the court decision, the RCMP and border agency estimate, it took about five minutes to complete the less than one page of documentation needed to ask for subscriber information, and the company usually turned it over immediately or within one day.
Five minutes! Amazing. And disturbing. A 5-minute process indicates no one involved made even the slightest effort to prevent abuse of the process. The court’s decision has dialed back that pace considerably. The RCMP is now complaining that it takes “10 hours” to fill out the 10-20 pages required to obtain subscriber info. It’s also unhappy with the turnaround time, which went from nearly immediate to “up to 30 days.”
In response, the RCMP has done what other law enforcement agencies have done when encountering a bit of friction: given up.
“Evidence is limited at this early stage, but some cases have already been abandoned by the RCMP as a result of not having enough information to get a production order to obtain (basic subscriber information),” the memo says.
Michael Geist looks at one of the less obvious issues in the Uber dispute with Canadian regulators:
The mounting battle between Uber, the popular app-based car service, and the incumbent taxi industry has featured court dates in Toronto, undercover sting operations in Ottawa, and a marketing campaign designed to stoke fear among potential Uber customers. As Uber enters a growing number of Canadian cities, the ensuing regulatory fight is typically pitched as a contest between a popular, disruptive online service and a staid taxi industry intent on keeping new competitors out of the market.
My weekly technology law column (Toronto Star version, homepage version) notes that if the issue was only a question of choosing between a longstanding regulated industry and a disruptive technology, the outcome would not be in doubt. The popularity of a convenient, well-priced alternative, when contrasted with frustration over a regulated market that artificially limits competition to maintain pricing, is unsurprisingly going to generate enormous public support and will not be regulated out of existence.
While the Uber regulatory battles have focused on whether it constitutes a taxi service subject to local rules, last week a new concern attracted attention: privacy. Regardless of whether it is a taxi service or a technological intermediary, it is clear that Uber collects an enormous amount of sensitive, geo-locational information about its users. In addition to payment data, the company accumulates a record of where its customers travel, how long they stay at their destinations, and even where they are located in real-time when using the Uber service.
Reports indicate that the company has coined the term “God View” for its ability to track user movements. The God View enables it to simultaneously view all Uber cars and all customers waiting for a ride in an entire city. When those mesh – the Uber customer enters an Uber car – the company can track movements along city streets. Uber says that use of the information is strictly limited, yet it would appear that company executives have accessed the data to develop profiles on some of its users.
Think today’s ads on TV are irritating? You ain’t seen nothing yet:
I’ve discussed in the past how many people mistake privacy for some sort of absolute “thing” rather than a spectrum of trade-offs. Leaving your home to go to the store involves giving up a small amount of privacy, but it’s a trade-off most people feel is worth it (not so much for some uber-celebrities, who choose other options). Sharing information with a website is often seen as a reasonable trade-off for the services/information that website provides. The real problem is that the true trade-offs often aren’t clear: what you’re giving up and what you’re getting back aren’t always made transparent, and that’s where people feel their privacy is being violated. When they make the decision consciously and the trade-off seems worth it, almost no one feels that their privacy has been violated. Yet when they don’t fully understand, or when the deal they made is unilaterally changed, that’s when privacy is violated, because the deal someone thought they were striking is not what actually happened.
The amount of data this thing collects is staggering. It logs where, when, how, and for how long you use the TV. It sets tracking cookies and beacons designed to detect “when you have viewed particular content or a particular email message.” It records “the apps you use, the websites you visit, and how you interact with content.” It ignores “do-not-track” requests as a considered matter of policy.
To some extent, that’s not really all that different than a regular computer. But, then it begins to get creepier:
It also has a built-in camera — with facial recognition. The purpose is to provide “gesture control” for the TV and enable you to log in to a personalized account using your face. On the upside, the images are saved on the TV instead of uploaded to a corporate server. On the downside, the Internet connection makes the whole TV vulnerable to hackers who have demonstrated the ability to take complete control of the machine.
More troubling is the microphone. The TV boasts a “voice recognition” feature that allows viewers to control the screen with voice commands. But the service comes with a rather ominous warning: “Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party.” Got that? Don’t say personal or sensitive stuff in front of the TV.
You may not be watching, but the telescreen is listening.
David Akin posted a list of questions posed by John Gilmore, challenging the Apple iOS8 cryptography promises:
Gilmore considered what Apple said and considered how Apple creates its software — a closed, secret, proprietary method — and what coders like him know about the code that Apple says protects our privacy — pretty much nothing — and then wrote the following for distribution on Dave Farber’s Interesting People listserv. I’m pretty sure neither Farber nor Gilmore will begrudge me reproducing it.
And why do we believe [Apple]?
- Because we can read the source code and the protocol descriptions ourselves, and determine just how secure they are?
- Because they’re a big company and big companies never lie?
- Because they’ve implemented it in proprietary binary software, and proprietary crypto is always stronger than the company claims it to be?
- Because they can’t covertly send your device updated software that would change all these promises, for a targeted individual, or on a mass basis?
- Because you will never agree to upgrade the software on your device, ever, no matter how often they send you updates?
- Because this first release of their encryption software has no security bugs, so you will never need to upgrade it to retain your privacy?
- Because if a future update INSERTS privacy or security bugs, we will surely be able to distinguish these updates from future updates that FIX privacy or security bugs?
- Because if they change their mind and decide to lessen our privacy for their convenience, or by secret government edict, they will be sure to let us know?
- Because they have worked hard for years to prevent you from upgrading the software that runs on their devices so that YOU can choose it and control it instead of them?
- Because the US export control bureaucracy would never try to stop Apple from selling secure mass market proprietary encryption products across the border?
- Because the countries that wouldn’t let Blackberry sell phones that communicate securely with your own corporate servers, will of course let Apple sell whatever high security non-tappable devices it wants to?
- Because we’re Apple fanboys and the company can do no wrong?
- Because they want to help the terrorists win?
- Because NSA made them mad once, therefore they are on the side of the public against NSA?
- Because it’s always better to wiretap people after you convince them that they are perfectly secure, so they’ll spill all their best secrets?
There must be some other reason, I’m just having trouble thinking of it.
In the Guardian, Cory Doctorow says that we need privacy-enhancing technical tools that can be easily used by everyone, not just the highly technical (or highly paranoid) among us:
You don’t need to be a technical expert to understand privacy risks anymore. They are everywhere: from the Snowden revelations to the daily parade of internet security horrors around the world, like Syrian and Egyptian checkpoints where your Facebook logins are required in order to weigh your political allegiances (sometimes with fatal consequences), or celebrities having their most intimate photos splashed all over the web.
The time has come to create privacy tools for normal people – people with a normal level of technical competence. That is, all of us, no matter what our level of technical expertise, need privacy. Some privacy measures do require extraordinary technical competence; if you’re Edward Snowden, with the entire NSA bearing down on your communications, you will need to be a real expert to keep your information secure. But the kind of privacy that makes you immune to mass surveillance and attacks-of-opportunity from voyeurs, identity thieves and other bad guys is attainable by anyone.
I’m a volunteer on the advisory board for a nonprofit that’s aiming to do just that: Simply Secure (which launches Thursday at simplysecure.org) brings together some very bright usability and cryptography experts with the aim of revamping the user interface of the internet’s favorite privacy tools, starting with OTR, the extremely secure chat system whose best-known feature is “perfect forward secrecy”, which gives each conversation its own unique keys, so a breach of one conversation’s keys can’t be used to snoop on others.
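The forward-secrecy idea is worth unpacking: each conversation runs a fresh ephemeral key exchange, and the throwaway private keys are deleted afterwards, so stealing one session’s key unlocks nothing else. Here is a toy sketch of that pattern using a per-conversation Diffie-Hellman exchange; the parameters are deliberately tiny and insecure, and the function names are mine (real OTR uses a standardized 1536-bit group and authenticates the exchange):

```python
import secrets

# Toy Diffie-Hellman parameters, for illustration only.
P = 4294967291  # largest 32-bit prime; far too small for real security
G = 5

def ephemeral_keypair():
    """Fresh private/public pair, generated anew for each conversation."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    """Both sides derive the same session key from the exchange."""
    return pow(their_pub, my_priv, P)

# Conversation 1: both parties generate throwaway keys.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()
session1 = shared_key(a_priv, b_pub)
assert session1 == shared_key(b_priv, a_pub)  # both sides agree

# Conversation 2 repeats the exchange with brand-new keys, so a later
# compromise of session1 reveals nothing about session2.
a2_priv, a2_pub = ephemeral_keypair()
b2_priv, b2_pub = ephemeral_keypair()
session2 = shared_key(a2_priv, b2_pub)
assert session2 == shared_key(b2_priv, a2_pub)
```

Once the ephemeral private keys are wiped at the end of each conversation, there is no long-term secret whose theft retroactively unlocks past sessions; that is the property the “perfect forward secrecy” label describes.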
More importantly, Simply Secure’s process for attaining, testing and refining usability is the main product of its work. This process will be documented and published as a set of best practices for other organisations, whether they are for-profits or non-profits, creating a framework that anyone can use to make secure products easier for everyone.
MIT, Adobe and Microsoft have developed a technique that allows conversations to be reconstructed based on the almost invisible vibrations of surfaces in the same room:
Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analyzing minute vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass.
In other experiments, they extracted useful audio signals from videos of aluminum foil, the surface of a glass of water, and even the leaves of a potted plant. The researchers will present their findings in a paper at this year’s Siggraph, the premier computer graphics conference.
“When sound hits an object, it causes the object to vibrate,” says Abe Davis, a graduate student in electrical engineering and computer science at MIT and first author on the new paper. “The motion of this vibration creates a very subtle visual signal that’s usually invisible to the naked eye. People didn’t realize that this information was there.”
Reconstructing audio from video requires that the frequency of the video samples — the number of frames of video captured per second — be higher than the frequency of the audio signal. In some of their experiments, the researchers used a high-speed camera that captured 2,000 to 6,000 frames per second. That’s much faster than the 60 frames per second possible with some smartphones, but well below the frame rates of the best commercial high-speed cameras, which can top 100,000 frames per second.
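The sampling-rate requirement the researchers describe is the familiar Nyquist constraint: strictly, the capture rate must exceed twice the highest audio frequency of interest, and any tone above that limit folds (“aliases”) down to a lower apparent frequency. A minimal sketch of that folding, with a function name and figures of my own choosing for illustration:

```python
def recovered_frequency(f_signal_hz, f_sample_hz):
    """Apparent frequency (Hz) of a pure tone after sampling.

    Tones above the Nyquist limit (half the sample rate) fold back
    into the range [0, f_sample_hz / 2] -- this is aliasing.
    """
    f = f_signal_hz % f_sample_hz   # fold into one sampling period
    if f > f_sample_hz / 2:         # mirror the upper half back down
        f = f_sample_hz - f
    return f

# A 440 Hz tone captured at 2,000 fps sits below the 1,000 Hz Nyquist
# limit and survives intact; at a phone's 60 fps it folds down to 20 Hz.
print(recovered_frequency(440, 2000))  # prints 440
print(recovered_frequency(440, 60))    # prints 20
```

This is why the high-speed cameras (2,000 to 6,000 frames per second) matter: intelligible speech occupies frequencies well above what a 60 fps smartphone capture can represent without aliasing.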
I was aware that you could “bug” a room by monitoring the vibrations of a non-soundproofed window, at least under certain circumstances, but this is rather more subtle. I wonder how long this development has been known to the guys at the NSA…
It’s been my constant experience that laws purported to “protect” my privacy always seem to prevent me from being given information that doesn’t seem to merit extra protection (for example, my son’s university administration goes way out of its way to protect his privacy … to the point that they barely acknowledge I might have any interest in knowing anything about him). The effect of most “privacy” laws is to allow bureaucrats to prevent outsiders from being given any information at all. Anything they don’t want to share now seems to be protected by nebulous “privacy concerns” (whether real or imaginary). It’s not just my paranoia, however, as Stewart Baker points out:
It’s time once again to point out that privacy laws, with their vague standards and selective enforcement, are more likely to serve privilege than to protect privacy. The latest to learn that lesson are patients mistreated by the Veterans Administration and the whistleblowers who sought to help them.
Misuse of privacy law is now so common that I’ve begun issuing annual awards for the worst offenders — the Privies. The Veterans Administration has officially earned a nomination for a 2015 Privy under the category “We All Got To Serve Someone: Worst Use of Privacy Law to Serve Power and Privilege.”
Tim Cushing wonders why we don’t seem to sympathize with the plight of poor, overworked law enforcement officials who find the crushing burden of getting a warrant for accessing your cell phone data to be too hard:
You’d think approved warrants must be like albino unicorns for all the arguing the government does to avoid having to run one by a judge. It continually acts as though there aren’t statistics out there that show obtaining a warrant is about as difficult as obeying the laws of thermodynamics. Wiretap warrants have been approved 99.969% of the time over the last decade. And that’s for something far more intrusive than cell site location data.
But still, the government continues to argue that location data, while possibly intrusive, is simply Just Another Business Record — records it is entitled to have thanks to the Third Party Doctrine. Any legal decision that suggests even the slightest expectation of privacy might have arisen over the past several years as the public’s relationship with cell phones has shifted from “luxury item/business tool” to “even grandma has a smartphone” is greeted with reams of paper from the government, all of it metaphorically pounding on the table and shouting “BUSINESS RECORDS!”
When that fails, it pushes for the lower bar of the Stored Communications Act [PDF] to be applied to its request, dropping it from “probable cause” to “specific and articulable facts.” The Stored Communications Act is the lowest bar, seeing as it allows government agencies and law enforcement to access electronic communications older than 180 days without a warrant. It’s interesting that the government would invoke this to defend the warrantless access to location metadata, seeing as the term “communications” is part of the law’s title. This would seem to imply what’s being sought is actual content — something that normally requires a higher bar to obtain.
Update: Ken White at Popehat says warrants are not particularly strong devices to protect your liberty and lists a few distressing cases where warrants have been issued recently.
We’re faced all the time with the ridiculous warrants judges will sign if they’re asked. Judges will sign a warrant to give a teenager an injection to induce an erection so that the police can photograph it to fight sexting. Judges will, based on flimsy evidence, sign a warrant allowing doctors to medicate and anally penetrate a man because he might have a small amount of drugs concealed in his rectum. Judges will sign a warrant to dig up a yard based on a tip from a psychic. Judges will kowtow to an oversensitive politician by signing a warrant to search the home of the author of a patently satirical Twitter account. Judges will give police a warrant to search your home based on a criminal libel statute if your satirical newspaper offended a delicate professor. And you’d better believe judges will oblige cops by giving them a search warrant when someone makes satirical cartoons about them.
I’m not saying that warrants are completely useless. Warrants create a written record of the government’s asserted basis for an action, limiting cops’ ability to make up post-hoc justifications. Occasionally some prosecutors turn down weak warrant applications. The mere process of seeking a warrant may regulate law enforcement behavior somewhat.
Rather, I’m saying that requiring the government to get a warrant isn’t the victory you might hope. The numbers — and the experience of criminal justice practitioners — suggest that judges in the United States provide only marginal oversight over what is requested of them. Calling it a rubber stamp is unfair; sometimes actual rubber stamps run out of ink. The problem is deeper than court decisions that excuse the government from seeking warrants because of the War on Drugs or OMG 9/11 or the like. The problem is one of the culture of the criminal justice system and the judiciary, a culture steeped in the notion that “law and order” and “tough on crime” are principled legal positions rather than political ones. The problem is that even if we’d like to see the warrant requirement as interposing neutral judges between our rights and law enforcement, there’s no indication that the judges see it that way.
The argument that you’ve got nothing to worry about because you’re not doing anything wrong has long since passed its best-before date. As Nick Gillespie points out, you don’t need to be a member of Al Qaeda, a black-hat hacker, or a registered Republican to be of interest to the NSA’s information gathering team:
If You’re Reading Reason.com, The NSA is Probably Already Following You
Two things to contemplate on early Sunday morning, before church or political talk shows get underway:
Remember all those times we were told that the government, especially the National Security Agency (NSA), only tracks folks who are either guilty of something or involved in suspicious-seeming activity? Well, we’re going to have to amend that a bit. Using documents from Edward Snowden, the Washington Post‘s Barton Gellman, Julie Tate, and Ashkan Soltani report:
Ordinary Internet users, American and non-American alike, far outnumber legally targeted foreigners in the communications intercepted by the National Security Agency from U.S. digital networks, according to a four-month investigation by The Washington Post.
Nine of 10 account holders found in a large cache of intercepted conversations, which former NSA contractor Edward Snowden provided in full to The Post, were not the intended surveillance targets but were caught in a net the agency had cast for somebody else.
Many of them were Americans. Nearly half of the surveillance files, a strikingly high proportion, contained names, e-mail addresses or other details that the NSA marked as belonging to U.S. citizens or residents. NSA analysts masked, or “minimized,” more than 65,000 such references to protect Americans’ privacy, but The Post found nearly 900 additional e-mail addresses, unmasked in the files, that could be strongly linked to U.S. citizens or U.S. residents.
The cache of documents in question dates from 2009 through 2012 and comprises 160,000 documents collected under PRISM and Upstream, two programs that collect data from different sources. “Most of the people caught up in those programs are not the targets and would not lawfully qualify as such,” write Gellman, Tate, and Soltani, who also underscore that NSA surveillance has produced some very meaningful and good intelligence. The real question is whether the government can do that in a way that doesn’t result in massive dragnet programs that ultimately create far more problems than they solve (remember the Church Committee?).
Read the whole thing. And before anyone raises the old “if you’re innocent, you’ve got nothing to hide” shtick, read Scott Shackford’s “3 Reasons the ‘Nothing to Hide’ Crowd Should Be Worried About Government Surveillance.”
Michael Geist talks about another court attempting to push local rules into other jurisdictions online — in this case it’s not the European “right to be forgotten” nonsense, it’s unfortunately a Canadian court pulling the stunt:
In the aftermath of the European Court of Justice “right to be forgotten” decision, many asked whether a similar ruling could arise in Canada. While a privacy-related ruling has yet to hit Canada, last week the Supreme Court of British Columbia relied in part on the decision in issuing an unprecedented order requiring Google to remove websites from its global index. The ruling in Equustek Solutions Inc. v. Jack is unusual since its reach extends far beyond Canada. Rather than ordering the company to remove certain links from the search results available through Google.ca, the order intentionally targets the entire database, requiring the company to ensure that no one, anywhere in the world, can see the search results. Note that this differs from the European right to be forgotten ruling, which is limited to Europe.
The implications are enormous since if a Canadian court has the power to limit access to information for the globe, presumably other courts would as well. While the court does not grapple with this possibility, what happens if a Russian court orders Google to remove gay and lesbian sites from its database? Or if Iran orders it to remove Israeli sites from the database? The possibilities are endless since local rules of freedom of expression often differ from country to country. Yet the B.C. court adopts the view that it can issue an order with global effect. Its reasoning is very weak, concluding that:
the injunction would compel Google to take steps in California or the state in which its search engine is controlled, and would not therefore direct that steps be taken around the world. That the effect of the injunction could reach beyond one state is a separate issue.
Unfortunately, it does not engage effectively with this “separate issue.”