At Techdirt, Karl Bode sings the praises of dumb TVs that don’t share your every word with unspecified “third parties” who may or may not have any compunction about further sharing of what happens in your home (within audio range of your TV, anyway):
But it’s something else stupid that Samsung did this week that got less press attention, but that I actually find far more troubling. Numerous Samsung smart TV users around the world this week stated that the company has started injecting ads into content being watched on third-party devices and services. For example, some users found that when streaming video content from PC to the living room using Plex, they suddenly were faced with a large ad for Pepsi that actually originated from their Samsung TV:
“Reports for the unwelcome ad interruption first surfaced on a Subreddit dedicated to Plex, the media center app that is available on a variety of connected devices, including Samsung smart TVs. Plex users typically use the app to stream local content from their computer or a network-attached storage drive to their TV, which is why many were very surprised to see an online video ad being inserted into their videos. A Plex spokesperson assured me that the company has nothing to do with the ad in question.”
Now Samsung hasn’t responded yet to this particular issue, and you’d have to think that the company accidentally enabled some kind of trial ad injection technology, since anything else would be idiotic brand seppuku (in fact it does appear that it has been working with Yahoo on just this kind of technology). Still, users say the ads have them rushing to disable the smart portion of Samsung TVs, whether that’s by using a third-party solution or digging into the bowels of the TV’s settings to refuse Samsung’s end user agreement. And that raises an important point: many consumers (myself included) want their TV to be as slack-jawed, glassy-eyed, dumb and dim-witted as possible.
Michael Geist on the rather disturbing news that Canadian intelligence agencies are busy watching the uploads of every internet user (including the Canadian users that CSE/CSIS are theoretically banned from tracking by the letter of the law):
… the problem with oversight and accountability as the primary focus is that it leaves the substantive law (in the case of CSE Internet surveillance) or proposed law (as in the case of C-51) largely unaddressed. If we fail to examine the shortcomings within the current law or within Bill C-51, no amount of accountability, oversight, or review will restore the loss of privacy and civil liberties.
First, consider the Snowden revelations that the CSE has been the lead on a surveillance initiative that gathers as many as 15 million uploads and downloads per day from a wide range of hosting sites that even appear to include the Internet Archive. The goal is reputed to be to target terrorist propaganda and training materials and identify who is uploading or downloading the materials. The leaked information shows how once a downloader is identified, intelligence agencies use other databases (including databases on billions of website cookies) to track the specific individual and their Internet use within hours of identified download.
The Levitation program, which removes any doubt about Canada’s role in global Internet surveillance, highlights how seemingly all Internet activity is now tracked by signals intelligence agencies. Note that the sites that host the downloads do not hand over their usage logs. Rather, intelligence agencies are able to track who visits the sites and what they do from the outside. That confirms a massive surveillance architecture of Internet traffic operating on a global scale. Is improved oversight in Canada alone going to change this dynamic that crosses borders and surveillance agencies? It is hard to see how it would.
Moreover, these programs point to the fundamental flaw in Canadian law, where Canadians are re-assured that CSE does not – legally cannot – target Canadians. However, mass surveillance of this nature does not distinguish between nationalities. Mass surveillance of a hundred million downloads every week by definition targets Canadians alongside Internet users from every corner of the globe. To argue that Canadians are not specifically targeted when it is obvious that the personal information of Canadians is indistinguishable from everyone else’s data at the time of collection is to engage in meaningless distinctions that only succeed in demonstrating the weakness of Canadian law. Better oversight of CSE is needed, but so too is a better law governing CSE activities.
Published on 14 Dec 2013
Foucault’s take on the elf on the shelf through an imagined conversation by @DrLauraPinto
H/T to Anthony L. Fisher for the video link:
Dr. Laura Elizabeth Pinto, a digital technology professor at the University of Ontario Institute of Technology, thinks Elf on the Shelf poses a critical ethical dilemma. In a paper for the Canadian Centre for Policy Alternatives, Pinto wonders if the Elf is “preparing a generation of children to accept, not question, increasingly intrusive (albeit whimsically packaged) modes of surveillance.”
Sensing that she might come off as a humorless paranoid crank, Pinto clarified her position to the Washington Post:
“I don’t think the elf is a conspiracy and I realize we’re talking about a toy. It sounds humorous, but we argue that if a kid is okay with this bureaucratic elf spying on them in their home, it normalizes the idea of surveillance and in the future restrictions on our privacy might be more easily accepted.” (Emphasis mine).
One could argue that the millions of adults walking around with NSA-trackable and criminal-hackable smartphones in their pockets are far more influential than a seasonal doll in setting the example to the next generation that surveillance is inevitable and Big Brother is not to be feared. Still, Pinto has a point when she writes:
What The Elf on the Shelf represents and normalizes: anecdotal evidence reveals that children perform an identity that is not only for caretakers, but for an external authority (The Elf on the Shelf), similar to the dynamic between citizen and authority in the context of the surveillance state.
Sounds good, right? Canada’s telecom companies telling the government that there’s no reason to pass laws requiring surveillance capabilities … except that the reason they’re saying this is that “they will be building networks that will feature those capabilities by default”:
After years of failed bills, public debate, and considerable controversy, lawful access legislation received royal assent last week. Public Safety Minister Peter MacKay’s Bill C-13 lumped together measures designed to combat cyberbullying with a series of new warrants to enhance police investigative powers, generating criticism from the Privacy Commissioner of Canada, civil liberties groups, and some prominent victims rights advocates. They argued that the government should have created cyberbullying safeguards without sacrificing privacy.
While the bill would have benefited from some amendments, it remains a far cry from earlier versions that featured mandatory personal information disclosure without court oversight and required Internet providers to install extensive surveillance and interception capabilities within their networks.
The mandatory disclosure of subscriber information rules, which figured prominently in earlier lawful access bills, were gradually reduced in scope and ultimately eliminated altogether. Moreover, a recent Supreme Court ruling raised doubt about the constitutionality of the provisions.
Perhaps the most notable revelation is that Internet providers have tried to convince the government that they will voluntarily build surveillance capabilities into their networks. A 2013 memorandum prepared for the public safety minister reveals that Canadian telecom companies advised the government that the leading telecom equipment manufacturers, including Cisco, Juniper, and Huawei, all offer products with interception capabilities at a small additional cost.
In light of the standardization of the interception capabilities, the memo notes that the Canadian providers argue that “the telecommunications market will soon shift to a point where interception capability will simply become a standard component of available equipment, and that technical changes in the way communications actually travel on communications networks will make it even easier to intercept communications.”
In other words, Canadian telecom providers are telling the government there is no need for legally mandated surveillance and interception functionality since they will be building networks that will feature those capabilities by default.
Michael Geist on the most recent Supreme Court of Canada ruling on the ability of the police to conduct warrantless searches of cellphones taken during an arrest:
The Supreme Court of Canada issued its decision in R. v. Fearon today, a case involving the legality of a warrantless cellphone search by police during an arrest. Given the court’s strong endorsement of privacy in recent cases such as Spencer, Vu, and Telus, this seemed like a slam dunk. Moreover, the U.S. Supreme Court’s June 2014 decision in Riley, which addressed similar issues and ruled that a warrant is needed to search a phone, further suggested that the court would continue its streak of pro-privacy decisions.
To the surprise of many, a divided court upheld the ability of police to search cellphones without a warrant incident to an arrest. The majority established some conditions, but ultimately ruled that it could navigate the privacy balance by establishing some safeguards with the practice. A strongly worded dissent disagreed, noting the privacy implications of access to cellphones and the need for judicial pre-authorization as the best method of addressing the privacy implications.
The majority, written by Justice Cromwell (joined by McLachlin, Moldaver, and Wagner), explicitly recognizes that cellphones are the functional equivalent of computers and that a search may constitute a significant intrusion of privacy. Yet the majority cautions that not every search is a significant intrusion. It ultimately concludes that while there is the potential for a cellphone search to be intrusive, it does not believe that will be the case in every instance.
Given that conclusion, it is prepared to permit cellphone searches that are incident to arrest provided that the law is modified with some additional protections against invasion of privacy. It proceeds to effectively write the law by creating four conditions: a lawful arrest, the search is incidental to the arrest with a valid law enforcement purpose, the search is tailored or limited to the purpose (i.e., limited to recent information), and police take detailed notes on what they have examined and how the phone was searched.
The courts have far too often rolled over for any kind of police intrusions into the private lives of Canadians, but a decision from earlier this year has actually helped deter the RCMP from pursuing trivial or tangential inquiries into their online activity:
A funny thing happens when courts start requiring more information from law enforcement: law enforcers suddenly seem less interested in zealously enforcing the law.
Back in June of this year, Canada’s Supreme Court delivered its decision in R. v. Spencer, which brought law enforcement’s warrantless access of ISP subscriber info to an end.
In a unanimous decision written by (Harper appointee) Justice Thomas Cromwell, the court issued a strong endorsement of Internet privacy, emphasizing the privacy importance of subscriber information, the right to anonymity, and the need for police to obtain a warrant for subscriber information except in exigent circumstances or under a reasonable law.
The effects of this ruling are beginning to be felt. Michael Geist points to a Winnipeg Free Press article that details the halcyon days of the Royal Canadian Mounted Police’s warrantless access.
Prior to the court decision, the RCMP and border agency estimate, it took about five minutes to complete the less than one page of documentation needed to ask for subscriber information, and the company usually turned it over immediately or within one day.
Five minutes! Amazing. And disturbing. A 5-minute process indicates no one involved made even the slightest effort to prevent abuse of the process. The court’s decision has dialed back that pace considerably. The RCMP is now complaining that it takes “10 hours” to fill out the 10-20 pages required to obtain subscriber info. It’s also unhappy with the turnaround time, which went from nearly immediate to “up to 30 days.”
In response, the RCMP has done what other law enforcement agencies have done when encountering a bit of friction: given up.
“Evidence is limited at this early stage, but some cases have already been abandoned by the RCMP as a result of not having enough information to get a production order to obtain (basic subscriber information),” the memo says.
Michael Geist looks at one of the less obvious issues in the Uber dispute with Canadian regulators:
The mounting battle between Uber, the popular app-based car service, and the incumbent taxi industry has featured court dates in Toronto, undercover sting operations in Ottawa, and a marketing campaign designed to stoke fear among potential Uber customers. As Uber enters a growing number of Canadian cities, the ensuing regulatory fight is typically pitched as a contest between a popular, disruptive online service and a staid taxi industry intent on keeping new competitors out of the market.
My weekly technology law column (Toronto Star version, homepage version) notes that if the issue was only a question of choosing between a longstanding regulated industry and a disruptive technology, the outcome would not be in doubt. The popularity of a convenient, well-priced alternative, when contrasted with frustration over a regulated market that artificially limits competition to maintain pricing, is unsurprisingly going to generate enormous public support and will not be regulated out of existence.
While the Uber regulatory battles have focused on whether it constitutes a taxi service subject to local rules, last week a new concern attracted attention: privacy. Regardless of whether it is a taxi service or a technological intermediary, it is clear that Uber collects an enormous amount of sensitive, geo-locational information about its users. In addition to payment data, the company accumulates a record of where its customers travel, how long they stay at their destinations, and even where they are located in real-time when using the Uber service.
Reports indicate that the company has coined the term “God View” for its ability to track user movements. The God View enables it to simultaneously view all Uber cars and all customers waiting for a ride in an entire city. When those mesh – the Uber customer enters an Uber car – the company can track movements along city streets. Uber says that use of the information is strictly limited, yet it would appear that company executives have accessed the data to develop portfolios on some of its users.
Think today’s ads on TV are irritating? You ain’t seen nothing yet:
I’ve discussed in the past how many people mistake privacy as some sort of absolute “thing” rather than a spectrum of trade-offs. Leaving your home to go to the store involves giving up a small amount of privacy, but it’s a trade-off most people feel is worth it (not so much for some uber-celebrities, and then they choose other options). Sharing information with a website is often seen as a reasonable trade-off for the services/information that website provides. The real problem is often just that the true trade-offs aren’t clear. What you’re giving up and what you’re getting back aren’t always done transparently, and that’s where people feel their privacy is being violated. When they make the decision consciously and the trade-off seems worth it, almost no one feels that their privacy is violated. Yet, when they don’t fully understand, or when the deal they made is unilaterally changed, that’s when the privacy is violated, because the deal someone thought they were striking is not what actually happened.
The amount of data this thing collects is staggering. It logs where, when, how, and for how long you use the TV. It sets tracking cookies and beacons designed to detect “when you have viewed particular content or a particular email message.” It records “the apps you use, the websites you visit, and how you interact with content.” It ignores “do-not-track” requests as a considered matter of policy.
To some extent, that’s not really all that different than a regular computer. But, then it begins to get creepier:
It also has a built-in camera — with facial recognition. The purpose is to provide “gesture control” for the TV and enable you to log in to a personalized account using your face. On the upside, the images are saved on the TV instead of uploaded to a corporate server. On the downside, the Internet connection makes the whole TV vulnerable to hackers who have demonstrated the ability to take complete control of the machine.
More troubling is the microphone. The TV boasts a “voice recognition” feature that allows viewers to control the screen with voice commands. But the service comes with a rather ominous warning: “Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party.” Got that? Don’t say personal or sensitive stuff in front of the TV.
You may not be watching, but the telescreen is listening.
David Akin posted a list of questions posed by John Gilmore, challenging the Apple iOS8 cryptography promises:
Gilmore considered what Apple said and considered how Apple creates its software — a closed, secret, proprietary method — and what coders like him know about the code that Apple says protects our privacy — pretty much nothing — and then wrote the following for distribution on Dave Farber‘s Interesting People listserv. I’m pretty sure neither Farber nor Gilmore will begrudge me reproducing it.
And why do we believe [Apple]?
- Because we can read the source code and the protocol descriptions ourselves, and determine just how secure they are?
- Because they’re a big company and big companies never lie?
- Because they’ve implemented it in proprietary binary software, and proprietary crypto is always stronger than the company claims it to be?
- Because they can’t covertly send your device updated software that would change all these promises, for a targeted individual, or on a mass basis?
- Because you will never agree to upgrade the software on your device, ever, no matter how often they send you updates?
- Because this first release of their encryption software has no security bugs, so you will never need to upgrade it to retain your privacy?
- Because if a future update INSERTS privacy or security bugs, we will surely be able to distinguish these updates from future updates that FIX privacy or security bugs?
- Because if they change their mind and decide to lessen our privacy for their convenience, or by secret government edict, they will be sure to let us know?
- Because they have worked hard for years to prevent you from upgrading the software that runs on their devices so that YOU can choose it and control it instead of them?
- Because the US export control bureaucracy would never try to stop Apple from selling secure mass market proprietary encryption products across the border?
- Because the countries that wouldn’t let Blackberry sell phones that communicate securely with your own corporate servers, will of course let Apple sell whatever high security non-tappable devices it wants to?
- Because we’re Apple fanboys and the company can do no wrong?
- Because they want to help the terrorists win?
- Because NSA made them mad once, therefore they are on the side of the public against NSA?
- Because it’s always better to wiretap people after you convince them that they are perfectly secure, so they’ll spill all their best secrets?
There must be some other reason, I’m just having trouble thinking of it.
In the Guardian, Cory Doctorow says that we need privacy-enhancing technical tools that can be easily used by everyone, not just the highly technical (or highly paranoid) among us:
You don’t need to be a technical expert to understand privacy risks anymore: from the Snowden revelations to the daily parade of internet security horrors around the world – like Syrian and Egyptian checkpoints where your Facebook logins are required in order to weigh your political allegiances (sometimes with fatal consequences), or celebrities having their most intimate photos splashed all over the web – the stakes are plain to everyone.
The time has come to create privacy tools for normal people – people with a normal level of technical competence. That is, all of us, no matter what our level of technical expertise, need privacy. Some privacy measures do require extraordinary technical competence; if you’re Edward Snowden, with the entire NSA bearing down on your communications, you will need to be a real expert to keep your information secure. But the kind of privacy that makes you immune to mass surveillance and attacks-of-opportunity from voyeurs, identity thieves and other bad guys is attainable by anyone.
I’m a volunteer on the advisory board for a nonprofit that’s aiming to do just that: Simply Secure (which launches Thursday at simplysecure.org) collects together some very bright usability and cryptography experts with the aim of revamping the user interface of the internet’s favorite privacy tools, starting with OTR, the extremely secure chat system whose best-known feature is “perfect forward secrecy” which gives each conversation its own unique keys, so a breach of one conversation’s keys can’t be used to snoop on others.
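OTR’s actual protocol relies on ephemeral Diffie-Hellman exchanges and is considerably more involved, but the core intuition behind forward secrecy – that old keys cannot be recovered from current state – can be sketched with a toy one-way key ratchet (an illustration only, not OTR’s real mechanism):

```python
import hashlib


def ratchet(key: bytes) -> bytes:
    """Derive the next session key by hashing the current one.

    Because SHA-256 is a one-way function, an attacker who
    compromises the *current* key cannot work backwards to
    recover any *earlier* key in the chain.
    """
    return hashlib.sha256(key).digest()


# Toy initial secret; a real protocol would negotiate this
# via an ephemeral Diffie-Hellman exchange and then delete it.
k0 = b"initial shared secret (toy value)"
k1 = ratchet(k0)  # key for conversation/session 1
k2 = ratchet(k1)  # key for conversation/session 2

# If k2 leaks, messages protected by k0 and k1 remain secret,
# provided the earlier keys were erased after use.
```

The design point is simply that key derivation runs in one direction: each breach exposes only the present and future of one chain, never the past, which is why a breach of one conversation’s keys can’t be used to snoop on others.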
More importantly, Simply Secure’s process for attaining, testing and refining usability is the main product of its work. This process will be documented and published as a set of best practices for other organisations, whether they are for-profits or non-profits, creating a framework that anyone can use to make secure products easier for everyone.
MIT, Adobe and Microsoft have developed a technique that allows conversations to be reconstructed based on the almost invisible vibrations of surfaces in the same room:
Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analyzing minute vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass.
In other experiments, they extracted useful audio signals from videos of aluminum foil, the surface of a glass of water, and even the leaves of a potted plant. The researchers will present their findings in a paper at this year’s Siggraph, the premier computer graphics conference.
“When sound hits an object, it causes the object to vibrate,” says Abe Davis, a graduate student in electrical engineering and computer science at MIT and first author on the new paper. “The motion of this vibration creates a very subtle visual signal that’s usually invisible to the naked eye. People didn’t realize that this information was there.”
Reconstructing audio from video requires that the frequency of the video samples — the number of frames of video captured per second — be higher than the frequency of the audio signal. In some of their experiments, the researchers used a high-speed camera that captured 2,000 to 6,000 frames per second. That’s much faster than the 60 frames per second possible with some smartphones, but well below the frame rates of the best commercial high-speed cameras, which can top 100,000 frames per second.
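The constraint the researchers describe is the familiar Nyquist sampling criterion: to recover a signal component, you must sample at more than twice its frequency. A quick sketch using the frame rates mentioned above (illustrative numbers only):

```python
def max_recoverable_hz(frames_per_second: float) -> float:
    """Nyquist limit: the highest audio frequency that could, in
    principle, be recovered from video captured at this frame rate."""
    return frames_per_second / 2.0


# Frame rates from the article: a smartphone, the researchers'
# high-speed camera range, and a top commercial high-speed camera.
for fps in (60, 2_000, 6_000, 100_000):
    print(f"{fps:>7} fps -> recoverable audio up to ~{max_recoverable_hz(fps):,.0f} Hz")
```

At 60 fps a camera can in theory capture only components below about 30 Hz, well under the range of speech, which is why the researchers needed thousands of frames per second to recover intelligible audio.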
I was aware that you could “bug” a room by monitoring the vibrations of a non-soundproofed window, at least under certain circumstances, but this is rather more subtle. I wonder how long this development has been known to the guys at the NSA…
It’s been my constant experience that laws that are purported to “protect” my privacy always seem to restrict me from being given information that doesn’t seem to merit extra protection (for example, my son’s university administration goes way out of its way to protect his privacy … to the point they barely acknowledge that I might possibly have any interest in knowing anything about him). The effect of most “privacy” laws is to allow bureaucrats to prevent outsiders from being given any information at all. Anything they don’t want to share now seems to be protected by nebulous “privacy concerns” (whether real or imaginary). It’s not just my paranoia, however, as Stewart Baker points out:
It’s time once again to point out that privacy laws, with their vague standards and selective enforcement, are more likely to serve privilege than to protect privacy. The latest to learn that lesson are patients mistreated by the Veterans Administration and the whistleblowers who sought to help them.
Misuse of privacy law is now so common that I’ve begun issuing annual awards for the worst offenders — the Privies. The Veterans Administration has officially earned a nomination for a 2015 Privy under the category “We All Got To Serve Someone: Worst Use of Privacy Law to Serve Power and Privilege.”