Quotulatiousness

November 16, 2025

3D printing and firearms

Filed under: Liberty, Technology, USA, Weapons — Nicholas @ 03:00

On the social media site formerly known as Twitter, ESR discusses a recent user notification from one of the 3D printer companies to their users:

I’m told that 3D printed gun parts are far more sophisticated than this Liberator from 2013, but I’m sure nobody would actually do that, right? It would draw the attention of various government agencies for sure.

The recent flap about FlashForge attempting to forbid its customers from printing gun parts means it’s time for another reminder about technological risk.

Their weasel-worded climb-down carefully avoids stating that they never collect data on what you print. They only say they don’t collect data during your prints. The wording is so careful that I think we can conclude they do in fact ship telemetry on your print jobs when g-code arrives at the printer, immediately before printing.

So I repeat a warning I’ve given previously: never buy a 3D printer that requires an internet connection to function. And, always assume that if the printer’s firmware isn’t open-source, it is written to spy on you and could at any time prevent you from printing disapproved objects.

Oh, and never trust FlashForge again or buy their products, no matter how much groveling they do. After this, it’s safest to assume that anything they say about respecting the privacy and autonomy of their customers will be a lie. Hear that, @ff3dprinters?

We need to make a public example of FlashForge. Other vendors need to hear that shit like this will not be tolerated, that attempting to constrain what their customers print will do them permanent and irreversible damage.

It’s possible that this was merely a blunder on FlashForge’s part, that the attempts they’ve made so far to recover are compounding blunders, and that they have sincerely repented of trying to control their customers. That’s too bad; in order to create the right incentives bearing on the future behavior of other vendors, we must show no mercy. We must make them hurt – ideally, to the point of being driven out of business.

And really these warnings apply to all “smart” devices, not just 3D printers. Unless you can audit the source code, the only safe assumption to make is that the firmware is spyware, controlware, and malware.

Device vendors need to know that we do not forgive, and will not forget.

In response, Hopalong Ginsberg posted this helpful item:

October 3, 2025

Adding digital ID to the pocket moloch … what could possibly go wrong?

Filed under: Britain, Bureaucracy, Government, Liberty, Technology — Nicholas @ 03:00

On Substack, Andrew Doyle explains why it’s a terrible idea to trust the government — any government — in forcing digital ID on everyone:

An illustration of Jeremy Bentham’s Panopticon prison.
Drawing by Willey Reveley, 1791.

During a trip to Russia in 1785, the philosopher Jeremy Bentham sketched an outline for a new prison design. The cells were arranged around the circular perimeter and, at the centre, he placed his “panopticon”: a watchtower which afforded a view of any of the cells at all times. The prisoners might not be observed at any given moment, but they could never be sure that they weren’t.

Bentham’s design was never directly used, but the idea took hold as a symbol of state overreach and control, most famously in Michel Foucault’s Discipline and Punish (1975). Foucault was alert to the political ramifications of such a concept, and how surveillance might become an internalised experience. With Keir Starmer now pledging to introduce a digital ID system as a mandatory condition for the right to work, are we seeing the first step towards the realisation of Bentham’s vision?

I suppose we are already there. I have seen friends switch off their phones before discussing politically sensitive issues, genuinely convinced that digital eavesdropping is the norm. Many people are mistrustful of the “Alexa” voice assistant, which they are persuaded is recording their every word. While this all seems terribly conspiratorial, I’m sure most of us remember those reports a few years ago about the Pegasus spyware which had been covertly installed on the phones of journalists and government figures, turning the devices into pocket spies.

[…]

Few will be surprised to hear that public trust in political institutions has plummeted. The increasingly authoritarian tendencies of successive governments, our two-tier policing system, public manipulation as embodied in the “nudge unit”, and the corrupt prioritisation of the interests of the political class over the people they serve – perhaps best demonstrated by parliament’s flagrant efforts to overturn the Brexit vote – have all contributed to this climate of mistrust. The bizarre overreach of police during the lockdowns – in which dog walkers were publicly shamed with drone footage, and shopping trolleys were probed for “non-essential items” – has hardly helped matters.

To many of us, it is baffling that anyone at all would support the prospect of the government keeping track of our movements and holding our private details in a database. Starmer claims that the scheme will curb illegal immigration, but we are talking about criminals who already work outside the system and will doubtless continue to do so. Besides, identity cards have been a reality on the continent for years, and have done precisely nothing to resolve the problem. Employers in the UK are already legally obliged to insist on proof of immigration status from workers.

Labour’s digital ID scheme seems more about control than anything else. The possibility of fraud is also a major concern. It’s not as though the government has an unblemished track record of preventing data breaches. We all recall the massive leak of official MOD data regarding Afghans who had worked with the British government during the UK’s military campaigns. And who could forget the senior civil servant who, in 2008, left top-secret documents concerning al-Qaeda and Iraq’s security forces on a train from London Waterloo? Are we really to suppose that the creation of an all-encompassing centralised database will not leave the public open to risk from hackers and hostile foreign powers?

Tim Worstall adds that “they c’n fuck off ‘n’ all”:

So we’ve that wet dream of Tony Blair raising its ugly head again. There should be a national ID system. Actually, it’s not just Blair, T — the bureaucracy has been right pissed at the erasure of the wartime system since the 50s when it was abolished.

For there are two ways of looking at, thinking about, the whole governance thing. One is — the Blair, bureaucrats’, version — that the population are cattle, kine, to be managed. For the benefit of the bureaucracy of course — or at very least to be forced into doing what the bureaucracy thinks they — we — should be doing.

Then there’s that stout Englishman, the Anglo Saxon, version, which is that government are just the slaves we communally hire to make sure the bins get emptied. Well, OK, maybe raise a bit of tax for a Royal Navy to sink the Frenchies. But even then, not too much of that — the Civil War was, after all, triggered by Ship Money. Did the people who would not be slaughtered by the first wave of invading Frenchies — because they had the silly excuse of living 25 miles inland — have to pay the tax to run the Royal Navy to keep the Frenchies at bay or not? The King said yes — the King was right — and not for the first nor last time in British political history the guy who was right had his head cut off for being so.

Digital ID, so which version should we have? That one beloved of Froggie-type bureaucrats who view La Profonde as kine to be corralled? Or the Anglo Saxon version where we just devolve the scut work to a few slaves?

[…]

The reason this never will be proposed is that it doesn’t fit the reasons why our rulers wish to have an ID system. They’re insistent that we be their kine rather than they our. So, the Hell w’ ’em.

But it could be done. Government simply publishes an interface — an API — which says that proof of identity needs to be presented in this format. We’re done as far as whose kine is whose.
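Worstall's idea can be sketched concretely. In the toy version below, the state publishes only a required assertion format and a verification rule; any trusted issuer produces assertions in that format, and no central database is needed. The field names, the HMAC scheme, and the issuer are all invented for illustration, not drawn from any real proposal:

```python
import hmac, hashlib, json

# Hypothetical published format: the state specifies only which fields an
# identity assertion must carry and how its signature is computed. Any
# issuer that shares a key with the verifier (a bank, a notary) can mint
# one; the government holds no record of who presented what to whom.
REQUIRED_FIELDS = {"subject", "claim", "issuer", "expires"}

def sign_assertion(assertion: dict, key: bytes) -> str:
    """Sign the canonical JSON encoding of the assertion."""
    payload = json.dumps(assertion, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_assertion(assertion: dict, signature: str, key: bytes) -> bool:
    """Accept only assertions matching the published format and signature."""
    if not REQUIRED_FIELDS.issubset(assertion):
        return False
    return hmac.compare_digest(signature, sign_assertion(assertion, key))

key = b"issuer-demo-key"
a = {"subject": "anon-7731", "claim": "right-to-work",
     "issuer": "ExampleBank", "expires": "2026-01-01"}
sig = sign_assertion(a, key)
assert verify_assertion(a, sig, key)
assert not verify_assertion({"subject": "anon-7731"}, sig, key)
```

The design choice doing the work is that the state defines an interface rather than operating a registry: whoever can produce a conforming, correctly signed assertion is in, and the question of "whose kine is whose" never arises.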

Update 4 October: From Samizdata, another illustration of just how toxic Two Tier Keir has become to British voters:

The Guardian reports:

    “Reverse Midas touch”: Starmer plan prompts collapse in support for digital IDs

    Public support for digital IDs has collapsed after Keir Starmer announced plans for their introduction, in what has been described as a symptom of the prime minister’s “reverse Midas touch”.

    Net support for digital ID cards fell from 35% in the early summer to -14% at the weekend after Starmer’s announcement, according to polling by More in Common.

    The findings suggest that the proposal has suffered considerably from its association with an unpopular government. In June, 53% of voters surveyed said they were in favour of digital ID cards for all Britons, while 19% were opposed.

July 31, 2025

The intent of Britain’s Online Safety Act … and the actual implementation

In The Conservative Woman, Dr. Frederick Attenborough discusses the gap between what the Online Safety Act was intended to do and how it’s actually being enforced now that it’s the law of the land:

X posts like this may not be visible to users in the UK under the age verification rules of the Online Safety Act.

At the heart of the regime is a requirement to implement “highly effective” age checks. If a platform cannot establish with high confidence that a user is over 18, it must restrict access to a wide category of “sensitive” content, even when that content is entirely lawful. This has major implications for platforms where news footage and political commentary appear in real time.

Ofcom’s guidance makes clear that simple box-ticking exercises, such as declaring your age or agreeing to terms of service, will no longer suffice. Instead, platforms are expected to use tools such as facial age estimation, ID scans, open banking credentials and digital identity wallets.

The Act also pushes companies to filter harmful material before it appears in users’ feeds. Ofcom’s broader regulatory guidance warns that recommender systems can steer young users toward material they didn’t ask for. In response, platforms may now be expected to reconfigure their algorithms to filter out entire categories of lawful expression before it reaches underage or unverified users.

One platform already moving in this direction is X. Its approach offers a revealing – and potentially sobering – glimpse of where things may be heading. The company uses internal signals, including when an account was created, any prior verification, and behavioural data, to estimate a user’s age. If that process fails to confirm the user is over 18, he or she is automatically placed into a sensitive content filtering mode. As the platform’s Help Center explains: “Until we are able to determine if a user is 18 or over, they may be defaulted into sensitive media settings, and may not be able to access sensitive media”.

This system runs without user opt-in and applies at scale. Depending on how X classifies it, filtered material may include adult humour, graphic imagery, political commentary or footage of violence. Already there are signs that lawful content is quietly being screened out.

One example came on July 25, the day the Act’s age-verification duties took effect, during a protest outside the Britannia Hotel in Seacroft, Leeds, where asylum seekers are being housed. A video showing police officers restraining and arresting a protester was posted on X, but quickly became inaccessible to many UK-based users. Instead, viewers saw the message: “Due to local laws, we are temporarily restricting access to this content until X estimates your age”.

West Yorkshire Police denied any involvement in blocking the footage. X declined to comment, but its AI chatbot, Grok, indicated that the clip had been restricted under the Online Safety Act due to violent content. Though lawful and clearly newsworthy, the footage was likely flagged by automated systems intended to shield children from real-world violence.

In The Critic, Christopher Snowdon explains the breakdown of trust between the British public and their government that the implementation of the Online Safety Act only exacerbates:

People are right to be concerned about this slippery slope and yet it cannot be denied that it is pornography enthusiasts who have been hardest hit by the Online Safety Act in the short term. They must now verify themselves in one of three ways, each less appealing than the last. They can submit their credit card details, they can scan in proof of ID, such as a passport, or they can take a photo of their face and allow AI to judge how old they are. If they want to maximise their chances of being the victim of blackmail and identity theft, they could do all three.

While we might not think twice about submitting our credit card details to Amazon or posting our photos on Instagram, there is an understandable reluctance to hand over private data in order to access dubious websites for the purposes of sordid acts of self-pollution. The government assures us that the data will be kept confidential but it is only two weeks since we learned about a data breach that led to the names of 19,000 Afghans who wanted to flee the Taliban being given to the Taliban and it is less than two months since the names and addresses of 6.5 million Co-op customers were stolen in a cyber-attack. Rightly or wrongly, millions of British plank-spankers and rug-tuggers do not wish to identify themselves to anybody.

The result is a surge in interest in Virtual Private Networks (VPNs) which allow internet users to access websites as if they were in a less censorious country. Half of the top ten free apps in Apple’s app download charts yesterday were for VPNs. Google Trends data show that searches for “VPN” have gone through the roof since Friday. Readers can draw their own conclusions from the fact that these searches have been peaking between midnight and 2am.

Downloading random VPNs comes with risks of its own and opens up a whole new world of illicit online activity from free Premier League football to the Dark Web. But there is a deeper reason to feel uneasy about this unintended, albeit predictable, consequence of paternalistic regulation. By driving another wedge between the state and the individual, it further normalises rule-breaking in a country where casual lawlessness is becoming part of daily life. A law-abiding society cannot long endure if the median citizen thinks that the law is an ass.

The breakdown of trust can be seen most clearly when the ordinary man or woman does not share the moral certainties of the governing class. Among smokers, a collapse in tax morale — the intrinsic motivation to pay taxes — has led to a huge rise in the consumption of illegal tobacco in recent years. Smokers no longer feel any obligation to pay taxes that are designed to impoverish them to a government that vilifies them. Cannabis smokers learn from an early age to be suspicious of a police force that they might otherwise respect. Motorists who are faced with 20mph speed limits that were introduced by people who hate private transport have no scruples about flouting the law.

July 22, 2025

Age verification schemes are just another attempt to control everyone’s internet usage

Filed under: Britain, Government, Law, Technology — Nicholas @ 03:00

Marian Halcombe is specifically discussing the British age verification provisions of their Online Safety Act, but similar schemes are popping up all over the west, and they’re only pretending to be about protecting young people from online content:

“Privacy” by g4ll4is is licensed under CC BY 2.0.

The British State, in its infinite filth and hypocrisy, would like you to believe that it is deeply concerned about what you do with your penis. Or more precisely, what you look at while your hand is on it. The latest wheeze — part of the Online Safety Act — is mandatory age verification for all pornographic websites. We’re told it’s to stop children from seeing naughty videos. In reality, it’s a spyware regime disguised as child protection, devised by a ruling class that snorts coke with one hand while signing surveillance warrants with the other.

Let’s start with the pretence. No one in Westminster cares what children watch online. These are the same people who presided over the industrial-scale rape of working-class girls in Rotherham, Telford, Rochdale, and elsewhere — refusing to intervene for fear of “racism”. The idea that they now lie awake worrying about a Year Eight boy glimpsing a MILF thumbnail on Pornhub is an insult to the intelligence. They don’t care about children. They care about you.

The age-verification scheme isn’t just about proving you’re eighteen. It’s about linking your name and your age, and your IP address to your viewing habits. Whether it’s ID upload or facial recognition or some third-party database, the outcome is the same: a digital file that knows what you watch and when you watch it.

In a normal country, this would be recognised as deeply perverse. In ours, it’s dressed up as safety. The State that can’t fix the trains, that can’t keep the hospitals clean, now wants the power to log whether you’re big-enders or little-enders. And all under the banner of protecting the kiddies.

Yes, of course it’s technically possible to anonymise verification. But only if you believe that governments, regulators, and their corporate collaborators are incapable of abuse. That’s a belief I do not share. This is the same British government that let GCHQ harvest your webcam feeds and your phone calls under the TEMPORA programme. You didn’t vote for that. You weren’t told about it. You found out because Edward Snowden blew the whistle.
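Anonymised verification of this sort has been understood in cryptography for decades: with a Chaum-style blind signature, an issuer can certify a claim ("over 18") without ever seeing the token it signs, so the site that later checks the token cannot link it back to the person who obtained it. A toy sketch with textbook RSA numbers follows; the parameters are purely illustrative and nowhere near secure:

```python
# Toy Chaum-style blind signature (textbook RSA, insecure demo numbers).
p, q = 61, 53
n = p * q                 # RSA modulus
e = 17                    # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)       # private exponent (Python 3.8+ modular inverse)

m = 18                    # the claim "holder is over 18", encoded as a number

# Holder blinds the message with a random factor r coprime to n,
# so the issuer never sees m itself.
r = 7
blinded = (m * pow(r, e, n)) % n

# Issuer signs the blinded value without learning m.
blind_sig = pow(blinded, d, n)

# Holder unblinds: (m^d * r) * r^-1 = m^d mod n, a valid signature on m.
sig = (blind_sig * pow(r, -1, n)) % n

# Any site can verify the claim without learning who the holder is,
# and the issuer cannot match this token to any signing session.
assert pow(sig, e, n) == m
```

Whether such schemes would actually be deployed, rather than the linkable ID-upload systems the Act invites, is of course exactly the point of contention here.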

Do you really think the same regime won’t take an interest in which adult videos you watch? Anyone with an ounce of memory knows how this goes. Every intrusive policy begins with “think of the children”. The Video Recordings Act. The Dangerous Dogs Act. The Terrorism Act. And now the Online Safety Act. Once the infrastructure is in place, it never stays limited to its original purpose.

The definition of “harmful content” is vague for a reason. It can grow. It can stretch. Today it’s Pornhub. Tomorrow it’s Twitter. Then it’s dissident blogs, pro-life websites, or even a dodgy meme about immigration statistics. In the end, the target isn’t porn — it’s dissent.

July 10, 2025

Mandatory online age verification

Michael Geist discusses the rush of the Canadian and other governments in the west to try to impose one-size-fits-all age verification schemes on the internet:

The Day I Knew I Was Old 😉 by artistmac CC BY-SA 2.0

When the intersection of law and technology presents seemingly intractable new challenges, policy makers often bet on technology itself to solve the problem. Whether countering copyright infringement with digital locks, limiting access to unregulated services with website blocking, or deploying artificial intelligence to facilitate content moderation, there is a recurring hope the answer to the policy dilemma lies in better technology. While technology frequently does play a role, experience suggests that the reality is far more complicated as new technologies also create new risks and bring unforeseen consequences. So too with the emphasis on age verification technologies as a magical solution to limiting under-age access to adult content online. These technologies offer some promise, but the significant privacy and accuracy risks that could inhibit freedom of expression are too great to ignore.

The Hub runs a debate today on the mandated use of age verification technologies. I argue against it in a slightly shorter version of this post. Daniel Zekveld of the Association for Reformed Political Action (ARPA) Canada makes the case for it in this post.

The Canadian debate over age verification technologies – which has now expanded to include both age verification and age estimation systems – requires an assessment of both the proposed legislative frameworks and the technologies themselves. The last Parliament featured debate over several contentious Internet-related bills, notably streaming and news laws (Bills C-11 and C-18), online harms (Bill C-63) and Internet age verification and website blocking (Bill S-210). Bill S-210 fell below the radar screen for many months as it started in the Senate and received only cursory review in the House of Commons. The bill faced only a final vote in the House but it died with the election call. Once Parliament resumed, the bill’s sponsor, Senator Julie Miville-Dechêne, wasted no time in bringing it back as Bill S-209.

The bill would create an offence for any organization making available pornographic material to anyone under the age of 18 for commercial purposes. The penalty for doing so is $250,000 for the first offence and up to $500,000 for any subsequent offences. Organizations can rely on three potential defences:

  1. The organization instituted a government-approved “prescribed age-verification or age estimation method” to limit access. There is a major global business of vendors that sell these technologies and who are vocal proponents of this kind of legislation.
  2. The organization can make the case that there is “legitimate purpose related to science, medicine, education or the arts”.
  3. The organization took steps required to limit access after having received a notification from the enforcement agency (likely the CRTC).

Note that Bill S-209 has expanded the scope of available technologies for implementation: while S-210 only included age verification, S-209 adds age estimation technologies. Age estimation may benefit from limiting the amount of data that needs to be collected from an individual, but it also suffers from inaccuracies. For example, using estimation to distinguish between a 17 and 18 year old is difficult for both humans and computers, yet the law depends upon it. Given the standard for highly effective technologies, age estimation technologies may not receive government approvals, leaving only age verification in place.
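The threshold problem Geist describes can be made concrete with a small simulation. Assume a hypothetical, unbiased age estimator whose error has a standard deviation of 2.5 years (that figure is an assumption for illustration, in the general range reported for facial age estimation):

```python
import random

random.seed(0)

def estimate_age(true_age: float, stddev: float = 2.5) -> float:
    """Hypothetical estimator: unbiased, with a few years of Gaussian noise."""
    return random.gauss(true_age, stddev)

def pass_rate(true_age: float, threshold: float = 18,
              trials: int = 100_000) -> float:
    """Fraction of trials in which the estimate clears the threshold."""
    return sum(estimate_age(true_age) >= threshold
               for _ in range(trials)) / trials

# A 17-year-old clears the 18 threshold in roughly a third of trials,
# while a 25-year-old almost always does: the estimator's error is
# concentrated exactly where the law needs the most precision.
print(f"17-year-old passes: {pass_rate(17):.0%}")
print(f"25-year-old passes: {pass_rate(25):.0%}")
```

This is why a "highly effective" standard is so hard for estimation to meet at the boundary: no amount of averaging helps when the true age sits less than one error bar from the cutoff.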

June 5, 2025

The Liberals believe this time they’ll keep kids away from internet porn

Filed under: Cancon, Government, Liberty, Media, Technology — Nicholas @ 03:00

Sometimes it’s hard to get a grip on what Liberals actually believe, as on the one hand they’re actively resisting pulling literal pornography out of school libraries (because it’s “LGBT friendly”) and on the other hand, they’re all gung-ho for yet another attempt to pass legislation that will try to prevent kids from seeing porn on the internet:

How does a website automatically, “responsibly” prove someone’s age down the end of an internet connection, without actually verifying their ID? Answer: It doesn’t. Obviously.

There is another legislative effort afoot to keep Canadian children away from pornography. It’s a well-intentioned effort, I suppose, but such efforts didn’t work very well when pornography was printed on glossy paper and distributed on VHS tapes and pay-per-view, so it seems particularly improbable in the internet age.

Bill S-209 is Independent (Liberal-appointed) Senator Julie Miville-Dechêne’s second attempt at a private member’s bill on the topic. It is predicated on the notion that it’s easier to verify age automatically than it used to be: “Online age-verification and age-estimation technology is increasingly sophisticated and can now effectively ascertain the age of users without breaching their privacy rights”, the bill’s preamble avers.

It is absolute rubbish, to the extent that even the Liberals under former prime minister Justin Trudeau seemed to realize it the first time it was tried. We can only hope Mark Carney’s Liberals are of similar mind. Early signs are not positive. The reappointment of Steven Guilbeault as heritage minister (now called Canadian identity and culture minister, for some reason) doesn’t bode well. He seems genuinely to dislike the online world on principle.

Or, maybe it does bode well. Guilbeault did a singularly terrible job trying to sell the Liberals’ anti-internet agenda in English Canada. I’m not sure he could give away ice cream in a Calgary heatwave. So if you think laws targeting “online harms” are doomed to fail at best — and could lead to dystopian outcomes — then maybe Guilbeault is exactly the fellow you want in charge.

When it came to online porn, the Trudeau Liberals seemed to have some sense of the Sisyphean proposition before them. Miville-Dechêne’s first attempt at a bill received support from MPs of all parties in the House of Commons last year, but the Liberal leadership cited privacy concerns in refusing to get behind it.

In large part that might just have been because Conservative Leader Pierre Poilievre supported the idea and, to Liberals, anything Poilievre supports must obviously be a serious threat to humanity’s survival. But still, Trudeau was pretty unequivocal in rejecting the idea.

May 30, 2025

Senate to once again try to pass internet age verification and website blocking

Filed under: Cancon, Government, Liberty, Politics, Technology — Nicholas @ 03:00

Some ideas are so horrible that they never, ever die. The Canadian Senate nearly got an age verification and website blocking ban into law during the last Parliament, and as Michael Geist discusses, they’re not giving up now:

“In the east wing of the Centre Block is the Senate chamber, in which are the thrones for the Canadian monarch and consort, or for the federal viceroy and his or her consort, and from which either the sovereign or the governor general gives the Speech from the Throne and grants Royal Assent to bills passed by parliament. The senators themselves sit in the chamber, arranged so that those belonging to the governing party are to the right of the Speaker of the Senate and the opposition to the speaker’s left. The overall colour in the Senate chamber is red, seen in the upholstery, carpeting, and draperies, and reflecting the colour scheme of the House of Lords in the United Kingdom; red was a more royal colour, associated with the Crown and hereditary peers. Capping the room is a gilt ceiling with deep octagonal coffers, each filled with heraldic symbols, including maple leafs, fleur-de-lis, lions rampant, clàrsach, Welsh Dragons, and lions passant. On the east and west walls of the chamber are eight murals depicting scenes from the First World War; painted in between 1916 and 1920.”
Photo and description by Saffron Blaze via Wikimedia Commons.

The last Parliament featured debate over several contentious Internet-related bills, notably streaming and news laws (Bills C-11 and C-18), online harms (Bill C-63) and Internet age verification and website blocking (Bill S-210). Bill S-210 fell below the radar screen for many months as it started in the Senate and received only cursory review in the House. The bill faced only a final vote in the House but it died with the election call. This week, the bill’s sponsor, Senator Julie Miville-Dechêne, wasted no time in bringing it back. Now numbered Bill S-209, the bill starts from scratch in the Senate with the same basic framework but with some notable changes that address at least some of the concerns raised by the prior bill (a fulsome review of those concerns can be heard in a Law Bytes podcast I conducted with Senator Miville-Dechêne).

Bill S-209 creates an offence for any organization making available pornographic material to anyone under the age of 18 for commercial purposes. The penalty for doing so is $250,000 for the first offence and up to $500,000 for any subsequent offences. The previous bill used the term “sexually explicit material”, borrowing from the Criminal Code provision. This raised concerns as the definition in the Criminal Code is used in conjunction with other sexual crimes. The bill now features its own definition for pornographic material, which is defined as

    any photographic, film, video or other visual representation, whether or not it was made by electronic or mechanical means, the dominant characteristic of which is the depiction, for a sexual purpose, of a person’s genital organs or anal region or, if the person is female, her breasts, but does not include child pornography as defined in subsection 163.1(1) of the Criminal Code.

Organizations can rely on three potential defences:

  1. The organization instituted a government-approved “prescribed age-verification or age estimation method” to limit access. There is a major global business of vendors that sell these technologies and who are vocal proponents of this kind of legislation.
  2. The organization can make the case that there is “legitimate purpose related to science, medicine, education or the arts”.
  3. The organization took steps required to limit access after having received a notification from the enforcement agency (likely the CRTC).

Note that Bill S-209 has expanded the scope of available technologies for implementation: while S-210 only included age verification, S-209 adds age estimation technologies. Age estimation may benefit from limiting the amount of data that needs to be collected from an individual, but it also suffers from inaccuracies. For example, using estimation to distinguish between a 17 and 18 year old is difficult for both humans and computers, yet the law depends upon it. Given the standard for highly effective technologies, age estimation technologies may not receive government approvals, leaving only age verification in place.

May 10, 2025

QotD: Undocumented America

Filed under: Government, Quotations, USA — Nicholas @ 01:00

In the Panopticon State, the Shadowlands are thriving: a state that presumes to tax and license Joe Schmoe for using the table in the corner of his basement as a home office apparently doesn’t spot the half-dozen additional dwellings that sprout in José Schmoe’s yard out on the edge of town. Do-it-yourself wiring stretches from bungalow to lean-to trailer to RV to rusting pick-up on bricks, as five, six, eight, twelve different housing units pitch up on one lot. The more Undocumented America secedes from the hyper-regulatory state, the more frenziedly Big Nanny documents you and yours.

This multicultural squeamishness is most instructive. Illegal immigrants are providing a model for survival in an impoverished statist America, and on the whole the state is happy to let them do so. In Undocumented America, the buildings have no building codes, the sales have no sales tax, your identity card gives no clue as to your real identity. In the years ahead, for many poor Overdocumented-Americans, living in the Shadowlands will offer if not the prospect of escape then at least temporary relief. As America loses its technological edge and the present Chinese cyber-probing gets disseminated to the Wikileaks types, the blips on the computer screen representing your checking and savings accounts will become more vulnerable. After yet another brutal attack, your local branch never reconnects to head office; it brings up from the vault the old First National Bank of Deadsville shingle and starts issuing fewer cards and more checkbooks. And then fewer checkbooks and more cash. In small bills.

The planet is dividing into two extremes: an advanced world — Europe, North America, Australia — in which privacy is vanishing and the state will soon be able to monitor you every second of the day; and a reprimitivizing world — Somalia, the Pakistani tribal lands — where no one has a clue what’s going on. Undocumented America is giving us a lesson in how Waziristan and CCTV London can inhabit the same real estate, like overlapping area codes. There will be many takers for that in the years ahead. As Documented America fails, poor whites, poor blacks, and many others will find it easier to assimilate with Undocumented America, and retreat into the shadows.

Mark Steyn, After America, 2011.

August 29, 2024

Pavel Durov’s arrest isn’t for a clear crime, it’s for allowing everyone access to encrypted communications services

Filed under: France, Government, Liberty, Media, Technology — Tags: , , , , — Nicholas @ 03:00

J.D. Tuccille explains the real reason the French government arrested Pavel Durov, the CEO of Telegram:

It’s appropriate that, days after the French government arrested Pavel Durov, CEO of the encrypted messaging app Telegram, for failing to monitor and restrict communications as demanded by officials in Paris, Meta CEO Mark Zuckerberg confirmed that his company, which owns Facebook, was subjected to censorship pressures by U.S. officials. Durov’s arrest, then, stands as less of a one-off than as part of a concerted effort by governments, including those of nominally free countries, to control speech.

“Telegram chief executive Pavel Durov is expected to appear in court Sunday after being arrested by French police at an airport near Paris for alleged offences related to his popular messaging app,” reported France24.

A separate story noted claims by Paris prosecutors that he was detained for “running an online platform that allows illicit transactions, child pornography, drug trafficking and fraud, as well as the refusal to communicate information to authorities, money laundering and providing cryptographic services to criminals”.

Freedom for Everybody or for Nobody

Durov’s alleged crime is offering encrypted communications services to everybody, including those who engage in illegality or just anger the powers that be. But secure communications are a feature, not a bug, for most people who live in a world in which “global freedom declined for the 18th consecutive year in 2023”, according to Freedom House. Fighting authoritarian regimes requires means of exchanging information that are resistant to penetration by various repressive police agencies.

“Telegram, and other encrypted messaging services, are crucial for those intending to organise protests in countries where there is a severe crackdown on free speech. Myanmar, Belarus and Hong Kong have all seen people relying on the services,” Index on Censorship noted in 2021.

And if bad people occasionally use encrypted apps such as Telegram, they use phones and postal services, too. The qualities that make communications systems useful to those battling authoritarianism are also helpful to those with less benign intentions. There’s no way to offer security to one group without offering it to everybody.

As I commented on a post on MeWe the other day, "Somehow the governments of the west are engaged in a competition to see who can be the most repressive. Canada and New Zealand had the early lead, but Australia, Britain, Germany, and France have all recently moved ahead in the standings. I'm not sure what the prizes might be, but I strongly suspect 'a bloody revolution' is one of them (if not all of them)."

June 9, 2024

Microsoft’s latest ploy to be the most hated tech company

Filed under: Media, Technology, USA — Tags: , , , , , — Nicholas @ 03:00

Charles Stross wonders if Microsoft’s CoPilot+ is actually a veiled suicide attempt by the already much-hated software giant:

The breaking tech news this year has been the pervasive spread of “AI” (or rather, statistical modeling based on hidden layer neural networks) into everything. It’s the latest hype bubble now that Cryptocurrencies are no longer the freshest sucker-bait in town, and the media (who these days are mostly stenographers recycling press releases) are screaming at every business in tech to add AI to their product.

Well, Apple and Intel and Microsoft were already in there, but evidently they weren’t in there enough, so now we’re into the silly season with Microsoft’s announcement of CoPilot plus Recall, the product nobody wanted.

CoPilot+ is Microsoft’s LLM-based add-on for Windows, sort of like 2000’s Clippy the Talking Paperclip only with added hallucinations. Clippy was rule-based: a huge bundle of IF … THEN statements hooked together like a 1980s Expert System to help users accomplish what Microsoft believed to be common tasks, but which turned out to be irritatingly unlike anything actual humans wanted to accomplish. Because CoPilot+ is purportedly trained on what users actually do, it looked plausible to someone in marketing at Microsoft that it could deliver on “help the users get stuff done”. Unfortunately, human beings assume that LLMs are sentient and understand the questions they’re asked, rather than being unthinking statistical models that cough up the highest probability answer-shaped object generated in response to any prompt, regardless of whether it’s a truthful answer or not.

Anyway, CoPilot+ is also a play by Microsoft to sell Windows on ARM. Microsoft don’t want to be entirely dependent on Intel, especially as Intel’s share of the global microprocessor market is rapidly shrinking, so they’ve been trying to boost Windows on ARM to orbital velocity for a decade now. The new CoPilot+ branded PCs going on sale later this month are marketed as being suitable for AI (spot the sucker-bait there?) and have powerful new ARM processors from Qualcomm, which are pitched as “Macbook Air killers”, largely because they’re playing catch-up with Apple’s M-series ARM-based processors in terms of processing power per watt and having an on-device coprocessor optimized for training neural networks.

Having built the hardware and the operating system, Microsoft faces the inevitable question: why would a customer want this stuff? And being Microsoft, they took the first answer that bubbled up from their in-company echo chamber and pitched it at the market as a forced update to Windows 11. And the internet promptly exploded.

First, a word about Apple. Apple have been quietly adding AI features to macOS and iOS for the past several years. In fact, they got serious about AI in 2015, and every Apple Silicon processor they’ve released since 2016 has had a neural engine (an AI coprocessor) on board. Now that the older phones and laptops are hitting end of life, the most recent operating system releases are rolling out AI-based features. For example, there’s on-device OCR for text embedded in any image. There’s a language translation service for the OCR output, too. I can point my phone at a brochure or menu in a language I can’t read, activate the camera, and immediately read a surprisingly good translation: this is an actually useful feature of AI. (The ability to tag all the photos in my Photos library with the names of people present in them, and to search for people, is likewise moderately useful: the jury is still out on the pet recognition, though.) So the Apple roll-out of AI has so far been uneventful and unobjectionable, with a focus on identifying things people want to do and making them easier.

Microsoft Recall is not that.

May 20, 2024

The first post-privacy generation in human history

Filed under: Economics, Liberty, Media, Technology, USA — Tags: , , , , — Nicholas @ 05:00

You may have mixed feelings about the Zoomers — even if you happen to be a Zoomer — but it’s beyond argument that they are the first generation who have grown up in a zero-privacy world:

"Privacy" by g4ll4is is licensed under CC BY 2.0.

Zoomers are the first post-privacy generation in human existence. They will never know a world in which they can try to lose themselves without somehow being tracked. Roughly three years ago, I was speaking with the CEO and founder of a commercial digital advertising company from NYC. He told me that their technology was so powerful that they were able to figure out when people were getting up from their couches to go into another room simply via their own digital advertising software.

It's very tough to wrap our heads around the complete loss of privacy. For me, I have trouble remembering how it was to be out of instantaneous reach via mobile phone. Pre-mass adoption of cell phones, people would effectively be out of reach, i.e., disappear for hours at a time, as the only way to contact them was to call them at home (inb4 beepers, as I never had one). We are constantly tracked and monitored, and our personal data is sold by data brokers all over the globe. One customer of personal ad tracking data is the CIA, as Matthew Petti explains:

    For years, the U.S. government has bought information on private citizens from commercial data brokers. Now, for the first time ever, American spymasters are admitting that this data is sensitive—but they’re leaving it up to the spy agencies on how to use it.

    Last week, Director of National Intelligence (DNI) Avril Haines released a "Policy Framework for Commercially Available Information." Her office oversees 18 agencies in the "intelligence community", including the CIA, the FBI, the National Security Agency (NSA), and all military intelligence branches.

    In the 2018 case Carpenter v. United States, the Supreme Court ruled that police need a warrant to obtain mobile phone location data from phone companies. (During the case, the Reason Foundation filed an amicus brief against warrantless snooping.) As a workaround, the feds instead started buying data from third-party brokers.

    Haines’ new framework claims that “additional clarity” on the government’s policies will help protect Americans’ privacy. Yet the document is vague about the specific limits. It orders the agencies themselves to come up with “safeguards that are tailored to the sensitivity of the information” and write an annual report on how they use this data.

more:

    As national security journalist Spencer Ackerman points out in his Forever Wars newsletter, the framework doesn’t require the feds to delete old purchased data. Earlier this year, Sen. Ron Wyden (D–Ore.) called on the NSA to purge all data that it bought without a warrant and without following the Federal Trade Commission’s privacy policies.

    “The framework’s absence of clear rules about what commercially available information can and cannot be purchased by the intelligence community reinforces the need for Congress to pass legislation protecting the rights of Americans,” Wyden tells Reason. “The DNI’s framework is nonetheless an important step forward in starting to bring the intelligence community under a set of principles and policies, and in documenting all the various programs so that they can be overseen.”

Case in point:

    Wyden has been aggressively pushing for transparency on data purchases over the past few years. In 2021, he uncovered that the Defense Intelligence Agency was buying Americans’ smartphone location data. That same year, he sent a letter to Haines and CIA Director Bill Burns complaining about a secretive CIA data collection program. (In an Orwellian turn, the letter itself was classified until 2022.) This year, Wyden revealed more details on NSA data purchases.

    Some of this data is collected and sold directly by the apps. For example, an intelligence company called X-Mode once paid MuslimPro, an app that offers a daily prayer calendar and a compass pointing towards Mecca, to include a few lines of location tracking code. X-Mode then sold the data to U.S. government agencies. MuslimPro claims that it did not intend to sell the data to the government and ended the arrangement after the story broke.

So, yeah … app makers will sell your personal data to a buyer like the CIA.

    In other cases, the data is siphoned from advertising markets. Every time a user opens a website with paid advertisements, their location and attributes appear on a real-time bidding (RTB) exchange, a virtual auction where companies buy ad space. Data brokers posing as advertisers scrape the listings for information on users.

    “Any government with a halfway decent cyber intelligence program is participating in these RTB exchanges, because it’s such an immensely valuable source of data,” says Byron Tau, author of Means of Control: How the Hidden Alliance of Tech and Government Is Creating a New American Surveillance State.

    As a demonstration of how powerful RTB data is, an intelligence contractor used data from the dating app Grindr to track gay government employees from their offices to their homes, Tau reported in his book.
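To make the mechanism concrete, here's a loose sketch of the kind of payload an RTB exchange broadcasts to every would-be bidder. Field names roughly follow the public OpenRTB spec, but the values and the scraper are invented for illustration, not taken from any real exchange:

```python
# Simplified, illustrative shape of an OpenRTB-style bid request.
# Every participant in the auction receives this, whether or not
# they ever buy an ad. All values below are made up.
bid_request = {
    "id": "auction-12345",
    "app": {"bundle": "com.example.dating"},            # which app you were using
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID
        "geo": {"lat": 39.7392, "lon": -104.9903},      # precise location
    },
    "user": {"id": "hashed-user-id"},
}

def scrape(request):
    """What a 'bidder' that never wins an auction can still log."""
    dev = request["device"]
    return {
        "ad_id": dev["ifa"],
        "lat": dev["geo"]["lat"],
        "lon": dev["geo"]["lon"],
        "app": request["app"]["bundle"],
    }

profile = scrape(bid_request)
print(profile)
```

Nothing in the protocol obliges a bidder to actually purchase ad space; simply logging the stream of requests, and joining the advertising ID against other datasets, is enough to build the location histories Tau describes.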

The IRS is in on it too:

    Lawyers for the Internal Revenue Service, on the other hand, have argued that users voluntarily handed over the information, so the government is free to use it. Tau points out that users don’t really know how their data is being resold, and even the RTB exchanges themselves aren’t supposed to be used for data scraping.

    “A lot of these companies that are collecting data from the global population don’t have a real consumer relationship” with the people they’re spying on, Tau says. “Unless you know how to decompile software and you’re technically savvy, you can’t even make informed choices.”

In an increasingly digitized world, the right to privacy becomes wholly unworkable. Think digital payments by way of credit and debit cards vs. cash.

March 15, 2024

QotD: The ever-growing state

Filed under: Government, Law, Liberty, Quotations, USA — Tags: , , , — Nicholas @ 01:00

“Inconvenience would seem to be a small price to pay for peace of mind.”

That one phrase sums up all the problems we are having with government in this country. It justifies the humiliating personal searches at airports. It justifies the police state tactics of “sobriety checkpoints” or “identification stops”. It justifies the Patriot Act, and the new Intelligence Reform Act, with all their draconian intrusions on personal privacy, including the repulsive, illegal and un-Constitutional parts, such as no-warrant-required searches, a national ID card, federal snooping into our reading habits at libraries and book stores. It justifies any intrusion into private, personal, or intimate matters. After all, if someone has more than one wife (or husband), doesn’t your peace of mind require that that person be harassed, jailed, or otherwise punished for violation of your religious or moral code? It doesn’t matter that the people involved are adults who freely and willingly consent to live in that situation. For that matter, if two men or women live together, doesn’t your peace of mind require that their “immoral and ungodly” lifestyle be exposed, and the people involved publicly pilloried?

Ron Beatty, “Peace of Mind”, Libertarian Enterprise, 2005-03-06.

February 24, 2024

QotD: Big government

Filed under: Cancon, Government, Liberty, Quotations, USA — Tags: , , , — Nicholas @ 01:00

I’m Canadian and have a romantic fondness for the famous motto of the Royal Canadian Mounted Police, the one about the Mounties always getting their man. But the bigger you make the government, the more you entrust to it, the more powers you give it to nose around the country’s bank accounts, and phone calls, and e-mails, and favourite Internet porn sites, the more you’ll enfeeble it with the siren song of the soft target. The Mounties will no longer get their man, they’ll get you instead. Frankly, it’s a lot easier.

[…]

What should have died on September 11th is the liberal myth that you can regulate the world to your will. The reduction of a free-born citizenry to neutered sheep upon arrival at the airport was the most advanced expression of this delusion. So how’s the FAA reacting to September 11th? With more of the same kind of obtrusive, bullying, useless regulations that give you the comforting illusion that if they’re regulating you they must be regulating all the bad guys as well. We don’t need big government, we need lean government — government that’s stripped of its distractions and forced to concentrate on the essentials. If Hillary and Co want to argue for big government, conservatives could at least make the case for what’s really needed — grown-up government.

Mark Steyn, “Big Shift”, National Review, 2001-11-19.

January 3, 2024

They all spy on you, the FBI, RCMP, MI5 … and apparently your Subaru

Filed under: Business, Liberty, Technology, USA — Tags: , , — Nicholas @ 03:00

JoNova linked to this disturbing little article explaining what legal rights you give away merely by being a passenger in a modern Subaru vehicle:

Subaru is a Japanese car company started back in the 1950s. Their all-wheel drive, sporty SUVs and cars are popular with outdoor types and the LGBT+ community (and your privacy researcher's Mom … Mom swears by Subaru and has since the 1980s). Popular models include the Outback, Forester, Crosstrek, Impreza, Legacy, the sporty WRX, and the electric Solterra. The MySubaru app and Subaru's Starlink connected services offer up all the usual connected car things like remote start/stop, lock/unlock, honk your horn and flash your lights from bedroom, automatic collision notification, multimedia services like navigation and news, trip logs, and a way to manage other people who might drive your Subaru with boundary, speed, and curfew alerts. So, do we love Subaru's privacy? Not really. But hey, they aren't the worst car company we reviewed, so there's that.

Here's something you might not realize. The moment you sit in the passenger seat of a Subaru that uses connected services, you've consented to allow them to use — and maybe even sell — your personal information. According to their privacy policy, that means things like your name, location, "Audio recordings of Vehicle Occupants", and inferences they can draw about things like your "characteristics, predispositions, behavior, or attitudes". Call us bonkers, but we don't think that simply sitting in the passenger seat of someone's Subaru should mean you consent to having any of your personal information used for, well, pretty much anything at all. Let alone potentially sold to data brokers or shared with third party marketers so they can target you with ads about who knows what based on the inferences they draw about you because you sat in the back seat of a Subaru in the mountains of Colorado. We're gonna really call out Subaru for this, because they lay it out so clearly in their privacy policy, but please know, Subaru isn't the only car company doing this sort of icky thing.

If you go read Subaru’s privacy policy (or don’t, we did it for you, you can just read our review here), you’ll see at the very start they say this: “This Privacy Policy applies to each user of the Services, including any ‘Vehicle Occupant’, which includes each driver or passenger in a Subaru vehicle that uses Connected Vehicle Services, such as Subaru Starlink (such vehicle, a ‘Connected Vehicle’), whether or not such driver or passenger is the vehicle owner or a registered user of the Connected Vehicle Services. For the avoidance of doubt, for purposes of this Privacy Policy, ‘using’ the Services includes being a Vehicle Occupant in a Connected Vehicle.” So yeah, they don’t want there to be any doubt that when you sit in a connected Subaru, you’ve entered the world of using their services.

December 15, 2023

Bill S-210 “isn’t just a slippery slope, it is an avalanche”

You sometimes get the impression that the only person in Ottawa who actually pays attention to online privacy issues is Michael Geist:

“2017 Freedom of Expression Awards” by Elina Kansikas for Index on Censorship https://flic.kr/p/Uvmaie (CC BY-SA 2.0)

After years of battles over Bills C-11 and C-18, few Canadians will have the appetite for yet another troubling Internet bill. But given a bill that envisions government-backed censorship, mandates age verification to use search engines or social media sites, and creates a framework for court-ordered website blocking, there is a need to pay attention. Bill S-210, or the Protecting Young Persons from Exposure to Pornography Act, was passed by the Senate in April after Senators were reluctant to reject a bill framed as protecting children from online harm. The same scenario appears to be playing out in the House of Commons, where yesterday a majority of the House voted for the bill at second reading, sending it to the Public Safety committee for review. The bill, which is the brainchild of Senator Julie Miville-Dechêne, is not a government bill. In fact, government ministers voted against it. Instead, the bill is backed by the Conservatives, Bloc and NDP with a smattering of votes from backbench Liberal MPs. Canadians can be forgiven for being confused that after months of championing Internet freedoms, raising fears of censorship, and expressing concern about CRTC overregulation of the Internet, Conservative MPs were quick to call out those who opposed the bill (the House sponsor is Conservative MP Karen Vecchio).

I appeared before the Senate committee that studied the bill in February 2022, where I argued that “by bringing together website blocking, face recognition technologies, and stunning overbreadth that would capture numerous mainstream services, the bill isn’t just a slippery slope, it is an avalanche”. As I did then, I should preface criticism of the bill by making it clear that underage access to inappropriate content is indeed a legitimate concern. I think the best way to deal with the issue includes education, digital skills, and parental oversight of Internet use including the use of personal filters or blocking tools if desired. Moreover, if there are Canadian-based sites that are violating the law in terms of the content they host, they should absolutely face investigation and potential charges.

However, Bill S-210 goes well beyond personal choices to limit underage access to sexually explicit material on Canadian sites. Instead, it envisions government-enforced global website liability for failure to block underage access, backed by website blocking and mandated age verification systems that are likely to include face recognition technologies. The government establishes this regulatory framework and is likely to task the CRTC with providing the necessary administration. While there are surely good intentions with the bill, the risks and potential harms it poses are significant.

The basic framework of Bill S-210 is that it creates an offence for any organization making available sexually explicit material to anyone under the age of 18 for commercial purposes. The penalty for doing so is $250,000 for the first offence and up to $500,000 for any subsequent offences. Organizations (broadly defined under the Criminal Code) can rely on three potential defences:

  1. The organization instituted a “prescribed age-verification method” to limit access. It would be up to the government to determine what methods qualify with due regard for reliability and privacy. There is a major global business of vendors that sell these technologies and who are vocal proponents of this kind of legislation.
  2. The organization can make the case that there is “legitimate purpose related to science, medicine, education or the arts”.
  3. The organization took steps required to limit access after having received a notification from the enforcement agency (likely the CRTC).

The enforcement of the bill is left to the designated regulatory agency, which can issue notifications of violations to websites and services. Those notices can include the steps the agency wants followed to bring the site into compliance. This literally means the government via its regulatory agency will dictate to sites how they must interact with users to ensure no underage access. If the site fails to act as instructed within 20 days, the regulator can apply for a court order mandating that Canadian ISPs block the site from their subscribers. The regulator would be required to identify which ISPs are subject to the blocking order.
