Michael Geist thinks a substantial part of the Online Harms Act should be removed:
Having spent virtually the entire day yesterday talking with media and colleagues about Bill C-63, one thing has become increasingly clear: the Criminal Code and Human Rights Act provisions found in the Online Harms Act should be removed. In my initial post on the bill, I identified the provisions as one of three red flags, warning that they “feature penalties that go as high as life in prison and open the door to a tidal wave of hate speech related complaints”. There is no obvious need or rationale for penalties of life in prison for offences motivated by hatred, nor the need to weaponize human rights complaints by reviving Human Rights Act provisions on communication of hate speech. As more Canadians review the bill, there is a real risk that these provisions will overwhelm the Online Harms Act and become a primary area of focus despite not being central to the law’s core objective of mitigating harms on Internet platforms.
Indeed, these concerns are already attracting media coverage and were raised yesterday in columns and commentary from Andrew Coyne and Professor Richard Moon, who I think rightly describes the core provisions of the Online Harms Act as “sensible and workable” but notes that these other provisions are troubling. Bill C-63 is effectively four bills in one: (1) the Online Harms Act, which forms the bulk of the bill and is focused on the duties of Internet platforms as they respond to seven identified harms, (2) the expansion of mandatory child pornography reporting requirements to include those platforms, (3) the Criminal Code provisions, which open the door to life in prison for committing offences that are motivated by hatred of certain groups, and (4) the changes to the Canadian Human Rights Act, which restore Section 13, making communication of hate speech through the Internet a discriminatory practice. The difference between the first two and the latter two is obvious: the first two are focused on the obligations of Internet platforms in addressing online harms, while the latter two have nothing directly to do with Internet platforms at all.
The Criminal Code and Human Rights Act changes originate in Bill C-36, which was introduced in 2021 on the very last sitting day of the Parliamentary session. The bill died on the order paper with an election call several weeks later and did not form a core part of either the online harms consultation or the 2022 expert panel on online harms. These provisions simply don’t fit within a legislative initiative that is premised on promoting online safety by ensuring that social media services are transparent and accountable with respect to online harms. Further, both raise legitimate concerns regarding criminal penalties and misuse of the human rights complaint system.
At the National Post, Carson Jerema points out that under the Online Harms Act, the truth is no defence:
As much as the Liberals want everyone to believe that their proposed online harms act is focused almost exclusively on protecting children from predators, and that, as Justice Minister Arif Virani said, “It does not undermine freedom of speech,” that simply isn’t true. While the legislation, tabled Monday, could have been much worse — it mercifully avoids regulating “misinformation” — it opens up new avenues to censor political speech.
Under the bill, condemning the Hamas massacre of 1,200 people on Oct. 7 could, under some circumstances, be considered “hate speech”, and therefore subject to a human rights complaint with up to $50,000 in penalties. As part of the new rules designed to protect Canadians from “online harms”, the bill would reinstate Section 13 of the Canadian Human Rights Act, the hate speech provision repealed under the Harper government.
The new version is more tightly defined than the original, but contains the same fatal flaws, specifically that truth is no defence and that what counts as hate speech remains highly subjective.
Under the new Section 13: “it is a discriminatory practice to communicate or cause to be communicated hate speech by means of the Internet or any other means of telecommunication in a context in which the hate speech is likely to foment detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination”.
It is distressingly easy to imagine scenarios where everyday political speech finds itself under the purview of the Canadian Human Rights Commission. Criticizing Hamas and the murderous ideology that motivates it could, to some, be seen as “likely to foment detestation or vilification” against a group, especially if the condemnation of Hamas notes that Palestinians generally support the terrorist group or that Hamas is driven by religious fanaticism.
Dan Knight calls it “the sequel no one asked for”:
Morning, my fellow Canadians, and let’s break into the Liberals’ latest sequel: Bill C-63, the follow-up to its failed predecessor, Bill C-36, and a sequel nobody asked for in the saga of online hate speech legislation. We’re witnessing a government’s second attempt to police what you can say online.
Now, the Liberal government in Canada initially put forward Bill C-36. This bill aimed to tackle extreme forms of hate speech online. It sought to bring back a version of a section that was repealed from the Canadian Human Rights Act in 2013. Why was it repealed, you might ask? Because critics argued it violated free speech rights. But here we are, years later, with the Liberals trying to reintroduce similar measures under the guise of combating hate speech. Under the proposed changes, folks could be fined up to $20,000 if found guilty of hate speech that identifies a victim. But here’s the kicker: the operators of social media platforms, the big tech giants, are initially left out of the equation. Instead, the focus is on individuals and website operators. Now, the government says it plans to hold consultations over how to make these social media platforms more accountable. But the details are hazy, and the timeline is, well, as clear as mud.
The justice minister of Canada has framed these amendments as a way to protect the vulnerable and hold individuals accountable for spreading hatred online. But let’s be clear: there’s a thin line between protecting individuals and infringing upon free speech. And that line is looking blurrier by the day in Canada. Critics, including the Opposition Conservatives, have voiced concerns that these measures could curb freedom of speech and be difficult to enforce. They argue that the government’s efforts might not just be about protecting citizens but could veer into controlling what can and cannot be said online. And when the government starts deciding what constitutes “hate speech”, you have to start wondering: Who gets to draw that line? And based on what standards?
And, just when you thought it couldn’t get any more Orwellian, enter the pièce de résistance: the Digital Safety Commission of Canada. Because, clearly, what’s missing in the fight against “hate speech” is another layer of bureaucracy, right? Another set of initials to add to the alphabet soup of governmental oversight. So, here’s the deal: this newly minted commission, with its CEO and officers — oh, you better believe there will be officers — is tasked with overseeing the online speech of millions. And let me tell you, nothing says “independent” like a government-appointed body policing what you can and cannot say on the internet. I can just imagine the job postings: Now Hiring: Online Expression Regulators, proficiency in silencing dissent highly valued.