Why Canada must overhaul its approach to regulating online harms

OPINION: This country has long been seen as a champion of internet freedom — a proposed law would jeopardize that reputation
By Yuan Stevens - Published on Oct 25, 2021
Research suggests automated removal decisions by platforms are biased against marginalized and racialized communities. (Chinnapong/iStock)


In the wake of the leaks by Facebook whistleblower Frances Haugen, at least one thing remains clear: social-media companies cannot be left to their own devices when it comes to addressing harmful content online.

But the path Canada is currently taking to regulate “online harms” has been decried by global experts — such as the Global Network Initiative, Ranking Digital Rights, internet scholar Daphne Keller, legal scholar Michael Geist, and others — as among the worst in the world.

Why was this law proposed in Canada, and why now? Immediately after the storming of the U.S. Capitol on January 6, Justin Trudeau’s Liberal government began to make good on an election promise from 2019 to introduce a law modelled after the German Network Enforcement Act — commonly known as NetzDG.

Despite Canada’s longstanding role as a champion of human rights and internet freedom, the proposed law has numerous flaws that call the country’s reputation into question.

Like the German law, which has provided a blueprint for online censorship by authoritarian regimes, the Canadian law would have 24-hour content-blocking requirements for illegal content.


But the law would go much further than Germany’s NetzDG, and not in a good way. NetzDG requires removal of “manifestly unlawful” content within 24 hours but gives platforms seven days to assess content that falls in legally grey areas. There is no nuance like this in Canada’s proposed blocking requirements, and that’s a problem.

Canada’s requirement is bound to lead to the over-removal and censorship of legitimate speech, especially given that companies can face massive fines of up to 5 per cent of gross global revenues or $25 million under the proposed law. There is also mounting evidence that automated removal decisions by platforms are biased against marginalized and racialized communities, causing further harms to the very people that this law aims to protect.

The proposed law could well require websites and social-media companies to proactively monitor and filter five types of content, ranging from “terrorist” content to intimate images shared without consent. It would also force websites to disclose personally identifying information to law-enforcement and intelligence agencies.

Entire websites could be blocked in Canada, and that has enormous implications for the rights to free expression and access to information in this country and beyond.

Requiring websites and social-media platforms to proactively monitor content and feed data on their users to the police is tantamount to pre-publication censorship, according to David Kaye, former special rapporteur on the promotion and protection of the right to freedom of opinion and expression.

It also effectively transforms online service providers into an investigative tool and “suspicion database” for law enforcement.

When combined, these intrusive obligations pose an unacceptable risk to the privacy of Canadians; they have no place in the laws of a free and democratic society.

The University of Ottawa’s Canadian Internet Policy and Public Interest Clinic and many other non-governmental organizations, ranging from Citizen Lab to the Internet Society of Canada and the Canadian Civil Liberties Association, have all filed comments describing the problems with the law.

What happens in Canada won’t stay in Canada. The landmark ruling in Google Inc. v. Equustek Solutions Inc., for example, enabled worldwide online takedowns and spawned international imitators; other countries will seize on Canada’s example and pass similar laws that advance their own governmental interests.


Canada needs a new approach to regulating online harms — one that respects human rights. We must change course before authoritarian regimes replicate Canada’s approach to justify intrusive surveillance, censorship, and other human-rights abuses.

A fundamental problem with the Canadian online-harms legislation is that it deals with the most controversial aspect of internet governance — the issue of online-speech regulation — in isolation.

Unlike its global peers in the United States and the European Union, Canada has not fostered conversation about the bigger picture of big-tech regulation.

Canada hasn’t reckoned with the business models of behemoth social-media platforms premised on surveillance capitalism or with the problems of anti-competitive actions by technology companies.

The government hasn’t devoted a fraction of the political energy it is spending on online harms to reforming Canada’s outdated online-privacy laws.

After Trudeau’s Liberal government called for a snap election, his party promised to introduce legislation to regulate online harms within 100 days.

Some promises are best not kept. This is one of them.

The digital-rights community needs to hold Canada to account and urge the country to slow down, think things through, and come up with a model of internet regulation that should be emulated — not avoided — around the world.

This article was originally published on The Conversation. Read the original article.