Canada is Bringing in New Legislation to Stop the Spread of Online Hate. Here’s How It Can Work.
What can you expect to see in Canada's new federal online hate legislation?
The federal government is preparing to unveil strong legislation that aims to severely curtail the proliferation of online hate.
What will that look like and how will it protect people? That remains to be seen as there’s no consensus among experts and activists.
Yet everyone agrees that something must be done, and one need only point to a chilling anniversary this past spring to understand why.
A massacre livestreamed on Facebook
In March, New Zealanders marked the second anniversary of the Christchurch terrorist attack.
The individual who committed the mass shootings at two mosques had been consuming online hate since as early as age 14. He went on to livestream the killings in real time on Facebook, where he also posted links to his white supremacist manifesto.
The attacks led to much hand-wringing around the world about the role of online hate in the radicalization of individuals like the killer.
For Canadians, the horrific episode echoed a similar shooting spree at a Quebec City mosque, in which a young man who was likewise radicalized online killed six men and left many others permanently injured and disabled. (The Christchurch killer had even inscribed that individual’s name on his ammunition magazines, underscoring how violent acts can inspire others to commit similar atrocities.)
Now, in the midst of a pandemic, hate continues to fester online, targeting women, racialized folks and other minority communities in Canada, and around the world.
The need to act could not be more pressing. The Canadian government is set to table legislation any day now that aims to curb, even eliminate, its spread.
What’s been done to date internationally
Jacinda Ardern, New Zealand’s prime minister, moved decisively to call out social media platforms and the role of online hate speech in radicalization following the Christchurch massacre. Canada joined in these international efforts.
Yet observers have pointed out that the Christchurch Call to Action, a voluntary pledge, doesn’t go far enough for a variety of reasons, including that it relies on the social media platforms to govern themselves.
“While there is no doubt that these entities have a fundamental role to play in stopping the spread of terrorist and extremist content on the internet, the responsibility of content moderation must not be outsourced entirely to private entities,” points out a paper by the India-based Observer Research Foundation.
What sets #ChristchurchCall apart is its aim to bring together governments and their affiliated agencies, tech platforms, internet companies, and civil society within defined set of goals to prevent use of the internet for terrorism, says @PriyalPandey2: https://t.co/N75q0n68X3 pic.twitter.com/YkGvuqFjrV
— ORF (@orfonline) August 16, 2020
Here’s what Canada says it plans to do
Heritage Minister Steven Guilbeault has promised that new legislation to tackle online hate is on its way.
Guilbeault told the standing committee on Canadian Heritage this past January that there would be new rules around hate speech and a new regulator to oversee a framework, which will include fines against companies for non-compliance.
“Illegal content” would cover five categories: hate speech, terrorist content, content that incites violence, child sexual exploitation content and the non-consensual sharing of intimate content.
Arif Virani, parliamentary secretary to Justice Minister David Lametti, has further said that the legislation will include a new statutory definition of hate that would rely on case law, including the Supreme Court’s affirmation of the 11 “Hallmarks of Hate” as defined in a 2006 Canadian Human Rights Tribunal decision.
The hallmarks describe portrayals of a targeted group that include the following elements or themes:
- The targeted group is presented as a powerful menace to society;
- Perpetrators use news reports and purportedly reputable sources to further negative stereotypes;
- The targeted group is portrayed as preying upon children, the aged, the vulnerable, etc.;
- The targeted group is portrayed as responsible for the world’s problems;
- The targeted group is portrayed as dangerous or violent by nature;
- The targeted group is portrayed as devoid of redeeming qualities and innately evil;
- Perpetrators communicate the idea that banishment, segregation, or eradication of the group is required;
- The group is de-humanized through association with or comparison to animals, vermin, etc.;
- Perpetrators use highly inflammatory language and rhetoric to create a tone of extreme hatred and contempt;
- Perpetrators trivialize and/or celebrate past persecutions or tragedies involving the target group; and,
- Perpetrators call for violent action against the target group.
What about free speech?
In a recent conversation between Michael Geist and David Kaye, the tensions around addressing online hate and freedom of expression were front and centre.
Geist is a law professor at the University of Ottawa and Canada Research Chair in Internet and E-Commerce Law, and Kaye is a former United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression.
“The focus on harms is coming from a place where many governments believe they have a responsibility to protect individuals, communities and democratic processes,” said Kaye on the March 8 episode of the Law Bytes Podcast.
“In a democratic society there is a real clash of interests: on the one hand, dealing with the harms; on the other, the resistance to definitions that are so broadly written that they might put excessive discretion in the hands of the government to clamp down on legitimate speech.”
Kaye, who is also the author of Speech Police: The Global Struggle to Govern the Internet, told Geist that he supports government regulation that is based on transparency and human rights.
Civil society speaks
Several submissions and reports on what to do about online hate have been published and submitted to the federal government in advance of its deliberations and eventual tabling of legislation.
Among these is a report by the Canadian Commission on Democratic Expression, a three-year initiative of the Public Policy Forum. In its first year, a group of commissioners (myself included) heard from a wide range of experts and scholars on online hate, as well as victims and representatives of the social media platforms.
The resulting paper lays out a six-part plan, including a requirement that platforms act responsibly and be held accountable through the creation of a new regulator and a ‘social media council’ that acts as a bridge between citizens, government and Big Tech.
The Canadian Anti-Hate Network (CAHN), where I serve as a board member, also made its views known.
Social media companies continue to avoid responsibility and can’t be trusted to take their own steps to control hate on these platforms.
Here’s what we and 30 other social justice organizations told the government. https://t.co/1UJ0kIO5mf
— Canadian Anti-Hate Network (@antihateca) January 13, 2021
Working with a coalition of 30 organizations, the network made similar recommendations, though it went further, calling for major penalties to be levied against social media platforms that fail to take down hate speech within 24 hours of a complaint or notice, more severe criminal penalties for platform operators that fail to act, and a focus on algorithms and machine learning to automate the detection and removal of hate speech.
The network also recently published a poll that shows widespread support among Canadians of all political stripes for removing hate speech as a way to ensure the freedom of expression of targeted groups. The political pendulum appears to have swung in favour of regulation among supporters of all major political parties.
“The free expression of women, people of colour, and others is more valuable than the hate speech of trolls,” said Evan Balgord, CAHN’s executive director.
In the meantime, various groups and organizations aren’t waiting for government legislation to counter hate. That includes the YWCA, which launched a research and mobilization project called “Block Hate: Building Resilience against Online Hate Speech” last month.
The Canadian Council of Muslim Women (CCMW) is currently running a Digital Anti-Racism Education 2 Project (D.A.R.E. 2) to empower social media users. The Chinese Canadian National Council for Social Justice is also undertaking a research project aimed at the development of an online program that would allow users to identify anti-Asian racist social media posts and remove them from a user’s feed.
How would you feel receiving these messages?
Let’s be clear – online hate is violence.
Bigotry and threats of violence that target people of colour, women, young people, 2SLGTBQ+ folks and marginalized communities must end.
— YWCA Canada (@YWCA_Canada) March 22, 2021
Overwhelming support for regulation
78% of Canadians are concerned about online hate, according to a poll released earlier this year by Abacus Data and the Canadian Race Relations Foundation, with nearly 80% supporting stronger laws.
Mirroring the national picture, a study released by the Mosaic Institute this past February found that 76% of Ontario respondents had witnessed online hate directed against Black, Indigenous, Muslim and Jewish communities.
In other words, the federal government’s next moves will matter to a whole lot of people and address a phenomenon that has troubling, even fatal, real-world impacts.