With considerable attention on a rise in hate and harassment online, questions are being raised about where the federal government's promises stand on advancing legislative and regulatory changes aimed at tamping down harmful content.
After receiving heaps of largely critical feedback and going back to the drawing board with the help of experts over the last several months, the government is still contemplating how to approach the complex "online safety" legislation in a way that responds to critics' concerns while addressing the state of discourse online, sources close to the file tell CTVNews.ca.
The pledge originated with the intention of forcing "online communication service providers," such as Facebook, YouTube, Twitter, Instagram, and TikTok, to be more accountable and transparent in handling five kinds of harmful content on their platforms: hate speech, child exploitation, the sharing of non-consensual images, incitements to violence, and terrorism.
The Liberals' intention was to ensure the kinds of behaviours that are illegal in person are also illegal online, with a focus on public content, not private communication.
"Online platforms are increasingly central to participation in democratic, cultural and public life. However, such platforms can also be used to threaten and intimidate Canadians and to promote views that target communities, put people鈥檚 safety at risk, and undermine Canada鈥檚 social cohesion or democracy," reads for this initiative.
"Now, more than ever, online services must be held responsible for addressing harmful content on their platforms and creating a safe online space that protects all Canadians," it continues.
Beyond seeking expert advice, Canadian Heritage Minister Pablo Rodriguez and top officials from his department have travelled across the country this summer, meeting with stakeholders and representatives from minority groups.
Public hearings intended to capture the concerns of Canadians, particularly those in marginalized communities, are set to continue into the fall, sources said, adding that this continuing outreach is meant to help inform the scope of the legislation.
PAST PROMISED DEADLINE, FOCUS ON GETTING IT 'RIGHT'
The Liberals have already blown past their promised deadline to move on a "balanced and targeted" online harms bill within the first 100 days of their post-2021 election mandate.
Amid this heightened attention, and with considerable work left before the bill is complete, CTVNews.ca asked when Canadians might expect to see a bill introduced in Parliament. In an emailed response, Rodriguez's office wouldn't commit to a timeframe, saying the government's priority is getting the legislation right.
"The minister is now engaging directly with Canadians across the country on the insights provided by the experts," said the heritage minister's press secretary Laura Scaffidi. "Canadians should be able to express themselves freely and openly without fear of harm online鈥 We鈥檙e committed to getting this right and to engaging Canadians in a thorough, open, and transparent manner every step of the way."
Despite the reluctance to put a timeline on seeing the bill tabled, sources told CTVNews.ca that it is unlikely to come this fall, with early 2023 appearing to be the most realistic timeframe.
Sources said that the government remains committed to putting forward legislation that will give Canadians more tools to address harms online, but there are some factors adding pressure to Rodriguez's desire to "get it right."
Mindful of the pushback from opposition parties and some platforms during the government's push to pass Broadcasting Act updates, including accusations of attacking free speech, sources said the Liberals are bracing for an even bigger fight over this bill.
Given this, some consideration has been made about waiting until there is more space on the legislative agenda, allowing the Liberals to dedicate greater attention to this bill once it is tabled. Rodriguez currently has two pieces of outstanding legislation: Bill C-11, the revived Broadcasting Act bill, is before the Senate, and Bill C-18, regarding online news remuneration, is before a House committee.
Speaking about recent examples of politicians and journalists facing threats, Public Safety Minister Marco Mendicino said on Monday that in addition to engaging law enforcement, "Minister Rodriguez is very eager to bring forward his legislation, so that legislatively the tools are there as well."
However, sources CTVNews.ca spoke with cautioned that while there are elements of the legislation that will likely help, particularly when it comes to platforms taking more accountability for content posted, it is not going to be the "panacea" for rectifying increasingly toxic online discourse. Rather, the coming legislation is being seen as one piece of a bigger puzzle.
For example, one element of the government's initial proposal that may be unlikely to change, given Charter and privacy considerations, is its focus on public content rather than private communications such as text messages or emails. Hate-filled and harassing emails have been a central focus of what some journalism advocacy groups are calling a co-ordinated campaign.
Sources said there's some hope that recent attention, spurred by more federal political figures from across the political spectrum speaking out, will help galvanize support and allow for a serious conversation about tackling the issue.
Ahead of the government presenting the online harms legislation, here's what you need to know about what's transpired on this file so far, and how it may ultimately shape the bill.
CONCERNS WITH WHAT WAS INITIALLY PROPOSED
Two weeks before Prime Minister Justin Trudeau called the 2021 federal election, the government presented a "technical discussion paper" and rolled out a summer-long consultation process on a proposed online harms legislative framework, promising that responses would inform the new laws and regulations.
That proposal included implementing a 24-hour takedown requirement for content deemed harmful, as well as creating federal "last resort" powers to block online platforms that repeatedly refuse to take down harmful content.
The Liberals' initial proposal also floated:
- Compelling platforms to provide data on their algorithms and other systems that scour for and flag potentially harmful content and provide a rationale for when action is taken on flagged posts;
- Obligations for sites to preserve content and identifying information for potential future legal action, as well as new options to alert authorities to potentially illegal content and content of national security concern when an imminent risk of harm is suspected;
- Outlining potential new ways for CSIS and the RCMP to play a role when it comes to combating online threats to national security and child exploitation content; and
- Installing a new system for Canadians to appeal platforms' decisions around content moderation.
The regime proposed a series of severe new sanctions for companies deemed to be repeatedly non-compliant, including fines of up to five per cent of the company's annual global revenue or $25 million, whichever is higher.
In order to operate and adjudicate this new system, the government suggested creating a new "Digital Safety Commission of Canada" that would be able to issue binding decisions for platforms to remove harmful content, ordering them to do so when they "get it wrong."
During the 2021 summer feedback period, the government got an earful from stakeholders expressing concerns with then-Canadian heritage minister Steven Guilbeault's proposals, as well as what was described as a "massively inadequate" consultation process.
From concerns the proposal didn't strike an appropriate balance between addressing online harms and safeguarding freedom of expression, to questions about why the range of harms was being treated as equivalent, experts called for some significant changes.
Facing concerted pressure from stakeholders the government would ideally want onside as it pushes ahead with this conversation, Rodriguez announced plans to go back to the drawing board after he was re-appointed as heritage minister.
The decision to rework the plan was announced in February, alongside the release of a report based on the government's assessment of the feedback from the consultation process.
It concluded that, while the majority of respondents felt there is a need for the government to take action to crack down on harmful content online, given the complexity of the issue, the coming legislation needed to be thoughtful in its approach to guard against "unintended consequences."
This online harms framework is separate from a piece of government legislation tabled at the eleventh hour of the 43rd Parliament.
Called Bill C-36, it focused on amendments to the Criminal Code and the Canadian Human Rights Act to address hate speech and hate propaganda, but after dying when the 2021 election was called, the legislation has not been revisited by the Liberals.
The bill was mentioned in the latest Liberal campaign platform as part of their promise to "more effectively combat online hate," so it remains to be seen whether it could be folded into the coming legislation.
HOW THE COMING PLAN COULD LOOK
In late March, the government tasked experts and specialists in platform governance, content regulation, civil liberties, tech regulation, and national security with helping guide what the bill should and shouldn't include. Among the panellists were stakeholders who were publicly critical of the initial proposal.
Over the course of eight sessions, each focusing on a different theme, from the regulatory powers and law enforcement's role, to freedom of expression, the panel deliberated over the proposal and discussed their concerns.
The panel held its concluding workshop in June, and in early July a high-level summary was published online.
It included some key advice that may help shed light on how the coming approach will differ from the initial outline.
Among the panel's key pieces of advice were:
- That any regulatory regime should put an equal emphasis on managing risk and protecting human rights, with any legislative obligations needing to be flexible and adaptable so as not to become quickly outdated;
- That public education needs to be a "fundamental component" of any framework, suggesting the plan come alongside programs to improve media literacy;
- That "particularly egregious" content like child sexual exploitation material may require its own solution unique from what other forms of harm may require, with some questioning whether not each of the five forms of harm set out in 2021 require tailored approaches;
- That the proposed regulator should be well-resourced and equipped with audit and enforcement powers because it shouldn't be left up to the platforms who need to take responsibility for their role;
- That clear consequences should be set for regulated services that do not fulfill their obligations under the regime, alongside a "content review and appeal process" at the platform level; and
- That, separate from the Digital Safety Commissioner, there should be an independent ombudsperson for victim support who could play a "useful intermediary role between users and the regulator."
It was clear from the summaries of each session that there were some sticky areas of disagreement among panellists as well, including whether the legislation should compel services to remove content.
While some experts said a 24-hour takedown requirement should be avoided, except for instances of content that explicitly calls for violence and child sexual exploitation content, others suggested it would be preferable to "err on the side of caution."
Others raised concerns that content removal could disproportionately affect marginalized groups.
The experts also emphasized that "something must be done about disinformation," as it has grown to become one of the more troubling forms of harmful online behaviour. However, the experts cautioned against defining disinformation in legislation.
"Doing so would put the government in a position to distinguish between what is true and false 鈥 which it simply cannot do," read the summary, instead suggesting the possibility of focusing on the co-ordinated amplification of misinformation through bots and bot networks.