
Sue Gardner is the former executive director of the Wikimedia Foundation, the San Francisco-based nonprofit that runs Wikipedia, and the former head of cbc.ca, the Canadian Broadcasting Corporation’s English-language website. She is on the boards of Privacy International and the Canadian Anti-Hate Network.

Heritage Minister Steven Guilbeault has been struggling for weeks to defend Bill C-10, which expands the Broadcasting Act to cover not just conventional broadcasters but also digital streaming services such as Netflix and Disney+. For more than 50 years, the Canadian Radio-television and Telecommunications Commission (CRTC) has been empowered to set Canadian content requirements for radio and TV broadcasters; C-10 would give it the same authority in the digital realm.

That’s not the controversial part. The problem arose when the Heritage committee reviewing the bill removed a clause that would have exempted user-generated content from CRTC oversight. The government has said it has no interest in regulating such content through C-10, but even so, the change prompted concern that the CRTC could end up interfering with Canadians’ freedom of expression online.

It’s a sideshow, and I wish it would stop. Mr. Guilbeault and his team are on a mission to regulate Big Tech. It’s important work, it’s overdue and it needs to move forward.

It’s been three years since The Guardian’s Carole Cadwalladr and Emma Graham-Harrison broke the story that analytics firm Cambridge Analytica had “harvested” the personal information of 50 million people from Facebook and then used it to target American voters with personalized political ads. It was a complicated story and practically nobody fully understood it, but people instinctively bristled at the idea of their information being collected and used without their consent.

That was the moment when the public started to fall out of love with Big Tech, and it’s been a downhill slide ever since.

Over the past three years, the government of Sri Lanka has criticized Facebook for fuelling anti-Muslim hatred in that country, and the United Nations has blamed it for the same thing in Myanmar, saying the social network played “a determining role” in enabling the Rohingya genocide. In 2019, Facebook was criticized for enabling a white supremacist to livestream his massacre of 51 people in two Christchurch mosques, and a year later, a New Zealand Royal Commission found the shooter had been radicalized by YouTube. Last December, The New York Times accused Montreal-based Pornhub of hosting videos of child rape and revenge porn, leading the company to remove almost two-thirds of its videos practically overnight, and causing Visa and Mastercard to cut ties. In January, Facebook, Twitter, Gab and Parler were blamed for facilitating violence after extremists in the U.S. used those platforms to organize the Jan. 6 riot and attack on the Capitol.

It’s an unhappy litany, and it’s prompted a major shift in public sentiment. Across the political spectrum, in the U.S., Canada and other liberal democracies, increasing numbers of people want tech companies reined in.

Governments are listening. In 2017, Germany passed its Network Enforcement Act (NetzDG), aimed at combatting hate speech on social networks. France passed a similar law in 2020. In December, Australia introduced legislation requiring Google and Facebook to pay news organizations, to make up for the advertising revenue the journalism industry has lost to tech giants. The British government is preparing legislation that will impose a “duty of care” on platforms, aimed at forcing them to proactively take responsibility for the societal harms they create.

The Canadian government, too, is poised for action. First came the revision of the Broadcasting Act and a new privacy bill, both tabled in November. Coming soon, we expect legislation directly tackling online harms, including the increased spread of child pornography, revenge porn, hate speech, incitement to violence and incitement to terrorism.

Governments – and the people they represent – are correct to want interventions when the market is so clearly failing to provide acceptable outcomes. But the interventions, as they’re currently being constructed, won’t be sufficient. They risk targeting symptoms rather than the disease itself, and shying away from complex challenges to focus instead on what’s clear-cut and familiar.

The symptoms here are pieces of individual content: a threatening tweet, a white supremacist YouTube video, an anti-vax post on Facebook. The disease is the business model.

The business model of the internet is fully mature and highly profitable. Companies track us, collect and analyze our personal information, then use it to enable others to micro-target us with persuasive messaging. What they’re selling is our attention, which means they need us glued to our devices. Sometimes they achieve that by meeting real user needs, but too often it’s by serving up and amplifying material that provokes strong emotional responses and is destructive to society. Few people go online looking for conspiracy theories, vitriol and abuse. Too often, we get them anyway.

This business model, which incentivizes platforms to not just host harmful material but to actively amplify it, leads to a myriad of harms. Most fall in the realm of what regulators call “lawful but awful.” That includes most technology-facilitated harassment, abuse and bullying. It includes the negative emotions – envy, stress, anxiety, loneliness and unhappiness – that people report feeling as a result of their social media use. It includes the wasted hours spent pointlessly scrolling and clicking. And perhaps most consequentially, it includes social divisiveness, political polarization, a truly stunning increase in belief in conspiracy theories and a rise in extremism of all sorts.

These are the negative externalities created by the tech industry: costs absorbed not by the companies that create them, but by individuals – people driven out of public discourse by harassment, people drowning in spam and online scams, kids bullied and pushed toward self-harm – and by society itself, in the form of politically motivated violence and vaccine hesitancy that slows the pandemic recovery.

This poses a difficult policy problem. I respect the regulatory and legislative efforts we’ve been seeing, but I believe they risk making two fundamental mistakes.

First, they overfocus on what’s already illegal.

Take, for example, child pornography. The internet has made it easier to find than ever before. But what to do about that isn’t, and shouldn’t be, a real policy question, because the answer is clear: distributing and viewing such material is illegal, should remain illegal, and the law needs to be enforced. The open question is not what we want; it’s purely how to achieve it.

Our legislators shouldn’t need to spend significant time on matters where the current law is clear and correct. We need them to engage in areas that are fuzzier and less obvious.

This is the realm of disinformation and misinformation, conspiracy theories, racism, misogyny and threats. And here, much of the recent legislative effort has focused on individual pieces of content. That’s a mistake. It’s an approach that isn’t scalable, and so it won’t touch the bulk of harmful material. When it comes to lawful-but-awful content, solutions lie in influencing the business model itself.

I think I understand why journalists and policy makers are so drawn to the content level. Content is visible: We can see it and screenshot it and share it with others. It’s how most of us experience the internet; we are not poking around in the code itself. And we’re used to debating the merits of individual pieces of content – a holdover, probably, from a world in which not that much stuff got published. That’s not the world we live in anymore.

If I were a tech executive, I’d be thrilled by this approach – precisely because it doesn’t scale, and because energy spent relitigating longstanding holy wars (what is hate speech? is censorship ever okay?) is energy not spent addressing the central issue.

A new business model is producing new harms. The way to curb those harms is to compel the companies producing them to bear their costs.

What might that look like? Governments need to mandate significantly higher levels of transparency from tech firms, to allow both experts and the general public to better understand company practices and their effects. We need awareness and education campaigns aimed at boosting digital literacy. We do need to curb harmful content. We need to update laws and regulations related to advertising so they fit better with how persuasive messaging actually works today. And perhaps most fundamentally, we need to limit companies’ ability to collect, store and analyze personal information, because that is the root of many of the harms we’re seeing.

Canada has an opportunity here. We’re a smart, well-educated country. We understand the United States, where Big Tech originates, and we have an appropriate critical distance from it. There is a lot of digital expertise here. And Mr. Guilbeault, despite his recent struggles, seems to understand the internet; his comparison of Big Tech to environmental polluters is entirely apt. His 2019 book, The Good, The Bad, and the Ugly, was ostensibly about AI but was in actuality a broad survey of the societal implications of new technologies. It’s rare for a senior government official to understand the digital landscape as well as Mr. Guilbeault’s book suggests he does.

Thus far, his efforts have been criticized by pretty much everybody. I think that’s unavoidable. These problems are complex, the solutions aren’t obvious, there are lots of entrenched interests at play and Canada is going to make mistakes just like every other country. What’s important is to move forward with clarity of intent and precision in execution, in partnership with other countries, and to refine our approach as we go.

Does our federal government have a plan? It might.

If what Ottawa is rolling out now is the sum total of its efforts, it’s wildly insufficient. But it’s possible the government is laying the groundwork for legislation designed to go to the heart of the problem: the business model. Let’s hope so. There is an opportunity for Canada to lead here, and we should take it.
