The CEOs of Facebook, Google and Twitter are once again set to appear before the US Senate for a new round of congressional hearings, this time to discuss, debate and answer questions on whether Section 230, the law that lets technology companies classify themselves as intermediaries (and therefore be exempt from legal liability for their users’ posts), should be repealed entirely. Ahead of the hearing, the CEOs have submitted their opening statements, giving lawmakers a clear preview of what to expect from the heads of the tech world.
Zuckerberg’s legal hypothesis
Among the most notable statements ahead of the hearing is that of Mark Zuckerberg, founder and CEO of Facebook, who has given a complete lowdown on what he believes is the ideal way forward instead of repealing Section 230. Zuckerberg highlighted the self-regulatory steps that Facebook already takes, including its recent transparency report efforts, the Oversight Board, its fact-checking work on sensitive topics such as Covid-19 and election misinformation campaigns, and its efforts to protect users in sensitive situations, such as classifying dangerous individuals and groups and identifying repeat offenders.
In the final part of his statement, Zuckerberg highlights the important role Section 230 has played in helping the US technology industry grow into the global leaders that companies such as Google, Twitter and Facebook themselves are today. Arguing that Section 230 should be updated rather than scrapped, Zuckerberg states, “Congress should consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content. Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Platforms should not be held liable if a particular piece of content evades its detection — that would be impractical for platforms with billions of posts per day — but they should be required to have adequate systems in place to address unlawful content. Definitions of an adequate system could be proportionate to platform size and set by a third-party. That body should work to ensure that the practices are fair and clear for companies to understand and implement, and that best practices don’t include unrelated issues like encryption or privacy changes that deserve a full debate in their own right.”
Zuckerberg further added, “Congress should act to bring more transparency, accountability, and oversight to the processes by which companies make and enforce their rules about content that is harmful but legal. While this approach would not provide a clear answer to where to draw the line on difficult questions of harmful content, it would improve trust in and accountability of the systems and address concerns about the opacity of process and decision-making within companies.”
Twitter, Google echo similar thoughts
Twitter CEO Jack Dorsey was less prescriptive about the course Section 230 should take. Instead, he chose to focus on the actions Twitter has already taken towards accountability and self-regulation, including giving users an algorithmic choice over how much of their data they are willing to hand over to the company. He also stressed the importance of projects such as Birdwatch, which brings together a community of gatekeepers willing to take responsibility for the content on their platform. Finally, he described Twitter’s investment in Bluesky, which is presently working towards building a decentralised model for social media while also establishing a functional, profitable business model.
Pichai, interestingly, picked one of Google’s most controversial recent topics, journalism and Google’s relationship with news media, to explain why keeping Section 230 is absolutely essential. Among the figures Pichai cited were over 24 billion clicks on articles via Google Search and News; a $1 billion investment in News Showcase for select media partners to earn over three years (essentially setting off a scramble among publishers for what is, by the company’s standards, not massive money); partnerships with over 500 publications globally and of all scales; and further numbers in support of local news publications and fact checkers. None of this, Pichai said, would have been possible without the latitude that Section 230 allows.
He concluded by saying, “Without Section 230, platforms would either over-filter content or not be able to filter content at all. In the fight against misinformation, Section 230 allows companies to take decisive action on harmful misinformation and keep up with bad actors who work hard to circumvent their policies.” In essence, Pichai’s warning was that repealing Section 230 to force tech majors into taking greater responsibility might, in fact, backfire on lawmakers.
The hearing is set to begin at 9:30PM tonight, March 25, and more developments are expected once it gets underway.