
Facebook is shaping public discourse. We need to understand how

Alex Abdo

Companies like Facebook and Twitter should lift restrictions impeding digital journalism and research on the platforms

At a hearing last week in Washington, senators and executives from Facebook and Twitter wrestled with the existential question of our time: how do we prevent Silicon Valley's signature creation, social media, from tearing the country apart? Few answers emerged, but several senators expressed an obvious truth: the public urgently needs to better understand how these platforms are shaping public discourse.

Fortunately, there is one vital step that Facebook and Twitter could take today to improve public understanding: lifting the restrictions that impede digital journalism and research focused on the platforms.

Social media platforms are transforming public discourse in ways we do not understand. Take Facebook, for example. Two billion people around the world and 200 million in the United States use Facebook to get their news, debate policy, join political movements and connect with friends and family. The platform has become the substrate of our social interactions, the means by which human relationships are formed and maintained. Facebook's platform disseminates our messages, but it also determines whether their signals will be amplified, suppressed or distorted. Facebook is not just a carrier of social media, but an entirely new social medium.

We need to understand how this new social medium works: how Facebook influences the relationships between its users and distorts the flow of information among them.

But figuring out Facebook isn't easy. Facebook's alluring user interface obscures an array of ever-changing algorithms that determine which information you see and the order in which you see it. The algorithms are opaque even to Facebook, because they rely on a form of computation called machine learning, in which the algorithms train themselves to achieve goals set by their human programmers. In the case of Facebook, the machines are generally programmed to maximize user engagement: to show you whatever will make you click one more time, stay a little longer and come back for more.
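To make the idea concrete, here is a minimal, hypothetical sketch in Python of a feed ranker that orders posts by a model's predicted engagement. Every name and number in it is invented for illustration; Facebook's actual ranking systems are proprietary, far more elaborate, and not public.

    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        predicted_click: float   # a model's estimate that this user clicks
        predicted_share: float   # a model's estimate that this user shares

    def engagement_score(post: Post) -> float:
        # A toy objective (weights invented): value shares above clicks,
        # since shares recruit further engagement from other users.
        return 0.4 * post.predicted_click + 0.6 * post.predicted_share

    def rank_feed(posts: list[Post]) -> list[Post]:
        # Show the highest-scoring posts first, whatever their content.
        return sorted(posts, key=engagement_score, reverse=True)

    feed = rank_feed([
        Post("accurate-report", 0.10, 0.05),
        Post("salacious-conspiracy", 0.35, 0.40),
    ])
    print([p.post_id for p in feed])  # the conspiracy post ranks first

The point of the sketch is that nothing in the objective refers to accuracy or civility; if outrage predicts engagement, outrage rises to the top as a side effect rather than by design.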

This kind of machine learning can produce unintended effects, with worrying consequences for public discourse. In the past 18 months, many have wondered whether Facebook's algorithms have deepened political divisions and facilitated the spread of misinformation and propaganda. Do Facebook's algorithms show some ads to progressives and others to conservatives? Do they place a salacious conspiracy theory about a political candidate above accurate reporting on the incumbent's latest policy decisions? In striving to maximize user engagement, do Facebook's algorithms maximize user outrage?

Some of these questions can be studied by interviewing Facebook's employees or others with firsthand knowledge, by visually inspecting the platform itself, or by using data that Facebook makes available to software developers through its application programming interfaces. (Facebook's APIs allow the automated collection of a limited set of information from its platform.) In fact, reporting based on these sources is the reason we know about some of the ways in which Facebook's platform has been used to facilitate privacy violations and other abuses. It's how we know about the true reach of the Russian disinformation campaign on Facebook; about Facebook's compilation and use of shadow profile data to recommend new friends to its users in invasive ways; and about Cambridge Analytica's exploitation of Facebook user data.
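As a rough illustration of what API-based collection looks like, the following Python sketch requests recent posts from a public page through Facebook's Graph API. The page ID and access token are placeholders, and the exact fields, API version and permission requirements change over time; the point is only that the API exposes a limited, Facebook-chosen slice of the platform's data.

    import requests

    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"   # placeholder; issued to a registered developer app
    PAGE_ID = "SomePublicPage"           # hypothetical public page

    url = f"https://graph.facebook.com/v12.0/{PAGE_ID}/posts"
    params = {
        "fields": "message,created_time",  # only the fields the API chooses to expose
        "access_token": ACCESS_TOKEN,
    }

    resp = requests.get(url, params=params, timeout=10)
    resp.raise_for_status()
    for post in resp.json().get("data", []):
        print(post.get("created_time"), (post.get("message") or "")[:80])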

But these channels go only so far. Studying machines often requires the assistance of machines to perform larger-scale statistical analysis and testing. And so the focus of many journalists and researchers who are working to illuminate Facebook's influence on society has been on applying digital tools of study to Facebook's digital platform.

There are two tools in particular that many journalists and researchers would like to use to study Facebook's platform: the automated collection of public data and the use of temporary research accounts. The first would enable journalists and researchers to collect statistically significant samples of what Facebook's users see or post publicly on the platform, allowing them to report on patterns and trends. The second would enable journalists and researchers to test Facebook's algorithmic responses to different inputs, to explore and better understand the correlation between what Facebook knows about its users and what it shows them.
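Here is a hypothetical sketch of how the second tool might be used: compare the feeds shown to two temporary research accounts whose profiles differ in a single attribute, and measure how much the platform's output diverges. The feed data below is mocked for illustration; in practice it would come from instrumented test accounts.

    def feed_overlap(feed_a: list[str], feed_b: list[str]) -> float:
        """Jaccard similarity between the sets of items two accounts saw."""
        a, b = set(feed_a), set(feed_b)
        return len(a & b) / len(a | b) if a | b else 1.0

    # Mock observations: item IDs shown to each research account.
    account_one = ["ad-1", "story-3", "story-7", "ad-9"]
    account_two = ["ad-2", "story-3", "story-8", "ad-9"]

    print(f"feed overlap: {feed_overlap(account_one, account_two):.2f}")
    # A low overlap would suggest the ranking responds strongly to the
    # single attribute distinguishing the two accounts.

Run across many account pairs and many attributes, this kind of comparison is what would let researchers report on patterns rather than anecdotes.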

Facebook's terms of service, however, ban digital journalists and researchers from using these basic tools to investigate the ways in which the platform is shaping our society.

While digital journalists and researchers already use these tools in other contexts, the prohibitions in Facebook's terms of service mean that they can't use them to study Facebook, at least not without risking serious sanctions. Journalists and researchers who violate the site's terms of service risk multiple kinds of legal liability. Most directly, they risk a lawsuit by Facebook for breach of contract. They also risk civil and criminal liability under the Computer Fraud and Abuse Act, a law enacted in 1986 to prohibit hacking, but which has been interpreted by both the justice department and Facebook to prohibit violations of a website's terms of service.

The mere threat of liability has a significant chilling effect on the willingness of some journalists and researchers to study Facebook's platform. Some have forgone or curtailed investigations of Facebook's platform for fear of legal action. And some have been told by Facebook to discontinue digital investigations that Facebook claimed violated its terms of service. (With colleagues at the Knight First Amendment Institute at Columbia University, I represent some of these journalists and researchers.)

If Facebook is committed to becoming more transparent, it should amend its terms of service to permit this important journalism and research.

It can do this by creating a safe harbor in its terms of service for the use of these digital tools in support of journalism and research that serves the public interest. Any safe harbor for journalistic and research projects should include limitations to protect the privacy of Facebook's users and the integrity of Facebook's platform. But, as the Knight Institute explained in a letter sent to Facebook last month, it is possible to prevent Cambridge Analytica-style abuse of Facebook's platform while also creating more space for the journalism and research that we urgently need.

Today, Facebook and Twitter are arguably among the most powerful of private institutions, shaping society and public discourse in ways not even the companies' executives understand. It is time for the companies to stop obstructing the digital journalism and research that would help the public understand just how their platforms are affecting us all.

  • Alex Abdo is a senior staff attorney at the Knight First Amendment Institute at Columbia University and a former senior staff attorney at the ACLU
