Internet communications companies are under pressure to tackle disinformation on their networks, even as questions are raised about the shift to private regulatory action, in which public scrutiny is almost entirely absent. In this blogpost, we argue that through their content moderation policies and practices, internet communications companies are acting as definers, judges and enforcers of freedom of expression on their services. Analysis of their responses to the ‘disinfodemic’, in particular the further automation of curation, confirms the urgent need for transparent and accountable content moderation policies.
In the recently published UNESCO/ITU Broadband Commission research report Balancing Act: Countering Digital Disinformation While Respecting Freedom of Expression, we examined how 11 geographically diverse internet communications companies (or online platforms) with large global user bases explicitly or indirectly address the problem of disinformation through content curation or moderation.
As we completed the first round of our analysis in February 2020, the COVID-19 pandemic was gaining in intensity. In the months that followed, internet communications companies reacted in unprecedented ways to limit the ‘disinfodemic’ of false health-related information and to redirect users to authoritative sources. In July 2020, we repeated our exercise for Facebook/Instagram, Google/YouTube and Twitter to take into account platform responses during the pandemic.
In this blogpost, we provide insight into the curatorial responses to harmful content and disinformation outlined in internet communications companies’ terms of service, community guidelines and editorial policies, before and during the COVID-19 pandemic. Although platform policies do not necessarily correspond with platform practices, taking a close look at them allows us to evaluate whether companies are living up to their own standards and to compare measures across time and between platforms. Whether and where internet communications companies strike the balance between protection and empowerment in their policies sets the tone for our online experience.
Potentially abusive or illegal content on online platforms can be flagged automatically through machine learning, or manually by users and third-party organisations (such as law enforcement, fact-checking organisations, and news organisations operating in partnership). Automated detection is on the rise and is important for tackling concerted efforts to spread disinformation, along with other types of communications deemed potentially harmful. To illustrate the automation of content moderation: over the period from April to June 2020, a total of 11,401,696 videos were removed from YouTube. Of these, only 552,062 (or 4.84%) were reported by humans; the rest were flagged automatically.
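For readers who want to check the proportion, here is a minimal arithmetic sketch in Python based on the figures quoted above (the variable names are ours):

```python
# YouTube removal figures for April-June 2020, as quoted above.
total_removed = 11_401_696   # videos removed in the quarter
human_flagged = 552_062      # removals reported by humans

automated = total_removed - human_flagged
print(f"Human-flagged share: {human_flagged / total_removed:.2%}")  # -> 4.84%
print(f"Automated share:     {automated / total_removed:.2%}")      # -> 95.16%
```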
After receiving machine- or human-driven notifications of potentially objectionable material, internet communications companies remove, block, or restrict content, applying a scale of action depending on the violation at hand. Notably, the companies’ rules can be more restrictive than what the law requires in a number of jurisdictions. A good example is Twitter’s decision to ban paid political advertising globally in November 2019. At the other end of the spectrum, Facebook continues to run categories of political advertising without fact-checking their content and has also resisted calls to prevent the micro-targeting connected to it. In September 2020, the company introduced a slight restriction, banning new political ads in the week before the US presidential election.
Another option to tackle disinformation is based, to quote DiResta, on the assumption that “freedom of speech is not freedom of reach”: sources deemed trustworthy or authoritative according to certain criteria are promoted by the algorithms, whereas content detected as disinformational (or hateful or potentially harmful in other ways) can be demoted in feeds. As an example, Facebook tackles clickbait by reducing the prominence of content carrying a headline that “withholds information or … exaggerates information”. Facebook also commits to reducing the visibility of articles that have been fact-checked by partner organisations and found wanting, and the company adds context by placing fact-checked articles underneath certain occurrences of disinformation.
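To make the ‘freedom of reach’ logic concrete, the sketch below shows in simplified form how a feed-ranking score could be boosted for authoritative sources and demoted for fact-checked content. The fields, weights and demotion multipliers are our own illustrative assumptions, not any platform’s actual ranking algorithm.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    post_id: str
    engagement_score: float        # baseline relevance/engagement signal
    source_authority: float        # 0.0-1.0, higher for sources deemed authoritative
    fact_check_rating: Optional[str] = None  # e.g. "false", "partly_false", or None

# Hypothetical multipliers: demote rather than remove flagged content.
DEMOTION_FACTORS = {"false": 0.2, "partly_false": 0.5}

def rank_score(post: Post) -> float:
    """Combine engagement and source authority, then demote fact-checked content."""
    score = post.engagement_score * (0.5 + 0.5 * post.source_authority)
    if post.fact_check_rating in DEMOTION_FACTORS:
        score *= DEMOTION_FACTORS[post.fact_check_rating]
    return score

feed = [
    Post("health-authority-update", 8.0, source_authority=0.9),
    Post("viral-miracle-cure", 9.0, source_authority=0.3, fact_check_rating="false"),
]
for post in sorted(feed, key=rank_score, reverse=True):
    print(post.post_id, round(rank_score(post), 2))
```

The point of demotion in this model is that contested content remains accessible but loses algorithmic amplification, which is precisely why transparency about the criteria and weights used matters.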
In addition to curating content, internet communications companies tackle what they call ‘coordinated inauthentic behaviour’ at an account level. Online disinformation can be easily spread through accounts that have been compromised or set up, often in bulk, for the purpose of manipulation. Several companies prohibit ‘coordinated inauthentic behaviour’ (including interference from foreign governments) in their terms of service agreements. For instance, WhatsApp “banned over two million accounts per month for bulk or automated behavior” in a three-month period in 2019. Roughly 20% of these accounts were banned at registration.
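Banning accounts ‘at registration’ implies detection heuristics that run before any content is posted, for instance by spotting bursts of sign-ups from the same network. The sketch below is a hypothetical illustration of such a heuristic; the threshold, time window and use of IP prefixes are our assumptions, not a description of WhatsApp’s actual systems.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical bulk-registration heuristic: flag a sign-up when too many accounts
# register from the same network prefix within a short sliding window.
WINDOW = timedelta(minutes=10)
THRESHOLD = 20  # sign-ups per prefix per window considered suspicious

recent_signups = defaultdict(list)  # ip_prefix -> list of registration timestamps

def flag_at_registration(ip_prefix: str, now: datetime) -> bool:
    """Return True if this registration looks like part of a bulk sign-up burst."""
    timestamps = recent_signups[ip_prefix]
    # Keep only sign-ups that fall inside the sliding window.
    timestamps[:] = [t for t in timestamps if now - t <= WINDOW]
    timestamps.append(now)
    return len(timestamps) > THRESHOLD
```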
Content moderation can interfere with an individual’s right to freedom of expression. Even though private actors have a right (within legal boundaries) to decide on the moderation policies for their services, an individual’s right to due process remains. Users and third parties should also be given insight into, and transparency about, how decisions are made, in order to guarantee that these are taken on fair and/or legal grounds.
In 2018, a group of US academics and digital rights advocates concerned with free speech in online content moderation developed the Santa Clara Principles on Transparency and Accountability in Content Moderation (a consultation to update the principles is ongoing). These principles set the bar high for the companies, suggesting detailed standards for transparency reporting and for notice and appeal mechanisms. Indeed, as operators of a de facto public sphere, dominant entities need to apply international standards rather than more limited ones of their own.
Facebook/Instagram, Google/YouTube, Twitter, Snapchat and LINE provide periodic (e.g. quarterly) public transparency reports on their content moderation practices insofar as these align with external (legal) requirements. They tend to be less transparent about their internal processes and practices. All except LINE also run (political) advertising libraries. The libraries of Facebook and Twitter cover all advertisements globally, while Google provides reports for political adverts in the European Union, the UK, New Zealand, India and the United States.
User empowerment requires that users have control over the content, accounts and advertising they see. Internet communications companies offer varying types of involvement, including flagging content for review, prioritising, snoozing/muting and blocking content and accounts, and changing the advertising categories users are placed in. This last tool is only offered by a handful of platforms. Facebook, for example, allows users to update their ad preferences by changing their areas of interest, which advertisers use for targeting, and by adjusting targeting parameters.
Finally, in response to curatorial action taken and in line with the Santa Clara Principles, it is important from the perspective of protecting freedom of expression that companies have in place procedures to appeal the blocking, demotion or removal of content and the disabling or suspension of accounts. This entails a detailed notification of the action, a straightforward option to appeal within the company’s own service, and a notification of the appeal decision.
Although external appeal to an arbitration or judicial body is theoretically possible in some countries, few companies offer robust appeal mechanisms that apply across content and accounts, or commit to notifying the user when action is taken.
Previous disinformation campaigns have made clear that without curatorial intervention, the services operated by internet communications companies would become very difficult to navigate and use due to floods of spam, abusive and illegal content, and unverified users. As the companies themselves have access to data on their users, they are well placed to monitor and moderate content according to their policies and technologies. Putting strategies in place, such as banning ‘coordinated inauthentic behaviour’ or promoting verified content, can help limit the spread of false and misleading content, and associated abusive behaviours. However, policies are best developed through multi-stakeholder processes, and implementation thereof needs to be done consistently and transparently. Monitoring this could also be aided by more access to company data for ethically-compliant researchers.
Terms of service, community guidelines and editorial policies tend to be more restrictive than what is legally required, and thus limit speech beyond the law, at least in the jurisdiction where a company is legally registered. Through the enforcement of their own standards, private companies with global reach are thus largely determining, currently in an uncoordinated manner, what counts as acceptable expression. In the absence of harmonised standards and definitions, each company uses its own ‘curatorial yardstick’, with no consistency in enforcement, transparency or appeal across platforms. This results in online platforms acting as definers, judges and enforcers of freedom of expression on their services. Indeed, any move by these companies in terms of review and moderation, transparency, user involvement and appeal can have tremendous implications for freedom of expression. Platforms’ responses to the ‘disinfodemic’, in particular the further automation of curation, confirm once again the need for transparent and accountable content moderation policies.
* This post is a shorter and adapted version of the original article published at EU DisinfoLab on 28 September 2020.