By Trisha Meyer and Clara Hanot
Internet communications companies are under pressure to tackle disinformation on their networks, while at the same time questions are raised about this shift towards private regulatory action, in which public scrutiny is almost entirely absent. In this blogpost, we argue that through their content moderation policies and practices, internet communications companies act as definers, judges and enforcers of freedom of expression on their services. Analysis of their responses to the ‘disinfodemic’, in particular the further automation of curation, confirms the urgent need for transparent and accountable content moderation policies.
In the recently published UNESCO/ITU Broadband Commission research report, Balancing Act: Countering Digital Disinformation While Respecting Freedom of Expression, we examined how 11 geographically diverse, global internet communications companies (or online platforms) with large user bases expressly or indirectly address the problem of disinformation through content curation and moderation.
As we completed the first round of our analysis in February 2020, the COVID-19 pandemic was gaining in intensity. In the months since, internet communications companies have reacted in unprecedented ways to limit the ‘disinfodemic’ of false health-related information and to redirect users to authoritative sources. In July 2020, we repeated our exercise for Facebook/Instagram, Google/YouTube and Twitter to take into account platform responses during the pandemic.
In this blogpost, we provide insight into the curatorial responses to harmful content and disinformation outlined in internet communications companies’ terms of service, community guidelines and editorial policies, before and during the COVID-19 pandemic. Although platform policies do not necessarily correspond with platform practices, taking a close look at them allows us to evaluate whether companies are living up to their own standards and to compare measures across time and between platforms. Whether and where internet communications companies strike the balance between protection and empowerment in their policies sets the tone for our online experience.
Flagging and review of content
Potentially abusive or illegal content on online platforms can be flagged automatically through machine learning, and manually by users and third-party organisations (such as law enforcement, fact-checking organisations, and news organisations operating in partnership). Automated detection is on the rise and is important for tackling concerted efforts to spread disinformation, along with other types of communications deemed potentially harmful. To illustrate the automation of content moderation: over the period from April to June 2020, a total of 11,401,696 videos were removed from YouTube. Of these, only 552,062 (or 4.84%) were reported by humans.
- As the COVID-19 pandemic unfolded, most of these companies moved towards heavier use of automation for content curation. To limit the spread of the virus, internet communications companies and government authorities encouraged staff to work from home. With a large share of moderation staff working remotely, Facebook/Instagram, Twitter and Google/YouTube chose to rely increasingly on algorithms for content moderation. As the companies themselves anticipated, the increase in automated moderation led to bugs and false positives.
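To give a sense of the scale of automation, here is a quick back-of-the-envelope check of the YouTube figures cited above; only the two input numbers come from the text, and the variable names are ours.

```python
# Back-of-the-envelope check of the YouTube removal figures cited above
# (April-June 2020). Only the two input numbers come from the text; the
# variable names are illustrative.

total_removed = 11_401_696   # videos removed from YouTube, Q2 2020
human_flagged = 552_062      # removals that were reported by humans

automated_flagged = total_removed - human_flagged
human_share = human_flagged / total_removed

print(f"Flagged by automated systems: {automated_flagged:,} ({1 - human_share:.2%})")
print(f"Flagged by humans:            {human_flagged:,} ({human_share:.2%})")
# -> roughly 95% of removed videos were flagged by automated systems
```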
Filtering, limiting, blocking or removal of content
After receiving machine- or human-driven notifications of potentially objectionable material, internet communications companies remove, block, or restrict content, applying a scale of action depending on the violation at hand. Notably, in a number of jurisdictions the companies’ rules can be more restrictive than what the law requires. A good example is Twitter’s decision to ban paid political advertising globally in November 2019. At the other end of the spectrum, Facebook continues to run categories of political advertising without fact-checking their content and has resisted calls to prevent the micro-targeting connected to it. In September 2020, it introduced a modest restriction by banning new political ads in the week before the US presidential elections.
- To limit the dissemination of disinformation narratives related to COVID-19, several companies have taken a more proactive approach to removing content. Google proactively removes disinformation from its services, including YouTube and Google Maps. For example, YouTube removes videos that promote medically unproven cures. Facebook committed to removing “claims related to false cures or prevention methods — like drinking bleach cures the coronavirus — or claims that create confusion about health resources that are available”. Also, the company committed to removing hashtags used to spread disinformation on Instagram. Twitter broadened the definition of harms on the platform to address content that counters guidance from public health officials.
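As a purely hypothetical illustration of the ‘scale of action’ described above, the sketch below maps violation categories to enforcement actions. The categories, default actions and escalation rule are invented for illustration and do not reproduce any company’s actual policy.

```python
# Hypothetical sketch of a graduated "scale of action". The violation
# categories, default actions and escalation rule below are invented for
# illustration and do not reflect any platform's real enforcement rules.

from enum import Enum

class Action(Enum):
    NO_ACTION = "no action"
    LABEL = "add warning label / context"
    RESTRICT = "limit reach or disable sharing"
    REMOVE = "remove content"

# Illustrative mapping from violation type to default action.
DEFAULT_ACTIONS = {
    "misleading_claim": Action.LABEL,
    "manipulated_media": Action.RESTRICT,
    "harmful_health_misinformation": Action.REMOVE,
}

def decide_action(violation: str, repeat_offender: bool = False) -> Action:
    """Escalate one step for repeat offenders (illustrative rule only)."""
    action = DEFAULT_ACTIONS.get(violation, Action.NO_ACTION)
    if repeat_offender and action is Action.LABEL:
        return Action.RESTRICT
    if repeat_offender and action is Action.RESTRICT:
        return Action.REMOVE
    return action

print(decide_action("misleading_claim"))                        # Action.LABEL
print(decide_action("misleading_claim", repeat_offender=True))  # Action.RESTRICT
```

The point of the sketch is simply that enforcement is graduated: the same piece of content can attract anything from a label to removal, depending on how the company categorises the violation.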
Promotion/demotion of content
Another option to tackle disinformation is based, to quote DiResta, on the assumption that “freedom of speech is not freedom of reach”: sources deemed trustworthy or authoritative according to certain criteria are promoted via the algorithms, whereas content detected as disinformation (or as hateful or potentially harmful in other ways) can be demoted in feeds. As an example, Facebook tackles clickbait by taking into account whether a headline “withholds information or if it exaggerates information separately” and reducing the prominence of such posts. Facebook also commits to reducing the visibility of articles that have been fact-checked by partner organisations and found wanting, and the company adds context by placing fact-checking articles underneath certain occurrences of disinformation.
- The primary strategy of internet communications companies to counter disinformation related to COVID-19 has been to redirect users to information from authoritative sources, in particular via the search features of the companies’ platforms, and to promote authoritative content on homepages and through dedicated panels. On Facebook and Instagram, searches on coronavirus hashtags surface educational pop-ups and redirect to information from the World Health Organisation (WHO) and local health authorities. Google also highlights content from authoritative sources when people search for information on the coronavirus, and displays information panels to add context. On YouTube, videos from public health agencies appear on the homepage. Twitter, meanwhile, curates a COVID-19 event page displaying the latest information from trusted sources at the top of the timeline.
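To make the demotion logic described in this section more concrete, here is a minimal toy sketch of ‘freedom of reach’ in practice: a ranking score is scaled down when fact-checking partners have rated a post false, instead of the post being removed. The scoring formula, demotion factors and field names are our own illustrative assumptions, not any platform’s actual ranking system.

```python
# Toy sketch of demotion ("freedom of speech is not freedom of reach"):
# flagged content stays up but its ranking score is scaled down.
# All factors and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float          # relevance/engagement score from the ranker
    fact_check_rating: str     # e.g. "none", "partly_false", "false"
    clickbait_headline: bool   # headline withholds or exaggerates information

DEMOTION_FACTORS = {"false": 0.2, "partly_false": 0.5, "none": 1.0}

def ranked_score(post: Post) -> float:
    """Demote rather than delete: reduce the distribution of flagged content."""
    score = post.base_score * DEMOTION_FACTORS.get(post.fact_check_rating, 1.0)
    if post.clickbait_headline:
        score *= 0.7           # illustrative penalty for clickbait headlines
    return score

feed = [
    Post("a", 0.9, "false", False),
    Post("b", 0.6, "none", False),
    Post("c", 0.7, "none", True),
]
for post in sorted(feed, key=ranked_score, reverse=True):
    print(post.post_id, round(ranked_score(post), 2))
# -> b 0.6, c 0.49, a 0.18: the fact-checked-false post drops to the bottom
```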
Disabling or removal of accounts
In addition to curating content, internet communications companies tackle what they call ‘coordinated inauthentic behaviour’ at an account level. Online disinformation can be easily spread through accounts that have been compromised or set up, often in bulk, for the purpose of manipulation. Several companies prohibit ‘coordinated inauthentic behaviour’ (including interference from foreign governments) in their terms of service agreements. For instance, WhatsApp “banned over two million accounts per month for bulk or automated behavior” in a three-month period in 2019. Roughly 20% of these accounts were banned at registration.
- Facebook/Instagram, Twitter and Google/YouTube do not appear to have implemented additional measures for disabling or suspending accounts in relation to COVID-19 disinformation. Nonetheless, Twitter has worked on verifying accounts with email addresses from health institutions so as to signal reliable information on the topic.
Transparency
Content moderation can interfere with an individual’s right to freedom of expression. Even though private actors have a right (within legal boundaries) to decide on the moderation policies of their services, an individual’s right to due process remains. Users and third parties should also be given insight into how moderation decisions are made, in order to guarantee that these are taken on fair and/or legal grounds.
In 2018, a group of US academics and digital rights advocates concerned with free speech in online content moderation developed the Santa Clara Principles on Transparency and Accountability in Content Moderation (a consultation to update the principles is ongoing). These principles set the bar high for the companies, suggesting detailed standards for transparency reporting and for notice and appeal mechanisms. Indeed, since dominant platforms operate a de facto public sphere, they need to apply international standards rather than more limited ones of their own.
Facebook/Instagram, Google/YouTube, Twitter, Snapchat and LINE provide periodic (e.g. quarterly) public transparency reports on their content moderation practices, insofar as these relate to external (legal) requirements. They tend to be less transparent about their internal processes and practices. All except LINE also run (political) advertising libraries. The libraries of Facebook and Twitter cover all advertisements globally, while Google provides reports on political adverts in the European Union, the UK, New Zealand, India and the United States.
- During the past months, internet communications companies have communicated extensively about their efforts to respond to the COVID-19 pandemic. For instance, Facebook, Twitter and Google have collected and structured their policy actions and announcements in dedicated repositories. Public transparency reports have also continued to be published. In May 2020, Facebook announced the members of its long-anticipated Oversight Board, which will provide “independent judgment over some of the most difficult and significant content decisions”.
User empowerment
User empowerment requires that users have control over the content, accounts and advertising they see. Internet communications companies offer varying types of involvement, including flagging content for review; prioritising, snoozing/muting and blocking content and accounts; and changing the advertising categories users are placed in. This last tool is only offered by a handful of platforms. Facebook, for instance, allows users to update their ad preferences by changing their areas of interest, the advertisers who may use this information, and targeting parameters.
- In relation to COVID-19, internet communications companies heavily emphasise prioritising authoritative content and have created information centres to help people find reliable information. Search functionalities have also been altered, such as Twitter’s COVID-19 search prompts and tabs. Facebook has enabled people to request or offer help in their communities, while Google has found itself playing a key role in getting educational communities online.
Appeal mechanisms
Finally, in response to curatorial action taken, and in line with the Santa Clara Principles, it is important from the perspective of protecting freedom of expression that companies have procedures in place to appeal the blocking, demotion or removal of content, or the disabling or suspension of accounts. This entails a detailed notification of the action, a straightforward option to appeal within the company’s own service, and a notification of the appeal decision.
Although external appeal to an arbitration or judicial body is theoretically possible in some countries, few companies offer robust appeal mechanisms that apply across content and accounts, or that notify the user when action is taken.
- No specific changes to appeal mechanisms related to COVID-19 have been noted, although Facebook cautioned that more mistakes were likely and that it could no longer guarantee a human-based review process.
Towards robust and transparent review and appeal mechanisms
Previous disinformation campaigns have made clear that without curatorial intervention, the services operated by internet communications companies would become very difficult to navigate and use due to floods of spam, abusive and illegal content, and unverified users. As the companies have access to data on their users, they are well placed to monitor and moderate content according to their policies and technologies. Strategies such as banning ‘coordinated inauthentic behaviour’ or promoting verified content can help limit the spread of false and misleading content and associated abusive behaviours. However, policies are best developed through multi-stakeholder processes, and their implementation needs to be consistent and transparent. Monitoring this could also be aided by giving ethically compliant researchers more access to company data.
Terms of service, community guidelines and editorial policies tend to be more restrictive, and thus limit speech, beyond what is legally required, at least in the jurisdiction where a company is legally registered. Private companies with global reach are thus largely determining, currently in an uncoordinated manner, what counts as acceptable expression and how their standards are enforced. In the absence of harmonised standards and definitions, each company uses its own ‘curatorial yardstick’, with no consistency in enforcement, transparency or appeal across platforms. This results in online platforms acting as definers, judges and enforcers of freedom of expression on their services. Indeed, any move by these companies in terms of review and moderation, transparency, user involvement and appeal can have tremendous implications for freedom of expression. Platforms’ responses to the ‘disinfodemic’, in particular the further automation of curation, confirm once again the need for transparent and accountable content moderation policies.
* This post is a shorter and adapted version of the original article published at EU DisinfoLab on 28 September 2020. Access the original version here.