E.V. (Emillie Victoria) de Keulenaar, MA
Norm and technique: speech moderation across contested public spheres
In the field of Internet studies, content moderation has tended to be studied through the lens of regulation and platform governance (Gorwa, 2019; Douek, 2021; DeCook et al., 2022). Yet the etymology and many uses of the term “moderation” give reason to broaden its definition to a social, political and legislative practice that modulates public discourse in relation to historically derived norms (Pohjonen and Udupa, 2017). Such norms are negotiated in constant and often contentious debate about which aspects of history should or should not be allowed to repeat themselves through language and behaviour (see, e.g., Vincent, 2008), and they have shaped many of the communication affordances of social media platforms (Bucher and Helmond, 2017). In the field of censorship studies, for example, it is understood that the moderation of public speech “materializes everywhere” as a practice that constitutes and defines the boundaries of public spheres (Post, 1998, p. 2), including platforms of all sizes (Gillespie, 2018). Which norms to adopt, and how to enforce them, has become an increasingly existential question for platforms, which are vulnerable to the fleeting consensus among users, regulators and tech actors on what can and cannot be said in a public sphere.
This thesis examines how speech norms have shaped platform content moderation practices, and how the struggle to build consensus around such norms affects the wider geography of online public spheres. The main research questions driving this study are:
1. From what (modern) history does speech moderation derive as a social and political practice, and how did it materialise in platform governance?
2. How have speech norms and moderation techniques evolved in content moderation practices, and how have they shaped platform speech affordances?
3. How do platforms and users negotiate what speech norms to adopt, and how do disagreements result in the formation of counter spheres?
4. How can digital methods capture and study traces of content moderation as ephemeral and largely inaccessible (meta)data?
Each of these questions is explored in four to eight peer-reviewed articles. The first chapter argues that moderation can be studied as a modular speech management system whose purpose is to decrease the reach, and thus the normalisation, of historical extremes in a given public sphere (de Keulenaar, 2023). Moderation techniques consist of countering, marginalising, embargoing or eventually suspending such extremes according to their level of objectionability, and may escalate to censorship when speech transgresses historically constituted norms.
The second chapter is a methodological study of how to capture content moderation practices as “platform effects” (de Keulenaar and Rogers, 2024). It develops three strategies to reconstruct the “scene of disappearance” of moderated data: (1) understanding what platforms define as problematic content in their policies and what techniques they use to sanction it; (2) using this information to design queries and collect related data before and while moderation occurs (“dynamic archiving”); and (3) reverse-engineering each moderation effect from various metadata (rankings for demotion, content availability for suspensions, and so on).
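The logic of strategies (2) and (3) can be illustrated with a minimal sketch. The class names, fields and the demotion threshold below are hypothetical illustrations, not the actual tooling used in the study: the idea is simply that repeated snapshots of the same post’s metadata (availability, ranking) can be diffed over time to infer a likely moderation effect after the fact.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Snapshot:
    """One observation of a post, taken during dynamic archiving."""
    post_id: str
    available: bool            # could the post still be retrieved?
    rank: Optional[int]        # position in search/feed results, if listed
    taken_at: datetime

@dataclass
class Archive:
    """Stores successive snapshots per post and infers moderation effects."""
    history: dict = field(default_factory=dict)  # post_id -> [Snapshot]

    def record(self, snap: Snapshot) -> None:
        self.history.setdefault(snap.post_id, []).append(snap)

    def infer_effects(self) -> dict:
        """Compare consecutive snapshots to label likely moderation effects."""
        effects = {}
        for post_id, snaps in self.history.items():
            for prev, curr in zip(snaps, snaps[1:]):
                if prev.available and not curr.available:
                    # Content disappeared between observations.
                    effects[post_id] = "suspended_or_deleted"
                elif (prev.rank is not None and curr.rank is not None
                      and curr.rank > prev.rank + 10):
                    # Sharp drop in ranking; threshold of 10 is arbitrary here.
                    effects[post_id] = "possibly_demoted"
        return effects

# Usage: snapshot posts matching policy-derived queries, then diff.
archive = Archive()
now = datetime.now(timezone.utc)
archive.record(Snapshot("p1", available=True, rank=3, taken_at=now))
archive.record(Snapshot("p1", available=False, rank=None, taken_at=now))
archive.record(Snapshot("p2", available=True, rank=2, taken_at=now))
archive.record(Snapshot("p2", available=True, rank=40, taken_at=now))
print(archive.infer_effects())
# {'p1': 'suspended_or_deleted', 'p2': 'possibly_demoted'}
```

In practice, the metadata fields would depend on what each platform exposes (e.g. API availability codes, ranked search results), and each effect label would need to be cross-checked against the platform’s stated sanctioning techniques from strategy (1).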
The third chapter is an amalgamation of four studies. The first study explores how objectionable content was defined by Twitter content moderation practices from 2010 to shortly after Musk’s takeover (de Keulenaar, Magalhães and Ganesh, 2023). It argues that Twitter transitioned from an adjudicative to a modular form of content moderation, which uses dynamic techniques, such as demotion, to build a crisis-resistant speech architecture. The second study looks at how Twitter took on the role of a public authority during the COVID-19 crisis, as public authorities, health organisations and users could not reach consensus on COVID-19 treatments, transmission and prevention measures (de Keulenaar, Kisjes, et al., 2023). The third (De Keulenaar, 2023) and fourth studies (de Keulenaar and Tuters, 2023) propose a theory of speech affordances, which I define as policies, techniques and user speech cultures that signal what users may and may not say on a given platform, contribute to the circulation of content across more or less moderated Web spaces, and form counter-spheres with competing affordances.
The fourth chapter combines one article (de Keulenaar, Burton and Kisjes, 2021) and two studies on, first, how YouTube and its users have disputed over time what kinds of speech should be allowed on the platform (de Keulenaar and Kisjes, 2022a), and second, how deplatformed users create various interdependencies between YouTube and counter-spheres like BitChute and Rumble (de Keulenaar and Kisjes, 2022b). Additional research includes empirical studies of how US-based platforms moderated the Brazilian elections of 2022 (de Keulenaar and Alves dos Santos Junior, 2023) and conflict regions such as Nagorno-Karabakh (de Keulenaar, Rutherford, et al., 2023).
References
Bucher, T. and Helmond, A. (2017) ‘The affordances of social media platforms’, SAGE handbook of social media. London: Sage [Preprint]. Available at: https://pure.uva.nl/ws/files/9115828/BucherHelmond_SocialMediaAffordances_preprint.pdf (Accessed: 29 August 2017).
De Keulenaar, E. (2023) ‘The affordances of extreme speech’, Big Data & Society, 10(2), Article 20539517231206810. Available at: https://doi.org/10.1177/20539517231206810.
DeCook, J.R. et al. (2022) ‘Safe from “harm”: The governance of violence by platforms’, Policy & Internet, 14(1), pp. 63–78. Available at: https://doi.org/10.1002/poi3.290.
Douek, E. (2021) ‘Governing Online Speech: From “Posts-As-Trumps” to Proportionality and Probability’, Columbia Law Review, 121(3), pp. 759–834. Available at: https://doi.org/10.2139/ssrn.3679607.
Gillespie, T. (2018) Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven and London: Yale University Press.
Gorwa, R. (2019) ‘The platform governance triangle: conceptualising the informal regulation of online content’, Internet Policy Review, 8(2), pp. 1–22. Available at: https://doi.org/10.14763/2019.2.1407.
de Keulenaar, E. (2023) ‘Censorship and moderation’, in Decifrando censuras: da regulação à produção de inexistências, do arquivo à internet. Lisbon: Decifrando censuras. Available at: https://docs.google.com/document/d/1yS-lgqbEbovjv-xKigim9TDqFzISx1ih/edit.
de Keulenaar, E., Rutherford, A., et al. (2023) ‘Conflict and reconciliation analytics for monitoring the second Nagorno-Karabakh war across social media platforms’, in Designing Tech for Social Cohesion. San Francisco. Available at: https://www.researchgate.net/publication/374422605_Conflict_and_reconciliation_analytics_for_monitoring_the_second_Nagorno-Karabakh_war_across_social_media_platforms.
de Keulenaar, E., Kisjes, I., et al. (2023) ‘Twitter as accidental authority: how a platform assumed an adjudicative role during the COVID-19 pandemic’, in R. Rogers (ed.) The Propagation of Misinformation in Social Media: a Cross-Platform Analysis. Amsterdam: Amsterdam University Press, pp. 109–138. Available at: https://www.aup.nl/en/book/9789048554249.
de Keulenaar, E. and Alves dos Santos Junior, M. (2023) ‘Dislocated moderation: platform content moderation of Brazilian militarism during the 2022 elections’, in Revolution (AoIR). Philadelphia, USA: AoIR. Available at: https://docs.google.com/document/d/1jqYlr97jARmZCfYZ7Wf4r9VwHlgUVE9h/edit?usp=sharing&ouid=118237212392921747507&rtpof=true&sd=true.
de Keulenaar, E., Burton, A.G. and Kisjes, I. (2021) ‘Deplatforming, demotion and folk theories of Big Tech persecution’, Fronteiras - estudos midiáticos, 23(2), pp. 118–139. Available at: https://doi.org/10.4013/fem.2021.232.09.
de Keulenaar, E. and Kisjes, I. (2022a) ‘A genealogy of problematic information in YouTube hate speech controversies’, in Decolonizing the Internet. AoIR, Dublin: AoIR.
de Keulenaar, E. and Kisjes, I. (2022b) ‘Dissenting speech norms and the evolution of “alt-tech” as counter-public spheres’, in The Transformation of Public Dissent. ECREA pre-conference, ECREA. Available at: https://drive.google.com/file/d/1tExMz5Pk8WZK-0bkm__djVug9acEj9rR/view?usp=sharing (Accessed: 15 December 2023).
de Keulenaar, E., Magalhães, J.C. and Ganesh, B. (2023) ‘Modulating moderation: a history of objectionability in Twitter moderation practices’, Journal of Communication, 73(3), pp. 273–287. Available at: https://doi.org/10.1093/joc/jqad015.
de Keulenaar, E. and Rogers, R. (2024) ‘After deplatforming: the return of trace research for the study of platform effects’, in T. Venturini et al. (eds) The SAGE Handbook of Data and Society: An Interdisciplinary Reader in Critical Data Studies. London: SAGE. Available at: https://docs.google.com/document/d/1vmuNXWRI7PJPawf0-UGxs5So9-UB1l04/edit.
de Keulenaar, E. and Tuters, M. (2023) ‘The Affordances of Replacement Narratives: How the White Genocide and Great Replacement Theories Converge in Poorly Moderated Online Milieus’, in The Politics of Replacement. Routledge.
Pohjonen, M. and Udupa, S. (2017) ‘Extreme Speech Online: An Anthropological Critique of Hate Speech Debates’, International Journal of Communication, 11(0), p. 19. Available at: https://ijoc.org/index.php/ijoc/article/view/5843 (Accessed: 16 February 2021).
Post, R. (1998) Censorship and Silencing: Practices of Cultural Regulation. Los Angeles, CA: Getty Publications (Issues & Debates).
Vincent, M.-B. (2008) ‘Punir et réeduquer: le process de dénazification (1945-1949)’, in M.-B. Vincent (ed.) La dénazification. Paris: Perrin (Tempus), pp. 9–88.
Last modified: 17 January 2024, 5.14 p.m.