Between Censorship and Freedom: The DSA and the Moderation Dilemma on the Internet
By Maciej Lesiak
Introduction: Rising Social Costs of Arbitrary Moderation
Today, I want to share some thoughts on Internet moderation, which is very much connected to disinformation and conspiracy theories. During the COVID-19 pandemic, it became apparent to all how dangerous it can be to make health-related decisions based on false premises drawn from manipulated information and conspiracy theories. Saying that false information can lead to violence and vandalism is, by today’s standards, stating the obvious. Examples include the burning of 5G towers, the attack on the U.S. Capitol, or the anti-vaccine movements. Content moderation has never been easy on major social media platforms with billions of users from countries with varying perspectives on what should and shouldn’t be said.
Billions of pieces of content are reported for review, with the bulk of this review work being done by artificial intelligence. Sometimes such arbitrary moderation yields unfortunate results, as illustrated by the case of the Auschwitz Museum, where 21 educational posts commemorating camp victims were flagged (presumably as a result of a malicious action) as "violating community standards." These posts were allegedly dominated by "adult nudity and sexual activities," promoting "hate speech" and "incitement to violence." Despite protests from the Auschwitz Museum, Meta initially restored only some of the posts on Facebook; all of them were reinstated only after the media scandal.
As can be seen, this is an opaque process that frustrates everyone: people who report content and see it stay up, people hit by malicious reports and algorithm errors, and advocates of unrestricted free speech alike. Unfortunately, the costs of moderation keep rising, and I mean both financial and social costs. Elon Musk's turning of X into a libertarian stream of consciousness and conspiracy theories with virtually zero moderation is a prime example of what deregulation and a lack of oversight can lead to.
The EU Seeks a Path to a Safer Internet - The Digital Services Act (DSA)
The Digital Services Act (DSA) is an EU regulation that aims to tidy up the online space, primarily addressing illegal content and disinformation while introducing clear moderation rules. DSA requires social media platforms to moderate content transparently and responsibly, while also prohibiting manipulative interfaces and user-tracking advertisements. The Polish organization Panoptykon has actively supported the work on DSA and DMA from the beginning, emphasizing the importance of regulation for creating a safer online ecosystem and protecting user rights.
Large platforms arbitrarily censor content (removing specific posts or blocking accounts) and decide which materials are boosted by algorithms and which go unnoticed. These decisions, usually made by algorithms, are not random but reflect the platform's policies and business objectives. In the real world, this not only influences what handbags are bought in a given season but also who has a better chance of winning an election. (Panoptykon: DSA and DMA in a nutshell – what you need to know about the law that aims to curb cyber giants)
On October 29, the European Commission announced "public consultations on the rules for researcher access to online platform data under the Digital Services Act" (EU, 29-10-2024). Vetted researchers will be granted access to platform data, allowing them to identify risk areas, develop solutions, and help secure what is generally referred to as the online environment. We are currently in the second consultation stage. I will discuss this in more depth below, addressing moderation in the context of the IAM (Independent Advisory Mechanism), which is set to play a crucial role in processing applications for access to online platform data under the DSA, ensuring independence and expertise in reviewing such requests.
The Role of IAM (Independent Advisory Mechanism) in the Moderation Process
Let me briefly outline the role of the IAM (Independent Advisory Mechanism) as I understand it within the narrative surrounding the Digital Services Act. The IAM is an advisory mechanism designed to ensure independence and expertise in decisions on access to VLOP (Very Large Online Platform) data. It will thus play a key role in monitoring risks associated with published content and in reviewing data access requests from researchers and other interested entities, effectively influencing moderation, since these data form the basis for subsequent steps.
As I understand it, IAM is intended to support both the content moderation process and provide tools for a better understanding of the threats posed by disinformation and the promotion of violence. IAM’s independence is crucial for moderation to avoid being perceived as arbitrary – I will discuss this aspect more extensively in the context of the moderation-deletion approach. Arbitrary deletion was one of the main criticisms directed at platforms like Facebook, Twitter, and YouTube in the past. We are witnessing the development of a balanced moderation concept, which is intended to avoid creating a toxic environment with its actions. It requires that decisions to delete content be made based on fact-based analysis, taking into account context and expert opinions, and IAM is to play a supervisory role over these decisions. I must say, it sounds a bit idealistic…
Analysis by Demagog on Moderation and Content Removal
Before moving on to key issues related to moderation, it's worth noting a recent event. The fact-checking service Demagog published an interesting analysis of disinformation about the DSA (Demagog: Will the DSA allow NGOs to delete content from the Internet?). False information has begun to circulate online, suggesting that private companies and NGOs will have the right to remove politically incorrect posts. This is obviously false. I will not go into detail on this issue; please refer to the source text. However, it is worth noting the correlation between the launch of procedures for access to VLOP data and the emergence of disinformation targeting moderation and the DSA.
We seem to be at a critical stage, one that began more than a year ago and culminated in the European Commission publishing a document summarizing the consultations on the Delegated Regulation on data access under the Digital Services Act (Digital Services Act: Summary report on the call for evidence on the Delegated Regulation on data access). In general terms, the goal is to facilitate scientific research on systemic risks in the EU, covering the scope of data access, procedures, data formats, and access to public data such as that available through scraping. Without going too deep into the technical details, the next phase, announced on October 29, 2024, shows that work is progressing. Returning to Demagog: after reading their excellent analysis, I was curious to check the EU's list of VLOPs (Very Large Online Platforms and Search Engines; see the link in the footer). Among the platforms listed there is, for example, a Czech company operating, among others, the XVIDEOS pornography service, which has 160 million monthly active users.
To illustrate the scale of the problem, let us highlight the potential risks and areas for moderation in such services:
- Extreme sexual violence and content involving minors
- Content depicting physical or psychological violence
- Zoophilia and other forms of animal cruelty
- Content promoting abuse and exploitation of people
- Deepfake AI and non-consensual content (including deepfake pornography, non-consensual pornography, cyber exploitation, and non-consensual fictitious imagery) – reflecting the widespread use of AI image manipulation
- Content suggesting consent without actual consent
- Hate speech in comments and interactions
- Offensive or degrading content
- Content normalizing sexual violence or coercion
- Content stimulating toxic and negative user interactions
- Exploitative content
Strange Coincidences? Who Is Behind the Devaluation of the DSA?
Time and again, we have had proof that large companies influence the law-making process. If a company I once worked for could, together with others in its sector across Poland, hire a PR agency to fight nationally proposed media regulations and restrictions on their business, one can only imagine what a multi-billion-dollar company can achieve. Listening to a TOK FM broadcast in which a PR specialist rhetorically demolished activists, playing them like children with the most basic manipulation techniques, showed how much can be done with effective PR; saying that a billion-dollar company can do far more, and will keep doing more, is an understatement. Any regulation that requires implementing procedures or interacting with external entities means costs for companies, along with the risk of liability and of revealing trade secrets, e.g., details of a recommendation algorithm. It currently seems to me that the fake news about the DSA may be an attempt to manipulate and undermine trust in an EU regulation introduced to protect users and, consequently, an attempt to influence public opinion.
According to what we can read in the quoted Panoptykon article on content moderation rules, the DSA will bring more transparency, and:
Platforms must regularly publish “transparency reports” revealing, among other things, statistics on removed content. (Panoptykon, Ibidem)
For some, providing detailed reports on how controversial content is moderated may be problematic. It may also reveal the extent to which VLOPs are conduits of disinformation. It’s also worth noting that what is illegal in Turkey may not necessarily cause controversy in secular France. Not to mention the global scale.
As for answering the question "who is behind the devaluation of the DSA?" – I won't give an answer here, as I don't know all the information or details from which one could deduce potential motives and benefits. That task would likely require a whistleblower and an investigative journalist. However, as numerous analyses of various conspiracy and disinformation narratives have repeatedly shown, such narratives are often pumped by specific companies with an interest in the outcome, e.g., private education providers interested in undermining trust in public schools (see the analyses by Gabriel Gatehouse).
Moderation and Content Removal – Deepening Reflection
Let’s briefly revisit IAM as an advisory mechanism that, according to DSA creators, can bring numerous benefits in standardizing research methods aimed at evaluating the effectiveness of moderation and its impact on society. For instance, data access that allows verification of what content was moderated and why will only be possible after a thorough assessment by IAM. This is intended to ensure transparency and avoid situations where moderation is misused, potentially limiting freedom of speech. Let’s continue down this path.
According to what I found in the official European Commission documentation and available articles (see sources), the current stance does not provide for preserving removed content for research purposes, nor does it address the concern that deleting content online could fuel conspiracy theories; it implies instead that "ensuring transparency" and "clear responsibility guidelines" are meant to counteract such phenomena. In my opinion, however, the 2023 summary document, along with comments highlighting the need for further research on risk, implicitly suggests that retaining deleted platform content, at least to some extent, could be beneficial for research purposes. The current status is discussed in Panoptykon's articles:
- Users whose posts are removed or accounts blocked should receive a detailed explanation of the reasons for such restrictions, along with information on which regulation provision they violated. They will be able to appeal such decisions, with a human, not an algorithm, having the final say in their case.
- Major platforms will be required to hire enough moderators to provide proper content moderation in all EU languages, including handling reports of illegal content and appeals regarding content removal or account blocking.
- Special bodies prepared to address content admissibility disputes in the network, as well as courts, will oversee decisions regarding content removal and account blocking. (Panoptykon, Ibidem)
I am particularly interested in moderation through content and profile deletion. I am strongly against erasing fake news and disinformation materials, at least from high-reach profiles such as those of politicians. Why? The presence of fake news on the Internet allows for its analysis and for examining its impact on public opinion, which is difficult to do once it is removed. I think this should be reconsidered and the consequences studied. What are the benefits and what are the harms?
I am convinced by the position presented below by the Royal Society (the UK's national science academy), which discusses the costs of erasure without a trace. Deleting posts and content online can lead to the creation of even more conspiracy theories, reinforcing the belief that tracks are being covered. Violent content and AI-generated content on porn services are one thing, but disinformation aimed at undermining democratic institutions is another. By removing the latter, we create a self-confirming conspiracy machine: they're erasing it to cover their tracks. They!
"But completely removing information can look a lot like censorship, especially for scientists whose careers are based on the understanding that facts can and should be disputed, and that evidence changes. […] In a new report, it advises against social media companies removing content that is 'legal but harmful.' Instead, the report authors believe, social media sites should adjust their algorithms to prevent it from going viral - and stop people making money off false claims. […] Removing content may exacerbate feelings of distrust and be exploited by others to promote misinformation content." (Should bad science be censored on social media?)
Proposal for a Moderation Solution
In my opinion, posts should be preserved, while the interactions that feed the algorithm (reactions, shares, etc.) could be blocked. There is also the concept of geographic moderation: posts that violate regional laws or are controversial there could be muted regionally. The link itself should remain active but hidden from the feed, displayed against a special warning/notice background, marked as noindex for crawlers, and accompanied by a brief factual explanation. I would therefore demote the post in the algorithm as far as possible; additionally, profiles that frequently post conspiratorial content should have their reputation reduced, possibly be marked in a particular way, and eventually be subjected to temporary "shadow bans" of about 45 days to curb disinformation during elections.
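To make the idea concrete, here is a minimal sketch in Python of how such "preserve but demote" handling could be represented; the field and function names are hypothetical, not taken from any platform's API. The post stays reachable by direct link, but feed visibility, interactions, and crawler indexing are switched off and a short context notice is attached.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationState:
    """Hypothetical per-post flags for 'preserve but demote' moderation."""
    feed_visible: bool = True          # shown in feeds and recommendations
    interactions_enabled: bool = True  # reactions/shares that feed the ranking algorithm
    indexable: bool = True             # exposed to search-engine crawlers
    ranking_multiplier: float = 1.0    # 1.0 = normal reach, 0.0 = maximally demoted
    context_notice: str | None = None  # brief factual explanation shown with the post
    geo_muted_in: set[str] = field(default_factory=set)  # ISO country codes

def demote_post(explanation: str, regions=()) -> ModerationState:
    """Apply the 'mute, don't delete' policy described above (illustrative only)."""
    return ModerationState(
        feed_visible=False,           # hidden from the feed, but the direct link still works
        interactions_enabled=False,   # cuts off the signals that would boost the algorithm
        indexable=False,              # page rendered with <meta name="robots" content="noindex">
        ranking_multiplier=0.0,
        context_notice=explanation,
        geo_muted_in=set(regions),    # geographic moderation for region-specific violations
    )

state = demote_post(
    "Independent fact-checkers found this claim to be false; see the linked sources.",
    regions={"DE", "FR"},
)
print(state)
```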
Implementing some form of Web of Trust to tag profiles that consistently publish disinformation, thus demoting them in the algorithm, is also a good idea. I noticed attempts to implement this concept in NOSTR Web of Trust.
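Below is a rough illustration of the web-of-trust idea, again only a sketch with made-up names rather than NOSTR's actual implementation: each profile's score is the trust-weighted sum of ratings it receives from already-trusted profiles, and a low or zero score could feed straight into the demotion logic sketched above.

```python
# Hypothetical web-of-trust scoring (not NOSTR's actual algorithm): a profile's
# reputation is the trust-weighted sum of ratings from already-trusted profiles.
def wot_scores(ratings: dict[str, dict[str, float]],
               seeds: dict[str, float],
               iterations: int = 10) -> dict[str, float]:
    """ratings[rater][rated] is in [-1, 1]; seeds are manually trusted anchors in [0, 1]."""
    scores = dict(seeds)
    for _ in range(iterations):
        updated = dict(seeds)  # seed trust stays fixed on every pass
        for rater, given in ratings.items():
            weight = scores.get(rater, 0.0)
            if weight <= 0:
                continue  # profiles with no earned trust do not influence anyone
            for rated, value in given.items():
                if rated in seeds:
                    continue
                updated[rated] = updated.get(rated, 0.0) + weight * value
        # Clamp to [0, 1]; a score of 0 would translate into demotion in the feed.
        scores = {profile: max(0.0, min(1.0, s)) for profile, s in updated.items()}
    return scores

print(wot_scores(
    ratings={"factchecker": {"news_outlet": 0.8, "spam_farm": -1.0}},
    seeds={"factchecker": 1.0},
))  # {'factchecker': 1.0, 'news_outlet': 0.8, 'spam_farm': 0.0}
```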
Some authors also argue that preserving manipulative content by adding context may boost responsible journalism, as manipulative content and disinformation will be exposed and subjected to critical public analysis. This aspect seems relevant in light of what Panoptykon writes:
Reliable analysis and balanced opinions stand no chance against clickbait headlines and polarizing posts. This dynamic contributes to the spread of disinformation and deepens the erosion of professional journalism. (Panoptykon, Ibidem)
Following this line of thought, if clickbait and the pumping of disinformation are subjected to "silent moderation and the addition of context," as some argue, this will serve an educational function, constantly reminding the public that the material is false, manipulated, or exaggerated. This is particularly relevant today in the age of AI-generated fake historical images or architecture, as when I recently saw an image of Apple's headquarters with absurdly sci-fi underground structures. Such images gain popularity among poorly educated and easily manipulated audiences and should be automatically flagged. Similarly, profiles that engage in pumping, i.e., manipulating the algorithm, should be blocked. These profiles, which amass likes on the back of AI-generated content, are later used in major disinformation campaigns - more on this below.
Opposing Views: Immediate Removal Due to High Social Costs
It is worth noting that views opposing the Royal Society's position also exist. Their proponents argue that some viral content has such a destructive impact on the social fabric that it must be removed immediately. Experts from the Center for Countering Digital Hate (CCDH) state:
The team points to Plandemic - a video that went viral at the start of the pandemic, making dangerous and false claims designed to scare people away from effective ways of reducing harm from the virus, like vaccines and masks, and was eventually taken down. (Should bad science be censored on social media?)
As seen, various approaches exist, and many scientific documents I’ve reviewed highlight methodological difficulties in analyzing samples, comparative materials between services, and drawing far-reaching conclusions about trends.
Does Moderation and Marginalization Lead to Radicalization?
I would like to refer here to an excellent analysis showing the link between moderation, deletion, and migration to other channels. Hard moderation, profile banning, and content deletion push content to the corners of the internet beyond any control. New distribution channels gain a reputation as “freedom havens” in contrast to the “censored” ones. As noted in an analysis by Samantha Walther and Andrew McCoy:
Several alternative social media platforms have emerged in response to perceptions that mainstream platforms are censoring traditional conservative ideologies. However, many of these alternative social media platforms have evolved to be outlets for hate speech and violent extremism. […] QAnon and the COVID-19 pandemic were the most prominent sources of disinformation and conspiracy theories on Telegram. Perspectives on Terrorism. (Vol 15, Issue 2, April 2021).
This creates a niche for alternative social networks that give a voice to marginalized groups. Channels like Telegram were once associated mainly with ISIS, but increasingly we hear about the radicalizing American far right and far left, which use these distribution channels just as actively. The authors provide specific examples of politicians migrating to Telegram after their posts were deleted or moderated on, for instance, Twitter/X or Facebook. Conservatives who face moderation by big tech companies, which they describe as liberal and aligned with the Democrats, simply migrate to places beyond those companies' control:
"conservative social media users have shifted away from mainstream platforms to sites that promote greater free speech policies and do not enforce extensive content removal guidelines."
This raises the question of what is better: moderation and deleting posts, or limiting their visibility with a warning and context, as I wrote? Since these channels are isolated bubbles, there is a strong echo chamber effect, where we only talk to people who confirm our views. Research has shown that such an environment fosters radicalization, where conspiracy theories are not criticized but only intensified by further disinformation.
"Alternative social media platforms enable the building of strong and nearly impenetrable virtual communities that can produce echo chambers of hate speech and violent extremist dialogue that would otherwise be removed by mainstream platforms."
Nostr Beyond Any Jurisdiction?
However, it is worth noting that since 2022 the NOSTR network has been developing intensively as a response to the problems of social media centralization and moderation. Its creators, who come mainly from the libertarian milieu around blockchain and the digital currency bitcoin, want, in theory, to free user profiles from any moderation, server affiliation, or ability to block users. Their concept of a distributed network based on the NOSTR protocol and the NIP layers within it allows, at least in principle, for what they call escaping censorship. Currently, the network is dominated primarily by far-right and cryptocurrency-engaged communities. It is worth noting that moderation here takes place at the level of relays, and the creators are well aware of problems such as illegal content, including child sexual abuse material, which may be hosted illegally on servers.
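For readers unfamiliar with the protocol, a short sketch of what relay-level moderation means in practice: a NOSTR event is a signed JSON object as defined in NIP-01, and each relay applies its own acceptance policy, so a note rejected by one relay can still be served by any other. The policy below is purely illustrative, not any real relay's code, and the placeholder values are mine.

```python
# A NOSTR event, as defined in NIP-01, is a signed JSON object with these fields
# (signature and id verification are omitted in this sketch).
example_event = {
    "id": "event-id-placeholder",           # sha256 of the serialized event (hex)
    "pubkey": "author-pubkey-placeholder",  # author's public key (hex)
    "created_at": 1730000000,               # unix timestamp
    "kind": 1,                              # kind 1 = short text note
    "tags": [],
    "content": "hello nostr",
    "sig": "signature-placeholder",
}

# Hypothetical relay policy: every relay applies its OWN rules, and an event it
# rejects can still be accepted and served by any other relay -- hence no global takedown.
BLOCKED_PUBKEYS = {"known-spam-pubkey"}
BLOCKED_TERMS = {"example-banned-phrase"}

def relay_accepts(event: dict) -> bool:
    """Decide whether this particular relay stores and rebroadcasts the event."""
    if event["pubkey"] in BLOCKED_PUBKEYS:
        return False
    if any(term in event["content"].lower() for term in BLOCKED_TERMS):
        return False
    return True

print(relay_accepts(example_event))  # True: this relay would accept the note
```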
GDPR, Content Deletion, and Bot Farms
I admit that determining the scope and form of content retention for research purposes, as well as of blocking and restricting access, will not be easy. Since EU law, in the context of the GDPR, requires that users have access to their data and may request that its processing cease, any data retained in some form would have to be anonymized to take it out of the scope of deletion requests.
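One conceivable way to reconcile research retention with GDPR erasure requests, assuming (and this is a legal question I am not settling here) that pseudonymized research copies are acceptable, is to replace direct identifiers with a keyed hash before archiving, as in this Python sketch with hypothetical field names.

```python
import hashlib
import hmac

# Secret pepper held only by the research-access gatekeeper (e.g. the IAM body in this
# scenario); without it the pseudonyms cannot be re-linked to users by the platform.
RESEARCH_PEPPER = b"rotate-me-regularly"

def pseudonymize_user_id(user_id: str) -> str:
    """Stable keyed hash: the same user maps to the same token, but the token
    reveals nothing about the original identifier."""
    return hmac.new(RESEARCH_PEPPER, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

def archive_for_research(post: dict) -> dict:
    """Strip direct identifiers from a removed post before retaining it for researchers."""
    return {
        "author": pseudonymize_user_id(post["author_id"]),
        "text": post["text"],                   # the content itself is the research object
        "removed_reason": post["removed_reason"],
        "removed_at": post["removed_at"],
        # deliberately dropped: display name, email, IP address, device identifiers
    }

print(archive_for_research({
    "author_id": "user-42",
    "text": "example removed post",
    "removed_reason": "disinformation",
    "removed_at": "2024-10-29T12:00:00Z",
}))
```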
Another issue is bot farms. Let's discuss this briefly by analogy with operation Doppelgänger, in which Russian services created a network of fake sites imitating press services and popular portals to pump disinformation. Facebook played a significant role here: the pumped-up profiles posting cute kittens and memes, sometimes with hundreds of thousands of followers, would occasionally publish emotional posts that spread like wildfire… Why do I bring this up? Because the question remains of what to do with profiles known as botnets, sockpuppets, troll accounts, or amplifiers. Meta has a term for this: CIB, Coordinated Inauthentic Behavior. Should we delete these fake account networks in their entirety? They overload databases and pollute the online environment, and there is also the problem of distinguishing fake accounts from real ones.
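As an illustration of one common heuristic for the "fake versus real accounts" problem, and emphatically not Meta's actual CIB methodology, the sketch below flags groups of accounts that post near-identical text within a short time window, a typical fingerprint of coordinated amplification.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def coordinated_clusters(posts: list[dict], window: timedelta = timedelta(minutes=10),
                         min_accounts: int = 3) -> list[set[str]]:
    """Group accounts that published the same text within `window` of each other.
    posts: [{"account": str, "text": str, "time": datetime}, ...]"""
    by_text: dict[str, list[dict]] = defaultdict(list)
    for p in posts:
        by_text[p["text"].strip().lower()].append(p)

    clusters = []
    for same_text in by_text.values():
        same_text.sort(key=lambda p: p["time"])
        for i, first in enumerate(same_text):
            accounts = {p["account"] for p in same_text[i:] if p["time"] - first["time"] <= window}
            if len(accounts) >= min_accounts:
                clusters.append(accounts)
                break  # one flagged cluster per identical text is enough here
    return clusters

posts = [
    {"account": a, "text": "Shocking truth they hide from you!", "time": datetime(2024, 10, 29, 12, m)}
    for a, m in [("bot1", 0), ("bot2", 2), ("bot3", 5), ("real_user", 55)]
]
print(coordinated_clusters(posts))  # flags bot1/bot2/bot3; real_user posted much later
```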
None of this will be an easy task. It's also worth noting that all countries conduct such psychological operations online; the example I presented is simply very recent and highly publicized. De facto, politicians routinely use PR agencies specializing in social media to pump manipulated content, exploiting, for example, Facebook's "hate algorithm," which can shift social attitudes on issues such as migration. Manipulation at this level, and the question of whether such content should be moderated, is so subtle and politically charged that I believe implementing moderation here will be very challenging. And let's not forget that most of the "raw material" concerns topics like migration, wars, and climate change.
DSA conference in Łódź: "The Digital Services Act through the Eyes of Comment Authors." Photo: ML, dadalo.pl
Cyber Giants’ Reign or Conspiracy Theory Fuel? Quo Vadis Moderation?
After attending the conference "The Digital Services Act through the Eyes of Comment Authors" in Łódź, where one could hear from lawyers, agencies handling VLOPs, and even the President of UODO (Poland's data protection authority), I get the impression that this element (preserving content) is not being considered at all. A pity.
To be honest, I am not optimistic. For over 10 years, I have periodically reported legal violations, violent behavior, fake profiles, and outright scams on Facebook and Twitter/X. None of the Facebook ads I reported as blatant scams violating community standards and rules were removed. Death threats, derogatory language, and violent behavior were not removed on X/Twitter or Facebook. On Instagram, child prostitution thrives. The supposed age verification for access to YouTube, Facebook, and Instagram is a fiction. The same goes for moderation. The changes brought by the DSA are supposed to improve this situation.
I get the feeling that regulations are moving towards limitation and deletion instead of adding context. This will cause users to migrate to alternative internet regions that cannot be monitored and will only become more attractive. I would like those involved in lawmaking to think not only about user satisfaction and safety from a legal perspective but also about the consequences of legal decisions on content removal. A blind machine is not a good solution. Big tech has proven time and again its ability to circumvent regulations and create new solutions.
I have a small hope here, as a hidden idealist, that moderation is a continuous process of learning and improving. Frankly, big tech seems to take an extraordinarily long time for this process; maybe the whip will speed it up. Likewise, AI mechanisms and models are getting better, and they are used to process billions of reports. Platforms must find a way to effectively limit harmful content without radicalizing its authors and participants. Time will tell if the DSA, combined with IAM, can meet these challenges and provide a safer Internet that isn’t a toxic sludge.
In closing, it is worth noting that projects analyzing disinformation phenomena are already being developed. If a knowledge base on disinformation and conspiracy theories were built, something like Google's Fact Check Explorer, then I believe linking it to user posts that spread disinformation and adding context to them would be feasible - at least from a formal, academic point of view; its usefulness would still have to be confirmed by researchers.
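As a feasibility sketch only: Google's Fact Check Tools API exposes a claims:search endpoint over the same ClaimReview data that powers Fact Check Explorer, so a context-adding pipeline could look roughly like the code below. The plain-text matching and the way the notice is composed are my own simplifications, and a production system would need far more careful claim matching.

```python
import requests  # third-party: pip install requests

FACTCHECK_ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def fact_check_context(post_text: str, api_key: str, language: str = "en") -> str | None:
    """Query the Fact Check Tools API for reviews matching the post text and
    return a short context notice, or None if nothing relevant is found."""
    resp = requests.get(FACTCHECK_ENDPOINT, params={
        "query": post_text,          # naive: use the raw post text as the search query
        "languageCode": language,
        "key": api_key,
    }, timeout=10)
    resp.raise_for_status()
    claims = resp.json().get("claims", [])
    if not claims:
        return None
    review = claims[0].get("claimReview", [{}])[0]
    publisher = review.get("publisher", {}).get("name", "a fact-checking organization")
    rating = review.get("textualRating", "disputed")
    url = review.get("url", "")
    return f"Context: {publisher} rated a matching claim as '{rating}'. See {url}"

# Example (requires a valid API key):
# print(fact_check_context("5G towers spread COVID-19", api_key="YOUR_API_KEY"))
```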
Sources:
META: coordinated inauthentic behavior
Correctiv: Facebook silent as pro-Kremlin disinformation campaign rages on
US CyberCommand: Russian Disinformation Campaign “DoppelGänger” Unmasked: A Web of Deception
"Doppelgänger" – the scheme of a Russian influence operation against the West
Panoptykon: DSA and DMA in a nutshell – what you need to know about the law that aims to curb cyber giants
Demagog: Will the DSA allow NGOs to delete content from the Internet?
President of UODO at the conference "The Digital Services Act through the Eyes of Comment Authors"
Agenda: The Digital Services Act through the Eyes of Comment Authors