The Politics of Profit and Disinformation: Where Is the Responsibility of Large Online Platforms?
Large online platforms play an increasingly central role in informing citizens. Yet even their best intentions to preserve the integrity of the information environment are called into question in an era when profit takes precedence.

Photo: Zašto ne
For the purpose of writing this article, interviews were conducted with Milica Kovačević (Raskrinkavanje.me, Center for Democratic Transition) and Vesna Radojević (RasKRIKavanje.rs).
According to research published in 2022 by the Citizens’ Association “Zašto ne”, 29% of citizens in Bosnia and Herzegovina strongly believe in conspiracy theories. A public opinion survey conducted in 2025 by the Damar Institute for the Center for Democratic Transition (CDT) showed that one in four citizens of Montenegro believes in at least one piece of disinformation every week, and almost 17% do so daily.
This reveals just how pervasive disinformation is, and how belief in false claims can strongly influence decisions at crucial moments. In today’s world, social networks have, by their very nature, become fertile ground for the spread of disinformation.
Disinformation can have serious real-world consequences. For instance, global anti-vaccination movements may have contributed to declining vaccination rates, leading to the re-emergence of diseases that were once nearly eradicated (1, 2). In other words, people are refusing life-saving vaccines, partly as a result of online disinformation.
The Covid-19 pandemic revealed just how powerful disinformation campaigns can be. The aforementioned research conducted by CA “Zašto ne” found that false claims about Covid-19 vaccines, especially those originating from social networks and the internet in general, were widely accepted among the unvaccinated population. These claims ranged from the utterly bizarre, such as assertions that vaccines contained microchips, to the more plausible, such as claims that the vaccines had not been tested.
In addition to health-related consequences, disinformation may also have financial ones. Numerous scams spread through social networks lead to the theft of financial and personal data from unsuspecting citizens, and sometimes even to direct monetary losses. Fact-checking platforms across the region have repeatedly exposed such fraudulent schemes (1, 2, 3, 4).
The danger of disinformation also lies in its role in fueling narratives of historical revisionism, further deepening social divisions. Hate speech and smear campaigns based on disinformation and manipulation, often amplified by large platforms, can have far-reaching effects on individuals and communities, particularly marginalized groups who are frequently targeted by such campaigns.
What is the role of large platforms in spreading disinformation?
The algorithms and business practices of large online platforms not only allow disinformation to spread but, in some cases, actively encourage it, generating significant profit. Emotionally charged content, in particular, tends to attract massive attention. Depending on the platform, this enables those spreading false information not only to reach a wide audience and amplify their influence but also to monetize their content.
Platforms such as Google, for example, offer content monetization through advertising. According to Google’s rules, ads are not allowed on content containing unreliable or harmful claims on topics such as health, climate, elections, or democracy. Yet an analysis conducted by the nonprofit organization ProPublica, which also used data from the fact-checking platform Raskrinkavanje.ba, found that as much as 87% of websites in Bosnia and Herzegovina, Serbia, and Croatia known to spread disinformation were still earning revenue from Google ads.
According to ProPublica, among 30 websites identified by Raskrinkavanje.ba as the most frequent sources of disinformation, 26 were monetizing through Google. Alongside Turkey, our region had the highest proportion of websites spreading disinformation while simultaneously profiting from Google ads. At the time of the study, Google ads were still present on 60% of articles proven to contain disinformation. This means that not only the publishers but also Google itself profited from disinformation, contrary to its own policies.
The same applies to social media platforms. Disinformation actors not only monetize their content but often pay platforms directly to promote it (1, 2), thereby creating profit for both sides.
These financial incentives encourage platforms to maximize the reach and engagement of content, often at the expense of information integrity. This is precisely where their responsibility becomes crucial.
How do platforms view their responsibility in preserving information integrity?
Some large companies, such as Meta and TikTok, have introduced certain mechanisms to combat disinformation. In 2016, Meta launched its Third Party Fact-Checking Program, a partnership with independent fact-checking organizations that assess the accuracy of content published on Meta-owned platforms (Facebook, Instagram, Threads). Once a piece of content is rated by these organizations, Meta takes action to reduce its reach.
Although the program has its shortcomings, in the Western Balkans it has had a notable impact on improving information integrity. Participating fact-checking organizations have recorded thousands of corrected posts – numbers that were unimaginable before the program began. However, in January 2025, the program was discontinued in the United States. While no similar announcements have yet been made for other parts of the world, its future remains uncertain.
Large online platforms address disinformation at least to some extent through their community guidelines and various policies. However, as in many other cases, these policies often exist on paper only, with inconsistent or insufficient implementation.
The discontinuation or downsizing of such programs suggests a shift in how platforms perceive their responsibility toward the information environment. Rules governing content moderation and hate speech are also changing, with some previously established policies being withdrawn. Under the guise of protecting freedom of expression, platforms are reshaping their policies in ways that allow them to continue maximizing profit, while paying little attention to the consequences of such changes.
Lack of regulation and loss of trust
In the Western Balkans, large platforms are not subject to any legal obligations regarding the protection of online space integrity, as the region largely falls outside existing regulatory frameworks. Research shows that this regulatory vacuum has led to numerous instances in which the integrity of information, crucial for democratic societies, is compromised on these platforms. Despite this, the platforms exhibit a high degree of passivity and a lack of transparency.
This situation leads to a widespread loss of trust. Actors working to preserve information integrity do not trust the platforms, while citizens have lost trust in everyone. The overwhelming amount of disinformation they encounter makes them skeptical of all information, making it increasingly difficult to navigate an environment saturated with news.
Holding large online platforms accountable and transparent is one of the key principles for preserving democratic societies in a technology-mediated age. Guided by this principle, the European Union adopted a set of rules, including the Digital Services Act (DSA), which requires major platforms to implement risk mitigation measures against the spread of disinformation and other content that can, for example, negatively impact elections or public health.
Such measures may include improving content moderation practices, refining algorithmic recommendation systems, adjusting ad targeting mechanisms, demonetizing disinformation, and cooperating with fact-checkers. Aligning domestic legal and regulatory frameworks with the DSA and implementing its key principles are essential steps toward ensuring more transparent and accountable conduct by large online platforms in our region as well.
(Marija Ćosić and Maida Ćulahović, “Zašto ne”)