(In)Visibility on Social Media: How Dependent Are Media and Content Creators on Online Platforms?
Account shutdowns, content removals, reach limitations, and a lack of transparency turn the digital space into a place of uncertainty, with limited communication options and no clear appeals process. Media outlets and content creators feel the consequences most acutely, as they directly depend on platform policies and decisions.

Photo: Zašto ne
For this article, interviews were conducted with Milanka Kovačević (Direkt portal), Tanja Maksić (BIRN Serbia), Maja Ćalović (Mediacentar Sarajevo), Hristina Cvetinčanin Knežević (Feminizam iz teretane), Nina Pavićević (Kritički), and Feđa Kulenović (Faculty of Philosophy, University of Sarajevo).
In September 2022, the Serbian portal Informer published an interview with a serial rapist. Such content, experts warn, further traumatizes victims, and giving media space to perpetrators of such crimes is problematic on multiple levels. After writer and activist Minja Marđonović publicly reacted to Informer’s publication of the interview, her email account was hacked and child pornography was posted on her Facebook profile. As a consequence, all of her accounts on Meta’s platforms (Facebook, Instagram, and WhatsApp) were shut down. Only after digital rights organisations intervened, contacting Meta through their international partners and proving that the disputed content had been published as a result of the account being compromised, was Marđonović’s access to her accounts restored.
However, many people who lose their accounts or individual pieces of content never find out why the content was removed, nor do they regain access. One such example is the Trebinje-based Direkt portal, whose official Facebook page was deleted four years ago without any prior warning or explanation. Despite numerous attempts to contact the platform, the outlet never learned why this happened.
The only option left to them was to create a new page, but they lost the audience they had built over the years. Milanka Kovačević, journalist and editor-in-chief of the outlet, believes this significantly narrowed their space for action. This negative experience made her realise just how dependent media outlets are on decisions made by online platforms, which essentially have no obligations toward them. There is no contact person, and almost every attempt at communication results in an automatically generated response.
“It’s like talking to a wall, and we depend on that wall”, Kovačević said.
Media outlets and content creators also frequently encounter content removal, or find themselves unable to publish content on social media at all, again without any clear explanation or logic. Our interviewees say they can only speculate, or draw conclusions from experience, about why platforms react this way.
When Facebook prevented Direkt portal from publishing an investigative story on illegal construction in Trebinje, the only information the outlet received was that the content was “sensitive.” The platform’s notification gave no further details about what exactly the problem was, so they were left to resolve it on their own. They assumed the reason was a photograph of baby strollers, included solely to illustrate that parents with children could not move along the sidewalks.
Cases like this usually occur through an automated process, at the very moment content is submitted for publication. Algorithms flag the content, or one of its elements (most often a visual one), as violating platform rules. As Feđa Kulenović explained, in some cases this means the content is removed only temporarily and is reinstated after a human moderator reviews it and determines that no rules were violated. However, this process is neither obvious nor transparent. Users do not know which part of the content violated which rule, so adjusting the content usually comes down to assumptions and trial and error.
He believes platforms should provide clear information about what triggered the removal: whether it was the topic of the post, a specific keyword, a photograph, a comment, or some other element, as well as the grounds for removing the content or blocking its publication.
Invisible Sanctions: How Platforms Affect Content Visibility
Even when a platform determines that content does not violate its rules and restores access or allows publication, consequences may remain. When a post is deleted, the visibility of the entire profile is automatically reduced, Kulenović says. However, it is unclear whether visibility is restored once the post is reinstated.
Through their algorithms, platforms decide which news or content will be visible to users. Media outlets have no control over the reach of their content, and algorithm changes, about which little is usually known, can drastically affect visibility.
According to the Reuters Institute’s 2025 Digital News Report, the organic reach of media content on social media continues to decline. The experiences of media outlets in the region confirm this: according to our interviewees, their main complaints concern limited visibility and constant reductions in reach, a phenomenon commonly known as shadowbanning.
As with content removal, the reasons why algorithms limit reach are completely unclear. Maja Ćalović, who coordinates the Coalition for Freedom of Expression and Content Moderation in Bosnia and Herzegovina on behalf of Mediacentar Sarajevo, explains that coalition members have noticed that using certain keywords (such as the word “genocide”) results in reduced visibility or problems when posting content. The assumption is that algorithms are configured to react more strictly to such content, but there is not enough evidence to draw firm conclusions, and the platforms themselves offer no explanations.
Nina Pavićević also claims that “almost no one sees” the content she posts on Instagram related to Palestine. She adds that she generally notices drastic changes in the visibility of her posts, regardless of topic. Other interviewees agree that it is impossible to determine why some content goes viral while other posts are not visible even to the most loyal followers. Hristina Cvetinčanin Knežević, for example, assumes that a large number of reports on negative comments she receives on certain viral posts may result in lower visibility of subsequent posts, even when they address “milder” topics.
Our interviewees also note that platforms sometimes do not allow paid promotion (so-called boosting) of certain content, even when the content does not appear to violate any rules. BIRN Serbia frequently finds itself unable to promote content on Facebook. For example, it was not allowed to promote an investigative article on the involvement of top state officials in a corruption affair, allegedly because the article conflicted with the platform’s policy on promoting political content, even though other content on similar topics could be promoted. Here too, given the lack of platform transparency, media outlets can only speculate about the reasons for such selective treatment.
Profile verification, or the “blue checkmark,” is becoming increasingly important for the work of media outlets and individual content creators online. Without verification, which on Meta’s platforms now requires a paid subscription, it is becoming ever harder to reach audiences, promote content, or receive platform support. Experience shows, for example, that verified users can appeal directly to platforms when they lose access to their profiles or when content is removed, an option not available to unverified users.
An Unequal Position in Relation to Major Online Platforms
Media outlets, especially independent and local ones, largely depend on online platforms to distribute content to their audiences and are forced to adapt to rules imposed by those platforms. However, these rules are becoming increasingly complex and opaque, and complying with them requires ever greater financial and human resources: from budgets for profile verification and paid content promotion to constant adaptation to trends that determine content visibility and reach.
At the same time, media outlets and content creators in Bosnia and Herzegovina and other countries in the region have no protection whatsoever. As Milanka Kovačević notes, “we have a virtual world that is just as real and important, and in that virtual world our country has no jurisdiction.”
The European Union’s legal framework offers solutions that oblige platforms to be more transparent about their content moderation practices. Under the Digital Services Act, major online platforms must provide detailed explanations for any action resulting in content removal, access restriction, reduced visibility, or suspension of user accounts. These explanations are publicly available in the DSA Transparency Database. Platforms are also required to provide users with easily accessible mechanisms for appealing content moderation decisions.
The European framework has also introduced measures aimed at protecting media freedom and pluralism. The Digital Services Act requires major platforms to implement risk mitigation measures related to users’ fundamental rights, including the right to freedom of expression and information. This includes, for example, improved content moderation practices or algorithm adjustments. Based on the provisions of this Act, cases have already been initiated that directly concern content visibility on platforms (1, 2).
Adopting legal solutions modelled on the European framework, in a way that does not undermine fundamental rights and media freedoms, would give media and content creators in our region greater predictability, protection, and visibility in an environment dominated by major platforms.
(Marija Ćosić and Maida Ćulahović, “Zašto ne”)