Child Safety Online: Are Major Platforms Doing Enough?
Despite policies proclaiming child protection, major online platforms often fail to respond to reports of violence, abuse, and harmful content, or they respond far too late. And while their algorithms are designed to generate profit by attracting and holding users’ attention and encouraging interaction, children remain unprotected from the serious risks they encounter on social media.

Photo: Zašto ne
For the purposes of this article, interviews were conducted with Kristina Mihailović (“Parents” Association), Adi Pejdah (Centre for a Safer Internet), and Snežana Nikčević (NGO “35mm”).
A fifteen-year-old girl from one of the countries in the region endured online abuse for months. Secretly recorded videos of her were posted on social media, accompanied by lies, threats that she would be killed, and even livestreams showing her location. Even after six months of abuse, her parents, the police, and the competent institutions were unable to obtain any additional information about the perpetrators from the platform where the content was posted, and they received no feedback whatsoever on the frequent reports they submitted.
Although major online platforms proclaim child protection in their policies (1, 2, 3), cases in which minors are endangered through their services are not rare. How responsive a platform is generally depends on which rules are being violated and what type of abuse or misuse is being reported. According to our interviewees, it also depends on who submitted the report.
Cases of peer-to-peer violence, for example, are addressed rarely or late, and reports usually have little effect. In many cases, platforms do not even confirm that a report was received. And even when platforms do eventually react, their responses often come too late, after the harmful content has remained online for a long time, spread further, and caused numerous negative consequences in real life.
Hoping to trigger faster action, users often call on more people to submit the same report, driven by the widespread perception that platforms are more likely to respond when the same content is reported by multiple individuals. Meta, however, states in its rules that the number of reports does not affect its response and that the same guidelines are applied in all cases.
Platforms tend to react more quickly and proactively to reports of the most extreme forms of content, such as sexually explicit material, sexual exploitation, and child abuse, especially when such reports are forwarded through police services, as Adi Pejdah from the Centre for a Safer Internet explained.
Associations dedicated to child online safety, such as the Centre for a Safer Internet, have established protocols for analysing and processing reported content that endangers children’s safety. Following this protocol, they forward reports to competent police services, which then communicate with platforms through their designated contact points. Reports concerning content hosted on servers outside Bosnia and Herzegovina are forwarded to local safer internet centres that are members of the INHOPE network.
However, when the reported content does not contain elements of a criminal offence, there are no mechanisms for direct communication with platforms. The only option left to the Centre is to talk to parents and children and, in that way, try to bring about, for example, the removal of harmful content and an end to peer-to-peer violence on social media.
Children are exposed to a wide range of risks on social media. When it comes to inappropriate content, children can easily access everything, from explicit pornography to content containing hate speech and incitement to violence (1, 2). The design of the algorithms used by major online platforms facilitates the spread of such content, while the restrictions that, according to the platforms’ own rules, are meant to protect children are not effective enough to provide real safety.
Another major problem is the inefficiency of content-moderation systems, which also rely on algorithms to detect inappropriate content. After a minor, K.K., killed eight and wounded 14 people at an elementary school in Belgrade in May 2023, a large amount of prohibited and harmful content appeared on social media. Across different platforms, “fan accounts” dedicated to the perpetrator emerged, along with accounts impersonating him, messages of support, and imitations (1, 2, 3). Although the victims were minors, their photos and identities were also heavily exploited; according to media reports, filters featuring the victims’ faces have appeared on TikTok. Despite the enormous public attention and the fact that seven children lost their lives, moderation measures on social media were clearly not intensified.
How Do Children Actually Use Platforms?
The terms of use of major online platforms, such as those run by Meta, TikTok and Snapchat, state that the minimum age for independently creating an account and using the platform is 13. TikTok, for instance, offers a “separate TikTok experience designed specifically for younger users” in the United States. However, enforcement of these restrictions has proven inadequate.
To begin with, the only age verification many platforms require is a self-declaration: in many cases, all a child under 13 has to do is claim to be older and they can create an account. Age-verification mechanisms that rely on identity documents, biometric data, or AI tools, meanwhile, raise additional concerns about the protection of users’ personal data (1, 2).
Children younger than the permitted age successfully bypass these restrictions (1, 2, 3). A study conducted this year in Australia, for instance, showed that as many as 84% of children aged 8 to 12 used some type of social-media service, half of them through accounts belonging to their parents or guardians. One-third of the children in the study used their own social-media accounts despite platform rules, and in 80% of those cases they had help from their parents or guardians in creating them.
According to our interviewees, parents and guardians are often unaware of the risks associated with children’s use of online platforms. They are more often concerned about the amount of time their children spend online than about the content they consume. As a result of this lack of awareness, they frequently help children create and use social-media accounts in violation of platform rules. Other problems are their limited understanding of how parental-control tools work and generally low levels of digital literacy.
Parents most often react only after negative consequences have occurred, and, lacking better solutions, they support bans. Our interviewees note that the Montenegrin public largely welcomed the news of a one-year ban on TikTok in Albania. That decision, however, was criticised by experts for limiting freedom of expression and violating European principles of digital-platform regulation. Moreover, banning a single platform does not solve the problem, as harmful content can easily appear elsewhere. As Snežana Nikčević warns, the most problematic types of content have already moved from social media to closed user groups and other channels dominated by private content exchange, such as Snapchat and similar platforms.
The Responsibility to Protect Children Online Also Lies With Platforms
Instead of introducing bans, legal measures should focus on obliging platforms to implement child-protection measures against online dangers and risks such as harassment, abuse, or exposure to inappropriate content. In the European Union, the Digital Services Act obliges major online platforms to regularly assess potential risks to children and young people and to implement measures to mitigate those risks. However, the mere availability of parental-control tools or age-verification systems is not sufficient.
What the EU’s regulatory framework particularly emphasises is the obligation of platforms to respond promptly to reports of illegal and harmful content. Trusted flaggers, organisations granted special status for identifying and reporting illegal or rule-violating content, can play an important role. Including such a mechanism in domestic legislation in the region would allow organisations working on child-online-safety issues and possessing the necessary expertise to submit reports directly to platforms, while also imposing a legal obligation on platforms to treat those reports as a priority.
Ultimately, online platforms are the ones responsible for protecting children as one of the most vulnerable groups in society. That responsibility must not fall solely on parents and guardians.
(Marija Ćosić and Maida Ćulahović, “Zašto ne”)