
Election campaign monitoring: Very large online platforms

Zašto ne

In October 2024, Local Elections were held in Bosnia and Herzegovina. The CA “Zašto ne” (“Why not”) is publishing a series of analyses on how the election campaign looked from the perspective of politicians, the media, and social media platforms.

Photo: Zašto ne

The Citizens’ Association “Zašto ne” (“Why not”) has been monitoring elections in Bosnia and Herzegovina for 15 years. Since 2010, Istinomjer.ba has been tracking public statements made by public office holders as well as election candidates, verifying the truthfulness and consistency of their statements. Raskrinkavanje.ba, another fact-checking platform operating within this association, has been monitoring factual claims published in the media and on social media platforms since 2017. Throughout these years, the primary focus of both fact-checking platforms has been political and media accountability.

With technological development and the growing popularity of social media platforms, these platforms have taken on an increasingly significant role in electoral processes. Therefore, during the pre-election campaign for the 2024 Local Elections, “Zašto ne” focused not only on media and political accountability but also on the responsibility of very large online platforms. Istinomjer and Raskrinkavanje monitored various sources on social media platforms such as Facebook, Instagram, YouTube, X, and TikTok, searching for disinformation, manipulation, and other harmful content as defined by domestic legislation and by the rules of these very large online platforms. More details about the research and methodology can be found in the article titled “Elections 2024: How did platforms and institutions respond to harmful phenomena during the pre-election campaign?”.

The results of the research conducted by “Zašto ne” are published in a series of three articles. This is the third article, which examines how very large online platforms responded to reports of harmful content. Links to the other articles, which discuss how political actors (mis)used social media platforms during the campaign and the harmful electoral content published on social media platforms by media and other users, can be found in the text available at this link.

Relevant EU frameworks for the responsibility and transparency of online platforms

Bosnia and Herzegovina is a candidate country for EU membership and is in the process of aligning its domestic legislation with European legal and regulatory frameworks governing online platforms and their handling of harmful and illegal content. We therefore aimed to investigate whether any positive effects of these frameworks are already noticeable and in which areas the biggest shortcomings exist.

The Digital Services Act (DSA) is the overarching legally binding regulation for online platforms, including social media platforms, making responsibility and transparency in handling illegal content a legal obligation within the EU. In the context of elections, relevant procedures include reporting and acting on illegal content, i.e. content that violates domestic electoral laws. In this regard, platforms are required to establish mechanisms that allow any user to report content they consider illegal and to act “without undue delay” on each report. This includes sending a confirmation to the user about receiving the report, providing an explanation of the decision regarding the reported content, and offering information on legal remedies available concerning that decision.

When discussing harmful content, the DSA obligates very large online platforms and search engines to conduct risk assessments, including actual or foreseeable negative impacts on civic discourse and electoral processes, and to take measures to mitigate such risks. One way to manage risks related to elections is by implementing the Guidelines for Reducing Systemic Risks for Electoral Processes and, in the context of spreading electoral disinformation, through the implementation of the Code of Practice on Disinformation.

During monitoring, we examined whether users of very large online platforms in Bosnia and Herzegovina were able to report content that violates the country’s electoral legislation and whether platforms were implementing measures related to harmful content as outlined in the Guidelines and the Code, focusing on the following recommendations:

  • Transparency in political advertising: On online platforms where it is allowed, political ads must be clearly labelled and identifiable as such. Additionally, users should be able to access information about the sponsor’s identity, the period during which the political ad is intended to run, the ad’s value, and the targeting parameters applied. Another recommended measure is maintaining a publicly accessible repository (library) of political ads.
  • Protection of service integrity: Online platforms should enforce rules to prevent manipulation of their services, such as the creation and use of fake accounts, impersonation of candidates, the use of misleading manipulated media, fake interactions aimed at artificially boosting reach or creating an impression of public support, non-transparent paid messages, or non-transparent influencer promotions. This also includes applying measures to recognize content generated by artificial intelligence.
  • User empowerment: Access to official information about the electoral process, such as informational banners about elections and the voting process, pop-ups, links directing users to election authority websites, and similar resources. This also includes cooperation with local initiatives and media literacy campaigns in the electoral context.

Rules of social media platforms

Online platforms have their own rules contained in their terms of service as well as community standards or guidelines that specify what types of user behavior and content are prohibited or restricted. In the context of elections, these rules mostly pertain to political advertising (whether it is allowed and what rules apply to it), the spread of disinformation about the electoral process, or other forms of interference, manipulation, and abuse aimed at discrediting the electoral process.

Political advertising is allowed on Facebook and Instagram, and internal rules require anyone who wants to publish an ad about social issues, elections, or politics to undergo an authorization process. Every political ad published on these platforms must include information on who paid for it and must also be available in the “Ad Library”, which gives the public insight into numerous details such as the sponsor’s identity, the amount spent on the ad, the estimated audience size, the location, and the number of ad impressions. These rules also apply to ads from Bosnia and Herzegovina. In contrast, political ads shown to users in Bosnia and Herzegovina through Google Ads are subject to no restrictions or special transparency requirements. In certain countries and regions, including the European Union, political advertising through Google Ads also requires a verification process, and Google maintains a “Political Ad Library” listing all verified ads (although the company has announced that, as of October 2025, it plans to discontinue this service in the EU due to new regulations coming into effect). The social media platform TikTok does not allow political advertising, while on X it is permitted only in certain countries, which do not include Bosnia and Herzegovina.
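The details stored in Meta’s Ad Library can also be retrieved programmatically through its publicly documented Ad Library API. The minimal sketch below (in Python; it assumes a valid Graph API access token, the search term is a hypothetical example, and parameter and field names should be verified against the current API version) illustrates how one might list political ads that reached users in Bosnia and Herzegovina:

    # Minimal sketch (not the monitoring team's own tooling): querying Meta's
    # Ad Library API for political ads that reached Bosnia and Herzegovina.
    # Assumes a valid access token; parameters follow Meta's public docs.
    import requests

    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder, issued via Meta for Developers

    response = requests.get(
        "https://graph.facebook.com/v19.0/ads_archive",
        params={
            "ad_type": "POLITICAL_AND_ISSUE_ADS",
            "ad_reached_countries": "['BA']",  # ads shown to users in BiH
            "search_terms": "izbori",          # hypothetical query ("elections")
            "fields": "page_name,ad_delivery_start_time,spend,impressions",
            "access_token": ACCESS_TOKEN,
        },
        timeout=30,
    )
    response.raise_for_status()

    # Each entry includes the sponsoring page, start date, and ranged
    # estimates of spend and impressions.
    for ad in response.json().get("data", []):
        print(ad.get("page_name"), ad.get("ad_delivery_start_time"), ad.get("spend"))

Programmatic access of this kind is what makes independent, at-scale verification of sponsor identity and ad spending feasible for researchers.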

All very large online platforms (Facebook, Instagram, TikTok, X, and YouTube) observed during monitoring have rules regarding the spread of false information about the electoral process and generally prohibit such content. Meta claims to “reduce the distribution of false election news” and removes content aimed at disrupting voting, such as calls for election violence and false information about voting (e.g. dates, locations, times, voting methods, voter registration, disinformation about who can vote, required documents for voting, disinformation about whether a candidate is running, etc.). Similarly, disinformation about voting, election participation, and election results, as well as any interference with the electoral process, is prohibited on TikTok. YouTube’s policies likewise prohibit content that aims to deceive voters or to discourage or interfere with voting, including false claims about candidate eligibility. X has similar rules on electoral process integrity, except that “inaccurate statements about elected or appointed officials, candidates, or political parties” are not considered electoral manipulation on this platform.

Furthermore, some platforms claim to implement user empowerment measures: Meta states that it facilitates the dissemination of official election information through in-app notifications, while TikTok claims to direct search results for topics prone to disinformation, such as elections, to official information sources and to add informative banners to pages and live content.

All the mentioned platforms also offer users the option to report content that violates their rules and/or community standards. This option is available directly on the published content or profile. However, reporting content that is considered illegal under national legislation (e.g. electoral laws) is generally only possible for users from European Union member states, in accordance with the obligations set forth by the Digital Services Act. Facebook and Instagram include the category “Report as unlawful” among their content reporting options. TikTok also offers the option “Report illegal content” directly next to the content, whereas reporting illegal content on the X platform requires filling out a special online form, which is likewise only available to EU users. Google is the only company that allows the reporting of illegal content on its platforms (e.g. YouTube) through an online form accessible in Bosnia and Herzegovina.

What harmful and prohibited content circulated on social media platforms during the election campaign?

In the first two articles published in this series, we presented the findings of the campaign monitoring for the Local Elections in Bosnia and Herzegovina.

The first article, “Election Campaign Monitoring: Candidates and Social Media Platforms”, provides an overview of falsehoods and other manipulations spread by election candidates and their parties during the campaign.

The second article, “Election Campaign Monitoring: Media and Social Media Platforms”, details the disinformation published by the media during the campaign, as well as the harmful informational phenomena that marked the last elections in our country.

All identified examples of problematic content were reported to the platforms using publicly available user reporting mechanisms. The team then monitored the platforms’ responses to these reports, including any feedback received, such as acknowledgement of the report, notification of the decision made on the report, the possibility of appealing the platform’s decision, and the potential outcome of the appeal.

During the monitoring, we focused on the following types of content:

  1. Content that violates the provisions of the Bosnia and Herzegovina Election Law (Chapter 16 – Media in the Election Period)

The implementation of the provisions of this chapter is further regulated by the Regulation on Media Representation and Public Advertising of Political Entities in the Election Period, which defines media as “all electronic media, online media, print media, and social media platforms”.

According to election rules in Bosnia and Herzegovina, from the announcement of elections until the official start of the election campaign, election campaigning and paid political advertising in the media and on social media platforms are prohibited.

Analyses by Istinomjer and Raskrinkavanje revealed numerous instances of premature campaigning on social media platforms and paid political ads outside the legally permitted period. However, these cases could not be reported to the platforms because users from Bosnia and Herzegovina do not have the option to report illegal content.

The Election Law also prohibits political entities from using media to spread false information that could undermine the integrity of the election process and mislead voters.

Due to the lack of an option to report content that violates domestic laws, such cases were reported based on violations of the platforms’ own rules.

The misuse of children for political purposes is also prohibited, meaning that children must not be included in activities related to political advocacy or promotion, such as participating in political advertising.

Election campaign monitoring identified several such cases in content shared on Facebook. Since reporting illegal content was not an option, these problematic posts could only be reported on the basis of potential child exploitation. However, the platform did not respond to the reports, likely because this type of child exploitation for political purposes does not violate Facebook’s community standards.

When publishing the results of surveys and public opinion research, the media must include a series of details that allow the public to determine whether the research is credible.

On the X platform, multiple posts from the same profile presented alleged public opinion poll results regarding mayoral candidates in four cities in Bosnia and Herzegovina, without mentioning the individual or institution that commissioned or conducted the research. These posts were classified as false, but due to the lack of options to report either illegal or false content on X, they could not be reported.

In the twenty-four hours before polling stations open, media coverage of any political or election campaign activities is prohibited.

During the election silence period, no such cases were recorded in the monitoring.

  2. Harmful content, including content that violates the platforms’ own rules

The primary focus in this section was on disinformation, i.e. factually inaccurate content analyzed by Raskrinkavanje and Istinomjer. Additionally, researchers from these two fact-checking platforms examined disinformation related to the election process itself, including misleading, malicious, or false information that could potentially discredit political candidates and negatively impact the election process.

The largest number of reports submitted to platforms for violating community standards or their own rules pertained to election-related disinformation.

Other harmful content was also recorded, including hate speech (such as calls for violence, promotion of hate/extremist groups and symbols), homophobia, and misogyny, particularly when used for political purposes to influence election results or the process (e.g. insulting or belittling political opponents).

Such content was found on all monitored platforms except YouTube and was reported for violating community standards or rules related to promoting violence and hate speech or harassment.

We also monitored the enforcement of platform rules regarding political advertising, specifically any violations of the ban on political ads on TikTok and X.

During the monitoring period, no such cases were found, though it should be noted that we focused only on traditional political ads, not on other forms of political promotion in content and posts.

  3. Implementation of risk mitigation measures

We monitored whether platforms implemented recommended risk mitigation measures, such as highlighting official and verified election-related information alongside political ads or other election-related content (e.g. a link directing users to the website of the Central Election Commission).

No such examples were recorded during the monitoring.

Monitoring findings by platform

During the monitoring, we reported a total of 119 pieces of content on five social media platforms. We received acknowledgment for only 35 reports, while feedback on the platform’s decision was provided in 25 cases. In all 25 cases, the response stated that the content did not violate community standards. Although we exercised our right to appeal in all 25 cases, only three appeals resulted in the removal of the problematic content.

Facebook

We reported 71 problematic pieces of content identified during the monitoring on Facebook. Of these, 48 reports were related to the spread of false political information, 16 to hate speech, one to harassment, and six to child exploitation.

When false information is shared by media outlets or other Facebook users who are not politicians, it can be assessed as part of the Independent Fact-Checking Program. Most of the disinformation observed during monitoring was evaluated through Raskrinkavanje’s participation in this program, and warnings and fact-checking links were displayed on such content. However, political speech containing disinformation, such as posts on candidates’ Facebook profiles or other profiles sharing their statements, cannot be evaluated through this program and can only be reported. Researchers from the CA “Zašto ne” reported a total of 15 posts containing such content. No feedback was received on any of these reports: neither confirmation of receipt nor notification of any actions taken by the platform.

Almost all other reports, 21 in total (14 for hate speech, one for harassment, and six for child exploitation), received both confirmation and notification of the platform’s decision. Each notification stated that the content was not removed, but no information was provided on whether other measures (e.g. reducing visibility) were taken.

We used the opportunity to appeal platform decisions regarding our reports in all 21 cases. As with the initial reports, we received the appeal decisions quickly, on the same day or within a maximum of four days. None of the cases involving child exploitation resulted in removal, even after our appeal. As already mentioned, the issue in these cases is that the exploitation of minors for political campaigns does not violate Facebook’s internal rules but rather the Election Law of Bosnia and Herzegovina, which Bosnian users are unable to report.

Most appeals regarding decisions on content we reported as hate speech were also rejected. In one case, Facebook referred us to the possibility of appealing to the Oversight Board. However, the platform’s response to the other four reports of similar content (displaying symbols that incite hatred) on the same profile was either entirely absent or concluded that the content did not violate community standards, indicating inconsistency in decision-making regarding user reports and appeals.

Only two posts (1, 2) were removed following our appeal, with the conclusion that they violated community standards.

Instagram

We reported four problematic pieces of content on Instagram, all of which involved hate speech. Notably, all four pieces of content were posted on the same user account.

In three cases, the platform responded that there was no violation of community standards, while one post was initially removed and then reinstated. Instagram did not find a violation even after an appeal in three cases (including the one that was previously removed), while one post was deleted.

TikTok

Of the 27 reported pieces of content on TikTok, 25 related to electoral disinformation, while two reports were for hate speech.

TikTok did not provide any feedback: neither confirmation of receiving the reports nor notification of any steps the platform may have taken.

X

Of the 18 problematic pieces of content mapped on X, we reported 14. As already explained, the remaining four posts related to fake election polls, which we were unable to report due to the lack of an option to report this type of content.

The remaining reports concerned hate speech (7), harassment (6), and spam (1). We received confirmation of receipt for half of these reports, but there was no further notification from the platform regarding any steps taken.

YouTube

We reported three pieces of video content on YouTube as false information. Confirmation of receipt was immediate, but we were not informed of any further steps taken regarding our reports.

Conclusions

The positive effects of European legal and (self-)regulatory frameworks that impose certain obligations and/or recommendations on very large online platforms regarding harmful and illegal content are still not noticeable in Bosnia and Herzegovina. In the context of elections, individual users and civil society organizations have no way to report to platforms content that violates domestic election laws; reports can only be made on the basis of violations of the social media platforms’ own rules. The level of transparency from platforms regarding how they handle reports is generally low. Meta’s platforms, Facebook and Instagram, provide the most feedback: they are the only ones that inform users of the actions taken on their reports and the only ones that allow appeals. However, this seems to apply only to categories of content that can ultimately be removed for violating platform rules (e.g. child exploitation, hate speech, or harassment); for reports of false information shared by politicians and parties, users receive no notifications at all. The least transparent platform is TikTok, which provides no feedback whatsoever after a report.

Additionally, there is no indication that platforms are implementing other measures to reduce the risk of election process manipulation. TikTok is the only platform that allows reports of election disinformation (content that disrupts or threatens the electoral process and election integrity), but, as we have found, it is unclear whether the platform takes any action after a report and, if so, what those actions are.

The same applies to the implementation of voluntary measures, such as those the platforms committed to applying in the European Union by signing the Code of Practice on Disinformation. Mapping social media platforms’ rules has shown that transparency measures for political advertising do not apply to political ads from Bosnia and Herzegovina, except for those served through Meta’s platforms.

Finally, no examples of good practices aimed at empowering users and directing them to official sources of election information have been recorded.

 

(Maida Ćulahović)