Unveiling algorithmic bias: Is social media against Gaza?

Daily News Egypt

In the era of digital communication, social media platforms have become powerful tools for facilitating global conversations and promoting diverse perspectives. However, concerns have arisen regarding alleged bias in the algorithms employed by these platforms, particularly in relation to the Israeli-Palestinian war. In this feature, Daily News Egypt digs deeper into the impact of algorithmic prejudice and how to counter it.

Experience on Instagram:

Meta, the parent company of Facebook and Instagram, has implemented various measures in response to the Israeli aggression on Gaza. The company denies deliberately suppressing voices but acknowledges that errors can occur and offers an appeal process for users. Meta has designated Hamas as a “Dangerous Organization” and banned it from its platforms, removing content that promotes or supports the group. However, Meta states that it allows social and political discourse, news reporting, human rights discussions, and academic or neutral discussions.

During the conflict, Meta announced the deletion of 795,000 posts in Arabic and Hebrew across its platforms within the first three days. It also banned certain hashtags on Instagram, including #طوفان_الأقصى (Al-Aqsa Flood). The company has not provided a clear definition of what constitutes “substantive support” of Hamas, leading to criticism and concerns about biased content moderation.

Users have complained that Meta is suppressing pro-Palestinian voices by reducing the reach of their posts, even if they do not violate platform rules. 

Meta’s content moderation policies are systematically censoring pro-Palestinian content on its social media platforms, New York-based NGO Human Rights Watch (HRW) found.

In a recent report, HRW wrote that Meta has increasingly silenced pro-Palestinian voices on Instagram and Facebook following the October 7th attack by Hamas and the beginning of Israel’s war against the militant group.

The NGO accused Meta of furthering the erasure of Palestinians’ pain, effectively throttling their opportunity to let the world know what’s happening in Gaza.

Ahmed Shafei

With a sizable following on Instagram, Ahmed Shafei has built a reputation for sharing videos that align with current trends and events in his country. However, everything changed when the war in Gaza erupted, and Shafei’s mental health began to suffer, leading him to temporarily close his Instagram account.

 Recognizing his role as an influencer and feeling a responsibility to amplify the voices of Muslims and Egyptians abroad, Shafei decided to reopen his Instagram account. His focus shifted towards publishing content that aimed to defend Gaza and shed light on the situation there.

Shafei soon noticed a disheartening trend: the algorithmic dynamics on Instagram seemed to be working against his efforts. The viewership of his videos, particularly those related to Gaza, began to decline significantly. Determined to find a solution, Shafei experimented by posting content on various topics and inserting stories or reels about Gaza within unrelated videos. However, he discovered that non-Gaza-related videos received significantly higher viewing rates, while those concerning Gaza suffered a decline of over 50%.

Despite the algorithm’s apparent preference for non-Gaza content, Shafei firmly believes that the voices of Egyptians and Arabs advocating for Gaza managed to reach people abroad. He acknowledges the challenges posed by algorithms but remains hopeful that the power of collective voices can transcend these barriers. He said that his viewership is still declining and that he sees no solution for now.

Shafei believes the main reason is that the platforms do not want the truth to be revealed, and that they are afraid of the impact of social media.

He suggested that one possible remedy is to ask people to engage more, through likes or comments, with his reels and stories.

Ahmed Shafei, a 29-year-old Mass Communication graduate from MIU, works as a graphic designer. His educational background and professional experience have equipped him with the skills and knowledge to navigate the digital landscape and communicate his message effectively.

Ahmed Shafei’s experience as an influencer sheds light on the algorithmic challenges faced by those who strive to advocate for causes like Gaza. Despite encountering declining viewership and algorithmic suppression, Shafei’s determination to amplify the voices of the oppressed remains unwavering. His story underscores the ongoing need for dialogue on algorithmic transparency and accountability, ensuring that diverse perspectives can be heard and shared on social media platforms.

Experience on X:

“Twitter has become a space for global conversations, but I’ve noticed a troubling pattern. My tweets about Gaza are consistently flagged and removed by the algorithm, even though I follow the guidelines,” said Sarah Ahmed, an active user of X (previously Twitter).

 “I’m passionate about raising awareness about Gaza, but it feels like my voice is being silenced. My tweets, which include news articles, personal narratives, and calls for action, are constantly targeted by the algorithm,” Ahmed added. 

She noted that while algorithms are meant to maintain a safe environment, they can inadvertently suppress certain voices, saying that Twitter’s algorithm seems to flag and remove her Gaza-related tweets even when they do not violate any guidelines.

Ahmed said it is disheartening to see her efforts to raise awareness of the humanitarian crisis in Gaza hindered by algorithmic biases, with content that sheds light on the situation or calls for justice disproportionately scrutinized or removed.

 “Despite the challenges, I remain committed to sharing my perspective on Gaza. I’ve learned to adapt my messaging and find alternative strategies to ensure my content reaches a wider audience, such as engaging with relevant hashtags and collaborating with like-minded activists,” Ahmed stressed. 

“It’s crucial for social media platforms to acknowledge and rectify algorithmic biases. As we strive for inclusivity and equity in the digital space, the silencing of voices on topics like Gaza must be addressed. We need transparency, fair content moderation, and clearer guidelines to ensure that diverse perspectives can be shared without undue suppression,” Ahmed concluded. 

Walid Gad

Explanation:

Walid Gad, former chairperson of the Chamber of Information Technology and Telecommunications, told Daily News Egypt that social media algorithms operate based on databases or lists of predetermined words.

He explained that if posts containing these words appear, the algorithm automatically deletes them. Gad added that algorithms function through keyword recognition and artificial intelligence, which identify specific words and images related to topics of interest.

Gad said that platforms continuously update their keyword lists, especially during events such as the Gaza conflict, to increase the algorithm’s effectiveness.
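To make Gad’s description concrete, here is a minimal, hypothetical sketch in Python of how such keyword-based moderation could work; the term list and function names are invented for illustration, since platforms do not publish their actual blocklists.

```python
# Hypothetical sketch of keyword-based moderation as Gad describes it:
# the platform keeps an updatable list of flagged terms and
# automatically removes any post that contains one of them.
FLAGGED_TERMS = {"example_term_a", "example_term_b"}  # placeholders

def moderate(post_text: str) -> str:
    """Return 'remove' if the post contains a flagged term, else 'keep'."""
    lowered = post_text.lower()
    if any(term in lowered for term in FLAGGED_TERMS):
        return "remove"
    return "keep"

# During major events the list is simply extended, which is how a
# platform can "update its keywords" without retraining any model.
FLAGGED_TERMS.add("example_term_c")
```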

According to Walid Haggag, an information security expert and member of the Digital and Information Infrastructure Committee of the Supreme Council of Culture in Egypt, content moderation is carried out by sets of software, known as algorithms, built on mathematical procedures that perform specific functions for either the platform or the user, depending on the field in which they are used.

Haggag added in an interview with Daily News Egypt that these functions change from time to time according to the development of algorithms and tools used, including the issue of content recognition.

Walid Haggag

Is there a solution?

In response to these algorithms, Gad explained that individuals can outsmart the system by using alternative words or providing detailed explanations to convey their message. 

“The goal is to introduce a delay in the algorithm’s ability to identify and take action against their posts, giving their content a higher chance of being seen by others,” he said.

Meanwhile, Haggag said that algorithms can be circumvented by adding a kind of adversarial example: an image designed to cause a machine learning model to make a wrong prediction. It is generated from a clean example by adding a small perturbation, imperceptible to humans but significant enough for the model to change its prediction, so that the AI fails to recognise it.
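Haggag’s description matches a well-known class of techniques in machine learning research. The sketch below shows one textbook method, the Fast Gradient Sign Method (FGSM), in Python with PyTorch; it is a general illustration assuming a differentiable image classifier (`model`) and an example input (`image`, `label`), not the specific tooling of the services he refers to.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Create an adversarial copy of `image` using the Fast Gradient
    Sign Method: nudge every pixel slightly in the direction that
    increases the classifier's loss, so the model misreads the image
    while a human sees essentially no difference."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # epsilon controls how large the (imperceptible) perturbation is.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The perturbed image looks unchanged to a person, but a recognition model may assign it a different label, which is the effect Haggag describes.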

He mentioned that many websites currently offer this perturbation service for free, but for video, he thinks sites may charge for it, since video processing is far more demanding and the material runs longer.

“Publications are circumvented by writing names in parts or adding punctuation marks to them such as an asterisk or something else. Some gimmicks work and some do not, but over time these methods will be discovered by the algorithm and dealt with,” Haggag pointed out. 
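These text gimmicks defeat naive filters because an exact-match lookup no longer fires, as the hypothetical example below illustrates; the blocklist term is a placeholder, not an actual flagged word.

```python
# Hypothetical illustration of why simple obfuscation evades naive
# keyword filters: the exact-match check misses altered spellings
# until the blocklist is updated to cover them.
BLOCKLIST = {"keyword"}  # placeholder term; real lists are not public

posts = [
    "this post contains the keyword",   # caught by the filter
    "this post contains the k*yword",   # asterisk trick slips through
    "this post contains the key word",  # split-word trick slips through
]

for post in posts:
    flagged = any(term in post for term in BLOCKLIST)
    print(f"flagged={flagged!s:<5} {post}")
```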

When AI fails to recognize a post that does not comply with platform policies, human moderation staff can detect and act on user reports, as movements linked to Israel have submitted thousands of complaints against pro-Hamas and anti-Israel publications.

“Artificial intelligence learns over time and can recognize and develop itself. You may find posts that have not been deleted, but the platform returns after a while to delete them again,” Haggag concluded. 
