Introduction

As the world undergoes rapid technological transformations, artificial intelligence (AI) has emerged as a central focus in discussions about the future. While AI holds the promise of revolutionizing various sectors, its applications raise profound questions, particularly in the Middle East and North Africa (MENA), where these technologies can act as a “double-edged sword”.

AI has the potential to facilitate positive transformation by promoting transparency, good governance, and social justice. At the same time, it poses significant risks, including enabling oppression, surveillance, and the restriction of human rights. In November 2024, under the Chatham House Rule, the Democracy Matters Initiative (DMI) held its second private dialogue, Pushing Back Against Anti-Democratic Actors and Narratives. The discussion brought DMI Advisory Group members together with prominent experts in tech, AI, human rights, and democracy from the region to analyze how authoritarian MENA regimes use technology and AI to propagate anti-democracy narratives, leveraging domestic and international tools, including influence from global actors. (For more information, see DMI’s first private dialogue, Making the Case for Democracy and Human Rights.)


Background and Context

1. The Impact of AI in the MENA Region

Speakers at the meeting highlighted the MENA region as a vital case study for understanding AI’s impacts. On one hand, countries like Saudi Arabia and the UAE are investing heavily in AI technologies, leveraging them to enhance public services and security. On the other hand, these technologies are often weaponized to surveil citizens and suppress freedom of expression.

Morocco and Tunisia

During the COVID-19 pandemic, Morocco and Tunisia implemented facial recognition systems as part of their public health response. These systems were initially deployed to enforce compliance with measures such as mask-wearing, social distancing, and movement restrictions. In Morocco, authorities developed a mobile application to track individuals who were exposed to the virus. In Tunisia, similar technologies were utilized to monitor public spaces and ensure adherence to government-imposed health regulations.

Once the public health crisis subsided, however, these technologies were redirected for other purposes. Reports suggest that facial recognition and surveillance systems were repurposed to monitor political protests, track dissidents, and suppress opposition movements. In Tunisia, participants highlighted concerns that these technologies, developed ostensibly for public health, became tools for controlling freedom of assembly and expression, particularly during waves of civil unrest. Similarly, in Morocco, the use of these tools extended to identifying and monitoring activists participating in peaceful protests.

This repurposing of surveillance tools underscores the dual-use nature of AI technologies in the region. What begins as a public health initiative can easily transition into mechanisms of authoritarian control, eroding public trust and raising serious questions about the governance of such technologies.

Palestine

In the occupied Palestinian territories, Israel employs advanced AI systems for extensive surveillance and control over Palestinians. Experts emphasized the role of biometric data collection as a cornerstone of this strategy. One of the most egregious examples is the collection of iris scans and other biometric data from over 2 million Palestinians in Gaza and the West Bank. This data is then used to classify individuals and predict their movements, affiliations, and behaviors.

Experts detailed how these AI-driven systems are integrated with tools such as facial recognition cameras placed at checkpoints, drones equipped with surveillance capabilities, and predictive algorithms. These technologies have facilitated arbitrary arrests, extrajudicial killings, and collective punishment. For instance, AI systems are used to flag individuals as “high risk” based on patterns in their biometric data, often leading to their detention without due process. In some cases, participants shared how this technology targets not only individuals but entire families, extending the scope of surveillance and punishment to collective communities.

In addition to direct surveillance, AI is also used to determine targets for airstrikes and other military operations. Systems such as “Where’s Daddy?” have reportedly analyzed large datasets to predict the presence of specific individuals or groups in civilian areas. These predictive tools, combined with facial recognition and biometric tracking, allow for precise targeting, resulting in devastating outcomes during military campaigns in Gaza.

Another dimension is the use of AI for systematic control and social engineering. The data collected is utilized to maintain a system of segregation and discrimination, with participants describing Gaza and the West Bank as “experimental labs” for the development of surveillance and military AI. This systemic use of AI to enforce occupation, restrict movement, and subjugate an entire population represents a grave violation of international human rights norms.

2. Political and Social Challenges in the Region

  • Political systems in the region often lack robust regulatory frameworks for overseeing AI.
  • There is a notable absence of laws protecting privacy and data, leaving these technologies open to unethical use.
  • Governments exploit AI to suppress popular mobilization, with participants noting that AI has become a tool to reinforce authoritarian control over societies.


Key Discussion Points

1. AI as an Authoritarian Tool

Mass Surveillance

  • Biometric Data Collection: AI technologies are increasingly capable of collecting and analyzing personal and biometric data on a massive scale, and such technologies are being implemented across the Middle East. In Gulf states like Saudi Arabia and the UAE, for example, biometric data collection is increasingly used in public spaces, such as airports, to monitor and control citizens’ movements.
  • Privacy Restrictions: Governments in the region employ biometric data, such as fingerprints and facial scans, for espionage and covert surveillance. Data collected through these means is often stored indefinitely and can be accessed by state intelligence agencies to track political dissidents, journalists, and human rights activists. For instance, reports highlight how activists in the UAE have been targeted using AI-powered spyware that analyzes biometric and digital data to predict their next moves.
  • Broader Implications of Mass Surveillance: The widespread deployment of AI for mass surveillance presents profound and far-reaching implications. One significant concern is the normalization of surveillance, where citizens gradually acclimate to constant monitoring, perceiving it as an inescapable reality. This desensitization erodes public resistance and weakens demands for transparency and accountability. Furthermore, the weaponization of biometric data poses a grave threat, as governments can exploit this information to manipulate public narratives, fabricate incriminating evidence, or conduct targeted campaigns against dissenting voices. Compounding these risks is the pervasive lack of accountability; many AI surveillance systems operate under a veil of secrecy, circumventing public scrutiny and limiting opportunities for legal recourse or regulatory oversight.

Manipulation of Information

  • Digital Bots: The use of AI-powered bots to manipulate public discourse was a recurring theme in the discussions. In Saudi Arabia, for instance, government-affiliated bots flood social media platforms with pro-regime content, creating an illusion of widespread support. These bots amplify state narratives, drown out dissenting voices, and make it difficult for ordinary citizens to access unbiased information. Participants noted that such manipulation not only limits freedom of expression but also distorts public perception, making it challenging for grassroots movements to gain traction. (A simplified illustration of how such coordinated amplification can be detected follows this list.)
  • Biased Algorithms: AI algorithms, often opaque and unregulated, are increasingly used to control the content presented to users. Governments leverage these algorithms to prioritize state-friendly content and suppress critical voices. For example, in certain regions, algorithmic moderation on platforms run by Meta and X disproportionately censors content from activists and minority groups. Participants pointed out that these biases are not always deliberate but arise from the training data fed into the algorithms, which often reflect existing societal and political prejudices. This exacerbates the marginalization of vulnerable communities and undermines democratic discourse.
  • Information Ecosystem Control: In addition to shaping individual perceptions, governments use AI to dominate the broader information ecosystem. This includes automated flagging systems that identify and remove content deemed politically sensitive. Such systems can be weaponized to silence journalists, human rights defenders, and opposition figures. In some cases, the manipulation of information extends to altering historical narratives, where AI is used to create or suppress digital records to serve authoritarian agendas.
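
To make the amplification dynamic concrete, the sketch below shows one common way researchers flag suspected coordinated bot activity: grouping near-duplicate posts and checking whether many distinct accounts published them within a short burst. This is a deliberately minimal illustration, not any participant’s actual tool; the sample posts, account names, and thresholds are invented for the example.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sample of posts: (account, text, timestamp).
# A real analysis would pull thousands of posts from a platform API.
posts = [
    ("user_a", "The government is doing great work!", "2024-11-01T10:00"),
    ("user_b", "The government is doing GREAT work",  "2024-11-01T10:02"),
    ("user_c", "the government is doing great work.", "2024-11-01T10:03"),
    ("user_d", "Traffic was terrible this morning.",  "2024-11-01T10:05"),
]

def normalize(text: str) -> str:
    """Collapse case and punctuation so near-duplicates group together."""
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace()).strip()

# Group posts by their normalized text.
groups: dict[str, list[tuple[str, datetime]]] = defaultdict(list)
for account, text, ts in posts:
    groups[normalize(text)].append((account, datetime.fromisoformat(ts)))

# Flag a message as suspected coordinated amplification when many distinct
# accounts post it within a short window (thresholds are illustrative).
MIN_ACCOUNTS = 3
MAX_BURST_MINUTES = 10

for message, postings in groups.items():
    accounts = {a for a, _ in postings}
    times = sorted(t for _, t in postings)
    burst = (times[-1] - times[0]).total_seconds() / 60
    if len(accounts) >= MIN_ACCOUNTS and burst <= MAX_BURST_MINUTES:
        print(f"Suspected amplification ({len(accounts)} accounts, "
              f"{burst:.0f} min): {message!r}")
```

Real bot networks vary wording to evade exactly this kind of matching, so production systems add fuzzier similarity measures and account-behavior features; the point here is only that coordination leaves measurable traces.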

Militarization of AI

  • Modern Warfare: AI’s integration into military operations has redefined modern warfare, particularly in conflict zones like Gaza. Systems like Israel’s “Where’s Daddy?” analyze vast amounts of data to predict the locations of high-value targets, increasing the precision of airstrikes. While these systems are touted for minimizing collateral damage, participants argued that they often lead to indiscriminate violence, as the underlying data and algorithms are not immune to biases or errors. The shift from human decision-making to AI-powered targeting raises ethical concerns about accountability in warfare, particularly when civilian casualties occur.
  • Surveillance in War Zones: In addition to direct military applications, AI is used for extensive surveillance in war zones. Drones equipped with facial recognition technology and advanced sensors collect data on civilian movements, further intensifying the militarization of daily life in areas like Gaza and the West Bank. Participants described how these systems strip individuals of their humanity, reducing them to data points in algorithms that determine their access to basic rights and freedoms.
  • Refugee Management: AI’s role in refugee management is another alarming development. In countries like Lebanon and Jordan, AI systems classify refugees based on their sectarian and social backgrounds, often leading to discriminatory practices. These classifications are used to determine access to resources, residency rights, and even the possibility of relocation. Participants highlighted that such systems perpetuate existing inequalities and deepen societal divides, particularly in regions where refugees already face significant challenges.
  • Automated Targeting: Participants also drew attention to the use of AI for automated targeting in military actions. Advanced AI systems analyze real-time data from multiple sources, including social media, satellite imagery, and biometric databases, to identify potential threats. However, this automation often lacks transparency and accountability, leading to mistakes with devastating consequences. In some instances, individuals have been targeted based on flawed AI predictions, resulting in wrongful deaths and destruction.


2. AI as a Tool for Strengthening Democracy

Protecting Digital Rights

  • Nuha Project: The “Nuha Project” represents a groundbreaking example of using AI to protect digital rights, specifically targeting the rampant issue of online abuse against women. This initiative develops AI tools capable of detecting and analyzing abusive digital content in local dialects, which is particularly challenging due to the linguistic diversity in the region. Starting in Jordan and Iraq, the project trains its algorithms to understand nuanced local expressions and cultural contexts to identify and flag abusive or harmful language. Participants highlighted that this approach is not only critical for protecting women online but also for creating a safer digital environment where marginalized voices can participate without fear of harassment. Plans to expand the project to other countries, such as Egypt, demonstrate its scalability and potential to transform digital rights protection across the Middle East. (A simplified sketch of this kind of dialect-aware classifier follows this list.)
  • Automating Transparency: In the European Union, AI tools are being utilized to promote transparency in governance. These systems monitor parliamentary proceedings, taking detailed notes and analyzing data to ensure that the legislative process is accessible and accountable to the public. By automating routine but essential tasks such as transcription, documentation, and data analysis, these tools free up resources and reduce human error. Participants pointed out that such systems could be adapted for use in other regions, including the Middle East, to bring greater transparency to governance structures. For example, AI could monitor government procurement processes, ensuring that contracts are awarded fairly and without corruption, thus fostering trust between citizens and their governments.
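
As a rough illustration of the Nuha-style approach described above, the sketch below trains a tiny abusive-language classifier. Character n-gram features are one common way to cope with the spelling variation of non-standardized dialects. Everything here is hypothetical: the toy examples are in English, the labels are invented, and a real system would rely on large dialect corpora annotated by native speakers rather than this minimal pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set; a real tool would be trained on a large
# corpus of dialectal text annotated by native speakers.
texts = [
    "you are brilliant, thank you for sharing",    # benign
    "great thread, learned a lot",                 # benign
    "shut up, nobody wants to hear from a woman",  # abusive
    "go back to the kitchen you idiot",            # abusive
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = abusive

# Character n-grams tolerate the spelling variation typical of
# non-standardized dialects better than whole-word features.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)

print(model.predict(["nobody wants your opinion, idiot"]))  # likely [1]
```

Production systems in this space typically replace the linear model with fine-tuned transformer models and add human review, since false positives can themselves silence the voices the tool is meant to protect.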

Increasing Civic Participation

  • Corruption Analysis — AI for Uncovering Malpractice: AI systems are increasingly being used to analyze patterns in big data to uncover corruption. These tools can sift through vast amounts of information—such as financial transactions, government contracts, and public expenditure records—to identify irregularities and potential instances of corruption. Participants cited examples where AI algorithms have been successfully deployed to detect fraudulent activity and misappropriation of funds, holding corrupt officials accountable. In regions like the Middle East, where systemic corruption often undermines democratic governance, such tools could be transformative. They could empower investigative journalists, civil society organizations, and regulatory bodies to act as watchdogs, enhancing accountability and reducing impunity. (A minimal sketch of this kind of anomaly detection follows this list.)
  • Improving Resource Distribution — AI for Social Justice: Algorithms are also being used to promote social justice by optimizing the allocation of public resources. In Taiwan, for instance, AI tools have been employed in environmental projects to determine the most effective use of limited resources. These algorithms analyze data such as pollution levels, resource availability, and community needs to ensure equitable distribution. Participants noted that similar tools could be adapted for use in the Middle East, where economic inequality and resource mismanagement often exacerbate social tensions. By making resource distribution more fair and efficient, AI could help address long-standing issues such as unequal access to healthcare, education, and infrastructure.
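
The sketch below illustrates, on invented data, how the unsupervised anomaly detection mentioned above might surface procurement records worth a closer look. The features (contract value, bid count, days from tender to award) and parameters are assumptions for the example; flagged rows are leads for investigation, not findings of corruption.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical procurement records: [contract value (thousands USD),
# number of competing bids, days between tender notice and award].
contracts = np.array([
    [120, 5, 45], [95, 4, 38], [110, 6, 50], [130, 5, 42],
    [105, 4, 47], [980, 1, 3],   # single-bid, rushed, unusually large
])

# Isolation forests isolate outliers without labeled fraud cases,
# which are rarely available in practice.
detector = IsolationForest(contamination=0.15, random_state=0)
flags = detector.fit_predict(contracts)  # -1 marks anomalous rows

for row, flag in zip(contracts, flags):
    if flag == -1:
        print(f"Review contract: value={row[0]}k, bids={row[1]}, days={row[2]}")
```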

Awareness and Capacity Building

  • Technical Education — Bridging the Knowledge Gap: One of the key themes of the discussion was the importance of educating communities about AI to ensure its responsible use. Participants stressed the need for both formal education programs and grassroots initiatives to demystify AI technologies. For instance, introducing AI literacy into school curriculums could help younger generations understand its potential benefits and risks. Grassroots initiatives, on the other hand, could target vulnerable groups and activists, providing them with the knowledge and tools needed to protect their digital rights. Participants argued that this education is not just about understanding AI but also about fostering a culture of critical thinking, enabling communities to question and hold accountable those who misuse the technology.
  • Empowering Activists — Training for Effective AI Use: Training programs aimed at human rights workers and activists were highlighted as essential for strengthening democracy. These programs would equip participants with the skills needed to use AI tools effectively, enhancing their ability to monitor human rights abuses, document evidence, and advocate for change. For example, activists could use AI-powered data analysis tools to identify patterns of state repression or to amplify marginalized voices through social media. Participants also emphasized the need to ensure that these training programs are inclusive and accessible, particularly for women and minority groups, who are often at the forefront of social justice movements but lack access to technical resources.


3. Legal and Regulatory Challenges

Legislative Gaps

  • Unclear and Outdated Legal Frameworks: Participants highlighted the significant absence of comprehensive laws and regulations governing AI in most Middle Eastern and North African countries. In these nations, existing legal frameworks often fail to address the rapid advancements in AI technology, creating substantial gaps that can be exploited. For instance, in Tunisia, laws regulating data privacy do not distinguish between biometric data (such as facial recognition or iris scans) and personal data (such as email addresses or phone numbers). This lack of specificity allows both governments and private entities to misuse biometric data for surveillance and control without facing legal consequences.
  • Lack of Data Protection Legislation: While some countries in the region, like Morocco, Lebanon, and Tunisia, have enacted data protection laws, participants noted that these are often outdated, incomplete, or riddled with loopholes. For example, the failure to regulate the collection and processing of biometric data means that sensitive personal information can be gathered and stored without individuals’ consent. This gap not only enables the misuse of AI technologies for mass surveillance but also undermines citizens’ privacy and trust in governmental and corporate institutions.
  • Overemphasis on Economic Goals: Participants also pointed out that in some cases, laws related to AI and data protection are driven more by economic considerations than by a commitment to human rights. For example, in Jordan, Microsoft was one of many companies that played a significant role in drafting a data protection law that prioritizes enabling big tech companies to capitalize on data-driven business opportunities. This economic-first approach often neglects critical safeguards for individual privacy and leaves citizens vulnerable to exploitation.
  • Fragmented Regional Standards: The absence of unified standards across the region further exacerbates these challenges. Participants emphasized that the lack of harmonized legal and regulatory frameworks creates opportunities for authoritarian regimes to exploit regulatory inconsistencies. For instance, companies or governments can choose to operate in countries with the least restrictive laws, further undermining accountability and human rights protections.

Responsibility of Major Tech Companies

  • Complicity in Authoritarian Practices: The role of major technology companies like Microsoft, Google, and others in enabling authoritarian practices was a recurring theme in the discussion. Participants pointed out that these companies often provide advanced AI tools and platforms to MENA governments without implementing adequate oversight or safeguards. For example, these companies offer facial recognition technology, predictive analytics, and surveillance software to regimes known for their authoritarian tendencies, effectively enabling mass surveillance and repression.
  • Exporting Biased and Unregulated Technologies: Participants emphasized that many AI tools developed by global tech giants are inherently biased due to the nature of their training data, which often reflects societal inequalities or prejudices. When these tools are exported to authoritarian regimes, the bias is compounded, resulting in the disproportionate targeting of marginalized communities, such as activists, minorities, and women. For instance, facial recognition systems used in the region have been shown to have higher error rates for individuals with darker skin tones, increasing the likelihood of wrongful arrests or detentions.
  • Lack of Accountability: Major tech companies often operate without sufficient transparency regarding how their AI tools are deployed by governments. Participants expressed concern that these companies do not adequately monitor or limit how their technologies are used once sold. For example, in the context of Israel and Palestine, some American technology companies were found to provide tools that facilitated mass surveillance and military targeting. However, there is little evidence to suggest that these companies take steps to ensure their products are not used to violate human rights.
  • Economic Power and Influence: Participants also discussed how the economic clout of these companies gives them significant leverage in shaping policies and laws to suit their interests. For instance, Microsoft and Google have actively lobbied governments in the region to pass legislation favorable to their operations, often at the expense of robust human rights protections. This raises serious ethical questions about the responsibility of these corporations in contexts where their technologies contribute to repression.
  • Need for Due Diligence: Experts underscored the lack of due diligence by major tech companies when entering markets in the region. They affirmed the need for stricter enforcement of international human rights guidelines, such as the United Nations’ Guiding Principles on Business and Human Rights, which outline the corporate responsibility to prevent and address adverse human rights impacts.

Recommendations

1. Developing Legal and Regulatory Frameworks

  • National Legislation for Data Protection: Governments across the region must draft and implement comprehensive national legislation to regulate AI and data protection. This should include clear definitions of personal and biometric data, along with strict guidelines for its collection, storage, and use. Such laws should ensure transparency in how governments and private organizations handle citizen data. These regulations must also include penalties for misuse and mechanisms for redress in cases of rights violations.
  • Preventing AI Misuse: Laws should specifically target the misuse of AI for purposes like mass surveillance, political manipulation, and discrimination. For instance, restrictions could be placed on the use of facial recognition and predictive analytics in contexts that could harm marginalized communities. These laws should be complemented by robust oversight mechanisms, such as independent regulators or ombudsmen, to monitor compliance and enforce ethical practices.
  • International Collaboration for Ethical AI Standards: Recognizing the transnational nature of AI technologies, it is crucial to foster international collaboration to establish clear, enforceable standards for the ethical use and export of AI. This could include agreements on banning the sale of certain AI tools, like predictive surveillance systems, to authoritarian regimes. Multilateral frameworks, possibly under the United Nations or other international bodies, could ensure that AI technologies align with universal human rights principles.

2. Empowering Civil Society

  • Supporting Regional and International Organizations: Civil society organizations (CSOs) play a critical role in raising awareness about the ethical implications of AI. Governments, international donors, and private stakeholders should provide financial and technical support to CSOs engaged in digital rights advocacy. This support should also aim to foster collaboration among organizations in different countries to share resources, strategies, and best practices.
  • Equipping Activists with AI Tools: To counter authoritarian misuse of AI, activists and human rights defenders need access to the same advanced tools. For example, tools that analyze social media data could help monitor and document human rights abuses, while others could track corruption or environmental damage. Training activists to use these tools effectively would not only level the playing field but also empower them to hold governments and corporations accountable. (A minimal sketch of one such monitoring tool follows this list.)
  • Promoting Grassroots Advocacy: Civil society must also mobilize grassroots communities to demand greater transparency and accountability in AI deployment. This can include public campaigns to raise awareness about how AI is being misused and advocating for stronger legal protections for citizens.
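
As a minimal example of the kind of monitoring tool referred to above, the sketch below aggregates a hypothetical incident log by month, region, and type so that spikes in repression become visible and citable. The dataset, column names, and categories are invented; a real monitor would work from verified field reports or systematically collected public posts.

```python
import pandas as pd

# Hypothetical incident log compiled by a human rights monitor.
incidents = pd.DataFrame({
    "date":   ["2024-09-01", "2024-09-03", "2024-09-03", "2024-10-12"],
    "region": ["Capital", "Capital", "North", "Capital"],
    "type":   ["arrest", "arrest", "internet shutdown", "arrest"],
})
incidents["date"] = pd.to_datetime(incidents["date"])

# Monthly counts by region and incident type surface patterns that
# individual reports obscure, e.g. a wave of arrests in one city.
summary = (
    incidents
    .groupby([incidents["date"].dt.to_period("M"), "region", "type"])
    .size()
    .rename("count")
    .reset_index()
)
print(summary)
```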

3. Promoting Regional Research

  • Investment in Local AI Projects: Governments, academic institutions, and private entities should invest in locally developed AI solutions that prioritize social justice, equity, and inclusivity. For example, AI projects could focus on improving public health, enhancing education systems, or ensuring equitable resource distribution in underprivileged communities. These projects could also create job opportunities and build local expertise in AI.
  • Ethical and Political Research: Funding for academic studies on AI’s ethical and political implications is critical. Such research could explore the biases inherent in AI algorithms, their societal impacts, and ways to mitigate harm. Collaboration between universities, think tanks, and international organizations could provide a deeper understanding of how AI technologies intersect with issues like authoritarianism, surveillance, and human rights.
  • Establishing Regional Research Hubs: To encourage collaboration and innovation, regional hubs for AI research could be established. These hubs could bring together experts from diverse fields—such as law, ethics, computer science, and human rights—to work on interdisciplinary projects that address the region’s specific challenges.

4. Awareness and Training

  • Training Programs for Key Stakeholders: Targeted training programs should be developed for activists, journalists, lawyers, and civil society organizations to equip them with the knowledge and skills needed to understand and leverage AI technologies. These programs could cover topics such as how to detect and combat algorithmic bias, use AI for monitoring government actions, investigate war crimes, and understand the ethical dimensions of AI.
  • Raising Awareness About AI Risks: Awareness campaigns should focus on educating the general public about the risks and potential harms of unregulated AI applications. This includes highlighting cases where AI has been misused, such as in mass surveillance or political manipulation, and explaining how individuals can protect their digital rights. Engaging content, such as videos, infographics, and workshops, could be used to make these campaigns more accessible.