Mariami Atanelishvili *
“When technology is developed with just one perspective,
it’s like looking at the world half-blind” [1]
Women have faced discrimination throughout the centuries, and unfortunately, the 21st century is no exception. Even today, modern digital tools tend to discriminate against women and make gender-biased decisions, placing women in difficult positions. What causes this discrimination? Is the AI algorithm learning the wrong patterns and effectively filtering out women? Is AI shaped primarily by men, or is it being taught through male-dominated data? If so, how can this be avoided?
There is no doubt that technology has had a significant positive impact on society; however, it also presents challenges, as technological tools remain far from flawless. It is important to remember that AI is designed and trained by humans, and, like humans, it can make mistakes and reach subjective decisions. Consequently, its actions and outputs are inevitably influenced by human biases.
AI systems are often trained on historical data, which may contain existing biases and discriminatory patterns. When such data is used, AI can cause harm at three levels: first, individual harm occurs when a person is directly affected by an AI system’s biased decision; second, collective harm arises when groups of individuals are systematically excluded from certain opportunities due to biased or misleading data; and third, social harm affects society as a whole, as everyone has an interest in living in a society that does not discriminate and treats all individuals equally.[2] (Unia)
It has been noted that AI systems may fail to properly evaluate an individual’s performance based on their actual work and may instead rely on patterns that reflect historical trends.[3] (Ovchinnikov, 2020) This means that AI may undermine individuals simply because they do not fit the profile it was previously given. The problem becomes especially visible in hiring, where the algorithms used are often trained on heavily biased data: if the dataset is largely made up of past male applicants who were hired, the algorithm will learn to favor similar profiles, ultimately reproducing and reinforcing existing gender disparities. Even if two candidates are otherwise identical, AI may favor the male candidate simply because that pattern reflects what it has learned from previous data.[4] (Nuseir et al., 2021)
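To make this mechanism concrete, the sketch below trains a simple classifier on synthetic “historical” hiring data in which male applicants were hired more often. It is only a minimal illustration (written in Python with the scikit-learn library, on invented data, and not modelled on any real recruiting system), but it shows how the gender field alone can shift the score given to two otherwise identical candidates.

```python
# Minimal illustration (synthetic data, not any company's real system): a classifier
# trained on historical hiring decisions in which men were hired more often learns to
# use gender as a predictive signal, even when candidates' scores are identical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic "historical" data: gender (1 = male, 0 = female) and a skill score.
gender = rng.integers(0, 2, size=n)
skill = rng.normal(loc=0.0, scale=1.0, size=n)

# Past hiring decisions were biased: male applicants received an extra boost,
# so the labels encode the historical bias rather than skill alone.
hired = (skill + 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0.5

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two otherwise identical candidates, differing only in the gender field.
male_candidate = [[1, 1.0]]
female_candidate = [[0, 1.0]]
print("P(hire | male, skill=1.0)   =", model.predict_proba(male_candidate)[0, 1])
print("P(hire | female, skill=1.0) =", model.predict_proba(female_candidate)[0, 1])
# The male candidate receives a noticeably higher score, because the model has
# reproduced the pattern present in the biased training labels.
```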
The same pattern was seen at Amazon, where the company’s automated hiring algorithm inherited gender biases from its training data, prompting an internal audit and ultimately leading to the system’s discontinuation.[5] (Dastin, 2018).
A further structural concern arises when AI discriminates against women on the basis of age, resulting in older women being disproportionately harmed by such systems.
Substantial evidence demonstrates that older women experience intersecting forms of bias based on both gender and age.[6] Policy reports, media coverage and workplace interviews indicate that older women experience discrimination in hiring and promotion across various industries, a phenomenon commonly referred to as “gendered ageism.”[7] This reflects a broader statistical bias that associates women with expectations of youthfulness. Across contexts ranging from entertainment media to the workplace, women are subject to persistent pressure to appear young, resulting in a “beauty tax” that entails significant financial and time costs.[8] This bias is also reflected in everyday language. Women in both academia and industry are more likely than men to be described using infantilizing terms, such as “girls.”[9] These patterns indicate that age-related gender expectations may constitute a pervasive, culture-wide statistical bias that shapes perceptions of individuals across society.[10]
In the educational field, AI tools may also favor male applicants over female ones when reviewing university applications. This can happen because AI predictions often label women as having a higher dropout risk, and since a university’s reputation is closely tied to its dropout rates, institutions may try to minimise risk by offering female applicants fewer places.[11] (Ho et al., 2025) Hence, AI not only generates misleading data on gender-specific issues, but also jeopardizes future opportunities for women, because it relies on general patterns rather than a meticulous understanding of the actual situation.
In healthcare systems, AI can also downplay women’s needs. Research by the London School of Economics and Political Science (LSE) revealed that when Google’s AI tool “Gemma” was used to generate and summarise identical case notes, terms such as “disabled,” “unable,” and “complex” appeared significantly more frequently in descriptions of men than of women; the same study found that women with comparable care needs were more likely to have those needs downplayed or omitted altogether.[12] Dr Sam Rickman, the report’s lead author and a researcher at LSE’s Care Policy and Evaluation Centre, warned that the use of AI could lead to unequal care provision for women: “We know these models are being used very widely and what’s concerning is that we found very meaningful differences between measures of bias in different models,” he said. “Google’s model, in particular, downplays women’s physical and mental health needs in comparison to men’s. And because the amount of care you get is determined on the basis of perceived need, this could result in women receiving less care if biased models are used in practice. But we don’t actually know which models are being used at the moment.” (The Guardian, 2025)[13]
The problem does not end here: if you ask ChatGPT to provide examples of strong and courageous role models, it will mostly respond with male figures, because these traits are stereotypically associated with men. In contrast, emotional or sensitive qualities are often linked to female leaders, reflecting long-standing gender stereotypes.[14] (Newstead et al., 2023) This problem stems directly from the fact that AI systems learn from human-generated content – content that already reflects society’s biases.
What is the actual reason behind such an unequal ecosystem even within the AI sphere? Why does AI make unequal decisions based on gender?
An alarming challenge that contributes to the creation of biased AI mechanisms is the lack of diversity within development teams. In 2018, just 10–15% of AI developers and designers at major technology firms were women (Whittaker et al., 2018, pp. 1–62); in 2019, women held only 26% of data and AI roles across the workforce (World Economic Forum, 2019). This gender imbalance has persisted, as evidenced by the 2022 Stack Overflow Developer Survey, in which 93% of professional developers identified as male (Stack Overflow, 2022).
According to the above-mentioned studies, many AI teams are predominantly male, and this imbalance influences the analytical approaches, priorities and decisions embedded in AI models. As a result, AI systems developed in such environments may replicate existing gender differences in decision-making, reinforcing gender bias and overlooking perspectives that are essential for fair and inclusive outcomes.[15] (Ho et al., 2025)
What should be done to prevent this?
First and foremost, gender equality in the digital space begins with gender equality in real life. If equality is not achieved in society itself, it cannot be fully realised online or in digital environments. This means that gender equality must be strengthened in the workplace: in large tech-development companies, the environment should be more gender-inclusive, not only for the sake of gender equality itself, but also for the diverse perspectives it brings. When women developers are included, they contribute valuable viewpoints that help create more balanced, fair and neutral outcomes. Second, AI systems require constant monitoring, improvement and oversight. Developers must watch systems closely both when creating new ones and when updating existing ones, so that if an AI model begins to produce gender-discriminatory content, the problem can be identified immediately and addressed without delay.
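What such routine oversight might look like in practice can be sketched very simply. The example below is a hypothetical check, with an invented decision log and an arbitrarily chosen threshold rather than a prescribed auditing standard: it compares the rate of positive decisions a system produces for different gender groups and flags the model for human review when the gap becomes too large.

```python
# Illustrative oversight check (assumed workflow, not a prescribed standard):
# compare a model's positive-decision rates across gender groups and flag the
# system for review when the gap exceeds a chosen threshold.
from typing import Sequence

def selection_rate(decisions: Sequence[int]) -> float:
    """Share of positive decisions (1 = accept/hire, 0 = reject)."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def demographic_parity_gap(decisions: Sequence[int], groups: Sequence[str]) -> float:
    """Absolute difference in selection rates between the groups present."""
    rates = {}
    for g in set(groups):
        rates[g] = selection_rate([d for d, grp in zip(decisions, groups) if grp == g])
    return max(rates.values()) - min(rates.values())

# Hypothetical batch of model decisions logged during operation.
decisions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["m", "m", "m", "m", "m", "m", "f", "f", "f", "f", "f", "f"]

gap = demographic_parity_gap(decisions, groups)
THRESHOLD = 0.2  # example tolerance; the appropriate value is context-specific
if gap > THRESHOLD:
    print(f"Warning: selection-rate gap of {gap:.2f} exceeds {THRESHOLD} - review the model.")
```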
Recognizing that gender bias in AI systems can lead to discriminatory outcomes, the Council of the European Union calls on member states to address this issue by employing clear, representative and high-quality datasets, implementing human oversight and ensuring that AI systems comply with non-discrimination regulations and sector-specific AI legislation.[16] (Council of the EU)
The EU has implemented several regulations in the areas of digitalization and gender equality that form the basis of the current conclusions. Regulation 2024/1689, known as the ‘AI Act,’ represents the first-ever comprehensive legal framework governing artificial intelligence; additionally, Regulation 2022/2065, the ‘Digital Services Act,’ was enacted to create a safer online environment for EU users by addressing illegal content and promoting transparency.[17] (Council of the EU)
Conclusion
If the future is inevitably tied to AI technologies, then humans have a responsibility to ensure gender neutrality – both online and offline – so that historical biases are not carried into digital systems. This means creating real-world environments and digital platforms that actively challenge long-standing gender stereotypes instead of reinforcing them. Big tech companies must ensure their digital ecosystems are gender-friendly, while Member States should play a crucial role in establishing and enforcing regulations that promote gender neutrality and prevent the persistence of historical biases. Only through coordinated efforts can future AI tools remain fair, impartial and inclusive.
Bibliography
1. UN Women, https://www.unwomen.org/en/articles/explainer/artificial-intelligence-and-gender-equality
2. Unia, https://www.unia.be/en/dossiers/artificial-intelligence-discrimination
3. A. Ovchinnikov, Ethics and AI: the 2020 International Baccalaureate grading scandal (2020).
4. M.T. Nuseir, Al Kurdi, M.T. Alshurideh & H.M. Alzoubi, Gender discrimination at workplace: do artificial intelligence (AI) and machine learning (ML) have opinions about it (2021).
5. J. Dastin, Insight – Amazon scraps secret AI recruiting tool that showed bias against women, Reuters (2018).
6. Guilbeault, D., Delecourt, S. & Desikan, B. S. Age and gender distortion in online media and large language models. Nature 646, 1129–1137 (2025). https://doi.org/10.1038/s41586-025-09581-z
7. Itzin, C. & Phillipson, C. in Gender, Culture and Organizational Change (eds Itzin, C. & Newman, J.) 84–95 (Routledge, 1995). / Spedale, S., Coupland, C. & Tempest, S. Gendered ageism and organizational routines at work: the case of day-parting in television broadcasting. Organ. Stud. 35, 1585–1604 (2014).
8. Ramati-Ziber, L., Shnabel, N. & Glick, P. The beauty myth: prescriptive beauty norms for women reflect hierarchy-enhancing motivations leading to discriminatory employment practices. J. Pers. Soc. Psychol. 119, 317–343 (2020).
9. MacArthur, H. J., Cundiff, J. L. & Mehl, M. R. Estimating the prevalence of gender-biased language in undergraduates’ everyday speech. Sex Roles 82, 81–93 (2020). / Miller, K. L. I’m a manager, but to my boss and colleagues, I’m a ‘girl’. Washington Post, www.washingtonpost.com/business/economy/im-a-manager-but-to-my-boss-and-colleagues-im-a-girl/2019/05/10/d18fb3ea-71d0-11e9-9eb4-0828f5389013_story.html (2019).
10. Ridgeway, C. L. in Framed by Gender: How Gender Inequality Persists in the Modern World (ed. Ridgeway, C. L.) 32–55 (Oxford Univ. Press, 2011). / Martin, A. E., Guevara Beltran, D., Koster, J. & Tracy, J. L. Is gender primacy universal? Proc. Natl Acad. Sci. USA 121, e2401919121 (2024).
11. Ho, J. Q. H., Hartanto, A., Koh, A. & Majeed, N. M. Gender biases within Artificial Intelligence and ChatGPT (2025), https://www.sciencedirect.com/science/article/pii/S2949882125000295#bib158
12. The Guardian (2025), https://www.theguardian.com/technology/2025/aug/11/ai-tools-used-by-english-councils-downplay-womens-health-issues-study-finds
13. The Guardian (2025), https://www.theguardian.com/technology/2025/aug/11/ai-tools-used-by-english-councils-downplay-womens-health-issues-study-finds
14. T. Newstead, B. Eager & S. Wilson, How AI can perpetuate – or help mitigate – gender bias in leadership (2023).
15. Ho, J. Q. H., Hartanto, A., Koh, A. & Majeed, N. M. Gender biases within Artificial Intelligence and ChatGPT (2025), https://www.sciencedirect.com/science/article/pii/S2949882125000295#bib158
16. Council of the European Union, https://www.consilium.europa.eu/en/press/press-releases/2025/06/19/council-calls-for-targeted-efforts-to-advance-gender-equality-in-the-ai-driven-digital-age/
17. Council of the European Union, https://www.consilium.europa.eu/en/press/press-releases/2025/06/19/council-calls-for-targeted-efforts-to-advance-gender-equality-in-the-ai-driven-digital-age/