Researcher Yasaman Yousefi deals with the question of how algorithmic systems and artificial intelligence can be made fairer. Next week she will be a guest at Erwuessebildung in Luxembourg. woxx spoke to her about the AI hype.
This interview is also available in German. Follow this link for more information about the event Algorithmic Discrimination: Reasons & Consequences (29.04.2025).

Yasaman Yousefi researches fairness in algorithmic decision-making. (Photo: private)
What is algorithmic discrimination?
Algorithmic discrimination is a phenomenon that occurs when automated systems treat people unfairly because of certain characteristics they might have. Algorithms make correlations based on those characteristics. These could, for example, be the race, gender, or socioeconomic status of a person, or even correlated categories, like “sad teenagers” or “dog owners”. This is what algorithms do: they find common points and group characteristics. That kind of grouping, which is harmless per se, can turn into discrimination because the data or the historical practices these algorithms have been trained on might be biased. This can lead to unequal access to opportunities and services, or even to information in some cases. For example, there was the case of Amazon’s hiring algorithm. It wasn’t specifically designed that way, but it ended up being biased against women. Why? Because the algorithm had been trained on historical data from Amazon’s hiring practices of the past 20 years, when more men had been hired than women. So the algorithm decided that if the word “woman” was mentioned in a candidate’s CV, it would ignore the CV and not pass it on to HR. That’s how algorithmic discrimination happens, even if it’s not necessarily intentional.
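To make the mechanism concrete: below is a minimal, hypothetical sketch of how a classifier trained on skewed historical hiring decisions can learn to penalize a gendered word. The CVs, labels, and model are invented for illustration; this is not Amazon’s actual system.

```python
# Hypothetical illustration: a model trained on biased historical
# hiring outcomes reproduces that bias. All data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Past decisions: CVs mentioning "women's" were mostly rejected,
# simply because mostly men had been hired before.
cvs = [
    "captain of the chess club, python developer",
    "captain of the women's chess club, python developer",
    "led the robotics team, java engineer",
    "led the women's robotics team, java engineer",
    "hackathon winner, c++ programmer",
    "women's hackathon winner, c++ programmer",
]
hired = [1, 0, 1, 0, 1, 0]  # biased labels from past practice

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out strongly
# negative: the model has encoded the historical bias, not skill.
idx = vec.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

Nobody told the model to discriminate; it simply found the token that best separates past hires from past rejections, which is exactly the unintentional mechanism described above.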
A more subtle way algorithms can influence our lives is through social media or shopping sites. How does this manipulation work?
The algorithms on these sites and apps are designed to optimize engagement and profit, especially when it comes to shopping. On social media, they prioritize content that triggers a stronger reaction. Usually, that means polarizing views or even misinformation. In the case of online shopping, whether through social media or general recommendation systems, the algorithms influence our choices because they personalize recommendations. By looking at our data, they show us targeted ads. I’ll give you an example: I’m a cat owner. All of my Instagram, as you can imagine, is full of cat content – that’s exactly the way it works. Sometimes you might feel like the algorithm is reading your mind: you just thought about an item or talked about something with your friends, and soon after you see an ad for it. But the algorithms simply correlate what you did online with other factors, such as your age group, income level, interests, etc. They nudge our behaviour without us even being fully aware of it.
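A hedged sketch of what such targeting can look like under the hood: a toy neighbourhood-based recommender that never “reads minds”, only correlates one user’s clicks with those of similar users. All names and purchase data are invented.

```python
# Toy collaborative filtering: recommend what the most similar
# user clicked on. All users and data here are invented.
import numpy as np

users = ["ana", "ben", "cat_owner"]
items = ["cat toy", "dog leash", "cat bed", "phone case"]

# Rows: users, columns: items; 1 = clicked or bought in the past.
history = np.array([
    [1, 0, 1, 0],  # ana: cat products
    [0, 1, 0, 1],  # ben: dog products, phone case
    [1, 0, 0, 0],  # cat_owner: one cat-toy click so far
])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

target = history[2]
# Pick the most similar other user and push what they bought.
sims = [cosine(target, history[i]) for i in range(2)]
neighbour = int(np.argmax(sims))
ads = [items[j] for j in range(len(items))
       if history[neighbour][j] and not target[j]]
print("ads shown to cat_owner:", ads)  # -> ['cat bed']
```

Real systems add many more signals (age group, income level, location), but the principle is the same: correlation, not mind-reading.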
We interact, knowingly or unknowingly, with a lot of algorithmic systems on a daily basis – often without realizing that they have the potential to amplify societal inequalities. How does this happen?
The problem is not algorithms as such, it’s the fact that they’re trained on data, and data reflects a lot of inequalities. There was a case in the US where an algorithm was used in judicial decision-making to predict the likelihood of a criminal reoffending. It started favouring white people over Black people, even when the crime was the same, and recommending that judges give Black defendants more time in prison. This shows us how biases can be reinforced and magnified by algorithms. These biases are not new, nor something the algorithms create – they amplify the biases that are already present in the data. This can lead to problems everywhere: marginalized groups, such as women or people from certain racial and ethnic backgrounds, could receive worse credit or mortgage offers, face less job visibility, or be subjected to stricter checks at airport security.
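One way researchers make such disparities visible is with simple group-level audits. Here is a hedged sketch of a statistical-parity check on invented risk predictions; the numbers are made up and are not from the actual US case.

```python
# Hypothetical fairness audit: how often does a model flag each
# group as "high risk"? All predictions below are invented.
def parity_gap(preds, groups, protected="B"):
    """Difference in high-risk rates between the protected group
    and everyone else; 0.0 would mean equal flagging rates."""
    prot = [p for p, g in zip(preds, groups) if g == protected]
    rest = [p for p, g in zip(preds, groups) if g != protected]
    return sum(prot) / len(prot) - sum(rest) / len(rest)

# 1 = flagged as likely to reoffend, for identical offence profiles.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["B", "B", "B", "B", "W", "W", "W", "W"]

print(f"statistical parity gap: {parity_gap(preds, groups):.2f}")
# -> 0.50: flagged 75% of the time for one group vs 25% for the other
```

A gap of 0.50 means one group is flagged as high risk three times as often as the other for identical inputs – the amplified, data-inherited bias Yousefi describes.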
There is a trend of anthropomorphising algorithmic systems – not only with AI, but also with social media. For example, when we discuss our social media strategy, people tend to say “Instagram doesn’t like X, Instagram likes Y”. Is this hiding who is really accountable for the actions of these algorithms?
This is a very good point. By humanizing algorithms we are giving the illusion that these platforms have a will of their own. In reality, the decisions about how these algorithms work, and how they’re made, are human decisions and reflect all the power balances that exist in society. The idea that it is an algorithm or an app that likes or doesn’t like something is really manipulative because it takes the focus away from the people behind the tech: engineers, designers, developers – and does not hold them accountable. I think this narrative of ‘Siri says so’ or ‘ChatGPT thinks so’ is what we should really be afraid of.
With so-called AI, this tendency is even bigger. A chatbot is seen as if it had intelligence, or even agency. Is that a clever ruse to avoid responsibility then?
Absolutely. The first time I ever thought about my research topic, I was still a master’s student and had read an article about Siri’s voice being female. This triggered the question: why is it even female? Who chose it to be female? Siri is meant to be your assistant and you are the boss. It was designed with a female voice to sound subordinate – this builds on and reinforces the stereotype that women are suited to assistant roles. So this tendency to give AIs a ‘personality’ is a way to avoid scrutiny of design choices, data sources and the power structures that shape all of these systems. In a way, the people behind the AI are creating this beautiful loophole, so you end up blaming issues on the tech, but not on the policy behind its design.
What risks could this be hiding?
Let me just be clear that I don’t think AI is all risks. With good education, ethical design and solid legislation to protect users, AI could actually be very beneficial to society. Besides the perpetuation of social biases and discrimination, though, one of the biggest risks for me is the lack of transparency and the issue of black boxes. These bring the potential for an erosion of human oversight. We could fall into the trap of automation bias and over-rely on AI recommendations, which would let power dynamics stay the same without us noticing. New forms of AI discrimination can arise that we aren’t prepared for. There is also the matter of privacy and data protection, which isn’t solved, despite EU legislation like the GDPR and the AI Act. The latter is a good start for tackling these issues, but the law always evolves more slowly than technology.
You mentioned these black boxes. Do you think companies could be forced to open them and provide transparency on how their algorithms work?
I think opening the black boxes entirely is too big an ask, because even engineers sometimes don’t understand why a decision was made by such a system. Laws could push for better transparency and accountability, and this is what they’re doing. For example, the AI Act stipulates requirements for explainability, transparency and human oversight. These may not open the black box, but they do make it possible to show the reasoning behind a decision. We should also consider that the more complex these systems become, the more difficult it is to open them. Another issue to raise here is that the human mind is a kind of black box too: we can’t look inside a brain and see exactly how it works, and human decisions aren’t always explainable. So I think with both humans and AI, it’s important to keep questioning and to ask for the reasoning behind a decision.
At the moment, there seems to be a gigantic hype around everything that has to do with AI. The tech industry, but also governments, are spending millions to build “AI factories”. Do you think it’s possible to deploy these technologies in an ethical way?
By adopting ethical design and inclusive development methods, by engaging civil society and by introducing strong regulations, we could actually have ethically designed technologies. This is what many organisations are trying to achieve now. For example, I work as a consultant with a multidisciplinary team at a company called DEXAI, which aims to build ethical AI systems. Especially with the AI Act in the picture, the obligations on ethical AI deployment are stricter now. Next to the focus on ethical design, I think we should also ask ourselves this important question: Is AI really necessary here? In this field of deployment, do we really need it, or can we do without? In some cases, we really can. There is a book called “From Pessimism to Promise: Lessons from the Global South on Designing Inclusive Tech”, which includes an interesting example of this happening across several countries in Africa. There is big investment in AI systems to combat the poaching of wildlife. These systems alert the rangers so they can get there and intervene. But some of the rangers say that if the money had been invested in better equipment, they’d be able to do their work without help from AI. We should use AI where it’s useful, and not just push for more and more simply because there is money in it. But that’s unfortunately what’s happening.
Are there any rays of hope?
On a more positive note, I think technology in general, if equal access to it is given, could in some cases really be a saviour for a lot of societies and communities. I look at it from a personal perspective: where I am today is because of technology. I learned the languages I speak because of my access to technology. Marginalized communities should be given equal access so that the benefits of technology are maximized. I see a ray of hope for people in communities where inequalities are far worse than in Europe. I see them reaching out to the rest of the world and speaking their minds. I think we should focus more on these positive sides and not just on the negative. Equal access to tech and its inclusive design could even correct some human actions, which could lead to a brighter future.
About Yasaman Yousefi
Dr. Yasaman Yousefi has a PhD in Law, Science, and Technology (LAST-JD) from the University of Bologna, in collaboration with the University of Luxembourg. She recently defended her doctoral dissertation: The Quest for AI Fairness: Ethical, Legal, and Technical Solutions. Her research takes an interdisciplinary approach to fairness in algorithmic decision-making, bridging ethical, legal, and technical perspectives. Yasaman is now working as a postdoctoral fellow at the University of Bologna, where she is researching the risks of General-Purpose AI systems. She also works as a consultant and researcher on the ethical management and legal compliance of AI systems in several EU-funded Horizon projects, in collaboration with DEXAI.
You might also be interested in:
- Am Bistro mat der woxx #233 – Wat sinn d’Problemer bei Bild- an Textgeneratoren, déi mat kënschtlecher Intelligenz funktionéieren?
- Podcast: Am Bistro mat der woxx #138 – Kënschtlech Intelligenz an hir Problemer zu Lëtzebuerg
- Am Bistro mat der woxx #315 – Die Illusion von Gesellschaft
- Am Bistro mat der woxx #314 – Wisou de Computer domm bléift an kënschtlech Intelligenz virun allem Hype ass
- Künstliche Intelligenz: Hype mit Vorurteilen