Ethical Dilemmas of AI: Let’s Have an Open Discussion

Artificial Intelligence (AI) enables tremendous advances. At the same time, these developments bring to light difficult ethical issues and dilemmas. These challenges, uncomfortable as they sometimes feel, help us understand the potential impact of AI on our daily lives. During my Business Administration studies, I followed the course ‘Artificial Intelligence, Business and Consumers’ with great interest; under the guidance of Prof. Dr. Stefano Puntoni, we discussed these issues in depth. While AI undoubtedly offers wonderful opportunities for businesses and consumers, we must take its ethical implications seriously.

Dynamic Pricing

Consider this dilemma: Apple users, often willing to pay for quality products, are easy to identify online through their use of Safari, Apple’s default web browser. They tend to spend more and likely have more to spend. Would it be acceptable, then, for an online store to adjust its prices when an Apple user comes shopping, based on that browser choice alone? This practice, better known as dynamic pricing, raises an ethical issue at the heart of the AI-driven economy.
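To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of how such browser-based price discrimination could work. The user-agent heuristic and the 10% markup are assumptions for the sake of illustration, not a description of any real store’s logic.

```python
# Illustrative sketch of browser-based dynamic pricing.
# The markup factor and Safari-detection heuristic are hypothetical
# assumptions, not the practice of any actual shop.

def quoted_price(base_price: float, user_agent: str,
                 safari_markup: float = 1.10) -> float:
    """Return the price shown to a visitor, marked up for Safari users.

    Safari's user-agent string contains "Safari" but not "Chrome";
    Chrome's string contains both tokens for compatibility reasons,
    so we must exclude it explicitly.
    """
    ua = user_agent.lower()
    is_safari = "safari" in ua and "chrome" not in ua
    return round(base_price * safari_markup, 2) if is_safari else base_price

chrome_ua = ("Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 "
             "(KHTML, like Gecko) Chrome/120.0 Safari/537.36")
safari_ua = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
             "AppleWebKit/605.1.15 (KHTML, like Gecko) "
             "Version/17.0 Safari/605.1.15")

print(quoted_price(100.0, chrome_ua))  # 100.0
print(quoted_price(100.0, safari_ua))  # 110.0
```

The unsettling point is precisely how trivial this is to implement: a few lines of code silently turn a browser choice into a price difference the customer never sees.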


Dynamic pricing is not a new concept, and certainly not one born out of AI; it has been with us for a long time. Consider groups such as illiterate people, children, and the elderly, who for various reasons may struggle to make cost-benefit analyses. Should companies, following the logic of dynamic pricing, silently charge these groups more for the same products?

It is easy to argue that these situations are not comparable. Most people would find it immediately unjust to charge the elderly, children, and illiterate people more. But what if we apply the same ethical question to people who, like Apple users, are clearly willing to spend more? This calls for more research into where the ethical boundaries lie.

Greek Tragedy

Artificial intelligence can confront us with ethical dilemmas that feel almost like a Greek tragedy, in which heroes face seemingly impossible choices. Think of the driving-assistance systems that are now standard in new cars. This technology promises a safer future on the road, but it may sometimes be forced to decide, in a fraction of a second, who is harmed in an emergency: an animal or a human, a man or a woman, a child or an older person? The answers to these questions lie not with the AI system itself but with the programmers who shape the underlying algorithms. Their decisions will influence the future of our mobility, but are we discussing those decisions enough?

This dilemma also impacts the insurance sector. What if an AI system cannot distinguish between an adult and a child? Should we base decisions on cost to minimize financial damage? Or should we opt for a more humane approach, regardless of the cost?

These provocative questions underscore the need for a broad and urgent societal debate on the ethics of AI. Without such a debate, we risk letting programmers make decisions about fundamental human values without proper deliberation.

Human Nuance and Mistrust of Algorithms

Deep learning, although astounding in its capabilities, has a long way to go before it can adequately grasp the subtleties of human interaction. For example, when we say: “Let’s meet at 6 PM”, humans interpret the intention and arrive on time. But if a party starts at 6 PM, most people won’t arrive exactly at that time. Understanding these kinds of unwritten social rules is extremely difficult for AI. The same goes for sarcasm, or even answering philosophical questions like “What is the meaning of life?”. Recognizing human behavior and intention poses a complicated challenge for AI.

While AI struggles to understand human nuances, people in turn often harbor a deep-rooted mistrust of technology. Even algorithms built on thorough data collection and analysis are met with skepticism. This preference for the judgment of fellow humans over computer logic raises interesting questions.

What’s behind this resistance? Where does this mistrust originate? Many factors contribute to our skepticism towards algorithms. Bias undoubtedly plays a role, but there is also an element of mystery and unfamiliarity: the idea that algorithms operate as a ‘black box’, making decisions based on criteria we don’t fully understand. Is an algorithm’s judgment fair, or is it just the inscrutable outcome of invisible calculations?

To increase trust in algorithms, we must strive for transparency and fair criteria. This means more than explaining how the systems work; it also means ensuring that the systems are fair, just, and in line with our values.

A Sizeable Assignment: Ethical Dialogue and Transparency

While it is certainly exciting and challenging to be part of a world developing at such a pace, we must not forget our responsibility to weigh and discuss the ethical considerations at every step of this journey. Artificial intelligence presents challenges that go well beyond technical and functional limitations. Many more questions and dilemmas will need to be addressed in the future, and that requires sustained debate. Transparency and accountability are essential to this process.

About the author: Farshad Bashir combines his passion for entrepreneurship with tax advice at Taksgemak, helping businesses and individuals navigate the complex world of tax regulations. He simplifies the complicated and keeps his clients on track. Before moving into consulting, he was a member of the Dutch Parliament. This combination of political experience and tax expertise makes him a strong partner for anyone.

Image: a scale with a human and a robot standing behind it. Balancing human values and machine capabilities.