
AI and ethics. Do’s, don’ts and considerations.





Jessica Peetermans


Artificial Intelligence (AI), like all emerging technologies, has the potential to profoundly change the world. But with this progress come ethical questions. This blog post explores and discusses some of the questions associated with the rise of AI. In short: AI and ethics, with its do's, don'ts and considerations.

Let’s start off with a conversation that happened during a recent family gathering at my Grandma's.

Picture us - aunts, uncles and cousins - sitting at Grandma's living room table, savoring slices of pie. It was a peaceful scene, until the conversation took an unexpected turn towards AI. We delved into the endless opportunities it presents, along with the unease it instills in some of us. The discussion touched upon the (somewhat unsettling) capability to create convincing videos of individuals speaking in languages they don't comprehend, all without the viewer realizing it's a fabrication.

"For the love of God, why would you create such a thing?" Grandma exclaimed with disappointment. I found myself pondering the same question: why indeed? Are we perhaps pushing the boundaries a bit too far, allowing the speed of technological evolution to outpace our morals? And what else should we consider regarding the ethics of AI?

I cannot be the only one asking these questions, can I?


First things first. Let's clarify what we mean by ethics.

Ethics is a branch of philosophy concerned with questions of morality, distinguishing between right and wrong conduct, and determining how individuals and societies should behave. It entails the examination and assessment of moral principles, leading to the formulation of guidelines for ethical behavior. Common themes in ethics include justice, fairness, virtue and the moral responsibilities of both individuals and institutions. In various contexts, ethics provide a framework for making moral decisions and navigating complex moral dilemmas. 

In this article, we will focus on three topics:

  • Bias in AI
  • The (lack of) transparency
  • Responsible decision making

It's no coincidence that these are the topics to take into account when you integrate AI into your business.

Bias in AI

Bias in artificial intelligence occurs when AI systems generate prejudiced results that mirror and reinforce societal biases, encompassing historical and contemporary social inequalities. This bias can manifest in the initial training data, the algorithm itself, or the predictions generated by the algorithm. Unchecked bias not only hinders individuals' involvement in the economy and society but also diminishes the potential of AI. Systems producing distorted outcomes contribute to mistrust among people of color, women, individuals with disabilities, the LGBTQ community, and other marginalized groups. The consequence? Limiting the effectiveness of AI in various domains.

IBM mentions some real-life examples of bias in AI in one of their articles:

"Academic research found bias in the AI art generation application Midjourney. When asked to create images of people in specialized professions, it showed both younger and older people, but the older ones were always men, reinforcing gendered bias of the role of women."


Our advice

▷ It is important to continually work on reducing biases in AI systems and to ensure that these technologies are fair and inclusive. Additionally, mechanisms should be in place to identify and correct discrimination when it does occur.
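One way to make such a mechanism concrete is to measure outcomes per group. The sketch below is a minimal, hypothetical example (the groups, the predictions, and the 0.8 cut-off are illustrative, not a real audit): it computes how often a system selects people from each demographic group and flags a large gap, in the spirit of the "four-fifths" rule of thumb used in fairness reviews.

```python
# Minimal sketch of a fairness check: compare how often an AI system
# gives a positive outcome ("selected") to each demographic group.
# The data and the 0.8 threshold are illustrative, not a real audit.

def selection_rates(predictions):
    """predictions: list of (group, was_selected) pairs."""
    counts = {}
    for group, selected in predictions:
        total, hits = counts.get(group, (0, 0))
        counts[group] = (total + 1, hits + (1 if selected else 0))
    return {g: hits / total for g, (total, hits) in counts.items()}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Illustrative hiring-tool output: group A is selected far more often.
preds = [("A", True)] * 8 + [("A", False)] * 2 + \
        [("B", True)] * 4 + [("B", False)] * 6

rates = selection_rates(preds)
ratio = disparate_impact(rates)
print(rates)                      # {'A': 0.8, 'B': 0.4}
print(f"disparate impact: {ratio:.2f}")
if ratio < 0.8:                   # common "four-fifths" rule of thumb
    print("Warning: possible bias - review training data and model.")
```

A check like this does not explain why the gap exists, but it turns "fair and inclusive" from an intention into something you can monitor continuously.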

The lack of transparency

Certain machine learning systems are described as a "black box". This means we don't really know how they work or how they arrive at their results. This lack of transparency poses a challenge when AI is employed to make decisions with real-world impact on individuals. The right of individuals to understand the mechanisms behind critical decisions, such as loan approvals, parole determinations, and hiring processes, has prompted a call for more transparent AI.

The way AI works, and the way it fails, are foreign to us. […] That is the old irony of AI - the best systems happen to be the ones that are least explainable today.

Illah Nourbakhsh, Professor of Robotics, Carnegie Mellon University

But it's a knife that cuts both ways. Some argue that too much transparency brings risks as well: "Transparency can create security risks. Too much transparency may lead to leaking of privacy-sensitive data into the wrong hands. Or the more that is revealed about the algorithms and the data, the more harm a malicious actor can cause. Algorithms can be hacked, and information may make AI more vulnerable to intentional attacks. Entire algorithms can also be stolen based simply on their explanations alone." (The Ethics of AI, Chapter 4: Should we know how AI works?)

Our advice

▷ We advise you to always weigh performance against the need for transparency. Not sure what's best or what's needed in a certain case? Ask your experienced partner.
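For decisions like the loan approvals mentioned above, one practical pattern is to make every automated decision carry its own explanation. The sketch below is purely illustrative (the rules and thresholds are made up, not real lending criteria): the decision function returns both the outcome and the human-readable reasons behind it, so an applicant can be told why they were approved or declined.

```python
# Sketch of an explainable decision: every outcome is returned together
# with the rules that produced it. The criteria and thresholds here are
# illustrative, not real lending rules.

def assess_loan(income, debt, credit_score):
    """Return (approved, reasons) for a hypothetical loan application."""
    reasons = []
    if credit_score < 600:
        reasons.append(f"credit score {credit_score} below minimum 600")
    if debt > income * 0.4:
        reasons.append("debt exceeds 40% of income")
    approved = not reasons
    if approved:
        reasons.append("all criteria met")
    return approved, reasons

approved, reasons = assess_loan(income=40_000, debt=20_000, credit_score=650)
print("approved:", approved)      # approved: False
for reason in reasons:
    print(" -", reason)           # - debt exceeds 40% of income
```

A black-box model can be wrapped in the same pattern by logging its inputs and the factors that most influenced its output; the point is that the explanation travels with the decision rather than being reconstructed afterwards.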

Responsible decision making

Another dilemma in the evolving landscape of AI is the degree of autonomy we (want to) grant to machines. There are open questions surrounding the decision-making authority of AI systems: where does the responsibility lie when errors occur? Users and stakeholders affected by the decisions made by AI systems should have a clear understanding of the processes and rationales guiding those decisions.


This principle aligns closely with the concept of good governance, where decisions in both public and private sectors must adhere to the fundamental tenet of non-arbitrariness. Non-arbitrariness, in this context, necessitates the provision of justifications for decisions that have ethical or legal implications for individuals.

Moreover, in the realm of public governance, the ability to contest and appeal decisions becomes pivotal, acting as a safeguard against potential injustices and reinforcing the demand for corrective actions.

As we navigate the complexities of integrating AI into various facets of our lives, establishing a framework that upholds ethical principles, accountability, and transparency becomes paramount for a responsible and equitable future.

Our advice

▷ You, as an organization, are responsible for where and how you integrate AI into your processes and how you make it available to your users. Always consider whether a human intermediate step is desirable to interpret the AI's advice before decisions are made.
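Such a human intermediate step can be as simple as a confidence gate: the AI acts automatically only when it is sufficiently sure, and everything else is routed to a person. A minimal sketch, in which the 0.9 threshold and the example items are assumptions for illustration:

```python
# Minimal human-in-the-loop gate: only high-confidence AI outputs are
# applied automatically; the rest are queued for human review.
# The 0.9 threshold and the example items are illustrative.

CONFIDENCE_THRESHOLD = 0.9

def route(items):
    """items: list of (item_id, ai_decision, confidence) tuples."""
    automatic, needs_review = [], []
    for item_id, decision, confidence in items:
        if confidence >= CONFIDENCE_THRESHOLD:
            automatic.append((item_id, decision))
        else:
            needs_review.append((item_id, decision, confidence))
    return automatic, needs_review

items = [("A1", "approve", 0.97), ("A2", "reject", 0.55),
         ("A3", "approve", 0.91)]
auto, review = route(items)
print("auto-applied:", auto)      # [('A1', 'approve'), ('A3', 'approve')]
print("human review:", review)    # [('A2', 'reject', 0.55)]
```

Where to set the threshold, and whether some decision types should always go to a human regardless of confidence, is exactly the kind of governance choice the paragraphs above argue an organization must own.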


In conclusion, the rapid advancement of Artificial Intelligence brings a multitude of ethical considerations that demand our thoughtful examination. The anecdote from my family gathering serves as a reminder of the unease and ethical dilemmas associated with the capabilities of this technology.

AI continues to shape our world, and many of these (and other) ethical questions remain unanswered.

As we embark on the journey of integrating AI into our lives, a commitment to ethical principles is key.

Besides the ethical aspects of AI, there are also regulatory requirements just around the corner. Consider getting external expert advice to navigate these ethical and legal complexities. We are happy to help you answer these questions.

About the author

Jessica Peetermans

Jessica Peetermans is a Functional Analyst and mentor at The Value Hub. Her strengths lie in a frontend-oriented analysis approach, where she focuses on UX, wireframing and the psychology behind them. In her role as a Functional Analyst, she's responsible for building connections with the business teams and optimizing the dynamics in those teams. She has a soft spot for innovation, change and growth. Helping others grow and making people feel good at work is what drives her as a mentor.
