09 May 2018

An Ethical AI Conundrum

tl;dr Artificial intelligence (AI) drives more and more decision-making, much of it with real consequences for users. Recent developments make it possible to build algorithms that strip unfair biases, such as race, out of their data. But keeping the bias out of the model has a double cost: it increases the error, and deciding which factors are unfair is largely a cultural and political matter. How legitimate would it be for a person to ask to be judged in the light of a biased factor?


A few months ago at Data Science Luxembourg, Chris Hammerschmidt discussed why and how to build inspectable models. I will not dwell on the "how", but I would like to investigate the "why" further.

Two of the examples covered were use cases where major decisions, on credit or legal records, were impacted by a racial factor.

I use the term "factor" rather than "bias". It is easy to agree that it is unfair to use something an individual has not chosen as a statistical element in any decision about them. But such an attribute does have an impact on the observed phenomenon, hence it is a factor more than a bias. Removing this attribute from your models or your data is a trade-off made for reasons unrelated to model performance: model acceptability.
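To make the trade-off concrete, here is a minimal sketch on synthetic data. Everything in it is an assumption for illustration: the attribute names, the correlation strength, and the data-generating process are mine, not taken from any real system.

```python
# Minimal sketch: accuracy with vs. without a sensitive attribute.
# All names and coefficients are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical setup: a sensitive attribute correlated with income,
# and an outcome that depends on both.
sensitive = rng.integers(0, 2, n)               # e.g. a protected group
income = rng.normal(0, 1, n) + 0.8 * sensitive  # correlated with the group
outcome = (income + 0.5 * sensitive + rng.normal(0, 1, n) > 0).astype(int)

X_full = np.column_stack([income, sensitive])
X_fair = income.reshape(-1, 1)  # sensitive attribute removed

for name, X in [("with attribute", X_full), ("without attribute", X_fair)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, outcome, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))

# The "acceptable" model is typically slightly less accurate:
# the cost of acceptability is paid in performance.
```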

The frontier between what can be tolerated and what cannot is not always clear. Political correctness agrees these days that being black cannot be considered a factor in criminality, but nobody seems to complain when we observe that males are more likely to be violent offenders.

It is often observed that race is a proxy for other attributes, like education or income. And it feels more legitimate to be denied credit because of a low income.
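This proxy relationship also means that dropping the sensitive column does not make its information disappear: correlated attributes can often predict it back. A small sketch, under the same kind of assumed synthetic setup as above:

```python
# Sketch of the proxy effect: even after removing the sensitive column,
# it can be recovered from correlated attributes such as income.
# The setup is a synthetic assumption for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 5000
sensitive = rng.integers(0, 2, n)
income = rng.normal(0, 1, n) + 0.8 * sensitive  # income correlates with the group

# How well does income alone reveal the "removed" attribute?
score = cross_val_score(LogisticRegression(), income.reshape(-1, 1),
                        sensitive, cv=5).mean()
print(f"sensitive attribute recovered with accuracy {score:.2f}")  # well above 0.5
```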

Side note: does this mean that it feels okay to be denied release from prison because you do not have a degree? Yet it is part of the decision process.

It could also play in the other direction: if you come from a background that predicts low income and no academic degree, actually having that income and those degrees could be a predictor of good future performance on those same elements.


Using humans to make the same decisions was easier because humans are not accountable for their actual decision process. We ask our AI systems to explain themselves, and this cannot be the kind of made-up explanations that humans routinely produce.

Judges have to be plausibly fair, whereas AI systems have to actually be fair, because they are very easily auditable.


Ponder the following situation: one day you go to prison for a minor drug offense. After a few months, a judge is considering a potential release. This judge is assisted by software which advises keeping you locked up because you have a high likelihood of being a repeat offender. This recommendation comes out of your education and income.

But if this model had also considered your religion, you would have been granted a release, because it would have predicted differently.
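Here is a sketch of that situation on synthetic data. The feature names (religion, education, income) follow the scenario above, but the coefficients and the data-generating process are assumptions of mine for illustration only:

```python
# Sketch: the same individual can be scored differently depending on
# whether a sensitive attribute is used. All parameters are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
religion = rng.integers(0, 2, n)
education = rng.normal(0, 1, n)
income = rng.normal(0, 1, n)
# Hypothetical ground truth: recidivism driven by education and income,
# mitigated for one group.
recidivism = (-education - income - 1.0 * religion
              + rng.normal(0, 1, n) > 0).astype(int)

full = LogisticRegression().fit(
    np.column_stack([education, income, religion]), recidivism)
fair = LogisticRegression().fit(
    np.column_stack([education, income]), recidivism)

# One individual: low education, low income, member of the group.
print("risk with religion:   ", full.predict_proba([[-1.0, -1.0, 1]])[0, 1])
print("risk without religion:", fair.predict_proba([[-1.0, -1.0]])[0, 1])
# The "acceptable" model predicts a higher recidivism risk for this
# person and tips the decision toward keeping them locked up.
```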

That is what I call the unethical ethical model paradox. We design acceptable underperforming models, but the slightest underperformance might cost some people a lot.

On one hand, there is the legitimacy of drawing a line between the acceptable and the unacceptable. Auditable models are an unprecedented opportunity to achieve fairness in justice and in society in general.

On the other hand, if we accept statistics as a form of truth, how fair is it to lock someone up when we know that their full data tells us this is not useful?

One could also consider the judge's goal to be the public good: to contemplate a release in the light of potential recidivism. But what sense does it make to refuse a better-performing model? Do we give more weight to fighting discrimination than to not locking people up needlessly?

Extending the topic, can we consider it fair for our models to judge an individual from the experience accumulated with other individuals? Isn't that judging one person from others' deeds? Are we judging crimes, or have statistical profiles replaced the actual crime?

I have no strong opinion on the paradox, but I think that these are moral problems worth discussing.


Fräntz Miccoli
