The fairness of AI algorithms is a growing field of research, driven by the general need for decisions to be free from bias and discrimination. Fairness also applies to AI-based decision tools: the European White Paper on AI provides a framework within which AI and algorithmic decision-making must be carefully assessed.
For the sake of simplicity, let us use a hypothetical case: an AI model used by a bank to predict whether an individual should be granted a loan, based on their risk of default. Several critical elements of the European White Paper on AI come into play when this type of AI model assesses an individual, namely the person's right not to be subject to an automated decision in the first place, their right to an explanation of the decision, and their right to non-discrimination.
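To make the scenario concrete, the sketch below trains a toy loan-approval classifier on synthetic data. The feature names, the protected attribute, and the data itself are purely illustrative assumptions, not taken from any real system.

```python
# A minimal, hypothetical sketch of the loan-decision model in the scenario.
# All data is synthetic; the feature names and the protected attribute ("group")
# are illustrative assumptions, not part of any real bank's system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicant features and a protected attribute.
income = rng.normal(50_000, 15_000, n)
debt = rng.normal(10_000, 5_000, n)
group = rng.integers(0, 2, n)  # protected attribute (e.g., a demographic group)

# Synthetic "ground truth": whether the applicant would repay the loan.
logit = 0.00005 * income - 0.0001 * debt
repaid = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([income, debt])
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, repaid, group, test_size=0.3, random_state=0
)

# The bank's hypothetical model: approve the loan if repayment is predicted.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
approved = model.predict(X_test)
print("Approval rate on test set:", approved.mean())
```

An individual scored by such a model is exactly the person whose rights to a non-automated decision, an explanation, and non-discrimination the White Paper addresses.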
This framework requires AI practitioners to produce models and workflows that, by design, guard against possible discrimination (fairness), are explainable to the user with a high degree of clarity (interpretability), and are reproducible throughout the entire AI model workflow (transparency). Examples of research efforts and products in this direction can be found at Google and IBM (see references).
In the scenario described, evaluating the decision requires considering different definitions of fairness, and analyzing the model requires adequate information about how it was built and how it behaves. Even without those specifics, a set of questions can help highlight some aspects of fairness and why it matters, as illustrated by the sketch below.
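As one illustration of how fairness definitions differ, the sketch below computes two common group-fairness metrics, the demographic parity difference and the equal opportunity difference, on hypothetical loan decisions. The arrays are made-up stand-ins for the model's outputs and the true outcomes, not real data, and the skewed decision rule is an assumption chosen only so the metrics have something to show.

```python
# Two common group-fairness definitions, computed by hand on hypothetical
# loan decisions. `approved` stands in for the model's decisions, `repaid`
# for the true outcomes, and `group` for a protected attribute; all arrays
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, n)          # protected attribute
repaid = rng.random(n) < 0.8           # true outcome: would the loan be repaid?

# A deliberately skewed decision rule: group 1 is favoured at every level of risk.
p_approve = np.where(repaid, 0.9, 0.2) * np.where(group == 1, 1.0, 0.8)
approved = rng.random(n) < p_approve

def demographic_parity_difference(approved, group):
    """Difference in approval rates between the two groups."""
    return approved[group == 1].mean() - approved[group == 0].mean()

def equal_opportunity_difference(approved, repaid, group):
    """Difference in approval rates among applicants who would actually repay."""
    return (approved[(group == 1) & repaid].mean()
            - approved[(group == 0) & repaid].mean())

print("Demographic parity difference:", demographic_parity_difference(approved, group))
print("Equal opportunity difference:", equal_opportunity_difference(approved, repaid, group))
```

The two metrics answer different questions: the first compares raw approval rates, while the second compares approval rates only among applicants who would in fact repay, so a model can look acceptable under one definition and unfair under the other.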