Relying On, but Not Trusting Algorithms to Prevent Bias

    One critical aspect of understanding how to treat algorithms is understanding the nature of our relationship with them. Algorithmic logic is math at its core, and because we regard math as objective, we tend to describe algorithms as objective by association. Nevertheless, algorithms are not objective by nature. Quite the opposite: they are opinions written in code, glued together with math. In other words, we cannot claim a slice of apple pie to be as fresh and nutritious as fruit just because it was made with apples, no matter how much some of us may wish it fell into that category.
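To make that claim concrete, consider a toy screening rule. Every name, weight, and threshold below is invented for illustration; the point is that even when the arithmetic is exact, each constant is a choice someone made, and those choices are where opinion, and potentially bias, lives.

```python
# Hypothetical loan-screening rule: the math is exact, but every
# constant below is an opinion chosen by whoever wrote it.

def approve_loan(income: float, zip_code: str) -> bool:
    # Opinion 1: how much income matters (the weight is a choice).
    score = income * 0.001
    # Opinion 2: penalizing certain zip codes encodes the author's
    # assumptions -- and can encode bias -- while looking "objective".
    if zip_code in {"10001", "60601"}:  # invented example values
        score -= 40
    # Opinion 3: the approval cutoff itself is arbitrary.
    return score >= 50

# Same income, different zip code, different outcome:
print(approve_loan(80_000, "94105"))  # True  (score 80)
print(approve_loan(80_000, "10001"))  # False (score 40)
```

Nothing in this function is mathematically wrong, yet two identical incomes get different answers purely because of values its author picked, which is exactly the sense in which an algorithm is an opinion dressed up in math.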

[Cartoon: AI finds it hard to mimic human lack of interest]

Figure 1. Computers vs Humans. (Source: xkcd.com, n.d.)

    The result of this, sometimes unfortunate, instinct to define by association can build a false sense of trust in algorithms. Algorithms are great tools for guiding us through processes, but we must be careful not to conclude a relationship of trust from those interactions. As algorithms grow more complex, we move from human-developed algorithms to those developed through Machine Learning or Artificial Intelligence. While researching these more complex algorithms, which better mimic human interactions, I found a very interesting paper by Dr. Mark Ryan on our relationship with Artificial Intelligence. Ryan writes:

“One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI (…) one needs to either change ‘trustworthy AI’ to ‘reliable AI’ or remove it altogether. The rational account of reliability does not require AI to have emotion towards the trustor (affective account) or be responsible for its actions (normative account).
One can rely on another based on dependable habits, but placing a trust in someone requires they act out of goodwill towards the trustor. This is the main reason why human-made objects, such as AI, can be reliable, but not trustworthy, according to the affective account.” (2020).

    Because of the non-human nature of algorithms, we should see them less for what their behavior mimics and more for what they are at the source. We can rely on their analysis, but we should be careful about extending that dependability into trust. Trust would require a fully sentient entity on which we not only rely but with which we establish a relationship beyond the task at hand, and from which we could demand responsibility.

References

Ryan, M. (2020). In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Science and Engineering Ethics, 26(5), 2749–2767. https://doi.org/10.1007/s11948-020-00228-y

Computers vs Humans. (n.d.). [Cartoon]. Xkcd. https://xkcd.com/1875/
