Algorithmic Fairness

Description

Anjana Susarla:

"Researchers have long been concerned about algorithmic fairness. For instance, Amazon’s AI-based recruiting tool turned out to dismiss female candidates. Amazon’s system was selectively extracting implicitly gendered words–words that men are more likely to use in everyday speech, such as “executed” and “captured.”

Other studies have shown that judicial algorithms are racially biased, recommending longer sentences for poor black defendants than for others.
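
One common way such bias is audited is by comparing error rates across groups, for instance the false-positive-rate disparity ProPublica reported in its COMPAS risk-score analysis. A minimal sketch of that check, using invented arrays rather than real case data:

```python
# Compare false positive rates (flagged high risk but did not reoffend)
# across two groups. Arrays are invented examples, not real case data.
import numpy as np

y_true = np.array([0, 0, 1, 0, 0, 1, 0, 1])   # 1 = reoffended
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # 1 = flagged high risk
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

def false_positive_rate(truth, pred):
    negatives = truth == 0
    return (pred[negatives] == 1).mean()

for g in np.unique(group):
    mask = group == g
    fpr = false_positive_rate(y_true[mask], y_pred[mask])
    print(f"group {g}: FPR = {fpr:.2f}")  # a disparity here signals bias
```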

As part of the recently approved General Data Protection Regulation in the European Union, people have “a right to explanation” of the criteria that algorithms use in their decisions. This legislation treats the process of algorithmic decision-making like a recipe book. The thinking goes that if you understand the recipe, you can understand how the algorithm affects your life.
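
To illustrate what a recipe-style explanation could look like in practice: for a linear scoring model, a single decision decomposes into per-feature contributions that can be listed like ingredients. The credit-scoring features and weights below are hypothetical, chosen only to illustrate the idea.

```python
# Sketch of a per-decision explanation for a linear model: each input's
# contribution (weight x value) is shown. Features/weights are hypothetical.
features = {"income": 52_000, "late_payments": 2, "account_age_years": 7}
weights  = {"income": 0.00004, "late_payments": -0.8, "account_age_years": 0.3}
bias = -1.0

contributions = {name: weights[name] * value for name, value in features.items()}
score = bias + sum(contributions.values())

print(f"decision score: {score:+.2f} (approve if > 0)")
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:18s} contributed {c:+.2f}")
```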

Meanwhile, some AI researchers have pushed for algorithms that are fair, accountable, and transparent, as well as interpretable, meaning that they should arrive at their decisions through processes that humans can understand and trust." (https://www.fastcompany.com/90336381/the-new-digital-divide-is-between-people-who-opt-out-of-algorithms-and-people-who-dont?)
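
For interpretability in the sense described above, one minimal sketch is a shallow decision tree whose entire decision logic can be printed and audited by a human. The training data is invented; the sketch assumes scikit-learn's DecisionTreeClassifier and export_text.

```python
# Sketch of an interpretable model: a shallow tree whose full decision
# rules print as readable text. Training data is invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# columns: [years_experience, num_publications]
X = [[1, 0], [2, 1], [8, 3], [10, 5], [3, 0], [9, 4]]
y = [0, 0, 1, 1, 0, 1]  # 1 = shortlist

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["years_experience", "num_publications"]))
```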