Algorithmic Injustice [...]

Even if code is modified with the aim of securing procedural fairness, however, we are left with the deeper philosophical and political question of whether neutrality constitutes fairness under background conditions of pervasive inequality and structural injustice. Purportedly neutral solutions, deployed in a context of widespread injustice, risk further entrenching existing injustices. As many critics have pointed out, even if an algorithm achieves some form of neutrality, the data it learns from is still riddled with prejudice. In short, the data we have—and thus the data that gets fed into the algorithm—is neither the data we need nor the data we deserve. The cure for algorithmic bias, then, may not be more, or better, algorithms. Some machine learning systems should perhaps not be deployed in the first place, no matter how much they can be optimized. (Source)

Relatedly:

Developers cannot just ask, “What do I need to do to fix my algorithm?” They must rather ask: “How does my algorithm interact with society at large, and as it currently is, including its structural inequalities?”
