Algorithmic Injustice


> Even if code is modified with the aim of securing procedural fairness, however, we are left with the deeper philosophical and political issue of whether neutrality constitutes fairness in background conditions of pervasive inequality and structural injustice. Purportedly neutral solutions in the context of widespread injustice risk further entrenching existing injustices. As many critics have pointed out, even if algorithms achieve some sort of neutrality in themselves, the data that these algorithms learn from is still riddled with prejudice. In short, the data we have—and thus the data that gets fed into the algorithm—is neither the data we need nor the data we deserve. Thus, the cure for algorithmic bias may not be more, or better, algorithms. There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them. [source](https://bostonreview.net/science-nature-politics/annette-zimmermann-elena-di-rosa-hochan-kim-technology-cant-fix-algorithmic)
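
The mechanism the authors describe, in which biased historical data yields biased models even under procedurally neutral training, is easy to demonstrate. Below is a minimal sketch in Python (the hiring scenario, synthetic data, feature names, and scikit-learn model choice are all illustrative assumptions of mine, not drawn from the article): a classifier trained without any access to the protected attribute still reproduces the historical disparity through a correlated proxy feature.

```python
# Minimal sketch on synthetic data: a procedurally "neutral" classifier
# trained on historically biased labels reproduces the bias, even though
# the protected attribute is excluded from its features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (group 0 vs. group 1), hypothetical.
group = rng.integers(0, 2, size=n)

# A proxy feature correlated with group membership (think zip code),
# plus a genuinely job-relevant skill score.
proxy = group + rng.normal(0, 0.5, size=n)
skill = rng.normal(0, 1, size=n)

# Historical labels encode past discrimination: group 1 was hired less
# often than equally skilled members of group 0.
hired = (skill - 1.5 * group + rng.normal(0, 0.5, size=n)) > 0

# Train WITHOUT the protected attribute -- procedurally "neutral".
X = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X, hired)

# The model still selects the two groups at very different rates,
# because the proxy lets it reconstruct the biased historical pattern.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: selection rate = {pred[group == g].mean():.2f}")
```

On this synthetic data the two groups are selected at sharply different rates even though group membership never enters the model, which is the sense in which "neutral" code can entrench existing injustice.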

The authors also note:
>Developers cannot just ask, “What do I need to do to fix my algorithm?” They must rather ask: “How does my algorithm interact with society at large, and as it currently is, including its structural inequalities?”

