To stop algorithmic bias, we first have to define it (Emily Bembeneck, Rebecca Nissan, and Ziad Obermeyer, Brookings)

In sectors as diverse as health care, criminal justice, and finance, algorithms are increasingly used to help make complex decisions that are otherwise troubled by human biases. Imagine criminal justice decisions made without race as a factor, or hiring decisions made without gender preference. The upside of AI is clear: human decision-makers are far from perfect, and algorithms hold great promise for improving the quality of decisions. But disturbing examples of algorithmic bias have come to light. Our own work has shown, for example, that a widely used algorithm recommended less health care to Black patients despite their greater health needs. In this case, a deeply biased algorithm reached massive scale without anyone catching it: not the makers of the algorithm, not the purchasers, not those affected, and not regulators.
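The bias described here was surfaced by comparing patients' actual health at equal algorithm risk scores across groups. Below is a minimal sketch of that style of audit in Python, under stated assumptions: a hypothetical patient table with columns `risk_score`, `group`, and `n_chronic_conditions` (these names are illustrative, not from the original system).

```python
import pandas as pd

def audit_by_score(df: pd.DataFrame) -> pd.DataFrame:
    """Compare realized health need across groups at equal algorithm scores.

    If the score were an unbiased measure of need, patients assigned the
    same score should show roughly the same health need in every group.
    Assumes hypothetical columns: risk_score, group, n_chronic_conditions.
    """
    # Bin patients into score deciles so comparisons happen "at equal scores".
    df = df.assign(
        score_decile=pd.qcut(df["risk_score"], 10, labels=False, duplicates="drop")
    )
    # Mean health need per (decile, group); a systematic gap within a decile
    # means the score under-states need for one group relative to another.
    return (
        df.groupby(["score_decile", "group"])["n_chronic_conditions"]
        .mean()
        .unstack("group")
    )
```

On such a table, a persistent gap within score deciles (one group showing more chronic conditions than another at the same score) is the signature of the bias described above, and is the kind of check that none of the parties involved performed.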


Marco Emanuele
Marco Emanuele is passionate about complexity culture, technology culture, and international relations. He delves into the thought of Hannah Arendt, Edgar Morin, and Raimon Panikkar. He has taught Evolution of Democracy and Totalitarianisms, is the editor of The Global Eye, and writes for The Science of Where Magazine.
