Artificial Intelligence: Lucchini and Bartoletti’s research on fairness as a guiding criterion for ethics

In the book “Applied Ethics in a Digital World”, the publisher IGI Global entrusts two Italian researchers with the complex question of the ethical choices to be made when it is not a human mind but a computer that makes the decisions

by Jerome Lagersie

Fairness as a guiding criterion in the implementation of ethical rules for Artificial Intelligence (AI). This is the central theme of the research carried out by Lucia Lucchini, Cyber Risk Manager at Deloitte UK, and Ivana Bartoletti, Global Privacy Officer at Wipro, and included in the book that IGI Global, an international publisher specialising in academic titles, has dedicated to the ethical issues affecting the digital world. The brainchild of two digital science gurus and popularisers, Ingrid Vasiliu Feltes and Jane Thomason, “Applied Ethics in a Digital World” is a study entrusted to experts from different parts of the world so as to offer a polyphonic, multicultural view of a theme as central as ethics in the digital world.

Each research chapter therefore covers a different dimension that the respective researchers proposed to explore independently. The final idea was to integrate different themes and perspectives, with the further possibility of identifying future research directions for each of the topics covered. In chapter two, the two Italian authors, Lucchini and Bartoletti, chose to explore artificial intelligence, the apparent tension between technocratic and socio-political solutions, and the ramifications these have for the discussion many companies face over “responsible business” and what it means from a digital implementation and digital transformation perspective. Lucchini and Bartoletti focus in particular on the role of fairness in AI and how it is key to implementing ethical principles on a larger scale.

Lucia Lucchini

As artificial intelligence (AI) is increasingly employed in almost every aspect of our daily lives, the discourse around the pervasiveness of algorithmic tools and automated decision-making can seem almost commonplace. The chapter written by the two Italian researchers investigates the limits and opportunities within the existing debates and examines the rapidly evolving legal landscape and recent court cases. The authors suggest that a viable approach to fairness, which ultimately remains a choice organisations must make, could be rooted in a new measurable and accountable responsible business framework.

“AI solutions,” Lucchini and Bartoletti write in the introduction, “are now driving resource allocation, as well as shaping the news and products individuals are exposed to: from credit scoring to facial recognition, predictive technologies to accurately identify fraudsters, youth crime prevention tools, and algorithm-driven advertising… how far AI can go is already a reality we all live with on a daily basis.”

Ivana Bartoletti

“While public calls for regulatory oversight are on the rise and dominating the headlines,” the two researchers explain, “we have yet to define how agencies, governments, as well as private sector organisations can provide meaningful notice on an algorithmic decision-making output. This has led to the deployment of imperfect automation, the consequences of which end up damaging trust in technology and hindering human rights, limiting and/or blocking individuals from services and equal opportunities.”

“This article,” Lucchini and Bartoletti conclude in their introduction, “argues that since opting for fairness may not be the optimal financial solution for an organisation, its formalisation lies within responsible business, which is gaining momentum amidst consumer demands for greater fairness and transparency.”

(Associated Medias) – All rights reserved