Can Human Rights Be AI-Centered?

Human rights must govern artificial intelligence (AI) if we are to address the lack of transparency of algorithms and of the data this technology feeds on.

The New York-based research center Data & Society recently released a report titled Governing Artificial Intelligence: Upholding Human Rights and Dignity, which presents examples of how artificial intelligence can violate people’s human rights and outlines what the parts of society that come into contact with this technology (companies, civil society, governments, the United Nations, intergovernmental institutions and academia) can do about it.

“For AI to benefit the common good, at least its design and deployment must prevent damage to fundamental human values. International human rights provide a solid and comprehensive formulation of these values,” the document states.

There is no shortage of examples of the consequences of not putting human rights at the center of artificial intelligence projects. Recently, Amazon had to scrap an artificial intelligence recruiting tool that was meant to make hiring easier for its human resources teams because it began to show clear discriminatory bias against women.

The organization ProPublica, and in particular the journalist Julia Angwin, has been responsible for revealing how algorithms can be “black boxes” that introduce racist biases into judicial decisions. In an article published on October 21, 2018, the MIT Technology Review reported that several human rights groups in the United States have spoken out against the creation of “a massive database containing the names and personal data of at least 17,500 people suspected of being involved in criminal gangs.”

The Data & Society document presents examples of how artificial intelligence projects developed and deployed by companies like Facebook, governments like China’s and even universities like Stanford can involve systematic violations of human rights such as the rights to non-discrimination, privacy, political participation and freedom of expression. “Stanford University researchers trained a deep neural network to predict the sexual orientation of their study subjects, without obtaining their consent, using a set of images collected from online dating websites.

Beyond several methodological shortcomings, the research showed how a lack of respect for the right to privacy increases the risks of algorithmic surveillance, in which the data that is collected and analyzed threatens to reveal personal information about users. This may put individuals and groups at risk, particularly those living under regimes that would use such information to repress and discriminate,” the report explains.

How are companies and organizations around the world addressing this problem?

One of the companies implementing this kind of best practice and recommendation for artificial intelligence systems is IBM, through its Trust and Compliance program. Available to customers of the company’s cloud service, the program delivers an indicator report that shows how an artificial intelligence platform is behaving from two points of view: the creation and development of its algorithms, and the data set that has been used to train those algorithms.
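To make the idea of an indicator report more concrete, the sketch below computes one widely used bias indicator, the disparate impact ratio, over a model’s decisions grouped by a protected attribute. It is a minimal illustration in Python with pandas, using assumed column names and a toy data set; it does not reproduce IBM’s actual Trust and Compliance tooling or API.

# Minimal sketch of one bias indicator such a report might include.
# Assumptions: a DataFrame with a binary decision column ("hired") and a
# protected-attribute column ("gender"); these names are illustrative only.
import pandas as pd

def disparate_impact(df, outcome, group, privileged, unprivileged):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group."""
    rate_unprivileged = df.loc[df[group] == unprivileged, outcome].mean()
    rate_privileged = df.loc[df[group] == privileged, outcome].mean()
    return rate_unprivileged / rate_privileged

# Toy data standing in for a recruiting model's decisions.
decisions = pd.DataFrame({
    "gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
    "hired":  [0,   1,   0,   0,   1,   1,   0,   1],
})

ratio = disparate_impact(decisions, "hired", "gender", privileged="M", unprivileged="F")
# A ratio well below 1.0 (many audits use the 0.8 "four-fifths" threshold)
# flags the kind of gender bias seen in the Amazon example above.
print(f"Disparate impact ratio (F vs M): {ratio:.2f}")

An indicator like this, computed over both the training data and the live decisions of a deployed model, is the sort of signal a report of this kind can surface for review.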

Although Trust and Compliance is aimed only at customers of the IBM cloud service and at universities, the program stands out against the measures taken by other technology companies such as Google, which, after facing complaints from its own workers, agreed to discontinue its work on an artificial intelligence project called Project Maven.

“Just a few months later, many Google employees feel that those principles have been pushed aside with an offer for a $10 billion Defense Department contract. A recent study done at North Carolina State University also found that asking software engineers to read a code of ethics does nothing to change their behavior,” says the MIT Technology Review report.

If AI researchers, developers, and designers work to protect and respect fundamental human rights, they could open the way for broad societal benefit. To ignore human rights would be to close that path.


References

HAO, K., Establishing an AI code of ethics will be harder than people think, MIT Technology Review, https://www.technologyreview.com/s/612318/establishing-an-ai-code-of-ethics-will-be-harder-than-people-think/

LATONERO, M., Governing Artificial Intelligence: Upholding Human Rights & Dignity, Data & Society, https://datasociety.net/wp-content/uploads/2018/10/DataSociety_Governing_Artificial_Intelligence_Upholding_Human_Rights.pdf


Dr. Raul V. Rodriguez

Dean at Woxsen School of Business. He is a registered expert in Artificial Intelligence, Intelligent Systems and Multi-agent Systems at the European Commission, and has been nominated for the Forbes 30 Under 30 Europe 2020 list.