- Created by the Eticas Foundation, the Observatory of Social Impact Algorithms (OASI) collects information on dozens of algorithms used by public administrations and companies around the world to learn more about their social impact.
- The aim is to give the public access to information about government and business algorithms: who uses them, who develops them, what threats they pose and whether they have been audited, among other characteristics.
- Algorithmic bias and discrimination usually occur on the basis of age, gender, race or disability, among other attributes, but due to the general lack of transparency it is still not possible to know all of their consequences for the groups affected.
The Eticas Foundation, a non-profit organization that promotes the responsible use of algorithms and artificial intelligence (AI) systems, created the Observatory of Social Impact Algorithms (OASI). This Observatory features a search engine to learn more about the tools that make important automated decisions about citizens, consumers and users around the world.
Currently, both businesses and governments automate decisions using algorithms. However, their development and deployment are not subject to external quality controls, nor are they as transparent as they should be, which leaves the population unprotected. With this search engine, everyone can find out more about these algorithms: who developed them, who uses them, their scope, whether they have been audited, their objectives, their social impact and the threats they represent.
At present, OASI catalogues 57 algorithms and expects to reach 100 in the coming months. Of these, 24 are already in use in the United States by government bodies and big tech companies. Examples include ShotSpotter, an algorithmic tool deployed by the Oakland Police Department to combat and reduce gun violence through sound-monitoring microphones, and an algorithm used by Allegheny County, Pennsylvania, to predict potential child abuse and neglect. Another example from the business side is Rekognition, Amazon’s facial recognition system, which was audited by the MIT Media Lab in early 2019 and found to be significantly less accurate at identifying the gender of individuals who were female or darker-skinned.
The most common forms of discrimination are based on age, sex, race or disability, and are unintentionally produced by developers who lack the socio-economic training to understand the impact of this technology. These engineers design the algorithms purely on the basis of technical skills, and because there are no external controls and the system appears to work as intended, the algorithm continues to learn from deficient data.
Faced with the lack of transparency about how some of these algorithms operate, the Eticas Foundation, in addition to launching OASI, is developing a project of external audits. The first is of VioGén, the algorithm used by the Spanish Ministry of the Interior to assign a risk level to women seeking protection after experiencing domestic violence. Eticas will carry out the external audit through reverse engineering and the analysis of administrative data, interviews, reports and design scripts, in order to collect results at scale and identify opportunities to improve the protection of these women.
“Despite the existence of algorithmic control and audit methods to ensure that technology respects the regulations in force and fundamental rights, the Administration and many companies continue to turn a deaf ear to the demands for transparency from citizens and institutions,” said Gemma Galdon, founder of the Eticas Foundation. “In addition to OASI, after several years in which we have carried out more than a dozen audits for companies such as Alpha Telefónica, the United Nations, Koa Health or the Inter-American Development Bank, we have also published a Guide to Algorithmic Auditing so that anyone can perform them. The aim is always to raise awareness, bring transparency and restore confidence in technology, which in itself need not be harmful.”
Algorithms trained with machine learning techniques use large amounts of historical data to “teach” them to choose based on past decisions. These data are usually not representative of the socio-economic and cultural reality to which they are applied, and on many occasions they reflect an unfair situation that should not be perpetuated. In this way, the algorithm technically makes “correct” decisions based on its training, even though its recommendations or predictions are biased or discriminatory.
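To illustrate the mechanism described above, here is a minimal, hypothetical sketch (not based on any system mentioned in this article, and using invented data and feature names): a model trained on historical decisions that already encode discrimination will faithfully reproduce that discrimination, even though it is “correct” with respect to its training data.

```python
# Hypothetical sketch: a classifier trained on biased historical decisions
# learns to reproduce the bias rather than the underlying merit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Invented historical hiring data: "group" is a protected attribute (0/1),
# "score" is a merit-related feature. Past decisions favoured group 0
# regardless of score, so the labels encode discrimination, not merit.
group = rng.integers(0, 2, n)
score = rng.normal(0, 1, n)
hired = ((score > 0) & (group == 0)) | ((score > 1.5) & (group == 1))

X = np.column_stack([group, score])
model = LogisticRegression().fit(X, hired.astype(int))

# Two applicants with identical scores but different group membership:
applicants = np.array([[0, 0.5], [1, 0.5]])
print(model.predict_proba(applicants)[:, 1])
# The predicted probability differs only because of the protected attribute:
# the model has learned the historical bias from the training data.
```

This is exactly the failure mode external audits aim to surface: without testing outcomes across groups, the system appears to “work as intended” while perpetuating the unfairness present in its training data.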
About the Eticas Foundation
The Eticas Foundation works to translate the principles that guide society, such as equal opportunities, transparency and non-discrimination, into the technical specifications of the technologies that make automated decisions about our lives. It seeks a balance between evolving social values, the technical possibilities of the latest advances and the legal framework. To do this, it audits algorithms, checks that legal guarantees are applied in the digital world, particularly to Artificial Intelligence, and works intensively to raise awareness of the need for responsible, high-quality technology.