For an anti-racist artificial intelligence: meet Aqualtune Lab’s work

The use of AI in Brazil primarily affects the black population; the organization offers online training on the subject throughout the country

13.06.24

To alert the public to the relationship between technology and racism, the legal collective Aqualtune Lab has been running online training sessions across Brazil on the effects of Artificial Intelligence on the lives of black people. From facial recognition and social media filters to autonomous weapons, these systems all run on algorithms that simulate human reasoning based on learned patterns.

This is cause for alarm: algorithms can reproduce prejudiced behaviour, influencing decisions that disproportionately harm the black population. The training is aimed at third-sector organizations and draws on both the legal and information technology fields. Aqualtune Lab is also part of the Coalizão Negra por Direitos (‘Black Coalition for Rights’) and the Coalizão Direitos na Rede (‘Rights on the Net Coalition’).

“When we talk about technology, most people don’t object; they think technology is inherently good. But what sets the tone for the use of technology are social relations. The issue carries an air of neutrality that doesn’t exist, and it mainly affects the most vulnerable: the black population,” says lawyer and co-director Clarissa França.

Technology is not neutral

Aqualtune Lab’s training sessions demonstrate that racism is a social phenomenon that renews itself through new technologies, often in very subtle ways. On social media, for example, filters that alter images in the virtual realm have consequences in the real world.

“The filters whiten people; there is no filter for darkening. This reinforces the idea that making someone beautiful means giving them a younger appearance and thinner features, reviving notions we were already on the path to overcoming. It’s a way to maintain hierarchies among people: what’s good and what’s bad, what’s beautiful and what’s ugly,” says Clarissa.

In addition to the training, the organization produces the Documento Preto (‘Black Document’), an analysis of Artificial Intelligence and racism meant to guide the drafting of laws regulating technology. Its recommendations include banning facial recognition in public security, prohibiting autonomous weapons, and exercising caution when using AI to determine access to healthcare services.

“Our police force is one of the deadliest in the world, and most of the time it is the black population that racism victimizes. That’s why we’re against facial recognition and the deployment of autonomous weapons. Using AI to determine who will have access to a hospital bed, medication, and so on also entails great risks for this population,” she warns.

Aqualtune Lab

Clarissa França is a lawyer specialising in Health Law whose activism stems from her involvement in the black student movement. A native of Sergipe, a state in the Northeast of the country, she entered Brazil’s first class admitted under racial quotas, at the Faculdade de Direito da Universidade do Estado do Rio de Janeiro (UERJ) (‘Law School of the State University of Rio de Janeiro’). “UERJ wasn’t ready to receive us, so we formed a collective of black students called Denegrir. That was in 2003. I graduated and returned to Aracaju. Since then, I’ve always been involved in the black women’s movement”.

She explains that shortly after the publication of the Lei Geral de Proteção de Dados Pessoais (LGPD) (‘General Data Protection Law’) in 2018, she realized that discussion of the subject was limited to a small portion of the population and did not include black people. “I thought: why aren’t we discussing this? So I got together with some people and started talking. We created a group named Proteção de Dados e Racismo (‘Data Protection and Racism’) and ended up meeting people like Tarcízio Silva and Bianca Kremer, specialists in this debate”.

“That was in 2020, during the pandemic, a moment of many political problems, several setbacks, and much being undermined by the logic of the internet and the use of data. And at that moment, just like today, there were pragmatic issues. The black population had to survive, work, eat… But increasingly, all these issues involving fundamental rights run through technology. That’s when we set up Aqualtune Lab”, she recalls.


Data Protection and Internet Access 

For Clarissa, the lack of information about the use of technologies, the way generated data is used, and the lack of universal internet access in Brazil are some of the problems associated with technology that need to be addressed. It is necessary, for example, to regulate the right of the population to be informed about the use of images and data.

A well-known case is when we provide our CPF (Cadastro de Pessoas Físicas, the Brazilian individual taxpayer registry number) to get a discount at the pharmacy, and this data is sold without consent to companies engaged in targeted advertising. Or when we walk down a street monitored by cameras without knowing they exist. Or again, when those cameras promise to ensure security but in fact film the population’s consumption habits, which are later sold.

“Today, the economy is based on data, and we are putting all our data on the platforms of foreign companies. We have no control over what is done with this data, and it will probably be used to create new products that are sold back to us,” she warns.

Artificial Intelligence and Racism 

According to the lawyer, it has become increasingly common for overt surveillance to be conducted by AI in the name of security, without effective results. “Violence rates stem much more from inequality and living conditions than from repressive technologies. So the promise of security is a fallacy. In fact, it only violates the population’s rights to privacy and freedom”.

Clarissa points out that facial recognition errors, for example, occur most often with black individuals. The technology has a high error rate because it was developed on databases drawn from more homogeneous populations. “Brazil did not develop this technology; we imported a technology made in Germany”.
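The mechanism she describes can be seen in a toy experiment. The sketch below is a minimal illustration with synthetic data, not any real facial recognition system; the make_group helper and its shift parameter are hypothetical stand-ins. A simple classifier is trained on data dominated by one group, and the error rate is then measured separately for each group: the underrepresented group fares markedly worse.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Hypothetical helper: two numeric features stand in for image data.
    # The true decision rule has the same shape for both groups but sits
    # at a different threshold, so one linear model cannot fit both well.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Imbalanced training data: 950 samples from group A, only 50 from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced evaluation: measure the error rate per group on fresh samples.
for name, shift in [("group A (overrepresented)", 0.0),
                    ("group B (underrepresented)", 2.0)]:
    Xt, yt = make_group(2000, shift)
    print(f"{name}: error rate = {1 - model.score(Xt, yt):.1%}")
```

Real systems are vastly more complex, but the pattern, worse performance on groups underrepresented in the training data, is the one Clarissa is pointing to.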

She recalls a case at a football stadium in Sergipe, where she lives. “A black man was arrested in the middle of the game by facial recognition that later turned out to be flawed. He was handcuffed and escorted out by police, yet he had committed no crime. One detail: at no point was the crowd informed that people were being monitored”, highlights the co-director of Aqualtune.

Want to support this cause? 

Aqualtune Lab’s headquarters are in Rio de Janeiro, but the organization operates as a network, with engaged individuals throughout Brazil. To donate or volunteer, visit the website and follow the organization on Instagram, LinkedIn and Facebook.

Maíra Carvalho
Journalist and anthropologist, Maíra is responsible for reporting and writing articles for Lupa do Bem.