
Vía Libre Foundation at EACL 2023

EDIA

Can artificial intelligence have biases and stereotypes?

Yes. That’s why we created EDIA, a tool that enables you to uncover these biases without needing technical knowledge.

The AI ethics team presented their work at the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL). Laura Alonso Alemany and Luciana Benotti demonstrated methods and tools aimed at simplifying the auditing of bias and discriminatory behavior in Natural Language Processing systems, taking into account the particular characteristics of our region. They showcased the outcomes of the EDIA project, including a prototype, a library, and several collaborations with interdisciplinary teams, shedding light on strategies for tackling such issues with a particular emphasis on local needs. Additionally, Luciana Benotti conducted a tutorial alongside the leadership of the ethics committee of the Association for Computational Linguistics (ACL).
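EDIA itself is designed so that no technical knowledge is required, but the kind of audit it supports can be illustrated with a small sketch. The code below is not EDIA's actual interface or library; it is a hypothetical, simplified word-association probe over toy vectors, showing how a word's proximity to two attribute sets (here, stereotypically gendered terms) can expose bias in word embeddings.

```python
import numpy as np

# Toy word vectors, invented purely for illustration.
# A real audit would load embeddings trained on a large corpus.
vectors = {
    "enfermera": np.array([0.9, 0.1, 0.3, 0.0]),
    "ingeniero": np.array([0.1, 0.9, 0.2, 0.1]),
    "ella":      np.array([0.8, 0.2, 0.1, 0.0]),
    "mujer":     np.array([0.9, 0.1, 0.0, 0.1]),
    "él":        np.array([0.1, 0.8, 0.1, 0.0]),
    "hombre":    np.array([0.2, 0.9, 0.0, 0.1]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_bias(word, female_terms, male_terms):
    """Mean similarity to the 'female' attribute set minus the 'male' one.
    A positive score means the word sits closer to the female terms."""
    w = vectors[word]
    f = np.mean([cosine(w, vectors[t]) for t in female_terms])
    m = np.mean([cosine(w, vectors[t]) for t in male_terms])
    return f - m

for word in ["enfermera", "ingeniero"]:
    score = association_bias(word, ["ella", "mujer"], ["él", "hombre"])
    print(f"{word}: {score:+.3f}")
```

With embeddings trained on real text, a consistent gap between, say, profession words and gendered attribute sets is the kind of stereotype the tool lets non-technical users surface and discuss.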

The Association for Computational Linguistics (ACL) is a worldwide organization divided into three chapters: the Americas chapter, whose executive committee is chaired by Luciana Benotti of Vía Libre; the European chapter, which hosted this conference; and the Asian chapter, established three years ago and growing rapidly. The EACL, the conference of the European Chapter of the Association for Computational Linguistics, has become a pivotal gathering for academics and professionals in natural language processing, one of the largest fields within AI, which encompasses applications such as chatbots, machine translation, and internet search engines.

At Vía Libre Foundation, we believe in the importance of engaging in conferences to raise awareness about the realities faced by the global south and developing countries, and to highlight the potential impacts of AI, particularly in task automation and our economies. These conferences are where decisions are made regarding which topics to invest in and which areas to investigate. Conducting research is crucial for understanding the impacts and risks associated with these technologies. Furthermore, given the rapid pace of advancements in the field, it is beneficial to network and exchange ideas with colleagues who share similar backgrounds. This facilitates the acquisition of knowledge not only through academic papers but also through informal conversations, allowing for more open sharing of insights and reflections without the constraints of formal records.

The EACL conference took place over four days in a hybrid format. The first two days featured the main conference, with a poster and presentation dynamic. The third day was devoted to the full-day Cross-Cultural Considerations in NLP (C3NLP) workshop, featuring presentations by researchers from different parts of the world. The workshop was co-organized by Luciana Benotti, Vinod Prabhakaran and Sunipa Dev from Google Research, Dirk Hovy from the University of Milan, and David Adelani from the University of London.

Laura Alonso Alemany was in charge of the EDIA presentation.

“It was very well received, and the participation was very fruitful. It allowed us to revisit topics, delve deeper, converse, and reconsider how they play out in our problems, such as the agency of informants (or speakers) in developments involving natural language: the different roles, the hierarchy between machine and person, and the cultural factors that do or do not introduce bias. We also received feedback on our paper and panel presentation,” Laura shared with the rest of the Foundation team.

After the presentation, questions and discussions were opened. Several revolved around how bias interacts with nationality in terms of cultural baggage. After her return, Luciana shared:

“One question that I found very challenging is: How do you incentivize people to use tools like EDIA? I shared our method: workshops organized through a snowball approach, connecting with people we know. So those who explore the tool in these workshops are specialists in different areas, for example in the social sciences, but also in nutrition, with researchers particularly interested in obesity, as well as experts on youth, unemployment, the queer community, and non-binary people. We found a community that was motivated by what we were working on, and that is how they came to explore the tool. I think one of the major issues in the field of natural language processing is the employment of people in precarious conditions for data collection. These individuals are poorly compensated and lack adequate incentives. This is the invisible labor behind AI, that of the so-called crowd workers. They don’t clearly understand the purpose of the work and ultimately produce lower-quality data.”

On the final day of the EACL conference, Luciana Benotti conducted a tutorial on writing the ethical considerations sections of scientific articles in natural language processing, and on reviewing such articles from an ethical perspective. She collaborated with the ethics committee leadership of the ACL, where she represents the Americas, alongside Yulia Tsvetkov from the University of Washington, Karen Fort from La Sorbonne in France, and Min-Yen Kan from the National University of Singapore. The tutorial ran from 9 am to 1 pm and featured specific case studies presenting ethical reviews of natural language processing articles, followed by group discussions on what had been handled well and what was still missing.

Throughout the conference, the team interacted with other researchers who felt that the field was going through an evaluation crisis. Laura Alonso Alemany pointed out:

“So far we’ve been evaluating in only one way, but because of how the field is evolving, this way is no longer sufficient or adequate for the kinds of problems we’re facing, such as dialogue agents like ChatGPT (which isn’t the only one). So it was good to talk to other researchers and validate that negative or non-propositional results are indeed more difficult to socialize. Informal conversations allowed us to confirm that we are facing an evaluation crisis, which also entails a crisis of objectives in our field, because objectives must necessarily be accompanied by contrastable evaluation; the methodology we’re using is very clear in that sense. We saw several proposals, none definitive, but different, for moving forward with other ways of evaluating in these new contexts we find ourselves in.”

Luciana added:

“This tutorial, which we organized through the Association for Computational Linguistics, followed a survey we conducted among its members. Out of about 8,000 people worldwide, more than half had not taken an ethics course or thought about the risks of these technologies. We believe that those researching in these areas must get involved in describing the risks, because only by knowing the limitations can action be taken. Something concrete, for example: what percentage of women is reflected in a dataset used to train a model? Having precise information about the demographic or cultural dimensions of the training data, as well as specifying the algorithms used, is crucial for understanding and predicting which individuals these systems will make errors on. Because systematic mistakes are discrimination; that’s the definition of discrimination we work with in AI. So initiating training for researchers in the field of natural language processing is crucial. It ensures that those familiar with the intricacies of the technology and its constraints engage in a dialogue about risks with the developers of these systems.”
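To make the idea of "systematic mistakes" concrete, here is a minimal sketch of the kind of disaggregated check the quote points to: measuring error rates per demographic group instead of a single overall accuracy. The records, group names, and labels below are invented for illustration; in practice they would come from documented demographic dimensions of the dataset and a model's actual predictions.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, true_label, predicted_label).
records = [
    ("women", 1, 1), ("women", 1, 0), ("women", 0, 0), ("women", 1, 0),
    ("men",   1, 1), ("men",   0, 0), ("men",   1, 1), ("men",   0, 0),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, true_label, predicted in records:
    totals[group] += 1
    errors[group] += int(true_label != predicted)

total_examples = sum(totals.values())
for group in totals:
    share = totals[group] / total_examples
    rate = errors[group] / totals[group]
    print(f"{group}: {share:.0%} of the data, error rate {rate:.0%}")
```

A large, consistent gap between groups, rather than one bad prediction, is the systematic error that the quote identifies with discrimination, and it only becomes visible when the data's demographic composition is documented and the evaluation is broken down by group.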

Being able to participate in this conference as representatives of ethics in AI is essential. There is widespread resistance and little willingness to push for an understanding of the social impact of these technologies. We know these issues are not naturally discussed at such meetings, so it is our right, but also our obligation, as the Foundation’s ethics team, to bring them to the table.

 
