Timnit Gebru, the artificial intelligence researcher fired by Google, thinks a new law is needed

When she co-led Google’s Ethical AI team, Timnit Gebru was a leading insider voice challenging the tech industry’s approach to artificial intelligence.

That was before Google pushed her out of the company more than a year ago. Now Gebru is trying to make changes from the outside as the founder of the Distributed Artificial Intelligence Research Institute, or DAIR.

Born to Eritrean parents in Ethiopia, Gebru recently spoke with The Associated Press about how Big Tech’s AI priorities — and its AI-powered social media platforms — are affecting Africa and beyond. Her new institute focuses on AI research from the perspective of the places and people most likely to suffer harm.

She is also a co-founder of the group Black in AI, which promotes Black employment and leadership in the field. And she is known for co-authoring a landmark 2018 study that uncovered racial and gender bias in facial recognition software. The interview has been edited for length and clarity.

Q: What was the impetus for DAIR?

A: After I got fired from Google, I knew I would be blacklisted by a whole bunch of big tech companies. And at the ones where I wouldn’t be, it would just be very difficult to work in that kind of environment. I just wasn’t going to do that anymore. When I decided to (create DAIR), the very first thing that came to mind was that I wanted it to be distributed. I’ve seen how people in some places just can’t influence the actions of tech companies or the course that AI development takes. If there is AI to be built or researched, how do you do it well? You want to involve communities that are usually on the margins so that they can benefit. And when there are cases where it shouldn’t be built, we can say, “Well, that shouldn’t be built.” We are not approaching the issue from the perspective of technological solutionism.

Q: What are the most concerning AI applications that deserve further investigation?

A: What’s so depressing to me is that even with the applications where so many people now seem more aware of the harms, those harms are going up instead of down. There has been talk for a long time about facial recognition and the surveillance built on this technology. There have been some victories: a number of cities and municipalities have banned the use of facial recognition by law enforcement, for example. But then governments use all these technologies that we warned against — first in war, then to keep out the refugees that war creates. So at the US-Mexico border, you’ll see all kinds of automated things you’ve never seen before. The main way we use this technology is to keep people out.

Q: Can you describe some of the projects that DAIR is pursuing that might not have happened elsewhere?

A: One of the things we focus on is the process by which we do this research. One of our first projects uses satellite imagery to study spatial apartheid in South Africa. Our researcher (Raesetje Sefala) is someone who grew up in a township. She is not an outsider coming in to study some other community; she is doing work that is relevant to her own community. We are working on visualizations to understand how to communicate our results to the general public, and we think carefully about whom we want to reach.

Q: Why the focus on distribution?

A: Technology is affecting the whole world right now, and there is a huge imbalance between those who produce it and influence its development, and those who feel its damage. Speaking of the African continent, it is paying a huge cost for climate change that it did not cause. And then AI technology is used to keep climate refugees out. It’s just a double jeopardy, isn’t it? In order to reverse this, I think we need to make sure we advocate for people who are not at the table, who are not leading this development and influencing its future, so that they have the opportunity to do so.

Q: What got you interested in AI and computer vision?

A: I didn’t make the connection between being an engineer or a scientist and, you know, wars or labor issues or anything like that. For much of my life, I was just thinking about subjects that I liked. I was interested in circuit design. And then I also liked music. I played the piano for a long time so I wanted to combine a number of my interests. And then I found the audio group at Apple. And then, when I was coming back to do a master’s and a doctorate, I took a course on image processing that touched on computer vision.

Q: How has your Google experience changed your approach?

A: When I was at Google, I spent a lot of my time trying to change people’s behavior. Like, they were having a workshop and they only had men – like 15 of them – and I would just email them, “Look, you can’t have a workshop like this.” I now spend more energy thinking about what I want to build and how to support people who are already on the right side of a problem. I can’t spend all my time trying to reform others. There are a lot of people who want to do things differently, but just aren’t in a position of power to do so.

Q: Do you think what happened to you at Google prompted a closer look at some of your concerns about large language models? Could you describe what they are?

A: Part of what happened to me at Google was related to a paper we wrote about large language models — a type of language technology. Google Search uses them to rank queries and for the Q&A boxes you see, and they power machine translation, autocorrect, and a whole bunch of other things. We were seeing this rush to adopt bigger and bigger language models, with more data and more computing power, and we wanted to warn people about that rush and get them to think about the potential negative consequences. I don’t think the paper would have made waves if they hadn’t fired me. I am glad that it drew attention to the issue; I think it would have been hard to get people thinking about large language models otherwise. I mean, I wish I hadn’t been fired, obviously.

Q: In the United States, are you looking for action from the White House and Congress to reduce some of the potential harm from AI?

A: At the moment, there are simply no regulations. I wish some sort of law would force tech companies to prove to us that they are doing no harm. Every time they introduce a new technology, the onus is on citizens to prove it is harmful, and even then we have to fight to be heard. Many years later, maybe there is talk of regulation — and by then the tech companies have moved on. That is not how pharmaceutical companies operate. They wouldn’t just go unrewarded for not looking into (the potential harm) — they would be punished for not looking. We need to have that kind of standard for tech companies.