Timnit Gebru, the co-leader of Google’s ethical artificial intelligence team, said she was fired for sending an e-mail that management deemed “not in line with the expectations of Google managers.”
The email and dismissal were the culmination of a week of debate over the company’s request that Gebru retract an AI ethics paper she had written with six co-authors, including four Google employees, which had been submitted to an industry conference for consideration next year, Gebru said in an interview on Thursday. If she was unwilling to retract the paper, Google wanted her to at least remove the names of the Google employees.
Gebru responded to Megan Kacholia, a vice president in Google’s research division, telling her that she planned to resign after a transition period unless there was further discussion about how the paper was handled. She also wanted assurances about what would happen to similar research projects her team might conduct in the future.
She said: “We’re a team called Ethical AI; of course we’re going to write about problems with AI.”
Around the same time, Gebru wrote to an internal email group for company researchers called Google Brain Women and Allies, commenting on a report others were preparing about the small number of women Google had hired during the pandemic. Gebru said that, based on her experience, such documents were unlikely to have any effect because no one at Google would be held accountable. She cited her experience with the AI paper submitted to the conference and linked to it in her message.
“Please stop writing these documents, because it makes no difference,” Gebru wrote in the email, which was obtained by Bloomberg News. “Nothing will be achieved by more documents or more conversations.” The email was published earlier by the technology writer Casey Newton.
The next day, Gebru said, she was dismissed by email: Kacholia sent a message saying Google could not meet her demands and respected her decision to leave the company. The email continued: “Certain aspects of the email you sent last night to non-management employees in the Brain group reflect behavior that is inconsistent with the expectations of a Google manager.” Gebru said in a tweet that Jeff Dean, the head of Google’s AI division, was involved in her removal.
I was fired by @JeffDean over an email I sent to Brain Women and Allies. My corporate account has been cut off. So I’ve been fired immediately 🙂
-Timnit Gebru (@timnitGebru) December 3, 2020
“It’s the most fundamental silencing,” Gebru said in an interview about Google’s actions on her paper. “You don’t even have your scientific voice.”
A Google representative in Mountain View, California, did not respond to multiple requests for comment.
The Google protest group Google Walkout For Real Change published a petition on Medium in support of Gebru. It has collected more than 400 signatures from company employees and more than 500 from academics and industry professionals.
The research paper in question deals with potential ethical issues raised by large language models, an area that OpenAI, Google, and other companies are working on. Gebru said she didn’t know why Google objected to the paper; it had been approved by her manager and shared with others at Google for comment.
Gebru said Google requires researchers’ publications to be approved in advance, and the company told her this paper did not follow the proper process. The paper was submitted to the ACM Conference on Fairness, Accountability, and Transparency, which Gebru co-founded and which takes place in March.
A copy of the paper shows that it lays out the risks of large language models, which can be used to train algorithms that, for example, write tweets, answer trivia questions, and translate poetry. These models are trained by analyzing language from the internet, the paper notes, which does not reflect the large portions of the global population that are not yet online. Gebru highlights the risk that the models will reflect only the worldview of people privileged enough to be represented in the training data.
Gebru, an alumna of the Stanford Artificial Intelligence Laboratory, is one of the leading voices on the ethical use of AI. She is known for a landmark 2018 study that showed facial recognition software misidentified dark-skinned women as much as 35% of the time, while the technology was highly accurate on white men.
She has also been a blunt critic of the lack of diversity and the unequal treatment of Black workers at technology companies, particularly at Alphabet Inc.’s Google. She said she believes her dismissal was meant to send a message to other Google employees not to speak out.
Under fire
Tensions in Google’s research division were already high. After the death in May of George Floyd, a Black man killed while being arrested by white police officers in Minneapolis, the division held an all-hands meeting at which Black Google employees shared their experiences at the company. Attendees said many broke down in tears.
Gebru revealed her dismissal in a series of tweets Wednesday evening and was met with support from some of her Google colleagues and others in the field.
Gebru is “the reason many next-generation engineers, data scientists, and more are inspired to work in technology,” wrote Rumman Chowdhury, who was previously head of AI at Accenture and now runs a company she founded, Parity.
Google, along with other American technology giants including Amazon and Facebook, has been criticized by lawmakers over bias and discrimination and has been questioned at several committee hearings in Washington.
A year ago, Google fired four employees for violating its data security policies, dismissals that highlighted escalating tensions between management and activist workers at a company long regarded for its open corporate culture. At the time, Gebru took to Twitter in support of the fired workers.
Gebru’s dismissal comes as the internet search giant faces a complaint from the National Labor Relations Board alleging illegal surveillance, interrogation, or suspension of workers.
Earlier this week, Gebru asked on Twitter whether anyone was working on regulation to protect ethical AI researchers, similar to whistleblower protections. “With the amount of censorship and intimidation directed at people in specific groups, how can anyone trust that any real research in this area can take place?” she wrote.
Is anyone working on regulation protecting ethical AI researchers, similar to whistleblower protection? Because with the amount of censorship and intimidation directed at people in specific groups, how does anyone trust that any real research in this area can take place?
-Timnit Gebru (@timnitGebru) December 1, 2020
Gebru is a rare public critic from within the company. In August, she told Bloomberg News that Black Google employees who speak out are criticized even as the company holds them up as examples of its commitment to diversity. She described colleagues and managers trying to police her tone, making excuses for harassment or racist behavior, or ignoring her concerns.
When Google considered having Gebru manage another employee, she said in the August interview, her outspokenness on diversity was held against her: there were concerns about whether she could manage others if she was so unhappy. “People don’t know the severity of the problems because you can’t talk about them, and when you do, you’re the problem,” she said at the time.
—With assistance from Helene Fouquet and Gerrit De Vynck.
©2020 Bloomberg L.P.