Alphabet’s Google this year tightened control over its scientists’ papers by launching a “sensitive topics” review, and in at least three cases asked authors to refrain from casting its technology in a negative light, according to internal communications and interviews with researchers involved in the work.

Google’s new review procedure asks researchers to consult with the legal, policy, and public relations teams before pursuing topics such as face and sentiment analysis and categorizations of race, gender, or political affiliation, according to an internal page explaining the policy.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” one of the pages for research staff stated. Reuters could not determine the date of the post, though three current employees said the policy began in June.

Google declined to comment on this matter.

Eight current and former employees said the “sensitive topics” process adds a round of scrutiny to Google’s standard review of papers for pitfalls such as the disclosure of trade secrets.

For some projects, Google officials intervened at later stages. Shortly before publication this summer, a senior Google manager reviewing a study on content recommendation technology told the authors to “take great care to strike a positive tone,” according to internal correspondence read to Reuters.

The manager added: “This does not mean we should hide from the real challenges posed by the software.”

Subsequent correspondence from a researcher to the reviewers shows the authors “updated to remove all references to Google products.” A draft seen by Reuters had mentioned Google-owned YouTube.


Four staff researchers, including senior scientist Margaret Mitchell, said they believe Google is starting to interfere with crucial studies of potential technology harms.

“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” Mitchell said.

Google states on its public-facing website that its scientists have “substantial” freedom.

Tensions between Google and some of its employees surfaced this month after the abrupt exit of scientist Timnit Gebru, who with Mitchell led a 12-person team focused on ethics in artificial intelligence (AI) software.

Gebru says Google fired her after she questioned an order not to publish research claiming that AI which mimics speech could disadvantage marginalized populations. Google said it accepted and expedited her resignation. It could not be determined whether Gebru’s paper underwent a “sensitive topics” review.

Google senior vice president Jeff Dean said in a statement this month that Gebru’s paper dwelled on potential harms without discussing efforts under way to address them.

Dean added that Google supports AI ethics scholarship and is “actively working on improving our paper review processes, because we know that too many checks and balances can become cumbersome.”

“Sensitive topics”

The explosion in AI research and development across the technology industry has prompted authorities in the United States and elsewhere to propose rules for its use. Some have cited scientific studies showing that facial analysis software and other AI can perpetuate biases or erode privacy.


Google in recent years has incorporated AI throughout its services, using the technology to interpret complex search queries, decide recommendations on YouTube, and autocomplete sentences in Gmail. Its researchers published more than 200 papers in the past year about developing AI responsibly, among more than 1,000 projects in total, Dean said.

Studying Google services for bias is among the “sensitive topics” under the company’s new policy, according to the internal pages. Dozens of other “sensitive topics” listed include the oil industry, China, Iran, Israel, COVID-19, home security, insurance, location data, religion, self-driving vehicles, telecommunications, and systems that recommend or personalize web content.

The Google paper whose authors were told to strike a positive tone discusses recommendation AI, which services such as YouTube use to personalize users’ content feeds. A draft reviewed by Reuters included “concerns” that the technology can promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content,” as well as lead to “political polarization.”

The final publication instead says the systems can promote “accurate information, fairness, and diversity of content.” The published version, titled “What are you optimizing for? Aligning Recommender Systems with Human Values,” omitted credit to Google researchers. Reuters could not determine why.

A paper this month on AI for understanding a foreign language softened a reference to how the Google Translate product was making mistakes, following a request from company reviewers, a person familiar with the matter said. The published version says the authors used Google Translate, and a separate sentence says part of the research method was to “review and fix inaccurate translations.”


For a paper published last week, a Google employee described the process as a “long haul,” involving more than 100 email exchanges between researchers and reviewers, according to internal correspondence.

The researchers found that the AI could cough up personal data and copyrighted material, including a page from a “Harry Potter” novel, that had been pulled from the internet to develop the system.

A draft described how such disclosures could infringe copyrights or violate European privacy law, a person familiar with the matter said. Following company reviews, the authors removed the legal risks, and Google published the paper.

© Thomson Reuters 2020
