Nearly three weeks after the abrupt departure of Black artificial intelligence ethics researcher Timnit Gebru, more details are emerging about the new policies Google has put in place for her research team.
After reviewing internal communications and speaking with researchers affected by the rule change, Reuters reported on Wednesday that the technology company had recently added a "sensitive topics" review process for its scientists' papers, and that on at least three occasions Google had asked scientists to refrain from casting its technology in a negative light.
Under the new procedure, scientists must consult with special legal, policy, and public relations teams before pursuing AI research on so-called sensitive topics, which can include facial analysis and categorizations of race, gender, or political affiliation.
In one example reviewed by Reuters, scientists studying the recommendation AI used to populate user feeds on sites like YouTube, a Google-owned property, drafted a paper noting that the technology could promote "misinformation, discriminatory or otherwise unfair results" and "insufficient diversity of content," as well as "political polarization." After review by a senior manager, the final publication struck a far more positive tone, suggesting instead that the systems can promote "accurate information, fairness, and diversity of content."
"Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues," reads an internal web page outlining the policy.
In recent weeks, Google has faced growing scrutiny over potential bias in its internal research division, particularly following the departure of Gebru, a widely respected researcher who was reportedly pushed out after raising concerns about censorship within the research process.
Four staff researchers who spoke with Reuters corroborated Gebru's claims, saying they too believe Google has begun interfering with studies critical of the technology's potential for harm.
"If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we're getting into a serious problem of censorship," said Margaret Mitchell, a senior scientist at the company.
In early December, Gebru said she had been fired by Google after pushing back against an order not to publish research finding that AI capable of mimicking speech could put marginalized populations at a disadvantage.