Google announced Thursday it would not allow its artificial intelligence software to be used in weapons or unreasonable surveillance efforts under new standards for its business decisions in the nascent field.

The Alphabet Inc (GOOGL.O) unit said the restriction could help Google management defuse months of protest by thousands of employees against the company's work with the US military to identify objects in drone video.

Chief Executive Sundar Pichai said in a blog post: "We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas," such as cybersecurity, training, or search and rescue.

Pichai set out seven principles for Google's application of artificial intelligence, or advanced computing that can simulate intelligent human behavior. He said Google is using AI "to help people tackle urgent problems" such as predicting wildfires, helping farmers, diagnosing disease or preventing blindness, AFP reported.

"We recognize that such powerful technology raises equally powerful questions about its use," Pichai said in the blog. "How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right."

He added that the principles also called for AI applications to be "built and tested for safety," to be "accountable to people" and to "incorporate privacy design principles."

The move came after the potential of AI systems to pinpoint drone strikes better than military specialists, or to identify dissidents through mass collection of online communications, sparked concerns among academic ethicists and Google employees, according to Reuters.

Several technology firms have already agreed to general principles on using artificial intelligence for good, but Google appeared to offer a more precise set of standards.