Google has removed a pledge from its artificial intelligence (AI) principles that said it would not use the technology to develop weapons.
The company's guidelines for the use of AI have been rewritten, and the section promising not to develop technology that may "cause harm" no longer appears.
The principles now have a section on "responsible development and deployment", where Google says it will implement "appropriate human oversight" to stay aligned with international law.
Senior vice president James Manyika and Google AI chief Demis Hassabis said the guidelines were updated because they were first published in 2018 and AI has "evolved rapidly" since then.
There is an ongoing debate about how AI should be monitored and regulated. International summits have seen countries and tech firms sign non-binding agreements to develop AI "responsibly", but no binding international law on the issue is yet in place.

Meanwhile, Google is also scrapping its goal to hire more employees from underrepresented groups and is reviewing some of its diversity, equity and inclusion (DEI) initiatives, The Wall Street Journal reported yesterday.
With this, Google joins a slew of US businesses that have been scaling back their diversity initiatives after President Donald Trump curbed DEI in government and federal contractors.