
Google pledges to not use AI for military weapons, surveillance


In his blog post, Pichai also made clear what sorts of applications Google will not develop.

In his blog post, Pichai said the seven principles laid out Thursday "are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions".

After pressure from its employees, Google officially announced that its AI technology will not be used in weapons. The company also says it will not use its AI for surveillance that violates "internationally accepted norms", a caveat that leaves room for scenarios in which Google could still apply its AI to surveillance.

The mission statement comes after heavy criticism over Google's participation in a clandestine AI program for the Pentagon in the United States, which caused the resignation of multiple employees.

Google's involvement in the US Department of Defense's Project Maven, a project that aimed to deploy AI to analyse footage captured by military drones, had spurred a rebellion inside the company. Google's revised code of conduct now says in its final line: "And remember... don't be evil, and if you see something that you think isn't right - speak up!"

Google AI chief Jeff Dean and more than 3,000 employees signed a petition urging the company to commit to never creating autonomous weapons.

Pichai set out seven principles for Google's application of artificial intelligence, or advanced computing that can simulate intelligent human behavior. The Project Maven contract was reported to be worth less than $10 million to Google, but was thought to have the potential to lead to more lucrative technology collaborations with the military.


Google also says it won't work on anything that contravenes "widely accepted principles" of human rights, though the pledge carries the disclaimer that "we will continue our work with governments and the military in other areas". Later, at least 12 Googlers quit the company over the project.

A recent report claimed that Google won't be renewing its Project Maven contract next year due to the outcry, though leaked emails reportedly revealed that Google's higher-ups were eager for such contracts.

"These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue", he added.

As mentioned above, Google's policy means that it won't work on surveillance outside global norms, but the definition of "international norms" is open to interpretation, and depends on who's doing the interpreting.

The principles follow a conflict inside Google, pitting thousands of employees against management.

In the document, entitled "Artificial Intelligence at Google: Our Principles", the company sets out its objectives for the future of AI.