Google has made a significant change to the artificial intelligence principles it set out in 2018. At the time, the company committed not to use artificial intelligence in weapons systems or surveillance technologies, but that commitment has been removed in the latest update. The former ‘applications we will not pursue’ section stated that Google would avoid weapons and other technologies designed to directly harm people, as well as surveillance technologies that violate internationally accepted norms.
Google can now officially use artificial intelligence in weapons
That pledge has now been replaced by a broader, open-ended framework of ‘responsible development and deployment’.
In its new guidelines, Google says it will apply appropriate human oversight, rigorous review processes and feedback mechanisms when developing and deploying AI technologies. The change signals that the company has abandoned its previous approach in favour of a more flexible, broadly worded policy, reigniting ethical debates and drawing strong reactions across the technology world.
Google’s past reluctance to take on military projects traces back to the Project Maven controversy in 2018, which triggered strong internal resistance. In 2021, however, the company signed a large-scale cloud computing agreement with the Pentagon and began moving back toward military work. Since the start of the year, reports have circulated that Google is seeking to deepen its cooperation with the Israeli Ministry of Defence. These developments suggest the company is taking a more flexible approach to using artificial intelligence for military and security purposes.