https://sputnikglobe.com/20250205/google-ais-weaponization-could-help-trigger-flash-wars-escalating-too-quickly-to-stop-1121540251.html
Google AI’s Weaponization Could Help Trigger ‘Flash-Wars’ Escalating Too Quickly to Stop
Sputnik International
Google chipped further away at its defunct “Don’t be evil” motto this week, dropping from its AI development principles a pledge not to use the technology for weaponry or surveillance. A leading independent cybersecurity expert told Sputnik why the move is fraught with grave risks.
2025-02-05T17:11+0000
2025-02-06T16:08+0000
Hilse, who recently authored a book dedicated to these very issues, ‘Dominance on the Digital Battlefield’, also warned of the proliferation threat posed by weaponized AI.
“AI systems may interact with other network-connected infrastructure in unpredictable ways,” veteran independent cybersecurity expert and digital strategy specialist Lars Hilse explained.
This unpredictability “could potentially trigger flash-wars, which escalate too quickly for the human mind to comprehend, and for the human being to intervene,” Hilse said, highlighting the immense risks of handing defense-related issues over to AI to manage.
Humanity is only starting to understand the dangers and “unknown risks” associated with AI’s weaponization, Hilse said. “And particularly in a time where global conflict is imminent, we might want to resort to leaving that Genie in the bottle for now,” he urged.
Why Did Google Do It?
But the observer isn’t surprised by Google’s policy shift, with the “recalibration” dictated by the need “to align with market realities and geopolitical demands” and the “insanely lucrative” nature of the defense market.
Google’s new policy means it will be able to participate in these “highly lucrative defense contracts and government surveillance projects and strengthen its position in the AI race, particularly against their Chinese competitors,” Hilse said.
“The policy shift indicates a broader realignment of Silicon Valley with national defense aspirations and may even suggest that previous ethical barriers to military AI development are being systematically removed industry-wide to allow for quicker reaction to market shifts in this – again – extremely lucrative, and previously unexplored field of business,” the expert summed up.