I was digging into the issue of AI in modern weapons for a Model UN project and stumbled upon something that left me speechless. According to investigative reporting by +972 Magazine, Israel's military has used an AI system that combs mass surveillance data to mark individuals as suspected militants and feed them onto strike lists, with minimal human oversight before each attack.
It’s shocking to me that this isn’t getting more attention, especially when we’re so focused on AI-generated art and advertisements. Don’t get me wrong, those are important issues, but this is a matter of life and death.
The systems, called "The Gospel" (which generates building and infrastructure targets) and "Lavender" (which marks individual people as suspected militants), already drive lethal strikes with minimal human intervention. Israeli intelligence officers quoted in the reporting described their own role as little more than a rubber stamp, spending only about 20 seconds reviewing each target.
It's frightening that AI-generated commercials provoke more public concern than targeting systems that are already helping to kill people. It's time to refocus our attention on the real risks AI poses to human life.
We need a more nuanced conversation about AI and its applications, rather than just getting caught up in flashy headlines. Warfare is already changing, and we need to understand the implications.