The Globe and Mail reports in its Friday edition that some technology workers worry the military's increasing use of artificial intelligence is pushing a Terminator scenario closer to reality. The Globe's Gus Carlson writes that employees at several major technology companies, including Alphabet and OpenAI, are demanding stricter limits on the U.S. military's use of AI in warfare, along with more transparency about the work their employers do for the government. Groups such as "No Tech For Apartheid" fear that weapons, and the decision-making process directing them, could eventually bypass human oversight completely; after all, greater autonomy is what advanced machine-learning systems are designed to achieve. Despite the recent rift between the U.S. government and Anthropic, Claude -- embedded in Palantir's Maven Smart System -- had reportedly been vital to operations related to Iran. The system shortened the "kill chain": identifying targets, assisting in the approval process and launching a strike. It is unclear how the U.S. government's use of Claude squares with its decision to ban Anthropic for refusing to allow its technology to be used for mass surveillance and fully autonomous weapons.
© 2026 Canjex Publishing Ltd. All rights reserved.