An AI built to help humanity is now accused of helping wage war

When an AI company founded on the promise of “benefiting humanity” signs a deal with the military, backlash is inevitable — and that is exactly the firestorm Sam Altman now finds himself in.

OpenAI sparked outrage after announcing a new agreement with the U.S. Department of Defense to deploy its AI systems for military use.

Many users saw the move as a betrayal of the company’s original mission, accusing it of helping “train a war machine.” In protest, scores of ChatGPT subscribers publicly canceled their memberships and pledged to switch platforms.

The controversy was amplified by rival Anthropic, which reportedly refused similar Pentagon demands for unrestricted military access to its Claude AI. That stance earned Anthropic a wave of goodwill, with Claude briefly overtaking ChatGPT in App Store rankings.

Altman scrambled to contain the damage through a public Q&A, insisting OpenAI would refuse unconstitutional orders, including mass domestic surveillance, even if it meant jail time. Still, critics questioned his faith in government assurances and accused him of downplaying legitimate civil liberties concerns.

Ultimately, Altman admitted the deal was rushed and conceded that “the optics don’t look good.”

What began as a defense contract has quickly turned into one of OpenAI’s most serious public relations crises, raising deeper questions about the role of AI companies in modern warfare.
