OpenAI just updated its policy.
👉 They’re scanning conversations for signs of violence
👉 If flagged, messages go to human reviewers
👉 If deemed a real threat, law enforcement can be alerted
They say self-harm conversations won’t trigger police reports. But where that line falls isn’t clear. What exactly gets flagged? Who decides? And how far will this expand?
This comes after multiple tragedies tied to AI interactions. Now OpenAI is stepping in, but at the cost of user privacy.
It raises big questions:
– Where’s the line between protection and surveillance?
– Could this create a chilling effect on what people feel safe to share?
– How much trust will users lose when their private chats might not be private?
Safety or control?
That’s the debate now surrounding AI.
