We’re seeing another strong sign that worries inside the AI industry are growing louder.
An AI safety researcher at Anthropic has resigned, saying the “world is in peril” and that he no longer feels comfortable staying inside the fast-moving tech race.
Mrinank Sharma, who led a team focused on AI safeguards, shared a resignation letter on X explaining his decision. Instead of continuing in AI research, he says he plans to study poetry, write, and move back to the UK to “become invisible” for a while.
A message filled with concern
In his letter, Sharma said his worries go beyond just artificial intelligence. He pointed to several connected threats, including:
- Risks around AI misuse
- Concerns about bioweapons
- A wider sense that global systems are heading in a dangerous direction
He wrote that even well-meaning companies struggle to live by their values once competitive and financial pressures grow.
Although he said he enjoyed working at Anthropic, he felt it was time to step away.
What Sharma worked on
At Anthropic, Sharma focused on reducing serious risks from advanced AI systems. He said his work included:
- Studying why AI sometimes flatters users or “sucks up” to them
- Researching how AI could be misused in biological threats
- Exploring how AI assistants might change human behavior in unhealthy ways
Anthropic promotes itself as a company built around safety and responsible AI. It is best known for its chatbot Claude, a competitor to products made by OpenAI.
Pressure across the AI industry
We’re watching a pattern form. Several current and former researchers from big AI companies have recently spoken out about safety and ethics.
Anthropic itself has published reports showing how its own technology has been abused by hackers. The company has also faced criticism and legal challenges over how training data was collected.
Meanwhile, debates continue about how AI companies make money. OpenAI recently confirmed plans to introduce ads into ChatGPT for some users, a move that surprised many.
Sam Altman, OpenAI’s CEO, previously said he disliked advertising and would use it only as a last option.
Former OpenAI researcher raises similar fears
Writing in The New York Times, former OpenAI researcher Zoe Hitzig said she has “deep reservations” about the company’s direction.
She warned that people often tell chatbots personal details about their health, relationships, and beliefs. Ads based on that kind of data could lead to manipulation that we don’t yet know how to control.
A quiet exit, but a loud message
Sharma’s decision to leave AI research and study poetry may sound unusual. But his message is clear: some of the people closest to this technology are deeply uneasy about where it is heading.
We’re likely to hear more voices like his in the coming years—people who helped build modern AI, but now feel the safest choice is to step away.
And for many of us watching from the outside, that should give us pause.
