AI Researchers Resign From Anthropic and OpenAI Citing Safety Concerns

At a glance

  • Mrinank Sharma resigned from Anthropic on February 9, 2026
  • Zoë Hitzig left OpenAI the same week
  • Both resignations were accompanied by public statements on safety risks

Two researchers from leading artificial intelligence companies resigned in early February 2026, each publishing statements that referenced safety and ethical concerns related to their work and the broader industry.

Mrinank Sharma, who led the Safeguards Research Team at Anthropic, left his position on February 9, 2026. In a resignation letter posted on X, he cited interconnected global crises, including artificial intelligence and bioweapons, as factors in his decision, described the world as being in peril, and said he planned to focus on poetry and what he called courageous speech.

During the same week, Zoë Hitzig, a researcher at OpenAI, also resigned. She published a guest essay in The New York Times stating that her departure coincided with the day OpenAI began testing advertisements in ChatGPT.

In her essay, Hitzig argued that introducing advertisements into ChatGPT could create incentives to override ethical guidelines. She compared the move to Facebook's monetization of user data and warned that ad targeting in ChatGPT could draw on deeply personal disclosures made in conversations with the chatbot.

Both researchers accompanied their resignations with public warnings about risks in current artificial intelligence development, citing safety, ethical considerations, and the direction of technology deployment. Their departures occurred within days of each other and were made public through social media posts and media essays, drawing attention to how researchers at leading AI companies are responding to decisions at their organizations.

No official statements from Anthropic or OpenAI regarding these resignations were available at the time of writing. The public record consists of the researchers' own statements and their attributed concerns.

* This article is based on publicly available information at the time of writing.

