OpenAI seeks new head to track AI-related risks

The role will focus on monitoring emerging AI risks across computer security, biological capabilities and the potential mental health effects of advanced models.

afaqs! news bureau

OpenAI is recruiting a new executive to lead its preparedness efforts, as the company continues to assess risks associated with increasingly capable AI systems.

The role, titled head of preparedness, will focus on identifying and tracking risks in areas such as computer security, biological capabilities and the impact of AI models on mental health. The hire will be responsible for executing OpenAI’s Preparedness Framework, which outlines how the company evaluates and responds to high-risk AI capabilities.

In a post on X, OpenAI CEO Sam Altman acknowledged growing concerns around advanced AI systems, writing that models are “starting to present some real challenges,” including the “potential impact of models on mental health,” as well as models that are “so good at computer security they are beginning to find critical vulnerabilities.”

“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” Altman wrote.

According to the job listing, compensation for the role is set at $555,000, excluding equity.

OpenAI first announced the formation of its preparedness team in 2023, stating that the group would study potential catastrophic risks linked to AI, ranging from phishing attacks to more speculative threats such as nuclear risks.

In 2024, the company reassigned its then-head of preparedness, Aleksander Madry, to a role focused on AI reasoning. Several other safety-focused executives have since left the company or moved into roles outside preparedness and safety.

The company has also updated its Preparedness Framework, noting that it may “adjust” safety requirements if a competing AI lab releases a high-risk model without similar safeguards.

The hiring move comes amid growing scrutiny of generative AI tools and their impact on users’ mental health. Recent lawsuits have alleged that AI chatbots contributed to emotional harm, including reinforcing delusions and increasing social isolation. OpenAI has said it is working to improve its systems’ ability to identify distress and guide users towards real-world support.
