$555,000 salary with equity: OpenAI’s job offer for a Head of Preparedness
OpenAI is looking to hire a candidate for the position of Head of Preparedness within its Safety Systems team in San Francisco. OpenAI CEO Sam Altman has highlighted the responsibilities and requirements expected of this position.
OpenAI CEO Sam Altman has said that preparedness is a core part of OpenAI’s safety strategy and focuses on tracking and preparing for frontier AI capabilities that could introduce risks of severe harm. The Head of Preparedness will get $555,000 in compensation along with equity, reflecting the senior leadership responsibility tied to overseeing OpenAI’s preparedness and safety efforts.
Here is what OpenAI CEO Sam Altman tweeted:
We are hiring a Head of Preparedness. This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges. The potential impact of models on mental health was something we…
— Sam Altman (@sama) December 27, 2025
Responsibilities of the Head of Preparedness
According to Sam Altman, the Head of Preparedness will lead the technical strategy and execution of OpenAI’s Preparedness framework. This includes creating the preparedness programme, building and coordinating capability evaluations, and establishing threat models. The role holder will also oversee mitigations that form a coherent, rigorous, and operationally scalable safety pipeline.
Moreover, the role requires deep technical judgment, clear communication, and the ability to guide complex work across multiple risk domains. “You will lead a small, high-impact team to drive core Preparedness research, while partnering broadly across Safety Systems and OpenAI for end-to-end adoption and execution of the framework,” the OpenAI job listing states.
The Head of Preparedness will also be at the forefront of developing frontier capability evaluations. Another key responsibility is ensuring that evaluation results directly inform model launch decisions, internal policy choices, and formal safety cases. The framework will need to be refined continuously as new risks, capabilities, or external expectations emerge.
Role of the Head of Preparedness
- Own OpenAI’s preparedness strategy end-to-end by building capability evaluations, establishing threat models, and coordinating mitigations.
- Lead the development of frontier capability evaluations, ensuring they are precise, robust, and scalable across rapid product cycles.
- Oversee mitigation design across major risk areas (e.g., cyber, bio), ensuring safeguards are technically sound, effective, and aligned with underlying threat models.
- Guide interpretation of evaluation results and ensure they directly inform launch decisions, policy choices, and safety cases.
- Refine and evolve the preparedness framework as new risks, capabilities, or external expectations emerge.
- Collaborate cross-functionally with research, engineering, product teams, policy monitoring and enforcement teams, governance, and external partners to integrate preparedness into real-world deployment.
The role is aimed at candidates with deep technical expertise in machine learning, AI safety, evaluations, security, or related risk domains. Experience with high-rigor evaluations, threat modeling, cybersecurity, biosecurity, or similar frontier-risk areas is listed as a plus.
The OpenAI CEO also addressed the impact of AI on mental health, writing: “The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities.”

