OpenAI is looking to hire a new executive responsible for studying emerging AI-related risks in areas ranging from computer security to mental health.
In a post on X, CEO Sam Altman acknowledged that AI models are “starting to present some real challenges,” including the “potential impact of models on mental health,” as well as models that are “so good at computer security they’re starting to find important vulnerabilities.”
“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems safer, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” Altman wrote.
OpenAI’s listing for the Head of Preparedness role describes the job as one that’s responsible for executing the company’s preparedness framework, “our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm.”
Compensation for the role is listed as $555,000 plus equity.
The company first announced the creation of a preparedness team in 2023, saying it would be responsible for studying potential “catastrophic risks,” whether they were more immediate, like phishing attacks, or more speculative, such as nuclear threats.
Less than a year later, OpenAI reassigned Head of Preparedness Aleksander Madry to a role focused on AI reasoning. Other safety executives at OpenAI have also left the company or taken on new roles outside of preparedness and safety.
The company also recently updated its Preparedness Framework, stating that it might “adjust” its safety requirements if a competing AI lab releases a “high-risk” model without comparable protections.
As Altman alluded to in his post, generative AI chatbots have faced growing scrutiny around their impact on mental health. Recent lawsuits allege that OpenAI’s ChatGPT reinforced users’ delusions, increased their social isolation, and even led some to suicide. (The company said it continues working to improve ChatGPT’s ability to recognize signs of emotional distress and to connect users to real-world support.)

