Strategic Threat Intelligence Analyst
Identify and analyze emerging patterns of abuse and misuse in AI systems, develop actionable intelligence reports, and inform safety strategies.
About the Team
The Intelligence and Investigations team works to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem. We are dedicated to identifying emerging abuse trends, analyzing risks, and working with internal and external partners to implement effective mitigation strategies that protect against misuse. Our efforts contribute to OpenAI's overarching goal of developing AI that benefits humanity.
About the Role
As a Strategic Threat Intelligence Analyst on OpenAI’s Intelligence & Investigations team, you will play a central role in defending against the misuse of AI and online platforms and in shaping the safety landscape of generative technologies. This position is ideal for professionals who excel at uncovering complex threat patterns, thrive in fast-moving environments, and are motivated to ensure frontier AI is deployed responsibly.
We’re looking for people who combine experience in online safety and risk management with outstanding analytical and problem-solving skills. The ideal candidate can clearly articulate complex abuse concepts and collaborate effectively with diverse technical and business teams. We value professionals dedicated to promoting the safe use of AI; adaptability and a commitment to continuous learning in the rapidly evolving field of AI are also key traits for this role.
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
Identify and analyze emerging patterns of abuse and misuse in AI systems through structured intelligence and risk analysis.
Develop actionable intelligence reports and risk assessments to inform product, policy, and enforcement decisions.
Synthesize complex qualitative and quantitative abuse signals into clear insights that influence safety and business strategy.
Support the development of scalable safety analytics tools through contributions in Python and SQL, enhancing automation and insight generation.
You might thrive in this role if you:
Have 7+ years of experience in strategy, operations, trust and safety, intelligence, or international policy roles.
Have experience performing in-depth qualitative and quantitative analysis to inform safety strategies, particularly within online content and AI applications.
Demonstrate a deep understanding of AI and generative AI technologies and their potential abuse scenarios, with a proactive approach to identifying and addressing emerging threats.
Communicate with precision and clarity, especially when translating complex threats into product or policy recommendations.