Research Scientist, Frontier Risk Evaluations
Scale Labs, Research Scientist — Frontier Risk Evaluations
As the leading data and evaluation partner for frontier AI companies, Scale plays an integral role in understanding the capabilities and safeguarding AI models and systems. Building on this expertise, Scale Labs has launched a new team focused on policy research, to bridge the gap between AI research and global policymakers to make informed, scientific decisions about AI risks and capabilities.
Our research tackles the hardest problems in agent robustness, AI control protocols, and AI risk evaluations to help governments, industry, and the public understand and mitigate AI risk while maximizing AI adoption. This team collaborates broadly across industry, the public sector, and academia and regularly publishes our findings. We are actively seeking talented researchers to join us in shaping this vision.
As a Research Scientist focused on Frontier Risk Evaluations, you will design and create evaluation measures, harnesses and datasets for measuring the risks posed by frontier AI systems. For example, you might do any or all of the following:
- Design and build harnesses to test AI models and systems (including agents) for dangerous capabilities such as security vulnerability exploitation, CBRN uplift, and other high-risk activities;
- Work with government agencies or other labs to collectively scope and design evaluations to measure and mitigate risks posed by advanced AI systems;
- Publish evaluation methodologies and write technical reports for policymakers.
Ideally you’d have:
- Commitment to our mission of promoting safe, secure, and trustworthy AI deployments in the industry as frontier AI capabilities continue to advance.
- Practical experience conducting technical research collaboratively. You should be comfortable building and instrumenting ML pipelines, writing evaluation harnesses, and quickly turning new ideas from the research literature into working prototypes.
- A track record of published research in machine learning, particularly in generative AI.
- At least three years of experience addressing sophisticated ML problems, whether in a research setting or in product development.
- Strong written and verbal communication skills to operate in a cross-functional team.
Nice to have:
- Experience in crafting evaluations and benchmarks, or a background in data science roles related to LLM technologies.
- Experience with red-teaming or adversarial testing of AI systems.
- Familiarity with AI safety policy frameworks (e.g., NIST AI RMF, EU AI Act, Korea AI Basic Act).