Staff Software Engineer, Inference
Staff Software Engineer needed to build and maintain systems serving Claude to millions of users, optimizing compute efficiency and enabling AI research.
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems.
About the role:
Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide.
The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models.
As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research.
You may be a good fit if you:
Have significant software engineering experience, particularly with distributed systems
Are results-oriented, with a bias towards flexibility and impact
Are willing to pick up slack, even when the work falls outside your job description
Want to learn more about machine learning systems and infrastructure
Thrive in environments where technical excellence directly drives both business results and research breakthroughs
Care about the societal impacts of your work
Strong candidates may also have experience with:
High-performance, large-scale distributed systems
Implementing and deploying machine learning systems at scale
Load balancing, request routing, or traffic management systems
LLM inference optimization, batching, and caching strategies
Kubernetes and cloud infrastructure (AWS, GCP)
Python or Rust
Representative projects across the org:
Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators
Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads
Building production-grade deployment pipelines for releasing new models to millions of users
Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage
Contributing to new inference features (e.g., structured sampling, prompt caching)
Supporting inference for new model architectures
Analyzing observability data to tune performance based on real-world production workloads
Managing multi-region deployments and geographic routing for global customers
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The expected base compensation for this position is below.
Annual Salary:
€295.000–€355.000 EUR
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate.