Training Performance Engineer
Drive efficiency improvements across the distributed training stack by analyzing training runs, optimizing GPU utilization, and collaborating with engineers.
**About the Team**
Training Runtime designs the core distributed machine-learning training runtime that powers everything from early research experiments to frontier-scale model runs. With a dual mandate to accelerate researchers and enable frontier scale, we’re building a unified, modular runtime that meets researchers where they are and moves with them up the scaling curve.
Our work focuses on three pillars: high-performance, asynchronous, zero-copy data movement that is tensor- and optimizer-state-aware; performant, high-uptime, fault-tolerant training frameworks (training loop, state management, resilient checkpointing, deterministic orchestration, and observability); and distributed process management for long-lived, job-specific, and user-provided processes.
We integrate proven large-scale capabilities into a composable, developer-facing runtime so teams can iterate quickly and run reliably at any scale, partnering closely with model-stack, research, and platform teams. We measure success by raising both training throughput (how fast models train) and researcher throughput (how fast ideas become experiments and products).
**About the Role**
As a Training Performance Engineer, you’ll drive efficiency improvements across our distributed training stack. You’ll analyze large-scale training runs, identify utilization gaps, and design optimizations that push the boundaries of throughput and uptime. This role blends deep systems understanding with practical performance engineering: analyzing GPU kernel performance and collective communication throughput, investigating I/O bottlenecks, and sharding our models so we can train them at massive scale.
You’ll help ensure that our clusters are running at peak performance, enabling OpenAI to train larger, more capable models with the same compute budget.
This role is based in San Francisco, CA. We use a hybrid work model of three days in the office per week and offer relocation assistance to new employees.
In this role, you will:
Profile end-to-end training runs to identify performance bottlenecks across compute, communication, and storage.
Optimize GPU utilization and throughput for large-scale distributed model training.
Collaborate with runtime and systems engineers to improve kernel efficiency, scheduling, and collective communication performance.
Implement model graph transforms to improve end-to-end throughput.
Build tooling to monitor and visualize MFU (model FLOPs utilization), throughput, and uptime across clusters; a minimal MFU accounting sketch follows this list.
Partner with researchers to ensure new model architectures scale efficiently during pre-training.
Contribute to infrastructure decisions that improve reliability and efficiency of large training jobs.
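To make the MFU framing concrete, here is a minimal sketch of the accounting behind such tooling, assuming the standard approximation of roughly 6 FLOPs per parameter per trained token for dense decoder-only transformers. The model size, batch shape, step time, and peak throughput figures are purely illustrative assumptions, not measurements from any real cluster.

```python
"""Minimal MFU (model FLOPs utilization) accounting sketch.

All concrete numbers below (parameter count, tokens per step, step time,
GPU count, peak TFLOP/s) are illustrative assumptions, not measurements.
"""

# Standard approximation for dense decoder-only transformers:
# roughly 6 FLOPs per parameter per trained token (forward + backward).
FLOPS_PER_TOKEN_PER_PARAM = 6


def mfu(n_params: float, tokens_per_step: float, step_time_s: float,
        n_gpus: int, peak_flops_per_gpu: float) -> float:
    """Achieved model FLOP/s divided by the cluster's theoretical peak."""
    achieved = FLOPS_PER_TOKEN_PER_PARAM * n_params * tokens_per_step / step_time_s
    peak = n_gpus * peak_flops_per_gpu
    return achieved / peak


if __name__ == "__main__":
    # Hypothetical run: 70B parameters, a 4M-token global batch,
    # 12 s per optimizer step, 256 GPUs at ~989 TFLOP/s dense BF16 peak.
    print(f"MFU: {mfu(70e9, 4 * 2**20, 12.0, 256, 989e12):.1%}")
```

In practice the numerator would come from profiler counters or a per-layer analytic FLOP model rather than a single constant, but the ratio of achieved to peak FLOP/s is the quantity the monitoring surfaces.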
You might thrive in this role if you:
Love optimizing performance and digging into systems to understand how every layer interacts.
Have strong programming skills in Python and C++ (Rust or CUDA a plus).
Have experience running distributed training jobs on multi-GPU systems or HPC clusters.
Enjoy debugging complex distributed systems and measuring efficiency rigorously.
Have exposure to frameworks like PyTorch, JAX, or TensorFlow and an understanding of how large-scale training loops are built.
Are comfortable collaborating across teams and translating raw profiling data into practical engineering improvements.
Nice to have:
Familiarity with NCCL, MPI, or UCX communication libraries; a minimal collective-bandwidth probe sketch follows this list.
Experience with large-scale data loading and checkpointing systems.
Prior work on training runtime, distributed scheduling, or ML compiler optimization.
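On the collective-communication side, the sketch below shows one hedged way to probe all_reduce bus bandwidth with PyTorch’s NCCL backend. It assumes a `torchrun` launch; the message size and iteration counts are arbitrary choices, and it illustrates the measurement rather than replacing a full benchmark suite such as nccl-tests (whose busBW convention the last line follows).

```python
"""Minimal all_reduce bandwidth probe (illustrative sketch).

Launch with, e.g.:  torchrun --nproc_per_node=8 allreduce_probe.py
Message size and iteration counts below are arbitrary assumptions.
"""
import os
import time

import torch
import torch.distributed as dist


def main() -> None:
    dist.init_process_group(backend="nccl")
    rank, world = dist.get_rank(), dist.get_world_size()
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # 256 MiB of BF16 payload per rank (assumed size; sweep sizes in practice).
    numel = 128 * 1024 * 1024
    x = torch.ones(numel, dtype=torch.bfloat16, device="cuda")

    for _ in range(5):  # warm-up so connections and clocks settle
        dist.all_reduce(x)
    torch.cuda.synchronize()

    iters = 20
    start = time.perf_counter()
    for _ in range(iters):
        dist.all_reduce(x)
    torch.cuda.synchronize()
    elapsed = (time.perf_counter() - start) / iters

    size_bytes = numel * x.element_size()
    alg_bw = size_bytes / elapsed                 # per-rank bytes moved / time
    bus_bw = alg_bw * 2 * (world - 1) / world     # nccl-tests busBW convention
    if rank == 0:
        print(f"all_reduce {size_bytes / 2**20:.0f} MiB: "
              f"{elapsed * 1e3:.2f} ms/iter, busBW {bus_bw / 1e9:.1f} GB/s")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Comparing the measured bus bandwidth against the fabric’s peak is the same kind of utilization reasoning as MFU, applied to the interconnect rather than the GPUs.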