Software Engineer, Inference – AMD GPU Enablement
Software Engineer needed to scale and optimize OpenAI’s inference infrastructure across emerging GPU platforms, focusing on AMD accelerators.
**About the Team**

OpenAI’s Inference team ensures that our most advanced models run efficiently, reliably, and at scale. We build and optimize the systems that power our production APIs, internal research tools, and experimental model deployments. As model architectures and hardware evolve, we’re expanding support for a broader set of compute platforms - including AMD GPUs - to increase performance, flexibility, and resiliency across our infrastructure.
We are forming a team to generalize our inference stack - including kernels, communication libraries, and serving infrastructure - to support alternative hardware architectures.
**About the Role**

We’re hiring engineers to scale and optimize OpenAI’s inference infrastructure across emerging GPU platforms. You’ll work across the stack - from low-level kernel performance to high-level distributed execution - and collaborate closely with research, infra, and performance teams to ensure our largest models run smoothly on new hardware.
This is a high-impact opportunity to shape OpenAI’s multi-platform inference capabilities from the ground up, with a particular focus on advancing inference performance on AMD accelerators.
In this role, you will:
- Own bring-up, correctness, and performance of the OpenAI inference stack on AMD hardware.
- Integrate internal model-serving infrastructure (e.g., vLLM, Triton) into a variety of GPU-backed systems.
- Debug and optimize distributed inference workloads across memory, network, and compute layers.
- Validate correctness, performance, and scalability of model execution on large GPU clusters.
- Collaborate with partner teams to design and optimize high-performance GPU kernels for accelerators using HIP, Triton, or other performance-focused frameworks.
- Collaborate with partner teams to build, integrate, and tune collective communication libraries (e.g., RCCL) used to parallelize model execution across many GPUs.
You can thrive in this role if you:
- Have experience writing or porting GPU kernels using HIP, CUDA, or Triton, and care deeply about low-level performance.
- Are familiar with communication libraries like NCCL/RCCL and understand their role in high-throughput model serving.
- Have worked on distributed inference systems and are comfortable scaling models across fleets of accelerators.
- Enjoy solving end-to-end performance challenges across hardware, system libraries, and orchestration layers.
- Are excited to be part of a small, fast-moving team building new infrastructure from first principles.
Nice to Have:
- Contributions to open-source libraries like RCCL, Triton, or vLLM.
- Experience with GPU performance tools (Nsight, rocprof, perf) and memory/communication profiling.
- Prior experience deploying inference in non-NVIDIA GPU environments.
- Knowledge of model/tensor parallelism, mixed precision, and serving 10B+ parameter models.