AI Inference Engineer
Develop APIs, benchmark, and optimize AI inference systems for real-time model serving at scale.
We are looking for an AI Inference Engineer to join our growing team. Our current stack includes Python, C++, TensorRT-LLM, and Kubernetes. You will work on the large-scale deployment of machine learning models for real-time inference.
Responsibilities
- Develop APIs for AI inference that will be used by both internal and external customers
- Benchmark and address bottlenecks throughout our inference stack
- Improve the reliability and observability of our systems and respond to system outages
- Explore novel research and implement LLM inference optimizations
Qualifications
- Experience with ML systems and deep learning frameworks (e.g., PyTorch, TensorFlow, ONNX)
- Familiarity with common LLM architectures and inference optimization techniques (e.g., continuous batching, quantization)
- Experience deploying reliable, distributed, real-time model serving at scale
- (Optional) Understanding of GPU architectures or experience with GPU kernel programming using CUDA