Senior ML Solutions Architect - Token Factory
The role
We seek an experienced Senior ML Solutions Architect to support customers leveraging Nebius Token Factory's serverless inference platform for open-source LLMs across multiple modalities. In this role, you will collaborate with clients to design and implement customized LLM-based solutions, architect scalable AI applications using our served models, and work with our backend team to improve the platform to meet clients' needs.
You’re welcome to work remotely from Europe.
Your responsibilities will include:
- Design and implement LLM-based solutions using Nebius Token Factory’s inference services to drive business value and support customer goals
- Build production-ready applications leveraging our serverless LLM APIs, including multimodal models (text, vision, audio) and domain-specific models
- Provide technical expertise in prompt engineering, RAG architectures, model selection, and inference optimization
- Collaborate with product and engineering teams to surface customer feedback and shape the platform roadmap
- Guide customers in scaling from POC to production with a focus on performance, reliability, and cost efficiency
We expect you to have:
- 5+ years of experience in ML/AI systems, with at least 2 years focused on LLMs and generative AI
- Deep knowledge of the LLM ecosystem, including model architectures and fine-tuning approaches
- Hands-on experience with:
  - Prompt engineering and LLM pipeline development, including evaluation
  - Agentic frameworks such as LangChain, LangSmith, smolagents, or equivalent
  - Vector databases and RAG implementation patterns
  - Deploying LLM-powered applications using APIs from OpenAI, Anthropic, or open-source models
- Strong Python programming skills
- Excellent communication skills, with the ability to clearly explain technical concepts to diverse audiences
It would be an added bonus if you have:
- Experience with inference frameworks and libraries (e.g., vLLM, SGLang, TensorRT-LLM, Transformers)
- Familiarity with inference optimization techniques such as quantization, batching, caching, and routing
- Experience working with multimodal AI models (e.g., vision-language, speech)
- Proficiency with DevOps tools (Docker, Kubernetes)
- Contributions to open-source ML/AI projects
Preferred technical stack:
- Programming Languages – Python
- ML Frameworks and Libraries – vLLM, SGLang, TensorRT-LLM, Transformers, OpenAI/Anthropic SDKs
- Frameworks for Agentic Pipelines – LangChain / LangSmith / smolagents / equivalent
- API and Web Frameworks – FastAPI, Flask
- MLOps and DevOps tools – Kubernetes (K8s), Docker, Git
- Cloud Platforms – AWS (SageMaker, Bedrock), GCP (Vertex AI), Azure (Azure ML)