AI Governance Specialist
Role Overview
The AI Governance Specialist will be responsible for defining and maintaining the AI Governance standards and the related guidelines and playbooks, including oversight of tooling and processes, and will drive the corresponding best practices across the company. More specifically, the role ensures that AI systems are developed, deployed, classified, used, and maintained responsibly, ethically, in line with regulations, and with a risk-mitigation approach. This role blends technical understanding of data and AI with policy, ethics, and risk management. The new joiner will have autonomy over the delivery of their work, including requirements gathering, contribution to solution design, and oversight of tooling selection, implementation, integration, roll-out, and adoption across business units in a collaborative manner.
Key Responsibilities
- Design and develop AI governance and AI Management System (AIMS) frameworks, including AI Governance standards and playbooks, aligned with organizational values and global norms
- Contribute to the design and implementation of AI Governance related tools
- Ensure PIAs and impact and risk assessments are completed on time and shared with Leadership to enable effective risk mitigation (e.g., EU AI Act risk assessment, Data Protection Impact Assessment)
- Maintain classification and risk registers for AI use cases
- Contribute to KPIs and adoption metrics for the AI Governance standards across Nebius, unlocking innovation and business value through data and AI
- Gather, analyze, and manage requirements from various stakeholders (e.g., Cybersecurity, Privacy, Legal, and the respective Business Units)
Key Qualifications & Experience
Must-have requirements:
- Certified lawyer
- 5+ years’ experience in AI, with a strong understanding of AI/ML technologies (how models work, how they are trained, and how bias and drift are mitigated; familiarity with coding)
- Profound knowledge of AI-related regulations (e.g., EU AI Act, GDPR, EU Digital Omnibus, DORA, US legislation) and applicable standards (e.g., ISO 42001)
- Knowledge of data lifecycle (collection, storage, processing, deletion)
- Ability to apply ethical and risk management frameworks to AI use cases
- Knowledge of the human rights impacts of AI (ability to contribute to Fundamental Rights Impact Assessments, FRIAs)
- Understanding of fairness, accountability, transparency, and explainability (FATE)
- Strong analytical capabilities to assess complex technical and ethical risks.
- Familiarity with AI quality and observability, including architecture and key metrics
- Experience in conducting impact and risk assessments (e.g., EU AI Act risk assessment, Data Protection Impact Assessment)
- Experience in working with data scientists, legal teams, compliance, IT, cybersecurity, product owners and other stakeholders
Preferred qualifications:
- Experience in designing and delivering effective training programs to foster a culture of ethics and governance in AI use
- Excellent communication skills to convey complex technical concepts to non-technical audiences and build effective relationships across departments.
Competencies & Behavioral Traits
- Excellent communication and stakeholder-management skills.
- A proactive, detail-oriented mindset with the ability to lead cross-functional initiatives.
- A collaborative, friendly, and approachable communicator who builds strong relationships and fosters a positive working atmosphere.
- Comfortable working in an international environment, collaborating with people from diverse backgrounds, cultures, and time zones.