
Nebius AI

Senior Technical Product Manager (ML/AI)

🌎 Amsterdam, Netherlands


Job Description

About Nebius

Launched in November 2023, the Nebius platform provides high-end infrastructure and tools for training, fine-tuning and inference. Based in Europe with a global footprint, we aspire to become the leading AI cloud for AI practitioners around the world.

Nebius is built around the talents of some 400 highly skilled engineers with a proven track record in developing sophisticated cloud and ML solutions and designing cutting-edge hardware. This allows all the layers of the Nebius cloud – from hardware to UI – to be built in-house, differentiating Nebius from the majority of specialized clouds. As a result, Nebius customers get a true hyperscaler-cloud experience tailored for AI practitioners.

As an NVIDIA preferred cloud service provider, Nebius offers the latest NVIDIA GPUs, including the H100 and L40S, with H200 and Blackwell chips coming soon.

Nebius owns a data center in Finland, built from the ground up by the company’s R&D team. We are expanding our infrastructure and plan to add new colocation data centers in Europe and North America as early as this year, and to build several greenfield DCs in the near future.

Our Finnish data center is home to ISEG, the most powerful commercially available supercomputer in Europe and the 19th most powerful globally (Top 500 list, June 2024). It also epitomizes our commitment to sustainability, with energy efficiency levels significantly above the global average and an innovative system that recovers waste heat to warm 2,000 residential buildings in the nearby town of Mäntsälä.

Nebius is headquartered in Amsterdam, Netherlands, with R&D and commercial hubs across North America, Europe and Israel.

The role

We are seeking a Senior Technical Product Manager, ML/AI Lifecycle Services to join our team. In this role, you will oversee the planning and prioritization of services across the ML/AI lifecycle, including data preparation, training, fine-tuning, experiments, monitoring and inference. You will deliver products for leading AI companies, utilizing thousands of GPUs within a single cluster of cutting-edge hardware. We also provide room for creativity, empowering you to take the initiative and build what you think is best.
 
 
Responsibilities:
  • Serve as a center of ML/AI expertise for both development and business teams.
  • Own the backlog of 1–3 AI/ML products.
  • Define technical requirements for the IaaS and PaaS teams that are essential for your products.
  • Introduce and promote products to the market in collaboration with cross-functional teams.
  • Create materials and onboarding guides for the Solution Architect and Sales teams.
  • Act as an internal customer for the Marketplace and Solution Architect teams to build end-to-end (E2E) scenarios using our products.
 
 
Requirements:
  • We expect the candidate to be the best user of the product they manage, so technical expertise is mandatory.
  • Have solid experience as an ML Engineer, MLOps Engineer or AI Engineer in one or more of the following domains:
    ◦ Distributed training across at least dozens of hosts using Slurm, Ray Cluster or MosaicML
    ◦ Organizing ML infrastructure following MLOps best practices with tools like MLflow, W&B, MosaicML, Kubeflow, Apache Airflow, ClearML, AzureML, SageMaker or VertexAI
    ◦ Maintaining and optimizing a large inference cluster with KServe, vLLM, Triton, RunAI or Seldon
    ◦ Using data preparation tools such as Databricks and Apache Spark
    ◦ Building a product on top of LLMs that leverages techniques such as RAG, fine-tuning and function calling, with an understanding of continuous quality evaluation
  • Product management experience is not required, but a willingness to learn is essential.
 
Ideal Candidate:
  • You have experience as an ML engineer, specializing in developing large generative AI models. You are now eager to shift your focus toward creating tools and instruments that enhance the efficiency of such teams.
  • You have worked as an MLOps engineer, Solution Architect or DevOps engineer, providing infrastructure for ML teams and delving deeply into ML specifics. You are keen to share your expertise through product development and know how to build MLOps on top of serverless GPU services such as Modal, Cerebrium and Google Cloud Run.
  • You have a background as an ML engineer and transitioned to product management, with a proven track record of delivering complex products for tech customers.

We’re growing and expanding our products every day. If you’re up to the challenge and as excited about AI and ML as we are, join us!
