
Nebius AI

Senior Support Engineer L2


USA


Job Description

Remote

About Nebius

Launched in November 2023, the Nebius platform provides high-end infrastructure and tools for training, fine-tuning and inference. Based in Europe with a global footprint, we aspire to become the leading AI cloud for AI practitioners around the world.

Nebius is built around the talents of some 400 highly skilled engineers with a proven track record in developing sophisticated cloud and ML solutions and designing cutting-edge hardware. This allows all the layers of the Nebius cloud – from hardware to UI – to be built in-house, differentiating Nebius from the majority of specialized clouds. As a result, Nebius customers get a true hyperscaler-cloud experience tailored for AI practitioners.

As an NVIDIA preferred cloud service provider, Nebius offers the latest NVIDIA GPUs, including the H100 and L40S, with H200 and Blackwell chips coming soon.

Nebius owns a data center in Finland, built from the ground up by the company’s R&D team. We are expanding our infrastructure and plan to add new colocation data centers in Europe and North America this year, as well as to build several greenfield DCs in the near future.

Our Finnish data center is home to ISEG, the most powerful commercially available supercomputer in Europe and the 19th most powerful globally (TOP500 list, June 2024). It also epitomizes our commitment to sustainability, with energy efficiency levels significantly above the global average and an innovative system that recovers waste heat to warm 2,000 residential buildings in the nearby town of Mäntsälä.

Nebius is headquartered in Amsterdam, Netherlands, with R&D and commercial hubs across North America, Europe and Israel.

The role

We are looking for a Senior Support Engineer (L2). You will handle complex issues escalated from L1 support and Technical Account Managers, bringing advanced expertise in Linux, networking, Kubernetes and scripting to high-level troubleshooting. This role demands exceptional problem-solving, clear communication and a customer-centric approach to ensure service reliability.

You’re welcome to work remotely from the USA.

Your responsibilities will include: 

1. Advanced issue resolution

  • Diagnose and resolve escalated issues with high proficiency in Linux, networking, Kubernetes and data storage, minimizing downtime.
  • Lead complex troubleshooting efforts and document solutions for use across teams.

2. Technical expertise and leadership

  • Apply advanced Linux skills for efficient OS management and problem resolution.
  • Utilize in-depth networking knowledge to troubleshoot and optimize network configurations.
  • Manage containerized applications within Kubernetes environments, handling complex deployments and ensuring service continuity.
  • Use advanced Python and Bash scripting to automate tasks, streamline workflows, and improve team efficiency.
  • Demonstrate deep understanding of data storage concepts to diagnose storage issues and optimize data management practices.

3. Mentoring

  • Collaborate with internal teams and provide guidance to L1 support to enhance overall service quality.
  • Foster a supportive team environment, promote continuous learning and drive efficiency.

4. Customer communication

  • Ensure clear, professional updates to customers, explaining complex issues in a user-friendly way.
  • Oversee escalations to higher-level support or engineering teams, ensuring adherence to escalation protocols.

5. Documentation and process improvement

  • Create, update and oversee technical documentation, troubleshooting guides and knowledge base articles.
  • Identify recurring issues, recommend improvements, and implement best practices to enhance service reliability and team efficiency.

We expect you to have: 

  • Bachelor’s degree in Computer Science, Information Technology or a related field preferred.
  • 7+ years in technical support with advanced skills in Linux and networking; experience managing and mentoring a support team of 5+ engineers.
  • Advanced expertise in Linux administration and troubleshooting.
  • Strong networking knowledge, including protocols, IP configurations and diagnostics.
  • Knowledge of Docker (for packaging ML workflows) and Kubernetes (for scaling and managing GPU workloads in cloud environments).
  • Proficient in Python and Bash for complex automation and task management.
  • In-depth understanding of data storage principles, types and management.
  • An understanding of how GPUs accelerate ML workloads.
  • The ability to assist with resource provisioning, scaling, and integration within ML workflows.
  • Familiarity with CUDA, Tensor Cores, and distributed training across multiple GPUs.
  • The ability to troubleshoot memory errors, driver/library mismatches, and GPU utilization bottlenecks.
  • The ability to debug common errors during model training (e.g., OOM errors, version compatibility issues).
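To give a flavor of the scripting side of the role: the requirements above mention automating triage of common training failures such as OOM errors and driver/library mismatches. Below is a minimal, hypothetical sketch of such a log-triage helper. It is purely illustrative — the error patterns and category names are assumptions, not Nebius tooling.

```python
import re

# Hypothetical triage helper: classify common GPU/training failure
# messages (the kinds of issues the role mentions) from raw log lines.
# Patterns and category names are illustrative, not real Nebius tooling.
ERROR_PATTERNS = {
    "gpu_oom": re.compile(r"CUDA out of memory|OutOfMemoryError", re.I),
    "driver_mismatch": re.compile(r"driver/library version mismatch", re.I),
    "version_conflict": re.compile(r"incompatible version|requires .+ but found", re.I),
}

def classify_log_line(line: str):
    """Return the first matching error category for a log line, or None."""
    for category, pattern in ERROR_PATTERNS.items():
        if pattern.search(line):
            return category
    return None

def summarize(log_lines):
    """Count occurrences of each known error category across a log."""
    counts = {}
    for line in log_lines:
        category = classify_log_line(line)
        if category is not None:
            counts[category] = counts.get(category, 0) + 1
    return counts
```

In practice a script like this would feed a runbook or escalation decision, e.g. `summarize(open("train.log"))` flagging repeated `gpu_oom` hits before a customer reports them.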

We’re growing and expanding our products every day. If you’re up to the challenge and are excited about AI and ML as much as we are, join us!
