Moveworks
Staff Machine Learning Engineer
🌎 Bengaluru, India

Job Description

Who We Are 

Moveworks is the universal AI copilot for search and automation across all your business applications. We give employees one place to go to find information and get support while reducing costs for your business. The Moveworks Copilot is powered by an industry-leading Reasoning Engine that uses a combination of public and proprietary language models to understand employee queries, then build and execute multi-step plans that fulfill them. It does this by linking into systems (like ITSM, HRIS, ERP, identity management, and more) with native and custom-built integrations that turn natural language into powerful automations for employees.

The world’s most innovative brands like Databricks, Broadcom, Hearst, and Palo Alto Networks trust Moveworks to eliminate repetitive support issues, deliver instant knowledge, and empower employees to work faster across applications.

Founded in 2016, Moveworks has raised $315 million in funding, at a valuation of $2.1 billion, thanks to our award-winning product and team. In 2023, we were included in the Forbes Cloud 100 list as well as the Forbes AI 50 for the fifth consecutive year. We were also recognized by the 2023 Edison Awards for AI Optimized Productivity, and were included on Fast Company's Most Innovative Companies list for 2024! 

Moveworks has over 500 employees in six offices around the world, and is backed by some of the world's most prominent investors, including Kleiner Perkins, Lightspeed, Bain Capital Ventures, Sapphire Ventures, Iconiq, and more.

Come join one of the most innovative teams on the planet!

What You Will Do

We are looking for a Machine Learning Engineer to help build the cutting-edge ML infrastructure used to build and serve LLMs at Moveworks. This role will be critical in building, optimizing, and scaling end-to-end machine learning systems. The ML infra team covers a variety of responsibilities, including distributed training and inference pipelines for large language models (LLMs), model evaluation and monitoring frameworks, LLM latency optimization, and more. These frameworks serve as a strong foundation for our hundreds of ML and NLP models in production, serving hundreds of millions of enterprise employees. We are solving many challenges in scaling these services and optimizing their core algorithms.

Your work will impact the way our customers experience AI. Put another way, this role is absolutely critical to the long-term scalability of our core AI product and, ultimately, the company. You will be responsible for building and productionizing the ML infrastructure that runs state-of-the-art models. If you are looking for a high-impact, fast-moving role to take your work to the next level, we should have a conversation.

  • Design, build, and optimize scalable machine learning infrastructure to support training, evaluation, and deployment of large language models
  • Build abstractions that automate steps across different ML workflows
  • Collaborate with cross-functional teams of engineers, data analysts, machine learning experts, and product managers to build new features

What You Bring To The Table

  • 5+ years of industry experience in Machine Learning, Infrastructure, or related fields
  • Experience with deep learning frameworks such as PyTorch or Hugging Face, or LLM serving frameworks such as vLLM or TensorRT-LLM
  • Experience building and scaling end-to-end machine learning systems
  • Experience building scalable microservices and ETL pipelines
  • Expertise in Python and experience with a performant language such as C++ or Go
  • Bachelor's degree in Computer Science, Computer Engineering, Mathematics, or an equivalent field
  • A love of research publications in the machine learning and software engineering communities
  • Effective communicator with experience collaborating cross-functionally with other teams

Nice To Have

  • Experience with ML inference optimization using TensorRT
  • Experience with distributed training frameworks such as DeepSpeed
  • Experience managing and scaling GPU inference services on Kubernetes

*Our compensation package includes a market-competitive salary, equity for all full-time roles, exceptional benefits, and, for applicable roles, commissions or bonus plans.
Ultimately, in determining pay, final offers may vary from the amount listed based on geography, the role’s scope and complexity, the candidate’s experience and expertise, and other factors.

Moveworks Is An Equal Opportunity Employer
*Moveworks is proud to be an equal opportunity employer. We provide employment opportunities without regard to age, race, color, ancestry, national origin, religion, disability, sex, gender identity or expression, sexual orientation, veteran status, or any other characteristics protected by law.