Anthropic

Team Manager, Alignment Finetuning

San Francisco, CA

Job Description

Hybrid

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role: 

You want to enable and support a team that is developing and implementing new alignment techniques for language models, aimed at improving model values, honesty, and character. As a Team Manager on the Alignment Finetuning team, you'll partner with a Technical Research Lead to drive the execution of critical alignment initiatives, support the growth and development of your team members, and ensure smooth collaboration across Anthropic's research organization.

Our team works on implementing and scaling techniques like synthetic data generation and training models to assist in model training. You'll help create an environment that enables technical excellence while maintaining focus on our core mission of making AI systems more reliable and aligned with human values.

Note: This role is expected to be based in San Francisco, with at least 3 days per week in office.

Representative projects:

  • Partner with the research lead to develop and execute the team’s roadmap
  • Build and improve processes for evaluating the effectiveness of the team’s alignment interventions
  • Coordinate cross-functional collaboration between Alignment Finetuning and other teams like Trust & Safety (T&S), Applied Finetuning, and Alignment Science
  • Support the development and growth of researchers and engineers working on novel alignment techniques
  • Drive recruiting efforts to grow the team while maintaining high standards

You may be a good fit if you:

  • Have 5+ years of technical experience in software engineering, ML/AI, or a related field
  • Have 2+ years of experience managing technical teams
  • Are an excellent listener and communicator
  • Take ownership of your team's overall output and performance
  • Have experience supporting and enabling research teams
  • Build strong relationships across various stakeholder groups
  • Have a demonstrated ability to understand and support technical work
  • Care deeply about AI safety and alignment

Strong candidates may also:

  • Have experience with ML/AI projects and an understanding of fundamental concepts
  • Have a background working with research organizations
  • Have experience managing research or exploratory projects
  • Have experience with org design and process improvement
  • Have experience recruiting for and managing teams through periods of growth
  • Have familiarity with reinforcement learning and language models

The expected salary range for this position is:

Annual Salary:
$315,000 – $560,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy:
Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
