Google DeepMind
External Safety Testing Manager
London, UK

Job Description

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

About us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and responsibility are the highest priority.

Snapshot

As an External Safety Testing Manager in the Responsible Development and Innovation (ReDI) team, you’ll be integral to the delivery and scaling of our external safety partnerships and evaluations on Google DeepMind’s most groundbreaking models.   

You will work with teams across Google DeepMind, including Product Management, Research, Legal, Engineering, Public Policy, and Frontier Safety and Governance, to deliver external safety evaluations which are a key part of our responsibility and safety best practices, helping Google DeepMind to progress towards its mission.

The role

As an External Safety Testing Manager in ReDI, you'll be part of a team that partners with external expert groups to conduct safety evaluations across various domains and modalities on our frontier models. In this role, you'll collaborate with other members of this critical program, responding to the needs of the business in a timely manner and prioritising accordingly.

Key responsibilities

  • Leading the synthesis of key themes from testing and the collation of findings across domain areas to build a holistic view of external safety findings for frontier models
  • Leading engagement with internal safety policy experts to assess the severity of the findings from external groups  
  • Leading engagement with internal frontier modelling teams to feed back testing findings for visibility and mitigation 
  • Leading engagement with the broader ReDI team to ensure visibility of testing findings as part of the responsibility and safety governance process
  • Leading on efforts to optimise and scale the program to support the growing needs of the business 
  • Acting as the relationship manager for various external testing partnerships 
  • Identifying new partners with relevant skillsets to undertake external safety testing 
  • Working with Legal and Partnerships on contracting for new external testing partnerships
  • Working with Engineering on the technical requirements needed to meet external safety testing timelines
  • Onboarding external testing partners onto new frontier models for testing  
  • Working with internal experts to identify specific areas for red teaming and evaluation by external expert groups 
  • Working collaboratively alongside a team of multidisciplinary specialists to deliver the program 
  • Communicating with wider stakeholders across ReDI, GDM and external testing partners 
  • Supporting improvements to how evaluation findings are visualised for key stakeholders and leadership

About you

In order to set you up for success as an External Safety Testing Manager in the ReDI team, we look for the following skills and experience:

  • Previous experience working in a fast-paced and complex environment, whether in a start-up, tech company, or consulting organisation
  • Familiarity with safety considerations of generative AI, including (but not limited to) frontier safety (such as chemical and biological risks), content safety, and sociotechnical risks (such as fairness)  
  • Ability to thrive in a fast-paced, live environment where decisions are made in a timely fashion 
  • Strong communication skills and demonstrated ability to work in cross-functional teams, foster collaboration, and influence outcomes 
  • Strong project management skills to optimise existing processes and create new processes  
  • Strong analytical and statistical skills, including data curation and data-collection design
  • Significant experience presenting and communicating findings to non-data science audiences, including senior stakeholders 

In addition, the following would be an advantage: 

  • Experience of working with sensitive data and access controls 
  • Prior experience working in product development or similar agile settings
  • Subject matter expertise in generative AI safety considerations, including (but not limited to) frontier safety (such as chemical and biological risks), content safety, and sociotechnical risks (such as fairness)  
  • Experience designing and implementing audits or evaluations of cutting-edge AI systems

Application deadline: 6pm GMT, Friday 6th December 2024 

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.