We are hiring for this role in London, Zurich, New York, Mountain View or San Francisco. Please clarify in the application questions which location(s) work best for you.
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
Our team is responsible for enabling AI systems to reliably work as intended, including identifying potential risks from current and future AI systems, and conducting technical research to mitigate them. As a Research Scientist, you will design, implement, and empirically validate approaches to alignment and risk mitigation, and integrate successful approaches into our best AI systems.
Conducting research into any transformative technology comes with responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at Google DeepMind investigates questions related to evaluations, reward learning, fairness, interpretability, robustness, and generalisation in machine learning systems. Proactive research in these areas is essential to the fulfilment of the long-term goal of Google DeepMind: to build safe and socially beneficial AI systems.
Research Scientists work at the forefront of technical approaches to designing systems that reliably function as intended while discovering and mitigating risks, in close collaboration with other AI research groups within and outside of Google DeepMind.
We’re a dedicated scientific community, committed to ‘solving intelligence’ and ensuring our technology is used for widespread public benefit.
We’ve built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don’t set limits based on what others think is possible or impossible. We drive ourselves and inspire each other to push boundaries and achieve ambitious goals.
We constantly iterate on our workplace experience with the goal of ensuring it encourages a balanced life. From excellent office facilities through to extensive manager support, we strive to support our people and their needs as effectively as possible.
Our list of benefits is extensive, and we’re happy to discuss this further throughout the interview process.
We are seeking research scientists for our Gemini Safety and AGI Safety & Alignment (ASAT) teams.
Gemini Safety is seeking research scientists to contribute to the following areas:
In this role, you will investigate new techniques to improve the safety behavior of Gemini via pretraining interventions.
You will conduct empirical studies of model behavior, analyze model performance across different scales, and experiment with synthetic datasets, data weighting, and related techniques. You should enjoy working with very large-scale datasets and have an empirical mindset.
This role focuses on post-training safety. You will be part of a fast-paced, intense effort at the heart of Gemini to improve the safety and helpfulness of the core model, and to help adapt the model to specific use cases such as reasoning or search.
In this role, you will build and apply automated red teaming using our most capable models to find losses and vulnerabilities in our Gen AI products, including Gemini itself, reasoning models, image and video generation, and whatever else we are building.
You may also work to improve resilience to jailbreaks and adversarial prompts across models and modalities, driving progress on a fundamentally unsolved problem with serious implications for future safety.
This role focuses on safety for image and video generation, including Imagen, Veo, and Gemini. You will design evaluations for safety and fairness, improve the safety behavior of the relevant models in close collaboration with the core modeling teams, and design mitigations outside the model (e.g. external classifiers).
Our AGI Safety & Alignment team is seeking research scientists to contribute to the following areas:
The focus of this role is to put insights from model internals research into practice, both for safety in Gemini post-training and for dangerous capability evaluations in support of our Frontier Safety Framework.
Key responsibilities:
In this role you will advance AGI safety & alignment research within one of our priority areas. Candidates should have expertise in the area they apply to. We are also open to candidates who could lead a new research area with clear impact on AGI safety & alignment. Areas of interest include, but are not limited to:
You have extensive research experience with deep learning and/or foundation models (for example, a PhD in machine learning).
In addition, any of the following would be an advantage:
At Google DeepMind, we want employees and their families to live happier and healthier lives, both in and out of work, and our benefits reflect that. Some select benefits we offer: enhanced maternity, paternity, adoption, and shared parental leave; private medical and dental insurance for yourself and any dependents; and flexible working options. We strive to continually improve our working environment and provide excellent facilities such as healthy food, an on-site gym, faith rooms, and terraces.
We are also open to relocating candidates to Mountain View and offer a bespoke relocation service and immigration support to make the move as easy as possible (depending on eligibility).
The US base salary range for this full-time position is $136,000 - $245,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
Application deadline: 12pm PST, Friday 28 February 2025