Google DeepMind

Research Scientist, Interpretability of LLMs

Seattle, WA, US


Snapshot

Advance research in understanding large language models by treating controllability as a problem dual to understanding.

About Us

We are a team in Google DeepMind with a mission to build AI responsibly to benefit humanity. Our team pursues practical interpretability: interpretability not for the sake of explanation alone, but interpretability that is directly useful for something. Our current focus for this ‘something’ is controllability. We use controllability as a way to evaluate interpretability: if we can use an explanation to do something useful with the model, chances are the explanation is faithful to some degree.

The Role

As a Research Scientist, you will advance research in understanding large language models by treating controllability as a problem dual to understanding.

Key responsibilities:

  • Conduct research on understanding large language models, with practicality as the focus.
  • Translate research findings into useful internal applications.
  • Publish and present findings at internal and external venues to influence the field.

About You

In order to set you up for success as a Research Scientist at Google DeepMind, we look for the following skills and experience:

  • PhD in machine learning, statistics or related fields.
  • A strong publication record.
  • A strong research record in LLMs and foundation models, along with related engineering skills.

In addition, the following would be an advantage:

  • Strong end-to-end system building and prototyping skills.

The US base salary range for this full-time position is between $141,000 and $202,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
