Google

Group Product Manager, AI Security

Sunnyvale, CA, US


Summary

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Sunnyvale, CA, USA; New York, NY, USA.

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 10 years of experience in product management or related technical role.
  • 5 years of experience taking technical products from conception to launch.
  • 5 years of experience in building security, privacy, or trust related products or features.
  • 2 years of experience in building products with Generative AI or Large Language Models.

Preferred qualifications:

  • Master's degree in a technology or business related field.
  • 7 years of experience working cross-functionally with engineering, UX/UI, sales, finance, and other stakeholders.
  • 5 years of experience in software development or engineering.
  • 5 years of experience in building products for developers or technical users.
  • Experience with AI agents and AI workflow tools.
  • Experience with AI security.

About The Job

At Google, we put our users first. The world is always changing, so we need Product Managers who are continuously adapting and excited to work on products that affect millions of people every day.

In this role, you will work cross-functionally to guide products from conception to launch by connecting the technical and business worlds. You can break down complex problems into steps that drive product development.

One of the many reasons Google consistently brings innovative, world-changing products to market is because of the collaborative work we do in Product Management. Our team works closely with creative engineers, designers, marketers, etc. to help design and develop technologies that improve access to the world's information. We're responsible for guiding products throughout the execution cycle, focusing specifically on analyzing, positioning, packaging, promoting, and tailoring our solutions to our users.

The Secure AI Framework (SAIF) initiative is working to make security and privacy the default in Google’s AI systems and tools for responsible AI development.

In this role, you will focus on the security priorities for SAIF. You will work with Google's leading experts on AI and security to foresee new AI threats, and develop product strategies, requirements, and roadmaps to deliver industry-leading AI security solutions. Your work will advance key risk metrics tracked by Google SVPs and the Alphabet board.

The Core team builds the technical foundation behind Google’s flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users and drive the pace of innovation for every developer. We look across Google’s products to build central solutions, break down technical barriers and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.

The US base salary range for this full-time position is $227,000-$320,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Introduce our AI security solutions to Google's Product Areas and to the industry.
  • Synthesize inputs from Google product teams, executive stakeholders, industry partners, regulators, users, and the open source ecosystem to inform future AI security goals and strategy.
  • Partner with teams to define requirements and build security features into AI development tools to improve security (while also improving developer efficiency).
  • Deliver substantial progress on key risk-reduction metrics tracked by the Alphabet board and Core leadership.
  • Drive strategy and roadmaps for critical AI security risks to Google's AI models, systems, and products. Key risks include exfiltration of Google AI intellectual property, tampering or poisoning of AI assets, and rogue actions by AI agents.


Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
