Principal Security Research Manager
This role emphasizes hands-on generative AI work, including vibe-coding, building agentic systems, model benchmarking/fine-tuning, and constructing AI-centered wargames.
About the Role
Lead a team that builds large-scale simulated network environments and capture-the-flag (CTF) challenges to train and evaluate AI red-team and blue-team agents, advancing generative-AI-driven defensive capabilities. Drive design, implementation, and collaboration across research and engineering to improve agentic wargames and self-driven learning for security outcomes.
Key Responsibilities
- Lead a multi-disciplinary team of security researchers, applied scientists, and engineers to build realistic virtual network environments, breach paths, and benign traffic patterns.
- Design and seed end-to-end CTF challenges that exercise attacker and defender behaviors within simulations.
- Collaborate with research and engineering teams to implement agentic wargames, continuous learning workflows, and benchmarks for AI red and blue teams.
- Oversee simulation environment architecture, challenge design, and integration with production or evaluation pipelines.
- Engage with cross-functional partners and external stakeholders as needed to validate scenarios and use cases.
Requirements
Minimum Qualifications
- Doctorate in Statistics, Mathematics, Computer Science, Computer Security, or a related field AND 3+ years of experience in the software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection.
- OR Master’s degree AND 4+ years of relevant experience.
- OR Bachelor’s degree AND 6+ years of relevant experience.
- OR equivalent experience.
- 1+ year of people management experience.
- Ability to meet Microsoft and customer security screening requirements (including Microsoft Cloud background check).
Preferred Qualifications
- Advanced degree with additional years of industry experience (detailed in posting) or equivalent experience.
- 3+ years of people management and/or informal/indirect leadership experience.
- 8+ years of computer security industry experience, with knowledge of adversary tradecraft, security operations, incident response, and threat hunting.
- 6+ years of experience researching, prototyping, and authoring threat detection or remediation capabilities in production environments.
- Hands-on experience with generative AI including building agentic systems, vibe-coding, or model benchmarking/fine-tuning.
- Code fluency in C/C++, Java, Python, or Rust.
- Experience designing large-scale simulation environments or CTF challenges.
- Hands-on experience deploying and maintaining cloud environments, tenants, or subscriptions in Azure.
Compensation
- Typical U.S. base pay range: USD 139,900 - 274,800 per year.
- San Francisco Bay Area & New York City metropolitan area base pay range: USD 188,000 - 304,200 per year.
Notes
- Candidates must be able to pass required security/background screenings. Certain roles may be eligible for benefits and additional compensation as noted by the employer.