Principal AI Architect and Security Strategist
The posting explicitly lists "vibe coding" among its preferred GenAI/LLM tooling experience and emphasizes rapid prototyping with LLMs, RAG, and agent/tool integration.
About the Role
Lead the technical vision and multi-year architecture for AI-first security products at Microsoft, driving zero-to-one incubation and one-to-many platform scale. Provide hands-on technical leadership: prototyping, designing RAG and model-routing solutions, and aligning cross-functional teams to deliver secure, responsible AI security experiences.
Job Description
Role
Microsoft Security NEXT (MSECAI) is hiring a Principal AI Architect and Security Strategist to define the multi-year product vision, technical strategy, and architecture for AI-native security solutions. The role spans zero-to-one incubation and one-to-many platformization across Microsoft Security products (e.g., Security Copilot, Defender, Sentinel, Entra, Intune, Purview, and Azure AI), with an emphasis on rapid prototyping, responsible AI, and secure scale-out to GA.
Key Responsibilities
- Define technical vision, architecture, and roadmap for AI-native security incubation initiatives and align stakeholders across product teams.
- Lead 0→1 incubation through MVP/private preview and drive 1→N platformization and scaling to General Availability, balancing quality, latency, reliability, cost, and safety.
- Provide hands-on technical leadership: prototype in code, review designs and PRs, define APIs/data contracts, and build well-architected systems.
- Design AI-first security components: select between LLMs and classical ML, design retrieval-augmented generation (RAG) pipelines, implement grounding, model routing/fallbacks, and safety guardrails (see the illustrative sketch after this list).
- Ensure security-centric and Responsible AI practices: design privacy and security guardrails, and coordinate reviews, abuse prevention, compliance checks, and incident readiness.
- Lead and mentor virtual teams to cultivate a high-velocity engineering culture; engage with enterprise customers and field teams to co-design solutions and demonstrate progress to executives.
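The component-design bullet above names a concrete pattern: ground an LLM on retrieved context and route requests to a primary model with a fallback. The sketch below is purely illustrative and not part of the posting; the retrieval function, model callables, and all names are hypothetical stand-ins, with naive keyword overlap standing in for a real vector store.

```python
# Illustrative sketch only (not from the posting): a toy RAG pipeline with
# grounding and a primary/fallback model route. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Document:
    doc_id: str
    text: str


def retrieve(query: str, corpus: List[Document], k: int = 3) -> List[Document]:
    """Naive keyword-overlap retrieval standing in for a vector store."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_grounded_prompt(query: str, docs: List[Document]) -> str:
    """Ground the model on retrieved context and ask it to cite sources."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in docs)
    return (
        "Answer using only the context below and cite document ids.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


def answer(
    query: str,
    corpus: List[Document],
    primary: Callable[[str], str],
    fallback: Callable[[str], str],
) -> str:
    """Route to the primary model, falling back on failure or empty output."""
    prompt = build_grounded_prompt(query, retrieve(query, corpus))
    try:
        result = primary(prompt)
        if result.strip():
            return result
    except Exception:
        pass  # e.g. timeout, rate limit, or content-safety block
    return fallback(prompt)
```

In a production system the retrieval step would sit on a vector store, `primary` and `fallback` would wrap hosted model endpoints behind content-safety guardrails, and the routing decision would weigh the quality, latency, reliability, cost, and safety trade-offs the responsibilities above call out.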
Requirements
Required
- Bachelor’s degree in Computer Science or related field AND 8+ years of technical engineering experience coding in languages such as C, C++, C#, Java, JavaScript, or Python, or equivalent experience.
- Ability to meet Microsoft and customer security screening requirements (Microsoft Cloud background check on hire and every two years).
Preferred
- Advanced degree or equivalent with substantial senior engineering experience (per the posting: Master's degree plus 8+ years, or Bachelor's degree plus 12+ years).
- 6+ years driving complex cross-functional initiatives and 3+ years with ML/AI systems (LLMs, generative AI, RAG, model serving, experimentation platforms, data pipelines), including evaluation metrics and model quality improvements.
- Proven track record shipping cloud-based AI or security services at scale (multi-tenant, high-throughput).
- Security domain expertise (threat detection/response, SIEM/SOAR, identity, endpoint, cloud security) and familiarity with analyst workflows.
- Experience with GenAI/LLM techniques and tooling: prompt engineering, retrieval/vector stores, agents/tool integration, content safety and guardrails, offline/online evaluation frameworks, and vibe coding (a minimal evaluation sketch follows this list).
- Hands-on coding in one or more languages (e.g., Python, C#, C++, Rust, JavaScript/TypeScript); comfortable prototyping and reviewing PRs.
- Industry thought leadership (patents, papers, talks, community leadership).
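As a companion to the "offline/online evaluation frameworks" item above, here is a minimal, hypothetical sketch of an offline evaluation loop: score a candidate model against a fixed prompt/reference set before promoting it. The metric and all names are illustrative assumptions, not any specific Microsoft tooling.

```python
# Illustrative sketch only: a minimal offline evaluation loop of the kind the
# "offline/online evaluation frameworks" bullet refers to. The metric and all
# names are hypothetical placeholders.
from typing import Callable, Dict, List, Tuple


def token_overlap(prediction: str, reference: str) -> float:
    """Crude quality proxy: fraction of reference tokens recovered."""
    ref = set(reference.lower().split())
    pred = set(prediction.lower().split())
    return len(ref & pred) / len(ref) if ref else 0.0


def evaluate_offline(
    model: Callable[[str], str],
    dataset: List[Tuple[str, str]],  # (prompt, reference answer) pairs
    threshold: float = 0.5,
) -> Dict[str, float]:
    """Score a candidate model on a fixed eval set before shipping it."""
    scores = [token_overlap(model(prompt), ref) for prompt, ref in dataset]
    return {
        "mean_overlap": sum(scores) / len(scores),
        "pass_rate": sum(s >= threshold for s in scores) / len(scores),
    }


if __name__ == "__main__":
    eval_set = [("What port does HTTPS use?", "HTTPS uses port 443")]
    print(evaluate_offline(lambda p: "Port 443 is used for HTTPS", eval_set))
```

A real harness would swap the overlap metric for task-specific quality, grounding, and safety scores and track them across model and prompt versions; the fixed-dataset loop shown here is the part that makes regressions visible before an online rollout.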
Compensation
- Typical U.S. base pay range listed for this role: USD 163,000 - 296,400 per year. Higher ranges apply in specific locations (e.g., San Francisco Bay Area and New York City: USD 220,800 - 331,200).
Other
- Position requires meeting Microsoft security screening requirements. The posting notes that applications are accepted on an ongoing basis until the role is filled and includes Microsoft's equal employment opportunity statement.