(USA) Group Director, Software Engineering, AI Governance
This posting explicitly calls out vibe coding as an emerging development pattern to govern and lists the risks introduced by vibe coding and hybrid builder workflows.
About the Role
Lead enterprise-wide AI governance at Walmart, designing and operationalizing lifecycle controls for models, agents, and AI-powered decision systems. Build platform-embedded guardrails and an AI red team, and partner across Security, Legal, and business teams to enable safe, auditable, and scalable AI adoption.
Job Description
Role
The Group Director, AI Governance will define and operationalize enterprise-wide governance for models, agents, and AI-powered decision applications. This leader will design lifecycle governance (registration, evaluation baselines, monitoring, cost transparency, explainability, and auditability), set guardrails for autonomous behavior, and embed controls into developer platforms to enable safe and scalable AI adoption.
Key Responsibilities
- Establish and operationalize model and agent lifecycle governance, including registration, evaluation baselines, monitoring for drift, bias, and degradation, and cost transparency.
- Define guardrails for autonomous agents, escalation boundaries, and expectations for deterministic-first architectures where appropriate.
- Build and lead an AI red team/adversarial testing capability covering models, agents, and orchestration flows.
- Identify and mitigate risks from emerging development patterns (e.g., vibe coding, hybrid builder workflows), including prompt injection, data leakage, insecure API chaining, shadow deployments, and uncontrolled cost exposure.
- Partner closely with Security, Legal, Risk, and engineering teams to align AI threat modeling and governance controls with enterprise standards.
- Treat governance as a product by embedding controls (guardrails, logging, cost controls, approval pathways) directly into developer platforms.
- Measure success by adoption of governance standards, registration coverage, coverage of evaluation baselines, reduction in unmanaged AI deployments, and positive builder sentiment.
Requirements
Required Qualifications
- 12+ years of experience in AI, machine learning, MLOps, model risk management, cybersecurity, or large-scale analytics platforms.
- Demonstrated ownership of governance, control frameworks, or risk programs for production AI or analytics systems.
- Deep expertise in model lifecycle management, evaluation methodologies, monitoring, and observability.
- Experience operating at enterprise scale and partnering across engineering, security, legal, and business teams.
- Strong executive communication skills to translate technical risk into clear, actionable decisions.
- Minimum qualifications: a Bachelor’s degree in a relevant field and 8 years of software engineering experience, or 10 years of software engineering experience; plus 5 years of supervisory experience.
Preferred Qualifications
- Experience building or leading an AI Red Team or adversarial testing capability.
- Hands-on experience governing LLM-based systems, agents, and prompt-driven architectures.
- Familiarity with AI-related regulatory and compliance considerations in large enterprises.
- Experience embedding governance into developer platforms rather than relying on manual review processes.
- Background in retail, supply chain, or other large-scale operational environments.
Compensation & Location
- Bentonville, Arkansas: $195,000 - $370,000 annually (USD).
- Sunnyvale, California: $254,000 - $481,000 annually (USD).
- Bellevue, Washington: $234,000 - $444,000 annually (USD).
- Additional compensation may include annual or quarterly performance bonuses and stock.
Notes
- The role emphasizes cross-functional leadership, platform integration of governance, and adversarial testing. The posting lists Bentonville, AR as the primary location and includes site-specific salary ranges for the other locations.