Note: this role explicitly requires strong AI-assisted ("vibe coding") development skills and heavy use of AI code assistants (GitHub Copilot, Gemini, Claude) to accelerate development and prototyping.
About the Role
The Lead AI Engineer will drive Equifax's technology transformation by architecting, building, and operating scalable, production-grade AI agents and cloud-native services on Google Cloud Platform, while leading and mentoring a cross-functional engineering team. The role focuses on applying agentic AI frameworks, MLOps, and AI-assisted development to deliver reliable, observable systems at scale.
Job Description
Role
The Lead AI Engineer at Equifax is responsible for architecting and deploying cloud-native AI systems and for leading a cross-functional engineering team that builds scalable, production-grade AI agents and services on Google Cloud Platform.
Key Responsibilities
- Design, build, and deploy complex AI agents using LangChain and LangGraph to automate decision-making within the claims lifecycle.
- Design, test, and refine prompts and contextual data frameworks for reliable agent performance (prompt & context engineering).
- Identify, prototype, and integrate foundational models, RAG techniques, and agentic frameworks to solve business problems.
- Engineer for production scale on Google Cloud Platform with focus on reliability, observability, and performance.
- Establish and lead MLOps best practices for reliability, versioning, monitoring, and observability of agentic systems (e.g., using Langfuse).
- Use AI-powered code assistants (e.g., Gemini, GitHub Copilot, Claude) to accelerate development, documentation, testing, and monitoring practices.
- Build, manage, and mentor a cross-functional team of software, quality, and reliability engineers.
- Define and report on engineering metrics (SLA, SLO, SLI) and ensure DevSecOps and FinOps best practices.
- Collaborate with product managers, architects, SREs, data scientists, and business partners to define technical strategy and roadmaps.
- Lead troubleshooting and incident resolution, participate in agile ceremonies, and produce technical documentation and runbooks.
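For illustration only (not part of the posting): the agentic pattern behind the responsibilities above, an agent that routes through tools until it reaches a claims decision, with every step logged for observability, can be sketched framework-free. The `policy` stub stands in for an LLM router, the tools and state keys are hypothetical, and the `Trace` class is a minimal stand-in for a sink like Langfuse; LangChain/LangGraph provide production-grade versions of this loop.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical tool registry; in LangGraph these would be graph nodes.
TOOLS: Dict[str, Callable[[dict], dict]] = {
    "fetch_claim": lambda s: {**s, "claim": {"id": s["claim_id"], "amount": 1200}},
    "check_policy": lambda s: {**s, "covered": s["claim"]["amount"] <= 5000},
}

@dataclass
class Trace:
    """Minimal stand-in for an LLM-observability sink such as Langfuse."""
    steps: List[tuple] = field(default_factory=list)

    def log(self, step: str, state: dict) -> None:
        self.steps.append((step, dict(state)))  # snapshot state per step

def policy(state: dict) -> str:
    """Stub router standing in for an LLM: picks the next tool from state."""
    if "claim" not in state:
        return "fetch_claim"
    if "covered" not in state:
        return "check_policy"
    return "END"

def run_agent(claim_id: str, trace: Trace) -> dict:
    """Run the decision loop until the router signals completion."""
    state = {"claim_id": claim_id}
    while (step := policy(state)) != "END":
        state = TOOLS[step](state)
        trace.log(step, state)
    state["decision"] = "approve" if state["covered"] else "escalate"
    return state
```

A call like `run_agent("C-42", Trace())` walks `fetch_claim` then `check_policy` and returns an `approve` decision, with both steps captured in the trace, the same loop-with-instrumentation shape that agentic frameworks and Langfuse formalize.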
Requirements
- Bachelor’s degree or equivalent experience.
- 7+ years in software engineering with a strong record of technical leadership and shipping complex, scalable systems.
- 2+ years in a dedicated AI/ML role with hands-on model integration and MLOps experience.
- 1+ years architecting and building solutions with LangChain, LangGraph, or similar agentic AI frameworks.
- 2+ years of experience with Google Cloud Platform and its AI/ML services (e.g., Vertex AI).
- 3+ years of experience running Kubernetes workloads.
- Proficiency in Python, JavaScript/TypeScript, and/or Java; working knowledge of a modern front-end framework (Angular, React, or Vue).
- Hands-on experience with LLM observability tools (e.g., Langfuse).
- Experience with containerization (Docker), orchestration (Kubernetes), Infrastructure as Code (Terraform or CloudFormation), and CI/CD tools (GitHub Actions, Argo CD, Jenkins).
- Strong database experience with SQL (e.g., Cloud Spanner, AlloyDB, PostgreSQL, MySQL) and NoSQL (e.g., MongoDB, DynamoDB, Firestore).
- Strong problem-solving, communication, mentoring, and metrics-driven engineering practices.
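As a concrete illustration of the SLA/SLO/SLI metrics work mentioned above (again, not part of the posting): a request-based availability SLI is the fraction of successful requests, and an SLO target implies an error budget. The function names below are hypothetical.

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Downtime allowed per window before the SLO is breached."""
    total_minutes = window_days * 24 * 60
    return (1 - slo) * total_minutes

def sli_availability(good_requests: int, total_requests: int) -> float:
    """Request-based availability SLI: share of successful requests."""
    return good_requests / total_requests
```

For example, a 99.9% monthly SLO leaves `error_budget_minutes(0.999)` of roughly 43 minutes of allowable downtime per 30-day window, and 999,000 good requests out of 1,000,000 yields an SLI of 0.999, exactly on target.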
Preferred / Differentiators
- Strong expertise in Generative AI (e.g., Gemini, ChatGPT, Claude, Llama).
- Proven experience deploying AI agents to production and leveraging AI code assistants to increase development velocity and quality.
- History of architecting elegant solutions for ambiguous, complex technical challenges and mentoring teams.
Environment & Tools
LangChain, LangGraph, Langfuse, Google Cloud Platform (GCP), Vertex AI, Python, JavaScript, TypeScript, Java, Angular, React, Vue, Kubernetes, Docker, Terraform, CloudFormation, GitHub Actions, Argo CD, Jenkins, Cloud Spanner, AlloyDB, PostgreSQL, MySQL, MongoDB, DynamoDB, Firestore, GitHub Copilot, Gemini, Claude, ChatGPT, Llama.