Note: this posting explicitly calls out vibe coding and regular use of AI code assistants (GitHub Copilot, Gemini, Claude) to accelerate development and prototyping.
About the Role
Equifax is hiring a Senior AI Engineer to lead the architecture and delivery of cloud-native, production-grade AI agents and platforms on Google Cloud. The role combines hands-on development, MLOps, and technical leadership to build scalable, reliable microservices and agentic AI systems using AI code assistants and modern cloud-native tooling.
Job Description
Role
Lead the design, implementation, and production deployment of agentic AI systems and cloud-native platforms. Own technical vision, mentor a cross-functional engineering team, and ensure scalable, reliable, and observable AI services on Google Cloud.
Key Responsibilities
- Design, build, and deploy complex AI agents and agentic workflows using LangChain and LangGraph.
- Lead prompt and context engineering to maximize agent accuracy and reliability.
- Research, prototype, and integrate foundation models, RAG techniques, and agentic frameworks.
- Engineer scalable production systems on Google Cloud Platform (GCP) including Vertex AI.
- Establish MLOps practices for reliability, versioning, monitoring, and observability (e.g., Langfuse).
- Use AI-powered code assistants (GitHub Copilot, Gemini, Claude) to accelerate development, documentation, testing, and monitoring.
- Build and operate containerized microservices using Docker and Kubernetes; apply IaC and CI/CD best practices.
- Define and report on engineering metrics (SLA, SLO, SLI) and enforce DevSecOps and FinOps practices.
- Collaborate with product managers, data scientists, SREs, and stakeholders to translate business needs into technical solutions.
- Lead troubleshooting, incident resolution, and agile team activities; produce technical documentation and presentations.
Requirements
- Bachelor’s degree or equivalent experience.
- 5+ years in software engineering with a track record of technical leadership and shipping scalable systems.
- 1+ years in a dedicated AI/ML role with hands-on model integration and MLOps experience.
- 1+ years architecting and building solutions with LangChain, LangGraph, or similar agentic frameworks.
- 2+ years using Google Cloud Platform, specifically AI/ML services like Vertex AI.
- 3+ years working with Kubernetes workloads.
- Proficiency in Python, JavaScript/TypeScript, and/or Java; working knowledge of a modern front-end framework (Angular, React, or Vue).
- Experience with LLM observability tools such as Langfuse.
- Strong experience with cloud platforms (AWS, Google Cloud, or Azure), Docker, Kubernetes, Terraform or CloudFormation, and CI/CD tools (GitHub Actions, Argo CD, Jenkins).
- Database experience with SQL and NoSQL systems (e.g., Cloud Spanner, AlloyDB, PostgreSQL, MySQL, MongoDB, DynamoDB, Firestore).
- Strong problem-solving, communication, mentorship, and documentation skills.
Preferred / Differentiators
- Hands-on experience with GenAI models (Gemini, ChatGPT, Claude, Llama) and deploying AI agents to production.
- History of architecting solutions for ambiguous, large-scale problems and delivering production-grade, reliable systems.
- Familiarity with DevSecOps, FinOps, SLAs/SLOs/SLIs, and observability for agentic systems.
Tech & Tools (explicitly mentioned)
- LangChain, LangGraph, Langfuse
- Google Cloud Platform (GCP), Vertex AI
- GitHub Copilot, Gemini (incl. Gemini Code Assist), Claude (incl. Claude Code), ChatGPT, Llama
- Python, JavaScript, TypeScript, Java
- Angular, React, Vue
- Docker, Kubernetes
- Terraform, CloudFormation
- GitHub Actions, Argo CD, Jenkins
- Databases: Cloud Spanner, AlloyDB, PostgreSQL, MySQL, MongoDB, DynamoDB, Firestore