Mentions "vibe coding" as a nice-to-have; role also leverages Snowflake Cortex and AI/ML pipelines, so may involve AI-assisted workflows.
About the Role
We are hiring a hands-on Data Engineering & Integration Engineer to build scalable, production-grade data platforms on Snowflake and AWS. The role focuses on designing and optimizing robust ETL/ELT pipelines, integrating diverse data sources, and enabling analytics and AI-driven use cases.
Key Responsibilities
- Design, build, and optimize ETL/ELT pipelines using Snowflake and AWS (see the pipeline sketch after this list)
- Integrate data from APIs, files, logs, and streaming sources
- Develop advanced SQL, stored procedures, and Snowflake tasks
- Optimize Snowflake performance (query tuning, clustering, cost optimization)
- Implement secure, scalable cloud architectures on AWS
- Partner with stakeholders to translate business needs into data solutions
- Perform exploratory data analysis and support analytics/reporting use cases
- Build and maintain CI/CD pipelines for data workflows
- Automate testing, deployment, and monitoring of data pipelines (see the data-quality sketch below)
- Leverage Snowflake Cortex and AI capabilities where appropriate (see the Cortex sketch below)
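
To give candidates a concrete feel for the work, here is a minimal sketch of an ELT step of the kind this role owns. It assumes the snowflake-connector-python package; the warehouse, database, stage, and table names (ETL_WH, ANALYTICS, events_stage, raw_events, curated_events) are hypothetical placeholders, not a description of our actual environment.

```python
# Minimal ELT sketch: ingest staged JSON files, then upsert curated rows.
# All Snowflake object names below are hypothetical placeholders.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",    # hypothetical warehouse
    database="ANALYTICS",  # hypothetical database
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Load newly staged JSON files, extracting the key during the COPY.
    cur.execute("""
        COPY INTO raw_events (event_id, payload)
        FROM (SELECT $1:event_id::STRING, $1 FROM @events_stage)
        FILE_FORMAT = (TYPE = 'JSON')
    """)
    # Upsert into the curated table, keyed on event_id.
    cur.execute("""
        MERGE INTO curated_events AS t
        USING raw_events AS s
        ON t.event_id = s.event_id
        WHEN MATCHED THEN UPDATE SET t.payload = s.payload
        WHEN NOT MATCHED THEN INSERT (event_id, payload)
            VALUES (s.event_id, s.payload)
    """)
finally:
    conn.close()
```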
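
On the Cortex side, a hedged sketch of calling an LLM function from a pipeline step: SNOWFLAKE.CORTEX.SENTIMENT is a built-in Cortex SQL function, but the customer_feedback table and feedback_text column are illustrative only. The function reuses a connection object like the one opened in the previous sketch.

```python
# Illustrative Snowflake Cortex call from a pipeline step.
# customer_feedback and feedback_text are hypothetical placeholders;
# SNOWFLAKE.CORTEX.SENTIMENT is a built-in Cortex function.
def score_feedback_sentiment(conn, limit=10):
    """Return (feedback_id, sentiment) pairs scored by Cortex."""
    cur = conn.cursor()
    cur.execute(f"""
        SELECT feedback_id,
               SNOWFLAKE.CORTEX.SENTIMENT(feedback_text) AS sentiment
        FROM customer_feedback
        LIMIT {int(limit)}
    """)
    return cur.fetchall()
```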
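
Finally, a sketch of the kind of automated data-quality check that runs in CI against these pipelines. The table name is again a placeholder, and the conn fixture is assumed to be supplied by the test harness.

```python
# Illustrative data-quality check, runnable under pytest given a
# `conn` fixture that yields a Snowflake connection. curated_events
# is a hypothetical placeholder table.
def test_no_duplicate_event_ids(conn):
    cur = conn.cursor()
    cur.execute("""
        SELECT COUNT(*) - COUNT(DISTINCT event_id)
        FROM curated_events
    """)
    (duplicates,) = cur.fetchone()
    assert duplicates == 0, f"found {duplicates} duplicate event_id rows"
```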
Requirements
- 2–3 years of relevant experience
- Strong experience with Snowflake Data Cloud
- Advanced SQL and strong Python programming skills
- Hands-on experience with AWS
- Expertise in building scalable ETL/ELT pipelines
- Solid understanding of data modeling and warehousing
- Experience with CI/CD and DevOps practices
Nice to Have
- Experience with AI/ML pipelines and vector databases
- Familiarity with data quality, governance, and observability frameworks
- Experience or interest in “vibe coding” (AI-assisted development workflows)