Kochi
Lead II - Software Engineering

Job Role: Data Engineer – Snowflake, SQL, Python
Job Location: Any UST location

Role Overview

We are seeking an experienced Data Engineer with strong hands-on expertise in Snowflake, SQL, and Python. The role involves designing, building, and optimizing data pipelines as we transition multiple data sources and workloads to the Snowflake Cloud Data Platform. You will work with modern data engineering frameworks, contribute to data modelling efforts, and ensure high-quality, scalable, and well-documented data solutions.

Key Responsibilities

- Design, develop, and maintain ETL/ELT pipelines for large-scale data ingestion and transformation.
- Work extensively with the Snowflake Cloud Data Warehouse, including schema design, performance optimization, and data governance.
- Develop data processing scripts and automation workflows using Python (pandas/dask/vaex) and Airflow or similar orchestration tools (see the pipeline sketch after this list).
- Implement data modelling best practices (3NF, star schema, wide/tall tables) and metadata management processes.
- Optimize SQL queries across different database engines and manage performance trade-offs.
- Contribute to data quality, lineage, and governance integration (e.g., Collibra).
- Collaborate with business stakeholders to gather requirements, translate them into technical specifications, and deliver end-to-end solutions.
- Support agile ways of working, participate in ceremonies, and maintain relevant documentation and artifacts.
- Work with source control (GitHub) and follow best practices for a shared codebase and CI/CD workflows.
- Contribute to building robust, reliable pipelines that support RBAC-based data access controls.
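To make the day-to-day work concrete, here is a minimal sketch of the kind of ELT pipeline described above, using Airflow's TaskFlow API, pandas, and the Snowflake Python connector. All paths, table names, and connection details are illustrative placeholders, not part of any real system at UST.

    from datetime import datetime

    import pandas as pd
    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def orders_elt():
        """Hypothetical daily load of an orders extract into Snowflake."""

        @task
        def extract() -> str:
            # Pull a raw extract (path is a placeholder) and normalise types.
            df = pd.read_csv("/data/raw/orders.csv")
            df["ORDER_DATE"] = pd.to_datetime(df["ORDER_DATE"])
            staged = "/tmp/orders_clean.parquet"
            df.to_parquet(staged, index=False)
            return staged

        @task
        def load(staged_path: str) -> None:
            import snowflake.connector
            from snowflake.connector.pandas_tools import write_pandas

            df = pd.read_parquet(staged_path)
            # Truncate-then-load keeps the task idempotent across retries.
            conn = snowflake.connector.connect(
                account="my_account",   # placeholders: real values would
                user="etl_user",        # come from a secrets backend
                password="***",
                warehouse="LOAD_WH",
                database="ANALYTICS",
                schema="STG",
            )
            try:
                conn.cursor().execute("TRUNCATE TABLE IF EXISTS STG.ORDERS")
                write_pandas(conn, df, "ORDERS", schema="STG")
            finally:
                conn.close()

        load(extract())


    orders_elt()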

Required Skills & Experience

- 3+ years of experience in Data Engineering or a similar role.
- Strong expertise with Snowflake, including schema design, warehouse configuration, and data product development.
- Advanced SQL skills with experience writing optimized, high-performance queries.
- Hands-on experience in Python for data processing, particularly with pandas or equivalent frameworks.
- Experience with Airflow, dbt, or similar data orchestration/ELT frameworks.
- Excellent understanding of ETL/ELT patterns, idempotency, and data engineering best practices.
- Strong data modelling experience (3NF, dimensional modelling, semantic layers); a small worked example follows this list.
- Familiarity with data governance and metadata cataloguing best practices.
- Experience integrating data pipelines with enterprise access control / RBAC.
- Working experience with GitHub or similar version control tools.
- Ability to work with business stakeholders, gather requirements, and deliver scalable solutions.
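As a small illustration of the dimensional modelling skills listed above, the following pandas sketch splits a flat orders extract into a customer dimension and an orders fact table. The column names and data are invented for the example.

    import pandas as pd

    orders = pd.DataFrame(
        {
            "order_id": [1, 2, 3],
            "customer_name": ["Ann", "Ben", "Ann"],
            "customer_city": ["Kochi", "Pune", "Kochi"],
            "amount": [120.0, 75.5, 40.0],
        }
    )

    # Dimension: one row per distinct customer, with a surrogate key.
    dim_customer = (
        orders[["customer_name", "customer_city"]]
        .drop_duplicates()
        .reset_index(drop=True)
    )
    dim_customer["customer_sk"] = dim_customer.index + 1

    # Fact: measures plus a foreign key into the dimension.
    fact_orders = orders.merge(
        dim_customer, on=["customer_name", "customer_city"]
    )[["order_id", "customer_sk", "amount"]]

    print(dim_customer)
    print(fact_orders)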

Preferred / Nice to Have

- Experience with AWS data services (S3, Glue, Lambda, IAM); a minimal S3 read sketch follows this list.
- Knowledge of data virtualisation platforms, especially Denodo (cache management, query performance tuning).
- Certifications in Snowflake, AWS, or Denodo.
- Degree in Computer Science, Data Engineering, Mathematics, or a related field (or equivalent professional experience).
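For the AWS side, here is a minimal sketch of reading an object from S3 with boto3; the bucket and key are hypothetical, and credentials are resolved by boto3's standard chain (environment variables, shared config, or an attached IAM role).

    import io

    import boto3
    import pandas as pd

    # Bucket and key are placeholders for illustration only.
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="raw-data-bucket", Key="orders/2024/orders.csv")
    df = pd.read_csv(io.BytesIO(obj["Body"].read()))
    print(df.head())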