Bengaluru, Karnataka, India
19 hours ago
Data Engineer III - Databricks, SQL

Be part of a dynamic team where your distinctive skills will contribute to a winning culture.


 As a Data Engineer III at JPMorgan Chase within the Employee Platforms team, you serve as a seasoned member of an agile team to design and deliver trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.

 

 

Job responsibilities

 

- Design and deliver trusted data collection, storage, access, and analytics solutions that are secure, stable, and scalable.
- Develop, test, and maintain critical data pipelines and data architectures across multiple technical areas and business functions.
- Build and optimize ELT/ETL workflows using SQL, Spark, and Databricks to support analytical and operational use cases.
- Support the review and implementation of controls to protect enterprise data and meet governance and compliance requirements.
- Advise stakeholders and implement custom configuration changes in one to two tools to generate requested business products.
- Update logical and physical data models based on evolving use cases and sources.
- Use advanced SQL (joins, aggregations) and apply NoSQL where appropriate based on workload and access patterns.
- Implement and automate CI/CD for data pipelines and infrastructure as code with Terraform in AWS environments.
- Monitor, troubleshoot, and improve pipeline performance, reliability, and cost efficiency.
- Implement data quality checks, lineage, and metadata management across the data lifecycle.
- Contribute to an agile team culture of diversity, opportunity, inclusion, respect, and continuous improvement.
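The Medallion-style pipeline work described above can be sketched in plain Python. This is an illustrative sketch only, not the team's actual implementation; in practice the bronze, silver, and gold layers would be Delta tables transformed with Spark on Databricks, and all record fields and sample data here are hypothetical.

```python
# Illustrative Medallion-style flow: bronze (raw) -> silver (validated) -> gold (aggregated).
# In a real Databricks pipeline these would be Delta tables processed with Spark;
# plain Python structures are used here only to sketch the shape of each layer.

# Bronze: raw ingested records, possibly malformed (hypothetical sample data).
bronze = [
    {"employee_id": "E1", "dept": "ops", "hours": "38"},
    {"employee_id": "E2", "dept": "ops", "hours": "41"},
    {"employee_id": "E3", "dept": "eng", "hours": "40"},
    {"employee_id": "",   "dept": "eng", "hours": "35"},   # rejected: missing business key
    {"employee_id": "E4", "dept": "eng", "hours": "n/a"},  # rejected: non-numeric hours
]

def to_silver(rows):
    """Apply basic data quality checks: non-empty key, numeric hours."""
    silver = []
    for row in rows:
        if not row["employee_id"]:
            continue  # reject records missing the business key
        try:
            hours = float(row["hours"])
        except ValueError:
            continue  # reject records with non-numeric hours
        silver.append({"employee_id": row["employee_id"], "dept": row["dept"], "hours": hours})
    return silver

def to_gold(rows):
    """Aggregate validated records into per-department hour totals."""
    totals = {}
    for row in rows:
        totals[row["dept"]] = totals.get(row["dept"], 0.0) + row["hours"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'ops': 79.0, 'eng': 40.0}
```

The design choice the pattern captures is that raw data lands untouched in bronze, quality checks gate promotion to silver, and gold holds business-ready aggregates.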

 

Required qualifications, capabilities, and skills

- Formal training or certification on software engineering concepts and 3+ years of applied experience.
- Demonstrated end-to-end experience across the data lifecycle, including ingestion, modeling, transformation, and serving.
- Design and implement data models for analytical and operational workloads.
- Use Terraform and AWS services to build cloud-native, infrastructure-as-code solutions.
- Write advanced SQL for complex joins, aggregations, and performance tuning.
- Apply working knowledge of Spark to build scalable, distributed data processing jobs.
- Program in Python to create reliable data pipelines and automation.
- Implement Medallion architecture patterns and build robust pipelines on Databricks.
- Apply CI/CD practices and tools to automate build, test, and deployment of data pipelines.
- Perform statistical data analysis and select appropriate tools and data patterns to meet business analysis needs.

Preferred qualifications, capabilities, and skills

- Hands-on relevant software development experience.
- Program in multiple modern languages, with Python preferred.
- Work with relational and NoSQL databases.
- Use CI/CD tools such as Jules and version control systems like Bitbucket and Git.
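The kind of advanced SQL the role calls for (multi-table joins, aggregation, filtering on aggregates) can be illustrated with an in-memory SQLite database from Python's standard library. The schema and data below are hypothetical and exist only for this sketch; real workloads would run on Databricks SQL over governed tables.

```python
import sqlite3

# Hypothetical schema used only to illustrate join-plus-aggregation SQL;
# table and column names are invented for this example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept TEXT);
    CREATE TABLE timesheets (employee_id INTEGER, week TEXT, hours REAL);
    INSERT INTO employees VALUES (1, 'Asha', 'ops'), (2, 'Ravi', 'ops'), (3, 'Mei', 'eng');
    INSERT INTO timesheets VALUES (1, '2024-W01', 40), (1, '2024-W02', 42),
                                  (2, '2024-W01', 38), (3, '2024-W01', 45);
""")

# LEFT JOIN keeps employees even if they have no timesheets;
# HAVING filters departments on the aggregate, which WHERE cannot do.
query = """
    SELECT e.dept,
           COUNT(DISTINCT e.id)   AS headcount,
           ROUND(AVG(t.hours), 1) AS avg_hours
    FROM employees e
    LEFT JOIN timesheets t ON t.employee_id = e.id
    GROUP BY e.dept
    HAVING AVG(t.hours) > 35
    ORDER BY e.dept;
"""
for row in conn.execute(query):
    print(row)
# ('eng', 1, 45.0)
# ('ops', 2, 40.0)
```

The same join/aggregate/HAVING structure carries over directly to Spark SQL and Databricks SQL, where performance tuning then centers on partitioning and join strategies rather than syntax.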

 
