Be an integral part of an Agile team that's constantly pushing the envelope to enhance, build, and deliver top-notch technology products.
As a Senior Lead Software Engineer at JPMorgan Chase, within the Commercial & Investment Bank - Digital Client Relationship team, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics in a secure, stable, and scalable way. You drive significant business impact through your capabilities and contributions, and you apply deep technical expertise and problem-solving methodologies to tackle a diverse array of challenges that span multiple data pipelines, data architectures, and other data consumers.
Job Responsibilities
- Designs and builds hybrid on-prem and public cloud data platform solutions
- Designs and builds end-to-end data pipelines for ingestion, transformation, and distribution, supporting both batch and streaming workloads
- Develops and owns data products that are reusable, well-documented, and optimized for analytics, BI, and AI/ML consumers
- Implements and manages modern data lake and Lakehouse architectures, including Apache Iceberg table formats
- Implements interoperability across data platforms and tools, including Databricks, Snowflake, Amazon Redshift, AWS Glue, and Lake Formation
- Establishes and maintains end-to-end data lineage to support observability, impact analysis, and regulatory requirements
- Implements data quality validation and monitoring using frameworks such as Great Expectations
- Provides recommendations and insight on data management and governance procedures and intricacies applicable to the acquisition, maintenance, validation, and utilization of data
- Designs and delivers trusted data collection, storage, access, and analytics data platform solutions in a secure, stable, and scalable way
- Defines database back-up, recovery, and archiving strategy
- Creates functional and technical documentation supporting best practices
- Advises junior engineers and technologists

Required qualifications, capabilities, and skills
- Formal training or certification on computer science concepts or equivalent, and 5+ years of applied experience
- Hands-on experience building and operating batch and streaming data pipelines at scale
- Experience with Apache Iceberg and modern table formats in Lakehouse environments
- Strong proficiency with Databricks, Snowflake, Amazon Redshift, and AWS data services such as Glue and Lake Formation
- Experience implementing data lineage, data quality, and data observability frameworks
- Working experience with both relational and NoSQL databases
- Advanced understanding of database back-up, recovery, and archiving strategies
- Experience presenting and delivering visual data