Jersey City, NJ, United States
14 hours ago
Lead Software Engineer - Market Risk

We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.

As a Lead Software Engineer at JPMorgan Chase within the Market Risk MXL DataLake Team, you will join a strategic initiative building cutting-edge data platforms for market risk and analytics. In this role, you'll design and implement high-volume data pipelines and historical data stores, collaborating closely with architects, risk technologists, and product owners.

Job Responsibilities

- Design, build, and maintain large-scale historical data stores on modern big-data platforms
- Develop robust, scalable data pipelines using PySpark / Spark for batch and incremental processing
- Apply strong data-modelling principles (e.g. dimensional, Data Vault–style, or similar approaches) to support long-term historical analysis and regulatory requirements
- Engineer high-quality, production-grade code with a focus on correctness, performance, testability, and maintainability
- Optimize Spark workloads for performance and cost efficiency (partitioning, clustering, file layout, etc.)
- Collaborate with architects and senior engineers to evolve platform standards, patterns, and best practices
- Contribute to code reviews, technical design discussions, and continuous improvement of engineering practices
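The idempotent incremental processing mentioned above can be sketched in plain Python (a minimal illustration only; the actual pipelines described here would use PySpark, and all record and key names below are hypothetical):

```python
from typing import Dict, Tuple

def apply_batch(store: Dict[Tuple[str, str], dict], batch: list) -> None:
    """Idempotently merge a batch into a historical store.

    Records are keyed by (business key, as-of date), so replaying the
    same batch leaves the store unchanged -- the property that makes an
    incremental pipeline safely restartable after a failure.
    """
    for rec in batch:
        store[(rec["key"], rec["as_of"])] = rec

store: Dict[Tuple[str, str], dict] = {}
batch = [
    {"key": "IR_DELTA", "as_of": "2024-01-02", "value": 1.5},
    {"key": "IR_DELTA", "as_of": "2024-01-03", "value": 1.7},
]
apply_batch(store, batch)
apply_batch(store, batch)  # replaying the batch adds no duplicates
assert len(store) == 2
```

In a Spark setting the same effect is typically achieved with a keyed merge (e.g. Delta Lake `MERGE INTO`) rather than a dictionary, but the keying discipline is identical.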

Required qualifications, capabilities and skills

- Degree-level education in Computer Science, Software Engineering, or a related discipline (or equivalent practical experience)
- Strong software engineering fundamentals, including data structures, algorithms, and system design
- Proven experience building large-scale data engineering solutions on big-data platforms
- Hands-on experience developing PySpark / Spark pipelines in production environments
- Solid understanding of data modelling for analytical and historical data use cases
- Experience working with large volumes of structured data over long time horizons
- Familiarity with distributed systems concepts such as fault tolerance, parallelism, and idempotent processing

Preferred Qualifications

- Experience with Databricks, Delta Lake, or similar cloud-based big-data platforms
- Hands-on experience designing and implementing Data Vault 2.0 models
- Exposure to historical / regulatory data platforms, risk data, or financial services
- Knowledge of append-only data patterns, slowly changing dimensions, or event-driven data models
- Experience with CI/CD, automated testing, and production monitoring for data pipelines
- Experience building highly reliable, production-grade risk systems with robust controls and integration with modern SRE tooling
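The slowly-changing-dimension and append-only patterns named above can be illustrated with a minimal Type 2 history table in plain Python (a sketch under assumed column names, not any particular platform's implementation):

```python
from typing import List

def scd2_upsert(history: List[dict], key: str, value: str, as_of: str) -> None:
    """Append-only Type 2 update: close the current row, append a new one.

    Each row carries valid_from / valid_to; the currently open row has
    valid_to=None. Prior versions are never overwritten, preserving the
    full change history needed for long-horizon and regulatory queries.
    """
    current = next(
        (r for r in history if r["key"] == key and r["valid_to"] is None), None
    )
    if current is not None:
        if current["value"] == value:
            return  # no change; keep the current row open
        current["valid_to"] = as_of  # close out the superseded version
    history.append({"key": key, "value": value,
                    "valid_from": as_of, "valid_to": None})

hist: List[dict] = []
scd2_upsert(hist, "desk_42", "Rates", "2023-01-01")
scd2_upsert(hist, "desk_42", "Credit", "2024-06-30")
# both versions retained: the first row closed, the second still open
assert len(hist) == 2 and hist[1]["valid_to"] is None
```

The same close-then-append logic maps directly onto a keyed merge against a Delta Lake or similar table in a production pipeline.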