You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you.
As a Software Engineer II at JPMorganChase within Consumer and Community Banking - Data Technology, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.
Job responsibilities
- Executes standard software solutions, design, development, and technical troubleshooting
- Writes secure and high-quality code using the syntax of at least one programming language with limited guidance
- Designs, develops, codes, and troubleshoots with consideration of upstream and downstream systems and technical implications
- Applies knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation
- Applies technical troubleshooting to break down solutions and solve technical problems of basic complexity
- Gathers, analyzes, and draws conclusions from large, diverse data sets to identify problems and contribute to decision-making in service of secure, stable application development
- Learns and applies system processes, methodologies, and skills for the development of secure, stable code and systems
- Adds to team culture of diversity, opportunity, inclusion, and respect
Required qualifications, capabilities, and skills
- [Action Required: Insert 1st bullet according to Years of Experience table]
- Hands-on practical experience in system design, application development, testing, and operational stability
- Design, build, and maintain scalable ETL/ELT pipelines for batch data with ETL tools (e.g., Ab Initio) and libraries (e.g., PySpark); a minimal PySpark sketch follows this list
- Develop data models (dimensional, star/snowflake) and warehouse/lakehouse structures
- Implement orchestration and scheduling (e.g., Airflow, Prefect) with robust monitoring and alerting; an Airflow sketch follows this list
- Ensure data quality, lineage, and governance using appropriate frameworks and catalog tools
- Optimize pipeline performance, storage formats (e.g., Parquet, Delta), and query execution
- Advanced SQL skills (query optimization, window functions, schema design), Python programming, and experience with PySpark for distributed data processing; familiarity with data warehousing/lakehouse concepts and columnar formats (Parquet, ORC)
- Proficiency in workflow orchestration (Airflow, Prefect, Dagster), cloud platforms (AWS, Azure, or GCP), version control (Git), CI/CD pipelines (GitHub Actions, Azure DevOps), and containerization (Docker)
- Knowledge of data quality and testing practices (Great Expectations, unit/integration tests), with strong problem-solving, communication, and documentation abilities
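To make the pipeline bullets above concrete, here is a minimal, illustrative PySpark batch ETL sketch: read raw data, deduplicate with a window function, and write partitioned Parquet. Every path, column, and application name in it (events.csv, user_id, event_ts) is a hypothetical placeholder, not something specified in this posting.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-batch-etl").getOrCreate()

# Extract: read a raw batch file (hypothetical path and schema).
raw = spark.read.csv("s3://raw-bucket/events.csv", header=True, inferSchema=True)

# Transform: keep only the latest event per user via a window function,
# one of the SQL/PySpark skills the qualifications call out.
w = Window.partitionBy("user_id").orderBy(F.col("event_ts").desc())
latest = (
    raw.withColumn("rn", F.row_number().over(w))
       .filter(F.col("rn") == 1)
       .drop("rn")
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write a columnar format (Parquet), partitioned for query performance.
(latest.write
       .mode("overwrite")
       .partitionBy("event_date")
       .parquet("s3://curated-bucket/events/"))

spark.stop()
```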
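Likewise, a minimal orchestration sketch, assuming Airflow 2.4+ for the `schedule` argument. The DAG id, schedule, callables, and alert address are all hypothetical; the retries and email_on_failure settings stand in for the "robust monitoring and alerting" the bullet mentions.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # e.g., pull the day's batch from source systems (placeholder)

def transform_and_load():
    ...  # e.g., trigger the PySpark job sketched above (placeholder)

default_args = {
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,               # basic alerting hook
    "email": ["data-oncall@example.com"],   # hypothetical address
}

with DAG(
    dag_id="daily_events_pipeline",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform_and_load",
                        python_callable=transform_and_load)
    t1 >> t2  # extract runs before transform/load
```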
Preferred qualifications, capabilities, and skills
- Familiarity with modern front-end technologies
- Exposure to cloud technologies