Plano, TX, United States
AWS Software Engineer II - Python/PySpark/Kafka

Job Description 

You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you.

As an AWS Software Engineer II - Python/PySpark/Kafka at JPMorgan Chase within the Consumer and Community Banking-Digital Technology team, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.

Job responsibilities  
•    Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics.
•    Utilize Python for data processing and transformation tasks, ensuring efficient and reliable data workflows.
•    Implement data orchestration and workflow automation using Apache Airflow.
•    Deploy and manage containerized applications using Kubernetes (EKS) and Amazon ECS.
•    Use Terraform for infrastructure provisioning and management, ensuring a robust and scalable data infrastructure.
•    Develop and optimize data models to support business intelligence and analytics requirements.
•    Work with graph databases to model and query complex relationships within data.
•    Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.

Required qualifications, capabilities and skills  

•    Formal training or certification on software engineering concepts and 2+ years of applied experience.
•    Strong programming skills in Python, with basic knowledge of Java.
•    Experience with Apache Airflow for data orchestration and workflow management.
•    Proficiency in data modeling techniques and best practices.
•    Experience with graph databases, including modeling and querying graph data.
•    Hands-on experience with PySpark and managing large datasets.
•    Experience implementing ETL transformations on big data and streaming platforms, particularly with NoSQL databases (MongoDB, DynamoDB, Cassandra).


Preferred qualifications, capabilities and skills  
•    Strong analytical and problem-solving skills, with attention to detail.
•    Ability to work independently and collaboratively in a team environment.
•    Good communication skills, with the ability to convey technical concepts to non-technical stakeholders.
•    A proactive approach to learning and adapting to new technologies and methodologies.
•    Familiarity with container orchestration platforms such as Kubernetes (EKS) and Amazon ECS.
•    Experience with Terraform or CloudFormation for infrastructure as code and cloud resource management.
