Calgary, Alberta
Data Engineer
Airswift is seeking a Data Engineer to support a major energy client in Calgary. This is an exciting opportunity for someone passionate about data and eager to build modern, scalable data solutions that drive business value.

As a Data Engineer, you will collaborate with Data Analysts, Business Systems Analysts, and subject matter experts to deliver fit-for-purpose data pipelines and data products for analytics and data science. You will be responsible for developing pipelines that extract data from multiple sources, transform and validate it, and load it into cloud-based environments such as data lakes, data warehouses, or applications for further analysis and visualization.
The Enterprise Data and Analytics team is focused on enabling the organization to become more data-driven, working across departments to deliver impactful data solutions.
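As a rough illustration of the extract, transform, and load flow described above, the sketch below uses PySpark (one of the listed skills) to read raw records, apply a simple validation, and write curated output to a data-lake path. All paths, column names, and the validation rule are hypothetical placeholders for illustration, not details of the client's environment.

    # Minimal ETL sketch in PySpark (hypothetical paths and columns, for illustration only).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example_etl").getOrCreate()

    # Extract: read raw records from an assumed CSV landing zone.
    raw = spark.read.option("header", "true").csv("/mnt/landing/orders/")

    # Transform: cast types and drop rows that fail a simple validation rule.
    clean = (
        raw.withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount").isNotNull() & (F.col("amount") >= 0))
           .withColumn("load_date", F.current_date())
    )

    # Load: write partitioned Parquet to an assumed data-lake path for downstream analytics.
    clean.write.mode("overwrite").partitionBy("load_date").parquet("/mnt/datalake/curated/orders/")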

Key Accountabilities:
- Estimate, design, and develop scalable, optimized data pipelines to support Business Intelligence and Advanced Analytics.
- Develop and maintain standards and best practices for MLOps and machine learning initiatives, using DevOps CI/CD processes.
- Provide hands-on support for data integration, including monitoring, configuration, troubleshooting, and user administration.
- Optimize data objects for consumption by analytics and data science teams.
- Follow ITIL processes to transition solutions from project to production.
- Conduct root cause analysis and resolve incidents reported by monitoring systems or end-users.
- Configure and administer tools such as HVR, KNIME, and Magnotix for data replication and integration.
- Establish and maintain best practices and templates for data engineering across different toolsets and use cases (e.g., ML/AI vs. BI Analytics).
Skills & Qualifications:
- Post-secondary degree in Computer Science, Software Engineering, or a related field, or equivalent experience in data engineering or integration.
- 5+ years of experience developing cost-optimized, scalable, and configurable ETL/ELT pipelines.
- Strong experience with Python, SQL, and PySpark.
- Proficiency in Azure, Databricks, Azure Data Factory, and other cloud-based tools.
- Familiarity with DevOps/CI-CD pipelines, performant data stores, and operational REST APIs.
- Experience with tools such as HVR, KNIME, Magnotix, Synapse Analytics, Data Lakes, and Scala is an asset.