We are seeking skilled and motivated Spark & Databricks Developers to join our dynamic team for a long-term project. The ideal candidate will have strong hands-on experience in Apache Spark, Databricks, and GitHub-based development workflows.

Key Responsibilities:
- Design, develop, and optimize big data pipelines using Apache Spark.
- Build and maintain scalable data solutions on Databricks.
- Collaborate with cross-functional teams on data integration and transformation.
- Manage version control and code collaboration using GitHub.
- Ensure data quality, performance tuning, and job optimization.
- Participate in code reviews, testing, and documentation.

Required Skills & Experience:
- 4–8 years of experience in data engineering or related roles.
- Strong hands-on expertise in Apache Spark (batch and streaming).
- Proficiency in Databricks for developing and managing data workflows.
- Experience with GitHub for code versioning, pull requests, and branching strategies.
- Good understanding of data lake and data warehouse architectures.
- Strong SQL and Python scripting skills.