Hyderabad, Telangana, India
Deputy Director - Enterprise Data Operations
Overview

Job Title: Data Engineer – L10

PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo's global business scale to enable business insights, advanced analytics and new product development. PepsiCo's Enterprise Data Operations (EDO) team is responsible for developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation.

What PepsiCo Enterprise Data Operations (EDO) does:
· Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company
· Manage day-to-day data collection, transportation, maintenance/curation of, and access to the PepsiCo corporate data asset
· Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science, and other stakeholders
· Increase awareness of available data and democratize access to it across the company

Responsibilities

Job Description

As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines from various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

The candidate should have 5+ years of experience working on cloud platforms; 2+ years of experience in Azure is required. The candidate will front technical discussions with leads of different business sectors and stay on top of all support issues pertaining to the team's data products.

Qualifications

Bachelor's in Computer Science Engineering or a related field

Skills, Abilities, Knowledge
· Excellent communication skills, both verbal and written, and the ability to influence and demonstrate confidence in communications with senior-level management.
· Comfortable with change, especially change that arises through company growth.
· Ability to understand and translate business requirements into data and technical requirements.
· High degree of organization and ability to coordinate effectively with the team.
· Positive and flexible attitude, with the ability to adjust to different needs in an ever-changing environment.
· Foster a team culture of accountability, communication, and self-management.
· Proactively drive impact and engagement while bringing others along.
· Consistently attain/exceed individual and team goals.
· Ability to learn quickly and adapt to new skills.
· Azure Fundamentals certification is preferred.
· Nice-to-have experience for the candidate:
  - Proficiency with Git and understanding of DevOps pipelines
  - Data quality frameworks using the Great Expectations suite
  - Understanding of cloud networking (VNets, RBAC, etc.)
  - Fair understanding of web applications

Qualifications
· 14+ years of overall technology experience, including at least 8 years of hands-on software development and data engineering.
· 8+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, and Scala.
· 2+ years of cloud data engineering experience in Azure. Azure certification is a plus.
· Experience with version control systems like GitHub and with deployment and CI tools.
· Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.
· Experience with data profiling and data quality tools is a plus.
· Experience working with large data sets and scaling applications on platforms like Kubernetes is a plus.
· Experience with statistical/ML techniques is a plus.
· Experience building solutions in the retail or supply chain space is a plus.
· Understanding of metadata management, data lineage, and data glossaries is a plus.
· Working knowledge of agile development, including DevOps and DataOps concepts.
· Familiarity with business intelligence tools (such as Power BI).
· BE/BTech/MCA (Regular) in Computer Science, Math, Physics, or other technical fields.
· The candidate must have thorough knowledge of Spark, SQL, Python, Databricks, and Azure (an illustrative sketch follows this list):
  - Spark (joins, upserts, deletes, aggregates, repartitioning, optimizations, working with structured and unstructured data, framework design, etc.)
  - SQL (joins, merge, aggregates, indexing, clustering, functions, stored procedures, optimizations, etc.)
  - Python (functions, modules, classes, tuples, lists, dictionaries, error handling, multi-threading, etc.)
  - Azure (Azure Data Factory, Service Bus, Log Analytics, Event Grid, Event Hub, Logic App, App Services, etc.)
  - Databricks (clusters, pools, workflows, authorization, APIs, DBRs, AQE, optimizations, etc.)
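To illustrate the depth implied by the Spark and Databricks items above, here is a minimal sketch of a Delta Lake upsert (merge) in PySpark, the kind of incremental-load pattern used when resting data on a data lake. The table paths, column names, and schema are hypothetical examples chosen for illustration and are not part of the role description.

from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# On Databricks a SparkSession named `spark` is already provided; this line
# only matters when running the sketch outside a Databricks notebook.
spark = SparkSession.builder.getOrCreate()

# Hypothetical incremental batch of source records (columns: id, amount, updated_at).
updates_df = spark.read.format("parquet").load("/mnt/raw/sales_updates/")

# Hypothetical curated Delta table resting on the data lake.
target = DeltaTable.forPath(spark, "/mnt/curated/sales")

# Upsert: update rows whose keys already exist, insert the rest.
(
    target.alias("t")
    .merge(updates_df.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

In practice the merge condition and the update/insert column mappings would follow the actual product's keys and schema; the pattern is shown only to indicate the level of hands-on Spark/Databricks familiarity expected.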