Rives de Paris, France
Sr. Manager/Staff Engineer, AI Infrastructure and Operations

ROLE SUMMARY 

 

The Senior Manager/Staff Engineer, AI Infrastructure & MLOps Engineering is a senior individual contributor position reporting directly to the Director, AI Infrastructure and Operations. This role is highly technical, hands-on, and focused on building the core automation, tooling, and infrastructure that power the internal AI platform and services.

In this position, the Senior Manager/Staff Engineer is responsible for designing and implementing systems that enable scientists and engineers to rapidly build, deploy, and monitor machine learning models in production. Work will span Python-based automation, containerization with Docker, CI/CD pipelines, AWS cloud infrastructure, microservices, and high-performance model serving frameworks.

The role plays a critical part in advancing the organization's MLOps capabilities by creating reusable components and internal developer platforms that increase the velocity, reliability, and scalability of AI/ML delivery.

 

ROLE RESPONSIBILITIES  

 

Core Engineering & Automation 

 

Design, build, and maintain Python-based tooling, SDKs, and automation frameworks to support model development, deployment, and monitoring workflows. 

Develop containerized solutions using Docker and orchestrate them using Kubernetes (including Kubeflow or similar MLOps platforms). 

Build and maintain CI/CD pipelines to streamline ML model integration, testing, and deployment into production environments. 

Implement robust automation for provisioning, configuring, and managing cloud resources using Infrastructure-as-Code (Terraform, Pulumi, AWS CDK, etc.). 

 

Cloud Infrastructure & Platform Engineering 

 

Architect and manage scalable, secure, and high-availability AWS infrastructure to support AI workloads. 

Develop and optimize microservices architectures for AI/ML serving, ensuring high throughput and low latency. 

Build and maintain APIs and services for model management, feature stores, and inference pipelines. 

Implement monitoring, logging, and observability tools to ensure performance, availability, and reliability of AI services. 

 

Model Serving & MLOps Enablement 

 

Operationalize ML model serving at scale using frameworks such as TensorFlow Serving, TorchServe, KServe, Seldon Core, or custom inference services. 

Create reusable MLOps components for data preprocessing, training orchestration, model validation, and deployment. 

Develop automation to reduce ML model deployment time, enforce versioning, and enable rollback/upgrade capabilities. 

Work closely with data scientists to translate research workflows into production-grade, scalable services. 

 

Collaboration & Best Practices 

 

Partner with AI researchers, data engineers, and platform engineers to deliver integrated solutions. 

Champion engineering excellence by promoting design documentation, code reviews, CI/CD best practices, and testing automation. 

Contribute to a culture of shared ownership, transparency, and internal open-source development. 

Act as a technical advisor and subject matter expert for MLOps tooling and infrastructure. 

 

  

BASIC QUALIFICATIONS  

 

7+ years of hands-on software engineering experience with strong Python skills in production environments. 

Proven experience designing and operating Docker-based containerized solutions and deploying them on Kubernetes. 

Deep understanding of CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins, Argo Workflows, etc.) for AI/ML workflows. 

Strong knowledge of Infrastructure-as-Code tools (Terraform, Pulumi, AWS CDK). 

Solid understanding of distributed systems, scaling patterns, and production system reliability. 

Bachelor’s or Master’s degree in Computer Science, Engineering, or related field. 

 

PREFERRED QUALIFICATIONS 

 

Expertise in AWS cloud services (EC2, S3, Lambda, EKS, SageMaker, API Gateway, CloudFormation, IAM, etc.). 

Experience building and deploying microservices and REST/gRPC APIs for AI model delivery. 

Familiarity with MLflow, Kubeflow, or other MLOps orchestration platforms is a strong plus.

Proficiency with model serving frameworks (TensorFlow Serving, TorchServe, KServe, Seldon Core, BentoML, etc.). 

 

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Equal Opportunity & Employment

We believe that diverse and inclusive teams are essential to a company's success. As an employer, Pfizer is committed to valuing diversity and inclusion in all its forms. This diversity is also reflected in the patients and communities we serve. Together, let's continue to build a culture that encourages, supports, and empowers our employees.

Disability & Inclusion

Our mission is to unleash the potential of our people, and we are proud to be an inclusive employer for people with disabilities, ensuring equal employment opportunity for all candidates. We encourage you to bring your best self, knowing that we will make all reasonable adjustments to support your application and your future career. Your experience with Pfizer starts here!

Information & Business Tech
