Noida, Uttar Pradesh
DevOps Engineer

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

 

Primary Responsibilities:

- Design and implement infrastructure as code using tools like Terraform
- Automate repetitive tasks to improve efficiency using Python and other scripting tools
- Build, manage, and maintain CI/CD pipelines for seamless application deployments; integrate with tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps
- Leverage AI and AIOps platforms to optimize operations, predict incidents, and reduce manual overhead
- Implement Agentic DevOps practices to enhance automation and decision-making
- Monitor system performance and availability using monitoring tools like Prometheus, Grafana, or Datadog; develop strategies for improving system reliability and uptime; respond to incidents, troubleshoot issues, and conduct root cause analysis
- Deploy, manage, and scale containerized applications using Docker and Kubernetes; implement Kubernetes cluster configurations, including monitoring, scaling, and upgrades
- Use tools like Ansible or Chef for configuration management and provisioning
- Work with tools like Consul for service discovery and dynamic configuration; manage network configurations, DNS, and load balancers
- Use tools like Packer to automate the creation of machine images; ensure standardized environments across development, testing, and production
- Manage and deploy cloud infrastructure on AWS, Azure, or Google Cloud; optimize cloud resources to ensure cost-efficiency and scalability
- Collaborate with development and QA teams to ensure smooth software delivery
- Provide mentorship to junior team members on best practices and technologies
- Lead system implementation projects, ensuring that new systems are integrated seamlessly into existing infrastructure
- Consult with clients and stakeholders to understand their needs and provide expert advice on system integration and implementation strategies
- Develop and execute integration plans, ensuring that all components work together effectively
- Provide training and support to clients and internal teams on new systems and technologies
- Stay updated with the latest industry trends and technologies to provide cutting-edge solutions
- Participate in rotational shifts to provide 24/7 support for critical systems and infrastructure, promptly addressing any issues that arise
- Adhere to company policies and demonstrate flexibility in adapting to evolving business needs
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:

- Bachelor's degree in Computer Science, Engineering, or a related field
- Certifications in Kubernetes, Terraform, or any public cloud platform like AWS, Azure, or GCP
- 5+ years of experience in a DevOps role or similar
- Experience with distributed systems and microservices architecture
- Programming Languages: Proficiency in Python and experience with other scripting languages (e.g., Bash)
- Cloud Platforms: Familiarity with AWS, Azure, or GCP services; proven experience implementing public cloud services using Terraform within Terraform Enterprise or HCP Terraform
- Familiarity with AI/ML concepts relevant to DevOps (predictive analytics, anomaly detection)
- Knowledge of Agentic AI frameworks for automation workflows
- Infrastructure as Code: Proficiency in tools like Terraform and Ansible; proven experience in authoring Terraform and shared Terraform modules
- DevOps Tools: Solid experience with tools like Terraform, Kubernetes, Docker, Packer, and Consul
- CI/CD Pipelines: Hands-on experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.)
- Monitoring & Logging: Knowledge of monitoring tools (e.g., Prometheus, Grafana, Datadog) and logging tools (e.g., ELK Stack)
- Version Control: Experience with Git and branching strategies
- System Implementation & Integration: Proven experience in system implementation and integration projects
- Consulting Skills: Ability to consult with clients and stakeholders to understand their needs and provide expert advice

 

Soft Skills:

- Solid analytical and problem-solving skills
- Excellent communication and collaboration abilities
- Ability to work in an agile and fast-paced environment

 

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
