Sr. SDM, AI Inference Technology, Neuron SDK
Amazon.com
AWS Utility Computing (UC) provides product innovations, from foundational services such as Amazon Elastic Compute Cloud (EC2) to new products and features that continue to set AWS's services apart in the industry.
Come develop inference acceleration for AWS Neuron, the complete software stack for Trainium, Amazon's custom cloud-scale machine learning accelerators that power the latest AI models.
As the Sr. SDM for the Inference Technology Team, you will lead a strong team of managers and engineers building fundamental inference technology building blocks and libraries that enable AI developers to optimize models for inference on Trainium and Inferentia devices. You will be responsible for the full development life cycle of inference libraries and features, including reliability and scalability. You will develop the Neuronx_Distributed Inference Libraries and contribute to other popular open source inference libraries, enabling customers to optimize LLMs, multimodal, and generative models.
The ideal candidate will have an established background in delivering AI feature support under demanding, fast-changing priorities or in delivering high-performance models using distributed inference libraries, along with the strong technical ability to understand and manage a vertically integrated system stack consisting of hardware, frameworks, and workflows.
A day in the life
You will work with executive leadership and other senior managers and technical leaders to define product direction and deliver it to customers. We build massive-scale distributed training and inference solutions, developing the full stack of software, servers, and chips together with teams across the Annapurna organization to run the largest machine learning workloads.
About the team
About AWS
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.
Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.