Amsterdam, NL
Sr. Bus Dev AI Infrastructure - Europe North, WWSO EMEA AI Infrastructure Business Development and Solutions Architecture
Join AWS's newly formed AI Infrastructure team to lead the go-to-market strategy for one of the fastest-growing segments in cloud computing across the Nordics and Northern Europe.

You will work with the most innovative AI companies in the region—from foundation model builders to enterprises deploying state-of-the-art inference and training workloads—helping them run and train open-weight, fine-tuned, bespoke, and domain-specific large language models at their best on AWS.

AWS is establishing dedicated AI Infrastructure teams across EMEA to capture the rapidly expanding market for AI training, inference, and accelerated compute. As the Business Development Manager for AI Infrastructure in Europe North, you will own the regional GTM strategy, drive revenue growth, and build deep customer relationships with organizations pushing the boundaries of AI—model producers, domain-specific AI builders, and enterprises scaling AI-native applications.

This is a greenfield opportunity to shape a new function within a high-growth domain. You will be part of a small, focused EMEA-wide team of AI Infrastructure specialists, working at the intersection of GPU-accelerated computing, large-scale model training, inference optimization, and cloud infrastructure.


Key job responsibilities
- Own and execute the AI Infrastructure go-to-market strategy for Europe North, driving pipeline creation, deal progression, and revenue attainment across EC2 accelerated compute, Amazon EKS for AI workloads, and Amazon SageMaker HyperPod and Inference
- Identify, engage, and win high-value AI infrastructure customers—including model producers, AI-native startups, and enterprises building domain-specific AI solutions across industries such as financial services, healthcare, manufacturing, and telecommunications
- Develop and execute account-level strategies for priority customers running or evaluating large-scale AI training and inference workloads, positioning AWS as the platform of choice
- Execute strategic partner co-sell motions with key ecosystem players including NVIDIA, open-weight model providers (e.g., Meta, Mistral), and inference framework communities (vLLM, Ray, Anyscale)
- Collaborate with consulting and systems integration partners to enable AI infrastructure best practices and drive joint customer engagements
- Lead scaled GTM motions including developer community engagement, customer roundtables, workload assessments, and regional campaigns that position AWS AI infrastructure leadership
- Aggregate voice-of-customer feedback on capacity, performance, pricing, and feature requirements—working with AWS service teams to influence product roadmaps and regional infrastructure investments
- Contribute to EMEA-wide AI Infrastructure initiatives beyond your primary region, collaborating with peers across France, Germany, UKI, Israel, and MENAT on strategic programs and high-value opportunities


A day in the life
No two days are the same. You might start your morning reviewing GPU capacity requirements with a model producer scaling inference workloads across Europe, then join a technical deep-dive with your Specialist SA colleague to design an optimized deployment architecture on EC2 or EKS. After lunch, you're co-developing a joint GTM motion with partners like NVIDIA or Meta, before jumping into a pipeline review with your Area AI/ML sales leader. You'll collaborate daily with account teams, solution architects, partner managers, and AWS service teams — connecting the dots between customer needs, technical capabilities, and market opportunity to win AI infrastructure workloads across the Nordics, Baltics, and Benelux.

About the team
The AI Infrastructure team is a newly formed sub-domain under EMEA's AI/ML organization, created to bring singular focus to one of AWS's highest-growth market segments. The team consolidates expertise from across accelerated computing, containers, and machine learning infrastructure into a unified organization with clear ownership and accountability.

We serve customers across the full AI infrastructure stack—from GPU cluster provisioning and distributed training to inference optimization and model deployment at scale. Our customers include some of the most technically sophisticated organizations in the world, and our work directly impacts how AI is built and deployed across Europe.

You will join a team of passionate builders who combine deep technical curiosity with commercial acumen. We operate with a startup mentality inside one of the world's largest technology companies—moving fast, experimenting boldly, and holding ourselves to the highest standards.