Are you passionate about driving responsible AI innovation in a dynamic, people-focused environment? Join us as VP to lead testing and monitoring of AI solutions that support HR and Employee Experience. You will have the opportunity to shape the future of AI governance, collaborate with cross-functional teams, and make a meaningful impact on how we support our employees. This is your chance to champion ethical AI practices and deliver enterprise-grade solutions.
As VP, AI Testing & Monitoring – HR & Employee Experience, you will be responsible for ensuring the accuracy, fairness, security, and governance of all AI and ML systems supporting HR and Employee Experience. You will partner with internal and external stakeholders to design, operate, and continuously improve a centralized testing framework, delivering trustworthy and explainable AI solutions. You will also work closely with engineering teams to develop and maintain the data pipelines and infrastructure needed for robust testing and monitoring.
Job responsibilities:
- Establish and lead AI Testing & Monitoring for HR and Employee Experience, setting its mission, strategy, and success metrics.
- Serve as the accountable owner for model quality assurance and ethical compliance across all HR and Employee Experience AI use cases.
- Align testing standards with firmwide AI risk management and governance frameworks.
- Design and operationalize a dual-lane testing model for generative and statistical/ML AI solutions.
- Implement continuous monitoring pipelines and dashboards to detect model drift, data quality issues, and policy violations (see the drift-check sketch after this list).
- Define and maintain standard metrics, SLAs, and certification thresholds for production readiness and operational health.
- Ensure vendor alignment with security, privacy, and regulatory standards, and drive innovation through automated testing, synthetic data, and bias mitigation techniques.
- Partner with HR and Employee Experience product teams to embed testing checkpoints throughout the AI development lifecycle.
- Collaborate with Compliance, Legal, Model Risk, and Operational Risk teams to ensure traceability, auditability, and regulatory adherence.
- Work with engineering teams to develop and maintain data pipelines and infrastructure supporting the testing and monitoring framework.
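To give a flavor of the kind of check such a monitoring pipeline might run, the sketch below computes a Population Stability Index (PSI) between a certification-time score distribution and a production window. The synthetic data, alert thresholds, and bin count are illustrative assumptions, not a mandated standard.

```python
# Minimal drift-check sketch: compares a production score distribution
# against the reference distribution captured at model certification time.
import numpy as np

def population_stability_index(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference window and a current window of model scores."""
    # Bin edges are taken from the reference distribution's quantiles.
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    # Assign each value to a bin; out-of-range values fall into the edge bins.
    ref_bins = np.clip(np.digitize(reference, edges[1:-1]), 0, bins - 1)
    cur_bins = np.clip(np.digitize(current, edges[1:-1]), 0, bins - 1)
    ref_frac = np.bincount(ref_bins, minlength=bins) / len(reference)
    cur_frac = np.bincount(cur_bins, minlength=bins) / len(current)
    # Small floor avoids division by zero and log of zero for empty bins.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline = rng.normal(0.0, 1.0, 10_000)    # scores at certification time
    production = rng.normal(0.3, 1.1, 10_000)  # scores observed in production
    psi = population_stability_index(baseline, production)
    # Common rule of thumb: <0.1 stable, 0.1-0.25 moderate, >0.25 significant drift.
    status = "ALERT" if psi > 0.25 else "WATCH" if psi > 0.1 else "OK"
    print(f"PSI={psi:.3f} -> {status}")
```

In practice a check like this would run on a schedule per model and feed the monitoring dashboards and SLAs described above.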
Required qualifications, capabilities, and skills:
- Minimum 10 years of experience in AI/ML governance or data science, with at least 5 years in a regulated enterprise environment.
- Proven experience building or managing model testing or validation functions, ideally within financial services, technology, or consulting.
- Deep understanding of generative AI and large language model evaluation techniques, including prompt variance testing, bias audits, hallucination metrics, and guardrail evaluation (see the consistency-check sketch after this list).
- Strong grounding in statistical and predictive model validation, including drift analytics, bias detection, and performance monitoring.
- Experience collaborating with engineering teams on data pipeline and infrastructure development for AI/ML testing and monitoring.
- Strong individual contributor (IC) experience and deep technical knowledge, with the ability to architect and build solutions hands-on, is preferred.
- Exceptional cross-functional influence and vendor management skills.
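As a concrete illustration of one such evaluation technique, the sketch below shows a minimal prompt-variance (consistency) check. The `generate` callable, the paraphrase set, the Jaccard-overlap metric, and the 0.6 threshold are all simplifying assumptions standing in for a real model endpoint and a proper semantic-similarity measure.

```python
# Illustrative prompt-variance check: semantically equivalent prompts
# should produce broadly consistent answers from a generative model.
from typing import Callable, List

def jaccard_similarity(a: str, b: str) -> float:
    """Token-set overlap between two responses (0 = disjoint, 1 = identical sets)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def prompt_variance_check(generate: Callable[[str], str],
                          paraphrases: List[str],
                          min_similarity: float = 0.6) -> bool:
    """Return True if all paraphrases yield answers similar to the first one."""
    answers = [generate(p) for p in paraphrases]
    baseline = answers[0]
    return all(jaccard_similarity(baseline, other) >= min_similarity
               for other in answers[1:])

if __name__ == "__main__":
    # Trivial stand-in model so the sketch runs end to end.
    def fake_model(prompt: str) -> str:
        return "Employees accrue 20 days of paid leave per year."

    variants = [
        "How many paid leave days do employees get each year?",
        "What is the annual paid leave entitlement for staff?",
        "How much paid vacation does an employee receive per year?",
    ]
    print("consistent" if prompt_variance_check(fake_model, variants) else "inconsistent")
```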
Preferred qualifications, capabilities, and skills:
- Financial services background.
- PhD in one of the above disciplines.