Join the LLM Suite team at the Chief Analytics Office (CAO), the driving force behind firm-wide generative AI innovation at our company. As part of our mission, we develop, deploy, and scale LLM Suite—our proprietary platform for secure, enterprise-grade generative AI—empowering teams across the organization to unlock new value from data.
As a Summer Associate on the LLM Suite team in the Chief Analytics Office, you will advance generative AI by working hands-on with state-of-the-art Large Language Models (LLMs) and Multimodal LLMs, driving the evolution of our LLM Suite platform. You will build scalable AI solutions by designing and optimizing reusable APIs and microservices that power LLM Suite, enabling seamless integration and rapid development of AI-driven products. You will collaborate across disciplines by partnering with Cloud Engineering to deliver robust, production-ready generative AI solutions. You will promote responsible AI by helping shape governance, controls, and ethical standards for generative AI and data use across the firm.
Job Responsibilities
Leverage vast data assets and cutting-edge generative AI models within LLM Suite to deliver impactful analytics and agentic workflows.
Bridge scientific research and software engineering, applying expertise in both to advance LLM Suite’s capabilities.
Lead the design and delivery of scalable, production architectures for generative AI applications.
Empower teams to build their own ML products using LLM Suite, fostering a culture of innovation and collaboration.
Required qualifications, capabilities, and skills
Currently enrolled in a PhD program in a quantitative discipline (e.g., Computer Science, Computer Engineering).
Deep understanding of statistics, optimization, and machine learning theory, with emphasis on NLP, RL, and/or Computer Vision algorithms.
Foundational knowledge of cloud engineering and distributed systems.
Excellent communication skills to convey complex technical concepts and build trust with stakeholders.
Preferred qualifications, capabilities, and skills
Hands-on expertise in parameter-efficient fine-tuning, model quantization, and quantization-aware training for LLMs.
Familiarity with advanced prompting strategies (Chain-of-Thought, Tree-of-Thoughts, Graph-of-Thoughts) for generative AI.
Demonstrated experience with agentic workflows and orchestration patterns for automating and coordinating complex AI tasks.
Experience building batch and streaming microservices, exposed via gRPC and/or GraphQL, for AI/ML platforms.
Published research in natural language processing or deep learning at major conferences or journals.