B.E., B.Tech., M.S., or Ph.D. in Engineering, Computer Science, Operations Research, Statistics, Applied Mathematics, or a related quantitative field.
2+ years of hands-on experience designing, developing, and deploying generative AI models, with deep expertise in Large Language Models (LLMs), Transformers, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and diffusion models. Experience with prompt engineering, fine-tuning, and Retrieval Augmented Generation (RAG) architectures is highly desirable.
2+ years of experience in Python with strong proficiency in generative AI frameworks and libraries such as TensorFlow, PyTorch, Hugging Face Transformers, LangChain, and LlamaIndex.
Extensive experience with Google Cloud Platform (GCP) services, particularly Vertex AI (including its Generative AI capabilities, Model Garden, and Workbench), BigQuery, and MLOps tools for large-scale model deployment and monitoring. Familiarity with distributed computing frameworks (e.g., Spark, Ray) for large model training is a plus.
Excellent problem-solving, communication, and data presentation skills, with a strong understanding of ethical AI principles and responsible AI development.
Design, develop, and deploy cutting-edge generative AI models, including Large Language Models (LLMs), Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and diffusion models, to solve complex engineering and business challenges.
Apply advanced techniques such as prompt engineering, fine-tuning, and Retrieval Augmented Generation (RAG) to customize and optimize generative models for specific Ford use cases, improving their performance and relevance.
Train, fine-tune, validate, and rigorously monitor the performance, safety, and ethical implications of generative AI models throughout their lifecycle.
Explore and apply generative AI for tasks such as synthetic data generation, advanced content creation, intelligent automation of processes, and enhancing human-computer interaction across various domains like connected vehicles, product development, and customer service.
Establish scalable, efficient, and automated MLOps (Machine Learning Operations) pipelines tailored to the lifecycle of generative AI models, from experimentation and rapid prototyping through production deployment and continuous improvement.
Analyze and extract relevant information from large volumes of historical business data, both structured and unstructured, particularly data related to quality, product development, and connected vehicles, to inform generative model development.
Package and present complex findings and model capabilities clearly and concisely to diverse cross-functional teams and stakeholders.