
This document certifies that
Ilian Georgiev
has successfully achieved the
Uplimit Finetuning Open Source LLMs Certification
Recipients of Uplimit’s Finetuning Open Source LLMs certificate have gained foundational skills in fine-tuning large language models (LLMs) for custom and domain-specific applications. The course, designed for data scientists, machine learning engineers, and AI researchers, covered advanced techniques such as Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Odds Ratio Preference Optimization (ORPO), along with data-centric evaluation, benchmarking, and model quantization for putting models into production.
Certification criteria:
Actively participated in Uplimit’s dynamic course on Finetuning Open Source LLMs: attended lectures, discussion sessions, and more.
Built a customized, fine-tuned large language model (LLM) optimized for specific applications through evaluation, fine-tuning, and deployment techniques, ready for production use.
November 4, 2024
Instructor: Ben Burtenshaw
Learners gained practical skills in selecting evaluation metrics, preparing high-quality datasets, and deploying models efficiently, empowering them to adapt LLMs to custom, domain-specific needs.

Finetuning Open Source LLMs
Certification
Ilian Georgiev
Congratulations on receiving your Finetuning Open Source LLMs Certificate from Uplimit! This course was designed to be challenging, and you stepped up in all the right ways.
Skills / Knowledge
- Fine-tuning
Issued on
November 4, 2024
Expires on
Does not expire