Datacraft State-of-the-Art Review

Practical insights for LLM fine-tuning and evaluation


Synopsis

While vendors all but promise to sell AGI-as-a-service through an API to enterprise clients, the reality of applying a proprietary LLM-as-a-service to a specific use case is often different. Irrelevant generations, out-of-context answers, misunderstood user queries and, more broadly, a lack of subject matter expertise start to erode users' and shareholders' confidence in the transformative power of deploying LLM-enhanced business workflows across your organisation. You're not alone in this journey.

In this talk, we’ll explore the landscape of fine-tuning solutions for open-source LLMs, weighing their pros and cons. We'll delve into the data required and how to design a robust evaluation framework to systematically assess your in-house model's performance.

We’ll take a deep dive into the subtle differences between parameter-efficient fine-tuning (PEFT) methods and reinforcement learning approaches, and what to keep in mind when deciding which one to use.
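For context, a PEFT method such as LoRA trains a small set of adapter weights on top of a frozen base model instead of updating every parameter. Below is a minimal sketch of what such a setup can look like, assuming the Hugging Face `transformers` and `peft` libraries; the model name and hyperparameters are illustrative choices, not recommendations from the talk.

```python
# Minimal LoRA-style PEFT sketch (illustrative model and hyperparameters).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative base model; swap in the open-source LLM you actually use.
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling applied to the adapter update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)

# Wrap the frozen base model with trainable LoRA adapters.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```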

This talk is a synthesis of experience deploying LLM capabilities at various organisations, from startups to corporate environments, blending insights from research papers with pragmatic experience. We won’t go into the details of the mathematical operations under the hood of each fine-tuning approach; instead, our goal is to share the intuition behind those concepts, equipping you to design an effective roadmap for fine-tuning an LLM for your specific business use case.

Watch the replay

Full recording of the talk at DataCraft Paris, exploring LLM fine-tuning approaches and evaluation frameworks.

Audience

Machine learning engineers, data scientists, research engineers, applied AI scientists

Resources

Download Presentation Slides (PDF)

Link to the event

https://datacraft.paris/event/etat-de-lart-from-agi-promises-to-llm-realities-practical-insights-into-language-model-fine-tuning-and-evaluation/

Cite this talk

Use this citation to reference this work in your research or documentation.

Paupier, F. (2024, February 28). Practical insights for LLM fine-tuning and evaluation. https://fpaupier.fr/202402_LLM_finetuning