From "Simple" Fine-Tuning to Your Own Mixture of Expert Models Using Open-Source Models


Nowadays, training a large language model (LLM) from scratch is a huge effort, even for very large companies. Starting from pre-trained models to create your own custom models is no longer just an option for resource-constrained organizations; it has become a necessary starting point for many.

In this context, various techniques and strategies can help to maximize the potential of pre-trained models:

  • LoRA: A low-rank adaptation technique that fine-tunes a model efficiently by training only a small subset of added parameters while the base weights stay frozen (first sketch below).
  • Quantization and QLoRA: Methods to reduce the computational complexity and memory footprint of models without significantly compromising their performance, enabling more efficient deployment and fine-tuning (also covered in the first sketch below).
  • Managing Multiple LoRA Adapters: Attaching several LoRA adapters to one base model to equip it with multiple skills, allowing for a flexible and modular approach to model capabilities (second sketch below).
  • Fine Embeddings Management to Improve RAG (Retrieval-Augmented Generation): Better handling of embeddings can significantly improve the performance of RAG systems, which combine the strengths of information retrieval and generative models (third sketch below).
  • Mixing Models: Creating Your MoE (Mixture of Experts) Model: This advanced technique combines several fine-tuned models into a Mixture of Experts, leveraging the strengths of each individual model to enhance overall performance (fourth sketch below).
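
As a rough illustration of the first two points, here is a minimal sketch of a QLoRA-style setup, assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the model id and hyperparameters are examples, not values from the talk.

```python
# Minimal sketch: QLoRA-style fine-tuning setup (4-bit base model + LoRA adapter).
# Assumes the Hugging Face transformers, peft and bitsandbytes libraries;
# the model id and hyperparameters below are illustrative.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mistral-7B-v0.1"  # example open-source base model

# Quantization: load the frozen base weights in 4-bit NF4 to cut memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# LoRA: train only small low-rank matrices injected into attention projections.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model
# ... from here, train with your usual Trainer / SFT loop on task-specific data.
```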
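
For multiple adapters, a minimal sketch assuming the peft library and two previously trained adapters saved under hypothetical local paths:

```python
# Minimal sketch: equipping one base model with multiple LoRA "skills".
# Assumes adapters already fine-tuned and saved under the hypothetical
# paths "adapters/summarize" and "adapters/translate".
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", device_map="auto"
)

# Attach a first adapter, then load a second one alongside it.
model = PeftModel.from_pretrained(base, "adapters/summarize", adapter_name="summarize")
model.load_adapter("adapters/translate", adapter_name="translate")

# Switch skills per request by activating the relevant adapter,
# without reloading the base model.
model.set_adapter("summarize")   # route summarization prompts here
# ... generate ...
model.set_adapter("translate")   # then flip to translation
```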
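
For the embeddings side of RAG, a minimal sketch assuming the sentence-transformers library; the corpus, query, and model name are illustrative:

```python
# Minimal sketch: embedding documents and queries for the retrieval step of RAG.
# Assumes the sentence-transformers library; corpus and model name are examples.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

corpus = [
    "LoRA trains small low-rank matrices on top of a frozen base model.",
    "QLoRA combines 4-bit quantization of the base model with LoRA adapters.",
    "A Mixture of Experts routes each input to a subset of expert networks.",
]
# Normalized embeddings so that a dot product equals cosine similarity.
doc_vecs = embedder.encode(corpus, normalize_embeddings=True)

query = "How does QLoRA reduce memory usage?"
query_vec = embedder.encode([query], normalize_embeddings=True)[0]

# Retrieve the best-matching chunk and pass it to the generator as context.
scores = doc_vecs @ query_vec
best = corpus[int(np.argmax(scores))]
print(best)
```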
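
Finally, for intuition on what a Mixture of Experts layer does, a from-scratch PyTorch sketch of top-k routing; this is a simplified illustration, not the merging procedure presented in the talk.

```python
# Minimal sketch: the core routing idea behind a Mixture of Experts layer.
# Written from scratch in PyTorch for illustration; shapes, expert count,
# and top_k are arbitrary, and real MoE systems add load balancing.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMoE(nn.Module):
    def __init__(self, hidden: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, 4 * hidden), nn.GELU(), nn.Linear(4 * hidden, hidden))
            for _ in range(num_experts)
        ])
        self.gate = nn.Linear(hidden, num_experts)  # router: scores each expert per token
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, hidden); pick the top_k experts per token and mix their outputs.
        weights = F.softmax(self.gate(x), dim=-1)          # (b, s, num_experts)
        top_w, top_idx = weights.topk(self.top_k, dim=-1)  # (b, s, top_k)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[..., slot] == e             # tokens routed to expert e
                if mask.any():
                    out[mask] += top_w[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

moe = SimpleMoE(hidden=64)
print(moe(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```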

These strategies provide a robust toolkit for those who plan to adapt and enhance pre-trained models to meet specific needs, even without deep expertise in machine learning. By understanding and applying these techniques, organizations can harness the power of modern AI with greater efficiency and effectiveness while cutting costs.
 


Speaker

Sebastiano Galazzo

CTO @Synapsia AI, Winner of Three AI Awards, 25 Years Working in AI and ML

Winner of three AI awards, I’ve been working in AI and machine learning for 25 years, designing and developing AI and computer graphics algorithms.

I’m very passionate about AI, focusing on audio, image, and natural language processing, as well as predictive analysis.
I have received several national and international awards recognizing my work and contributions in these areas.

As a Microsoft MVP in the Artificial Intelligence category, I have the pleasure of being a guest speaker at national and international events.


Date

Thursday, Sep 26 / 3:40 PM CEST (50 minutes)

Location

Ballroom A
