Taking LLMs out of the Black Box: A Practical Guide to Human-in-the-Loop Distillation


As the field of natural language processing advances and new ideas develop, we’re seeing more and more ways to use compute efficiently, producing AI systems that are cheaper to run and easier to control. Large Language Models (LLMs) have enormous potential, but also challenge existing workflows in industry that require modularity, transparency and data privacy. In this talk, I'll show some practical solutions for using the latest state-of-the-art models in real-world applications and distilling their knowledge into smaller and faster components that you can run and maintain in-house.
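To make the idea concrete, here is a minimal sketch of that development-time distillation loop, using spaCy for the small in-house model. The llm_label helper below is a hypothetical stand-in for whatever large generative model you would call once, at development time, to bootstrap labels:

    import random
    import spacy
    from spacy.training import Example

    def llm_label(text: str) -> str:
        # Hypothetical stand-in for a call to a large generative model.
        # In practice you'd query an LLM here, once, at development time.
        return "POSITIVE" if "love" in text.lower() else "NEGATIVE"

    texts = ["I love this product", "Terrible support experience"]
    labels = [llm_label(t) for t in texts]  # LLM used only here, not at runtime

    # Distill the labels into a small, fast spaCy text classifier
    nlp = spacy.blank("en")
    textcat = nlp.add_pipe("textcat")
    for label in ("POSITIVE", "NEGATIVE"):
        textcat.add_label(label)

    examples = [
        Example.from_dict(
            nlp.make_doc(text),
            {"cats": {"POSITIVE": float(lab == "POSITIVE"),
                      "NEGATIVE": float(lab == "NEGATIVE")}},
        )
        for text, lab in zip(texts, labels)
    ]
    nlp.initialize(lambda: examples)
    for _ in range(10):
        random.shuffle(examples)
        nlp.update(examples)

    # The distilled pipeline now runs and ships entirely in-house
    nlp.to_disk("./distilled_textcat")

In a real project the LLM-suggested labels would be reviewed and corrected by a human before training, which is where the human-in-the-loop part comes in.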

Interview:

What key takeaways can attendees expect from your InfoQ Dev Summit session?

  • Discover practical ways to use large generative models at development time to build modular and transparent systems you can control and run in-house.
  • Learn tricks for avoiding the "prototype plateau" and closing the gap between prototype and production when developing next-generation AI applications.
  • Understand the importance of refactoring both your code and your data, and how to involve human domain experts in the process (sketched below).
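As a rough illustration of that last point (a toy command-line loop only; an annotation tool like Prodigy provides a full UI for this), a domain expert can review and correct model-suggested labels before anything is trained on them:

    def review(suggestions):
        # suggestions: list of (text, model_suggested_label) pairs.
        # The expert presses Enter to accept, or types a corrected label.
        corrected = []
        for text, label in suggestions:
            answer = input(f"{text!r} -> {label}? [Enter=accept / type new label] ")
            corrected.append((text, answer.strip() or label))
        return corrected

    gold = review([("I love this product", "POSITIVE")])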

What's the focus of your work these days?

I'm always working on new tools that make it easier for developers to integrate workflows like the one I'm showing in my talk into their day-to-day work, and that address requirements in industry, like modularity, transparency, explainability and data privacy.

What technical aspects of your role are most important?

Our work focuses on making the latest AI and natural language processing technologies available to developers for use in real-world, industrial-strength applications. This is very challenging: our tools and libraries need to be fast and efficient, run on a variety of platforms and versions, including CPU and GPU, and implement a consistent and stable API with a good developer experience. They also need to be easy to understand and adopt, while remaining powerful and programmable enough to support fully custom use cases.

How does your InfoQ Dev Summit Munich session address current challenges or trends in the industry?

Large Language Models (LLMs) have a lot of potential to transform natural language understanding use cases in industry, but they also challenge best practices and important requirements, like modularity, transparency, explainability and data privacy. My session introduces an alternative approach and mindset for using large generative models and their capabilities, without relying on third-party APIs and black-box models at runtime, which is already showing promising results in real-world industry applications.
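For instance, assuming the distilled pipeline from the earlier sketch was saved to ./distilled_textcat, runtime inference needs no third-party API at all:

    import spacy

    # Load the small model trained at development time; no external calls.
    nlp = spacy.load("./distilled_textcat")
    doc = nlp("The new release fixed all my issues")
    print(doc.cats)  # e.g. {"POSITIVE": 0.93, "NEGATIVE": 0.07}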

How do you see the concepts discussed in your InfoQ Dev Summit Munich session shaping the future of the industry?

While we see a lot of focus on in-context learning and larger and larger models, it's important not to lose sight of other established techniques and new methods for using compute more efficiently to produce smaller models that are easier to control and cheaper to run. I believe approaches like human-in-the-loop distillation and task-specific language models will play an important role in applied NLP going forward.


Speaker

Ines Montani

Co-Founder & CEO @Explosion, Core Developer of spaCy and Prodigy, Python Software Foundation Fellow

Ines Montani is a developer specializing in tools for AI and NLP technology. She’s the co-founder and CEO of Explosion and a core developer of spaCy, a popular open-source library for Natural Language Processing in Python, and Prodigy, a modern annotation tool for creating training data for machine learning models.


Date

Thursday, Sep 26 / 10:20 AM CEST (50 minutes)

Location

Ballroom C

Topics

AI/ML, Natural Language Processing, Large Language Models
