AI-Enabled Delivery: Leveraging ChOP & LLMs in Delivering More Effective Learning Experiences at QCon


AI-Enabled Delivery: Leveraging ChOP & LLMs in Delivering More Effective Learning Experiences at QCon dives into our journey and the lessons learned from using chat-oriented programming (ChOP), Retrieval-Augmented Generation (RAG), and prompt engineering to build an AI-driven certification program for the QCon software conference.

We wanted to create a certification program for conference attendees, called the InfoQ Certified Architect in Emerging Technologies (ICAET), centered on deeply understanding not just the content of QCon but also capturing the experience of attending the event. To build this certification, we combined the intuition of a 16-time chair of the event, Wesley Reisz, with an authoritative knowledge base drawn from the conference using RAG.

This is the story of how we combined expertise found in human intuition with the power of an LLM to create experiences that far exceed what we could have done before. Some of the things we will discuss in the talk include:

  • Insights from using chat-oriented programming to build a video pipeline that extracts text from videos, breaks the text into chunks, generates embeddings, and loads them into a vector database for an LLM.
  • ChOP, Vibe Coding, Agentic Engineering… Roughly 80% of a project's cost lies in maintaining the software we produce. So while AI tools let us move faster and get to market sooner, what is the cost to the bigger picture? Agentic engineering leverages ChOP but keeps the developer (and their engineering expertise) in the driver's seat. We'll talk about the differences and show actual working examples.
  • Patterns we employed to build the training prompts, evaluate the solution's capabilities and limitations, and extract actionable insights, such as offloading statistics and counts to traditional data stores to compensate for an LLM's weakness at counting.
  • Recommendations and pragmatic advice derived from using the intuition of an expert practitioner versus the LLM.
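The video-pipeline bullet above can be sketched end to end. This is a minimal, illustrative sketch in plain Python, assuming transcripts have already been extracted from the videos; the word-window chunker, hash-based toy embedding, and in-memory store are hypothetical stand-ins for a real embedding model and vector database.

```python
import hashlib
import math

def chunk_text(text, chunk_size=200, overlap=50):
    """Split transcript text into overlapping word-window chunks."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(words) - overlap, 1), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
    return chunks

def embed(text, dims=64):
    """Toy stand-in for an embedding model: hash words into a unit vector."""
    vec = [0.0] * dims
    for word in text.lower().split():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % dims
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class VectorStore:
    """In-memory stand-in for a vector database."""
    def __init__(self):
        self.rows = []  # list of (chunk, vector) pairs

    def add(self, chunk):
        self.rows.append((chunk, embed(chunk)))

    def search(self, query, k=3):
        """Return the k chunks most similar to the query (cosine on unit vectors)."""
        q = embed(query)
        scored = [(sum(a * b for a, b in zip(q, v)), c) for c, v in self.rows]
        scored.sort(reverse=True)
        return [c for _, c in scored[:k]]
```

In a production pipeline, the transcript would come from a speech-to-text step, the embedding from a real model, and the store would be a managed vector database; the retrieved chunks are then prepended to the LLM prompt, which is the retrieval half of RAG.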
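The pattern of delegating statistics and counts to a traditional data store, rather than asking the LLM to count, might look like the following sketch. SQLite stands in for the real store, and the table, column, and function names are hypothetical.

```python
import sqlite3

# Hypothetical schema: one row of metadata per conference talk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE talks (id INTEGER PRIMARY KEY, track TEXT, speaker TEXT)")
conn.executemany(
    "INSERT INTO talks (track, speaker) VALUES (?, ?)",
    [("AI/ML", "Alice"), ("AI/ML", "Bob"), ("Platform", "Carol")],
)

def count_talks(track):
    """Answer 'how many talks...' questions with an exact SQL count."""
    row = conn.execute(
        "SELECT COUNT(*) FROM talks WHERE track = ?", (track,)
    ).fetchone()
    return row[0]

def build_prompt(question, track):
    """Inject the exact count into the LLM prompt as grounded context."""
    return (
        f"Context: there are exactly {count_talks(track)} talks on the {track} track.\n"
        f"Question: {question}"
    )
```

The point of the pattern is that the number in the prompt is computed, not generated, so the LLM only has to restate a fact it was handed instead of tallying items across retrieved chunks.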

This talk is given by Wes Reisz, the 16-time chair of past QCons, who works at Equal Experts building solutions that reduce complexity in software. One key approach to reducing that complexity is AI-Enabled Delivery.


Speaker

Wes Reisz

Technical Principal @EqualExperts | ex-Thoughtworker & ex-VMware | 16-Time QCon Chair | Creator/Co-host of The InfoQ Podcast

With over 20 years of experience in software engineering, Wesley Reisz has chaired more than 16 QCon software conferences across San Francisco, London, and New York, founded the highly respected InfoQ Podcast, and spent over a decade teaching 400-level software architecture and programming courses at the University of Louisville. These experiences have given him deep expertise in software architecture, cloud-native engineering, and platform thinking, alongside a broad knowledge of various software domains.

Wes is a Technical Principal Consultant at Equal Experts, specializing in reducing complexity in software using application modernization, platform engineering, and AI-Enabled Delivery. He embodies the concept of a T-shaped engineer—blending broad expertise across software domains with deep technical knowledge of the cloud-native ecosystem—and strongly believes in the transformative power of speaking, teaching, and continuous learning.

Before joining Equal Experts, Wes held technical leadership roles at:

  • Thoughtworks, where he focused on cloud and modernization
  • VMware, as a Tanzu Platform Architect specializing in Spring, Kubernetes, and developer paths to production
  • An edge-computing startup, where he served as VP of Technology, driving innovation at the edge
