Manipulating The Machine: Prompt Injections And Counter Measures

code red code red

In the rapidly evolving landscape of Large Language Models (LLMs), GPTs, and the omnipresence of OpenAI, prompt injections have become a subtle but significant vector for causing harm to AI-based tools. With seemingly simple yet creative techniques, it is often possible to extract information that was meant to remain hidden. We will explore various methods to perform prompt injection to extract system prompts and documents used by GPTs and other LLM-based tools. Along the way, we will discuss ways to integrate countermeasures into these tools to defend against attempts to steal information. Learn about prompt engineering, prompt injection, protective measures, and why, after all, we should not worry too much.


Speaker

Georg Dresler

Developer, Architect & Kotlin Enthusiast @RaySono With Over 10 Years in Mobile App Development

Georg studied computer science with a focus on web and network technologies but decided to become an app developer when the first iPhone was released. He spends most of his professional time architecting and developing apps using Kotlin Multiplatform, Flutter, native technologies and more recently also Python and LLMs.

With more than ten years of experience, he has a strong focus on application architecture, data modeling, testing and code quality. He gave talks at WeAreDevelopers, meetups in Munich and has taught a course on mobile app development at Hochschule Furtwangen.

Read more
Find Georg Dresler at:

Date

Friday Sep 27 / 03:40PM CEST ( 50 minutes )

Location

Ballroom C

Topics

LLM Chat GPT Prompt Injection Prompt Protection Prompt Engineering

Slides

Slides are not available

Share