11-12 September 2024 | Jaarbeurs Utrecht | Data Expo


LLM Engineering: Building Production-Ready LLM-enabled Systems

Thursday 12:30 - 13:00
Lecture hall 3
Petra Heck

Senior Researcher AI Engineering


With the advent of foundation models and generative AI, especially the recent explosion in Large Language Models (LLMs), we see our students and the companies around us build a whole new type of AI-enabled system: LLM-based systems, or LLM systems for short. The best-known example of an LLM system is the chatbot ChatGPT. Inspired by ChatGPT and its possibilities, many developers want to build their own chatbots, trained on their own set of documents, e.g. as an intelligent search engine. For that specific text generation task they have to:

1) select the most appropriate LLM, sometimes fine-tune it,
2) engineer the document retrieval step (Retrieval Augmented Generation, RAG),
3) engineer the prompt,
4) engineer a user interface that hides the complexity of prompts and answers from end users.

Prompt engineering in particular is a new activity introduced by LLM systems.
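The retrieval step (2) above can be sketched as a minimal pipeline: score the document collection against the user's question and fold the best matches into the prompt. This is an illustrative sketch, not the speakers' implementation: word-overlap cosine similarity stands in for a real embedding model, and the final LLM call is omitted.

```python
import math
import re
from collections import Counter

def _tokens(text: str) -> Counter:
    """Lowercased word counts; a toy stand-in for vector embeddings."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(query: str, doc: str) -> float:
    """Cosine similarity over word counts."""
    q, d = _tokens(query), _tokens(doc)
    dot = sum(q[w] * d[w] for w in q.keys() & d.keys())
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (the 'R' in RAG)."""
    return sorted(docs, key=lambda d: similarity(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Assemble the grounded prompt that would be sent to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The expo takes place at Jaarbeurs Utrecht.",
    "Prompt engineering adapts an LLM to a specific task.",
    "RAG augments a prompt with retrieved documents.",
]
print(build_prompt("How does RAG augment a prompt?", docs, k=1))
```

A production system would swap `_tokens`/`similarity` for an embedding model and a vector index, but the shape of the pipeline — retrieve, then prompt — stays the same.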

Prompt engineering is intrinsically hard: the possibilities are endless, prompts are hard to test or compare, results may vary across LLMs or model versions, prompts are difficult to debug, and you need domain expertise (and language skills!) to engineer prompts that fit the task at hand. Yet for LLMs, prompt engineering is the main way to adapt the models to specific tasks. So for LLM systems we must conclude that they are data + model + prompt + code. Note also that with LLM systems the model is usually provided by an external party and is thus hard or impossible for the developer to control, other than through prompts. That external party may, however, update its LLM frequently, which can necessitate an update of the LLM system as well. In this talk, we analyze the quality characteristics of LLM systems and discuss the challenges of engineering them, illustrated by real examples. We also present the solutions we have found until now to address those quality characteristics and challenges.
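One way to cope with prompts being hard to test and with external model updates is to treat prompts as versioned data: pin a (task, version) pair per release and regression-test structural invariants of the rendered prompt without calling any LLM. The registry, names, and template text below are illustrative assumptions, not the speakers' solution.

```python
# Versioned prompt registry: templates are data, keyed by (task, version).
PROMPTS: dict[tuple[str, str], str] = {
    ("summarize", "v1"): "Summarize: {text}",
    ("summarize", "v2"): (
        "You are a concise assistant.\n"
        "Summarize the text below in at most {max_words} words.\n"
        "Text: {text}"
    ),
}

def render(task: str, version: str, **params: object) -> str:
    """Render a pinned prompt template; raises KeyError for unknown versions."""
    return PROMPTS[(task, version)].format(**params)

# Cheap regression checks, runnable on every commit without an LLM call:
prompt = render("summarize", "v2", max_words=30,
                text="LLM systems are data + model + prompt + code.")
assert "at most 30 words" in prompt   # instruction survived templating
assert "{" not in prompt              # no unfilled placeholders
```

Pinning the version in code means that switching to a new prompt (or a new underlying model) is an explicit, reviewable change rather than silent drift.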

Leon Schrijvers

Lecturer-Researcher


