SLM Playbook - A beginner's guide to Small Language Models
Wednesday 10:15 - 10:45
Lecture Hall 3
Roberto Flores
Global AI & Data Engineering Lead
While Large Language Models (LLMs) capture headlines, their significant computational and financial costs can be prohibitive. But what if you could achieve potentially 80% of the necessary capability for specific tasks at only 20% of the resource cost? Small Language Models (SLMs) – typically under 13 billion parameters – present a powerful, efficient, and often open-source alternative. This session provides a practical playbook for data professionals, developers, and architects seeking to understand and leverage SLMs effectively. We'll explore the SLM landscape, differentiate them from their larger counterparts, and detail the advantages of task-specific models, including the flexibility and accessibility offered by open-source options. Crucially, we will examine diverse implementation strategies: running SLMs locally (e.g., with Ollama) for maximum privacy and control, utilizing cloud-based Inference APIs for ease of integration, and leveraging managed cloud platforms (like Azure AI Studio, Vertex AI, SageMaker) for scalable enterprise deployment. Attendees will gain foundational knowledge on selecting appropriate models based on task requirements, understanding open-source licenses, and navigating ethical considerations; cover essential concepts like quantization for optimization; and see practical use cases where SLMs excel – empowering them to deploy efficient, cost-effective, and powerful AI solutions.
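To give a flavour of the local-deployment route mentioned above, here is a minimal Python sketch that queries a locally running Ollama server over its default HTTP API. It assumes Ollama is installed and a small model has already been pulled; the model name ("phi3") and the prompt are illustrative choices, not part of the session material.

```python
# Minimal sketch: prompting a locally running SLM through Ollama's HTTP API.
# Assumes Ollama is running locally and `ollama pull phi3` has been executed;
# the model name and prompt below are illustrative assumptions.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def ask_slm(prompt: str, model: str = "phi3") -> str:
    """Send a single prompt to a local small language model and return the generated text."""
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},  # stream=False returns one JSON object
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]


if __name__ == "__main__":
    print(ask_slm("Summarise the benefits of small language models in two sentences."))
```

Because everything runs on the local machine, no prompt data leaves your environment, which is the privacy and control advantage the session highlights for local SLM deployment.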
Interested in this talk?
Register for Data Expo now, free of charge, and experience two days full of inspiration, practical insights, and innovative data applications. Discover what data can do for your organisation!