SLM Playbook - A beginner's guide to Small Language Models
Wednesday 10:15 - 10:45
Lecture Hall 3 (Lezingenzaal 3)
Roberto Flores
Global AI & Data Engineering Lead
While Large Language Models (LLMs) capture headlines, their significant computational and financial costs can be prohibitive. But what if you could achieve roughly 80% of the capability needed for a specific task at only 20% of the resource cost? Small Language Models (SLMs) – typically under 13 billion parameters – present a powerful, efficient, and often open-source alternative. This session provides a practical playbook for data professionals, developers, and architects seeking to understand and leverage SLMs effectively. We'll explore the SLM landscape, differentiate SLMs from their larger counterparts, and detail the advantages of task-specific models, including the flexibility and accessibility offered by open-source options. Crucially, we will examine diverse implementation strategies: running SLMs locally (e.g., with Ollama) for maximum privacy and control, using cloud-based Inference APIs for ease of integration, and leveraging managed cloud platforms (such as Azure AI Studio, Vertex AI, and SageMaker) for scalable enterprise deployment. Attendees will learn how to select appropriate models based on task requirements, navigate open-source licenses and ethical considerations, and apply essential optimization concepts such as quantization, and will see practical use cases where SLMs excel – empowering them to deploy efficient, cost-effective, and powerful AI solutions.
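To make the "local-first" strategy mentioned above concrete, here is a minimal sketch (not taken from the session materials) of prompting a locally running SLM through Ollama's HTTP API. It assumes Ollama is installed and listening on its default port 11434, and that a small model such as "phi3" (an illustrative choice) has already been pulled with `ollama pull phi3`.

```python
# Illustrative sketch: query a local SLM served by Ollama.
# Assumes Ollama runs on its default endpoint and the "phi3" model is pulled.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def ask_local_slm(prompt: str, model: str = "phi3") -> str:
    """Send a single prompt to a locally hosted SLM and return the full response text."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # request one complete JSON object instead of a token stream
    }
    response = requests.post(OLLAMA_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["response"]


if __name__ == "__main__":
    print(ask_local_slm("Summarize the benefits of small language models in two sentences."))
```

Because the model runs entirely on local hardware, no prompt data leaves the machine, which is the privacy and control trade-off the session contrasts with cloud Inference APIs and managed platforms.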