On-Demand Webinar

Rapid LLM Experimentation with Gretel and Lambda

Hear from teams behind the AI developer cloud Lambda and the synthetic data platform Gretel about how their combined stack drives faster AI experimentation and innovation.

Recorded on August 13, 2024

The trend toward small language models (SLMs) is opening up entirely new possibilities for AI/ML teams everywhere. Model families like Mistral, Qwen, Yi, Phi, and SmolLM, spanning roughly 0.3B to 9B parameters, are quickly becoming the go-to choice for prototyping and quickly shipping task-specific solutions. Zooming out, enterprise AI development is on a path to becoming a much more iterative, experimental, and agile process, mirroring the evolution of software development, where agile and CI/CD are now standard practice. Yet there are two important differences: agility in the age of LLMs requires easy access to high-quality, context-specific data and reliable, on-demand access to hardware for training and inference.

In this webinar, learn how Gretel and Lambda together unlock faster experimentation so teams can easily vet approaches, fail fast, and be far more agile in delivering an LLM solution that works. We will use Gretel Navigator, the first compound AI system for synthetic data generation, to design and iterate on a task-specific dataset. Designing data from scratch and iterating on it is built into Navigator and into how users interact with it, creating a new paradigm for how AI/ML teams approach model development. Teams are no longer limited to experimenting with architectures, model configurations, and training parameters; they can quickly experiment with the data itself, and increasingly it is data experimentation that drives the most innovation.
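As a rough illustration of the data-design loop described above, the sketch below prompts Navigator for a tabular dataset using the gretel-client Python SDK. The factory call, the generate signature, the example prompt, the record count, and the output file name are assumptions based on the SDK as it stood around mid-2024, not code from the webinar itself; check the current Gretel documentation for the exact interface.

```python
# Rough sketch: generating a task-specific synthetic dataset with Gretel Navigator
# via the gretel-client Python SDK (method names are assumptions; verify against
# the current Gretel documentation).
from gretel_client import Gretel

# Connect to Gretel; "prompt" asks for your API key interactively.
gretel = Gretel(api_key="prompt")

# Initialize Navigator's tabular interface (assumed factory name).
navigator = gretel.factories.initialize_navigator_api("tabular")

# Illustrative prompt for a task-specific dataset; iterate on the prompt,
# inspect the results, and regenerate to refine the data.
prompt = (
    "Generate customer support tickets with columns: "
    "ticket_text, product_area, severity, resolution_summary."
)

df = navigator.generate(prompt, num_records=200)  # returns a pandas DataFrame

# Export as JSONL for downstream fine-tuning (file name is illustrative).
df.to_json("synthetic_train.jsonl", orient="records", lines=True)
```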

At the same time, Lambda offers easy, on-demand access to world-class GPU infrastructure, including InfiniBand for multi-node training, substantially reducing the cost of experimentation. Its 1-Click Clusters and reserved cloud remove the barriers that so often keep teams from simply trying things out and failing fast. We will use Lambda to fine-tune an SLM on several versions of the synthetic dataset from Gretel, reinforcing how easy it is to experiment with task-specific LLMs. With Gretel and Lambda, ideas, not data or hardware, become the bottleneck.
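To make the fine-tuning step concrete, here is a minimal single-GPU sketch using Hugging Face Transformers, which runs as-is on a Lambda instance; it is not the exact notebook from the webinar. It assumes the Gretel dataset was exported as JSONL with a "text" field, and the model name, file names, and hyperparameters are illustrative placeholders.

```python
# Minimal sketch: fine-tuning a small language model on a synthetic dataset,
# intended for a single GPU (e.g., one Lambda on-demand instance).
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Any 0.3B-9B SLM from the Hugging Face Hub works here; this repo id is illustrative.
MODEL_NAME = "HuggingFaceTB/SmolLM-360M"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)

# Load the synthetic dataset (JSONL with a "text" column, assumed exported from Gretel).
dataset = load_dataset("json", data_files="synthetic_train.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="slm-finetune",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # Causal-LM collator builds labels from input_ids (no masked-LM objective).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

trainer.train()
trainer.save_model("slm-finetune/final")
```

Re-running this script against each new version of the synthetic dataset is what makes the experiment loop cheap: the data changes, the training recipe stays fixed.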

Join us to learn how to:

  • Design and iterate on synthetic data using Gretel Navigator
  • Quickly spin up world-class GPU compute via Lambda 
  • Speed up innovation and reduce AI development costs

Presented by


Alex Watson

Co-Founder and Chief Product Officer, Gretel


Yev Meyer, Ph.D.

Chief Scientist, Gretel


Brendan Fulcher

ML Solutions Engineer, Lambda

Join us in the Synthetic Data Community Discord: https://gretel.ai/discord