Case Study: Autobound achieves 20x faster LLM iteration with Vellum


Autobound, a company that uses AI to help sales teams create personalized outreach emails, faced significant challenges in developing its AI-generated content. Its initial process for iterating on LLM prompts was slow and cumbersome, relying on custom Python scripts and manual version management in tools like Google Sheets and OpenAI's playground. This hindered the team's ability to quickly improve output quality, a key differentiator for the business. They turned to Vellum's platform for a solution.

By implementing Vellum, Autobound gained the ability to rapidly test and refine prompts against real-world, live data through features like the Prompt Sandbox and Evaluations. This new workflow let the team experiment with different models and prompts with ease. As a result, Vellum helped Autobound accelerate its end-to-end LLM iteration cycle by 20x. The team also achieved a 4-5x reduction in latency, cutting the time to generate an email from 30 seconds down to just 6-7 seconds.


Daniel Weiner, Founder, Autobound
