Case Study: Revamp runs 15M+ LLM executions with Vellum

A Vellum Case Study


How Revamp Reliably Runs 15M+ LLM Executions in Production

Revamp, a YC-backed startup that provides AI-powered, hyper-personalized messaging for eCommerce brands, faced challenges managing and scaling its AI development. The team needed to catch issues quickly, version prompts, avoid regressions, and handle over 15 million LLM executions in production each month without massive engineering overhead. They needed a way to ensure reliability and maintain quality at scale, and found it in Vellum.

By implementing Vellum, Revamp gained a first-class prompt versioning system and observability tooling. Streamlined debugging cut issue resolution time by 50%, and decoupled deployments meant prompt updates could go live with a single click, no app redeployment required. As a result, Revamp reliably runs 15 million monthly executions, uses production data to fine-tune models, and has accelerated its AI development cycle.



Xinchi Qi, Co-Founder, Revamp

