Case Study: Twelve Labs achieves advanced multimodal video understanding with Databricks

A Databricks Case Study

Preview of the Twelve Labs Case Study

Twelve Labs: Mastering Multimodal AI for Advanced Video Understanding

Twelve Labs needed to power advanced video understanding at scale, including semantic video search, content recommendation, and video RAG. The challenge was handling large video datasets with rich multimodal context. Databricks helped by pairing Twelve Labs' Embed API with Databricks Mosaic AI Vector Search, creating a unified approach to indexing and querying video embeddings.

Databricks implemented a pipeline that generated multimodal embeddings, stored them in a Delta table, and synced them to a Vector Search index for similarity search and recommendations. The result was faster development and more efficient workflows for advanced video applications, with support for complex natural-language queries across large video libraries and automatic index syncing to keep results current.
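The core idea of that pipeline, generate embeddings, store them as rows, and answer queries by vector similarity, can be sketched with a toy in-memory index. This is a conceptual illustration only: the `embed` function, `VideoIndex` class, and all identifiers below are hypothetical stand-ins, not the actual Twelve Labs Embed API or Databricks Mosaic AI Vector Search interfaces, which in the real system would supply the multimodal embeddings and the managed, auto-syncing index respectively.

```python
import math

def embed(text: str) -> list[float]:
    """Toy stand-in for a multimodal embedding model: maps text to a small
    unit vector via character statistics. The real pipeline would call the
    Twelve Labs Embed API on video segments instead."""
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity; vectors are already unit-length, so a dot product."""
    return sum(x * y for x, y in zip(a, b))

class VideoIndex:
    """In-memory stand-in for a vector index such as Mosaic AI Vector Search:
    holds (id, embedding) rows and answers nearest-neighbor queries."""
    def __init__(self):
        self.rows = []  # analogous to rows synced from a Delta table

    def upsert(self, video_id: str, description: str) -> None:
        self.rows.append((video_id, embed(description)))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.rows, key=lambda r: cosine(q, r[1]), reverse=True)
        return [video_id for video_id, _ in ranked[:k]]

index = VideoIndex()
index.upsert("clip-1", "a goalkeeper saves a penalty kick")
index.upsert("clip-2", "time-lapse of a city skyline at night")
results = index.search("a goalkeeper saves a penalty kick", k=1)
# results → ["clip-1"]: an identical query embeds to the same unit vector,
# so its cosine similarity with clip-1 is maximal
```

In the production setup described above, the Delta table is the system of record and the Vector Search index syncs from it automatically, so newly ingested videos become searchable without manual re-indexing.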

