Case Study: Logically achieves up to 40% faster GPU inference with Databricks Mosaic AI

A Databricks Case Study


Logically: Turbocharging GPU Inference with Databricks Mosaic AI

Founded in 2017, Logically uses AI to turn web, social, and digital data into actionable threat intelligence. As data volumes grew, the company needed to rein in rising GPU inference times, and it wanted to optimize cluster usage on Databricks to improve latency and make better use of scarce GPU resources.

Using Databricks Mosaic AI and Spark tuning, Logically adjusted fractional GPU allocation, concurrent task execution, and partition sizing to fit more work onto each GPU. As a result, the company reduced the runtime of its flagship complex models by up to 40% and improved GPU utilization, creating a stronger foundation for future optimization across its AI platform.
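The tuning levers described above map onto standard Spark GPU-scheduling and partitioning properties. The sketch below shows the kind of settings involved; the specific values are illustrative assumptions, not Logically's actual configuration:

```properties
# Each executor is assigned one GPU.
spark.executor.resource.gpu.amount   1
# Fractional per-task GPU share: 0.25 lets up to four tasks run
# concurrently on the same GPU (value is an illustrative assumption).
spark.task.resource.gpu.amount       0.25
# Partition sizing: cap input splits so each task's batch fits
# comfortably in GPU memory (value is an illustrative assumption).
spark.sql.files.maxPartitionBytes    256m
```

With a per-task share of 0.25 and one GPU per executor, Spark schedules up to four tasks on each GPU at once, which is how fractional allocation raises utilization of scarce GPU resources.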

