Case Study: Fintool reduces customer churn by 84% and cuts response latency by 60% with Datadog LLM Observability

A Datadog Case Study

Fintool is a San Francisco–based financial copilot for institutional investors that uses LLMs to analyze earnings calls, SEC filings, and large-scale financial data. Operating a complex, multi-provider pipeline (Azure, OpenAI, Amazon Bedrock, Elasticsearch, Groq, Cerebras) that processes billions of tokens and millions of documents, the company found that slow, multi-step LLM responses were directly increasing customer churn and limiting growth.

By adopting Datadog LLM Observability's auto-instrumentation and monitoring, Fintool gained end-to-end visibility into LLM calls, latency, token usage, and costs. That visibility let the team identify bottlenecks, rotate providers and models in real time, balance load, refine prompts, and cache responses. The optimizations cut response latency by 60%, reduced customer churn by 84%, improved response accuracy, and allowed Fintool to scale reliably across regions and workloads.
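To illustrate what this kind of setup typically involves, below is a minimal sketch using Datadog's Python LLM Observability SDK (ddtrace). It is not Fintool's actual code: the application name, model choice, and the answer_question function are illustrative assumptions; only the LLMObs.enable, @workflow, and LLMObs.annotate calls reflect the SDK's documented usage.

```python
# Minimal sketch: enable Datadog LLM Observability so OpenAI SDK calls are
# auto-instrumented, and group a retrieval + generation step into one trace.
# App name, model, and function names are illustrative, not Fintool's code.
import os

from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import workflow
from openai import OpenAI

# Agentless mode sends spans (latency, token usage, cost metadata)
# directly to Datadog without a local agent.
LLMObs.enable(
    ml_app="filing-qa",                 # illustrative application name
    api_key=os.environ["DD_API_KEY"],
    site="datadoghq.com",
    agentless_enabled=True,
)

client = OpenAI()

@workflow  # groups the multi-step request into a single trace in Datadog
def answer_question(question: str, context: str) -> str:
    # The underlying chat completion call is captured automatically as an
    # LLM span once LLMObs is enabled.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": f"Answer using this filing excerpt:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    answer = resp.choices[0].message.content
    # Attach workflow-level input/output so traces are searchable in Datadog.
    LLMObs.annotate(input_data=question, output_data=answer)
    return answer
```

With spans like these in place, per-step latency and token counts can be compared across providers, which is the kind of signal that supports the bottleneck identification and provider rotation described above.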



Nicolas Bustamante, Chief Executive Officer, Fintool
