Case Study: Paylocity achieves 800GB/day log analysis and simplified processing with Elastic

An Elastic Case Study


Paylocity faced a major observability challenge: its Elastic Stack ingest pipeline, in place since 2013, was struggling to analyze roughly 800GB of logs per day. The original setup relied on fragile, repetitive grok filters in Logstash and a small cluster (Elasticsearch v5 with a few combined master/data nodes and limited storage), which made parsing, scaling, and reindexing error-prone and slow as host counts grew into the hundreds and document counts into the billions.
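The case study does not reproduce Paylocity's filters, but the fragility it describes comes from maintaining many near-duplicate grok expressions that are tried in sequence. Since grok patterns compile down to named regular expressions, the consolidation fix described below can be sketched in Python with the standard `re` module (the log formats and field names here are hypothetical, purely for illustration):

```python
import re

# Hypothetical log lines from two slightly different sources.
LINES = [
    "2017-03-01T12:00:00Z web01 INFO request_id=abc123 status=200",
    "2017-03-01T12:00:01Z db02 WARN slow query took 1500ms",
]

# One consolidated pattern with named groups (grok-style semantics),
# replacing a stack of near-duplicate per-source expressions.
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+)\s+"
    r"(?P<host>\S+)\s+"
    r"(?P<level>[A-Z]+)\s+"
    r"(?P<message>.*)"
)

for line in LINES:
    match = LOG_PATTERN.match(line)
    if match:
        # Extracted fields, analogous to the fields a grok filter would emit.
        print(match.groupdict())
```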

The team simplified parsing with a single consolidated grok pattern, expanded and re-architected the cluster (dedicated master, coordinating, and data roles, many more data nodes, TLS, and cross-cluster search), and optimized reindexing with inline scripts, slicing, and non-blocking parameters (slices, refresh=false, larger batch sizes); a sketch of such a reindex call follows below. The tuning produced dramatic throughput gains (sustained tens of thousands of documents per second, with peaks near 100,000/sec), supporting roughly 500 million documents per day and about 30 billion indexed documents (~50TB total), and delivered faster troubleshooting and security response. Elastic became the team's standard tool for investigation and monitoring.
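The case study does not include the exact request Paylocity used, but a minimal sketch of a sliced, non-blocking reindex with an inline script, using a 7.x-style Python Elasticsearch client, might look like the following. The index names, batch size, connection details, and Painless script are hypothetical; the slices, refresh, and wait_for_completion parameters are the ones named above:

```python
from elasticsearch import Elasticsearch

# Hypothetical endpoint; Paylocity's cluster details are not in the source.
es = Elasticsearch("http://localhost:9200")

# Sliced, non-blocking reindex: slices="auto" parallelizes across shards,
# refresh=False avoids refreshes during the copy, and wait_for_completion=False
# returns a task ID immediately instead of blocking the client.
resp = es.reindex(
    body={
        # "size" is the per-batch document count (the larger batch size noted above).
        "source": {"index": "logs-old", "size": 5000},
        "dest": {"index": "logs-new"},
        # Hypothetical inline Painless script transforming each document in flight.
        "script": {
            "lang": "painless",
            "source": "ctx._source.remove('obsolete_field')",
        },
    },
    slices="auto",
    refresh=False,
    wait_for_completion=False,
)

# The reindex now runs server-side; poll the Tasks API with this ID for progress.
print("reindex task:", resp["task"])
```

Running the reindex as a background task keeps the client from blocking on a multi-billion-document copy, while slicing lets the work proceed in parallel across shards.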



Justin Purdy, Systems Administrator, Paylocity

