An Elastic Case Study
Paylocity faced a huge observability challenge: by 2013 their Elastic Stack ingest pipeline was struggling to analyze roughly 800GB of logs a day. The original setup relied on fragile, repetitive grok filters in Logstash and a small Elasticsearch v5 cluster with only a few master/data nodes and limited storage, which made parsing, scaling, and reindexing error-prone and slow as hosts grew into the hundreds and document counts into the billions.
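The "fragile, repetitive grok filters" problem typically looks like the following Logstash fragment (the pipeline types, field names, and patterns here are illustrative, not Paylocity's actual config): one near-duplicate grok block per log source, each of which silently breaks when that source's format drifts.

```
filter {
  # One near-duplicate grok block per log source; any format change breaks a copy.
  if [type] == "app" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
    }
  }
  if [type] == "web" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{IPORHOST:client} %{GREEDYDATA:msg}" }
    }
  }
  # ...repeated for every additional log source
}
```

Consolidating these into a single grok filter (the `match` option accepts a list of patterns tried in order) removes the duplication and centralizes maintenance.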
They simplified parsing with a consolidated grok pattern; re-architected and expanded the cluster (dedicated master, coordinating, and data roles; many more data nodes; TLS and cross-cluster search); and optimized reindexing with inline scripts, slicing, and non-blocking parameters (slices, refresh=false, larger batch sizes). The tuning produced dramatic throughput gains: sustained tens of thousands of docs/sec with peaks near 100k/sec, supporting ~500M documents/day and ~30B indexed documents (~50TB total). The result was faster troubleshooting and security response, and Elastic became the team's standard for investigation and monitoring.
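The reindexing tuning described above can be sketched as a single Reindex API call (index names, slice count, batch size, and the Painless script are illustrative assumptions, not the team's actual request): `slices` splits the job into parallel sub-requests, `wait_for_completion=false` makes the call non-blocking by returning a task ID immediately, `refresh=false` skips refreshing the destination index when the request finishes, and `source.size` raises the per-batch document count above the default of 1000.

```
POST _reindex?slices=5&wait_for_completion=false&refresh=false
{
  "source": {
    "index": "logs-old",
    "size": 5000
  },
  "dest": {
    "index": "logs-new"
  },
  "script": {
    "lang": "painless",
    "source": "ctx._source.remove('obsolete_field')"
  }
}
```

The inline script runs per document during the copy, which is how transformations (here, a hypothetical field removal) can be folded into the reindex rather than done in a separate pass.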
Justin Purdy
Systems Administrator