“Only accept features that scale” is one of Elasticsearch’s engineering principles. So how do we scale metrics stored in Elasticsearch? And is that even possible on a full-text search engine?
This talk explores:
- How are metrics stored in Elasticsearch, and how does this translate into disk usage and query performance?
- What does an efficient, multi-tier architecture look like that balances speed for today's data against density for older data?
- How can you compress metrics, and what does the mathematical model behind that compression look like?
We will try this hands-on during the talk, since it has become much simpler recently.