Honeycomb Retriever Database
- This is the data query system that Honeycomb uses to store and search data really quickly.
- Lambdas have really low latency to S3
- Issues that come up with Lambdas
- By default only 1,000 Lambdas can be running at once (the account's concurrent execution limit)
- The limit increases automatically under sustained heavy use, but that gradual scale-up doesn't help much when, as in their case, each Lambda only runs for a second or two.
- The burst limit can be raised somewhat, but not by much.
- Waiting for lambdas to start because of the burst limit becomes an issue.
- 6 MB limit on a Lambda's response payload. They get around this by passing large inputs and outputs between Lambdas via S3 (see the sketch below).
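A minimal sketch of that S3 indirection, assuming a hypothetical scratch bucket, key scheme, and response shape (this is not Honeycomb's actual code): the worker returns its result inline when it is small, and otherwise writes it to S3 and returns just the object's location.

```go
// Minimal sketch: return small results inline, spill large ones to S3 and
// return a pointer instead. Bucket name, key scheme, and the Output shape
// are all hypothetical.
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"time"

	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// Stay comfortably under Lambda's 6 MB synchronous response limit.
const maxInlineBytes = 5 * 1024 * 1024

type Output struct {
	Inline   json.RawMessage `json:"inline,omitempty"`   // result embedded in the response...
	S3Bucket string          `json:"s3Bucket,omitempty"` // ...or a pointer to where it was spilled
	S3Key    string          `json:"s3Key,omitempty"`
}

func handler(ctx context.Context, in json.RawMessage) (Output, error) {
	result := process(in) // whatever per-segment work this worker does

	if len(result) <= maxInlineBytes {
		return Output{Inline: result}, nil
	}

	// Too big to return directly: write the result to S3 and hand back its location.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return Output{}, err
	}
	client := s3.NewFromConfig(cfg)
	bucket := "retriever-scratch"                                // hypothetical scratch bucket
	key := fmt.Sprintf("results/%d.json", time.Now().UnixNano()) // hypothetical key scheme
	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   bytes.NewReader(result),
	})
	if err != nil {
		return Output{}, err
	}
	return Output{S3Bucket: bucket, S3Key: key}, nil
}

// process is a stand-in for the real per-segment query work.
func process(in json.RawMessage) json.RawMessage { return in }

func main() { lambda.Start(handler) }
```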
- Data is chunked by timestamp
- No secondary indexes (this is primarily to enable high write speeds)
- Filters are pushed down to the lambdas and run on each row.
- Overall it is essentially a map-reduce style system (rough sketch below).
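A rough sketch of that scatter/gather shape, with goroutines standing in for the Lambda workers and an invented Row/Segment schema: each mapper applies the pushed-down filter to its timestamp-bounded segment and produces a partial aggregate, then a reduce step merges the partials.

```go
// Sketch of the map-reduce shape: one "map" worker per timestamp-chunked
// segment applies the filter to every row and partially aggregates; a single
// "reduce" step merges the partial results. Row, Segment, and the count-by-
// field aggregation are illustrative stand-ins, not Honeycomb's real schema.
package main

import (
	"fmt"
	"sync"
)

type Row map[string]string

type Segment struct { // one timestamp-bounded chunk of data
	Rows []Row
}

// mapSegment plays the role of a single Lambda invocation: filter rows, partially aggregate.
func mapSegment(seg Segment, filter func(Row) bool, groupBy string) map[string]int {
	partial := make(map[string]int)
	for _, r := range seg.Rows {
		if filter(r) {
			partial[r[groupBy]]++
		}
	}
	return partial
}

// reduce merges the per-segment partial aggregates into the final result.
func reduce(partials []map[string]int) map[string]int {
	final := make(map[string]int)
	for _, p := range partials {
		for k, v := range p {
			final[k] += v
		}
	}
	return final
}

func main() {
	segments := []Segment{
		{Rows: []Row{{"service": "api", "status": "500"}, {"service": "web", "status": "200"}}},
		{Rows: []Row{{"service": "api", "status": "500"}}},
	}
	onlyErrors := func(r Row) bool { return r["status"] == "500" } // the pushed-down filter

	partials := make([]map[string]int, len(segments))
	var wg sync.WaitGroup
	for i, seg := range segments {
		wg.Add(1)
		go func(i int, seg Segment) { // each goroutine stands in for one Lambda
			defer wg.Done()
			partials[i] = mapSegment(seg, onlyErrors, "service")
		}(i, seg)
	}
	wg.Wait()
	fmt.Println(reduce(partials)) // map[api:2]
}
```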
- Notable quotes from Honeycomb's O'Reilly book, Observability Engineering
> But serverless functions and cloud object storage aren't 100% reliable. In practice, the latency at the tails of the distribution of invocation time can be significant orders of magnitude higher than the median time. That last 5% to 10% of results may take tens of seconds to return, or may never complete. In the Retriever implementation, we use impatience to return results in a timely manner. Once 90% of the requests to process segments have completed, the remaining 10% are re-requested, without canceling the still-pending requests. The parallel attempts race each other, with whichever returns first being used to populate the query result. Even if it is 10% more expensive to always retry the slowest 10% of subqueries, a different read attempt against the cloud provider's backend will likely perform faster than a "stuck" query blocked on S3, or network I/O that may never finish before it times out.
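A minimal sketch of that "impatient" retry, with goroutines and simulated latencies standing in for the Lambda subqueries: once 90% of segments have answered, the stragglers are re-requested without cancelling the originals, and whichever copy answers first wins. fetchSegment and the latency distribution are illustrative assumptions.

```go
// Sketch of the hedged-retry pattern from the quote above: the original
// subquery and its retry race, and the first answer per segment is kept.
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

type result struct{ id, value int }

// fetchSegment stands in for one per-segment subquery; its tail latency is occasionally awful.
func fetchSegment(ctx context.Context, id int) int {
	delay := time.Duration(rand.Intn(100)) * time.Millisecond
	if rand.Intn(10) == 0 {
		delay = 2 * time.Second // the occasional "stuck" read
	}
	select {
	case <-time.After(delay):
		return id * 10 // pretend result
	case <-ctx.Done():
		return 0
	}
}

func querySegment(ctx context.Context, id int, out chan<- result) {
	out <- result{id: id, value: fetchSegment(ctx, id)}
}

func main() {
	ctx := context.Background()
	const n = 20
	results := make(chan result, 2*n) // buffered so late duplicates never block
	for i := 0; i < n; i++ {
		go querySegment(ctx, i, results)
	}

	done := make(map[int]int)
	retried := false
	for len(done) < n {
		r := <-results
		if _, ok := done[r.id]; ok {
			continue // duplicate from the race; the first answer already won
		}
		done[r.id] = r.value

		// Once 90% of segments have answered, re-request the stragglers
		// without cancelling the originals: the two attempts race.
		if !retried && len(done) >= n*9/10 {
			retried = true
			for i := 0; i < n; i++ {
				if _, ok := done[i]; !ok {
					go querySegment(ctx, i, results)
				}
			}
		}
	}
	fmt.Println("all", n, "segments answered:", done)
}
```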
> What happens if someone attempts to group results by a high-cardinality field? How can you still return accurate values without running out of memory? The simplest solution is to fan out the reduce step by assigning reduce workers to handle only a proportional subset of the possible groups. For instance, you could follow the pattern Chord does by creating a hash of the group and looking up the hash correspondence in a ring covering the keyspace.
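A toy version of that fanned-out reduce step: each group key is hashed to pick the reduce worker that owns it, so no single reducer has to hold every group in memory. Plain hash-mod-N assignment stands in here for the Chord-style ring mentioned in the quote, which is a simplification.

```go
// Sketch of partitioning the reduce step for high-cardinality group-bys:
// route each group key to one of several reducers by hash.
package main

import (
	"fmt"
	"hash/fnv"
)

const numReducers = 4

// reducerFor picks which reduce worker owns a given group key.
func reducerFor(groupKey string) int {
	h := fnv.New32a()
	h.Write([]byte(groupKey))
	return int(h.Sum32() % numReducers)
}

func main() {
	// Partial counts from map workers, keyed by a high-cardinality field (e.g. user ID).
	partials := map[string]int{"user-1": 3, "user-2": 7, "user-3": 1, "user-4": 5}

	// Each reducer only accumulates the slice of the keyspace hashed to it.
	reducers := make([]map[string]int, numReducers)
	for i := range reducers {
		reducers[i] = make(map[string]int)
	}
	for key, count := range partials {
		reducers[reducerFor(key)][key] += count
	}

	for i, groups := range reducers {
		fmt.Printf("reducer %d owns %v\n", i, groups)
	}
}
```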
- They also have a complex, well-designed ingest system, described in the PDF linked under Sources, which I haven't covered here.
- Sources
- https://www.dataengineeringpodcast.com/honeycomb-data-infrastructure-with-sam-stokes-episode-20/
- https://www.infoq.com/podcasts/aws-lambda-custom-database-retriever/
- https://www.honeycomb.io/blog/secondary-storage-to-just-storage
- https://em360tech.com/sites/default/files/2023-01/Honeycomb-OReilly-Book-on-Observability-Engineering.pdf (Chapter 16)