Comparing various compression algorithms for a particular project

We are building a hobby search engine for a blog platform, and one challenging problem is fitting the whole search index and archival dataset onto a single server node.

There are 1.3 billion posts and comments, and the whole uncompressed dataset takes about 2 TB.
We generate *many* indexes to make very tricky searches fast: we cannot afford to wait 20+ seconds to find all comments Alice made in Bob’s blog, which is how long Sphinx takes to solve this task by enumerating all comments and filtering them by author/blog fields. Instead we have to answer in a fraction of a second.

Our algorithms produce about 3x as much uncompressed data as the input, and this generally has to be compressed. 1x of those 3x is the original text content (slightly reduced by restricting to supported languages, dropping HTML markup and so on); the rest is a set of binary indexes whose content is essentially timestamp-like IDs. Although these timestamp-like IDs are monotonically increasing, they are not fixed-interval timestamps, so we can not use Facebook’s Gorilla without serious modifications.
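Monotonically increasing IDs still compress well with plain delta encoding followed by variable-length integer packing. Below is a minimal sketch of that idea in Python; the `encode_ids`/`decode_ids` names are ours and this is not Greylock’s actual on-disk format, just an illustration of the technique.

```python
def encode_ids(ids):
    """Delta-encode a sorted list of IDs, packing each delta as a LEB128-style varint."""
    out = bytearray()
    prev = 0
    for value in ids:
        delta = value - prev          # monotonic IDs -> non-negative deltas
        prev = value
        while True:                   # 7 payload bits per byte, high bit = "more bytes follow"
            byte = delta & 0x7F
            delta >>= 7
            if delta:
                out.append(byte | 0x80)
            else:
                out.append(byte)
                break
    return bytes(out)

def decode_ids(data):
    """Inverse of encode_ids: unpack varints and undo the delta encoding."""
    ids, prev, delta, shift = [], 0, 0, 0
    for byte in data:
        delta |= (byte & 0x7F) << shift
        if byte & 0x80:
            shift += 7
        else:
            prev += delta
            ids.append(prev)
            delta, shift = 0, 0
    return ids

packed = encode_ids([1503000000123, 1503000000125, 1503000007001])
assert decode_ids(packed) == [1503000000123, 1503000000125, 1503000007001]
```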

As a test we decided to check how common compression algorithms handle this data. We tried zlib, snappy, lz4, lz4hc and zstd. The test dataset was about 220 MB uncompressed; we measured the time our tool needed to index this text with exactly the same settings. It produced about 660 MB of data, which was compressed at the storage level during the test, and we recorded the size of the final database. We did not test bz2, since its compression/decompression time is much higher, which is unacceptable for realtime use.
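For reference, this is roughly how such a standalone comparison could be scripted (our real test compressed the data inside the storage engine, not with a script like this). The `lz4` and `zstandard` PyPI package names and the `indexed_data.bin` file are assumptions for illustration; snappy and lz4hc are omitted for brevity.

```python
import time
import zlib

import lz4.frame     # assumes the PyPI "lz4" package
import zstandard     # assumes the PyPI "zstandard" package

def bench(name, compress, payload):
    start = time.perf_counter()
    packed = compress(payload)
    elapsed = time.perf_counter() - start
    print(f"{name:5s} {len(packed):>12d} bytes  {elapsed:6.3f} s  "
          f"ratio {len(payload) / len(packed):.2f}")

with open("indexed_data.bin", "rb") as f:    # placeholder dump of the generated index
    payload = f.read()

bench("zlib", lambda d: zlib.compress(d, 6), payload)
bench("lz4",  lz4.frame.compress, payload)
bench("zstd", zstandard.ZstdCompressor(level=3).compress, payload)
```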

Here are the results:

[Chart: index compaction time and final database size per compression algorithm]

It is somewhat surprising that zstd is comparable in speed with zlib, yet the final database takes about 10% more space.

Arithmetic coding vs common compression

Arithmetic coding is a type of lossless compression where more frequent symbols are encoded with fewer bits than less frequent ones.

Arithmetic coding is commonly lumped together with Huffman coding, but that is not correct: Huffman coding splits the input string into smaller and smaller chunks until each chunk can be replaced with a shorter code. Huffman encoding usually can not reach the Shannon limit (the theoretical compression limit for the given data) within a reasonable number of iterations, while arithmetic coding, which transforms the whole string into a packed binary stream at once, approaches that limit much faster.
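The gap is easy to see on a toy example. The sketch below uses a made-up, heavily skewed symbol distribution (the helper code is ours) to compute the Shannon limit and the average Huffman code length for the same distribution: Huffman has to spend a whole number of bits per symbol, so it stays measurably above the limit.

```python
import heapq
import math

# A made-up, skewed symbol distribution, purely for illustration.
probs = {"a": 0.70, "b": 0.15, "c": 0.10, "d": 0.05}

# Shannon limit: H = -sum(p * log2(p)) bits per symbol.
entropy = -sum(p * math.log2(p) for p in probs.values())

# Build Huffman code lengths with a heap of (probability, tie-breaker, code-lengths) nodes.
heap = [(p, i, {s: 0}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p1, _, l1 = heapq.heappop(heap)
    p2, _, l2 = heapq.heappop(heap)
    merged = {s: n + 1 for s, n in {**l1, **l2}.items()}   # every merge adds one bit
    heapq.heappush(heap, (p1 + p2, counter, merged))
    counter += 1
lengths = heap[0][2]
huffman_bits = sum(probs[s] * lengths[s] for s in probs)

print(f"Shannon limit: {entropy:.3f} bits/symbol")       # ~1.32
print(f"Huffman code : {huffman_bits:.3f} bits/symbol")  # 1.45
```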

We work on the Greylock search engine, where a major task is to implement a full-text search index with as small a disk footprint as possible. In our tests 200k uncompressed input events occupy about 220 MB of disk space and the whole index takes close to 450 MB. The index includes the original content, the common index and a per-author index (which basically doubles the original index). ElasticSearch with morphology takes about 539 MB, without morphology 317 MB; Sphinx (plus MySQL to store the original content) is about 250 MB, but that index is heavily truncated, there is still a notion of stop-words, and the morphology is broken.

We compress our binary index entries as well as the original content using lz4hc or gzip, but I wondered whether arithmetic coding can do better than that. I skipped the Huffman encoder because of its theoretical problems approaching the Shannon limit (not to mention practical issues), and decided to check a rANS entropy coder – an asymmetric numeral systems (ANS) coder which combines the speed of Huffman coding with the compression rate of arithmetic coding. Well, at least that’s what its author says in the paper.
Here is an image from the paper showing the Huffman coder approaching the Shannon limit:

[Figure from the paper: Huffman coder vs. the Shannon limit]

It turned out to be worse than any common compression algorithm for both binary and text data.

Text data from original library: 768771 bytes
bz2: 232598
gzip: 312281
lz4: 363236
rans: 435113

Binary data from our index: 9678
bz2: 3652
gzip: 3665
lz4: 5262
rans: 7795
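Those byte counts can be reproduced with standard tools; here is roughly how such a comparison might be scripted in Python (the rANS coder comes from the library referenced above and is not shown here; the file names are placeholders for our samples):

```python
import bz2
import gzip
import lz4.frame   # assumes the PyPI "lz4" package

def report(path):
    with open(path, "rb") as f:
        data = f.read()
    print(f"{path}: {len(data)} bytes")
    print(f"  bz2 : {len(bz2.compress(data, 9))}")
    print(f"  gzip: {len(gzip.compress(data, 9))}")
    print(f"  lz4 : {len(lz4.frame.compress(data))}")

report("text_sample")       # placeholder for the text data from the rANS library
report("index_entry.bin")   # placeholder for one of our binary index entries
```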

We are not dropping the idea of using arithmetic coding to pack binary data, since Dropbox showed with its Lepton project that JPEG images can be losslessly compressed by about 20% using tricky DCT index sorting and prediction (which accounts for about 3-4% of that 20%) plus VP8 arithmetic coding of those indexes.

A naive comparison of Lepton and rANS yields a dramatic difference:
Original jpeg file size: 71517
Lepton encoded size: 55700
gzip: 71171
bz2: 71586
lz4: 71414
rANS encoded size: 71336

This is of course an unfair comparison, since Lepton encodes not the binary stream of the original image, but specially crafted DCT indexes.

Since our indexes are basically vectors of document IDs, each of which is partly an increasing timestamp and partly a sequence number, it might be a good idea to use the timestamp compression algorithm described for Facebook’s Gorilla timeseries database – I wrote about it here, but that’s another story.
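The core of that algorithm is delta-of-delta encoding: for nearly regular timestamps almost every second-order difference is zero or tiny, so it fits into very few bits. A minimal sketch of the transform (our own simplification; real Gorilla packs each value into one of a few fixed bit-width buckets):

```python
def delta_of_delta(timestamps):
    """Second-order differences of a sorted timestamp sequence."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return [b - a for a, b in zip(deltas, deltas[1:])]

ts = [1000, 1060, 1120, 1181, 1240, 1300]   # made-up, almost fixed-interval timestamps
print(delta_of_delta(ts))                   # [0, 1, -2, 1] -> mostly near zero, packs tightly
```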

Greylock tutorial – distributed base search engine based on Elliptics

We’ve heavily updated the Reverbrain documentation pages at doc.reverbrain.com, and I’m pleased to present our distributed base search engine Greylock. The documentation includes generic information about the search engine and a tutorial covering the installation process, configs and two types of clients: a plain HTTP API (similar to what is expected from a base search engine like Greylock or ElasticSearch) and a Python client (it works via HTTP too, but also uses Consul to acquire mailbox locks). If you need a C++ tutorial, you can check the greylock test suite, which includes insertion/selection/removal as well as various iterators over data, self-recovery tests, statistics and other interesting bits.

I get a fair number of questions about how Greylock differs from, for instance, ElasticSearch, Solr or Amazon CloudSearch. They all have an enormous number of features and work with large amounts of data, so what is the purpose?

And the answer is just two words: scalability and automation.
If you have worked with Elastic you know which operations have to be performed to reshard a cluster when the current sharding scheme becomes a bottleneck (how long it takes, what the performance penalty is and how dangerous the process is). When you work in an environment where new documents keep arriving and space consumption grows with time, this resharding process has to be started again and again as new servers are added. At some point this becomes a serious issue.

With Greylock this is not needed at all. Thanks to the Elliptics bucket system there is virtually no data or index movement when new servers are added. This design has proved to work really well in Elliptics storage installations, where upload rates reach tens of terabytes daily, and that’s only our clients’ data; there are other, seriously larger installations, for example at Yandex.

We concentrated on the scalability problem and solved it. And yet we do have a set of features. It is of course not comparable to Elastic, even setting aside the NLP tasks which we will release later (for instance language models and spelling correction for any language where you can find a rather large corpus). Greylock supports a basic relevance model based on the distance between the words of the client request and the words in the document.
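For illustration only (this is not Greylock’s actual formula, and the function below is our own toy), a distance-based relevance score can be as simple as rewarding documents whose matched words sit close to each other:

```python
def proximity_score(query_words, doc_words):
    """Toy proximity score: closer matches of adjacent query words -> higher score."""
    positions = [[i for i, w in enumerate(doc_words) if w == q] for q in query_words]
    if any(not p for p in positions):
        return 0.0                                   # some query word is missing
    total = 0
    for left, right in zip(positions, positions[1:]):
        total += min(abs(j - i) for i in left for j in right)
    return 1.0 / (1.0 + total)

doc = "alice wrote a short comment in the blog of bob yesterday".split()
print(proximity_score(["alice", "comment"], doc))    # close together -> higher score
print(proximity_score(["alice", "yesterday"], doc))  # far apart -> lower score
```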

Probably the two worst issues are the absence of numerical indexes and of client locks. Both omissions were deliberate. Numerical indexes break pagination: if you want 100 documents out of a million, you have to read them all into RAM, re-sort them either into numeric order or into lexical order (which is how document IDs are stored in the inverted indexes), intersect the whole million of keys and return the first 100. For every subsequent request this has to be done again and again. Without numerics, pagination works with iterators pointing into the inverted indexes only; the whole index (and its millions of entries) is never read, only some pages are accessed sequentially.
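The sketch below shows why iterator-based pagination stays cheap: intersecting two sorted posting lists lazily and stopping after one page touches only a tiny prefix of either list (the posting lists here are synthetic stand-ins, not Greylock’s real data structures):

```python
import itertools

def intersect(a, b):
    """Lazily yield IDs present in both sorted iterables."""
    ita, itb = iter(a), iter(b)
    try:
        x, y = next(ita), next(itb)
        while True:
            if x == y:
                yield x
                x, y = next(ita), next(itb)
            elif x < y:
                x = next(ita)
            else:
                y = next(itb)
    except StopIteration:
        return

alice_comments = range(0, 10_000_000, 3)   # stand-in for a huge sorted posting list
bob_blog_posts = range(0, 10_000_000, 7)
page = list(itertools.islice(intersect(alice_comments, bob_blog_posts), 100))
print(page[:5])   # first page produced after reading only a small prefix of each list
```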

To help with numerics Greylock supports document timestamps, i.e. a single 64-bit number per document ID which is used as the sorting order of the inverted indexes. Of course this is not a replacement for a proper numeric index, but it does solve almost all of our use cases.

The second major issue is consistency and client locking. Greylock, like Elliptics, is not a strictly consistent storage. With Greylock things are even worse: the amount of data overwritten by a single client insert can be large, and the index structure (which originally started as a distributed B+/*-tree) does not tolerate broken pages. Elastic and others implement a consistency model (like Raft, Paxos or ZAB) internally; Greylock does not. That is why we require clients to acquire locks in some other system like Consul, etcd or ZooKeeper to work properly. Our tutorial shows a basic locking scheme implemented using the strictly consistent Consul key-value storage.
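A minimal sketch of that locking idea using the python-consul package (an assumption on our side; the tutorial has its own client code, and the lock key below is made up):

```python
import consul

c = consul.Consul()                              # local Consul agent with default settings
session = c.session.create(name="greylock-writer", ttl=30)

# kv.put with acquire= succeeds for exactly one session at a time.
if c.kv.put("locks/greylock/mailbox", "owner", acquire=session):
    try:
        pass                                     # perform the index insertion here
    finally:
        c.kv.put("locks/greylock/mailbox", "owner", release=session)
c.session.destroy(session)
```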

We have major plans for the Greylock distributed search engine, so expect new features, and give it a try: http://doc.reverbrain.com/greylock:greylock
If you have any questions, you are welcome to ask in the
Google group: https://groups.google.com/forum/?fromgroups=#!forum/reverbrain,
or on this site in the comments.