Smack was created to host huge amounts of rather small, compressible records in elliptics. The backend architecture was designed with HBase in mind, but several design decisions were tested and implemented differently.
Data is compressed and sorted by key, so you get HBase-like range scans essentially for free (although this is not yet exported through the elliptics API).
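To illustrate why sorted, compressed storage gives range scans cheaply, here is a minimal sketch of the general technique (this is not the smack code; the block size, zlib compression, and record layout are illustrative assumptions): records sorted by key are packed into compressed blocks, and a small in-memory index of each block's first key lets a scan decompress only the blocks it needs.

```python
import bisect
import zlib

# Illustrative model, not smack's actual on-disk format:
# sorted records packed into zlib-compressed blocks, plus an
# in-memory index holding the first key of each block.
records = sorted((f"key-{i:06d}", f"value-{i}") for i in range(10_000))

BLOCK = 512  # records per block (arbitrary choice for the sketch)
blocks, index = [], []
for i in range(0, len(records), BLOCK):
    chunk = records[i:i + BLOCK]
    index.append(chunk[0][0])  # first key of the block
    payload = "\n".join(f"{k}\t{v}" for k, v in chunk)
    blocks.append(zlib.compress(payload.encode()))

def scan(start, end):
    """Yield (key, value) pairs with start <= key < end."""
    # Binary-search the index to find the first block that can
    # contain `start`, then decompress blocks until we pass `end`.
    first = max(bisect.bisect_right(index, start) - 1, 0)
    for blob in blocks[first:]:
        for line in zlib.decompress(blob).decode().splitlines():
            k, v = line.split("\t")
            if k >= end:
                return
            if k >= start:
                yield k, v

hits = list(scan("key-001000", "key-001010"))
```

Only the one or two blocks overlapping the requested range are decompressed; everything else stays untouched on disk, which is what makes scans over compressed data cheap.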
Test setup: a single generator machine and 6 smack backends (each using a different compression algorithm); ~500 million records per node (300-900 bytes each, possibly with some larger keys); 64 GB of RAM (lxctl virtual hosts); 4-way RAID-10 storage with ext4.
This database does not fit into RAM on any node: it is about 70-90 GB per node, while we only have 64 GB of memory. Reads therefore frequently go to disk, which has to be taken into account.
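A quick back-of-the-envelope bound on the cache situation, using the figures above (optimistic, since in practice the page cache gets nowhere near all 64 GB):

```python
ram_gb = 64
db_gb_low, db_gb_high = 70, 90  # per-node database size from the text

# Upper bound on the fraction of the database the page cache could
# hold, i.e. even in the best case some reads must hit disk.
best_case = ram_gb / db_gb_low    # ~91% cached at 70 GB
worst_case = ram_gb / db_gb_high  # ~71% cached at 90 GB
```

So at least ~10-30% of uniformly random reads are guaranteed to miss the cache, before accounting for memory used by the process itself.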
Combining all 6 nodes, writes reach about 60,000 rps for a total of roughly 500 million unique keys.
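The write numbers above imply the following per-node arithmetic, assuming the load is spread evenly across the 6 backends (an assumption; the text does not state the distribution):

```python
total_rps = 60_000           # combined write throughput from the text
nodes = 6
records_per_node = 500_000_000  # ~500 million records per node

per_node_rps = total_rps / nodes               # ~10,000 rps per backend
fill_seconds = records_per_node / per_node_rps # time to fill one node
fill_hours = fill_seconds / 3600               # ~14 hours
```

That is roughly 10,000 writes per second per backend, and about 14 hours of sustained load to reach the ~500 million records per node used for the read tests.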
Read performance (measured separately for each compression backend):