Testing SQLite vs. Tokyo Cabinet (TC) vs. G-WAN's KV store
Preparing data for 10 rounds * 1,000 items = 10,000 operations per engine
Tokyo Cabinet fails with locking errors beyond 10,000 items because a TC database
can be opened either in READ mode or in WRITE mode, but not in both at the same time.
sprintf() Overhead: 1,000 entries processed in 0.28500 ms
Random Overhead: 1,000 entries processed in 0.02100 ms
(times are measured per item operation, in milliseconds; TC-FIXED's 'wipe all' time, where the I/O is done, is replaced by the smaller 'delete' time)
| engine | insert | random insert | random update | traverse | in-order search | random search | delete | wipe all | total time |
|--------|--------|---------------|---------------|----------|-----------------|---------------|--------|----------|------------|
| G-WAN vs SQLite | 119.429x faster | 105.906x faster | 134.657x faster | 29.960x faster | 324.718x faster | 298.263x faster | 14.693x faster | 28.023x faster | 211.406x faster |
| G-WAN vs TC | 13.048x faster | 1.273x faster | 1.313x faster | 500.822x faster | 11.504x faster | 10.569x faster | 23.459x faster | 47.345x faster | 30.864x faster |
| G-WAN vs TC-FIXED | 1.318x slower | 1.356x slower | 1.200x slower | 11.198x faster | 4.359x faster | 4.228x faster | 2.056x faster | 4.150x faster | 2.765x faster |
Explaining the Results:
TC and TC-FIXED inserts are fast because TC's hash-table and TC-FIXED's array are pre-allocated
before any item can be added (by contrast, G-WAN's KV store allocates memory on demand). Pre-allocating
all the memory helps TC shine in benchmarks, but real-life applications may take months to fill these data
structures (if that ever happens), wasting precious memory that the system and other applications could otherwise use.
About the "Total Time":
G-WAN's KV store is 20-30x faster than Tokyo Cabinet "TC" (hash-table of variable-size keys/values).
G-WAN's KV store is 2-3x faster than Tokyo Cabinet "TC-FIXED" (an array of fixed-size keys/values).
Unlike TC, TC-FIXED and SQLite (where an insert/update blocks all other read & write threads),
G-WAN's Key-Value store is wait-free (it never blocks and it never delays any mix and number of
reads and writes).
G-WAN's Key-Value store's ability to create indexes on existing data lets you keep using the same
indexed model that everybody has relied on for decades - just much faster.