Public APIs
The examples of ForestDB APIs described in this page are in the C language. The basic indexing unit of ForestDB is a document that consists of key, metadata, and body, each of which is an arbitrary-length sequence of uninterpreted bytes. Documents are ordered by key (defaulting to a simple lexicographic sort, like memcmp) and can be iterated in that order.
You can find the old APIs used in the 1.0 beta at the Public APIs (1.0 Beta) page.
ForestDB uses a global context that is maintained per process and applies to all databases created and managed by the process. This context can be configured explicitly, before any other ForestDB calls are made, using fdb_init(). Once this ForestDB context is initialized, subsequent attempts to change this state will be ignored.
The following example shows how to initialize the global ForestDB context using the default configuration.
#include <assert.h>
#include "libforestdb/forestdb.h"
fdb_status status;
fdb_config config;
config = fdb_get_default_config();
status = fdb_init(&config);
assert(status == FDB_RESULT_SUCCESS);
...
Each ForestDB file has a unique name that is used as a prefix of the path name for the file. A ForestDB file can contain one or more KV stores, where documents are actually stored.
The following example shows how to create and open a ForestDB file and open the default KV store from the file handle. Note that the default configuration creates a new ForestDB file if it does not exist.
#include <assert.h>
#include "libforestdb/forestdb.h"
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_status status;
fdb_config config;
fdb_kvs_config kvs_config;
config = fdb_get_default_config();
kvs_config = fdb_get_default_kvs_config();
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
assert(status == FDB_RESULT_SUCCESS);
status = fdb_kvs_open(fhandle, &kvhandle, NULL, &kvs_config);
assert(status == FDB_RESULT_SUCCESS);
...
When NULL is passed as the KV store name, a new KV store instance named "default" is created. Note that there is an alternative API, fdb_kvs_open_default, for opening the default KV store, which you can use as follows:
...
status = fdb_kvs_open_default(fhandle, &kvhandle, &kvs_config);
assert(status == FDB_RESULT_SUCCESS);
...
Almost all ForestDB API calls return their results as the fdb_status type. The return value will be FDB_RESULT_SUCCESS if the call succeeded. Otherwise, the value is an error number defined in fdb_errors.h.
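For instance, a minimal error-handling sketch (the file path is illustrative); fdb_error_msg converts a status code into a human-readable message:
#include <stdio.h>
#include "libforestdb/forestdb.h"
fdb_file_handle *fhandle;
fdb_config config;
fdb_status status;
config = fdb_get_default_config();
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
if (status != FDB_RESULT_SUCCESS) {
    // Convert the status code into a descriptive error string.
    fprintf(stderr, "fdb_open failed: %s\n", fdb_error_msg(status));
}
...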
ForestDB provides several configuration options via two types: fdb_config and fdb_kvs_config, for a ForestDB file and a KV store instance, respectively. You can get the default configurations by calling the fdb_get_default_config and fdb_get_default_kvs_config APIs; the details are described in fdb_types.h.
Note that a ForestDB file handle (fdb_file_handle) and KV store handle (fdb_kvs_handle) cannot be used simultaneously by two or more threads; each thread should have its own handles. As some languages (e.g., Golang) abstract threading, we recommend using handle pools for readers and writers in those languages.
ForestDB supports multiple KV stores in a single ForestDB file, and you can create new empty KV stores using the fdb_kvs_open API. The example below shows how to create a new KV store in the file:
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_status status;
fdb_kvs_config kvs_config;
...
kvs_config = fdb_get_default_kvs_config();
status = fdb_kvs_open(fhandle, &kvhandle, "KV store name", &kvs_config);
assert(status == FDB_RESULT_SUCCESS);
...
ForestDB's block cache is an alternative to the OS page cache; its default size is 128 MB. You can change the size in bytes, or disable the cache by setting the size to zero:
...
fdb_config config;
config = fdb_get_default_config();
config.buffercache_size = 1073741824; // 1 GB
...
...
fdb_config config;
config = fdb_get_default_config();
config.buffercache_size = 0; // disable
...
The block cache is globally shared across all active ForestDB files, so buffercache_size will be ignored if the block cache has already been initialized by a previous fdb_open call.
If you want to close a KV store handle, call fdb_kvs_close as follows:
fdb_kvs_handle *kvhandle;
fdb_status status;
...
status = fdb_kvs_close(kvhandle);
assert(status == FDB_RESULT_SUCCESS);
The fdb_close API is used for closing a file handle. All open KV store handles that belong to the file handle are closed together with the fdb_close API call:
fdb_file_handle *fhandle;
fdb_kvs_handle *default_kv, *kvhandle1, *kvhandle2;
fdb_kvs_config kvs_config;
fdb_status status;
...
status = fdb_kvs_open_default(fhandle, &default_kv, &kvs_config);
status = fdb_kvs_open(fhandle, &kvhandle1, "First DB", &kvs_config);
status = fdb_kvs_open(fhandle, &kvhandle2, "Second DB", &kvs_config);
...
// This call also closes default_kv, kvhandle1, and kvhandle2 handles.
status = fdb_close(fhandle);
assert(status == FDB_RESULT_SUCCESS);
Note that global resources such as the block cache and file management data still reside in memory for fast restart, even after closing the file handle. If you want to completely terminate the ForestDB library, call fdb_shutdown as follows:
fdb_status status;
...
status = fdb_shutdown();
assert(status == FDB_RESULT_SUCCESS);
As we mentioned, ForestDB performs indexing on a per-document basis. To create an in-memory document, call fdb_doc_create. (This does not add the document to the KV store; it just allocates memory for the fdb_doc structure and its contents.)
...
fdb_doc *doc;
fdb_status status;
status = fdb_doc_create(&doc, "foo", 3, "00", 2, "bar", 3);
assert(status == FDB_RESULT_SUCCESS);
...
Note that any field can be empty:
fdb_doc *doc1, *doc2, *doc3;
fdb_status status;
status = fdb_doc_create(&doc1, "foo", 3, NULL, 0, NULL, 0);
status = fdb_doc_create(&doc2, NULL, 0, NULL, 0, NULL, 0);
status = fdb_doc_create(&doc3, "bar", 3, NULL, 0, "baz", 3);
If you want to change the metadata or body of a document in memory, call fdb_doc_update. (Again, this does not affect the KV store; it just updates the in-memory values stored in the fdb_doc.)
...
fdb_doc *doc;
fdb_status status;
status = fdb_doc_create(&doc, "foo", 3, "00", 2, "bar", 3);
assert(status == FDB_RESULT_SUCCESS);
status = fdb_doc_update(&doc, "11", 2, "baz", 3);
assert(status == FDB_RESULT_SUCCESS);
...
This example modifies the metadata and body of the document doc from 00 and bar to 11 and baz, respectively.
To free an fdb_doc structure and the data it stores, just call fdb_doc_free:
fdb_doc *doc;
fdb_status status;
...
status = fdb_doc_free(doc);
assert(status == FDB_RESULT_SUCCESS);
The following example shows how to insert a document into a KV store by calling fdb_set:
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_doc *doc;
fdb_status status;
...
status = fdb_doc_create(&doc, "foo", 3, "00", 2, "bar", 3);
status = fdb_set(kvhandle, doc);
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
status = fdb_doc_free(doc);
...
If a document with the same key already exists, ForestDB overwrites it with the new document. After a successful fdb_set, the fdb_doc can be freed if desired, since its contents have now been copied into the KV store.
After making one or more changes to KV stores, you must invoke fdb_commit to commit all the pending updates to the file on disk. Note that a file handle (fdb_file_handle) should be passed to the fdb_commit API. Without invoking fdb_commit, changes would be lost after closing the file handle or upon a system failure. fdb_commit writes any pending in-memory data, updates the tree structures, adds a new header at the end of the file, and calls fsync to flush all dirty data to the corresponding file.
If you want to write data to disk asynchronously, ForestDB supports an asynchronous commit mode option in fdb_config. The following example shows how to write a document and commit the update asynchronously. Note that fdb_commit should still be called even when the file is opened with the asynchronous option.
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_doc *doc;
fdb_status status;
fdb_config config;
config = fdb_get_default_config();
config.durability_opt = FDB_DRB_ASYNC;
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
...
status = fdb_doc_create(&doc, "foo", 3, "00", 2, "bar", 3);
status = fdb_set(kvhandle, doc);
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
status = fdb_doc_free(doc);
...
To commit more than one document update at once, call fdb_set for each document and then invoke fdb_commit at the end. Note that updates across multiple KV stores can be batched as well, as shown in the sketch after this example:
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_doc *doc1, *doc2, *doc3;
fdb_status status;
...
status = fdb_set(kvhandle, doc1);
status = fdb_set(kvhandle, doc2);
status = fdb_set(kvhandle, doc3);
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
...
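For instance, a sketch of batching updates to two different KV stores in the same file into a single commit (kv1 and kv2 are hypothetical handles opened with fdb_kvs_open):
fdb_file_handle *fhandle;
fdb_kvs_handle *kv1, *kv2;
fdb_doc *doc1, *doc2;
fdb_status status;
...
status = fdb_set(kv1, doc1);  // update KV store "kv1"
status = fdb_set(kv2, doc2);  // update KV store "kv2"
// A single commit persists the pending updates to both KV stores at once.
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
...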
If you want to add large numbers of documents at once, the wal_flush_before_commit option in fdb_config is useful. This option periodically flushes WAL entries into the on-disk tree before the fdb_commit API is called, in order to restrict the overall memory consumption by WAL indexes. fdb_commit only has to be invoked once at the end of the bulk load.
Note that the wal_flush_before_commit option is enabled by default.
fdb_file_handle *fhandle;
fdb_status status;
fdb_config config;
config = fdb_get_default_config();
config.wal_flush_before_commit = true;
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
... (do bulk write) ...
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
...
Sequence numbers are used to denote the ordering of mutations that occur in a data store. ForestDB internally generates a new sequence number for every set, update, or delete operation. However, for certain use cases where the ordering of mutations is externally maintained and guaranteed to be monotonically increasing, it is possible to have ForestDB use those sequence numbers instead of generating them. This is achieved by using fdb_doc_set_seqnum, as shown below.
fdb_file_handle *fhandle;
fdb_kvs_handle *db;
fdb_status status;
fdb_config config;
fdb_kvs_config kvs_config;
fdb_doc *doc;
fdb_seqnum_t seqnum = 42; // Custom sequence number externally generated
config = fdb_get_default_config();
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
kvs_config = fdb_get_default_kvs_config();
status = fdb_kvs_open_default(fhandle, &db, &kvs_config);
// The document can either be created or re-used.
status = fdb_doc_create(&doc, (void *)"key", 4, NULL, 0, (void*)"value", 6);
fdb_doc_set_seqnum(doc, seqnum);
status = fdb_set(db, doc); // The document can be retrieved by sequence number 42
fdb_doc_free(doc);
...
Documents can be stored in compressed form when they are written to disk. ForestDB provides a compression option in fdb_config, which compresses and decompresses the body of documents using the Snappy library. Snappy does not provide optimal compression, but it is very fast. You can enable the option as follows:
fdb_file_handle *fhandle;
fdb_status status;
fdb_config config;
config = fdb_get_default_config();
config.compress_document_body = true;
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
...
ForestDB provides two kinds of retrieval methods: retrieval by key and retrieval by sequence number. If you want to get a document using its key, create a document with empty meta and body fields, and call fdb_get:
fdb_kvs_handle *kvhandle;
fdb_doc *doc;
fdb_status status;
...
status = fdb_doc_create(&doc, "foo", 3, NULL, 0, NULL, 0);
status = fdb_get(kvhandle, doc);
...
status = fdb_doc_free(doc);
...
If the target document is found, then the meta and body fields of doc will be filled with those of the corresponding document.
A document can also be retrieved by its sequence number, which is a value assigned to it whenever it is created or updated, taken from a monotonically increasing sequence counter maintained by each KV store. It is a 64-bit integer defined as the fdb_seqnum_t type, and is available as the seqnum field of an fdb_doc after it has been saved by fdb_set.
To get a document by its sequence number, create an empty document, assign a sequence number to it, and call fdb_get_byseq:
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_doc *doc1, *doc2;
fdb_status status;
fdb_seqnum_t seqnum;
...
status = fdb_doc_create(&doc1, "foo", 3, "00", 2, "bar", 3);
status = fdb_set(kvhandle, doc1);
seqnum = doc1->seqnum;
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
status = fdb_doc_free(doc1);
...
status = fdb_doc_create(&doc2, NULL, 0, NULL, 0, NULL, 0);
doc2->seqnum = seqnum;
status = fdb_get_byseq(kvhandle, doc2);
...
status = fdb_doc_free(doc2);
...
To read a document's metadata only, without its body, ForestDB provides two operations: fdb_get_metaonly and fdb_get_metaonly_byseq. Their usage is the same as that of fdb_get and fdb_get_byseq:
fdb_kvs_handle *kvhandle;
fdb_doc *doc;
fdb_status status;
...
status = fdb_doc_create(&doc, "foo", 3, NULL, 0, NULL, 0);
status = fdb_get_metaonly(kvhandle, doc);
...
fdb_kvs_handle *kvhandle;
fdb_doc *doc;
fdb_status status;
fdb_seqnum_t seqnum;
...
status = fdb_doc_create(&doc, NULL, 0, NULL, 0, NULL, 0);
doc->seqnum = seqnum;
status = fdb_get_metaonly_byseq(kvhandle, doc);
...
In the case of fdb_get_metaonly_byseq, both the key and metadata are returned by the call. After either call returns, the length of the body is available in the bodylen field, even though the body field is NULL. Afterwards, the document's body can be read very efficiently by calling fdb_get_byoffset, as described in the next section and sketched below.
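For example, a sketch combining the two calls: read the metadata by key first, then fetch the body using the disk offset filled in by the metadata-only call:
fdb_kvs_handle *kvhandle;
fdb_doc *doc;
fdb_status status;
...
status = fdb_doc_create(&doc, "foo", 3, NULL, 0, NULL, 0);
status = fdb_get_metaonly(kvhandle, doc);  // fills meta, bodylen, and offset; body stays NULL
...
status = fdb_get_byoffset(kvhandle, doc);  // reads the document body using doc->offset
...
status = fdb_doc_free(doc);
...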
Documents can also be retrieved by their disk offset, which is literally the byte offset from the start of the ForestDB file to the position where the document is stored. This is very efficient because it bypasses the tree entirely, requiring only a simple read operation.
Each document contains an offset field (a 64-bit unsigned integer) which is set by the fdb_set, fdb_get, fdb_get_byseq, fdb_get_metaonly, and fdb_get_metaonly_byseq functions. To get a document using its disk offset, create an empty document, assign a disk offset to it, and call fdb_get_byoffset:
#include <stdint.h>
...
fdb_kvs_handle *kvhandle;
fdb_doc *doc1, *doc2;
fdb_status status;
uint64_t offset;
...
status = fdb_doc_create(&doc1, "foo", 3, "00", 2, "bar", 3);
status = fdb_set(kvhandle, doc1);
offset = doc1->offset;
...
status = fdb_doc_create(&doc2, NULL, 0, NULL, 0, NULL, 0);
doc2->offset = offset;
status = fdb_get_byoffset(kvhandle, doc2);
...
Deleting a document is similar to updating it. The following example shows how to delete a document:
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_doc *doc;
fdb_status status;
...
status = fdb_doc_create(&doc, "foo", 3, NULL, 0, NULL, 0);
status = fdb_del(kvhandle, doc);
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
status = fdb_doc_free(doc);
...
For some use cases it might be necessary to retain deleted document tombstones for a certain duration. This can be achieved by setting the purging_interval in fdb_config. Note that since the default value of purging_interval is 0, deleted documents are dropped immediately by ForestDB. If a non-zero value in seconds is specified for this parameter, documents deleted by fdb_del are not physically removed right away. ForestDB lazily purges those documents during the next compaction after the period configured in fdb_config has elapsed. Although the deleted documents cannot be retrieved by fdb_get or fdb_get_byseq, you can still read their metadata by calling fdb_get_metaonly or fdb_get_metaonly_byseq until they are purged from the file after purging_interval seconds.
You can change the purging interval defined in fdb_config as follows:
fdb_file_handle *fhandle;
fdb_status status;
fdb_config config;
config = fdb_get_default_config();
config.purging_interval = 3600;
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
...
The unit of the purging interval is in seconds. If set to zero (which is the default), deleted documents are immediately purged.
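A sketch of reading a tombstone's metadata while it is retained (assuming the file above was opened with the non-zero purging_interval, and assuming the deleted flag on fdb_doc marks the tombstone):
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_doc *doc;
fdb_status status;
...
status = fdb_doc_create(&doc, "foo", 3, NULL, 0, NULL, 0);
status = fdb_del(kvhandle, doc);
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
status = fdb_doc_free(doc);
...
status = fdb_doc_create(&doc, "foo", 3, NULL, 0, NULL, 0);
status = fdb_get_metaonly(kvhandle, doc);  // still returns the metadata; doc->deleted indicates the tombstone
status = fdb_doc_free(doc);
...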
Document updates in ForestDB are first added to the write-ahead log (WAL), and later reflected into the HB+trie. WAL entries are automatically flushed when fdb_commit is called, or when the total number of documents in the WAL exceeds a configured threshold in fdb_config. The default value is 4,096 documents, and you can change the threshold as follows:
fdb_file_handle *fhandle;
fdb_status status;
fdb_config config;
config = fdb_get_default_config();
config.wal_threshold = 65536;
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
...
If you want to manually flush WAL entries before the threshold is reached, call fdb_commit with the FDB_COMMIT_MANUAL_WAL_FLUSH option, as follows:
fdb_file_handle *fhandle;
fdb_status status;
...
status = fdb_commit(fhandle, FDB_COMMIT_MANUAL_WAL_FLUSH);
...
ForestDB provides simplified key-value operations for those who do not want to use document-related features such as sequence numbers and metadata. There are three functions for key-value operations: fdb_set_kv, fdb_get_kv, and fdb_del_kv.
To insert a key-value pair, call fdb_set_kv:
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_status status;
...
status = fdb_set_kv(kvhandle, "foo", 3, "bar", 3);
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
...
By calling fdb_get_kv, you can get the value corresponding to a key:
fdb_kvs_handle *kvhandle;
fdb_status status;
void *value;
size_t value_len;
...
status = fdb_get_kv(kvhandle, "foo", 3, &value, &value_len);
...
free(value);
You should free the value returned from the fdb_get_kv API by calling free.
Key-value pairs can be deleted using the fdb_del_kv API:
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_status status;
...
status = fdb_del_kv(kvhandle, "foo", 3);
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
...
Since these key-value pairs are internally converted to and stored as documents, the simplified functions are compatible with the original APIs such as fdb_set and fdb_get. The following example gets the body of a document inserted using fdb_set, by calling fdb_get_kv:
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_doc *doc;
fdb_status status;
void *value;
size_t value_len;
...
status = fdb_doc_create(&doc, "foo", 3, "00", 2, "bar", 3);
status = fdb_set(kvhandle, doc);
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
...
status = fdb_get_kv(kvhandle, "foo", 3, &value, &value_len);
...
An iterator traverses a KV store instance in order by key or sequence number. It can also traverse a smaller range of keys or sequence numbers. An iterator operates on a snapshot of the KV store, so any mutations made after the iterator is created have no effect on the documents returned by the iterator.
fdb_iterator_init and fdb_iterator_sequence_init create an iterator over keys and sequence numbers, respectively:
fdb_kvs_handle *kvhandle;
fdb_iterator *itr_key, *itr_seq;
fdb_status status;
...
status = fdb_iterator_init(kvhandle, &itr_key,
NULL, 0, NULL, 0,
FDB_ITR_NONE);
assert(status == FDB_RESULT_SUCCESS);
...
status = fdb_iterator_sequence_init(kvhandle, &itr_seq,
0, 0, FDB_ITR_NONE);
assert(status == FDB_RESULT_SUCCESS);
...
You can assign a specific range to the iterator by passing the min and max values of the range. The following example creates an iterator traversing from key aaa to key zzz (inclusive):
...
status = fdb_iterator_init(kvhandle, &itr_key,
"aaa", 3, "zzz", 3,
FDB_ITR_NONE);
...
If you pass NULL as the min key, the iterator starts at the smallest key in the given KV store. If the max key is NULL, the iterator terminates after the largest key in the KV store. If the min key does not exist in the KV store, the iterator begins from the smallest key that is greater than the min key. If the max key does not exist in the KV store, the iterator ends at the largest key that is smaller than the max key.
The last parameter to fdb_iterator_init is a set of option flags. The available options are to return only document metadata, not bodies (FDB_ITR_METAONLY), to omit deleted documents from the iteration (FDB_ITR_NO_DELETES), and to exclude the min (FDB_ITR_SKIP_MIN_KEY) and max (FDB_ITR_SKIP_MAX_KEY) keys from the iteration range. Multiple options can be combined, as sketched below.
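For example, a sketch that skips deleted documents and excludes the max key from the range (the option flags form a bitmask and can be OR'ed together):
...
status = fdb_iterator_init(kvhandle, &itr_key,
                           "aaa", 3, "zzz", 3,
                           FDB_ITR_NO_DELETES | FDB_ITR_SKIP_MAX_KEY);
...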
fdb_iterator_sequence_init follows the same semantics. The example below shows how to create an iterator for the range of sequence numbers from 10 to 20:
...
status = fdb_iterator_sequence_init(kvhandle, &itr_seq,
10, 20, FDB_ITR_NONE);
...
Once an iterator is created, you can get documents from it by calling fdb_iterator_get and fdb_iterator_next repeatedly, regardless of the type of the iterator. fdb_iterator_next returns FDB_RESULT_SUCCESS if the next document for the iterator exists. The doc parameter of the fdb_iterator_get API will point to an fdb_doc filled in with the contents of the document currently pointed to by the iterator. (As usual, this document structure should be freed when you're done using it.) fdb_iterator_next returns FDB_RESULT_ITERATOR_FAIL when it reaches the end of the range. When finished, you should close the iterator by calling fdb_iterator_close.
fdb_kvs_handle *kvhandle;
fdb_doc *doc;
fdb_iterator *itr_key;
fdb_status status;
...
status = fdb_iterator_init(kvhandle, &itr_key,
NULL, 0, NULL, 0,
FDB_ITR_NONE);
do {
status = fdb_iterator_get(itr_key, &doc);
if (status != FDB_RESULT_SUCCESS) {
break;
}
...
fdb_doc_free(doc);
} while(fdb_iterator_next(itr_key) != FDB_RESULT_ITERATOR_FAIL);
status = fdb_iterator_close(itr_key);
...
fdb_kvs_handle *kvhandle;
fdb_doc *doc;
fdb_iterator *itr_seq;
fdb_status status;
...
status = fdb_iterator_sequence_init(kvhandle, &itr_seq,
0, 0, FDB_ITR_NONE);
do {
status = fdb_iterator_get(itr_seq, &doc);
if (status != FDB_RESULT_SUCCESS) {
break;
}
...
fdb_doc_free(doc);
} while(fdb_iterator_next(itr_seq) != FDB_RESULT_ITERATOR_FAIL);
status = fdb_iterator_close(itr_seq);
...
If the client knows the maximum lengths of the key, metadata, and value in the iteration range, it can pre-allocate an fdb_doc instance with those maximum lengths and pass it to the fdb_iterator_get API, so that the memory allocation overhead of each iteration can be avoided.
fdb_iterator *itr_key;
fdb_status status;
fdb_doc *doc;
fdb_doc_create(&doc, NULL, 0, NULL, 0, NULL, 0);
doc->key = malloc(MAX_KEY_LENGTH);
doc->meta = malloc(MAX_META_LENGTH);
doc->body = malloc(MAX_VALUE_LENGTH);
...
do {
status = fdb_iterator_get(itr_key, &doc);
if (status != FDB_RESULT_SUCCESS) {
break;
}
...
} while(fdb_iterator_next(itr_key) != FDB_RESULT_ITERATOR_FAIL);
fdb_doc_free(doc);
status = fdb_iterator_close(itr_key);
...
If you want to move an iterator to get a document for a specific key, call fdb_iterator_seek:
fdb_iterator *itr_key;
fdb_status status;
fdb_doc *doc;
...
status = fdb_iterator_seek(itr_key, "car", 3, FDB_ITR_SEEK_HIGHER);
if (status == FDB_RESULT_SUCCESS) {
status = fdb_iterator_get(itr_key, &doc);
...
}
...
Afterwards, the next call to fdb_iterator_get will return the document at the position you sought to. If the given seek key does not exist, fdb_iterator_seek positions the iterator at the smallest key greater than the seek key if FDB_ITR_SEEK_HIGHER is passed as the seek option, or at the largest key smaller than the seek key if FDB_ITR_SEEK_LOWER is passed.
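For example, a sketch of seeking with FDB_ITR_SEEK_LOWER (doc and itr_key are as in the example above):
...
status = fdb_iterator_seek(itr_key, "car", 3, FDB_ITR_SEEK_LOWER);
if (status == FDB_RESULT_SUCCESS) {
    // Positioned at "car" if it exists, otherwise at the largest key smaller than "car".
    status = fdb_iterator_get(itr_key, &doc);
    ...
    fdb_doc_free(doc);
}
...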
In addition, the fdb_iterator_seek_to_min and fdb_iterator_seek_to_max APIs can be used to move the iterator to the smallest and largest key in the iteration range, respectively:
status = fdb_iterator_seek_to_min(itr_key);
if (status == FDB_RESULT_SUCCESS) {
status = fdb_iterator_get(itr_key, &doc);
...
fdb_doc_free(doc);
}
status = fdb_iterator_seek_to_max(itr_key);
if (status == FDB_RESULT_SUCCESS) {
status = fdb_iterator_get(itr_key, &doc);
...
fdb_doc_free(doc);
}
If you want to move an iterator backward, call fdb_iterator_prev:
status = fdb_iterator_prev(itr_key);
if (status == FDB_RESULT_SUCCESS) {
status = fdb_iterator_get(itr_key, &doc);
...
fdb_doc_free(doc);
}
In short, an iterator can move or seek in either the forward or backward direction.
You can get some information about a ForestDB file using fdb_get_file_info:
fdb_file_handle *fhandle;
fdb_file_info info;
fdb_status status;
...
status = fdb_get_file_info(fhandle, &info);
The return type is an fdb_file_info struct that contains the following information:
- filename: The ForestDB file name.
- new_filename: The name of the new file, if the current file is being compacted.
- doc_count: The approximate total number of documents stored in the file.
- space_used: The approximate space occupied by live data in the file.
- file_size: The total size of the file.
A similar API, fdb_get_kvs_info, exists for KV store instances. The example below shows how to get information about a KV store:
fdb_kvs_handle *kvhandle;
fdb_kvs_info kvs_info;
fdb_status status;
...
status = fdb_get_kvs_info(kvhandle, &kvs_info);
The attributes of the return type fdb_kvs_info are as follows:
- name: The name of the KV store.
- last_seqnum: The last sequence number of the KV store.
Note that there is a separate API that returns the last sequence number only:
fdb_kvs_handle *kvhandle;
fdb_seqnum_t seqnum;
fdb_status status;
...
status = fdb_get_kvs_seqnum(kvhandle, &seqnum);
Another API, fdb_get_kvs_ops_info, returns operational statistics for a KV store instance. The example below shows how to get these statistics:
fdb_kvs_handle *kvhandle;
fdb_kvs_ops_info kvs_ops_info;
fdb_status status;
...
status = fdb_get_kvs_ops_info(kvhandle, &kvs_ops_info);
The attributes of the return type fdb_kvs_ops_info are the following KV store statistics:
- num_sets: The total number of fdb_set operations.
- num_dels: The total number of fdb_del operations.
- num_commits: The total number of fdb_commit operations.
- num_compacts: The total number of fdb_compact operations.
- num_gets: The total number of fdb_get, fdb_get_metaonly, fdb_get_byseq, and fdb_get_byoffset operations.
- num_iterator_gets: The total number of fdb_iterator_get and fdb_iterator_get_metaonly operations.
- num_iterator_moves: The total number of fdb_iterator_seek, fdb_iterator_next, and fdb_iterator_prev operations.
An fdb_get_buffer_cache_used() API exists to return the amount of buffer cache space actively used by all ForestDB instances:
uint64_t cache_used;
fdb_status status;
...
status = fdb_get_buffer_cache_used(&cache_used);
An API fdb_estimate_space_used provides the amount of overall disk space used by active blocks (index and data) in a given ForestDB file:
size_t space_used;
fdb_file_handle *fhandle;
...
space_used = fdb_estimate_space_used(fhandle);
Another API fdb_estimate_space_used_from returns the overall disk space used by all snapshots starting from a given snapshot marker in a ForestDB file:
size_t space_used;
fdb_file_handle *fhandle;
fdb_status status;
fdb_config config;
fdb_snapshot_info_t *markers;
uint64_t num_markers;
fdb_snapshot_marker_t snap_marker;
...
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
// Multiple commits are performed below.
...
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
...
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
...
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
...
status = fdb_get_all_snap_markers(fhandle, &markers, &num_markers);
snap_marker = markers[num_markers - 2].marker; // The second oldest snapshot marker.
status = fdb_free_snap_markers(markers, num_markers);
// Get the amount of disk space used by all snapshots from the second oldest snapshot marker.
space_used = fdb_estimate_space_used_from(fhandle, snap_marker);
...
An API fdb_get_file_version provides the version of a given ForestDB file:
const char *file_version;
fdb_file_handle *fhandle;
...
file_version = fdb_get_file_version(fhandle);
An API fdb_get_lib_version returns the version of the current ForestDB library, based on git-describe output:
const char *lib_version;
...
lib_version = fdb_get_lib_version();
ForestDB can create a snapshot of a specific KV store instance, using a sequence number that corresponds to one of the commits that have been persisted in the file. A snapshot is a read-only immutable KV store instance: subsequent changes to the KV store will have no effect on the snapshot.
The following example shows how to create a snapshot:
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle, *snapshot;
fdb_status status;
fdb_seqnum_t seqnum;
...
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
status = fdb_get_kvs_seqnum(kvhandle, &seqnum);
status = fdb_snapshot_open(kvhandle, &snapshot, seqnum);
...
status = fdb_kvs_close(snapshot);
To create an in-memory snapshot of a given KV store, FDB_SNAPSHOT_INMEM should be passed to the fdb_snapshot_open API as the sequence number. An in-memory snapshot is a non-durable, consistent view of the KV store instance that carries the latest version of all keys and values at the point of the snapshot; it can even be taken over uncommitted transaction data. As a snapshot is treated as a read-only KV store handle, it must be closed with fdb_kvs_close when it is no longer needed.
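A minimal sketch of opening and closing an in-memory snapshot:
fdb_kvs_handle *kvhandle, *snapshot;
fdb_status status;
...
// FDB_SNAPSHOT_INMEM snapshots the current (possibly uncommitted) state of the KV store.
status = fdb_snapshot_open(kvhandle, &snapshot, FDB_SNAPSHOT_INMEM);
...
status = fdb_kvs_close(snapshot);
...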
When the client opens a ForestDB file, it may need to know the list of all the snapshots that can be instantiated from the file. For this, the client can use the fdb_get_all_snap_markers API as follows:
fdb_file_handle *fhandle;
fdb_kvs_handle *kv1, *kv2, *snapshot;
fdb_status status;
fdb_kvs_config kvs_config;
fdb_snapshot_info_t *markers;
uint64_t num_markers;
fdb_seqnum_t seqnum = 0;
bool found_snapshot_marker = false;
...
status = fdb_kvs_open(fhandle, &kv1, "kv1", &kvs_config);
status = fdb_kvs_open(fhandle, &kv2, "kv2", &kvs_config);
...
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
status = fdb_get_all_snap_markers(fhandle, &markers, &num_markers);
for (int i = 0; i < num_markers; ++i) {
for (int j = 0; j < markers[i].num_kvs_markers; ++j) {
if (!strcmp(markers[i].kvs_markers[j].kv_store_name, "kv1")) {
seqnum = markers[i].kvs_markers[j].seqnum;
found_snapshot_marker = true;
break;
}
}
if (found_snapshot_marker) {
break;
}
}
status = fdb_free_snap_markers(markers, num_markers);
if (found_snapshot_marker) {
status = fdb_snapshot_open(kv1, &snapshot, seqnum);
...
status = fdb_kvs_close(snapshot);
}
...
As seen above, the snapshot of a given KV store "kv1" can be created using one of its snapshot markers that still exist in the file.
ForestDB supports transactional features, which make it possible to atomically update multiple documents. You can begin a transaction by calling fdb_begin_transaction, and commit dirty updates by calling fdb_end_transaction. Atomic updates across multiple KV stores in a ForestDB file are possible. Here is a simple example that atomically updates doc1 and doc2 in KV stores kv1 and kv2, respectively:
fdb_file_handle *fhandle;
fdb_kvs_handle *kv1, *kv2;
fdb_doc *doc1, *doc2;
fdb_status status;
...
status = fdb_begin_transaction(fhandle, FDB_ISOLATION_READ_COMMITTED);
status = fdb_set(kv1, doc1);
status = fdb_set(kv2, doc2);
status = fdb_end_transaction(fhandle, FDB_COMMIT_NORMAL);
...
There are two isolation levels supported by the current version of ForestDB:
- FDB_ISOLATION_READ_COMMITTED: Read committed isolation level. Dirty updates (i.e., uncommitted updates) by other transactions are not visible, although updates committed by other transactions will be visible.
- FDB_ISOLATION_READ_UNCOMMITTED: Read uncommitted isolation level. Dirty updates by other transactions are visible.
If you want to revert all dirty updates after fdb_begin_transaction, invoke fdb_abort_transaction instead of fdb_end_transaction:
fdb_file_handle *fhandle;
fdb_kvs_handle *kv1, *kv2;
fdb_doc *doc1, *doc2;
fdb_status status;
...
status = fdb_begin_transaction(fhandle, FDB_ISOLATION_READ_COMMITTED);
status = fdb_set(kv1, doc1);
status = fdb_set(kv2, doc2);
status = fdb_abort_transaction(fhandle);
...
Since ForestDB is based on an MVCC model, concurrent read and update operations do not block each other. All ForestDB operations are thread-safe, but you should open a separate handle on a ForestDB file for each thread. Neither the fdb_file_handle nor the fdb_kvs_handle structure is itself thread-safe, and ForestDB does not guarantee correctness if multiple threads share the same file or KV store handle. The following example shows how to concurrently access a ForestDB file:
- Thread 1
fdb_file_handle *fhandle_thread1;
fdb_kvs_handle *kvhandle_thread1;
fdb_status status;
fdb_config config;
fdb_kvs_config kvs_config;
...
status = fdb_open(&fhandle_thread1, "/tmp/db_filename", &config);
status = fdb_kvs_open_default(fhandle_thread1, &kvhandle_thread1, &kvs_config);
...
status = fdb_kvs_close(kvhandle_thread1);
status = fdb_close(fhandle_thread1);
- Thread 2
fdb_file_handle *fhandle_thread2;
fdb_kvs_handle *kvhandle_thread2;
fdb_status status;
fdb_config config;
fdb_kvs_config kvs_config;
...
status = fdb_open(&fhandle_thread2, "/tmp/db_filename", &config);
status = fdb_kvs_open_default(fhandle_thread2, &kvhandle_thread2, &kvs_config);
...
status = fdb_kvs_close(kvhandle_thread2);
status = fdb_close(fhandle_thread2);
The threads access the same ForestDB file /tmp/db_filename using their own file handles (fhandle_thread1 and fhandle_thread2), and they do not need to grab a lock on the ForestDB file when they invoke ForestDB operations.
Note: Multiple handles on the same file share the same underlying file caches, so the memory overhead per separate handle is low.
As ForestDB uses append-only disk logging, the overall space occupied by a ForestDB file grows with more mutations. This fragmented file should be compacted to reclaim the space used by stale data. You can manually compact the file by calling fdb_compact:
fdb_file_handle *fhandle;
fdb_status status;
fdb_config config;
...
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
...
status = fdb_compact(fhandle, "/tmp/db_filename2");
...
After compaction, the previous file /tmp/db_filename is removed and the new file /tmp/db_filename2 is created. All file handles referring to the old file are automatically switched to point to the new file, so it is not necessary to close the old file and open the new one.
Note that you do not need to manually compact the file in auto-compaction mode, but manual compaction is still possible by calling fdb_compact with a NULL filename as follows:
fdb_file_handle *fhandle;
fdb_status status;
fdb_config config;
...
config = fdb_get_default_config();
config.compaction_mode = FDB_COMPACTION_AUTO;
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
...
status = fdb_compact(fhandle, NULL);
...
If the client wants to retain the stale data up to a given snapshot marker that corresponds to one of the commits in a ForestDB file, the fdb_compact_upto API can be used as shown in the following example:
fdb_file_handle *fhandle;
fdb_status status;
fdb_config config;
fdb_snapshot_info_t *markers;
uint64_t num_markers;
fdb_snapshot_marker_t snap_marker;
...
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
// Multiple commits are performed below.
...
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
...
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
...
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
...
status = fdb_get_all_snap_markers(fhandle, &markers, &num_markers);
snap_marker = markers[num_markers - 2].marker; // The second oldest snapshot marker.
status = fdb_free_snap_markers(markers, num_markers);
// Retain the stale data up to the second oldest snapshot marker.
status = fdb_compact_upto(fhandle, "/tmp/db_filename2", snap_marker);
...
A ForestDB file created under auto-compaction mode cannot be opened under manual-compaction mode, and vice versa. To open a ForestDB file under a different compaction mode, you first have to switch its compaction mode using the fdb_switch_compaction_mode API:
fdb_file_handle *fhandle;
fdb_status status;
fdb_config config;
...
config = fdb_get_default_config();
config.compaction_mode = FDB_COMPACTION_AUTO;
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
...
status = fdb_switch_compaction_mode(fhandle, FDB_COMPACTION_MANUAL, 0);
status = fdb_close(fhandle);
config.compaction_mode = FDB_COMPACTION_MANUAL;
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
...
fdb_switch_compaction_mode can also be used to change the compaction threshold of a ForestDB file whose compaction mode is currently auto-compaction. The following example changes the compaction threshold of the file from 30% to 50%:
fdb_file_handle *fhandle;
fdb_status status;
fdb_config config;
...
config = fdb_get_default_config();
config.compaction_mode = FDB_COMPACTION_AUTO;
config.compaction_threshold = 30;
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
...
status = fdb_switch_compaction_mode(fhandle, FDB_COMPACTION_AUTO, 50);
...
The compaction threshold is calculated as stale_data_size / total_file_size. For example, when the compaction threshold and live data size are 30% and 70 MB, respectively, compaction is triggered when the file size gets larger than 100 MB, since the stale data size (i.e., 100 MB - 70 MB = 30 MB) then exceeds 30% of the file size.
Note that compaction will not be executed if you set the compaction threshold to zero or 100%.
An ongoing manual or daemon compaction task can be cancelled by calling the fdb_cancel_compaction API:
fdb_file_handle *fhandle;
fdb_status status;
...
status = fdb_cancel_compaction(fhandle);
...
If a compaction task is not currently running, the request will simply be ignored.
Currently available only on Btrfs (B-tree File System) on Linux, fdb_compact_with_cow and fdb_compact_upto_with_cow can be used instead of fdb_compact and fdb_compact_upto, respectively, to instruct the file system to share valid document blocks between the old file and the new file. This avoids extra I/O, reduces write amplification, and results in much shorter compaction times. Currently this API works only on Linux Btrfs with copy-file-range support, which allows physical pages to be shared across files through the copy-on-write (CoW) nature of Btrfs, and it performs well only in offline manual compaction mode (no concurrent updates to the file being compacted). An example of its usage is as follows:
fdb_file_handle *fhandle;
fdb_status status;
fdb_config config;
...
config = fdb_get_default_config();
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
...
status = fdb_compact_with_cow(fhandle, "/tmp/db_new_filename");
if (status == FDB_RESULT_COMPACTION_FAIL) { // support not available on underlying platform
status = fdb_compact(fhandle, "/tmp/db_new_filename"); // fallback to old-style compaction
}
...
Similarly, the fdb_compact_upto_with_cow API is semantically identical to fdb_compact_upto.
Full compaction causes significant I/O overhead because all the active index and data blocks must be read from the old file and copied into the new file. To reduce this compaction I/O overhead, ForestDB can reuse stale blocks to write incoming mutations, instead of appending them to the end of the file. Consequently, the file size does not grow as rapidly and full compaction does not need to be scheduled as often.
There are two config parameters, block_reusing_threshold and num_keeping_headers, which control the behavior of reusing stale blocks. block_reusing_threshold represents the stale block reusing threshold as a percentage (%), estimated as '(stale data size) / (total file size)'. When the stale data size grows beyond this threshold, stale block reusing is triggered so that stale blocks are reused for further block allocations. Block reusing is disabled if this threshold is set to 0 or 100. num_keeping_headers represents the number of the most recent commit headers whose stale blocks should be kept for snapshot readers. In other words, stale blocks that belong to those last commits are not reused to write incoming mutations.
The fdb_set_block_reusing_params API allows an application to change the above config parameters dynamically at runtime. Based on our performance evaluations, we currently recommend setting block_reusing_threshold in the 60-70% range and num_keeping_headers in the 5-15 range.
fdb_file_handle *fhandle;
fdb_status status;
fdb_config config;
...
config = fdb_get_default_config();
config.block_reusing_threshold = 65;
config.num_keeping_headers = 5;
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
...
status = fdb_set_block_reusing_params(fhandle, 70 /* block_reusing_threshold */, 10 /* num_keeping_header */);
...
Note that the full compaction can be triggered even if the block reusing is enabled. As stale block reusing incurs internal fragmentation over time, the full compaction still needs to be scheduled periodically, but less frequently with stale block reusing.
You can roll back a KV store instance to a specific point, represented either by a sequence number that corresponds to a commit or by a snapshot marker (an opaque value returned by fdb_get_all_snap_markers()). These are done by the fdb_rollback and fdb_rollback_all calls, respectively. The former reverts a specific KV store instance while the latter reverts all KV store instances to a past commit. Their usage is as follows:
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_status status;
fdb_seqnum_t seqnum;
...
status = fdb_commit(fhandle, FDB_COMMIT_NORMAL);
status = fdb_get_kvs_seqnum(kvhandle, &seqnum);
...
status = fdb_rollback(&kvhandle, seqnum);
...
// If rolling back all KV stores to a past point in time snapshot.
fdb_snapshot_info_t *markers;
uint64_t num_markers;
int upto_point;
status = fdb_get_all_snap_markers(fhandle, &markers, &num_markers);
// Examine which snapshot marker is of relevance and upto which needs to be retained
upto_point = 3; // Roll back to this snapshot marker; commits made after it are discarded.
status = fdb_rollback_all(fhandle, markers[upto_point].marker);
// Note that the snapshot markers returned by fdb_get_all_snap_markers() after the rollback are not the same as those from the original file.
Note that all other mutations are blocked while the rollback operation is being performed.
ForestDB supports pluggable custom comparison functions. A custom comparison function is given pointers to two keys, say key1 and key2, and must return zero if the keys are equal, a negative value if key1 < key2, or a positive value if key1 > key2.
The following code shows an example that uses a custom comparison function for floating-point numbers represented as the double primitive type:
- Custom comparison function for the double type:
int _cmp_double(void *key1_ptr, size_t keylen1,
void *key2_ptr, size_t keylen2)
{
double key1, key2;
key1 = *(double *)key1_ptr;
key2 = *(double *)key2_ptr;
if (key1 < key2) return -1;
else if (key1 > key2) return 1;
else return 0;
}
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_doc *doc;
fdb_status status;
fdb_config config;
fdb_kvs_config kvs_config;
double key;
void *value;
size_t value_len;
...
config = fdb_get_default_config();
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
kvs_config = fdb_get_default_kvs_config();
kvs_config.custom_cmp = _cmp_double;
status = fdb_kvs_open_default(fhandle, &kvhandle, &kvs_config);
...
status = fdb_doc_create(&doc,
(void *)&key, sizeof(double),
NULL, 0, value, value_len);
status = fdb_set(kvhandle, doc);
status = fdb_doc_free(doc);
...
Note: This example is not endian-safe, so a ForestDB file created on a little-endian CPU would not be readable on a big-endian CPU! Any real usage of custom key types should be careful to represent keys with a consistent byte ordering, as sketched below.
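One way to do this (a sketch, not part of the ForestDB API; the helper names are hypothetical) is to store keys in a fixed byte order and have the comparison function decode them back before comparing numerically:
#include <string.h>
#include <stdint.h>
// Hypothetical helper: encode a double into 8 big-endian IEEE-754 bytes before using it as a key.
static void encode_double_be(double v, uint8_t out[8]) {
    uint64_t bits;
    memcpy(&bits, &v, sizeof(bits));
    for (int i = 0; i < 8; ++i) {
        out[i] = (uint8_t)(bits >> (56 - 8 * i));
    }
}
// Hypothetical helper: decode 8 big-endian bytes back into a double.
static double decode_double_be(const uint8_t *in) {
    uint64_t bits = 0;
    double v;
    for (int i = 0; i < 8; ++i) {
        bits = (bits << 8) | in[i];
    }
    memcpy(&v, &bits, sizeof(v));
    return v;
}
// Endian-safe comparison: decode both keys, then compare numerically.
int _cmp_double_portable(void *key1_ptr, size_t keylen1,
                         void *key2_ptr, size_t keylen2)
{
    double key1 = decode_double_be((const uint8_t *)key1_ptr);
    double key2 = decode_double_be((const uint8_t *)key2_ptr);
    if (key1 < key2) return -1;
    else if (key1 > key2) return 1;
    else return 0;
}
Keys would then be built with encode_double_be before fdb_doc_create, and _cmp_double_portable would be assigned to kvs_config.custom_cmp in place of _cmp_double.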
The next example shows code that uses a custom comparison function for non-primitive keys:
- Custom comparison function that compares only the last four bytes of each key:
int _cmp_substring(void *key1, size_t keylen1,
void *key2, size_t keylen2)
{
if (keylen1 < 4 || keylen2 < 4) return 0;
return memcmp((char*)key1 + keylen1 - 4,
(char*)key2 + keylen2 - 4, 4);
}
fdb_file_handle *fhandle;
fdb_kvs_handle *kvhandle;
fdb_status status;
fdb_config config;
fdb_kvs_config kvs_config;
...
config = fdb_get_default_config();
status = fdb_open(&fhandle, "/tmp/db_filename", &config);
kvs_config = fdb_get_default_kvs_config();
kvs_config.custom_cmp = _cmp_substring;
status = fdb_kvs_open_default(fhandle, &kvhandle, &kvs_config);
...
If you open a ForestDB file that contains at least one KV store based on a custom key order, all custom comparison functions should be passed with the corresponding KV store names through the fdb_open_custom_cmp API. Suppose that there are four KV stores, default, KV1, KV2, and KV3, where default and KV3 use the custom comparison functions _cmp_double and _cmp_substring, respectively. The following example shows how to pass the custom comparison functions when the file is opened:
fdb_file_handle *fhandle;
fdb_kvs_handle *default_kvs, *kv1, *kv2, *kv3;
fdb_status status;
fdb_config config;
fdb_kvs_config kvs_config;
char *kvs_names[] = {NULL, (char*)"KV3"};
fdb_custom_cmp_variable functions[] = {_cmp_double,
_cmp_substring};
...
config = fdb_get_default_config();
status = fdb_open_custom_cmp(&fhandle, "/tmp/db_filename", &config,
2, kvs_names, functions);
kvs_config = fdb_get_default_kvs_config();
status = fdb_kvs_open_default(fhandle, &default_kvs, &kvs_config);
status = fdb_kvs_open(fhandle, &kv1, "KV1", &kvs_config);
status = fdb_kvs_open(fhandle, &kv2, "KV2", &kvs_config);
status = fdb_kvs_open(fhandle, &kv3, "KV3", &kvs_config);
...
Note that KV stores based on the default (i.e., lexicographical) key order do not need to be included in the kvs_names and functions parameters. Once the ForestDB file is opened, the passed custom comparison functions are automatically assigned to the corresponding KV store handles, so you do not need to assign the custom_cmp attribute in the fdb_kvs_config type.
ForestDB currently supports CommonCrypto (on OS X and iOS), OpenSSL, and LibTomCrypt as encryption options that can be configured at build time. The _ENCRYPTION CMake macro can optionally be passed to specify which crypto library is used to support database encryption; it can be set to commoncrypto on iOS and OS X, openssl for OpenSSL, or libtomcrypt for LibTomCrypt.
The following example shows building ForestDB with the CommonCrypto encryption library and using it when the file is opened:
$ cmake -D_ENCRYPTION=commoncrypto path_to_source_directory
fdb_file_handle *fhandle;
fdb_kvs_handle *default_kvs;
fdb_status status;
fdb_config fconfig = fdb_get_default_config();
fdb_kvs_config kvs_config = fdb_get_default_kvs_config();
// AES with 256-bit key
fconfig.encryption_key.algorithm = FDB_ENCRYPTION_AES256;
// Simple key generation as an example
memset(fconfig.encryption_key.bytes, 0x42, sizeof(fconfig.encryption_key.bytes));
// open db
fdb_open(&fhandle, "./dummy1", &fconfig);
fdb_kvs_open_default(fhandle, &default_kvs, &kvs_config);
...
// Change the encryption key
fdb_encryption_key new_key;
new_key.algorithm = FDB_ENCRYPTION_AES256;
memset(new_key.bytes, 0xBD, sizeof(new_key.bytes));
// Change the database file's encryption by compacting the database file with a new key
status = fdb_rekey(fhandle, new_key);
...
If you want to remove a ForestDB file, you can call the fdb_destroy API as follows:
fdb_status status;
fdb_config config;
...
config = fdb_get_default_config();
status = fdb_destroy("/tmp/db_filename", &config);
...
The above routine can be useful to clean up all ForestDB files, including the metadata files created under an auto-compaction setting. If the file being destroyed is currently being compacted by the auto-compaction daemon, fdb_destroy will return FDB_RESULT_IN_USE_BY_COMPACTOR, and the operation needs to be retried after the file compaction completes (see the sketch below). In the case of manual compaction, a best effort is made to wipe out all known files associated with the ForestDB path provided.
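A sketch of handling the in-use case by retrying (the retry loop and sleep interval are illustrative, not prescribed by the API):
#include <unistd.h>
fdb_status status;
fdb_config config;
...
config = fdb_get_default_config();
do {
    status = fdb_destroy("/tmp/db_filename", &config);
    if (status == FDB_RESULT_IN_USE_BY_COMPACTOR) {
        sleep(1); // wait for the compaction daemon to finish with this file, then retry
    }
} while (status == FDB_RESULT_IN_USE_BY_COMPACTOR);
...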
You can remove a specific KV store instance from a ForestDB file without affecting any other KV store instances in the same file. The example below removes the KV store named "KV1":
fdb_file_handle *fhandle;
fdb_status status;
...
status = fdb_kvs_remove(fhandle, "KV1");
...
All handles to the KV store should be closed before calling fdb_kvs_remove. Once you drop a KV store, all documents stored in it are permanently removed. Note that the DB file size will not shrink immediately, because the stale documents still reside in the file. If you want to physically eliminate the stale data, call fdb_compact after dropping the KV store instance, as sketched below.
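For example, a sketch that drops a KV store and then compacts the file to reclaim its space (assuming manual compaction mode; the new filename is illustrative):
fdb_file_handle *fhandle;
fdb_status status;
...
status = fdb_kvs_remove(fhandle, "KV1");
// Compaction copies only live data into the new file, physically removing the dropped KV store's documents.
status = fdb_compact(fhandle, "/tmp/db_filename2");
...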