History log of /5.5.2/forestdb/option/ (Results 1 - 25 of 54)
Revision  Date  Author  Comments
5827abea  23-Mar-2017  Jung-Sang Ahn <jungsang.ahn@gmail.com>

MB-23416: Handle memory allocation failure during compaction

* The sorting window and document write batch in the compaction logic are
big memory segments, so allocating them may fail in environments with
small RAM, such as mobile devices.

* We retry the allocation with a reduced size, and return failure after
a few more attempts.

Change-Id: I16d04c8ff6314a47dfbf8f71b27e2729c1d3d6b4
Reviewed-on: http://review.couchbase.org/75630
Reviewed-by: Sundararaman Sridharan <sundar@couchbase.com>
Tested-by: Jung-Sang Ahn <jungsang.ahn@gmail.com>
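
An illustrative sketch of the retry strategy described above; the function
name, sizes, and attempt count are hypothetical, not taken from the ForestDB
source:

    // Allocate a large compaction buffer, halving the request on failure
    // and giving up after a bounded number of attempts.
    #include <cstdlib>
    #include <cstddef>

    static void *alloc_with_retry(size_t requested, size_t min_size,
                                  int max_attempts, size_t *actual) {
        size_t sz = requested;
        for (int i = 0; i < max_attempts && sz >= min_size; ++i) {
            void *buf = malloc(sz);
            if (buf) {
                *actual = sz;   // caller shrinks its batch/window to 'actual'
                return buf;
            }
            sz /= 2;            // retry with a reduced size
        }
        *actual = 0;
        return nullptr;         // report the allocation failure to the caller
    }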

Revision tags: v1.2, v1.1, v1.0
bb71615f  21-Jan-2016  Chiyoung Seo <chiyoung.seo@gmail.com>

Support configurable daemon compaction interval per file at runtime

With this change, the client can change the daemon compaction interval
for a given file at runtime once it is opened.
Note that when a given file is opened for the first time, its daemon
compaction interval is set by the global config param.

Change-Id: I5e5c87eb1df014484e67d41bd05fb6796a814913
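
Illustrative usage only, assuming the runtime call added by this change is
along the lines of fdb_set_daemon_compaction_interval() and that the global
default comes from fdb_config.compactor_sleep_duration (names are an
assumption, not confirmed by this log):

    #include <libforestdb/forestdb.h>

    // The first open of a file picks up the global interval from fdb_config;
    // afterwards the interval can be changed for this file only (seconds).
    fdb_status set_per_file_interval(fdb_file_handle *fhandle) {
        return fdb_set_daemon_compaction_interval(fhandle, 30);
    }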

27376f3b  05-Jan-2016  Jung-Sang Ahn <jungsang.ahn@gmail.com>

Pre-reclaim stale blocks for next round block reuse

- Reclaiming stale blocks only when there are no more free blocks makes
the file size keep growing, as there is a time gap between the
exhaustion of free blocks and the reclamation of new ones.

- To address this issue, we pre-reclaim stale blocks for the next round
of block reuse.

Change-Id: I4b8092c81038a51ed786a90e97560e5d228705b2

f0b1bf77  07-Dec-2015  Jung-Sang Ahn <jungsang.ahn@gmail.com>

MB-16219 Support superblock and circular block reusing

- Super blocks point to the up-to-date DB header.

- When the configured conditions are satisfied, stale blocks are reused
in a circular manner. This does not increase the DB file size, which
largely reduces the overhead from compaction.

- The latest few old versions are preserved for future snapshot
creation.

Change-Id: I6eb9209ced436c0a84ebb0216344eb04f300fca9

90621440  21-Oct-2015  Sundar Sridharan <sundar.sridharan@gmail.com>

MB-16622: Backoff bgflushers during wal_flush

During a WAL flush, many simultaneous I/O operations are performed
that involve the buffer cache and the file. If the background daemon
flusher is running at the same time, there is heavy contention,
causing a 2X degradation in writer performance, especially when the
WAL is large (e.g., 40K).

Change-Id: I484782bb798cb558645bfa9a85771c07c3ca173a

4cdca91e  22-Sep-2015  Sundar Sridharan <sundar.sridharan@gmail.com>

MB-16263: Add background bcache flushing capability

New background flusher threads iterate over all files and flush any
immutable dirty blocks in a loop, sleeping when there are no dirty
immutable blocks. This keeps the I/O utilized and frees more buffer
cache blocks for front-end threads.

Change-Id: Ib55c8c6cea43caa7f5f2635d35826c7d5be7f022
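
A structural sketch of such a flusher loop; FileHandle, the helper
flush_immutable_dirty_blocks(), and the sleep interval are illustrative
stand-ins, not the actual ForestDB code:

    #include <atomic>
    #include <chrono>
    #include <thread>
    #include <vector>

    struct FileHandle { /* per-file buffer cache state (illustrative) */ };

    // Hypothetical helper: returns true if it flushed anything for this file.
    bool flush_immutable_dirty_blocks(FileHandle *f);

    void bgflusher_loop(std::vector<FileHandle*> &open_files,
                        std::atomic<bool> &stop) {
        while (!stop.load()) {
            bool flushed_any = false;
            for (FileHandle *f : open_files) {
                // Write out immutable dirty blocks so front-end threads
                // find clean, reusable buffer cache blocks more often.
                flushed_any |= flush_immutable_dirty_blocks(f);
            }
            if (!flushed_any) {
                // Nothing dirty anywhere: sleep instead of spinning.
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
            }
        }
    }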

604393d9  10-Sep-2015  Sundar Sridharan <sundar.sridharan@gmail.com>

MB-16235: skip sort by offset on wal_flush if file is cached

When the buffer cache is large enough that most of the file is
resident in it, we can skip sorting WAL entries by offset for
slightly better performance.

Change-Id: If977d93c20df39cb12daff5ba520dd536d203b2d

8469cffe  23-Jun-2015  Chiyoung Seo <chiyoung.seo@gmail.com>

Avoid zero-length buffer for sorting the offsets during compaction.

Change-Id: I361163c86741bd5c7ebbac9e427b9871c7641e2a

b31f83a4  18-Jun-2015  Chiyoung Seo <chiyoung.seo@gmail.com>

MB-15348 Use file access timestamp to select a victim file for cache eviction.

From the performance tests, we observed contention on the lock that is used
to maintain the global file LRU list. This change reduces the lock contention
by using a file access timestamp to select a victim file for cache eviction.

In addition, this change uses a reader-writer spinlock to synchronize accesses
to the file list array in the buffer cache. Currently, the reader-writer spinlock
is only supported in non-Windows environments.

Our reader-writer spinlock implementation is based on the approach proposed in

https://jfdube.wordpress.com/2014/01/12/optimizing-the-recursive-read-write-spinlock/

Change-Id: I61b6916735d715b19572d484253e180f57d9b724
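
A simplified sketch of timestamp-based victim selection; structure and
function names are illustrative, and the reader-writer spinlock itself is
omitted:

    #include <atomic>
    #include <ctime>
    #include <vector>

    struct CachedFile {
        std::atomic<time_t> last_access{0};
        // ... per-file block lists, partitions, etc.
    };

    // Readers and writers just stamp the file; there is no global LRU list
    // (and hence no global LRU lock) to update on every access.
    void touch(CachedFile &f) {
        f.last_access.store(time(nullptr), std::memory_order_relaxed);
    }

    // Eviction scans the file list and picks the least recently accessed file.
    CachedFile *pick_victim(const std::vector<CachedFile*> &files) {
        CachedFile *victim = nullptr;
        time_t oldest = 0;
        for (CachedFile *f : files) {
            time_t t = f->last_access.load(std::memory_order_relaxed);
            if (!victim || t < oldest) { victim = f; oldest = t; }
        }
        return victim;
    }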

c40231de  11-Jun-2015  Chiyoung Seo <chiyoung.seo@gmail.com>

MB-15278 Add the multi-threaded auto compaction support.

Previously, there was only one daemon compaction thread. This can
cause out-of-disk space issues if many database files are opened
and accessed concurrently.

This change improves the auto daemon compaction by having multiple
threads compact individual database files concurrently.

The default number of compactor threads is 4, but it is configurable
when starting the ForestDB engine.

Change-Id: I70515595e4171588506cfca49523e10448190f8a

c6c3d274  20-May-2015  Chiyoung Seo <chiyoung.seo@gmail.com>

MB-14060 Use async I/O to move data blocks to a new file during the compaction

This change uses libaio in Linux to read data blocks from the old file
and move them to the new file during the compaction.

We also plan to add asynchronous I/O support for other OSs
(e.g., Windows, OS X).

Change-Id: I2ba7462291b17171fde06fe5887db8cd0cf76c39
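
A minimal Linux libaio sketch of submitting a batch of block reads from the
old file; the batch size and buffer handling are illustrative, and buffers
must satisfy any O_DIRECT alignment requirements:

    #include <libaio.h>     // link with -laio
    #include <cstring>

    static const int MAX_BATCH = 128;

    // Returns the number of completed reads, or -1 on setup failure.
    int read_block_batch(int fd, char **bufs, const off_t *offsets,
                         int nblocks, size_t blocksize) {
        if (nblocks > MAX_BATCH) nblocks = MAX_BATCH;

        io_context_t ctx;
        memset(&ctx, 0, sizeof(ctx));
        if (io_setup(nblocks, &ctx) < 0) return -1;

        struct iocb cbs[MAX_BATCH], *cbp[MAX_BATCH];
        for (int i = 0; i < nblocks; ++i) {
            io_prep_pread(&cbs[i], fd, bufs[i], blocksize, offsets[i]);
            cbp[i] = &cbs[i];
        }
        int submitted = io_submit(ctx, nblocks, cbp);

        struct io_event events[MAX_BATCH];
        int done = (submitted > 0)
                 ? io_getevents(ctx, submitted, submitted, events, NULL)
                 : submitted;
        io_destroy(ctx);
        return done;
    }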

d24dc9cc  02-May-2015  Jung-Sang Ahn <jungsang.ahn@gmail.com>

MB-14060 Use probabilistic approach for blocking writer during compaction

- Define a probability variable P, where the writer is not blocked when
P is 0, and the writer is always blocked when P is 100.

- P is initialized to zero at the beginning of compaction, and is
gradually increased if the writer is faster than a certain threshold.
It is also gradually decreased if the writer is too slow.

- With this probabilistic approach, a slow writer is not blocked,
while a fast writer (such as asynchronous full-speed writes) is
slowed down appropriately.

Change-Id: I4b06644f7042b812138017f12e704c9f202df47f
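
An illustrative sketch of the throttle described above; the step size and
the random draw are hypothetical details, not taken from the ForestDB source:

    #include <cstdlib>

    struct WriteThrottle {
        int prob = 0;                 // P: 0 = never block, 100 = always block

        void on_compaction_start() { prob = 0; }
        void on_writer_too_fast()  { if (prob < 100) prob += 5; }
        void on_writer_too_slow()  { if (prob > 0)   prob -= 5; }

        // Checked on each write during compaction; returns true if the
        // writer should block so the compactor can catch up.
        bool should_block() const { return (std::rand() % 100) < prob; }
    };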

35745362  01-May-2015  Chiyoung Seo <chiyoung.seo@gmail.com>

MB-14060 Adapt compactor to grab the old file's lock for each batch doc move.

This commit changes the compactor's behavior such that the compactor
explicitly grabs the old file's lock for each batch doc move. This shows
better behavior in balancing the normal writer and compactor performance.

In addition, we increase the compaction batch size to 128K to improve
the compaction performance.

Change-Id: I2139d9bdda9b7f56310ad1efc17eccb1ef542677

c30e7c9d  30-Apr-2015  Chiyoung Seo <chiyoung.seo@gmail.com>

MB-14060 Interleave compactor and normal writer through the old file's lock.

From the write-heavy tests, we observed that the compactor occasionally can't
catch up with the normal writer during the second phase of the compaction.

As a short-term solution, this change forces the compactor and writer to
interleave through the old file's lock, so that the compactor can catch up
with the writer in write-heavy use cases. We plan to address this issue
without affecting the writer performance in the future.

Change-Id: I69b25a7a7f5833089c77960e6c3e68355a66dc3f

818018a1  22-Apr-2015  Chiyoung Seo <chiyoung.seo@gmail.com>

Increase the data size threshold for the compaction WAL flush.

As a doc size can be several MBs, we should avoid flushing the WAL
for each large doc during compaction.

Change-Id: I7692e4f9970edee0995dba82c0ee0959b7ea22e9

899a7f61  14-Apr-2015  Chiyoung Seo <chiyoung.seo@gmail.com>

Remove mmapped implementation of WAL index.

As incoming writes are appended to the old file during compaction,
the mmapped implementation of the WAL index is no longer needed.

Change-Id: Ie45991bcf4473faf38b2894be400a595d374ec52

5257ddac  14-Apr-2015  Chiyoung Seo <chiyoung.seo@gmail.com>

MB-14443 Put the limit on the buffer cache size by considering physical RAM

A ForestDB client may try to configure a buffer cache whose size is greater
than the physical RAM available. We should avoid this situation and return
an appropriate error to the client.

By default, we don't allow the buffer cache to be greater than 80% of
the physical RAM on a machine.

Change-Id: I746b0941b2eda27b6ec5c7b8e3234a23370fb240
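
A POSIX-style sketch of the sanity check; the 80% figure comes from the
commit above, while the sysconf keys assume Linux/OS X:

    #include <unistd.h>
    #include <cstdint>

    // Returns true if the requested buffer cache size is within 80% of the
    // machine's physical RAM; otherwise the caller should return an error
    // status to the client.
    bool buffer_cache_size_allowed(uint64_t requested_bytes) {
        uint64_t pages     = (uint64_t)sysconf(_SC_PHYS_PAGES);
        uint64_t page_size = (uint64_t)sysconf(_SC_PAGE_SIZE);
        uint64_t phys_ram  = pages * page_size;
        return requested_bytes <= phys_ram / 5 * 4;
    }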

14f97ccf  22-Mar-2015  Jung-Sang Ahn <jungsang.ahn@gmail.com>

Enlarge sorting window size for compaction

- Enlarge the size of the sorting window for document offsets.

- Use a larger batch size (up to 4 MB, or 4096 docs) to maximize
write bandwidth.

Change-Id: I9a1354a4c23ca3eb26141ff8535202f57ea74775

0c128414  12-Mar-2015  Chiyoung Seo <chiyoung.seo@gmail.com>

Revert "Use file scanning style compaction"

This reverts commit 8462df0c0ca9915416400a046cdf8bd85b14c024.

Conflicts:
src/forestdb.cc

Change-Id: Iff4ca130e699a39

Revert "Use file scanning style compaction"

This reverts commit 8462df0c0ca9915416400a046cdf8bd85b14c024.

Conflicts:
src/forestdb.cc

Change-Id: Iff4ca130e699a397843fc95e230d48c484628b68

8462df0c  05-Mar-2015  Jung-Sang Ahn <jungsang.ahn@gmail.com>

Use file scanning style compaction

- The current compaction scans docs using the HB+trie iterator, so docs
are moved into the new file in key order. However, since docs are
randomly scattered over the DB file, the overhead from random reads
becomes a bottleneck for compaction throughput.

- Instead, we can largely improve the overall throughput if we
sequentially scan docs in byte-offset order, from the beginning of
the file.

- Use a larger batch size (up to 32 MB, or 65536 docs) to maximize
write bandwidth.

- Verified that the performance of compaction for workloads smaller
than the RAM size is also improved.

Change-Id: Ief90ed3ce1dbc28e19f44dd7338976fd9b40225c

df59daa9  03-Mar-2015  Jung-Sang Ahn <jungsang.ahn@gmail.com>

MB-13663 Use mmap for memory allocation of key inserted during compaction

- For documents inserted by normal writer during compaction, use mmap()
for memory allocation of key string instead of malloc().

- Although mmapped memory segments increase the virtual memory size,
they do not always reside in the physical memory. If the size of
mmapped segments becomes large, some blocks are evicted into the
corresponding file according to the OS page cache policy.

- The mmap files are just temporary, so we don't need to sync or
recover them if a crash occurs.

Change-Id: I44c35a7b7de3e006e99480581a6caed73f7b4439
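
A rough sketch of carving key storage out of a temporary mmapped file
instead of malloc(); the helper name and error handling are illustrative:

    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstdio>

    // Maps 'bytes' of a throwaway temp file; the OS page cache can evict
    // these pages to the file, so they need not stay in physical RAM, and
    // no fsync/recovery is needed because the file is temporary.
    void *map_temp_segment(size_t bytes) {
        FILE *tmp = tmpfile();                   // unlinked temporary file;
        if (!tmp) return nullptr;                // keep it open while mapped
        int fd = fileno(tmp);
        if (ftruncate(fd, (off_t)bytes) != 0) return nullptr;
        void *seg = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        return (seg == MAP_FAILED) ? nullptr : seg;
    }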

69467020  17-Feb-2015  Chiyoung Seo <chiyoung.seo@gmail.com>

MB-13101 Partition the buffer cache to reduce the lock contention.

This change partitions each file's buffer cache to reduce the lock
contention among multiple readers and a writer. The number of
partitions is configurable when a file is created.

Change-Id: I7cf3c0c0f8a794c92465a1253f75ab504713c530
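
A simplified sketch of the partitioning idea: block ids hash to one of N
partitions, each with its own lock, so concurrent readers and the writer
rarely contend on the same lock. Structure and member names are
illustrative, not the actual ForestDB types:

    #include <cstdint>
    #include <mutex>
    #include <unordered_map>
    #include <vector>

    struct BcachePartition {
        std::mutex lock;
        std::unordered_map<uint64_t, void*> blocks;  // block id -> cached block
    };

    struct FileBcache {
        std::vector<BcachePartition> parts;
        explicit FileBcache(size_t num_partitions) : parts(num_partitions) {}

        BcachePartition &partition_for(uint64_t block_id) {
            return parts[block_id % parts.size()];
        }
    };

    // A lookup only locks the partition that owns the block:
    //   auto &p = bcache.partition_for(bid);
    //   std::lock_guard<std::mutex> guard(p.lock);
    //   auto it = p.blocks.find(bid);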

b84b312a  06-Feb-2015  Chiyoung Seo <chiyoung.seo@gmail.com>

MB-13101 Partition the in-memory WAL index to reduce the lock contention.

This change partitions the in-memory WAL index that is maintained
per file. The number of partitions is configurable when the database
file is opened.

Change-Id: I136a60aa0f6955fb91a09af8210a17392be16a12

65a4da43  27-Dec-2014  Jung-Sang Ahn <jungsang.ahn@gmail.com>

Improve dirty block eviction performance

- The block cache tries to avoid evicting dirty B+tree nodes because
they are likely to be updated later. This largely reduces the write
amplification during bulk load when the working set is larger than
the RAM size (with 200M docs, write amplification during bulk load
is reduced by up to 5x).

- If the O_DIRECT flag is not set, we don't need to use an additional
buffer for flushing dirty blocks. Dirty blocks are directly flushed
using pwrite(), and they will be buffered by the OS's page cache. This
reduces the memcpy overhead (and also the malloc overhead).

Change-Id: I86e958b29be38981fd56ba07430d8cf7cc41d847

6a3a5c2d  24-Nov-2014  Jung-Sang Ahn <jungsang.ahn@gmail.com>

Support DB file prefetching

- When an existing DB file is opened, and there is enough space in
block cache, the DB file is prefetched to improve the performance
right after a cold start (such as rebooting).

- To maximize the efficiency, the prefetching logic sequentially reads
data from the end to the beginning of the DB file.

- Added one more field in fdb_config: 'prefetch_duration'. Prefetching
is aborted after the configured duration. If the duration is set to
zero, prefetching is disabled.

- Since prefetching is invoked only when there is enough free space
in block cache, it does not spoil the existing data in the cache.

Change-Id: Iea6e8301383299ea68a0b278abcf4af58d0b2ee8
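
A sketch of the prefetch pass, reading the file backwards block by block
until the start of the file, the configured 'prefetch_duration', or cache
exhaustion is reached; the two helper functions are hypothetical:

    #include <cstdint>
    #include <ctime>

    bool read_block_into_cache(uint64_t offset);  // hypothetical cache fill
    bool cache_has_free_space();                  // hypothetical free-space check

    void prefetch_file(uint64_t file_size, uint64_t blocksize,
                       uint64_t prefetch_duration_secs) {
        if (prefetch_duration_secs == 0) return;  // prefetching disabled
        time_t start = time(nullptr);
        // Walk from the last block down to block 0.
        for (uint64_t off = (file_size / blocksize) * blocksize;
             off >= blocksize; off -= blocksize) {
            if (!cache_has_free_space()) break;   // never evict existing data
            if (!read_block_into_cache(off - blocksize)) break;
            if (time(nullptr) - start >= (time_t)prefetch_duration_secs) break;
        }
    }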
