History log of /5.5.2/kv_engine/engines/ep/src/vbucket.h (Results 1 - 25 of 348)
Revision Date Author Comments
Revision tags: v7.0.2, v6.6.3, v7.0.1, v7.0.0, v6.6.2, v6.5.2, v6.6.1, v6.0.5, v6.6.0, v6.5.1, v6.0.4, v6.5.0, v6.0.3, v5.5.4, v5.5.5, v5.5.6, v6.0.1, v5.5.3, v6.0.0, v5.1.3, v5.5.2, v5.5.1, v5.1.2, v5.1.1
# 01101706 16-May-2018 Tim Bradgate <tim.bradgate@couchbase.com>

MB-29707: Add checkpoint memory overhead stats

Change-Id: If1e8666d043d76a1fae64f2e7909d36be24790e9
Reviewed-on: http://review.couchbase.org/94307
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Jim Walker <jim@couchbase.com>



# 4d7314a5 23-Mar-2018 Daniel Owen <owend@couchbase.com>

Create eligibleForPageOut function

The hifi_mfu eviction algorithm needs to identify whether a
StoredValue is eligible to be pagedOut of memory without actually
paging it out.

Previously we were using the StoredValue eligibleForEviction function;
however, this was found not to be valid for ephemeral buckets, as the
isDirty state is always true.

Therefore we factor out the logic responsible for determining if a
StoredValue is eligible to be paged out from both the couchbase
(persistent) and ephemeral versions of the pageOut function and put it
into a new function called eligibleForPageOut.

The hifi_mfu eviction algorithm is changed to use the current vbucket's
eligibleForPageOut function.
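
As a rough sketch of that factoring (simplified stand-in types, not the
actual VBucket/StoredValue classes), the eligibility test becomes a virtual
predicate that both pageOut() and the pager can call:

struct StoredValue {
    bool dirty = false;
    bool deleted = false;
    bool resident = true;
};

class VBucket {
public:
    virtual ~VBucket() = default;

    // Pure eligibility check: the hifi_mfu pager can call this while visiting
    // the hash table without committing to an eviction.
    virtual bool eligibleForPageOut(const StoredValue& v) const = 0;

    // pageOut() reuses the same predicate before doing the real work.
    bool pageOut(StoredValue& v) {
        if (!eligibleForPageOut(v)) {
            return false;
        }
        v.resident = false; // placeholder for the real eviction/deletion work
        return true;
    }
};

class EPVBucket : public VBucket {
    bool eligibleForPageOut(const StoredValue& v) const override {
        // Persistent buckets must not evict items that are still dirty.
        return !v.dirty && v.resident;
    }
};

class EphemeralVBucket : public VBucket {
    bool eligibleForPageOut(const StoredValue& v) const override {
        // Ephemeral items are always "dirty", so that flag cannot be used;
        // paging out here means auto-deleting items that are not yet deleted.
        return !v.deleted;
    }
};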

Change-Id: Ib4dd4cfc984a93408a4f9cc41ccf3eb391a470d8
Reviewed-on: http://review.couchbase.org/91512
Reviewed-by: Trond Norbye <trond.norbye@gmail.com>
Reviewed-by: Tim Bradgate <tim.bradgate@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# cd24b1b3 09-Mar-2018 Jim Walker <jim@couchbase.com>

MB-28773: Transfer the manifest UID to the replica VB

When a system event propagates a collection change, embed
the manifest UID in the event so that the replica is aware
of the UID (and can persist/warm up from it).
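
A purely illustrative sketch of the idea (the payload layout and helper
names are hypothetical, not the real system-event encoding): the event's
value carries the manifest UID so the replica can decode and persist it.

#include <cstdint>
#include <cstring>
#include <vector>

struct CollectionEventPayload {
    uint64_t manifestUid; // UID of the manifest that produced this change
};

std::vector<uint8_t> encodeEventValue(uint64_t manifestUid) {
    CollectionEventPayload payload{manifestUid};
    std::vector<uint8_t> value(sizeof(payload));
    std::memcpy(value.data(), &payload, sizeof(payload));
    return value;
}

uint64_t decodeManifestUid(const std::vector<uint8_t>& value) {
    CollectionEventPayload payload{};
    std::memcpy(&payload, value.data(), sizeof(payload));
    return payload.manifestUid;
}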

Change-Id: I66e4ad4274558992b27c5ab02adb42295e761091
Reviewed-on: http://review.couchbase.org/91178
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>



Revision tags: v5.0.1
# dea18910 25-Oct-2017 Jim Walker <jim@couchbase.com>

MB-26455: Allow correct processing of 'old' keys and separator changes

The collection's separator can only be changed when 0 collections
exist. However nothing stops the separator from changing if there are
deleted collections (and their keys) in play.

Prior to this commit each separator change resulted in a single system
event being mutated, that event had a static key. Thus a VB could have
the following sequence of keys/events.

1 fruit::1
2 fruit::2
<fruit collection is deleted>
<separator is changed from :: to - >
<fruit collection is recreated>
6 fruit-1
7 fruit-2
<fruit collection is deleted>
<separator is changed from - to # >
9 $collections_separator (the Item value contains the new separator)
10 $collections::fruit (fruit recreated)
11 fruit#1
12 fruit#2

In this sequence, some of the changes are lost historically because a
static key is used for the separator change. Between seqno 2 and 6 the
separator changed from :: to -, but the separator change system event
is currently at seqno 9 with # as its value.

With this setup a problem exists if we were to process the historical
data e.g. whilst backfilling for DCP or performing a compaction
collection erase. The problem is that when we go back to seqno 1 and
2, we have no knowledge of the separator for those items, all we have
is the current # separator. We cannot determine that fruit::1 is a
fruit collection key.

This commit addresses this problem by making each separator change
generate a unique key. The key itself will encode the new separator,
and because the key is unique it will reside at the correct point in
history for each separator change.

The unique key format will be:

$collections_separator:<seqno>:<new_separator>

With this change the above sequence now looks as:

1 fruit::1
2 fruit::2
<fruit collection is deleted>
4 $collections_separator:3:- (change separator to -)
<fruit collection is recreated>
6 fruit-1
7 fruit-2
<fruit collection is deleted>
9 $collections_separator:8:# (change separator to #)
10 $collections::fruit (fruit recreated)
11 fruit#1
12 fruit#2

Now the code which looks at the historical data (e.g. backfill) will
encounter these separator change keys before it encounters collection
keys using that separator. Backfill and collections-erase can just
maintain a current separator and can now correctly split keys to
discover the collection they belong to. The collections eraser and
KVStore scanning code now include a collections context which has data
and associated code for doing this tracking.

A final piece of the commit is the garbage collection of these unique
keys, i.e. if each separator change puts a unique key into the seqno
index, how can we clean these away when they're no longer needed (i.e.
all fruit:: keys purged)?

Whilst the eraser runs it tracks all 'separator change' keys; because
a separator change can only happen when 0 collections exist, it can
assume that all but the latest separator-change key are redundant once
the erase has completed. This list of keys is simply deleted in the
normal way by pushing a deleted Item into the checkpoint once
compaction is complete.
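
For illustration, a minimal sketch (hypothetical helper names) of building
and parsing keys in the format above; a backfill or eraser scan can keep the
most recently parsed value as its "current" separator and use it to split
the older collection keys it encounters afterwards.

#include <cstdint>
#include <stdexcept>
#include <string>

const std::string SeparatorChangePrefix = "$collections_separator";

std::string makeSeparatorChangeKey(uint64_t seqno,
                                   const std::string& newSeparator) {
    return SeparatorChangePrefix + ":" + std::to_string(seqno) + ":" +
           newSeparator;
}

// Extract the new separator from a separator-change key; everything after the
// second ':' following the prefix is the separator itself.
std::string parseNewSeparator(const std::string& key) {
    if (key.compare(0, SeparatorChangePrefix.size(), SeparatorChangePrefix) != 0) {
        throw std::invalid_argument("not a separator-change key");
    }
    auto separatorStart = key.find(':', SeparatorChangePrefix.size() + 1) + 1;
    return key.substr(separatorStart);
}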

Change-Id: I4b38b04ed72d6b39ceded4a860c15260fd394118
Reviewed-on: http://review.couchbase.org/84801
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>



# 4fa49052 05-Dec-2017 Dave Rigby <daver@couchbase.com>

MB-26021 [6/6]: Limit #checkpoint items flushed in a single batch

Expand the previous limiting of flusher batch size to also limit the
number of items from the Checkpoint Manager.

In the case of Checkpoint items, we cannot arbitrarily "cut" the batch
in the middle of a checkpoint, as that would result in an
inconsistent state (a non-checkpoint boundary) being written in the
couchstore snapshot. In the event of a crash / restart that wouldn't
be a valid state.

This is implemented by adding a new
CheckpointManager::getItemsForCursor() method; which differs from the
existing get*All*ItemsForCursor() in that it takes an approximate
limit argument. Note this is approximate as we only split the batch at
a checkpoint boundary - the "limit" specifies that we will finish
visiting the current checkpoint, but not visit the next.

Results in the following changes to VBucketBench/FlushVBucket - note
reduction in PeakFlushBytes (from 740M to 7.5M); and average bytes per
item (from 775 to 7) at larger DWQ sizes:

Before:

Run on (8 X 2300 MHz CPU s)
2018-02-16 17:23:25
-----------------------------------------------------------------------------------------
Benchmark Time CPU Iterations
UserCounters...-----------------------------------------------------------------------------------------
VBucketBench/FlushVBucket/1 438175 ns 319992 ns 2239 PeakBytesPerItem=175.266k PeakFlushBytes=175.266k 3.05183k items/s
VBucketBench/FlushVBucket/10 537116 ns 365452 ns 2042 PeakBytesPerItem=18.1953k PeakFlushBytes=181.961k 26.722k items/s
VBucketBench/FlushVBucket/100 928924 ns 724770 ns 1013 PeakBytesPerItem=2.82715k PeakFlushBytes=282.727k 134.741k items/s
VBucketBench/FlushVBucket/1000 4414461 ns 4079710 ns 176 PeakBytesPerItem=1000 PeakFlushBytes=977.438k 239.371k items/s
VBucketBench/FlushVBucket/10000 44486851 ns 43265875 ns 16 PeakBytesPerItem=781 PeakFlushBytes=7.45735M 225.712k items/s
VBucketBench/FlushVBucket/100000 429518562 ns 423825500 ns 2 PeakBytesPerItem=759 PeakFlushBytes=72.427M 230.416k items/s
VBucketBench/FlushVBucket/1000000 4025349877 ns 3942721000 ns 1 PeakBytesPerItem=775 PeakFlushBytes=740.04M 247.687k items/s

After:

Run on (8 X 2300 MHz CPU s)
2018-02-16 17:19:51
-----------------------------------------------------------------------------------------
Benchmark Time CPU Iterations
UserCounters...-----------------------------------------------------------------------------------------
VBucketBench/FlushVBucket/1 479525 ns 340742 ns 2023 PeakBytesPerItem=175.281k PeakFlushBytes=175.281k 2.86599k items/s
VBucketBench/FlushVBucket/10 526072 ns 375763 ns 1868 PeakBytesPerItem=18.1943k PeakFlushBytes=181.945k 25.9888k items/s
VBucketBench/FlushVBucket/100 981275 ns 721473 ns 1003 PeakBytesPerItem=2.82617k PeakFlushBytes=282.711k 135.357k items/s
VBucketBench/FlushVBucket/1000 4459568 ns 4118994 ns 173 PeakBytesPerItem=1000 PeakFlushBytes=977.438k 237.088k items/s
VBucketBench/FlushVBucket/10000 45353759 ns 44451063 ns 16 PeakBytesPerItem=781 PeakFlushBytes=7.45737M 219.694k items/s
VBucketBench/FlushVBucket/100000 414823038 ns 406181000 ns 2 PeakBytesPerItem=137 PeakFlushBytes=13.0832M 240.425k items/s
VBucketBench/FlushVBucket/1000000 3116659340 ns 3000999000 ns 1 PeakBytesPerItem=7 PeakFlushBytes=7.57903M 325.412k items/s

Change-Id: I2d3c618557f3f5928879f09f7cba58968abd04db
Reviewed-on: http://review.couchbase.org/86391
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Paolo Cocchi <paolo.cocchi@couchbase.com>



# d6d065d5 05-Dec-2017 Dave Rigby <daver@couchbase.com>

MB-26021 [1/6]: Limit #backfill items flushed in a single batch

Add a new configuration parameter - flusher_backfill_batch_limit -
which allows the flusher to constrain how many backfill items will be
flushed in a single batch. This defaults to zero, which means no limit
and hence behaves as previously.

If a non-zero value is specified then no more than that number of
backfill items will be added to a single vBucket flusher commit; the
given vBucket will be flagged to indicate there's more data available
and hence the flusher should be re-scheduled.
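
A rough sketch of that rule (hypothetical helper, simplified types):
limit == 0 preserves the old unlimited behaviour, and a non-empty
remainder signals that the flusher needs re-scheduling.

#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

struct Item {};

struct FlushBatch {
    std::vector<Item> items;
    bool moreAvailable; // true -> flag the vBucket and re-schedule the flusher
};

FlushBatch takeBackfillBatch(std::deque<Item>& backfillQueue, std::size_t limit) {
    FlushBatch result{{}, false};
    // limit == 0 means "no limit", i.e. the previous behaviour.
    std::size_t toTake = (limit == 0) ? backfillQueue.size()
                                      : std::min(limit, backfillQueue.size());
    for (std::size_t i = 0; i < toTake; ++i) {
        result.items.push_back(backfillQueue.front());
        backfillQueue.pop_front();
    }
    result.moreAvailable = !backfillQueue.empty();
    return result;
}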

Change-Id: Ic9c86f915f63fca7f8802cc40597907b5a0c1d2b
Reviewed-on: http://review.couchbase.org/89341
Reviewed-by: Jim Walker <jim@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# a42f7120 06-Feb-2018 Daniel Owen <owend@couchbase.com>

MB-22010: Create ItemFreqDecayerTask

Creates an ItemFreqDecayerTask on the initialization of a KVBucket.
It is used to ensure that the frequency counters of items stored in the
hash table do not all become saturated. When the task runs it will
snooze for INT_MAX and will only be woken up when the frequency counter
of an item in the hash table becomes saturated.
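
Roughly, the task behaves like the following sketch (simplified, not the
real task interface; the halving decay is illustrative only):

#include <atomic>
#include <cstdint>
#include <limits>
#include <vector>

class ItemFreqDecayerTaskSketch {
public:
    // Called by the hash table when an item's 8-bit frequency counter
    // saturates (reaches 255).
    void wakeup() { snoozed = false; }

    bool run(std::vector<std::atomic<uint8_t>>& freqCounters) {
        // Scale every counter down so the relative ordering is kept but
        // headroom is restored.
        for (auto& counter : freqCounters) {
            counter = static_cast<uint8_t>(counter / 2);
        }
        // Sleep "forever"; only an explicit wakeup() will run us again.
        snooze(std::numeric_limits<int>::max());
        return true; // keep the task alive
    }

private:
    void snooze(int /*seconds*/) { snoozed = true; }
    std::atomic<bool> snoozed{true};
};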

Change-Id: I9ae51dfa6717c6349e43ddb08dde0759e43ca16b
Reviewed-on: http://review.couchbase.org/88761
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Jim Walker <jim@couchbase.com>



Revision tags: v5.1.0
# 3c8b5eb6 19-Oct-2017 Dave Rigby <daver@couchbase.com>

MB-27554: [BP] Move numTotalItems from HashTable -> VBucket

Originally 04d6809a142a90a6bd8ddbd66e5109925b2b8f12

In Full-Eviction, items may exist in a VBucket without being in the
HashTable, as they may have been fully evicted. As such, numTotalItems
is not a property of the HashTable, it is a property of the VBucket.

Therefore move numTotalItems from HashTable to VBucket, to better
encapsulate the VBucket's state.

Change-Id: Ic45de1ee49468753d7cc76804f8c5cc9eb64f881
Reviewed-on: http://review.couchbase.org/88381
Well-Formed: Build Bot <build@couchbase.com>
Reviewed-by: Jim Walker <jim@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>



# 3ecac16d 11-Oct-2017 Dave Rigby <daver@couchbase.com>

MB-27554: [BP] Make VBucket::getNumNonResidentItems virtual

Originally a647ff3b736d73444d685b90e75a98af375ab246

Change VBucket::getNumNonResidentItems() to be a virtual method, with
implementations for Ephemeral and EP VBuckets.

Change-Id: Ic73bd50c77e38f89a38cc52c794415f6bb428fff
Reviewed-on: http://review.couchbase.org/88378
Well-Formed: Build Bot <build@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>



# e39e9300 10-Dec-2017 Sriram Ganesan <sriram@couchbase.com>

MB-27162: Update revision sequence number before adding to checkpoint

When a given item has been deleted and then recreated in memory, a new
stored value is created with a revision sequence number of 1 and pushed
into the checkpoint, and only then is the item's revision sequence number
updated in memory. Instead, because the item is being recreated, its
revision sequence number should be set to a value that is 1 greater than
the maximum revision sequence number for a deleted item in the vbucket
before it is pushed into the checkpoint.
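
Sketched in simplified form (pushIntoCheckpoint and the field names here
are placeholders), the intended ordering is to derive the revision seqno
from the vbucket's maximum deleted value before queueing:

#include <cstdint>

struct Item {
    uint64_t revSeqno = 1;
};

struct VBucketSketch {
    uint64_t maxDeletedRevSeqno = 0;

    void queueDirty(Item& item, bool recreatingDeletedKey) {
        if (recreatingDeletedKey) {
            // The recreated item must out-rev the tombstone it replaces, and
            // this must happen before the item goes into the checkpoint.
            item.revSeqno = maxDeletedRevSeqno + 1;
        }
        pushIntoCheckpoint(item);
    }

    void pushIntoCheckpoint(const Item& /*item*/) {
        // placeholder for handing the item to the CheckpointManager
    }
};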

Regression caused by: http://review.couchbase.org/#/c/73224/

Change-Id: I82601731265435c00fbbf8209a8efa13fb85228a
Reviewed-on: http://review.couchbase.org/86686
Well-Formed: Build Bot <build@couchbase.com>
Tested-by: Sriram Ganesan <sriram@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>
Reviewed-by: Jim Walker <jim@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# 4fc85e3e 30-Nov-2017 Trond Norbye <trond.norbye@gmail.com>

Tighten up engine API; Require cookie, cas and mutinfo for remove

Change-Id: I56e24566efe5e01cacd39209b229dc98995d9197
Reviewed-on: http://review.couchbase.org/86065
Reviewed-by: Daniel Owen <owend@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# 0dbb0e53 26-Oct-2017 Jim Walker <jim@couchbase.com>

MB-25344: [13/n] Logically deleted keys and SetWithMeta

SetWithMeta must never fail if it encounters a logically deleted key.
1) It shouldn't conflict with it
2) AddWithMeta should ignore it

Change-Id: I748f8118256d0a1a104fc12b198c2a2055acc41c
Reviewed-on: http://review.couchbase.org/85240
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# 679718ef 09-Nov-2017 Jim Walker <jim@couchbase.com>

MB-25344: [12/n] Logically deleted keys and deleteWithMeta

If a key is found and it is logically deleted by collections, treat
it as ENOENT.

Change-Id: Ic26163d46a0b198ebcf787b525909fc02fcecbae
Reviewed-on: http://review.couchbase.org/85181
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# 04d6809a 19-Oct-2017 Dave Rigby <daver@couchbase.com>

Move numTotalItems from HashTable -> VBucket

In Full-Eviction, items may exist in a VBucket without being in the
HashTable, as they may have been fully evicted. As such, numTotalItems
is not a property of the HashTable, it is a property of the VBucket.

Therefore move numTotalItems from HashTable to VBucket, to better
encapsulate the VBucket's state.

Change-Id: I9d016fd45f393c4d678325471da429dfc1b6d0de
Reviewed-on: http://review.couchbase.org/84883
Reviewed-by: Sriram Ganesan <sriram@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# 0e7758ba 26-Oct-2017 Jim Walker <jim@couchbase.com>

MB-25344: [11/n] Logically deleted keys and getMetaData

GetMetaData must ignore keys in deleted collections even if they may
hang around in the HT for some time. If collection deletion was
synchronous, then we would never be able to query a key in a deleted
collection, hence it's cleaner to say this key is gone rather than
return its meta.

Change-Id: I99ccfa2ff9fdf097d35f5d5cb1765c6dc3bdf129
Reviewed-on: http://review.couchbase.org/84841
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>



# 5053e340 26-Oct-2017 Jim Walker <jim@couchbase.com>

MB-25344: [10/n] Logically deleted keys and getKeyStats

Allow getKeyStats to fail or work with logically deleted keys based
on the caller's input.

Change-Id: Ibe3c2ca090a25643efee92ac53aacd371ef363c4
Reviewed-on: http://review.couchbase.org/84840
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# e46e29cf 26-Oct-2017 Jim Walker <jim@couchbase.com>

MB-25344: [9/n] Ignore logically deleted keys for statsVKey

Update statsVKey so it can return ENOENT for logically deleted
keys. Also make it aware of UNKNOWN_COLLECTION.

Change-Id: Ib42d383434a20ac4a46051b966e973b94229b82a
Reviewed-on: http://review.couchbase.org/84839
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# d94a4ece 26-Oct-2017 Jim Walker <jim@couchbase.com>

MB-25344: [7/n] Ignore logically deleted keys for GET and update TTL

GET and update TTL should fail if the requested key is logically
deleted. Commit updates getAndUpdateTtl to accept a collections read
handle so that it can check any StoredValue for being logically
deleted.

Change-Id: I47046329b3275468d38886efd3efd37187e41d5b
Reviewed-on: http://review.couchbase.org/84837
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# ed850321 26-Oct-2017 Jim Walker <jim@couchbase.com>

MB-25344: [6/n] Ignore logically deleted keys for GET Locked

GETL should fail if the requested key is logically deleted. Commit
updates getLocked to accept a collections read handle so that it
can check any StoredValue for being logically deleted.

Change-Id: Icf34c8644705f89aa6388954d1ab1e180ef360da
Reviewed-on: http://review.couchbase.org/84836
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# e9324f3e 26-Oct-2017 Jim Walker <jim@couchbase.com>

MB-25344: [5/n] Ignore logically deleted keys for GET

GET should fail if the requested key is logically deleted, even if
the request uses the GET_DELETED_VALUE flag. Keys deleted by
collection deletion should be considered deleted differently to when
the user deletes a key.

Commit updates getInternal to accept a collections read handle so that
it can check any StoredValue for being logically deleted.

Change-Id: I7f3357b6288b3533467779eb5e66483f37f7be11
Reviewed-on: http://review.couchbase.org/84835
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# 0190e1c4 26-Oct-2017 Jim Walker <jim@couchbase.com>

MB-25344: [3/n] Ignore logically deleted keys for DELETE

The semantics of DELETE mean that it should fail if the request key
does not exist. With collection deletion, the removal of keys is
'lazy', similar to expiry, thus it's possible for DELETE to find a key
in the hash-table, which is actually logically deleted and should
trigger failure of the DELETE.

This change passes a CachingReadHandle down the DELETE path (we
already had read access held on collections for the entire operation,
so lock scope is not changed here). Within the depths of DELETE we can
now safely work with logically deleted keys.

Change-Id: I39ab8082ba26d08f8c885d73f775f22f0ba96595
Reviewed-on: http://review.couchbase.org/84833
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# cb233160 26-Oct-2017 Jim Walker <jim@couchbase.com>

MB-25344: [2/n] Ignore logically deleted keys for REPLACE

The semantics of REPLACE mean that it should only succeed if the key
already exists. With collection deletion, the removal of keys is
'lazy', similar to expiry, thus it's possible for REPLACE to find a
key in the hash-table, which is actually logically deleted and should
trigger failure of the REPLACE.

This change passes a CachingReadHandle down the REPLACE path (we
already had read access held on collections for the entire operation,
so lock scope is not changed here). Within the depths of REPLACE we
can now safely work with logically deleted keys.

Change-Id: Iccc9c6370b7c6267ab4cc5b46baa63f9ccc64c8f
Reviewed-on: http://review.couchbase.org/84832
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# 64845a28 26-Oct-2017 Jim Walker <jim@couchbase.com>

MB-25344: [1/n] Ignore logically deleted keys for ADD

The semantics of ADD mean that it should only succeed if the key does
not already exist. With collection deletion, the removal of keys is
'lazy', similar to expiry, thus it's possible for ADD to find a key
in the hash-table, which it can overwrite when it's logically deleted
or expired.

This change passes a CachingReadHandle down the ADD path (we already
had read access held on collections for the entire operation, so lock
scope is not changed here). Within the depths of ADD we can now
safely ignore logically deleted keys.
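
The pattern can be sketched as follows (simplified types and a hypothetical
helper, not the real VBucket::add signature): a value that is expired or
logically deleted no longer blocks the ADD.

#include <cstdint>

struct CachingReadHandleSketch {
    bool collectionDeleted = false; // cached result of the manifest lookup

    bool isLogicallyDeleted(uint64_t /*bySeqno*/) const {
        return collectionDeleted;
    }
};

struct StoredValue {
    uint64_t bySeqno = 0;
    bool expired = false;
};

enum class AddStatus { Success, Exists };

AddStatus addInternal(const StoredValue* existing,
                      const CachingReadHandleSketch& readHandle) {
    if (existing && !existing->expired &&
        !readHandle.isLogicallyDeleted(existing->bySeqno)) {
        return AddStatus::Exists; // a live value really is in the way
    }
    // Absent, expired or logically deleted: ADD may (re)create the key.
    return AddStatus::Success;
}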

Change-Id: I9f30977474c2a292c7eb26f3529504d61e553429
Reviewed-on: http://review.couchbase.org/84310
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# a7d604de 01-Nov-2017 Manu Dhundi <manu@couchbase.com>

MB-26470: Use shared ptr for Stream and ActiveStream objs.

Currently we use SingleThreadedRCPtr for Stream and ActiveStream
objs that are shared across multiple modules. This can lead to
cyclic references and hence memory leaks.

This commit changes all SingleThreadedRCPtr for Stream and ActiveStream
to std::shared_ptr and hence sets up the ground to use std::weak_ptr
that would finally get rid of cyclic references.
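
For illustration, the cycle and the weak_ptr remedy look roughly like this
(stand-in types, not the real DCP classes); note that producer.lock() only
succeeds if the producer itself is owned by a shared_ptr somewhere.

#include <memory>

struct ActiveStream;

struct DcpProducer {
    std::shared_ptr<ActiveStream> stream; // the owning reference
};

struct ActiveStream {
    // A shared_ptr back to the producer would complete a cycle and leak both
    // objects; a weak_ptr observes the producer without owning it.
    std::weak_ptr<DcpProducer> producer;

    void doWork() {
        if (auto p = producer.lock()) {
            // *p is guaranteed alive for the duration of this block
        }
    }
};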

Change-Id: If620386f6a93bf60f3b333962ae6e6dfaa2023ff
Reviewed-on: http://review.couchbase.org/84812
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>



# c66ee788 26-Oct-2017 Jim Walker <jim@couchbase.com>

MB-25342: Add Collections::VB::Manifest::CachingReadHandle

The caching readhandle allows limited read access to the manifest
but fits into the functional flow of front-end operations.

When constructing the CachingReadHandle from a key, the key is scanned
and a map lookup is performed. The result of the lookup is saved so
that further down the code path, subsequent isLogicallyDeleted checks
don't need to scan the key again and perform another map lookup.
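
A minimal sketch of the caching idea (simplified types, not the real
Collections::VB::Manifest API): the key scan and map lookup happen once at
construction, and later isLogicallyDeleted() calls reuse the cached result.

#include <cstdint>
#include <map>
#include <string>

struct CollectionEntry {
    bool deleted = false;
    uint64_t endSeqno = 0; // seqno at which the collection was deleted
};

using CollectionMap = std::map<std::string, CollectionEntry>;

class CachingReadHandleSketch {
public:
    CachingReadHandleSketch(const CollectionMap& manifest, const std::string& key)
        : it(manifest.find(collectionOf(key))), valid(it != manifest.end()) {
    }

    // Called further down the front-end path; no second key scan or map
    // lookup is needed because the result was cached at construction.
    bool isLogicallyDeleted(uint64_t bySeqno) const {
        return valid && it->second.deleted && bySeqno <= it->second.endSeqno;
    }

private:
    // Assumes the "::" collection separator for illustration only.
    static std::string collectionOf(const std::string& key) {
        return key.substr(0, key.find("::"));
    }

    CollectionMap::const_iterator it;
    bool valid;
};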

Change-Id: Icffaf8a722f4a9e7e67bce870445cd3f75f940e3
Reviewed-on: http://review.couchbase.org/84474
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Trond Norbye <trond.norbye@gmail.com>


