History log of /5.5.2/kv_engine/engines/ep/tests/module_tests/dcp_test.cc (Results 1 - 25 of 143)
Revision Date Author Comments
Revision tags: v7.0.2, v6.6.3, v7.0.1, v7.0.0, v6.6.2, v6.5.2, v6.6.1, v6.0.5, v6.6.0, v6.5.1, v6.0.4, v6.5.0, v6.0.3, v5.5.4, v5.5.5, v5.5.6, v6.0.1, v5.5.3, v6.0.0, v5.1.3, v5.5.2
# 84af6315 10-Sep-2018 Jim Walker <jim@couchbase.com>

MB-31141: Don't reject snappy|raw DCP deletes

A related bug means that it is possible for 5.x to create
deletes with a non-zero raw value. When 5.5x backfills such
an item for transmission to another 5.5x node (and snappy
is enabled), the delete gets sent with a snappy datatype
and rejected by the target node, which triggers a rebalance
failure.
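The relaxed check can be sketched as follows. The datatype bit values mirror the memcached binary protocol, but the validator function itself is illustrative, not the real engine code:

```cpp
#include <cstdint>

// Datatype bits as defined by the memcached binary protocol.
constexpr uint8_t kDatatypeRaw = 0x00;
constexpr uint8_t kDatatypeSnappy = 0x02;

// Hypothetical validator: a DCP delete may legitimately arrive with a
// snappy-compressed raw value, so snappy|raw must be accepted alongside
// plain raw rather than rejected.
bool isAcceptableDeleteDatatype(uint8_t datatype) {
    return datatype == kDatatypeRaw ||
           datatype == (kDatatypeSnappy | kDatatypeRaw);
}
```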

Change-Id: Ib91453f96882ef4e01ee0e2a3e5e51ed0786a325
Reviewed-on: http://review.couchbase.org/99414
Well-Formed: Build Bot <build@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Dave Rigby <daver@couchbase.com>



Revision tags: v5.5.1, v5.1.2, v5.1.1
# fb85369e 30-Apr-2018 Jim Walker <jim@couchbase.com>

[BP] MB-29585: Obtain the streamMutex earlier in the snapshot task

Also MB-29369

Obtain the streamMutex and also check the stream is in-memory /
takeover-send before the task increments the cursor.

Backport of 11117bcc6fb717f2512a83e2b1952e08525ff0fb

Change-Id: I64c002737f4e20622400f3d0c4169adbf7154c31
Reviewed-on: http://review.couchbase.org/94142
Reviewed-by: Dave Rigby <daver@couchbase.com>
Well-Formed: Build Bot <build@couchbase.com>
Reviewed-by: Trond Norbye <trond.norbye@gmail.com>
Tested-by: Build Bot <build@couchbase.com>



# ba957b1b 15-May-2018 Dave Rigby <daver@couchbase.com>

MB-29675: Cache is{Snappy,ForceValueCompression}Enabled in ActiveStream

makeResponseFromItem is called for every item to be sent out over DCP;
and it shows up high in 'linux perf' profiles (approx 6% of AuxIO
threads). The bulk of the cost is actually checking if the given
stream supports Snappy, and if all items should be forcefully
compressed.

This cost comes from both isSnappyEnabled &
isForceValueCompressionEnabled() having to promote the producer
weak_ptr to a shared_ptr to check the relevant producer flag (and then
delete the shared_ptr).

Optimise this by simply caching the value of snappyEnabled /
forceValueCompression in the ActiveStream object at construction time,
and checking the local flag. We don't support changing either of these
flags dynamically for a stream, so this is safe - and avoids all the
shared_ptr manipulation.

With these changes makeResponseFromItem drops to less than 1% of AuxIO
threads.
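The optimisation can be sketched as follows; the class and member names are illustrative stand-ins, not the real ActiveStream/DcpProducer code. The flags are sampled once at construction instead of promoting the weak_ptr per item:

```cpp
#include <memory>

// Illustrative stand-in for the producer.
struct Producer {
    bool snappyEnabled = false;
    bool forceValueCompression = false;
};

class Stream {
public:
    explicit Stream(const std::shared_ptr<Producer>& p)
        : producer(p),
          // Cached once here; neither flag can change for the lifetime
          // of the stream, so no weak_ptr::lock() is needed per item.
          snappyEnabled(p->snappyEnabled),
          forceValueCompression(p->forceValueCompression) {
    }

    bool isSnappyEnabled() const { return snappyEnabled; }
    bool isForceValueCompressionEnabled() const {
        return forceValueCompression;
    }

private:
    std::weak_ptr<Producer> producer;
    const bool snappyEnabled;
    const bool forceValueCompression;
};
```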

Tests have been updated as we had one large test which would change
the compression settings of the producer for different scenarios. The
stream's compression settings are fixed at the point of streamRequest,
so it's easier to split the large test into smaller ones, each with
its own producer configured for each scenario.

Change-Id: Ice4a559fc7a54bfab4ce9a136d2dc9bdb618e6f4
Reviewed-on: http://review.couchbase.org/94216
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Paolo Cocchi <paolo.cocchi@couchbase.com>



# df933ae7 09-May-2018 Paolo Cocchi <paolo.cocchi@couchbase.com>

MB-29662: DCP Consumer sets correct noop-interval on pre-5.0.0 Producer

This is a backport of the patch for MB-29441 (SHA
ae32b5caf1638c8926685d045ee4197a62bcc30c)

Change-Id: I02c49e08edaedacd4036cac0f677fc0c2c1a92ea
Reviewed-on: http://review.couchbase.org/94202
Well-Formed: Build Bot <build@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>



# 1bbbe0bf 15-May-2018 Dave Rigby <daver@couchbase.com>

MB-29669: Intermittent failure in StreamTest.DiskBackfillFail

As seen intermittently in CV, this test can fail with:

[ RUN ] PersistentAndEphemeral/StreamTest.DiskBackfillFail/persistent
unknown file: Failure
C++ exception with description "ActiveStream::transitionState: newState
(which is backfilling) is not valid for current state (which is dead)" thrown in the test body.
[ FAILED ] PersistentAndEphemeral/StreamTest.DiskBackfillFail/persistent, where GetParam() = "persistent" (143 ms)

Problem is a race between setting the stream to active and advancing
state to backfilling. Given the background task will already set the
state to backfilling, we can simply remove the explicit
transitionStateToBackfilling() call.

Change-Id: I45a68df657c9132924fd462bcd3be7b3e217446b
Reviewed-on: http://review.couchbase.org/94194
Reviewed-by: Jim Walker <jim@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# ae32b5ca 09-May-2018 Paolo Cocchi <paolo.cocchi@couchbase.com>

MB-29441: DCP Consumer sets correct noop-interval on pre-5.0.0 Producer

Note: the original patch (http://review.couchbase.org/#/c/93911) has been
reverted for MB-29599 (http://review.couchbase.org/#/c/94009). This new
patch contains the fix for MB-29599.

Original commit message:

In MB-19955 we decreased the noop-interval from 180 seconds to 1 second
for DCP Producers. That change is part of versions >=5.0.0.
Note that from MB-19955 a DCP Producer uses the noop-interval only for
sending NOOP messages to the Consumer. That is to implement Fast
Failover.
For detecting dead connections, a post-5.0.0 Producer uses a new
idle-timeout (default value is 360 seconds).

But, on pre-5.0.0 a DCP Producer has a single noop-interval parameter
(default value is 180 seconds), which is used for both sending NOOP
messages to the Consumer and Dead Connection Detection.

When a post-5.0.0 Consumer sets the noop-interval on a pre-5.0.0
Producer (e.g., Online Upgrade with Swap Rebalance, CBSE-5179), it sends
'1 second'. Then the Producer sets 1 second as noop-interval and uses it
for Dead Connection Detection. That makes the Producer drop all the
connections for which a NOOP response from the Consumer has not arrived
within 1 second.

To fix, we make the Consumer check if the Producer is
pre-5.0.0 and send the noop-interval accordingly (i.e., 180 seconds
if it is a pre-5.0.0 Producer, 1 second otherwise).
To check the version of the Producer, the Consumer sends a GetErrorMap
request and checks if the command is supported (the GetErrorMap command
is supported from versions >= 5.0.0).
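The decision above can be sketched like this. The interval values come from the commit message; the function name and the boolean probe result are assumptions, not the real DcpConsumer API:

```cpp
#include <cstdint>

// Noop intervals, in seconds, as described in the commit message.
constexpr uint32_t kPre500NoopIntervalSecs = 180;
constexpr uint32_t kPost500NoopIntervalSecs = 1;

// GetErrorMap is only supported by >= 5.0.0 producers, so a failed
// GetErrorMap probe identifies a pre-5.0.0 peer that still uses the
// noop-interval for dead connection detection.
uint32_t chooseNoopInterval(bool producerSupportsGetErrorMap) {
    return producerSupportsGetErrorMap ? kPost500NoopIntervalSecs
                                       : kPre500NoopIntervalSecs;
}
```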

Change-Id: Ie84b69d4943c5c3732509b727ae3b3f0e9893b39
Reviewed-on: http://review.couchbase.org/94055
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# e859e7d8 10-May-2018 Dave Rigby <daver@couchbase.com>

MB-29599: Revert "MB-29441: DCP Consumer sets correct noop-interval on pre-5.0.0 Producer"

Reverting as this change has resulted in buckets remaining in pending
state after adding a new bucket - error message seen:

2018-05-10T08:56:43.693017Z WARNING 281: Unsupported response packet received: fe, closing connection

This reverts commit d99b5a3ffc56f6f37a2d241ccd4e8f463fdf67c2.

Change-Id: Iac63be7a5dc526a3a79d57972bf8720e6c5ef87a
Reviewed-on: http://review.couchbase.org/94009
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# d99b5a3f 09-May-2018 Paolo Cocchi <paolo.cocchi@couchbase.com>

MB-29441: DCP Consumer sets correct noop-interval on pre-5.0.0 Producer

In MB-19955 we decreased the noop-interval from 180 seconds to 1 second
for DCP Producers. That change is part of versions >=5.0.0.
Note that from MB-19955 a DCP Producer uses the noop-interval only for
sending NOOP messages to the Consumer. That is to implement Fast
Failover.
For detecting dead connections, a post-5.0.0 Producer uses a new
idle-timeout (default value is 360 seconds).

But, on pre-5.0.0 a DCP Producer has a single noop-interval parameter
(default value is 180 seconds), which is used for both sending NOOP
messages to the Consumer and Dead Connection Detection.

When a post-5.0.0 Consumer sets the noop-interval on a pre-5.0.0
Producer (e.g., Online Upgrade with Swap Rebalance, CBSE-5179), it sends
'1 second'. Then the Producer sets 1 second as noop-interval and uses it
for Dead Connection Detection. That makes the Producer drop all the
connections for which a NOOP response from the Consumer has not arrived
within 1 second.

To fix, we make the Consumer check if the Producer is
pre-5.0.0 and send the noop-interval accordingly (i.e., 180 seconds
if it is a pre-5.0.0 Producer, 1 second otherwise).
To check the version of the Producer, the Consumer sends a GetErrorMap
request and checks if the command is supported (the GetErrorMap command
is supported from versions >= 5.0.0).

Change-Id: I140e1fe934a369ebb5d9a9a922c97aa2136803ea
Reviewed-on: http://review.couchbase.org/93911
Reviewed-by: Tim Bradgate <tim.bradgate@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# de9e025d 03-May-2018 Jim Walker <jim@couchbase.com>

MB-29480, MB-29512: Fail backfills that go below purge-seqno

If a backfill is requested and it is not a backfill of everything,
the start must be above the purge-seqno, otherwise a DCP client
may miss deletions which have been purged.

This is achieved by loading the purgeSeqno into the ScanContext
and having the backfill create step abort (setting the stream as dead).

The initScanContext method will have opened the data file (and kept
it open) so that the purge-seqno used in the final check won't
change again.

Change-Id: I7505529d46eb6f2b6006695baf7dd9f190237df9
Reviewed-on: http://review.couchbase.org/93690
Reviewed-by: Dave Rigby <daver@couchbase.com>
Reviewed-by: Tim Bradgate <tim.bradgate@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# 38ba8360 30-Apr-2018 Jim Walker <jim@couchbase.com>

MB-29369: Obtain the streamMutex earlier in the snapshot task

Obtain the streamMutex and also check the stream is in-memory /
takeover-send before the task increments the cursor.
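The locking pattern can be sketched as follows; the names are assumptions, not the real ActiveStream code. The point is that the mutex is acquired before the state check, so the state cannot change between the check and the cursor increment:

```cpp
#include <mutex>

// Illustrative snapshot task: state check and cursor increment happen
// under the same lock that protects state transitions.
class SnapshotTask {
public:
    enum class State { InMemory, TakeoverSend, Dead };

    // Returns true if the cursor was advanced.
    bool run() {
        std::lock_guard<std::mutex> lh(streamMutex);
        // Checked under streamMutex: no transition can interleave here.
        if (state != State::InMemory && state != State::TakeoverSend) {
            return false;
        }
        ++cursor;
        return true;
    }

    void setState(State s) {
        std::lock_guard<std::mutex> lh(streamMutex);
        state = s;
    }

    int cursor = 0;

private:
    std::mutex streamMutex;
    State state = State::InMemory;
};
```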

Change-Id: I82ba9b959859921062f817f9f8e2c1452cb852e7
Reviewed-on: http://review.couchbase.org/93497
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# 1fed5e07 14-Mar-2018 Dave Rigby <daver@couchbase.com>

MB-28746: UBSan: Ignore incorrect static_cast to derived

As reported by UBSan:

dcp_test.cc:2079:13: runtime error: downcast of address 0x000107b95000 which does not point to an object of type 'MockDcpConnMap'
0x000107b95000: note: object is of type 'DcpConnMap'

This cast is undefined behaviour - the DCP connection map object is of
type DcpConnMap; so it's undefined to cast to MockDcpConnMap. However,
in this instance MockDcpConnMap has identical member variables to
DcpConnMap - the mock just exposes normally private data - and so this
/seems/ ok.

Fixing it correctly would involve invasive changes to
EventuallyPersistentEngine to expose the DcpConnMap; therefore we
allow it in general, but skip this test under UBSan.

Change-Id: I75afbd586579dd79e6fc9818a4a90d515f3e9228
Reviewed-on: http://review.couchbase.org/91020
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Trond Norbye <trond.norbye@gmail.com>



Revision tags: v5.0.1
# dea18910 25-Oct-2017 Jim Walker <jim@couchbase.com>

MB-26455: Allow correct processing of 'old' keys and separator changes

The collection's separator can only be changed when 0 collections
exist. However nothing stops the separator from changing if there are
deleted collections (and their keys) in play.

Prior to this commit each separator change resulted in a single system
event being mutated, that event had a static key. Thus a VB could have
the following sequence of keys/events.

1 fruit::1
2 fruit::2
<fruit collection is deleted>
<separator is changed from :: to - >
<fruit collection is recreated>
6 fruit-1
7 fruit-2
<fruit collection is deleted>
<separator is changed from - to # >
9 $collections_separator (the Item value contains the new separator)
10 $collections::fruit (fruit recreated)
11 fruit#1
12 fruit#2

In this sequence, some of the changes are lost historically because a
static key is used for the separator change. Between seqno 2 and 6 the
separator changed from :: to -, but the separator change system event
is currently at seqno 9 with # as its value.

With this setup a problem exists if we were to process the historical
data e.g. whilst backfilling for DCP or performing a compaction
collection erase. The problem is that when we go back to seqno 1 and
2, we have no knowledge of the separator for those items, all we have
is the current # separator. We cannot determine that fruit::1 is a
fruit collection key.

This commit addresses this problem by making each separator change
generate a unique key. The key itself will encode the new separator,
and because the key is unique it will reside at the correct point in
history for each separator change.

The unique key format will be:

$collections_separator:<seqno>:<new_separator>

With this change the above sequence now looks as:

1 fruit::1
2 fruit::2
<fruit collection is deleted>
4 $collections_separator:3:- (change separator to -)
<fruit collection is recreated>
6 fruit-1
7 fruit-2
<fruit collection is deleted>
9 $collections_separator:8:# (change separator to #)
10 $collections::fruit (fruit recreated)
11 fruit#1
12 fruit#2

Now the code which looks at the historical data (e.g. backfill) will
encounter these separator change keys before it encounters collection
keys using that separator. Backfill and collections-erase can just
maintain a current separator and can now correctly split keys to
discover the collection they belong to. The collections eraser and
KVStore scanning code now include a collections context which has data
and associated code for doing this tracking.
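The tracking described above can be sketched like this. Only the key format comes from the commit message; the function names and parsing are illustrative:

```cpp
#include <string>

const std::string kSeparatorChangePrefix = "$collections_separator:";

// True for keys of the form
// "$collections_separator:<seqno>:<new_separator>".
bool isSeparatorChangeKey(const std::string& key) {
    return key.compare(0, kSeparatorChangePrefix.size(),
                       kSeparatorChangePrefix) == 0;
}

// Extract the new separator from a separator-change key.
std::string extractNewSeparator(const std::string& key) {
    auto pos = key.find(':', kSeparatorChangePrefix.size());
    return key.substr(pos + 1);
}

// Split a collection key using the currently-tracked separator; a
// backfill scan updates that separator each time it meets a
// separator-change key, so earlier keys are split correctly.
std::string collectionOf(const std::string& key,
                         const std::string& separator) {
    return key.substr(0, key.find(separator));
}
```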

A final piece of the commit is the garbage collection of these unique
keys. i.e. if each separator change puts a unique key into the seqno
index, how can we clean these away when they're no longer needed (i.e.
all fruit:: keys purged)?

Whilst the eraser runs it tracks all 'separator change' keys, because
a separator change can only happen when 0 collections exist, it can
assume that all but the latest separator change key are redundant once
the erase has completed. This list of keys are simply deleted in the
normal way by pushing a deleted Item into the checkpoint once
compaction is complete.

Change-Id: I4b38b04ed72d6b39ceded4a860c15260fd394118
Reviewed-on: http://review.couchbase.org/84801
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>



# e5e0876c 05-Mar-2018 Jim Walker <jim@couchbase.com>

MB-28428: DCP xattr stream needs to check for snappy

When DCP processes an item on a value/xattr only stream it needs to
consider that the value could also be compressed and must be
decompressed before pruning and recompressed when done.

Change-Id: I346cfed359e445068be575bdbf21ec13e38c6e12
Reviewed-on: http://review.couchbase.org/90426
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# 2288acbb 26-Feb-2018 Jim Walker <jim@couchbase.com>

MB-28370: Enable mem-tracking for DCPTest and stop negative mem_used

One of the DCP backfill tests began intermittently hanging after the
changes in 0739f2fd9. The test doesn't run with full memory tracking
and was relying only on the memOverhead changing. However in some
cases memOverhead had gone negative, resulting in a huge return
value from getEstimatedTotalMemory, which suspended the backfill and
caused the test to hang.

To fix:

1) Turn on full alloc/dealloc tracking when built with JEMALLOC so the
test can better track memory and avoid the backfill suspend.

2) Adjust getEstimatedTotalMemoryUsed so that, with or without
memoryTrackingEnabled, it doesn't return negative values
(which just become huge positive values).

2.1) Add tests for the negative cases
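Point (2) amounts to a clamp before the signed-to-unsigned conversion. A minimal sketch, with an illustrative signature rather than the real one:

```cpp
#include <cstdint>

// The estimate is accumulated in a signed counter which can
// transiently go negative; clamp at zero before converting to the
// unsigned return type so a negative value cannot wrap into a huge
// positive one.
uint64_t estimatedTotalMemoryUsed(int64_t signedEstimate) {
    return signedEstimate > 0 ? static_cast<uint64_t>(signedEstimate)
                              : 0;
}
```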

Change-Id: I9f32224eb412ab85ddf1501039bf767b0b9cf9df
Reviewed-on: http://review.couchbase.org/90053
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# 8abcb345 22-Feb-2018 Jim Walker <jim@couchbase.com>

MB-27199: Fix infinite loop in StreamTest.BackfillOnly/ephemeral

The while loop checks seqno != numItems, but the GAT loop is racing
and pushing the seqno up, so the test may sometimes never see
seqno == numItems, allowing it to hang.

Adjust the test so that the GATs complete before reading the stream
state and also make the while loop safe considering that the seqno
can now change and be unpredictable based on when the backfill and
GAT loop interacted.

Change-Id: Ia6437ff1b7d83ebdfd80482459d0f915aaec5b30
Reviewed-on: http://review.couchbase.org/89882
Reviewed-by: Dave Rigby <daver@couchbase.com>
Reviewed-by: Tim Bradgate <tim.bradgate@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# 0739f2fd 06-Feb-2018 Jim Walker <jim@couchbase.com>

MB-27199: Address TSAN issues with ephemeral backfill

The memory backfill will read much of a StoredValue when it does
StoredValue::toItem. All of the StoredValue members are generally
updated under the HashBucketLock, so obtain the same for correct
access.

The linked-list code (range read etc...) often reads the bySeqno of
an entry. In general StoredValues are either out of the hashtable,
won't have their bySeqno changed by a frontend op, and are protected
by the range-read lock; for elements in the hashtable, the hash-bucket
lock provides safe access for updates. However TSAN doesn't detect
this and sees someone writing the bySeqno with a hash-bucket lock and
someone reading it without the hash-bucket lock, hence the change to
StoredValue making it a RelaxedAtomic.
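The change can be sketched with plain std::atomic and relaxed ordering (the real code uses a RelaxedAtomic wrapper; this struct is illustrative). Relaxed operations give no ordering guarantees, but they make the concurrent read/write of bySeqno defined behaviour, which satisfies TSAN:

```cpp
#include <atomic>
#include <cstdint>

// Illustrative holder for the bySeqno field.
struct SeqnoHolder {
    std::atomic<int64_t> bySeqno{0};

    void setBySeqno(int64_t seqno) {
        // Relaxed store: no fence, just atomicity.
        bySeqno.store(seqno, std::memory_order_relaxed);
    }
    int64_t getBySeqno() const {
        // Relaxed load: safe to race with the store above.
        return bySeqno.load(std::memory_order_relaxed);
    }
};
```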

dcp_test is updated so that a backfill test exercises the original
issue in the MB.

Change-Id: Iadded56466b3ee92c075a3429936fcd578905b49
Reviewed-on: http://review.couchbase.org/88936
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# 35a7d6ae 14-Feb-2018 Manu Dhundi <manu@couchbase.com>

MB-27769: Remove conn from 'vbConns' map only when stream is erased

'Connmap' class holds a map of vbConns. We should not remove a
connection from the vbConns map unless we erase the stream for
that vbucket from the producer connections streamsMap.

vbConnsMap is used to notify the connection when items are ready for
a stream on a connection.

Change-Id: I2b945d7ba78f5266e1862429979ae8d22781bd4a
Reviewed-on: http://review.couchbase.org/89370
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Jim Walker <jim@couchbase.com>



# 6beefd65 07-Feb-2018 Sriram Ganesan <sriram@couchbase.com>

MB-27955: Enabling HELLO::Snappy on DCP connections should stream snappy
documents

When HELLO::Snappy is enabled on the DCP Producer, DCP should be able
to stream items of datatype SNAPPY. Right now, value compression is
only enabled on the producer if force_value_compression control
message is passed to the DCP Producer

Change-Id: I72975fe03beff3ba2f22aef9d88f62a6e9912dce
Reviewed-on: http://review.couchbase.org/89026
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>
Reviewed-by: Sergey Avseyev <sergey.avseyev@gmail.com>
Tested-by: Sergey Avseyev <sergey.avseyev@gmail.com>



# ca5c67aa 01-Feb-2018 Jim Walker <jim@couchbase.com>

MB-24860: Rename getTotalMemUsed to getEstimatedTotalMemoryUsed

The name better suits what the function returns: the value may read
more or less than the real allocated amount (with that +/-
controlled by the mem_used_merge_threshold_percent).

Change-Id: I7f82775b2bab9de9a25064dc7ea8660555653792
Reviewed-on: http://review.couchbase.org/88611
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Dave Rigby <daver@couchbase.com>



# 1ffa33a9 02-Feb-2018 Manu Dhundi <manu@couchbase.com>

[Minor refactor]: Improve ActiveStream::getOutstandingItems() api

ActiveStream::getOutstandingItems() func returns the outstanding
elements from a vbucket's checkpoint (those corresponding to the
stream's cursor).

Hence it is
(1) clearer to return the outstanding elements as the output of
the function than passing a param and updating it.
(2) more efficient to pass a reference to the vbucket than the
copy of the shared_ptr to the vbucket.

Change-Id: I15f26ba97c9a755f124c9029497f9dd087bb663d
Reviewed-on: http://review.couchbase.org/88795
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Tim Bradgate <tim.bradgate@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>



# d32e4b99 18-Jan-2018 Jim Walker <jim@couchbase.com>

MB-27457: [1/n] Stub out a new dcp_deletion engine callback

To allow engines to transmit the delete-time of deletes over DCP a
new packet format will be introduced. This new packet format is made
available to clients that explicitly enable collections or delete-time
on their producers.

This commit adds a stubbed out 'v2' delete callback which shows the
data the new packet format will send.

Note: Later commits will migrate the collection length field from the
legacy delete into this new formatted one, so for now it's duplicated.

Change-Id: Ife01c0e3479508a091f64cd5bf61398b98b38cfb
Reviewed-on: http://review.couchbase.org/88297
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>



# 2c5efbe7 16-Jan-2018 Manu Dhundi <manu@couchbase.com>

MB-27604: Don't rollback if start_seqno > purge_seqno > snap_start_seqno

We need a rollback due to purge in order to not miss out on any
permanently deleted items. Currently our check for rollback due to
purge is very strict and we ask the client to rollback if we have purged
an item in the snapshot the client is looking for.

However to not miss out on any permanently deleted items, we should ask
the clients to rollback only if the client wants to start from a seqno
that is less than the purge_seqno. That is, only if
"start_seqno > purge_seqno > snap_start_seqno".

Change-Id: Ibfae86b35a4fd26efc5b96b350748b3bc4621f78
Reviewed-on: http://review.couchbase.org/87929
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>



# d9711724 18-Jan-2018 Sriram Ganesan <sriram@couchbase.com>

MB-27542: rename enable_value_compression to force_value_compression

Given that a DCP client wants KV-engine to forcibly compress the values
over DCP, this control parameter is being renamed appropriately

Change-Id: Iff7f321fbf94a5580cf843bf0a5e48e86cde9dc7
Reviewed-on: http://review.couchbase.org/88009
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>



# c5bc93f8 18-Jan-2018 Sriram Ganesan <sriram@couchbase.com>

MB-27542: Datatype Snappy should be enabled for DCP compression

Before a DCP client can send a control message to forcibly
compress documents from the producer, HELLO::Snappy needs to be
negotiated on the producer connection

Change-Id: I53d780f4c5ca2c93e4aad2f7147c128d790fb07c
Reviewed-on: http://review.couchbase.org/87999
Tested-by: Build Bot <build@couchbase.com>
Reviewed-by: Dave Rigby <daver@couchbase.com>



# 687ed3e5 16-Jan-2018 Sriram Ganesan <sriram@couchbase.com>

Refactor: move makeCompressibleItem to test helpers

Creating a compressible item should be moved to generic test helpers
so that it can be used in tests other than DCP

Change-Id: Ia84a0ffcee4efceccc8eed4045fb05598aaa1d7b
Reviewed-on: http://review.couchbase.org/87877
Reviewed-by: Dave Rigby <daver@couchbase.com>
Tested-by: Build Bot <build@couchbase.com>


