Sort L0 files by newly introduced epoch_num (#10922)
Summary:
**Context:**
Sorting L0 files by `largest_seqno` has at least two drawbacks:
- File ingestion, and compaction involving ingested files, can create files whose seqno ranges overlap with those of existing files. `force_consistency_checks=true` flags any such seqno-range overlap, even when the overlap is harmless.
    - For example, consider the following sequence of events ("key@n" indicates that key at seqno "n"):
        - insert k1@1 into memtable m1
        - ingest file s1 with k2@2, then ingest file s2 with k3@3
        - insert k4@4 into m1
        - compact files s1 and s2, producing a new file s3 with seqno range [2, 3]
        - flush m1, producing a new file s4 with seqno range [1, 4]; `force_consistency_checks=true` will treat s3 and s4 as a file-reordering corruption that might cause an old value of k1 to be returned
    - However, the reported corruption is a false positive: by the requirements of file ingestion, s1 and s2 cannot contain keys overlapping k1 or anything else inserted into m1 before s1 was ingested (otherwise m1 would have been flushed before the ingestion could complete). So there is in fact no file-reordering corruption.
- A single delete can decrease a file's largest seqno, so ordering by `largest_seqno` can produce a wrong order and hence a real file-reordering corruption.
    - For example, consider the following sequence of events ("key@n" indicates that key at seqno "n"; credit to ajkr for this example):
        - an existing SST s1 contains only k1@1
        - insert k1@2 into memtable m1
        - ingest file s2 with k3@3, then ingest file s3 with k4@4
        - insert single delete k5@5 into m1
        - flush m1, producing a new file s4 with seqno range [2, 5]
        - compact s1, s2, s3, producing a new file s5 with seqno range [1, 4]
        - compact s4, producing a new file s6 with seqno range [2, 2] because of the single delete
    - After the last step, ordering by largest seqno (">" means "newer") gives s5 > s6, even though s6 contains a newer version of k1 (namely k1@2) than s5 does. That is a real reordering corruption. While it can be caught by `force_consistency_checks=true`, there is no good way to prevent it from happening when ordering by `largest_seqno`.

Therefore, we are redesigning the sorting criterion for L0 files to avoid the drawbacks above. Credit to ajkr: we now introduce `epoch_number`, which describes the order in which a file was flushed or ingested/imported (a compaction output file is assigned the minimum `epoch_number` among its input files'). This avoids the drawbacks in the following ways (a minimal sketch contrasting the two orderings follows this list):
- In the first case above, `force_consistency_checks=true` no longer checks for overlapping seqno ranges but instead checks `epoch_number` ordering. This yields file orderings s1 < s2 < s4 (pre-compaction) and s3 < s4 (post-compaction), which do not trigger a false-positive corruption. See the test class `DBCompactionTestL0FilesMisorderCorruption*` for more.
- In the second case above, this yields file orderings s1 < s2 < s3 < s4 (before compacting s1, s2, s3), s5 < s4 (after compacting s1, s2, s3), and s5 < s6 (after compacting s4), which are correct and cause no corruption.
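
To make both examples concrete, here is a minimal, self-contained C++ sketch. This is not RocksDB code: `L0File` and the comparators below are hypothetical stand-ins for `FileMetaData` and the L0 sort order, and the epoch numbers are filled in by hand following the rule above (flush/ingestion order; a compaction output takes the minimum of its inputs).

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical stand-in for the relevant FileMetaData fields.
struct L0File {
  const char* name;
  uint64_t smallest_seqno;
  uint64_t largest_seqno;
  uint64_t epoch_number;  // flush/ingestion order; compaction output = min of inputs
};

// "Newest first" orderings (descending), mirroring how L0 files are sorted.
bool NewerBySeqno(const L0File& a, const L0File& b) {
  return a.largest_seqno > b.largest_seqno;
}
bool NewerByEpoch(const L0File& a, const L0File& b) {
  return a.epoch_number > b.epoch_number;
}

int main() {
  std::cout << std::boolalpha;

  // Example 1: s1, s2 ingested with epochs 1 and 2, then m1 flushed as s4 with epoch 3.
  //   s3 = compact(s1, s2) -> seqnos [2, 3], epoch min(1, 2) = 1
  //   s4 = flush(m1)       -> seqnos [1, 4], epoch 3
  L0File s3{"s3", 2, 3, 1}, s4{"s4", 1, 4, 3};
  bool seqno_ranges_overlap = s3.smallest_seqno <= s4.largest_seqno &&
                              s4.smallest_seqno <= s3.largest_seqno;
  std::cout << "example 1: seqno ranges overlap (old false positive)? "
            << seqno_ranges_overlap << ", epoch order puts s4 after s3? "
            << NewerByEpoch(s4, s3) << "\n";

  // Example 2: epochs s1=1, s2=2, s3=3, s4=4 (flush), then
  //   s5 = compact(s1, s2, s3) -> seqnos [1, 4], epoch min(1, 2, 3) = 1
  //   s6 = compact(s4)         -> seqnos [2, 2], epoch 4
  std::vector<L0File> l0 = {{"s5", 1, 4, 1}, {"s6", 2, 2, 4}};
  std::sort(l0.begin(), l0.end(), NewerBySeqno);
  std::cout << "example 2 by largest_seqno: " << l0[0].name << " treated as newer (wrong)\n";
  std::sort(l0.begin(), l0.end(), NewerByEpoch);
  std::cout << "example 2 by epoch_number:  " << l0[0].name << " treated as newer (correct)\n";
}
```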

**Summary:**
- Introduce `epoch_number`, stored per `ColumnFamilyData`, and sort a CF's L0 files by their assigned `epoch_number` instead of by `largest_seqno`.
  - `epoch_number` is incremented and assigned upon `VersionEdit::AddFile()` for flush (and similarly for `WriteLevel0TableForRecovery`) and for file ingestion (except for ingest-behind ingestion, which is always assigned the reserved `kReservedEpochNumberForFileIngestedBehind`).
  - A compaction output file is assigned the minimum `epoch_number` among its input files' (see the first sketch after this list).
    - Refit level: reuse the refitted file's `epoch_number`.
  - Other paths needing `epoch_number` treatment:
    - Import column families: reuse the file's `epoch_number` if it exists; if not, assign one based on `NewestFirstBySeqNo` order.
    - Repair: reuse the file's `epoch_number` if it exists; if not, assign one based on `NewestFirstBySeqNo` order.
  - Assigning a new `epoch_number` to a file and adding that file to the LSM tree must be atomic. This is guaranteed by assigning the `epoch_number` right in `VersionEdit::AddFile()`, whose version edit is applied to the LSM tree shape right afterwards, either while holding the db mutex (e.g., flush, file ingestion, import column family) or where there is only one ongoing edit per CF (e.g., `WriteLevel0TableForRecovery`, `Repair`).
  - Assigning the minimum input epoch number to a compaction output file cannot misorder L0 files (even through a later `Refit(target_level=0)`): for every key "k" in the input range, a legitimate compaction covers a contiguous range of that key's epoch numbers. As long as key "k" is assigned the minimum input epoch number, it does not become newer or older than the versions of this key not included in this compaction, hence no misorder.
- Persist the `epoch_number` of each file in the manifest and recover `epoch_number` on db recovery.
  - Backward compatibility with old dbs without `epoch_number` support is guaranteed by assigning `epoch_number` to recovered files in `NewestFirstBySeqNo` order. See `VersionStorageInfo::RecoverEpochNumbers()` for more, and the second sketch after this list.
  - Forward compatibility of the manifest is guaranteed by the flexibility of `NewFileCustomTag`.
- Replace the `force_consistency_checks` check on L0 with an `epoch_number` ordering check, removing false positives like case 1 above under `largest_seqno`.
  - Due to the backward compatibility issue, we might encounter files with missing epoch numbers at the beginning of db recovery. We still use the old L0 sorting mechanism (`NewestFirstBySeqNo`) to check/sort them until we can infer their epoch numbers. See usages of `EpochNumberRequirement`.
- Remove the fix in #5958 (comment) and its outdated tests for file-reordering corruption, because that fix is superseded by this PR.
- Misc:
  - update existing tests for `epoch_number` so that `make check` passes
  - update the #5958 (comment) tests to verify that the corruption is fixed using `epoch_number`, covering universal/FIFO compaction, `CompactRange`, and `CompactFile` cases
  - assert that the db mutex is held in a few places before calling `ColumnFamilyData::NewEpochNumber()`
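
As a rough illustration of the assignment rules in the first bullet group above (the "first sketch" referenced there), here is a hedged, self-contained C++ sketch. Only `ColumnFamilyData::NewEpochNumber()`, `Compaction::MinInputFileEpochNumber()`, and `kReservedEpochNumberForFileIngestedBehind` are names from this PR; the structs and free functions wrapping them are hypothetical, and ingest-behind is only mentioned in a comment because its reserved value is not modeled here.

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <limits>
#include <vector>

// Minimal stand-in for the per-CF counter behind ColumnFamilyData::NewEpochNumber().
struct ColumnFamilyCounter {
  std::atomic<uint64_t> next_epoch_number{1};
  uint64_t NewEpochNumber() { return next_epoch_number.fetch_add(1); }
};

// Hypothetical stand-in for the relevant FileMetaData field.
struct FileMeta {
  uint64_t epoch_number = 0;
};

// Flush, regular file ingestion, and import: each newly added file gets the
// next epoch number at VersionEdit::AddFile() time.
// (Files ingested behind instead get the reserved constant
//  kReservedEpochNumberForFileIngestedBehind, not modeled here.)
uint64_t AssignForFlushOrIngest(ColumnFamilyCounter& cf) {
  return cf.NewEpochNumber();
}

// Compaction output: the minimum epoch number among the input files,
// mirroring Compaction::MinInputFileEpochNumber() in this PR.
uint64_t AssignForCompactionOutput(const std::vector<FileMeta>& inputs) {
  uint64_t min_epoch = std::numeric_limits<uint64_t>::max();
  for (const auto& f : inputs) {
    min_epoch = std::min(min_epoch, f.epoch_number);
  }
  return min_epoch;
}
```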
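
And for the backward-compatibility path (the "second sketch" referenced above): a hedged sketch of the idea behind inferring epoch numbers for files recovered from an old manifest. The helper below is hypothetical and only models L0; the real logic lives in `VersionStorageInfo::RecoverEpochNumbers()`. The intent is that files missing an epoch number are ordered newest-first by seqno and then given decreasing epoch numbers, so the inferred ordering matches the legacy `NewestFirstBySeqNo` ordering.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical stand-in; only the fields needed for this sketch.
struct LegacyFileMeta {
  uint64_t largest_seqno = 0;
  uint64_t epoch_number = 0;  // 0 means "missing" in this sketch
};

// Infer epoch numbers for L0 files recovered without them, so that a larger
// epoch number still means a newer L0 file. Returns the next epoch number
// the column family should hand out afterwards.
uint64_t InferL0EpochNumbers(std::vector<LegacyFileMeta*>& l0_files) {
  std::sort(l0_files.begin(), l0_files.end(),
            [](const LegacyFileMeta* a, const LegacyFileMeta* b) {
              return a->largest_seqno > b->largest_seqno;  // newest first
            });
  uint64_t epoch = l0_files.size();
  for (LegacyFileMeta* f : l0_files) {
    f->epoch_number = epoch--;  // newest file gets the largest epoch number
  }
  return static_cast<uint64_t>(l0_files.size()) + 1;
}
```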

Pull Request resolved: #10922

Test Plan:
- `make check`
- New unit tests under `db/db_compaction_test.cc`, `db/db_test2.cc`, `db/version_builder_test.cc`, `db/repair_test.cc`
- Updated the tests (i.e., `DBCompactionTestL0FilesMisorderCorruption*`) under #5958 (comment)
- [Ongoing] Compatibility test: manually run ajkr@36a5686 (with file ingestion turned off when running the `.orig` binary, to prevent this bug from affecting the upgrade/downgrade checking) for 1 hour on `simple black/white box`, `cf_consistency/txn/enable_ts with whitebox + test_best_efforts_recovery with blackbox`
- [Ongoing] normal db stress test
- [Ongoing] db stress test with aggressive value #10761

Reviewed By: ajkr

Differential Revision: D41063187

Pulled By: hx235

fbshipit-source-id: 826cb23455de7beaabe2d16c57682a82733a32a9
hx235 authored and facebook-github-bot committed Dec 13, 2022
1 parent 9b34c09 commit 98d5db5
Showing 44 changed files with 1,780 additions and 486 deletions.
2 changes: 2 additions & 0 deletions HISTORY.md
@@ -2,13 +2,15 @@
## Unreleased
### Behavior changes
* Make best-efforts recovery verify SST unique ID before Version construction (#10962)
* Introduce `epoch_number` and sort L0 files by `epoch_number` instead of `largest_seqno`. `epoch_number` represents the order of a file being flushed or ingested/imported. Compaction output file will be assigned with the minimum `epoch_number` among input files'. For L0, larger `epoch_number` indicates newer L0 file.

### Bug Fixes
* Fixed a regression in iterator where range tombstones after `iterate_upper_bound` is processed.
* Fixed a memory leak in MultiGet with async_io read option, caused by IO errors during table file open
* Fixed a bug that multi-level FIFO compaction deletes one file in non-L0 even when `CompactionOptionsFIFO::max_table_files_size` is no exceeded since #10348 or 7.8.0.
* Fixed a bug caused by `DB::SyncWAL()` affecting `track_and_verify_wals_in_manifest`. Without the fix, application may see "open error: Corruption: Missing WAL with log number" while trying to open the db. The corruption is a false alarm but prevents DB open (#10892).
* Fixed a BackupEngine bug in which RestoreDBFromLatestBackup would fail if the latest backup was deleted and there is another valid backup available.
* Fix L0 file misorder corruption caused by ingesting files of overlapping seqnos with memtable entries' through introducing `epoch_number`. Before the fix, `force_consistency_checks=true` may catch the corruption before it's exposed to readers, in which case writes returning `Status::Corruption` would be expected. Also replace the previous incomplete fix (#5958) to the same corruption with this new and more complete fix.

## 7.9.0 (11/21/2022)
### Performance Improvements
15 changes: 10 additions & 5 deletions db/column_family.cc
@@ -565,7 +565,8 @@ ColumnFamilyData::ColumnFamilyData(
allow_2pc_(db_options.allow_2pc),
last_memtable_id_(0),
db_paths_registered_(false),
mempurge_used_(false) {
mempurge_used_(false),
next_epoch_number_(1) {
if (id_ != kDummyColumnFamilyDataId) {
// TODO(cc): RegisterDbPaths can be expensive, considering moving it
// outside of this constructor which might be called with db mutex held.
@@ -1128,12 +1129,9 @@ bool ColumnFamilyData::NeedsCompaction() const {
Compaction* ColumnFamilyData::PickCompaction(
const MutableCFOptions& mutable_options,
const MutableDBOptions& mutable_db_options, LogBuffer* log_buffer) {
SequenceNumber earliest_mem_seqno =
std::min(mem_->GetEarliestSequenceNumber(),
imm_.current()->GetEarliestSequenceNumber(false));
auto* result = compaction_picker_->PickCompaction(
GetName(), mutable_options, mutable_db_options, current_->storage_info(),
log_buffer, earliest_mem_seqno);
log_buffer);
if (result != nullptr) {
result->SetInputVersion(current_);
}
@@ -1520,6 +1518,13 @@ FSDirectory* ColumnFamilyData::GetDataDir(size_t path_id) const {
return data_dirs_[path_id].get();
}

void ColumnFamilyData::RecoverEpochNumbers() {
assert(current_);
auto* vstorage = current_->storage_info();
assert(vstorage);
vstorage->RecoverEpochNumbers(this);
}

ColumnFamilySet::ColumnFamilySet(const std::string& dbname,
const ImmutableDBOptions* db_options,
const FileOptions& file_options,
20 changes: 20 additions & 0 deletions db/column_family.h
@@ -533,6 +533,24 @@ class ColumnFamilyData {
void SetMempurgeUsed() { mempurge_used_ = true; }
bool GetMempurgeUsed() { return mempurge_used_; }

// Allocate and return a new epoch number
uint64_t NewEpochNumber() { return next_epoch_number_.fetch_add(1); }

// Get the next epoch number to be assigned
uint64_t GetNextEpochNumber() const { return next_epoch_number_.load(); }

// Set the next epoch number to be assigned
void SetNextEpochNumber(uint64_t next_epoch_number) {
next_epoch_number_.store(next_epoch_number);
}

// Reset the next epoch number to be assigned
void ResetNextEpochNumber() { next_epoch_number_.store(1); }

// Recover the next epoch number of this CF and epoch number
// of its files (if missing)
void RecoverEpochNumbers();

private:
friend class ColumnFamilySet;
ColumnFamilyData(uint32_t id, const std::string& name,
@@ -634,6 +652,8 @@
// a Version associated with this CFD
std::shared_ptr<CacheReservationManager> file_metadata_cache_res_mgr_;
bool mempurge_used_;

std::atomic<uint64_t> next_epoch_number_;
};

// ColumnFamilySet has interesting thread-safety requirements
10 changes: 10 additions & 0 deletions db/compaction/compaction.cc
@@ -780,6 +780,16 @@ uint64_t Compaction::MinInputFileOldestAncesterTime(
return min_oldest_ancester_time;
}

uint64_t Compaction::MinInputFileEpochNumber() const {
uint64_t min_epoch_number = std::numeric_limits<uint64_t>::max();
for (const auto& inputs_per_level : inputs_) {
for (const auto& file : inputs_per_level.files) {
min_epoch_number = std::min(min_epoch_number, file->epoch_number);
}
}
return min_epoch_number;
}

int Compaction::EvaluatePenultimateLevel(
const VersionStorageInfo* vstorage,
const ImmutableOptions& immutable_options, const int start_level,
3 changes: 3 additions & 0 deletions db/compaction/compaction.h
@@ -378,6 +378,9 @@ class Compaction {
// This is used to filter out some input files' ancester's time range.
uint64_t MinInputFileOldestAncesterTime(const InternalKey* start,
const InternalKey* end) const;
// Return the minimum epoch number among
// input files' associated with this compaction
uint64_t MinInputFileEpochNumber() const;

// Called by DBImpl::NotifyOnCompactionCompleted to make sure number of
// compaction begin and compaction completion callbacks match.
2 changes: 2 additions & 0 deletions db/compaction/compaction_job.cc
@@ -1834,12 +1834,14 @@ Status CompactionJob::OpenCompactionOutputFile(SubcompactionState* sub_compact,
}

// Initialize a SubcompactionState::Output and add it to sub_compact->outputs
uint64_t epoch_number = sub_compact->compaction->MinInputFileEpochNumber();
{
FileMetaData meta;
meta.fd = FileDescriptor(file_number,
sub_compact->compaction->output_path_id(), 0);
meta.oldest_ancester_time = oldest_ancester_time;
meta.file_creation_time = current_time;
meta.epoch_number = epoch_number;
meta.temperature = temperature;
assert(!db_id_.empty());
assert(!db_session_id_.empty());
6 changes: 4 additions & 2 deletions db/compaction/compaction_job.h
@@ -402,6 +402,7 @@ struct CompactionServiceOutputFile {
std::string largest_internal_key;
uint64_t oldest_ancester_time;
uint64_t file_creation_time;
uint64_t epoch_number;
uint64_t paranoid_hash;
bool marked_for_compaction;
UniqueId64x2 unique_id;
@@ -411,15 +412,16 @@
const std::string& name, SequenceNumber smallest, SequenceNumber largest,
std::string _smallest_internal_key, std::string _largest_internal_key,
uint64_t _oldest_ancester_time, uint64_t _file_creation_time,
uint64_t _paranoid_hash, bool _marked_for_compaction,
UniqueId64x2 _unique_id)
uint64_t _epoch_number, uint64_t _paranoid_hash,
bool _marked_for_compaction, UniqueId64x2 _unique_id)
: file_name(name),
smallest_seqno(smallest),
largest_seqno(largest),
smallest_internal_key(std::move(_smallest_internal_key)),
largest_internal_key(std::move(_largest_internal_key)),
oldest_ancester_time(_oldest_ancester_time),
file_creation_time(_file_creation_time),
epoch_number(_epoch_number),
paranoid_hash(_paranoid_hash),
marked_for_compaction(_marked_for_compaction),
unique_id(std::move(_unique_id)) {}
14 changes: 8 additions & 6 deletions db/compaction/compaction_job_test.cc
@@ -380,11 +380,13 @@ class CompactionJobTestBase : public testing::Test {
}

VersionEdit edit;
edit.AddFile(level, file_number, 0, file_size, smallest_key, largest_key,
smallest_seqno, largest_seqno, false, Temperature::kUnknown,
oldest_blob_file_number, kUnknownOldestAncesterTime,
kUnknownFileCreationTime, kUnknownFileChecksum,
kUnknownFileChecksumFuncName, kNullUniqueId64x2);
edit.AddFile(
level, file_number, 0, file_size, smallest_key, largest_key,
smallest_seqno, largest_seqno, false, Temperature::kUnknown,
oldest_blob_file_number, kUnknownOldestAncesterTime,
kUnknownFileCreationTime,
versions_->GetColumnFamilySet()->GetDefault()->NewEpochNumber(),
kUnknownFileChecksum, kUnknownFileChecksumFuncName, kNullUniqueId64x2);

mutex_.Lock();
EXPECT_OK(
@@ -1655,7 +1657,7 @@ TEST_F(CompactionJobTest, ResultSerialization) {
rnd.RandomBinaryString(rnd.Uniform(kStrMaxLen)),
rnd.RandomBinaryString(rnd.Uniform(kStrMaxLen)),
rnd64.Uniform(UINT64_MAX), rnd64.Uniform(UINT64_MAX),
rnd64.Uniform(UINT64_MAX), rnd.OneIn(2), id);
rnd64.Uniform(UINT64_MAX), rnd64.Uniform(UINT64_MAX), rnd.OneIn(2), id);
}
result.output_level = rnd.Uniform(10);
result.output_path = rnd.RandomString(rnd.Uniform(kStrMaxLen));
23 changes: 6 additions & 17 deletions db/compaction/compaction_picker.cc
@@ -31,27 +31,15 @@ bool FindIntraL0Compaction(const std::vector<FileMetaData*>& level_files,
size_t min_files_to_compact,
uint64_t max_compact_bytes_per_del_file,
uint64_t max_compaction_bytes,
CompactionInputFiles* comp_inputs,
SequenceNumber earliest_mem_seqno) {
// Do not pick ingested file when there is at least one memtable not flushed
// which of seqno is overlap with the sst.
CompactionInputFiles* comp_inputs) {
TEST_SYNC_POINT("FindIntraL0Compaction");

size_t start = 0;
for (; start < level_files.size(); start++) {
if (level_files[start]->being_compacted) {
return false;
}
// If there is no data in memtable, the earliest sequence number would the
// largest sequence number in last memtable.
// Because all files are sorted in descending order by largest_seqno, so we
// only need to check the first one.
if (level_files[start]->fd.largest_seqno <= earliest_mem_seqno) {
break;
}
}
if (start >= level_files.size()) {

if (level_files.size() == 0 || level_files[start]->being_compacted) {
return false;
}

size_t compact_bytes = static_cast<size_t>(level_files[start]->fd.file_size);
size_t compact_bytes_per_del_file = std::numeric_limits<size_t>::max();
// Compaction range will be [start, limit).
@@ -995,6 +983,7 @@ Status CompactionPicker::SanitizeCompactionInputFilesForAllLevels(
current_files[f].name +
" is currently being compacted.");
}

input_files->insert(TableFileNameToNumber(current_files[f].name));
}

33 changes: 17 additions & 16 deletions db/compaction/compaction_picker.h
@@ -51,14 +51,15 @@ class CompactionPicker {
virtual ~CompactionPicker();

// Pick level and inputs for a new compaction.
//
// Returns nullptr if there is no compaction to be done.
// Otherwise returns a pointer to a heap-allocated object that
// describes the compaction. Caller should delete the result.
virtual Compaction* PickCompaction(
const std::string& cf_name, const MutableCFOptions& mutable_cf_options,
const MutableDBOptions& mutable_db_options, VersionStorageInfo* vstorage,
LogBuffer* log_buffer,
SequenceNumber earliest_memtable_seqno = kMaxSequenceNumber) = 0;
virtual Compaction* PickCompaction(const std::string& cf_name,
const MutableCFOptions& mutable_cf_options,
const MutableDBOptions& mutable_db_options,
VersionStorageInfo* vstorage,
LogBuffer* log_buffer) = 0;

// Return a compaction object for compacting the range [begin,end] in
// the specified level. Returns nullptr if there is nothing in that
@@ -91,6 +92,7 @@
// files. If it's not possible to conver an invalid input_files
// into a valid one by adding more files, the function will return a
// non-ok status with specific reason.
//
#ifndef ROCKSDB_LITE
Status SanitizeCompactionInputFiles(std::unordered_set<uint64_t>* input_files,
const ColumnFamilyMetaData& cf_meta,
@@ -255,12 +257,11 @@ class NullCompactionPicker : public CompactionPicker {
virtual ~NullCompactionPicker() {}

// Always return "nullptr"
Compaction* PickCompaction(
const std::string& /*cf_name*/,
const MutableCFOptions& /*mutable_cf_options*/,
const MutableDBOptions& /*mutable_db_options*/,
VersionStorageInfo* /*vstorage*/, LogBuffer* /* log_buffer */,
SequenceNumber /* earliest_memtable_seqno */) override {
Compaction* PickCompaction(const std::string& /*cf_name*/,
const MutableCFOptions& /*mutable_cf_options*/,
const MutableDBOptions& /*mutable_db_options*/,
VersionStorageInfo* /*vstorage*/,
LogBuffer* /* log_buffer */) override {
return nullptr;
}

@@ -304,11 +305,11 @@
// files. Cannot be nullptr.
//
// @return true iff compaction was found.
bool FindIntraL0Compaction(
const std::vector<FileMetaData*>& level_files, size_t min_files_to_compact,
uint64_t max_compact_bytes_per_del_file, uint64_t max_compaction_bytes,
CompactionInputFiles* comp_inputs,
SequenceNumber earliest_mem_seqno = kMaxSequenceNumber);
bool FindIntraL0Compaction(const std::vector<FileMetaData*>& level_files,
size_t min_files_to_compact,
uint64_t max_compact_bytes_per_del_file,
uint64_t max_compaction_bytes,
CompactionInputFiles* comp_inputs);

CompressionType GetCompressionType(const VersionStorageInfo* vstorage,
const MutableCFOptions& mutable_cf_options,
2 changes: 1 addition & 1 deletion db/compaction/compaction_picker_fifo.cc
@@ -402,7 +402,7 @@ Compaction* FIFOCompactionPicker::PickCompaction(
Compaction* FIFOCompactionPicker::PickCompaction(
const std::string& cf_name, const MutableCFOptions& mutable_cf_options,
const MutableDBOptions& mutable_db_options, VersionStorageInfo* vstorage,
LogBuffer* log_buffer, SequenceNumber /*earliest_memtable_seqno*/) {
LogBuffer* log_buffer) {
Compaction* c = nullptr;
if (mutable_cf_options.ttl > 0) {
c = PickTTLCompaction(cf_name, mutable_cf_options, mutable_db_options,
10 changes: 5 additions & 5 deletions db/compaction/compaction_picker_fifo.h
@@ -19,11 +19,11 @@ class FIFOCompactionPicker : public CompactionPicker {
const InternalKeyComparator* icmp)
: CompactionPicker(ioptions, icmp) {}

virtual Compaction* PickCompaction(
const std::string& cf_name, const MutableCFOptions& mutable_cf_options,
const MutableDBOptions& mutable_db_options, VersionStorageInfo* version,
LogBuffer* log_buffer,
SequenceNumber earliest_memtable_seqno = kMaxSequenceNumber) override;
virtual Compaction* PickCompaction(const std::string& cf_name,
const MutableCFOptions& mutable_cf_options,
const MutableDBOptions& mutable_db_options,
VersionStorageInfo* version,
LogBuffer* log_buffer) override;

virtual Compaction* CompactRange(
const std::string& cf_name, const MutableCFOptions& mutable_cf_options,
16 changes: 8 additions & 8 deletions db/compaction/compaction_picker_level.cc
@@ -50,15 +50,13 @@ class LevelCompactionBuilder {
public:
LevelCompactionBuilder(const std::string& cf_name,
VersionStorageInfo* vstorage,
SequenceNumber earliest_mem_seqno,
CompactionPicker* compaction_picker,
LogBuffer* log_buffer,
const MutableCFOptions& mutable_cf_options,
const ImmutableOptions& ioptions,
const MutableDBOptions& mutable_db_options)
: cf_name_(cf_name),
vstorage_(vstorage),
earliest_mem_seqno_(earliest_mem_seqno),
compaction_picker_(compaction_picker),
log_buffer_(log_buffer),
mutable_cf_options_(mutable_cf_options),
@@ -122,7 +120,6 @@ class LevelCompactionBuilder {

const std::string& cf_name_;
VersionStorageInfo* vstorage_;
SequenceNumber earliest_mem_seqno_;
CompactionPicker* compaction_picker_;
LogBuffer* log_buffer_;
int start_level_ = -1;
@@ -196,7 +193,10 @@ void LevelCompactionBuilder::SetupInitialFiles() {
}
output_level_ =
(start_level_ == 0) ? vstorage_->base_level() : start_level_ + 1;
if (PickFileToCompact()) {
bool picked_file_to_compact = PickFileToCompact();
TEST_SYNC_POINT_CALLBACK("PostPickFileToCompact",
&picked_file_to_compact);
if (picked_file_to_compact) {
// found the compaction!
if (start_level_ == 0) {
// L0 score = `num L0 files` / `level0_file_num_compaction_trigger`
@@ -825,16 +825,16 @@ bool LevelCompactionBuilder::PickIntraL0Compaction() {
return FindIntraL0Compaction(level_files, kMinFilesForIntraL0Compaction,
std::numeric_limits<uint64_t>::max(),
mutable_cf_options_.max_compaction_bytes,
&start_level_inputs_, earliest_mem_seqno_);
&start_level_inputs_);
}
} // namespace

Compaction* LevelCompactionPicker::PickCompaction(
const std::string& cf_name, const MutableCFOptions& mutable_cf_options,
const MutableDBOptions& mutable_db_options, VersionStorageInfo* vstorage,
LogBuffer* log_buffer, SequenceNumber earliest_mem_seqno) {
LevelCompactionBuilder builder(cf_name, vstorage, earliest_mem_seqno, this,
log_buffer, mutable_cf_options, ioptions_,
LogBuffer* log_buffer) {
LevelCompactionBuilder builder(cf_name, vstorage, this, log_buffer,
mutable_cf_options, ioptions_,
mutable_db_options);
return builder.PickCompaction();
}
10 changes: 5 additions & 5 deletions db/compaction/compaction_picker_level.h
@@ -20,11 +20,11 @@ class LevelCompactionPicker : public CompactionPicker {
LevelCompactionPicker(const ImmutableOptions& ioptions,
const InternalKeyComparator* icmp)
: CompactionPicker(ioptions, icmp) {}
virtual Compaction* PickCompaction(
const std::string& cf_name, const MutableCFOptions& mutable_cf_options,
const MutableDBOptions& mutable_db_options, VersionStorageInfo* vstorage,
LogBuffer* log_buffer,
SequenceNumber earliest_memtable_seqno = kMaxSequenceNumber) override;
virtual Compaction* PickCompaction(const std::string& cf_name,
const MutableCFOptions& mutable_cf_options,
const MutableDBOptions& mutable_db_options,
VersionStorageInfo* vstorage,
LogBuffer* log_buffer) override;

virtual bool NeedsCompaction(
const VersionStorageInfo* vstorage) const override;