
Commit 20f1a13

changes required after rebase on main
1 parent 68d627e commit 20f1a13

File tree

3 files changed: +21 -21 lines changed

- crates/iceberg/src/arrow/caching_delete_file_loader.rs
- crates/iceberg/src/arrow/delete_file_loader.rs
- crates/iceberg/src/delete_vector.rs

crates/iceberg/src/arrow/caching_delete_file_loader.rs

Lines changed: 18 additions & 18 deletions

@@ -20,7 +20,7 @@ use std::collections::HashMap;
 use futures::channel::oneshot;
 use futures::future::join_all;
 use futures::{StreamExt, TryStreamExt};
-use tokio::sync::oneshot::{channel, Receiver};
+use tokio::sync::oneshot::{Receiver, channel};
 
 use super::delete_filter::{DeleteFilter, EqDelFuture};
 use crate::arrow::delete_file_loader::BasicDeleteFileLoader;
@@ -70,30 +70,30 @@ impl CachingDeleteFileLoader {
     /// Returned future completes once all loading has finished.
     ///
     /// * Create a single stream of all delete file tasks irrespective of type,
-    ///     so that we can respect the combined concurrency limit
+    ///   so that we can respect the combined concurrency limit
     /// * We then process each in two phases: load and parse.
     /// * for positional deletes the load phase instantiates an ArrowRecordBatchStream to
-    ///     stream the file contents out
+    ///   stream the file contents out
     /// * for eq deletes, we first check if the EQ delete is already loaded or being loaded by
-    ///     another concurrently processing data file scan task. If it is, we return a future
-    ///     for the pre-existing task from the load phase. If not, we create such a future
-    ///     and store it in the state to prevent other data file tasks from starting to load
-    ///     the same equality delete file, and return a record batch stream from the load phase
-    ///     as per the other delete file types - only this time it is accompanied by a one-shot
-    ///     channel sender that we will eventually use to resolve the shared future that we stored
-    ///     in the state.
+    ///   another concurrently processing data file scan task. If it is, we return a future
+    ///   for the pre-existing task from the load phase. If not, we create such a future
+    ///   and store it in the state to prevent other data file tasks from starting to load
+    ///   the same equality delete file, and return a record batch stream from the load phase
+    ///   as per the other delete file types - only this time it is accompanied by a one-shot
+    ///   channel sender that we will eventually use to resolve the shared future that we stored
+    ///   in the state.
     /// * When this gets updated to add support for delete vectors, the load phase will return
-    ///     a PuffinReader for them.
+    ///   a PuffinReader for them.
     /// * The parse phase parses each record batch stream according to its associated data type.
-    ///     The result of this is a map of data file paths to delete vectors for the positional
-    ///     delete tasks (and in future for the delete vector tasks). For equality delete
-    ///     file tasks, this results in an unbound Predicate.
+    ///   The result of this is a map of data file paths to delete vectors for the positional
+    ///   delete tasks (and in future for the delete vector tasks). For equality delete
+    ///   file tasks, this results in an unbound Predicate.
     /// * The unbound Predicates resulting from equality deletes are sent to their associated oneshot
-    ///     channel to store them in the right place in the delete file managers state.
+    ///   channel to store them in the right place in the delete file managers state.
     /// * The results of all of these futures are awaited on in parallel with the specified
-    ///     level of concurrency and collected into a vec. We then combine all the delete
-    ///     vector maps that resulted from any positional delete or delete vector files into a
-    ///     single map and persist it in the state.
+    ///   level of concurrency and collected into a vec. We then combine all the delete
+    ///   vector maps that resulted from any positional delete or delete vector files into a
+    ///   single map and persist it in the state.
     ///
     ///
     /// Conceptually, the data flow is like this:
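The eq-delete handling described in the doc comment above (return a pre-existing shared future if another scan task is already loading the same file, otherwise register one and keep a oneshot sender to resolve it after parsing) can be illustrated with a small self-contained sketch. The names below (`EqDeleteState`, `EqDeleteLoad`, `get_or_register`, the `String` stand-in for the unbound `Predicate`, the example file path) are hypothetical and are not the crate's actual `DeleteFilter`/`EqDelFuture` machinery; only the `futures` `Shared` + tokio `oneshot` combination mirrors the imports in the diff.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

use futures::FutureExt;
use futures::future::{BoxFuture, Shared};
use tokio::sync::oneshot::{Receiver, Sender, channel};

// Hypothetical stand-in for the unbound `Predicate` built from an equality delete file.
type Predicate = String;
type SharedPredicateFuture = Shared<BoxFuture<'static, Predicate>>;

/// What the load phase hands back for an equality delete file.
enum EqDeleteLoad {
    /// Another task already registered this file; just await its shared future.
    Existing(SharedPredicateFuture),
    /// We are the first task to see this file; load and parse it, then resolve the sender.
    LoadAndResolve(Sender<Predicate>),
}

#[derive(Clone, Default)]
struct EqDeleteState {
    in_flight: Arc<Mutex<HashMap<String, SharedPredicateFuture>>>,
}

impl EqDeleteState {
    fn get_or_register(&self, file_path: &str) -> EqDeleteLoad {
        let mut in_flight = self.in_flight.lock().unwrap();
        if let Some(existing) = in_flight.get(file_path) {
            return EqDeleteLoad::Existing(existing.clone());
        }
        // Register a shared future *before* loading, so concurrent tasks for the
        // same file find it and do not start a second load.
        let (tx, rx): (Sender<Predicate>, Receiver<Predicate>) = channel();
        let fut = async move { rx.await.expect("loader dropped the sender") }
            .boxed()
            .shared();
        in_flight.insert(file_path.to_string(), fut);
        EqDeleteLoad::LoadAndResolve(tx)
    }
}

#[tokio::main]
async fn main() {
    let state = EqDeleteState::default();

    // The first scan task to reach this delete file becomes the loader.
    let EqDeleteLoad::LoadAndResolve(tx) = state.get_or_register("eq-del-0001.parquet") else {
        unreachable!("nothing registered yet");
    };

    // A second task for the same file only receives the shared future.
    let EqDeleteLoad::Existing(waiter) = state.get_or_register("eq-del-0001.parquet") else {
        unreachable!("already registered by the first task");
    };

    // The loader finishes its parse phase and resolves every waiter at once.
    tx.send("id NOT IN (1, 2, 3)".to_string()).unwrap();
    println!("second task saw predicate: {}", waiter.await);
}
```

The key design point this mirrors is that the shared future is stored under the file path before any loading happens, which is what prevents two concurrent data-file tasks from both reading and parsing the same equality delete file.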

crates/iceberg/src/arrow/delete_file_loader.rs

Lines changed: 1 addition & 1 deletion

@@ -19,8 +19,8 @@ use std::sync::Arc;
 
 use futures::{StreamExt, TryStreamExt};
 
-use crate::arrow::record_batch_transformer::RecordBatchTransformer;
 use crate::arrow::ArrowReader;
+use crate::arrow::record_batch_transformer::RecordBatchTransformer;
 use crate::io::FileIO;
 use crate::scan::{ArrowRecordBatchStream, FileScanTaskDeleteFile};
 use crate::spec::{Schema, SchemaRef};

crates/iceberg/src/delete_vector.rs

Lines changed: 2 additions & 2 deletions

@@ -17,9 +17,9 @@
 
 use std::ops::BitOrAssign;
 
+use roaring::RoaringTreemap;
 use roaring::bitmap::Iter;
 use roaring::treemap::BitmapIter;
-use roaring::RoaringTreemap;
 
 #[derive(Debug, Default)]
 pub struct DeleteVector {
@@ -63,7 +63,7 @@ impl Iterator for DeleteVectorIterator<'_> {
     type Item = u64;
 
     fn next(&mut self) -> Option<Self::Item> {
-        if let Some(ref mut inner) = &mut self.inner {
+        if let Some(inner) = &mut self.inner {
             if let Some(inner_next) = inner.bitmap_iter.next() {
                 return Some(u64::from(inner.high_bits) << 32 | u64::from(inner_next));
             }
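For context on the iterator change above: a `RoaringTreemap` stores 64-bit positions as a map from the high 32 bits to a 32-bit Roaring bitmap of the low halves, which is why `next` rebuilds each position as `high_bits << 32 | low`. Below is a minimal sketch of that composition (and of OR-merging two delete sets, which is the kind of operation the `BitOrAssign` import supports), written directly against the `roaring` crate rather than the crate's `DeleteVector` wrapper; the sample positions are made up.

```rust
use roaring::RoaringTreemap;

fn main() {
    // Two hypothetical positional-delete sets for the same data file.
    let mut combined = RoaringTreemap::new();
    combined.insert(7);
    // A position above u32::MAX exercises a second inner bitmap.
    combined.insert((1u64 << 32) + 42);

    let mut other = RoaringTreemap::new();
    other.insert(9);

    // Merging delete sets is a bitwise OR (what BitOrAssign enables on the wrapper).
    combined |= other;

    for pos in combined.iter() {
        let high = (pos >> 32) as u32; // selects which inner 32-bit bitmap the value lives in
        let low = pos as u32; // the value stored inside that inner bitmap
        // Recompose the position the same way DeleteVectorIterator::next does.
        assert_eq!(u64::from(high) << 32 | u64::from(low), pos);
        println!("high = {high}, low = {low}, deleted row position = {pos}");
    }
}
```

The `ref mut` removal in the diff itself is purely idiomatic: with `&mut self.inner`, the `if let Some(inner)` pattern already binds `inner` as a mutable reference, so the explicit `ref mut` is redundant.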
