Conversation
- Added `tiles daemon start <port>`, currently used for testing different p2p clients
- This makes sure the iroh endpointId can be decoded into the tiles account DID and vice-versa; fundamentally, both are built from the same tiles keypair underneath
- Linked devices are now added to the local DB with an account type of SELF
- Prevented redundant linking
- Added a new flag `-x` or `--no_repl` to the tiles command, which prevents the tool from going directly into the REPL. This is useful if somebody wants to set up the tiles config without downloading a model; particularly useful in p2p testing
Note: Reviews paused
It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough
This pull request introduces P2P device linking functionality to Tiles.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User1
    actor User2
    participant Ep1 as Local Endpoint<br/>(User 1)
    participant Gossip as Gossip Network
    participant Ep2 as Remote Endpoint<br/>(User 2)
    participant DB2 as Peer DB<br/>(User 2)
    User1->>Ep1: tiles link enable --ticket ABC
    activate Ep1
    Ep1->>Ep1: Parse ticket, derive topic
    Ep1->>Gossip: Subscribe to topic
    activate Gossip
    Ep1->>Gossip: Broadcast LinkRequest<br/>(did, nickname, nonce)
    Ep2->>Gossip: Subscribed to topic
    Gossip->>Ep2: Receive LinkRequest
    activate Ep2
    Ep2->>User2: Prompt: Accept link from User1?
    User2->>Ep2: Y/N response (stdin)
    alt User2 accepts
        Ep2->>DB2: Save peer link
        Ep2->>Gossip: Broadcast LinkAccepted
        Gossip->>Ep1: Receive LinkAccepted
        Ep1->>Ep1: Peer link established
    else User2 rejects
        Ep2->>Gossip: Broadcast LinkRejected
        Gossip->>Ep1: Receive LinkRejected
        Ep1->>Ep1: Link request denied
    end
    deactivate Ep2
    deactivate Gossip
    deactivate Ep1
```
```mermaid
sequenceDiagram
    actor User2
    participant Ep2 as Local Endpoint<br/>(User 2)
    participant Gossip as Gossip Network
    participant DB2 as Peer DB
    participant mDNS as mDNS Registry
    User2->>Ep2: tiles link enable
    activate Ep2
    Ep2->>Ep2: Generate LinkTicket<br/>or offline code
    Ep2->>mDNS: Register endpoint<br/>(offline mode)
    Ep2->>Gossip: Subscribe to topic
    activate Gossip
    Ep2->>Ep2: Print ticket/code<br/>for out-of-band sharing
    Ep2->>Ep2: Wait for LinkRequest
    note over Ep2,Gossip: Listening for peer connection
    Gossip->>Ep2: LinkRequest arrives
    Ep2->>User2: Prompt: Accept link<br/>from peer?
    User2->>Ep2: Y/N response
    alt User accepts
        Ep2->>DB2: Save peer account
        Ep2->>Gossip: Broadcast LinkAccepted
    else User rejects
        Ep2->>Gossip: Broadcast LinkRejected
    end
    deactivate Gossip
    deactivate Ep2
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 1 passed | ❌ 2 failed
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tiles/src/core/accounts.rs (1)
47-57: ⚠️ Potential issue | 🔴 Critical

`TryFrom<String>` doesn't handle `"self"`, breaking `get_current_user` for linked accounts.

The `ACCOUNT::SELF` variant's `Display` outputs `"self"`, but `TryFrom<String>` only recognizes `"local"`. When `save_self_account_db` stores a `SELF` account, any subsequent call to `get_current_user` (or similar queries parsing `account_type`) will fail with "Invalid account type" because `"self"` isn't matched. This is confirmed by the database schema (context snippet 1) and the usage in `network/mod.rs` (context snippet 2) where `save_self_account_db` is called.

🐛 Proposed fix to add `"self"` parsing
```diff
 impl TryFrom<String> for ACCOUNT {
     type Error = AccountError;
     fn try_from(value: String) -> Result<Self, Self::Error> {
         let value_lower = value.to_lowercase();
         match value_lower.as_str() {
             "local" => Ok(ACCOUNT::LOCAL),
+            "self" => Ok(ACCOUNT::SELF),
             _ => Err(AccountError {
                 error: "Invalid account type".to_owned(),
             }),
         }
     }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/accounts.rs` around lines 47 - 57, The TryFrom<String> implementation for ACCOUNT (impl TryFrom<String> for ACCOUNT) currently only matches "local" and returns an AccountError for any other string; update the match in try_from to also accept "self" (e.g., add a branch matching "self" => Ok(ACCOUNT::SELF)) so values produced by ACCOUNT::SELF's Display (and saved by save_self_account_db) parse back correctly; keep the existing to_lowercase() behavior so matching remains case-insensitive.
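The round-trip property this fix restores can be sketched std-only. `Account` below is a stand-in for the real `ACCOUNT` enum and its error type is simplified to a `String`; only the `"local"`/`"self"` match arms come from the review:

```rust
use std::convert::TryFrom;
use std::fmt;

// Stand-in for the real ACCOUNT enum; AccountError simplified to String.
#[allow(non_camel_case_types)]
#[derive(Debug, PartialEq, Clone, Copy)]
enum Account {
    LOCAL,
    SELF,
}

impl fmt::Display for Account {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Account::LOCAL => write!(f, "local"),
            Account::SELF => write!(f, "self"),
        }
    }
}

impl TryFrom<String> for Account {
    type Error = String;

    fn try_from(value: String) -> Result<Self, Self::Error> {
        match value.to_lowercase().as_str() {
            "local" => Ok(Account::LOCAL),
            // The arm the review says is missing: without it, rows written
            // via Display ("self") can never be parsed back.
            "self" => Ok(Account::SELF),
            _ => Err(format!("Invalid account type: {value}")),
        }
    }
}

fn main() {
    // Round-trip property: whatever Display emits must parse back.
    for acct in [Account::LOCAL, Account::SELF] {
        assert_eq!(Account::try_from(acct.to_string()).unwrap(), acct);
    }
    println!("round-trip ok");
}
```

A round-trip loop like the one in `main` would make a cheap unit test guarding against the same drift if a third variant is ever added.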
🧹 Nitpick comments (7)
tiles/src/core/network/ticket.rs (1)
8-15: Consider adding `Clone` derive for `LinkTicket`.

`LinkTicket` contains fields that are all `Clone`-able (`String`, `EndpointAddr`, `TopicId`). Adding `Clone` would make it more ergonomic to use without consuming the ticket.

♻️ Proposed change
```diff
 //TODO: Add tests
-#[derive(serde::Serialize, serde::Deserialize, Debug)]
+#[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
 pub struct LinkTicket {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/ticket.rs` around lines 8 - 15, The LinkTicket struct lacks a Clone derive though all its fields (String, EndpointAddr, TopicId) implement Clone; update the struct declaration for LinkTicket to also derive Clone so instances can be duplicated without being consumed — add Clone alongside serde::Serialize, serde::Deserialize, Debug in the derive list for the LinkTicket type.

tilekit/src/accounts.rs (1)
54-59: Fix typo in parameter name: `publick_key` → `public_key`.

Minor typo in the function signature.
✏️ Proposed fix
```diff
-pub fn get_did_from_public_key(publick_key: &[u8; 32]) -> Result<String> {
-    let verifying_key = VerifyingKey::from_bytes(publick_key)?;
+pub fn get_did_from_public_key(public_key: &[u8; 32]) -> Result<String> {
+    let verifying_key = VerifyingKey::from_bytes(public_key)?;
     let ed_did = Ed25519Did::from(verifying_key);
     Ok(ed_did.to_string())
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tilekit/src/accounts.rs` around lines 54 - 59, Rename the function parameter in get_did_from_public_key from publick_key to public_key to fix the typo; update the parameter name in the function signature and all references inside the function (e.g., the call to VerifyingKey::from_bytes) and any external call sites that pass the argument so they use the corrected identifier public_key.

tiles/src/daemon.rs (2)
27-27: Consider using `u16` for port numbers.

Network ports are 16-bit values (0–65535). Using `u32` allows invalid port values and requires casting when binding. Consider using `u16` throughout for type safety.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/daemon.rs` at line 27, The DEFAULT_PORT constant is declared as u32 but ports are 16-bit; change DEFAULT_PORT: u16 = 1729 and update any places that reference DEFAULT_PORT (including config structs, function signatures, parsing from env vars, and socket bind calls) to use u16 or parse with parse::<u16>() to avoid unnecessary casting; where APIs require a different integer type (e.g., SocketAddr or bind functions), perform an explicit, checked conversion (u16 -> u16 as required or u16::into()/as u32 only after validating) so port values remain in 0..=65535 and type safety is preserved.
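The `u16` suggestion (and the `unwrap_or` simplification flagged just below) can be sketched together. `DEFAULT_PORT = 1729` and `get_port` mirror names from the review; `parse_port` is a hypothetical helper showing that `str::parse::<u16>()` already enforces the 0..=65535 range:

```rust
// DEFAULT_PORT and get_port mirror the names in tiles/src/daemon.rs;
// parse_port is a hypothetical helper for env-var/CLI input.
const DEFAULT_PORT: u16 = 1729;

fn get_port(port: Option<u16>) -> u16 {
    port.unwrap_or(DEFAULT_PORT)
}

// parse::<u16>() rejects anything outside 0..=65535, so no manual
// range check (or later cast at the bind site) is needed.
fn parse_port(raw: &str) -> Result<u16, String> {
    raw.parse::<u16>()
        .map_err(|e| format!("invalid port {raw:?}: {e}"))
}

fn main() {
    assert_eq!(get_port(None), DEFAULT_PORT);
    assert_eq!(get_port(Some(8080)), 8080);
    assert!(parse_port("70000").is_err()); // out of u16 range
    assert_eq!(parse_port("443").unwrap(), 443);
    println!("port handling ok");
}
```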
163-169: Simplify `get_port` using `unwrap_or`.

The function can be simplified using idiomatic Rust.
♻️ Proposed simplification
```diff
 fn get_port(port: Option<u32>) -> u32 {
-    if let Some(port_number) = port {
-        port_number
-    } else {
-        DEFAULT_PORT
-    }
+    port.unwrap_or(DEFAULT_PORT)
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/daemon.rs` around lines 163 - 169, The get_port function currently matches on Option manually; simplify it by returning port.unwrap_or(DEFAULT_PORT) instead of the if-let block—update the get_port definition to use unwrap_or with DEFAULT_PORT to make it idiomatic and concise while keeping the same behavior.

tiles/src/core/accounts.rs (1)
263-295: Consider checking for duplicate `user_id` before insert.

The TODO comment mentions deferring unique constraints, but the current code will fail with a database error if the same `user_id` with `account_type = "self"` is inserted twice (due to the `UNIQUE(account_type, user_id)` constraint). Consider using `INSERT OR IGNORE` or checking existence first with `get_user_by_user_id`.

♻️ Proposed defensive insert
```diff
 pub fn save_self_account_db(db_conn: &Connection, user_id: &str, nickname: &str) -> Result<()> {
+    // Check if user already exists to avoid constraint violation
+    if get_user_by_user_id(db_conn, user_id).is_ok() {
+        return Ok(()); // Already linked
+    }
     let user = User {
     // ... rest of function
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/accounts.rs` around lines 263 - 295, save_self_account_db currently inserts a user without checking for an existing (account_type, user_id) pair which will error due to the UNIQUE constraint; modify save_self_account_db to avoid duplicate-insert failures by either performing an existence check with get_user_by_user_id (and returning early) or switching the SQL to a defensive insert like "INSERT OR IGNORE" (or "INSERT ON CONFLICT DO NOTHING") for the query executed in save_self_account_db, ensuring you still populate/return appropriate result behavior when the row is skipped.

tiles/src/main.rs (1)
226-235: Clarify fallback behavior when no daemon subcommand is provided.

Line 234 falls back to `start_server(None)` when `daemon_args.command` is `None`, but `Daemon Start` with an explicit port would use `start_cmd(port)`. This asymmetry might be intentional (direct server vs daemon mode), but consider adding a comment or unifying the behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/main.rs` around lines 226 - 235, The match falls back to start_server(None) when daemon_args.command is None, which is asymmetric with DaemonCommands::Start calling start_cmd(port); either add a brief comment above the match clarifying that a missing daemon subcommand intentionally runs the server in foreground via start_server(None) (so Commands::Daemon with no subcommand = direct server run), or if you want unified behavior change the fallback to call start_cmd with a sensible default (e.g., start_cmd(default_port) or refactor start_cmd to accept Option<port> and call start_cmd(None)) so that daemon and explicit Start use the same startup code; update the code accordingly around Commands::Daemon, daemon_args.command, DaemonCommands::Start, start_cmd and start_server.

tiles/src/core/network/mod.rs (1)
193-250: Typo in function name: `subsribe_loop` → `subscribe_loop`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 193 - 250, Rename the async function subsribe_loop to subscribe_loop (fix the typo) by updating its declaration (async fn subsribe_loop(...) -> Result<()> to async fn subscribe_loop(...) -> Result<()>) and then update every reference/call site, import, or export that uses subsribe_loop to use subscribe_loop instead (search for subsribe_loop usages in the module and tests); ensure signatures and visibility remain unchanged so callers compile.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@modelfiles/qwen`:
- Around line 1-3: Update the Dockerfile base selection to make the model choice
explicit: decide whether the downgrade to Qwen3.5-0.8B is intentional for this
PR or revert to Qwen3.5-4B-MLX-4bit; then keep exactly one authoritative FROM
line (either the active "FROM mlx-community/Qwen3.5-0.8B" or the 4B MLX variant)
and remove the other commented alternatives, or replace them with a single short
comment explaining the rationale and scope (e.g., resource-constrained testing
only, not part of device-linking PR). Reference the existing FROM entries
("mlx-community/Qwen3.5-0.8B", "mlx-community/Qwen3.5-0.8B-MLX-8bit",
"mlx-community/Qwen3.5-4B-MLX-4bit") when making the change so reviewers can
verify the intended model and constraints.
In `@tilekit/src/accounts.rs`:
- Around line 38-47: The get_secret_key function blindly uses
bytes[..64].copy_from_slice(secret_pair.as_slice()) which will panic if
secret_pair.len() != 64; add an explicit length check on secret_pair (e.g., if
secret_pair.len() != 64) before copying, and return a descriptive Err from
get_secret_key when the length is invalid so the caller can handle the failure,
then only perform the copy and create the SigningKey when length == 64
(referencing get_secret_key, secret_pair, copy_from_slice,
SigningKey::from_keypair_bytes).
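The length check asked for above can be sketched std-only. `keypair_bytes` is a hypothetical extraction of just the copy step; the actual `SigningKey::from_keypair_bytes` call is elided since it lives in an external crate:

```rust
// Hypothetical extraction of the step the review flags: validate the slice
// length before copy_from_slice, which panics on a length mismatch.
fn keypair_bytes(secret_pair: &[u8]) -> Result<[u8; 64], String> {
    if secret_pair.len() != 64 {
        return Err(format!(
            "expected a 64-byte keypair, got {} bytes",
            secret_pair.len()
        ));
    }
    let mut bytes = [0u8; 64];
    bytes.copy_from_slice(secret_pair); // safe: length verified above
    // The real get_secret_key would now call SigningKey::from_keypair_bytes(&bytes).
    Ok(bytes)
}

fn main() {
    assert!(keypair_bytes(&[1u8; 64]).is_ok());
    assert!(keypair_bytes(&[1u8; 32]).is_err()); // previously a panic
    println!("length check ok");
}
```

`<[u8; 64]>::try_from(secret_pair)` achieves the same validate-and-copy in one fallible call, if a shorter form is preferred.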
In `@tiles/Cargo.toml`:
- Around line 24-31: The Cargo dependency entry for iroh-gossip is incorrect:
replace the iroh-gossip = "0.97.0" entry with iroh-gossip = "0.96.0" to match
crates.io and iroh's semver compatibility; also review the postcard dependency
(postcard = "1.1.3") and enable required features (e.g., alloc or use-std) if
the code paths in serialization/deserialization (search for postcard::to_vec,
postcard::from_bytes, or types using serde/postcard) require them.
In `@tiles/src/core/network/mod.rs`:
- Around line 100-108: The branch handling the optional ticket is using inverted
logic: after parsing LinkTicket with LinkTicket::from_str, change the check on
get_user_by_user_id(&user_db_conn, &link_ticket.did).is_err() so it treats Err
as "not linked" and Ok as "already linked"; i.e., replace the current condition
that prints "Device ... already linked" and returns when is_err() is true with
the opposite check (or adjust the messages/flow) so that when
get_user_by_user_id returns Err you proceed to link, and when it returns Ok you
log "Device ... already linked" and return early using link_ticket.nickname and
link_ticket.did for the message.
- Around line 208-210: The current synchronous call stdin.read_line(&mut input)?
will block the Tokio executor; replace it with an async-safe approach: either
wrap the blocking call in tokio::task::spawn_blocking and await its Result
(e.g., let input = tokio::task::spawn_blocking(|| { let mut s = String::new();
std::io::stdin().read_line(&mut s).map(|_| s) }).await?? ) or use Tokio's async
API by replacing std::io::stdin()/read_line with tokio::io::stdin(),
tokio::io::BufReader::new, and tokio::io::AsyncBufReadExt::read_line(&mut buf,
&mut String).await; ensure you update the surrounding code to await the future
and propagate or map errors accordingly where stdin.read_line(&mut input)? was
used.
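The polarity fix for the already-linked check (around line 100-108 above) can be sketched with std-only stand-ins: a `HashMap` replaces the SQLite peer DB, and `get_user_by_user_id`/`should_link` only mirror the contract described in the review, not the real signatures:

```rust
use std::collections::HashMap;

// HashMap stands in for the SQLite peer DB; mirrors the real function's
// contract: Ok(_) if the DID is known, Err(_) otherwise.
fn get_user_by_user_id(db: &HashMap<String, String>, did: &str) -> Result<String, String> {
    db.get(did).cloned().ok_or_else(|| "not found".to_owned())
}

// Correct polarity: Ok means already linked (skip), Err means proceed to link.
fn should_link(db: &HashMap<String, String>, did: &str, nickname: &str) -> bool {
    match get_user_by_user_id(db, did) {
        Ok(_) => {
            println!("Device {nickname} ({did}) already linked");
            false
        }
        Err(_) => true,
    }
}

fn main() {
    let db = HashMap::from([("did:key:abc".to_owned(), "laptop".to_owned())]);
    assert!(!should_link(&db, "did:key:abc", "laptop")); // known peer: skip
    assert!(should_link(&db, "did:key:xyz", "phone")); // unknown peer: link
    println!("polarity ok");
}
```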
In `@tiles/src/core/network/ticket.rs`:
- Around line 20-22: The to_bytes method on Ticket currently panics with a
misleading message; update the Ticket::to_bytes implementation to stop
referencing "serde_json" and instead include the real postcard error details
when serialization fails (e.g. replace expect("serde_json to bytes couldnt be
done") with an unwrap_or_else or expect message that references postcard and
prints the error like: panic!("postcard to bytes failed: {}", err)). Keep the fn
name to_bytes and the postcard::to_stdvec(&self) call but change the error
handling to include the actual serialization error string.
In `@tiles/src/daemon.rs`:
- Around line 36-38: stop_cmd currently hardcodes no port and calls
stop_server(None), so daemons started with start_cmd(Some(port)) can't be
stopped; change stop_cmd to accept an Option<port> (e.g., Option<u16>) and
forward that to stop_server (call stop_server(port).await) and update the CLI
entry that invokes stop_cmd to pass through the user-specified port from the CLI
flags; ensure symbol references: stop_cmd, stop_server, start_cmd are updated
consistently.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 59bcb37a-898d-4be4-bca8-0b8cb1d548e8
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (9)
- modelfiles/qwen
- tilekit/src/accounts.rs
- tiles/Cargo.toml
- tiles/src/core/accounts.rs
- tiles/src/core/mod.rs
- tiles/src/core/network/mod.rs
- tiles/src/core/network/ticket.rs
- tiles/src/daemon.rs
- tiles/src/main.rs
Codecov Report: ❌ Patch coverage is …
… fixes - Uses blocking process for reading from std read_line
Actionable comments posted: 1
🧹 Nitpick comments (5)
tiles/src/core/network/mod.rs (5)
83-98: Redundant endpoint clones and unclear shutdown order.

`ep` and `ep2` are cloned from `endpoint` but the usage pattern is confusing. `ep2.close()` is called before `recv_router.shutdown()`, but typically you'd want to shut down the router first before closing the endpoint it's bound to.

♻️ Suggested simplification
```diff
 } else {
     let endpoint = Endpoint::bind(presets::N0).await?;
-    let ep = endpoint.clone();
-    let ep2 = endpoint.clone();
     endpoint.online().await;
     let ping = Ping::new();
     let ticket = EndpointTicket::new(endpoint.addr());
     println!("ticket\n{:?}", ticket.to_string());
-    let recv_router = Router::builder(ep).accept(iroh_ping::ALPN, ping).spawn();
-    ep2.close().await;
+    let recv_router = Router::builder(endpoint.clone())
+        .accept(iroh_ping::ALPN, ping)
+        .spawn();
     recv_router.shutdown().await?;
+    endpoint.close().await;
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 83 - 98, The endpoint cloning and shutdown ordering is confusing: remove the unnecessary second clone (ep2) and use a single clone for building the router (e.g., let ep = endpoint.clone(); let recv_router = Router::builder(ep).accept(...).spawn();), then shut down the router before closing the bound endpoint (call recv_router.shutdown().await? first, then endpoint.close().await), ensuring Endpoint::bind, Router::builder, recv_router.shutdown, and endpoint.close are used in that order to avoid closing the transport while the router is still running.
213-222: Consider simplifying the input capture from `spawn_blocking`.

Using `Arc<Mutex<String>>` to capture input from a blocking task is more complex than necessary. `spawn_blocking` can return the value directly.

♻️ Simplified approach
```diff
-    let input: Arc<Mutex<String>> = Arc::new(Mutex::new(String::new()));
-
-    let input_clone = input.clone();
     let stdin = io::stdin();
-    spawn_blocking(move || {
-        let mut input_clone = input_clone.lock().unwrap();
-        let _ = stdin.read_line(&mut input_clone);
+    let input_resp = spawn_blocking(move || {
+        let mut buf = String::new();
+        stdin.read_line(&mut buf)?;
+        Ok::<_, io::Error>(buf.trim().to_owned())
     })
-    .await?;
-    let input_resp = input.lock().unwrap().trim().to_owned();
+    .await??;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 213 - 222, The current code uses Arc<Mutex<String>> (symbols: input, input_clone) to capture stdin inside spawn_blocking and then reads the locked string into input_resp; simplify by having spawn_blocking return the String directly: call spawn_blocking with a closure that creates a local String, reads into it from io::stdin(), and returns that String (handle the Result inside the closure), await the JoinHandle to get the String, then trim and assign to input_resp—remove the Arc/Mutex usage and related clones.
198-198: Typo: `subsribe_loop` should be `subscribe_loop`.

The function name is misspelled at declaration (line 198) and call sites (lines 132, 182).
✏️ Proposed fix
```diff
-tokio::spawn(subsribe_loop(
+tokio::spawn(subscribe_loop(
```
```diff
-async fn subsribe_loop(
+async fn subscribe_loop(
```
Also applies to: 132-132, 182-182
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` at line 198, The function name subsribe_loop is misspelled; rename the function to subscribe_loop and update all call sites that reference subsribe_loop (notably the calls near the original locations that invoked it) to use subscribe_loop; ensure the async fn declaration (subsribe_loop) and all invocations (e.g., where subsribe_loop(...) is awaited/spawned) are changed consistently and run cargo build to verify there are no remaining references.
49-54: Inconsistent error handling between `from_bytes` and `to_bytes`.

`from_bytes` returns `Result` while `to_bytes` uses `expect()` and panics on failure. Consider making them consistent. If `to_bytes` must return `Vec<u8>` for API compatibility, at minimum use `unwrap_or_else` to log context:

♻️ Suggested improvement
```diff
 fn to_bytes(&self) -> Vec<u8> {
-    postcard::to_stdvec(&self).expect("Failed to convert to bytes w postcard")
+    postcard::to_stdvec(&self).unwrap_or_else(|e| panic!("NetworkMessage serialization failed: {e}"))
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 49 - 54, The to_bytes implementation panics on serialization failure while from_bytes returns a Result; make error handling consistent by either changing to_bytes to return Result<Vec<u8>, Error> (propagate postcard::to_stdvec error via map_err/Into like from_bytes) or, if API compatibility requires returning Vec<u8>, replace expect() with unwrap_or_else that logs the serialization error and returns a sensible fallback (e.g., empty Vec or previously defined sentinel) so the crate doesn't panic; edit the to_bytes function accordingly and use the same error conversion approach or logging facility used by from_bytes.
292-294: Unused function `_get_did_from_endpoint`.

This function is defined but never used. Consider removing it or implementing its intended usage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 292 - 294, The function `_get_did_from_endpoint` is unused; remove this dead wrapper or replace its intended call sites to use `get_did_from_public_key(endpoint_id.as_bytes())` directly. If the wrapper was meant to be kept for clarity, make it referenced (e.g., update callers to call `_get_did_from_endpoint(endpoint_id)`) or change its visibility/name (remove the leading underscore) and add documentation; otherwise delete the `_get_did_from_endpoint` function to clean up unused code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tiles/src/core/network/mod.rs`:
- Around line 217-221: The code currently discards errors from stdin.read_line()
inside the spawn_blocking closure (spawn_blocking, stdin.read_line,
input_clone), which hides I/O failures and can leave the input empty; change the
closure to handle the Result from stdin.read_line(...) instead of using let _ =
..., e.g., match or ? the Result so errors are either logged (use your
logger/tracing with context like "failed to read stdin") or propagated out of
the async task so the caller can handle the failure; ensure the lock on
input_clone remains held while writing the read content and map any IO error
into the surrounding function's error type or return it so the failure is not
silently ignored.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 72bd760c-8f13-4050-bae1-40bf8b3cace9
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
- tiles/src/core/network/mod.rs
- tiles/src/core/network/ticket.rs
- Added mDNS peer discovery with an 8-digit hex shortcode
- Added link utility commands such as `link disable <DID>` and `link list-peers`
- Refactored so that linking with tickets and codes coexist
Local device linking with mDNS
Actionable comments posted: 8
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tiles/src/commands/mod.rs`:
- Around line 354-362: The unlink_peer function currently swallows errors from
unlink(...) and always returns Ok(()); change it to propagate failures so the
CLI exit status reflects errors. Replace the if-let Err branch that prints the
error with early return of that error (e.g., return Err(err.into()) or use the ?
operator when calling unlink), keeping the success println in the Ok branch;
ensure you preserve the outer get_db_conn? usage (get_db_conn(...)?) and convert
types as needed so unlink_peer returns the existing Result error on failure
instead of Ok(()).
In `@tiles/src/core/accounts.rs`:
- Around line 288-319: The save_self_account_db function blindly inserts a new
SELF user row which causes duplicate or conflict errors when the same
(account_type, user_id) is re-linked; update save_self_account_db to be
idempotent by either performing an upsert (INSERT ... ON CONFLICT(account_type,
user_id) DO UPDATE ...) or by first querying users for a row with account_type
== ACCOUNT::SELF and user_id == user_id and then short-circuiting (return Ok) or
updating username/updated_at if present; reference the users table, the
save_self_account_db function, and the fields
id/user_id/username/account_type/active_profile/updated_at when implementing the
select/insert/update or the on-conflict upsert.
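The idempotency requirement can be shown without a database: re-saving the same `(account_type, user_id)` key must update rather than fail. This in-memory sketch uses a `HashMap` as a stand-in for the `users` table; the real fix would express the same semantics as SQL `INSERT ... ON CONFLICT(account_type, user_id) DO UPDATE`.

```rust
use std::collections::HashMap;

// (account_type, user_id) -> username; stand-in for the SQLite users table.
type Users = HashMap<(String, String), String>;

fn save_self_account(users: &mut Users, user_id: &str, nickname: &str) {
    // Equivalent of INSERT ... ON CONFLICT(account_type, user_id) DO UPDATE:
    // an existing row is updated, a missing row is inserted, never an error.
    users
        .entry(("self".to_string(), user_id.to_string()))
        .and_modify(|name| *name = nickname.to_string())
        .or_insert_with(|| nickname.to_string());
}

fn main() {
    let mut users = Users::new();
    save_self_account(&mut users, "did:key:abc", "laptop");
    // Re-linking the same DID must not error; it updates the nickname.
    save_self_account(&mut users, "did:key:abc", "laptop-2");
    assert_eq!(users.len(), 1);
    let key = ("self".to_string(), "did:key:abc".to_string());
    assert_eq!(users[&key], "laptop-2");
}
```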
- Around line 378-390: The unlink function currently treats a successful DELETE
with 0 rows affected as success; update unlink (in tiles/src/core/accounts.rs)
to inspect the result from db_conn.execute (the affected row count returned by
execute) and return an Err when the count is 0 (e.g., "No peer removed" or
similar), while preserving the existing self-unlink check
(get_current_user/user.user_id) and existing error path that wraps DB errors;
ensure you return Ok(()) only when the execute reports at least one row deleted.
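The "inspect the affected row count" rule reads naturally as a match on the number returned by the delete. A minimal sketch, assuming a toy `execute_delete` in place of rusqlite's `Connection::execute` (which likewise returns how many rows were changed):

```rust
// Mimics rusqlite's execute(): returns the number of rows affected.
fn execute_delete(known_peers: &mut Vec<String>, user_id: &str) -> usize {
    let before = known_peers.len();
    known_peers.retain(|p| p != user_id);
    before - known_peers.len()
}

fn unlink(known_peers: &mut Vec<String>, user_id: &str) -> Result<(), String> {
    // A DELETE that touches zero rows is a failure, not a silent success.
    match execute_delete(known_peers, user_id) {
        0 => Err("No peer removed".to_string()),
        _ => Ok(()),
    }
}

fn main() {
    let mut peers = vec!["did:key:abc".to_string()];
    assert!(unlink(&mut peers, "did:key:abc").is_ok());
    assert!(unlink(&mut peers, "did:key:abc").is_err()); // already gone
}
```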
In `@tiles/src/core/network/mod.rs`:
- Around line 151-154: The mDNS metadata parsing is unsafe: create_endpoint()
serializes DID and nickname as "did,nickname" and discovery code calls
user_data().unwrap() and splits/indexes [0]/[1], which can panic or mis-parse
nicknames containing commas; update create_endpoint() to encode metadata as a
structured format (e.g., JSON or URL-encoded key/value), and update discovery
handling (the code paths that call user_data(), split/collect and assign
did/nickname) to treat user_data() as fallible (avoid unwrap), deserialize the
structured metadata into a safe struct (returning Result/Option), validate
required fields, and skip or log invalid/malformed discoveries instead of
indexing into the split; make these changes for all discovery handling sites
(the blocks that currently split by comma and index into [0]/[1]).
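The fallible-parsing half of this finding can be sketched with the standard library alone. The review suggests a structured encoding (JSON or URL-encoded); this simplified version keeps the comma format but shows the safety properties being asked for: no `unwrap()`, no indexing, malformed advertisements skipped. It relies on the assumption that DIDs themselves never contain commas, so splitting at the first comma leaves commas in nicknames intact.

```rust
// Returns None for absent or malformed metadata instead of panicking.
fn parse_user_data(raw: Option<&str>) -> Option<(String, String)> {
    let raw = raw?; // user_data() may be missing entirely
    // DIDs contain no commas, so split at the first comma only;
    // everything after it is the nickname (commas allowed there).
    let (did, nickname) = raw.split_once(',')?;
    if !did.starts_with("did:") || nickname.is_empty() {
        return None; // malformed advertisement: skip, don't panic
    }
    Some((did.to_string(), nickname.to_string()))
}

fn main() {
    assert_eq!(
        parse_user_data(Some("did:key:abc,my, laptop")),
        Some(("did:key:abc".to_string(), "my, laptop".to_string()))
    );
    assert_eq!(parse_user_data(Some("garbage")), None);
    assert_eq!(parse_user_data(None), None);
}
```

Callers would then `continue` (or log and skip) on `None` rather than indexing into a split result.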
- Around line 129-155: The code currently branches on the local is_online probe
which can mismatch the parsed ticket (parse_link_ticket) and causes
endpoint_id.expect(...) to panic; instead branch on the parsed endpoint_id
returned by parse_link_ticket: if endpoint_id.is_some() treat it as an online
ticket and push the endpoint_id into bootstrap_ids (use unwrap or match to avoid
expect); otherwise treat it as an offline ticket and run the offline path (call
find_offline_bootstrap_peers(&endpoint, mdns).await, parse user_data into
did/nickname) so the bootstrap mode follows the ticket, not the local is_online
check; update the block around parse_link_ticket, endpoint_id, is_online, and
find_offline_bootstrap_peers accordingly.
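The core of this fix is branching on the parsed ticket rather than on local connectivity. A minimal sketch, with `EndpointId` reduced to a byte array and `find_offline_bootstrap_peers` stubbed out (the real one is async and uses mDNS):

```rust
// Stand-in for iroh's endpoint identifier.
type EndpointId = [u8; 4];

// The ticket content decides the mode: an endpoint id means "online ticket",
// its absence means "offline ticket" — regardless of local network state.
fn bootstrap_ids_for(ticket_endpoint: Option<EndpointId>) -> Vec<EndpointId> {
    match ticket_endpoint {
        // Online ticket: the ticket itself carries the bootstrap peer.
        Some(eid) => vec![eid],
        // Offline ticket: fall back to local discovery.
        None => find_offline_bootstrap_peers(),
    }
}

fn find_offline_bootstrap_peers() -> Vec<EndpointId> {
    // Stand-in for find_offline_bootstrap_peers(&endpoint, mdns).await
    vec![[9, 9, 9, 9]]
}

fn main() {
    assert_eq!(bootstrap_ids_for(Some([1, 2, 3, 4])), vec![[1, 2, 3, 4]]);
    assert_eq!(bootstrap_ids_for(None), vec![[9, 9, 9, 9]]);
}
```

Because the `Some`/`None` match is exhaustive, the `expect(...)` panic path disappears entirely.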
- Around line 141-158: The code currently derives a static topic_id from
DEVICE_LINK_TOPIC via create_topic_id(), causing all link attempts to share the
same gossip topic; change the logic in the link flow (where
create_topic_id(DEVICE_LINK_TOPIC) is called) to generate a fresh,
session-scoped topic ID for each link attempt (e.g., by including randomness or
a nonce) and pass that topic_id into create_gossip_network(), and ensure the
generated LinkTicket includes and preserves this topic_id; also update
parse_link_ticket to extract and return the topic_id (not just endpoint ID and
metadata) so receivers use the ticket-specific topic when creating or joining
the gossip network (apply same fix to the other occurrences noted around
create_gossip_network).
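The session-scoped topic idea can be sketched as "static label + fresh nonce". This is a stdlib-only illustration: real code should source the nonce from a CSPRNG (e.g. a random 32-byte array) rather than the process-local counter used here, and the 32-byte width merely mirrors a typical gossip topic id.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::atomic::{AtomicU64, Ordering};

// Mix a static label with a per-session nonce so two link attempts
// never share a gossip topic. The ticket must carry this id so the
// joining side subscribes to the same session-specific topic.
fn create_session_topic_id(label: &str) -> [u8; 32] {
    static COUNTER: AtomicU64 = AtomicU64::new(0);
    let nonce = COUNTER.fetch_add(1, Ordering::Relaxed); // CSPRNG in real code
    let mut out = [0u8; 32];
    out[..8].copy_from_slice(&nonce.to_le_bytes());
    // Remaining bytes derived from the label keep distinct topics separated.
    let mut h = DefaultHasher::new();
    label.hash(&mut h);
    out[8..16].copy_from_slice(&h.finish().to_le_bytes());
    out
}

fn main() {
    let a = create_session_topic_id("device-link");
    let b = create_session_topic_id("device-link");
    // Two link sessions get distinct topics even with the same label.
    assert_ne!(a, b);
}
```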
- Around line 255-257: Handlers that process Event::Received ->
NetworkMessage::from_bytes(...) and match on MessageBody::LinkRequest and
MessageBody::LinkAccepted are persisting did/nickname straight from the payload;
before calling save_self_account_db you must verify the transport sender matches
the claimed identity by using msg.delivered_from and
_get_did_from_endpoint(msg.delivered_from) (or equivalent) and comparing that
DID to the DID/nickname in the message; if they don't match, reject/ignore the
message and do not persist. Apply the same verification to the other spots noted
(including the handler around the block at 390-392) so no
LinkRequest/LinkAccepted can be used to forge a trusted peer.
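The verification step reduces to: derive a DID from the transport-level sender key and compare it with the DID claimed inside the message before persisting anything. In this sketch `derive_did` is a toy hex encoding standing in for the real `get_did_from_public_key`, and `delivered_from` is modeled as raw key bytes.

```rust
// Toy stand-in for get_did_from_public_key: hex of the key bytes.
fn derive_did(public_key: &[u8]) -> String {
    let hex: String = public_key.iter().map(|b| format!("{:02x}", b)).collect();
    format!("did:key:{hex}")
}

// Gate for LinkRequest / LinkAccepted handlers: only persist the peer if the
// claimed DID matches the DID derivable from the actual sender endpoint.
fn should_persist(delivered_from: &[u8], claimed_did: &str) -> bool {
    derive_did(delivered_from) == claimed_did
}

fn main() {
    let sender_key = [0xab, 0xcd];
    assert!(should_persist(&sender_key, "did:key:abcd"));
    assert!(!should_persist(&sender_key, "did:key:ffff")); // forged claim
}
```

Handlers that fail this check should ignore the message (optionally logging it) rather than calling `save_self_account_db`.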
In `@tiles/src/main.rs`:
- Around line 84-86: The no_repl flag on RunFlags is parsed everywhere but only
honored in the no-subcommand path; update the Commands::Run match arm to read
RunFlags.no_repl and either pass it into or gate the call to commands::run so
the flag is respected when running the "run" subcommand (also apply the same fix
to the other occurrence around the Commands::Run handling at the second spot).
Specifically, locate RunFlags and its no_repl field, the Commands::Run arm in
the main match, and ensure commands::run(...) receives or checks
run_flags.no_repl (or returns/avoids entering the REPL) so -x works with the run
subcommand.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 2c87ae12-2040-4939-8209-452f982957ce
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (6)
- tilekit/src/accounts.rs
- tiles/Cargo.toml
- tiles/src/commands/mod.rs
- tiles/src/core/accounts.rs
- tiles/src/core/network/mod.rs
- tiles/src/main.rs
✅ Files skipped from review due to trivial changes (1)
- tiles/Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (1)
- tilekit/src/accounts.rs
…on + other fixes - We now verify the DID coming from the msg can be derived from the sender's public key - Instead of a hardcoded topic for every session, each session now uses a unique topic (online mode only) - refactor - Build changes for 0.4.5
Actionable comments posted: 6
♻️ Duplicate comments (6)
tiles/src/commands/mod.rs (1)
353-362: ⚠️ Potential issue | 🟠 Major

Propagate unlink failures to the CLI exit status.

The function prints errors but always returns `Ok(())`, making it impossible for callers/scripts to detect failures. Also, "Succesfully" is misspelled.

🐛 Proposed fix

```diff
 pub fn unlink_peer(user_id: &str) -> Result<()> {
     let db_conn = get_db_conn(core::storage::db::DBTYPE::COMMON)?;
-    if let Err(err) = unlink(&db_conn, user_id) {
-        println!("{:?}", err)
-    } else {
-        println!("Succesfully disabled the peer")
-    }
+    unlink(&db_conn, user_id)?;
+    println!("Successfully disabled the peer");
     Ok(())
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/commands/mod.rs` around lines 353 - 362, The unlink_peer function currently swallows failures and always returns Ok, and also has a misspelled success message; change it so errors from unlink are propagated by returning Err(err.into()) (or mapping the unlink error into the function Result) instead of just printing, and only print the success message on the Ok branch using the correct spelling "Successfully disabled the peer"; reference unlink_peer, unlink and get_db_conn to locate the logic and ensure the function returns the error when unlink fails.

tiles/src/core/network/mod.rs (3)
130-141: ⚠️ Potential issue | 🔴 Critical

Bootstrap mode should be determined by the parsed ticket, not local connectivity.

The code branches on `is_online` (local network state) rather than whether the ticket contains an `endpoint_id`. If the joining device is online but receives an offline code, `endpoint_id.expect(...)` at line 131 will panic.

🐛 Proposed fix

```diff
-    if is_online {
-        bootstrap_ids.push(endpoint_id.expect("Expected an EndpointId as bootstrapId "))
-    } else {
+    if let Some(eid) = endpoint_id {
+        bootstrap_ids.push(eid);
+    } else {
         println!("Searching for peers in the local network..");
         let mdns = address_lookup::mdns::MdnsAddressLookup::builder().build(endpoint.id())?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 130 - 141, The branch currently uses is_online to decide bootstrap mode which causes a panic when an online device receives an offline ticket; change the condition to check the parsed ticket's endpoint_id (e.g., use endpoint_id.is_some()) so that if endpoint_id is present you push it into bootstrap_ids (use endpoint_id.unwrap()/take safely) and otherwise run find_offline_bootstrap_peers(&endpoint, mdns).await to populate bootstrap_ids and parse EndpointUserData (EndpointUserData::try_from) — also add a fallback error path if both is_online and endpoint_id are false/None to avoid unexpected panics.
406-409: ⚠️ Potential issue | 🟠 Major

Treat mDNS metadata as untrusted; avoid `unwrap()`.

Line 408 calls `user_data().unwrap()` on peer-advertised data. Malformed metadata from any local peer will panic this code path.

🐛 Proposed fix

```diff
-    bootstrap_ids.push(endpoint_info.endpoint_id);
-    user_data = endpoint_info.user_data().unwrap().clone();
-    break;
+    if let Some(ud) = endpoint_info.user_data() {
+        bootstrap_ids.push(endpoint_info.endpoint_id);
+        user_data = ud.clone();
+        break;
+    } else {
+        // Skip peers with missing/invalid metadata
+        continue;
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 406 - 409, The code calls endpoint_info.user_data().unwrap() which can panic on malformed mDNS metadata; change logic in the block that pushes bootstrap_ids to handle the Option/Result safely by checking endpoint_info.user_data() (or matching Result) instead of unwrapping, validate or parse the user data, and on failure log a warning and skip/continue (or return a controlled Err) rather than panicking; update references around bootstrap_ids, endpoint_info, and user_data() to use safe matching (e.g., if let/ match) and propagate or handle errors appropriately.
267-271: ⚠️ Potential issue | 🟡 Minor

Silent error handling for `stdin.read_line()`.

Using `let _ =` discards errors from `read_line()`. If stdin fails, the input remains empty, causing unintended rejection. Propagate the error instead.

🐛 Proposed fix

```diff
 spawn_blocking(move || {
     let mut input_clone = input_clone.lock().unwrap();
-    let _ = stdin.read_line(&mut input_clone);
+    stdin.read_line(&mut input_clone)
 })
-.await?;
+.await??;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 267 - 271, The code is silently discarding errors from stdin.read_line() inside the spawn_blocking closure; change the closure to return and propagate the Result instead of using `let _ =`. Replace `let _ = stdin.read_line(&mut input_clone);` with a `stdin.read_line(&mut input_clone)?` (or assign and return the Result) so the closure returns Result<_, std::io::Error>, then propagate that error through the `.await?` call so failures from stdin.read_line() are not ignored; update the closure return type or map the io::Error to the function's error type if necessary (references: the spawn_blocking closure, stdin.read_line, input_clone).

tiles/src/core/accounts.rs (1)
288-320: ⚠️ Potential issue | 🟠 Major

Make peer inserts idempotent.

`save_self_account_db` performs a blind `INSERT` without checking for existing records or using `ON CONFLICT`. Re-linking the same DID will fail on the `UNIQUE(account_type, user_id)` constraint defined in the test schema (line 624).

🐛 Proposed fix using upsert

```diff
 pub fn save_self_account_db(db_conn: &Connection, user_id: &str, nickname: &str) -> Result<()> {
+    // Check if user already exists
+    let existing = db_conn.query_row(
+        "SELECT id FROM users WHERE user_id = ?1 AND account_type = 'self'",
+        [user_id],
+        |_| Ok(()),
+    );
+    if existing.is_ok() {
+        // Already linked, update nickname if changed
+        db_conn.execute(
+            "UPDATE users SET username = ?1, updated_at = strftime('%s','now') WHERE user_id = ?2 AND account_type = 'self'",
+            [nickname, user_id],
+        )?;
+        return Ok(());
+    }
+
     let user = User {
         // ... existing code
     };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/accounts.rs` around lines 288 - 320, save_self_account_db currently does a blind INSERT which violates the UNIQUE(account_type, user_id) constraint on re-linking; make the insert idempotent by using an upsert instead: change the SQL in save_self_account_db to use an upsert (e.g., "INSERT INTO users (...) VALUES (...) ON CONFLICT(account_type, user_id) DO UPDATE SET username = excluded.username, updated_at = excluded.updated_at" or use "INSERT OR IGNORE" if you prefer to skip existing rows), and ensure you include the relevant fields (id, user_id, username, active_profile, account_type, root, updated_at) and update updated_at (and username if desired) so repeated calls with the same user_id/account_type don’t error; locate the SQL call in save_self_account_db and update accordingly while keeping the User struct fields (id, user_id, username, active_profile, account_type, root, created_at, updated_at).

tiles/src/main.rs (1)
205-217: ⚠️ Potential issue | 🟠 Major

`--no-repl` flag not honored in `Commands::Run` arm.

The `no_repl` flag from `RunFlags` is checked in the `None` branch (line 199) but ignored when using `tiles run`. The REPL always starts regardless of `-x`.

🐛 Proposed fix

```diff
 Some(Commands::Run {
     modelfile_path,
     flags,
 }) => {
     let run_args = RunArgs {
         modelfile_path,
         relay_count: flags.relay_count,
         memory: flags.memory,
     };
     core::init().inspect_err(|e| eprintln!("Tiles core init failed due to {:?}", e))?;
-    commands::run(&runtime, run_args)
-        .await
-        .inspect_err(|e| eprintln!("Tiles failed to run due to {:?}", e))?;
+    if !flags.no_repl {
+        commands::run(&runtime, run_args)
+            .await
+            .inspect_err(|e| eprintln!("Tiles failed to run due to {:?}", e))?;
+    }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/main.rs` around lines 205 - 217, The RunFlags.no_repl value is not propagated in the Commands::Run branch so the REPL always starts; update the RunArgs construction in the Commands::Run arm to include no_repl: flags.no_repl (or otherwise pass flags.no_repl into commands::run), and ensure the RunArgs type and the commands::run signature/implementation accept and respect this no_repl field so the REPL is skipped when no_repl is true.
🧹 Nitpick comments (3)
tiles/src/core/network/mod.rs (1)
64-66: Avoid `expect()` on serialization.

`to_bytes()` uses `expect()` which will panic if serialization fails. While postcard serialization rarely fails, propagating the error is safer.

♻️ Proposed refactor

```diff
-    fn to_bytes(&self) -> Vec<u8> {
-        postcard::to_stdvec(&self).expect("Failed to convert to bytes w postcard")
-    }
+    fn to_bytes(&self) -> Result<Vec<u8>> {
+        postcard::to_stdvec(&self).map_err(Into::into)
+    }
```

Then update callers to handle the `Result`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 64 - 66, The to_bytes() method currently panics on serialization failure via expect(); change its signature (function to_bytes) to return a Result<Vec<u8>, E> (e.g., Result<Vec<u8>, postcard::Error> or a crate-specific error type) and replace postcard::to_stdvec(&self).expect(...) with returning the Result from postcard::to_stdvec(&self). Then update all callers of to_bytes() to handle the Result (propagate with ? or map/unwrap at call sites that can safely handle it) so serialization errors are propagated instead of panicking.

tiles/src/core/accounts.rs (2)
322-330: `get_user_by_user_id` has an unusual return type.

Returning `Result<()>` for an existence check is non-idiomatic. The function discards the fetched data and only signals existence via `Ok`/`Err`. Consider returning `Result<bool>` or `Result<User>` for clarity and to avoid misuse.

♻️ Proposed refactor

```diff
-pub fn get_user_by_user_id(conn: &Connection, user_id: String) -> Result<()> {
-    let mut fetch_root_user = conn.prepare("select id from users where user_id = ?1")?;
-
-    match fetch_root_user.query_one([user_id], |_row| Ok(())) {
-        Ok(_) => Ok(()),
-        Err(rusqlite::Error::QueryReturnedNoRows) => Err(anyhow!("User doesnt exist")),
-        Err(_err) => Err(anyhow!("Fetching user from db failed")),
-    }
-}
+pub fn user_exists(conn: &Connection, user_id: &str) -> Result<bool> {
+    let mut stmt = conn.prepare("SELECT 1 FROM users WHERE user_id = ?1")?;
+    match stmt.query_row([user_id], |_| Ok(())) {
+        Ok(_) => Ok(true),
+        Err(rusqlite::Error::QueryReturnedNoRows) => Ok(false),
+        Err(err) => Err(anyhow!("Fetching user from db failed: {:?}", err)),
+    }
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/accounts.rs` around lines 322 - 330, get_user_by_user_id currently returns Result<()> and discards fetched data; change it to return a clear value (prefer Result<bool>) that indicates existence and update the implementation in get_user_by_user_id to return Ok(true) when the query finds a row and Ok(false) when rusqlite::Error::QueryReturnedNoRows is returned, and propagate other errors as Err. Locate the function by name (get_user_by_user_id) and update its signature and callers accordingly (or alternatively return Result<User> if you need the record), making sure the query_one/query_row match arms map to Ok(true)/Ok(false) instead of Ok(()) and that error mapping preserves other DB errors.
369-373: Simplify iteration with `collect()`.

The manual loop with `push` can be replaced with idiomatic iterator collection.

♻️ Proposed refactor

```diff
-    let mut peer_list: Vec<User> = vec![];
-
-    for peer in user_rows {
-        peer_list.push(peer?);
-    }
-
-    Ok(peer_list)
+    user_rows.collect::<Result<Vec<_>, _>>().map_err(Into::into)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/accounts.rs` around lines 369 - 373, Replace the manual loop that builds peer_list by collecting the iterator: user_rows yields Result<User, E> and the body uses peer?, so use iterator::collect to turn user_rows into a Result<Vec<User>, E> and propagate the error with ?; assign the collected Vec to peer_list (symbol names: peer_list and user_rows) to make the code idiomatic and remove the explicit for/push loop.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/bundler.sh`:
- Around line 25-39: The bundler disabled the venvstacks pipeline but still
packages SERVER_DIR into DIST_DIR/tmp so scripts/install.sh later fails when it
expects stack_export_prod artifacts; restore the venvstacks steps and ensure a
fresh stack_export_prod is produced under the server bundle. Concretely, in
scripts/bundler.sh re-enable the venvstacks commands (venvstacks lock,
venvstacks build, and venvstacks publish --tag-outputs --output-dir
../stack_export_prod) and add a pre-publish cleanup of the target (rm -rf
"${SERVER_DIR}/stack_export_prod") so publish always writes fresh *.tar.xz files
into SERVER_DIR/stack_export_prod before the cp -r "${SERVER_DIR}"
"${DIST_DIR}/tmp/"; this keeps scripts/install.sh's cd
"${SERVER_DIR}/stack_export_prod" valid.
In `@tiles/src/core/accounts.rs`:
- Line 780: The test calls save_self_account_db(&conn, "varathan",
"did:jey:varathan") with arguments swapped; save_self_account_db(db_conn,
user_id, nickname) expects the DID as user_id and the nickname as nickname. Fix
the call (the one using save_self_account_db and conn) by swapping the two
string arguments so the DID is passed as user_id and "varathan" as nickname
(i.e., call save_self_account_db(&conn, "did:jey:varathan", "varathan")).
In `@tiles/src/core/network/mod.rs`:
- Around line 458-460: The is_did_valid function currently returns
Ok(get_did_from_public_key(&pub_key)? != did) which inverts its intent; change
the logic to return true when the DID matches the public key by using equality
(get_did_from_public_key(&pub_key)? == did) and preserve the Result handling;
update any call sites (the peer validation that uses is_did_valid) if they
assume the old inverted behavior to ensure valid peers are accepted and invalid
ones rejected.
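The inverted predicate is easy to see in isolation. A toy sketch, where `derive_did` is a hex-encoding stand-in for the real `get_did_from_public_key` and the `Result` plumbing is dropped:

```rust
// Toy stand-in for get_did_from_public_key: hex of the key bytes.
fn derive_did(public_key: &[u8]) -> String {
    let hex: String = public_key.iter().map(|b| format!("{:02x}", b)).collect();
    format!("did:key:{hex}")
}

// Correct predicate: valid means the derived DID EQUALS the claimed DID.
// The buggy version used `!=`, accepting forged DIDs and rejecting real ones.
fn is_did_valid(did: &str, pub_key: &[u8]) -> bool {
    derive_did(pub_key) == did
}

fn main() {
    assert!(is_did_valid("did:key:abcd", &[0xab, 0xcd]));
    assert!(!is_did_valid("did:key:0000", &[0xab, 0xcd]));
}
```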
- Around line 124-128: The current assignment to topic_id can panic when
is_online is true but topic_value is None; update the logic around topic_id (the
is_online check that uses topic_value.expect) to handle a missing topic_value
gracefully instead of panicking: if is_online and topic_value.is_none(), return
or propagate an appropriate error (or Result) upstream with a clear message
about a malformed/invalid ticket; otherwise use topic_value.unwrap_or_else to
call create_topic_id(DEVICE_LINK_LOCAL_TOPIC) only for offline cases. Ensure you
modify the surrounding function signature/return path to propagate the error if
necessary and reference topic_id, is_online, topic_value, create_topic_id, and
DEVICE_LINK_LOCAL_TOPIC when making the change.
In `@tiles/src/main.rs`:
- Around line 168-173: Fix the documentation comments on the enum variants:
update the doc for Disable (symbol Disable) from "Unlink give device" to "Unlink
given device", and correct the doc for ListPeers (symbol ListPeers) so it no
longer says "Start the daemon" — replace it with a concise description like
"List peers" or "List connected peers" to accurately describe the variant.
- Line 147: The Start enum variant currently uses Option<u32> for ports but must
be Option<u16>; update Start { port: Option<u16> } and make the port type
consistent everywhere by changing DEFAULT_PORT (daemon.rs) to a u16 and updating
all function signatures and uses to u16: start_cmd, start_daemon, start_server,
stop_server, ping, wait_until_server_is_up, and get_port; adjust any internal
conversions/parsing to validate/convert to u16 (e.g., parse from string then
cast/check range) so all references to port values, constants, parameters, and
returns use u16 consistently.
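The parse-and-validate step falls out of the type change itself: `str::parse::<u16>()` already rejects anything outside `0..=65535`, so no manual range check is needed. A minimal sketch (function name and default value are illustrative, not the real `get_port`):

```rust
// Parse an optional CLI argument into a u16 port, falling back to a default.
fn parse_port(arg: Option<&str>, default_port: u16) -> Result<u16, String> {
    match arg {
        None => Ok(default_port),
        Some(s) => s
            .parse::<u16>() // rejects negatives, non-numbers, and values > 65535
            .map_err(|e| format!("invalid port {:?}: {}", s, e)),
    }
}

fn main() {
    assert_eq!(parse_port(None, 4321), Ok(4321));
    assert_eq!(parse_port(Some("8080"), 4321), Ok(8080));
    assert!(parse_port(Some("70000"), 4321).is_err()); // > u16::MAX
    assert!(parse_port(Some("abc"), 4321).is_err());
}
```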
---
Duplicate comments:
In `@tiles/src/commands/mod.rs`:
- Around line 353-362: The unlink_peer function currently swallows failures and
always returns Ok, and also has a misspelled success message; change it so
errors from unlink are propagated by returning Err(err.into()) (or mapping the
unlink error into the function Result) instead of just printing, and only print
the success message on the Ok branch using the correct spelling "Successfully
disabled the peer"; reference unlink_peer, unlink and get_db_conn to locate the
logic and ensure the function returns the error when unlink fails.
In `@tiles/src/core/accounts.rs`:
- Around line 288-320: save_self_account_db currently does a blind INSERT which
violates the UNIQUE(account_type, user_id) constraint on re-linking; make the
insert idempotent by using an upsert instead: change the SQL in
save_self_account_db to use an upsert (e.g., "INSERT INTO users (...) VALUES
(...) ON CONFLICT(account_type, user_id) DO UPDATE SET username =
excluded.username, updated_at = excluded.updated_at" or use "INSERT OR IGNORE"
if you prefer to skip existing rows), and ensure you include the relevant fields
(id, user_id, username, active_profile, account_type, root, updated_at) and
update updated_at (and username if desired) so repeated calls with the same
user_id/account_type don’t error; locate the SQL call in save_self_account_db
and update accordingly while keeping the User struct fields (id, user_id,
username, active_profile, account_type, root, created_at, updated_at).
In `@tiles/src/core/network/mod.rs`:
- Around line 130-141: The branch currently uses is_online to decide bootstrap
mode which causes a panic when an online device receives an offline ticket;
change the condition to check the parsed ticket's endpoint_id (e.g., use
endpoint_id.is_some()) so that if endpoint_id is present you push it into
bootstrap_ids (use endpoint_id.unwrap()/take safely) and otherwise run
find_offline_bootstrap_peers(&endpoint, mdns).await to populate bootstrap_ids
and parse EndpointUserData (EndpointUserData::try_from) — also add a fallback
error path if both is_online and endpoint_id are false/None to avoid unexpected
panics.
- Around line 406-409: The code calls endpoint_info.user_data().unwrap() which
can panic on malformed mDNS metadata; change logic in the block that pushes
bootstrap_ids to handle the Option/Result safely by checking
endpoint_info.user_data() (or matching Result) instead of unwrapping, validate
or parse the user data, and on failure log a warning and skip/continue (or
return a controlled Err) rather than panicking; update references around
bootstrap_ids, endpoint_info, and user_data() to use safe matching (e.g., if
let/ match) and propagate or handle errors appropriately.
- Around line 267-271: The code is silently discarding errors from
stdin.read_line() inside the spawn_blocking closure; change the closure to
return and propagate the Result instead of using `let _ =`. Replace `let _ =
stdin.read_line(&mut input_clone);` with a `stdin.read_line(&mut input_clone)?`
(or assign and return the Result) so the closure returns Result<_,
std::io::Error>, then propagate that error through the `.await?` call so
failures from stdin.read_line() are not ignored; update the closure return type
or map the io::Error to the function's error type if necessary (references: the
spawn_blocking closure, stdin.read_line, input_clone).
In `@tiles/src/main.rs`:
- Around line 205-217: The RunFlags.no_repl value is not propagated in the
Commands::Run branch so the REPL always starts; update the RunArgs construction
in the Commands::Run arm to include no_repl: flags.no_repl (or otherwise pass
flags.no_repl into commands::run), and ensure the RunArgs type and the
commands::run signature/implementation accept and respect this no_repl field so
the REPL is skipped when no_repl is true.
---
Nitpick comments:
In `@tiles/src/core/accounts.rs`:
- Around line 322-330: get_user_by_user_id currently returns Result<()> and
discards fetched data; change it to return a clear value (prefer Result<bool>)
that indicates existence and update the implementation in get_user_by_user_id to
return Ok(true) when the query finds a row and Ok(false) when
rusqlite::Error::QueryReturnedNoRows is returned, and propagate other errors as
Err. Locate the function by name (get_user_by_user_id) and update its signature
and callers accordingly (or alternatively return Result<User> if you need the
record), making sure the query_one/query_row match arms map to
Ok(true)/Ok(false) instead of Ok(()) and that error mapping preserves other DB
errors.
- Around line 369-373: Replace the manual loop that builds peer_list by
collecting the iterator: user_rows yields Result<User, E> and the body uses
peer?, so use iterator::collect to turn user_rows into a Result<Vec<User>, E>
and propagate the error with ?; assign the collected Vec to peer_list (symbol
names: peer_list and user_rows) to make the code idiomatic and remove the
explicit for/push loop.
In `@tiles/src/core/network/mod.rs`:
- Around line 64-66: The to_bytes() method currently panics on serialization
failure via expect(); change its signature (function to_bytes) to return a
Result<Vec<u8>, E> (e.g., Result<Vec<u8>, postcard::Error> or a crate-specific
error type) and replace postcard::to_stdvec(&self).expect(...) with returning
the Result from postcard::to_stdvec(&self). Then update all callers of
to_bytes() to handle the Result (propagate with ? or map/unwrap at call sites
that can safely handle it) so serialization errors are propagated instead of
panicking.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: f22ab7d0-618b-4b42-8f42-312db672f238
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (11)
- scripts/bundler.sh
- scripts/install.sh
- server/stack/requirements/app-server/packages-app-server.txt
- server/stack/requirements/app-server/pylock.app-server.meta.json
- server/stack/requirements/app-server/pylock.app-server.toml
- tiles/Cargo.toml
- tiles/src/commands/mod.rs
- tiles/src/core/accounts.rs
- tiles/src/core/network/mod.rs
- tiles/src/core/network/ticket.rs
- tiles/src/main.rs
✅ Files skipped from review due to trivial changes (4)
- server/stack/requirements/app-server/packages-app-server.txt
- scripts/install.sh
- server/stack/requirements/app-server/pylock.app-server.toml
- server/stack/requirements/app-server/pylock.app-server.meta.json
🚧 Files skipped from review as they are similar to previous changes (2)
- tiles/Cargo.toml
- tiles/src/core/network/ticket.rs
Actionable comments posted: 1
♻️ Duplicate comments (4)
tiles/src/core/network/mod.rs (3)
124-131: ⚠️ Potential issue | 🟠 Major

Potential panic from `expect()` calls on ticket data.

Lines 125 and 131 use `expect()` which will panic if the ticket is malformed or if local online status doesn't match ticket content. The `is_online` check is based on local connectivity, but an online ticket could be received while locally offline (or vice versa), causing the `expect()` to panic.

🐛 Proposed fix - branch on ticket content, not local connectivity

```diff
-    let topic_id = if is_online {
-        topic_value.expect("Expected topicId")
-    } else {
+    let topic_id = if let Some(tid) = topic_value {
+        tid
+    } else {
         create_topic_id(DEVICE_LINK_LOCAL_TOPIC)
     };

-    if is_online {
-        bootstrap_ids.push(endpoint_id.expect("Expected an EndpointId as bootstrapId "))
-    } else {
+    if let Some(eid) = endpoint_id {
+        bootstrap_ids.push(eid);
+    } else {
         println!("Searching for peers in the local network..");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 124 - 131, The code currently uses is_online (local connectivity) to decide whether to unwrap ticket fields with expect() (topic_value and endpoint_id), which can panic if the ticket's content disagrees with local state; change the logic to branch on the ticket's content instead of is_online: inspect topic_value and endpoint_id (e.g., match or if let) and handle missing/malformed values gracefully (return an error, skip adding bootstrap IDs, or fall back to create_topic_id(DEVICE_LINK_LOCAL_TOPIC)) rather than calling expect(); update the blocks around topic_id, create_topic_id, bootstrap_ids, topic_value, endpoint_id and is_online to use safe matching and explicit error handling/logging.
267-271: ⚠️ Potential issue | 🟡 Minor

Silent error handling for `stdin.read_line()`.

Using `let _ =` discards errors from `read_line()`, silently ignoring I/O failures. If stdin fails, the input string remains empty, leading to unintended rejection behavior.

🐛 Proposed fix

```diff
 spawn_blocking(move || {
     let mut input_clone = input_clone.lock().unwrap();
-    let _ = stdin.read_line(&mut input_clone);
+    if let Err(e) = stdin.read_line(&mut input_clone) {
+        eprintln!("Failed to read input: {}", e);
+    }
 })
 .await?;
```

Or propagate the error by returning it from the closure and using `.await??`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 267 - 271, The closure passed to spawn_blocking currently swallows errors from stdin.read_line() via let _ =, causing silent failures; change the closure to return a Result by propagating the read_line error (e.g., map or ? on stdin.read_line(&mut input_clone)) and then use .await?? on the spawn_blocking future so errors propagate out, or alternatively explicitly handle/log the Err from stdin.read_line() inside the closure before returning Ok(()); references: spawn_blocking, input_clone, stdin.read_line, and the closure's .await usage.
406-408: ⚠️ Potential issue | 🟠 Major

Handle missing `user_data` gracefully.

`endpoint_info.user_data().unwrap()` will panic if the discovered peer doesn't have user data set. Malformed or misconfigured peers could cause a crash.

🐛 Proposed fix - skip peers without valid user_data

```diff
 mdns::DiscoveryEvent::Discovered {
     endpoint_info,
     last_updated: _,
 } => {
     if cfg!(debug_assertions) {
         println!("peer discoverd {:?}", endpoint_info);
     }
+    let Some(ud) = endpoint_info.user_data() else {
+        if cfg!(debug_assertions) {
+            println!("skipping peer without user_data");
+        }
+        continue;
+    };
     bootstrap_ids.push(endpoint_info.endpoint_id);
-    user_data = endpoint_info.user_data().unwrap().clone();
+    user_data = ud.clone();
     break;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 406 - 408, Replace the panic-causing unwrap on endpoint_info.user_data() with an Option check: instead of using endpoint_info.user_data().unwrap(), use if let Some(u) = endpoint_info.user_data() { bootstrap_ids.push(endpoint_info.endpoint_id); user_data = u.clone(); break; } else { continue; } so peers without user_data are skipped and the loop doesn't panic; apply this change around the block handling endpoint_info/bootstrap_ids/user_data in mod.rs.

tiles/src/core/accounts.rs (1)
289-321: ⚠️ Potential issue | 🟠 Major

Make peer inserts idempotent.

The function performs a blind INSERT which will fail with a constraint violation if the same (account_type, user_id) pair is re-linked (per the UNIQUE(account_type, user_id) constraint in the schema). This is the same issue flagged for save_self_account_db. Use an upsert pattern to handle re-linking gracefully.

🐛 Proposed fix using upsert

 db_conn.execute(
-    "insert into users (id, user_id, username, active_profile, account_type, root) values
-    (?1, ?2, ?3,?4, ?5, ?6)",
+    "INSERT INTO users (id, user_id, username, active_profile, account_type, root)
+     VALUES (?1, ?2, ?3, ?4, ?5, ?6)
+     ON CONFLICT(account_type, user_id) DO UPDATE SET
+         username = excluded.username,
+         updated_at = strftime('%s','now')",
     (

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/accounts.rs` around lines 289 - 321, The insert in save_peer_account_db blindly inserts a user and will fail on UNIQUE(account_type, user_id); change it to an upsert: use SQL like INSERT INTO users (...) VALUES (...) ON CONFLICT(account_type, user_id) DO UPDATE SET username = excluded.username, active_profile = excluded.active_profile, root = excluded.root, updated_at = excluded.updated_at (keeping the existing id), and bind the same parameters (including updated_at) when calling db_conn.execute so re-linking the same (account_type, user_id) becomes idempotent and updates metadata instead of erroring; update the save_peer_account_db logic to build the updated_at value and use that in the upsert parameters.
🧹 Nitpick comments (5)
tiles/src/core/network/mod.rs (2)
448-451: Extract magic number for offline code length.

The hardcoded 8 for offline code length is fragile. Consider extracting it as a constant and documenting the expected format.

♻️ Suggested improvement

+const OFFLINE_CODE_LENGTH: usize = 8;
+
 fn parse_link_ticket(
     ticket: &str,
 ) -> Result<(Option<EndpointId>, String, String, Option<TopicId>)> {
     if let Ok(parsed_ticket) = LinkTicket::from_str(ticket) {
         // ...
-    } else if ticket.len() == 8 {
-        // NOTE: We only have len check as a "parser" for the offline code
+    } else if ticket.len() == OFFLINE_CODE_LENGTH {
+        // NOTE: Offline codes are the first segment of a UUID (8 hex chars)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 448 - 451, Extract the magic number 8 used in the offline-ticket length check into a clearly named constant (e.g., OFFLINE_CODE_LEN or OFFLINE_CODE_LENGTH) with a doc comment describing the expected offline code format, then replace the inline literal in the conditional (the branch that checks ticket.len() == 8 and returns Ok((None, String::from(""), String::from(""), None))) with that constant; update any related comments to reference the constant so the length is maintained in one place and easy to adjust.
64-66: Avoid expect() in serialization.

to_bytes will panic if serialization fails. While unlikely for this simple struct, returning Result would be safer and consistent with from_bytes.

♻️ Proposed change

-    fn to_bytes(&self) -> Vec<u8> {
-        postcard::to_stdvec(&self).expect("Failed to convert to bytes w postcard")
+    fn to_bytes(&self) -> Result<Vec<u8>> {
+        postcard::to_stdvec(&self).map_err(Into::into)
     }

Then update call sites to handle the Result.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/network/mod.rs` around lines 64 - 66, Change to_bytes to return a Result instead of calling expect: make fn to_bytes(&self) -> Result<Vec<u8>, postcard::Error> (or a crate-appropriate error type) and use postcard::to_stdvec(&self) directly, mirroring the existing from_bytes signature and error handling; then update all call sites (places that call to_bytes) to propagate or handle the Result (e.g., using ? or map_err) so serialization failures no longer panic.

tiles/src/core/accounts.rs (3)
323-331: Rename function to reflect its purpose.

get_user_by_user_id implies it returns a User, but it only checks existence and returns Result<()>. Consider renaming to user_exists_by_user_id or changing the signature to return Result<Option<User>> for clarity.

♻️ Suggested rename

-pub fn get_user_by_user_id(conn: &Connection, user_id: String) -> Result<()> {
+pub fn user_exists(conn: &Connection, user_id: &str) -> Result<bool> {
     let mut fetch_root_user = conn.prepare("select id from users where user_id = ?1")?;
     match fetch_root_user.query_one([user_id], |_row| Ok(())) {
-        Ok(_) => Ok(()),
-        Err(rusqlite::Error::QueryReturnedNoRows) => Err(anyhow!("User doesnt exist")),
+        Ok(_) => Ok(true),
+        Err(rusqlite::Error::QueryReturnedNoRows) => Ok(false),
         Err(_err) => Err(anyhow!("Fetching user from db failed")),
     }
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/accounts.rs` around lines 323 - 331, The function get_user_by_user_id currently only checks existence but returns Result<(), ...>, so rename it to user_exists_by_user_id and change its signature to return Result<bool, anyhow::Error>; in the body (referencing conn.prepare and fetch_root_user.query_one) return Ok(true) when a row is found and Ok(false) when rusqlite::Error::QueryReturnedNoRows is returned, and keep other errors as Err(...); update all callers to expect a bool existence result instead of Result<()> accordingly.
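The error-mapping at the heart of that rename can be isolated from rusqlite; `DbError` below is a hypothetical stand-in for rusqlite::Error, and `user_exists` mirrors the suggested signature:

```rust
// "No rows" is a normal outcome (Ok(false)), not an error; only real DB
// failures surface as Err, matching the suggested Result<bool> shape.
#[derive(Debug, PartialEq)]
enum DbError {
    QueryReturnedNoRows,
    Other(String),
}

fn user_exists(lookup: Result<(), DbError>) -> Result<bool, String> {
    match lookup {
        Ok(()) => Ok(true),
        Err(DbError::QueryReturnedNoRows) => Ok(false),
        Err(e) => Err(format!("Fetching user from db failed: {:?}", e)),
    }
}
```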
230-252: Consider extracting shared row-mapping logic.

get_user duplicates the User row-mapping logic from get_current_user. Also, the variable name fetch_current_user is misleading since this fetches by user_id (DID), not by active profile status.

♻️ Suggested improvements

-pub fn get_user(conn: &Connection, did: &str) -> Result<User> {
-    let mut fetch_current_user = conn.prepare("select id, user_id, username, account_type, active_profile, root, created_at, updated_at from users where user_id= ?1")?;
+pub fn get_user(conn: &Connection, did: &str) -> Result<User> {
+    let mut stmt = conn.prepare("select id, user_id, username, account_type, active_profile, root, created_at, updated_at from users where user_id = ?1")?;

Consider extracting a helper like fn map_user_row(row: &Row) -> rusqlite::Result<User> to DRY up the mapping across get_user, get_current_user, and get_peer_list.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/accounts.rs` around lines 230 - 252, get_user duplicates row-mapping from get_current_user and uses a misleading variable name; extract the shared mapping into a helper like fn map_user_row(row: &rusqlite::Row) -> rusqlite::Result<User> and have get_user, get_current_user, and get_peer_list call that helper to construct User, converting id, account_type, created_at/updated_at in one place; also rename the local prepared statement variable in get_user (e.g., fetch_user_by_did) to reflect it queries by user_id/DID.
375-381: Use collect instead of manual loop.

The manual iteration with push can be simplified using collect.

♻️ Simplified collection

-    let mut peer_list: Vec<User> = vec![];
-
-    for peer in user_rows {
-        peer_list.push(peer?);
-    }
-
-    Ok(peer_list)
+    user_rows.collect::<Result<Vec<_>, _>>().map_err(Into::into)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tiles/src/core/accounts.rs` around lines 375 - 381, Replace the manual loop that builds peer_list by pushing peer? from user_rows with a single collect operation: use user_rows to try-collect into a Vec<User> (leveraging the Result-to-collection behavior so errors are propagated with ?) and return that Vec instead of the for-loop; target the variables/types peer_list, user_rows and User when making this change.
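The collect trick relies on Result implementing FromIterator: collecting an iterator of Result<T, E> into Result<Vec<T>, E> short-circuits at the first Err. A stdlib sketch with String rows standing in for the rusqlite User rows:

```rust
// Replaces the manual `for peer in user_rows { peer_list.push(peer?); }`
// loop: collect() stops at the first Err and returns it, otherwise it
// yields the full Vec of rows.
fn collect_peers<I>(user_rows: I) -> Result<Vec<String>, String>
where
    I: IntoIterator<Item = Result<String, String>>,
{
    user_rows.into_iter().collect()
}
```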
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tiles/src/core/network/mod.rs`:
- Around line 389-419: The find_offline_bootstrap_peers function currently
returns an empty bootstrap_ids and default user_data if the mdns_event stream
ends without a discovery; update it to detect that case and return a clear error
(or timeout) instead of successful Ok. Specifically, wrap waiting for
mdns_event.next() with a tokio::time::timeout (or track loop completion) and if
no mdns::DiscoveryEvent::Discovered was received (bootstrap_ids.is_empty()),
return a Result::Err with a descriptive error rather than Ok((bootstrap_ids,
user_data)); reference the function name find_offline_bootstrap_peers, the
mdns_event subscription/next loop, and the bootstrap_ids/user_data variables so
the change is applied where the discovery loop lives.
---
Duplicate comments:
In `@tiles/src/core/accounts.rs`:
- Around line 289-321: The insert in save_peer_account_db blindly inserts a user
and will fail on UNIQUE(account_type, user_id); change it to an upsert: use SQL
like INSERT INTO users (...) VALUES (...) ON CONFLICT(account_type, user_id) DO
UPDATE SET username = excluded.username, active_profile =
excluded.active_profile, root = excluded.root, updated_at = excluded.updated_at
(keeping the existing id), and bind the same parameters (including updated_at)
when calling db_conn.execute so re-linking the same (account_type, user_id)
becomes idempotent and updates metadata instead of erroring; update the
save_peer_account_db logic to build the updated_at value and use that in the
upsert parameters.
In `@tiles/src/core/network/mod.rs`:
- Around line 124-131: The code currently uses is_online (local connectivity) to
decide whether to unwrap ticket fields with expect() (topic_value and
endpoint_id), which can panic if the ticket's content disagrees with local
state; change the logic to branch on the ticket's content instead of is_online:
inspect topic_value and endpoint_id (e.g., match or if let) and handle
missing/malformed values gracefully (return an error, skip adding bootstrap IDs,
or fall back to create_topic_id(DEVICE_LINK_LOCAL_TOPIC)) rather than calling
expect(); update the blocks around topic_id, create_topic_id, bootstrap_ids,
topic_value, endpoint_id and is_online to use safe matching and explicit error
handling/logging.
- Around line 267-271: The closure passed to spawn_blocking currently swallows
errors from stdin.read_line() via let _ =, causing silent failures; change the
closure to return a Result by propagating the read_line error (e.g., map or ? on
stdin.read_line(&mut input_clone)) and then use .await?? on the spawn_blocking
future so errors propagate out, or alternatively explicitly handle/log the Err
from stdin.read_line() inside the closure before returning Ok(()); references:
spawn_blocking, input_clone, stdin.read_line, and the closure's .await usage.
- Around line 406-408: Replace the panic-causing unwrap on
endpoint_info.user_data() with an Option check: instead of using
endpoint_info.user_data().unwrap(), use if let Some(u) =
endpoint_info.user_data() { bootstrap_ids.push(endpoint_info.endpoint_id);
user_data = u.clone(); break; } else { continue; } so peers without user_data
are skipped and the loop doesn't panic; apply this change around the block
handling endpoint_info/bootstrap_ids/user_data in mod.rs.
---
Nitpick comments:
In `@tiles/src/core/accounts.rs`:
- Around line 323-331: The function get_user_by_user_id currently only checks
existence but returns Result<(), ...>, so rename it to user_exists_by_user_id
and change its signature to return Result<bool, anyhow::Error>; in the body
(referencing conn.prepare and fetch_root_user.query_one) return Ok(true) when a
row is found and Ok(false) when rusqlite::Error::QueryReturnedNoRows is
returned, and keep other errors as Err(...); update all callers to expect a bool
existence result instead of Result<()> accordingly.
- Around line 230-252: get_user duplicates row-mapping from get_current_user and
uses a misleading variable name; extract the shared mapping into a helper like
fn map_user_row(row: &rusqlite::Row) -> rusqlite::Result<User> and have
get_user, get_current_user, and get_peer_list call that helper to construct
User, converting id, account_type, created_at/updated_at in one place; also
rename the local prepared statement variable in get_user (e.g.,
fetch_user_by_did) to reflect it queries by user_id/DID.
- Around line 375-381: Replace the manual loop that builds peer_list by pushing
peer? from user_rows with a single collect operation: use user_rows to
try-collect into a Vec<User> (leveraging the Result-to-collection behavior so
errors are propagated with ?) and return that Vec instead of the for-loop;
target the variables/types peer_list, user_rows and User when making this
change.
In `@tiles/src/core/network/mod.rs`:
- Around line 448-451: Extract the magic number 8 used in the offline-ticket
length check into a clearly named constant (e.g., OFFLINE_CODE_LEN or
OFFLINE_CODE_LENGTH) with a doc comment describing the expected offline code
format, then replace the inline literal in the conditional (the branch that
checks ticket.len() == 8 and returns Ok((None, String::from(""),
String::from(""), None))) with that constant; update any related comments to
reference the constant so the length is maintained in one place and easy to
adjust.
- Around line 64-66: Change to_bytes to return a Result instead of calling
expect: make fn to_bytes(&self) -> Result<Vec<u8>, postcard::Error> (or a
crate-appropriate error type) and use postcard::to_stdvec(&self) directly,
mirroring the existing from_bytes signature and error handling; then update all
call sites (places that call to_bytes) to propagate or handle the Result (e.g.,
using ? or map_err) so serialization failures no longer panic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 9e33dbc7-eb57-4cfe-8e3e-163452c5144a
📒 Files selected for processing (4)
- CHANGELOG.md
- tilekit/src/accounts.rs
- tiles/src/core/accounts.rs
- tiles/src/core/network/mod.rs
✅ Files skipped from review due to trivial changes (1)
- CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (1)
- tilekit/src/accounts.rs
No description provided.