Commit 5006c6b

Do not track HTLC IDs as separate MPP parts which need claiming
When we claim an MPP payment, we need to track which channels have had the preimage durably added to their `ChannelMonitor` to ensure we don't remove the preimage from any `ChannelMonitor`s until all `ChannelMonitor`s have the preimage.

Previously, we tracked each MPP part, down to the HTLC ID, as a part which we needed to get the preimage on disk for. However, this is not necessary - once a `ChannelMonitor` has a preimage, it applies it to all inbound HTLCs with the same payment hash.

Further, this can cause a channel to wait on itself in cases of high-latency synchronous persistence -
 * if we receive an MPP payment for which multiple parts came to us over the same channel,
 * and claim the MPP payment, creating a `ChannelMonitorUpdate` for the first part but enqueueing the remaining HTLC claim(s) in the channel's holding cell,
 * and we receive a `revoke_and_ack` for the same channel before the `ChannelManager::claim_payment` method completes (as each claim waits for the `ChannelMonitorUpdate` persistence),
 * we will cause the `ChannelMonitorUpdate` for that `revoke_and_ack` to go into the blocked set, waiting on the MPP parts to be fully claimed,
 * but when `claim_payment` goes to add the next `ChannelMonitorUpdate` for the MPP claim, it will be placed in the blocked set, since the blocked set is non-empty.

Thus, we'll end up with a `ChannelMonitorUpdate` in the blocked set which is needed to unblock the channel, since it is a part of the MPP set which blocked the channel.

Trivial conflicts resolved in `lightning/src/util/test_utils.rs`
1 parent 1c50b9e commit 5006c6b
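
The substantive change is that MPP claim tracking is now keyed on the channel a part arrived over, `(PublicKey, OutPoint, ChannelId)`, rather than on each individual HTLC, so several parts received over one channel collapse into a single entry that needs the preimage persisted. The following is a minimal, standalone sketch of that dedup idea, not code from the diff below: `NodeId`, `FundingOutpoint`, `ChannelId`, `MppPart` and `channels_without_preimage` are simplified stand-ins for LDK's `PublicKey`, `OutPoint`, `ChannelId` and `MPPClaimHTLCSource`.

```rust
// Simplified stand-ins for LDK's PublicKey, OutPoint and ChannelId.
type NodeId = [u8; 33];
type FundingOutpoint = ([u8; 32], u16);
type ChannelId = [u8; 32];

// Simplified version of an MPP part. LDK's `MPPClaimHTLCSource` also carries an
// `htlc_id`, which is exactly what this commit stops using as a tracking key.
struct MppPart {
	counterparty_node_id: NodeId,
	funding_txo: FundingOutpoint,
	channel_id: ChannelId,
	htlc_id: u64,
}

/// Collapse MPP parts down to the set of channels which still need the preimage
/// persisted. Two HTLCs received over the same channel map to a single entry,
/// because one `ChannelMonitor` write covers every HTLC with that payment hash.
fn channels_without_preimage(parts: &[MppPart]) -> Vec<(NodeId, FundingOutpoint, ChannelId)> {
	let mut channels = Vec::with_capacity(parts.len());
	for part in parts {
		let chan = (part.counterparty_node_id, part.funding_txo, part.channel_id);
		if !channels.contains(&chan) {
			channels.push(chan);
		}
	}
	channels
}

fn main() {
	// Two parts over the same channel (different HTLC IDs) collapse to one entry.
	let part = |htlc_id| MppPart {
		counterparty_node_id: [2; 33],
		funding_txo: ([0; 32], 0),
		channel_id: [1; 32],
		htlc_id,
	};
	assert_eq!(channels_without_preimage(&[part(0), part(1)]).len(), 1);
}
```

The diff applies the same idea in two places: a `contains`-based push when building `PendingMPPClaim` during `claim_payment`, and a `sort_unstable()` plus `dedup()` pass when rebuilding pending claims during deserialization.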

File tree

4 files changed: +290 -26 lines changed

lightning/src/ln/chanmon_update_fail_tests.rs
+222

@@ -3819,3 +3819,225 @@ fn test_claim_to_closed_channel_blocks_claimed_event() {
 	nodes[1].chain_monitor.complete_sole_pending_chan_update(&chan_a.2);
 	expect_payment_claimed!(nodes[1], payment_hash, 1_000_000);
 }
+
+#[test]
+#[cfg(all(feature = "std", not(target_os = "windows")))]
+fn test_single_channel_multiple_mpp() {
+	use std::sync::atomic::{AtomicBool, Ordering};
+
+	// Test what happens when we attempt to claim an MPP with many parts that came to us through
+	// the same channel with a synchronous persistence interface which has very high latency.
+	//
+	// Previously, if a `revoke_and_ack` came in while we were still running in
+	// `ChannelManager::claim_payment` we'd end up hanging waiting to apply a
+	// `ChannelMonitorUpdate` until after it completed. See the commit which introduced this test
+	// for more info.
+	let chanmon_cfgs = create_chanmon_cfgs(9);
+	let node_cfgs = create_node_cfgs(9, &chanmon_cfgs);
+	let configs = [None, None, None, None, None, None, None, None, None];
+	let node_chanmgrs = create_node_chanmgrs(9, &node_cfgs, &configs);
+	let mut nodes = create_network(9, &node_cfgs, &node_chanmgrs);
+
+	let node_7_id = nodes[7].node.get_our_node_id();
+	let node_8_id = nodes[8].node.get_our_node_id();
+
+	// Send an MPP payment in six parts along the path shown from top to bottom
+	//        0
+	//  1 2 3 4 5 6
+	//        7
+	//        8
+	//
+	// We can in theory reproduce this issue with fewer channels/HTLCs, but getting this test
+	// robust is rather challenging. We rely on having the main test thread wait on locks held in
+	// the background `claim_funds` thread and unlocking when the `claim_funds` thread completes a
+	// single `ChannelMonitorUpdate`.
+	// This thread calls `get_and_clear_pending_msg_events()` and `handle_revoke_and_ack()`, both
+	// of which require `ChannelManager` locks, but we have to make sure this thread gets a chance
+	// to be blocked on the mutexes before we let the background thread wake `claim_funds` so that
+	// the mutex can switch to this main thread.
+	// This relies on our locks being fair, but also on our threads getting runtime during the test
+	// run, which can be pretty competitive. Thus we do a dumb dance to be as conservative as
+	// possible - we have a background thread which completes a `ChannelMonitorUpdate` (by sending
+	// into the `write_blocker` mpsc) but it doesn't run until a mpsc channel sends from this main
+	// thread to the background thread, and then we let it sleep a while before we send the
+	// `ChannelMonitorUpdate` unblocker.
+	// Further, we give ourselves two chances each time, needing 4 HTLCs just to unlock our two
+	// `ChannelManager` calls. We then need a few remaining HTLCs to actually trigger the bug, so
+	// we use 6 HTLCs.
+	// Finally, we do not run this test on Winblowz because it, somehow, in 2025, does not implement
+	// actual preemptive multitasking and thinks that cooperative multitasking somehow is
+	// acceptable in the 21st century, let alone a quarter of the way into it.
+	const MAX_THREAD_INIT_TIME: std::time::Duration = std::time::Duration::from_secs(1);
+
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 2, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 3, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 4, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 5, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 6, 100_000, 0);
+
+	create_announced_chan_between_nodes_with_value(&nodes, 1, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 2, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 3, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 4, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 5, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 6, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 7, 8, 1_000_000, 0);
+
+	let (mut route, payment_hash, payment_preimage, payment_secret) = get_route_and_payment_hash!(&nodes[0], nodes[8], 50_000_000);
+
+	send_along_route_with_secret(&nodes[0], route, &[&[&nodes[1], &nodes[7], &nodes[8]], &[&nodes[2], &nodes[7], &nodes[8]], &[&nodes[3], &nodes[7], &nodes[8]], &[&nodes[4], &nodes[7], &nodes[8]], &[&nodes[5], &nodes[7], &nodes[8]], &[&nodes[6], &nodes[7], &nodes[8]]], 50_000_000, payment_hash, payment_secret);
+
+	let (do_a_write, blocker) = std::sync::mpsc::sync_channel(0);
+	*nodes[8].chain_monitor.write_blocker.lock().unwrap() = Some(blocker);
+
+	// Until we have std::thread::scoped we have to unsafe { turn off the borrow checker }.
+	// We do this by casting a pointer to a `TestChannelManager` to a pointer to a
+	// `TestChannelManager` with different (in this case 'static) lifetime.
+	// This is even suggested in the second example at
+	// https://doc.rust-lang.org/std/mem/fn.transmute.html#examples
+	let claim_node: &'static TestChannelManager<'static, 'static> =
+		unsafe { std::mem::transmute(nodes[8].node as &TestChannelManager) };
+	let thrd = std::thread::spawn(move || {
+		// Initiate the claim in a background thread as it will immediately block waiting on the
+		// `write_blocker` we set above.
+		claim_node.claim_funds(payment_preimage);
+	});
+
+	// First unlock one monitor so that we have a pending
+	// `update_fulfill_htlc`/`commitment_signed` pair to pass to our counterparty.
+	do_a_write.send(()).unwrap();
+
+	// Then fetch the `update_fulfill_htlc`/`commitment_signed`. Note that the
+	// `get_and_clear_pending_msg_events` will immediately hang trying to take a peer lock which
+	// `claim_funds` is holding. Thus, we release a second write after a small sleep in the
+	// background to give `claim_funds` a chance to step forward, unblocking
+	// `get_and_clear_pending_msg_events`.
+	let do_a_write_background = do_a_write.clone();
+	let block_thrd2 = AtomicBool::new(true);
+	let block_thrd2_read: &'static AtomicBool = unsafe { std::mem::transmute(&block_thrd2) };
+	let thrd2 = std::thread::spawn(move || {
+		while block_thrd2_read.load(Ordering::Acquire) {
+			std::thread::yield_now();
+		}
+		std::thread::sleep(MAX_THREAD_INIT_TIME);
+		do_a_write_background.send(()).unwrap();
+		std::thread::sleep(MAX_THREAD_INIT_TIME);
+		do_a_write_background.send(()).unwrap();
+	});
+	block_thrd2.store(false, Ordering::Release);
+	let first_updates = get_htlc_update_msgs(&nodes[8], &nodes[7].node.get_our_node_id());
+	thrd2.join().unwrap();
+
+	// Disconnect node 7 from all its peers so it doesn't bother to fail the HTLCs back
+	nodes[7].node.peer_disconnected(nodes[1].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[2].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[3].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[4].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[5].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[6].node.get_our_node_id());
+
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &first_updates.update_fulfill_htlcs[0]);
+	check_added_monitors(&nodes[7], 1);
+	expect_payment_forwarded!(nodes[7], nodes[1], nodes[8], Some(1000), false, false);
+	nodes[7].node.handle_commitment_signed(node_8_id, &first_updates.commitment_signed);
+	check_added_monitors(&nodes[7], 1);
+	let (raa, cs) = get_revoke_commit_msgs(&nodes[7], &node_8_id);
+
+	// Now, handle the `revoke_and_ack` from node 7. Note that `claim_funds` is still blocked on
+	// our peer lock, so we have to release a write to let it process.
+	// After this call completes, the channel previously would be locked up and should not be able
+	// to make further progress.
+	let do_a_write_background = do_a_write.clone();
+	let block_thrd3 = AtomicBool::new(true);
+	let block_thrd3_read: &'static AtomicBool = unsafe { std::mem::transmute(&block_thrd3) };
+	let thrd3 = std::thread::spawn(move || {
+		while block_thrd3_read.load(Ordering::Acquire) {
+			std::thread::yield_now();
+		}
+		std::thread::sleep(MAX_THREAD_INIT_TIME);
+		do_a_write_background.send(()).unwrap();
+		std::thread::sleep(MAX_THREAD_INIT_TIME);
+		do_a_write_background.send(()).unwrap();
+	});
+	block_thrd3.store(false, Ordering::Release);
+	nodes[8].node.handle_revoke_and_ack(node_7_id, &raa);
+	thrd3.join().unwrap();
+	assert!(!thrd.is_finished());
+
+	let thrd4 = std::thread::spawn(move || {
+		do_a_write.send(()).unwrap();
+		do_a_write.send(()).unwrap();
+	});
+
+	thrd4.join().unwrap();
+	thrd.join().unwrap();
+
+	expect_payment_claimed!(nodes[8], payment_hash, 50_000_000);
+
+	// At the end, we should have 7 ChannelMonitorUpdates - 6 for HTLC claims, and one for the
+	// above `revoke_and_ack`.
+	check_added_monitors(&nodes[8], 7);
+
+	// Now drive everything to the end, at least as far as node 7 is concerned...
+	*nodes[8].chain_monitor.write_blocker.lock().unwrap() = None;
+	nodes[8].node.handle_commitment_signed(node_7_id, &cs);
+	check_added_monitors(&nodes[8], 1);
+
+	let (updates, raa) = get_updates_and_revoke(&nodes[8], &nodes[7].node.get_our_node_id());
+
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &updates.update_fulfill_htlcs[0]);
+	expect_payment_forwarded!(nodes[7], nodes[2], nodes[8], Some(1000), false, false);
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &updates.update_fulfill_htlcs[1]);
+	expect_payment_forwarded!(nodes[7], nodes[3], nodes[8], Some(1000), false, false);
+	let mut next_source = 4;
+	if let Some(update) = updates.update_fulfill_htlcs.get(2) {
+		nodes[7].node.handle_update_fulfill_htlc(node_8_id, update);
+		expect_payment_forwarded!(nodes[7], nodes[4], nodes[8], Some(1000), false, false);
+		next_source += 1;
+	}
+
+	nodes[7].node.handle_commitment_signed(node_8_id, &updates.commitment_signed);
+	nodes[7].node.handle_revoke_and_ack(node_8_id, &raa);
+	if updates.update_fulfill_htlcs.get(2).is_some() {
+		check_added_monitors(&nodes[7], 5);
+	} else {
+		check_added_monitors(&nodes[7], 4);
+	}
+
+	let (raa, cs) = get_revoke_commit_msgs(&nodes[7], &node_8_id);
+
+	nodes[8].node.handle_revoke_and_ack(node_7_id, &raa);
+	nodes[8].node.handle_commitment_signed(node_7_id, &cs);
+	check_added_monitors(&nodes[8], 2);
+
+	let (updates, raa) = get_updates_and_revoke(&nodes[8], &node_7_id);
+
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &updates.update_fulfill_htlcs[0]);
+	expect_payment_forwarded!(nodes[7], nodes[next_source], nodes[8], Some(1000), false, false);
+	next_source += 1;
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &updates.update_fulfill_htlcs[1]);
+	expect_payment_forwarded!(nodes[7], nodes[next_source], nodes[8], Some(1000), false, false);
+	next_source += 1;
+	if let Some(update) = updates.update_fulfill_htlcs.get(2) {
+		nodes[7].node.handle_update_fulfill_htlc(node_8_id, update);
+		expect_payment_forwarded!(nodes[7], nodes[next_source], nodes[8], Some(1000), false, false);
+	}
+
+	nodes[7].node.handle_commitment_signed(node_8_id, &updates.commitment_signed);
+	nodes[7].node.handle_revoke_and_ack(node_8_id, &raa);
+	if updates.update_fulfill_htlcs.get(2).is_some() {
+		check_added_monitors(&nodes[7], 5);
+	} else {
+		check_added_monitors(&nodes[7], 4);
+	}
+
+	let (raa, cs) = get_revoke_commit_msgs(&nodes[7], &node_8_id);
+	nodes[8].node.handle_revoke_and_ack(node_7_id, &raa);
+	nodes[8].node.handle_commitment_signed(node_7_id, &cs);
+	check_added_monitors(&nodes[8], 2);
+
+	let raa = get_event_msg!(nodes[8], MessageSendEvent::SendRevokeAndACK, node_7_id);
+	nodes[7].node.handle_revoke_and_ack(node_8_id, &raa);
+	check_added_monitors(&nodes[7], 1);
+}
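
The test throttles node 8's monitor persistence through `nodes[8].chain_monitor.write_blocker`, a test-only hook whose definition is not rendered on this page (it appears to live in `lightning/src/util/test_utils.rs`, where the commit message notes trivial conflicts). A rough, self-contained sketch of the pattern - blocking each synchronous monitor write on an mpsc receiver until the test releases it - follows; `BlockingPersister` and `persist_monitor_update` are invented names for illustration, not LDK APIs.

```rust
use std::sync::mpsc::{sync_channel, Receiver, SyncSender};
use std::sync::Mutex;
use std::thread;

/// Toy persister: if a blocker is installed, every "monitor write" waits until
/// the test sends one unit through the paired `SyncSender`.
struct BlockingPersister {
	write_blocker: Mutex<Option<Receiver<()>>>,
}

impl BlockingPersister {
	fn persist_monitor_update(&self, update_id: u64) {
		if let Some(blocker) = self.write_blocker.lock().unwrap().as_ref() {
			// Simulates high-latency synchronous persistence: the write only
			// "completes" once the test thread releases it.
			blocker.recv().unwrap();
		}
		println!("monitor update {} persisted", update_id);
	}
}

fn main() {
	let (do_a_write, blocker): (SyncSender<()>, Receiver<()>) = sync_channel(0);
	let persister = BlockingPersister { write_blocker: Mutex::new(Some(blocker)) };

	// Background "claim" thread issues two monitor writes, each gated on the test.
	let handle = thread::spawn(move || {
		persister.persist_monitor_update(1);
		persister.persist_monitor_update(2);
	});

	// The test releases the writes one at a time, much like the
	// `do_a_write.send(()).unwrap()` calls in the test above.
	do_a_write.send(()).unwrap();
	do_a_write.send(()).unwrap();
	handle.join().unwrap();
}
```

In the real test the same rendezvous channel is driven from several helper threads so that `get_and_clear_pending_msg_events` and `handle_revoke_and_ack` on the main thread can contend for the locks `claim_funds` holds.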

lightning/src/ln/channelmanager.rs
+34 -26

@@ -1105,7 +1105,7 @@ pub(crate) enum MonitorUpdateCompletionAction {
 		/// A pending MPP claim which hasn't yet completed.
 		///
 		/// Not written to disk.
-		pending_mpp_claim: Option<(PublicKey, ChannelId, u64, PendingMPPClaimPointer)>,
+		pending_mpp_claim: Option<(PublicKey, ChannelId, PendingMPPClaimPointer)>,
 	},
 	/// Indicates an [`events::Event`] should be surfaced to the user and possibly resume the
 	/// operation of another channel.

@@ -1207,10 +1207,16 @@ impl From<&MPPClaimHTLCSource> for HTLCClaimSource {
 	}
 }

+#[derive(Debug)]
+pub(crate) struct PendingMPPClaim {
+	channels_without_preimage: Vec<(PublicKey, OutPoint, ChannelId)>,
+	channels_with_preimage: Vec<(PublicKey, OutPoint, ChannelId)>,
+}
+
 #[derive(Clone, Debug, Hash, PartialEq, Eq)]
 /// The source of an HTLC which is being claimed as a part of an incoming payment. Each part is
-/// tracked in [`PendingMPPClaim`] as well as in [`ChannelMonitor`]s, so that it can be converted
-/// to an [`HTLCClaimSource`] for claim replays on startup.
+/// tracked in [`ChannelMonitor`]s, so that it can be converted to an [`HTLCClaimSource`] for claim
+/// replays on startup.
 struct MPPClaimHTLCSource {
 	counterparty_node_id: PublicKey,
 	funding_txo: OutPoint,

@@ -1225,12 +1231,6 @@ impl_writeable_tlv_based!(MPPClaimHTLCSource, {
 	(6, htlc_id, required),
 });

-#[derive(Debug)]
-pub(crate) struct PendingMPPClaim {
-	channels_without_preimage: Vec<MPPClaimHTLCSource>,
-	channels_with_preimage: Vec<MPPClaimHTLCSource>,
-}
-
 #[derive(Clone, Debug, PartialEq, Eq)]
 /// When we're claiming a(n MPP) payment, we want to store information about that payment in the
 /// [`ChannelMonitor`] so that we can replay the claim without any information from the

@@ -7017,8 +7017,15 @@ where
 			}
 		}).collect();
 		let pending_mpp_claim_ptr_opt = if sources.len() > 1 {
+			let mut channels_without_preimage = Vec::with_capacity(mpp_parts.len());
+			for part in mpp_parts.iter() {
+				let chan = (part.counterparty_node_id, part.funding_txo, part.channel_id);
+				if !channels_without_preimage.contains(&chan) {
+					channels_without_preimage.push(chan);
+				}
+			}
 			Some(Arc::new(Mutex::new(PendingMPPClaim {
-				channels_without_preimage: mpp_parts.clone(),
+				channels_without_preimage,
 				channels_with_preimage: Vec::new(),
 			})))
 		} else {

@@ -7029,7 +7036,7 @@ where
 		let this_mpp_claim = pending_mpp_claim_ptr_opt.as_ref().and_then(|pending_mpp_claim|
 			if let Some(cp_id) = htlc.prev_hop.counterparty_node_id {
 				let claim_ptr = PendingMPPClaimPointer(Arc::clone(pending_mpp_claim));
-				Some((cp_id, htlc.prev_hop.channel_id, htlc.prev_hop.htlc_id, claim_ptr))
+				Some((cp_id, htlc.prev_hop.channel_id, claim_ptr))
 			} else {
 				None
 			}

@@ -7375,7 +7382,7 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
 		for action in actions.into_iter() {
 			match action {
 				MonitorUpdateCompletionAction::PaymentClaimed { payment_hash, pending_mpp_claim } => {
-					if let Some((counterparty_node_id, chan_id, htlc_id, claim_ptr)) = pending_mpp_claim {
+					if let Some((counterparty_node_id, chan_id, claim_ptr)) = pending_mpp_claim {
						let per_peer_state = self.per_peer_state.read().unwrap();
						per_peer_state.get(&counterparty_node_id).map(|peer_state_mutex| {
							let mut peer_state = peer_state_mutex.lock().unwrap();

@@ -7386,24 +7393,17 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
 							if *pending_claim == claim_ptr {
 								let mut pending_claim_state_lock = pending_claim.0.lock().unwrap();
 								let pending_claim_state = &mut *pending_claim_state_lock;
-								pending_claim_state.channels_without_preimage.retain(|htlc_info| {
+								pending_claim_state.channels_without_preimage.retain(|(cp, op, cid)| {
 									let this_claim =
-										htlc_info.counterparty_node_id == counterparty_node_id
-											&& htlc_info.channel_id == chan_id
-											&& htlc_info.htlc_id == htlc_id;
+										*cp == counterparty_node_id && *cid == chan_id;
 									if this_claim {
-										pending_claim_state.channels_with_preimage.push(htlc_info.clone());
+										pending_claim_state.channels_with_preimage.push((*cp, *op, *cid));
 										false
 									} else { true }
 								});
 								if pending_claim_state.channels_without_preimage.is_empty() {
-									for htlc_info in pending_claim_state.channels_with_preimage.iter() {
-										let freed_chan = (
-											htlc_info.counterparty_node_id,
-											htlc_info.funding_txo,
-											htlc_info.channel_id,
-											blocker.clone()
-										);
+									for (cp, op, cid) in pending_claim_state.channels_with_preimage.iter() {
+										let freed_chan = (*cp, *op, *cid, blocker.clone());
 										freed_channels.push(freed_chan);
 									}
 								}

@@ -14232,8 +14232,16 @@ where
 				if payment_claim.mpp_parts.is_empty() {
 					return Err(DecodeError::InvalidValue);
 				}
+				let mut channels_without_preimage = payment_claim.mpp_parts.iter()
+					.map(|htlc_info| (htlc_info.counterparty_node_id, htlc_info.funding_txo, htlc_info.channel_id))
+					.collect::<Vec<_>>();
+				// If we have multiple MPP parts which were received over the same channel,
+				// we only track it once as once we get a preimage durably in the
+				// `ChannelMonitor` it will be used for all HTLCs with a matching hash.
+				channels_without_preimage.sort_unstable();
+				channels_without_preimage.dedup();
 				let pending_claims = PendingMPPClaim {
-					channels_without_preimage: payment_claim.mpp_parts.clone(),
+					channels_without_preimage,
 					channels_with_preimage: Vec::new(),
 				};
 				let pending_claim_ptr_opt = Some(Arc::new(Mutex::new(pending_claims)));

@@ -14266,7 +14274,7 @@ where

 				for part in payment_claim.mpp_parts.iter() {
 					let pending_mpp_claim = pending_claim_ptr_opt.as_ref().map(|ptr| (
-						part.counterparty_node_id, part.channel_id, part.htlc_id,
+						part.counterparty_node_id, part.channel_id,
 						PendingMPPClaimPointer(Arc::clone(&ptr))
 					));
 					let pending_claim_ptr = pending_claim_ptr_opt.as_ref().map(|ptr|

lightning/src/ln/functional_test_utils.rs
+20

@@ -779,6 +779,26 @@ pub fn get_revoke_commit_msgs<CM: AChannelManager, H: NodeHolder<CM=CM>>(node: &
 	})
 }

+/// Gets an `UpdateHTLCs` and `revoke_and_ack` (i.e. after we get a responding `commitment_signed`
+/// while we have updates in the holding cell).
+pub fn get_updates_and_revoke<CM: AChannelManager, H: NodeHolder<CM=CM>>(node: &H, recipient: &PublicKey) -> (msgs::CommitmentUpdate, msgs::RevokeAndACK) {
+	let events = node.node().get_and_clear_pending_msg_events();
+	assert_eq!(events.len(), 2);
+	(match events[0] {
+		MessageSendEvent::UpdateHTLCs { ref node_id, ref updates } => {
+			assert_eq!(node_id, recipient);
+			(*updates).clone()
+		},
+		_ => panic!("Unexpected event"),
+	}, match events[1] {
+		MessageSendEvent::SendRevokeAndACK { ref node_id, ref msg } => {
+			assert_eq!(node_id, recipient);
+			(*msg).clone()
+		},
+		_ => panic!("Unexpected event"),
+	})
+}
+
 #[macro_export]
 /// Gets an RAA and CS which were sent in response to a commitment update
 ///
