Add support for websocket swap updates proxying #1
Conversation
danielgranhao left a comment
Looking good!
    upstreamURL string
    clients     map[*websocket.Conn]map[string]bool // Tracks swap IDs for each client
    subscribers map[string]map[*websocket.Conn]bool // Tracks clients for each swap ID
    mu          sync.Mutex
I wouldn't say we need to do it now, but we should keep in mind the possibility of changing to an RWMutex or even per-swap or per-conn locking in case we see contention issues.
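For illustration, a minimal sketch of the RWMutex variant, assuming the struct above and gorilla/websocket; the method names here are hypothetical, not taken from the PR:

```go
package proxy

import (
	"sync"

	"github.com/gorilla/websocket"
)

type proxy struct {
	upstreamURL string
	clients     map[*websocket.Conn]map[string]bool // Tracks swap IDs for each client
	subscribers map[string]map[*websocket.Conn]bool // Tracks clients for each swap ID
	mu          sync.RWMutex                        // RWMutex: readers no longer block each other
}

// Read path: fan-out lookups only take the shared lock.
func (p *proxy) subscribersFor(swapID string) []*websocket.Conn {
	p.mu.RLock()
	defer p.mu.RUnlock()
	conns := make([]*websocket.Conn, 0, len(p.subscribers[swapID]))
	for c := range p.subscribers[swapID] {
		conns = append(conns, c)
	}
	return conns
}

// Write path: subscriptions still take the exclusive lock.
func (p *proxy) subscribe(c *websocket.Conn, swapID string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.clients[c] == nil {
		p.clients[c] = make(map[string]bool)
	}
	p.clients[c][swapID] = true
	if p.subscribers[swapID] == nil {
		p.subscribers[swapID] = make(map[*websocket.Conn]bool)
	}
	p.subscribers[swapID][c] = true
}
```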
dangeross left a comment
LGTM. Agree with @danielgranhao that the "ping" message is also needed.
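For reference, a minimal sketch of one way the proxy could answer such pings from its own clients, assuming an application-level {"op": "ping"} / {"event": "pong"} exchange and gorilla/websocket; the field names are assumptions and should be checked against the actual Boltz WebSocket API:

```go
package proxy

import (
	"encoding/json"

	"github.com/gorilla/websocket"
)

// handleClientMessage answers pings locally instead of forwarding them upstream.
// The {"op": "ping"} / {"event": "pong"} shape is assumed for illustration only.
func handleClientMessage(conn *websocket.Conn, raw []byte) error {
	var msg struct {
		Op string `json:"op"`
	}
	if err := json.Unmarshal(raw, &msg); err != nil {
		return err
	}
	if msg.Op == "ping" {
		return conn.WriteJSON(map[string]any{"event": "pong"})
	}
	// ... handle subscribe/unsubscribe and other ops here ...
	return nil
}
```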
JssDWt left a comment
LGTM!
danielgranhao left a comment
LGTM
roeierez left a comment
Looks good! A few small comments.
    p.mu.Unlock()

    if len(swapIDs) > 0 {
        resubscribeMsg := map[string]any{
I think, just as a safety mechanism, we should check with Boltz whether they have any limitation on the message size or on the number of swaps that can be sent in one message.
Boltz devs confirmed that there is no limit on the number of swap IDs.
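Even with no documented limit, capping the batch size is a cheap safety net. A sketch of what chunked resubscription could look like, assuming a subscribe message with op/channel/args fields and gorilla/websocket; the cap and field names are illustrative, not confirmed values:

```go
package proxy

import "github.com/gorilla/websocket"

const maxSwapIDsPerMessage = 500 // illustrative cap, not a documented Boltz limit

// resubscribe sends the swap IDs upstream in fixed-size batches so a single
// message never grows unbounded.
func resubscribe(upstream *websocket.Conn, swapIDs []string) error {
	for start := 0; start < len(swapIDs); start += maxSwapIDsPerMessage {
		end := start + maxSwapIDsPerMessage
		if end > len(swapIDs) {
			end = len(swapIDs)
		}
		msg := map[string]any{
			"op":      "subscribe",
			"channel": "swap.update",
			"args":    swapIDs[start:end],
		}
		if err := upstream.WriteJSON(msg); err != nil {
			return err
		}
	}
	return nil
}
```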
    // Notify clients
    for client, updates := range clientUpdates {
        // Create a single notification message for the client
        notification := map[string]any{
Did you consider taking the message as-is from the upstream and sending it to the subscribers instead of creating the notification object?
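A sketch of the forward-as-is approach, assuming the upstream update carries the affected swap IDs in an args array and that gorilla/websocket is in use; the struct and field names here are illustrative:

```go
package proxy

import (
	"encoding/json"
	"sync"

	"github.com/gorilla/websocket"
)

type proxy struct {
	mu          sync.Mutex
	subscribers map[string]map[*websocket.Conn]bool
}

// forwardUpstream relays the raw upstream frame verbatim to every client
// subscribed to one of the swaps it mentions, instead of rebuilding a
// per-client notification object.
func (p *proxy) forwardUpstream(raw []byte) {
	// Decode just enough to know which swaps the update refers to.
	var update struct {
		Args []struct {
			ID string `json:"id"`
		} `json:"args"`
	}
	if err := json.Unmarshal(raw, &update); err != nil {
		return
	}

	p.mu.Lock()
	targets := make(map[*websocket.Conn]bool)
	for _, arg := range update.Args {
		for client := range p.subscribers[arg.ID] {
			targets[client] = true
		}
	}
	p.mu.Unlock()

	for client := range targets {
		// Forward the original bytes untouched.
		_ = client.WriteMessage(websocket.TextMessage, raw)
	}
}
```

One trade-off to weigh: forwarding the frame verbatim means a client can receive updates for swaps it never subscribed to whenever the upstream batches several swaps into one message, which may be the reason for building a per-client notification object.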
    // Optionally, send an unsubscribe message to the upstream server
    if p.upstream != nil {
        unsubscribeMsg := map[string]any{
Unsubscribe supports multiple swaps, like the code does in subscribe. It seems better to aggregate the swap IDs and call it once at the end of the loop to unsubscribe.
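A sketch of the aggregated variant, assuming the same op/channel/args message shape as subscribe (the field names are illustrative):

```go
package proxy

import "github.com/gorilla/websocket"

type proxy struct {
	upstream *websocket.Conn
}

// unsubscribeUpstream is called once, after the cleanup loop has collected
// every swap ID that lost its last subscriber, instead of once per swap.
func (p *proxy) unsubscribeUpstream(staleSwapIDs []string) error {
	if p.upstream == nil || len(staleSwapIDs) == 0 {
		return nil
	}
	unsubscribeMsg := map[string]any{
		"op":      "unsubscribe",
		"channel": "swap.update",
		"args":    staleSwapIDs,
	}
	return p.upstream.WriteJSON(unsubscribeMsg)
}
```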