Potential infinite loop with canvas recording web worker #13743


Closed
billyvg opened this issue Sep 20, 2024 · 23 comments

billyvg commented Sep 20, 2024

Seeing a potential cycle with the canvas replay web worker. The stack trace looks something like this:

sendBufferedReplayOrFlush
startRecording
...
getCanvasManager
new CanvasManager
initFPSWorker
new window.Worker

Then the part that seems to cycle:

sendBufferedReplayOrFlush
stopRecording
_stopRecording
...
???.reset (mutation buffer, I think?)
CanvasManager.reset
initFPSWorker
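
For illustration, a minimal sketch of the suspected shape of the cycle (hypothetical names mirroring the trace above, not the actual SDK source): reset() unconditionally re-creates the FPS worker, so every stop/flush pass that resets the buffers spawns one more dedicated worker.

class CanvasManagerSketch {
  constructor(workerUrl) {
    this.workerUrl = workerUrl;
    this.initFPSWorker();
  }

  initFPSWorker() {
    // No guard against an existing worker or against recording having stopped.
    this.worker = new Worker(this.workerUrl);
  }

  reset() {
    // Called for each buffer during stopRecording(); re-creates the worker every time.
    this.initFPSWorker();
  }
}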

Customer says this generally happens after a user comes back to the tab from a long period of idling.

Zendesk ticket

trogau commented Sep 20, 2024

Hi folks, I reported this issue via support & they advised to come here. Happy to provide more information as needed.

To explain in a bit more detail:

We have an internal application built on Node.js with Vue.js. We are running the Vue Sentry package, version 8.25.0 (which we realise is a couple of versions behind current).

This issue was reported by our internal users, whose tabs in Chrome (latest Chrome version, running on Chromebooks - Intel i5s with 8 GB of RAM) freeze when they perform certain actions. Some of these actions trigger console errors, which might be contributing to the behaviour, but I'm not sure about that.

When looking at a frozen tab, there's not much we can diagnose - DevTools is locked up. We can see in the Chrome Task Manager that there are many, many dedicated workers running under the frozen Chrome process, and memory usage seems significantly higher than normal.

The tab remains frozen, with periodic dialogs from Chrome asking whether we want to wait or exit. I think waiting does nothing but spin up more dedicated workers, though it's hard to tell because the machine is barely usable by that point and there are so many workers that it's hard to see what is going on - the only recovery is to close the tab.

To see if we could identify the issue, we made a little Chrome extension that overrides the Worker constructor and captures a stack trace whenever a worker is created. It showed something like the following, repeated over and over again:

2024-09-20T16:54:21+10:00 | https://app.example.com | [ip address redacted] | [16:54:20] Creating worker with script: blob:https://app.explorate.co/4899ff93-b770-4fa7-8345-bc6aaa98fa2d, Stack trace: Error
    at new <anonymous> (chrome-extension://egfiddhbdemmalmbdeockdnknmffohmg/injected.js:12:32)
    at Nrt.initFPSWorker (https://app.explorate.co/assets/index-dbcae719.js:750:9710)
    at Nrt.reset (https://app.explorate.co/assets/index-dbcae719.js:750:7943)
    at Xet.reset (https://app.explorate.co/assets/index-dbcae719.js:746:14920)
    at https://app.explorate.co/assets/index-dbcae719.js:746:27620
    at Array.forEach (<anonymous>)
    at https://app.explorate.co/assets/index-dbcae719.js:746:27607
    at https://app.explorate.co/assets/index-dbcae719.js:746:15369
    at https://app.explorate.co/assets/index-dbcae719.js:746:43554
    at Array.forEach (<anonymous>)
    at Fg._stopRecording (https://app.explorate.co/assets/index-dbcae719.js:746:43542)
    at Fg.stopRecording (https://app.explorate.co/assets/index-dbcae719.js:748:6110)
    at Fg.stop (https://app.explorate.co/assets/index-dbcae719.js:748:6417)
    at Fg._refreshSession (https://app.explorate.co/assets/index-dbcae719.js:748:9881)
    at Fg._checkSession (https://app.explorate.co/assets/index-dbcae719.js:748:9801)
    at Fg.checkAndHandleExpiredSession (https://app.explorate.co/assets/index-dbcae719.js:748:8242)
    at Fg._doChangeToForegroundTasks (https://app.explorate.co/assets/index-dbcae719.js:748:11563)
    at _handleWindowFocus (https://app.explorate.co/assets/index-dbcae719.js:748:11189)
    at r (https://app.explorate.co/assets/index-dbcae719.js:741:4773)
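
For context, an override along these lines (a sketch of the general approach, not the extension's actual code) is enough to log a stack trace for every worker creation:

const NativeWorker = window.Worker;
window.Worker = function Worker(scriptURL, options) {
  // Capture where the worker is being constructed from.
  const stack = new Error().stack;
  console.log(`Creating worker with script: ${scriptURL}`, stack);
  // Delegate to the real constructor so the page keeps working.
  return new NativeWorker(scriptURL, options);
};
window.Worker.prototype = NativeWorker.prototype;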

I was able to reproduce this on my machine by:

  1. Loading our application
  2. Moving to a different tab and/or leaving the PC for an hour or so
  3. Coming back to the application and resuming activity in the initial tab

Doing that would regularly trigger a burst of worker creation. On my more powerful laptop (i7 / 32GB) I triggered about 100 workers being created at once, though it didn't cause any noticeable performance issues.

My guess is that on the lower-spec machines, creating that many workers grinds everything to a halt and then crashes the tab, and that there is a loop or race condition in the Sentry Replay code triggering endless worker creation, either as a direct result of something weird in our code or just a random bug somewhere.

There are two things we have on our TODO to try here:

  1. Upgrade to the latest version of the Sentry/vue package
  2. Disable the canvas recording

Open to any other suggestions as well if it helps zero in on the issue.

billyvg commented Sep 20, 2024

Thanks for the detailed description @trogau -- just want to clarify a few details:

  • what are your replay sample rates (session and onError)?
  • regarding your Chrome extension that overrides the Worker class: is it throwing an error and causing Session Replay to try to capture a new replay?
  • do the other stack traces also have _handleWindowFocus at the bottom of the trace?

trogau commented Sep 20, 2024

  1. Sample rates are 0.05 for session and 1.0 for errors
  2. No, it doesn't throw an error when a worker is created; it only logs the event and sends it to our remote endpoint to capture the data.
  3. No, sorry - _handleWindowFocus actually only shows up in a couple of the most recent events from when I was doing some testing yesterday. The one below is more representative of what we're seeing:
2024-09-16T14:51:29+10:00 | https://app.example.com | [ip address redacted] | [14:51:28] Creating worker with script: blob:https://app.explorate.co/ae32033e-b5b8-4299-acf6-6173dde42e7f, Stack trace: Error
    at new window.Worker (chrome-extension://mfenbcgblaedimllfnpabdkgcbggfcml/injected.js:11:32)
    at Nrt.initFPSWorker (https://app.explorate.co/assets/index-da618cdf.js:750:9710)
    at Nrt.reset (https://app.explorate.co/assets/index-da618cdf.js:750:7943)
    at Xet.reset (https://app.explorate.co/assets/index-da618cdf.js:746:14920)
    at https://app.explorate.co/assets/index-da618cdf.js:746:27620
    at Array.forEach (<anonymous>)
    at https://app.explorate.co/assets/index-da618cdf.js:746:27607
    at https://app.explorate.co/assets/index-da618cdf.js:746:15369
    at https://app.explorate.co/assets/index-da618cdf.js:746:43554
    at Array.forEach (<anonymous>)
    at Fg._stopRecording (https://app.explorate.co/assets/index-da618cdf.js:746:43542)
    at Fg.stopRecording (https://app.explorate.co/assets/index-da618cdf.js:748:6110)
    at Fg.stop (https://app.explorate.co/assets/index-da618cdf.js:748:6417)
    at Fg._runFlush (https://app.explorate.co/assets/index-da618cdf.js:748:13600)

I should note that I have not yet captured a stack trace from an actual crash; we haven't had one in the past few days while the extension was running and logging data. The events we've been capturing so far - which again show up to around 100 workers getting created, which doesn't seem like enough to cause a crash even on the Chromebooks - are happening relatively frequently, though.

trogau commented Sep 23, 2024

We captured a stack trace from a freeze this morning, and it seems to confirm that mass creation of workers is what causes the problem. Attached is a log snippet showing about 1008 workers created in ~3 seconds, which froze the browser tab. Not sure how helpful it is, but I thought I'd include it for reference.

log.txt

@chargome

@trogau thanks for the insights - could you also specify which tasks you are running on the canvas? Is it a continuous animation or a static canvas? This might help with reproducing the issue.

trogau commented Sep 23, 2024

@trogau thanks for the insights - could you also specify which tasks you are running on the canvas? Is it a continuous animation or a static canvas? This might help with reproducing the issue.

@chargome : I'm double-checking with our team, but AFAIK the pages where we're seeing this happen do not have any canvas elements at all. We do have /some/ pages with canvas (a Mapbox map component), but it isn't loaded on the page where we're seeing the majority of these issues.

We do have Sentry.replayCanvasIntegration() set in our Sentry.init() though.
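
For reference, the setup described above would look roughly like this in @sentry/vue v8 (the DSN and app wiring are placeholders; the sample rates are the ones reported earlier in this thread):

import * as Sentry from "@sentry/vue";

Sentry.init({
  app, // the Vue app instance (placeholder)
  dsn: "https://publicKey@oXXXX.ingest.sentry.io/XXXX", // placeholder DSN
  integrations: [
    Sentry.replayIntegration(),
    Sentry.replayCanvasIntegration(), // canvas recording enabled globally, even on pages with no canvas
  ],
  replaysSessionSampleRate: 0.05,
  replaysOnErrorSampleRate: 1.0,
});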

trogau commented Sep 25, 2024

FYI we've upgraded to v8.31.0 and are still seeing large numbers of workers created (we just had one instance of 730 created in a few seconds - not enough to crash the tab, so the user didn't notice, but we can see it in the logging). The magic number seems to be about 1000 workers to freeze the tab on these devices.

billyvg commented Oct 2, 2024

@trogau Thanks for your help, I believe I've identified the issue here: #13855 -- can you actually try downgrading to 8.25.0 to see if that's affected?

Edit: Also, do you do anything custom with the Replay integration (e.g. call replay.flush() somewhere)?

trogau commented Oct 4, 2024

Hi @billyvg - we don't do anything custom with the Replay integration - just set it up in init and that's it.

v8.25.0 is what we were using initially, and it definitely did have the problem - happy to downgrade if there's something specific we can test, but I can confirm v8.25.0 is where we first experienced the issue.

trogau commented Oct 16, 2024

@billyvg : FYI we just had our first freeze on v8.34.0 - we can see it triggered ~1000 worker creations in ~2 seconds, which crashed the machine.

billyvg commented Oct 16, 2024

@trogau ok, can you try two things:

  • 8.35.0-beta.0
  • Add a beforeErrorSampling callback to the replayIntegration options and log the event to your backend. I want to verify this is the call site that triggers sendBufferedReplayOrFlush:
replayIntegration({
  beforeErrorSampling: event => {
    // TODO: log to backend service
    return event;
  },
});
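
One possible way to fill in that TODO, assuming a hypothetical /replay-sampling-log endpoint on your backend:

replayIntegration({
  beforeErrorSampling: event => {
    try {
      // sendBeacon survives navigations and won't block the sampling decision.
      navigator.sendBeacon(
        "/replay-sampling-log",
        JSON.stringify({ eventId: event.event_id, timestamp: Date.now() })
      );
    } catch (e) {
      // Never let logging interfere with the sampling callback itself.
    }
    return event;
  },
});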

trogau commented Oct 22, 2024

@billyvg : we've just deployed this and have it logging now. I've got one sample from an unrelated error, but it contains a lot of info, including material that might be sensitive/internal, so I'm a bit reluctant to post it publicly - is there a way to send it to you directly?

I assume the goal is to see whether an error is generated when we have another crash/freeze incident?

billyvg commented Oct 22, 2024

@trogau Yeah exactly - I want to see if there are errors causing it to stop recording and freeze. Feel free to email me at billy at sentry.io

trogau commented Oct 23, 2024

Just sent the first log file through!

billyvg commented Oct 24, 2024

Thanks! I'll take a look.

trogau commented Oct 25, 2024

@billyvg : we're seeing an increase in the frequency and spread of this issue amongst our staff - not sure if it's due to code changes in the last couple of versions, but it's causing increasing disruption to their workflow, so unfortunately we might have to disable it for a while until there's some concrete progress. This will also help us confirm 100% that Replay is responsible - it seems pretty likely, but we haven't disabled it completely yet, so this will at least rule that out.

If we do this, in the interest of testing in the most useful way: can we just set replaysSessionSampleRate and replaysOnErrorSampleRate to 0 and be confident that will "turn it off" enough, or should we remove the Sentry.replayIntegration() section?

billyvg commented Oct 25, 2024

@trogau yeah, that seems reasonable; an alternative would be to remove only the canvas recording (though I don't know how much your replays depend on that). Setting the sample rates to 0 will be enough to turn it off (provided you don't have any custom SDK calls that start it).
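
To make the two options concrete, a sketch against the config shown earlier in the thread (values and omitted options are illustrative):

// Option A: keep the integrations but sample nothing.
Sentry.init({
  // ...existing options...
  integrations: [Sentry.replayIntegration(), Sentry.replayCanvasIntegration()],
  replaysSessionSampleRate: 0,
  replaysOnErrorSampleRate: 0,
});

// Option B: keep replay, drop only the canvas recording.
Sentry.init({
  // ...existing options...
  integrations: [Sentry.replayIntegration()],
  replaysSessionSampleRate: 0.05,
  replaysOnErrorSampleRate: 1.0,
});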

billyvg commented Nov 21, 2024

@trogau sorry for the delay - we've released 8.39.0, which does not re-create workers once recording stops.
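
Conceptually, the fix amounts to a guard like the following (a rough sketch, not the actual change shipped in 8.39.0): remember that recording has stopped and skip worker re-creation in reset().

class CanvasManagerSketch {
  initFPSWorker() {
    if (this.stopped) {
      return; // recording has ended; don't spin up another worker
    }
    this.worker = new Worker(this.workerUrl);
  }

  stop() {
    this.stopped = true;
    if (this.worker) {
      this.worker.terminate();
      this.worker = undefined;
    }
  }

  reset() {
    // Now a no-op once recording has stopped, instead of re-creating the worker.
    this.initFPSWorker();
  }
}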

trogau commented Nov 22, 2024

Awesome, thanks so much - we'll look at upgrading the next time we have a window (with Xmas approaching it might not be until next year now, unfortunately, but we'll see how we go).

FWIW we disabled the canvas recording and, as suspected, that immediately made the problem go away. We don't really need the canvas recording, so we're likely to leave it disabled in any case, but we'll continue to upgrade.

Thanks for the effort!

@andreiborza

@trogau glad to hear it. I'll go ahead and close this issue then; please feel free to open another one if you encounter problems after upgrading.
