Add Min Initial Latency Selector #3402
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
## master #3402 +/- ##
===================================================
+ Coverage 32.11405% 32.15283% +0.03878%
===================================================
Files 147 147
Lines 40789 40830 +41
===================================================
+ Hits 13099 13128 +29
- Misses 26916 26927 +11
- Partials 774 775 +1
... and 1 file with indirect coverage changes.

Continue to review the full report in Codecov by Sentry.
@rickstaa I think this is better for batch AI jobs as well.

@leszko I was exploring something very similar here: baf178e. A couple of questions:
IMO it will work better for you as well. You may just need to adapt it to use the LatencyScore 👇
Short answer is that for Live AI Video we don't have a LatencyScore. We don't measure the time between when a segment is sent and when the processed segment is received. The reason is that this isn't trivial to do: Live AI Video operates on a stream, not on individual segments. So, I'd keep it that way. In any case, this is just sorting; the actual selection should be done in the selection algorithm here.
Good spot, I updated it to also sort after executing.
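For context, a minimal sketch of what sorting sessions by LatencyScore could look like; the session type and field names below are illustrative stand-ins, not the actual go-livepeer types:

```go
package main

import (
	"fmt"
	"sort"
)

// session is a stand-in for the orchestrator session type; the real
// struct in go-livepeer carries much more state.
type session struct {
	OrchestratorURI string
	LatencyScore    float64 // lower is better; 0 means "not measured yet"
}

// sortByLatencyScore orders sessions so the lowest measured LatencyScore
// comes first; sessions without a score (0) are pushed to the back.
func sortByLatencyScore(sessions []session) {
	sort.Slice(sessions, func(i, j int) bool {
		si, sj := sessions[i].LatencyScore, sessions[j].LatencyScore
		if si == 0 {
			return false
		}
		if sj == 0 {
			return true
		}
		return si < sj
	})
}

func main() {
	sessions := []session{
		{OrchestratorURI: "https://o1.example.com", LatencyScore: 0.8},
		{OrchestratorURI: "https://o2.example.com", LatencyScore: 0}, // not measured yet
		{OrchestratorURI: "https://o3.example.com", LatencyScore: 0.3},
	}
	sortByLatencyScore(sessions)
	fmt.Println(sessions) // o3 (0.3), o1 (0.8), o2 (unmeasured)
}
```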
I think we could add
Currently, the selection logic (both for Transcoding and AI) caches knownSessions and prefers them over sessions it has not used yet.

For Live Video, this can be suboptimal, because the cached knownSession is not always the best Orchestrator to use.

This PR introduces a new Selector which is much simpler than the currently used MinLSSelector. The new Selector doesn't cache anything and does not favor known sessions. It always selects the O with the lowest InitialLatency.

fix https://linear.app/livepeer/issue/ENG-2454/startup-time-suboptimal-g-o-selection
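For illustration, a minimal sketch of the selection behavior described above; the type and method names are assumptions made for this example, not the actual go-livepeer implementation:

```go
package main

import (
	"fmt"
	"time"
)

// orchestratorStats is a simplified stand-in for the per-orchestrator data
// the real selector would receive.
type orchestratorStats struct {
	URI            string
	InitialLatency time.Duration // latency observed when the session was set up
}

// minInitialLatencySelector keeps no cache and does not distinguish
// known from unknown sessions: on every call it simply returns the
// orchestrator with the lowest InitialLatency.
type minInitialLatencySelector struct {
	orchestrators []orchestratorStats
}

func (s *minInitialLatencySelector) Select() (orchestratorStats, bool) {
	if len(s.orchestrators) == 0 {
		return orchestratorStats{}, false
	}
	best := s.orchestrators[0]
	for _, o := range s.orchestrators[1:] {
		if o.InitialLatency < best.InitialLatency {
			best = o
		}
	}
	return best, true
}

func main() {
	sel := &minInitialLatencySelector{orchestrators: []orchestratorStats{
		{URI: "https://o1.example.com", InitialLatency: 120 * time.Millisecond},
		{URI: "https://o2.example.com", InitialLatency: 45 * time.Millisecond},
		{URI: "https://o3.example.com", InitialLatency: 300 * time.Millisecond},
	}}
	if best, ok := sel.Select(); ok {
		fmt.Println("selected:", best.URI) // https://o2.example.com
	}
}
```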