Replies: 2 comments
-
I also realise an option like this will require the runner software to be installed on boot rather than pre-installed.
-
Resolved under: #1859
-
I realise the implementation of something like this may not be trivial, but I wanted to open a channel of discussion for it.
Context:
As part of our CI, we need GPU-powered HPC (P3) instances. That is one of our current use cases for self-hosted runners. The issue right now is that the instances are decently beefy, and it gets very expensive to scale them up, especially during peak hours (we also hit our account quota if we scale too high). The queue will eventually work its way through, but I wanted a way to get larger throughput without deploying more runners.
I am also not sure how this would work with ephemeral runners (so maybe it can be mutually exclusive to that?), but an option like
jobs_per_runner = N
would be very helpful. The EC2 instance would only get torn down once all runner instances inside it are idle. It would be similar to how Jenkins has multiple executors per node.
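The teardown condition described above can be sketched as follows. This is only an illustration of the proposed behaviour, not part of any existing runner implementation; `can_terminate` and the slot-state strings are hypothetical names:

```python
def can_terminate(slot_states):
    """Hypothetical check: with jobs_per_runner = N, one EC2 instance hosts
    N runner slots, and the instance may only be terminated once every slot
    is idle (similar to Jenkins executors on a node)."""
    return all(state == "idle" for state in slot_states)

# A single busy slot keeps the whole instance alive.
print(can_terminate(["idle", "busy", "idle"]))  # False
print(can_terminate(["idle", "idle", "idle"]))  # True
```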