Labels: enhancement (New feature or request)
Description
✨ Is your feature request related to a problem? Please describe.
Current inference processing in SceneScape does not leverage batching across streams from multiple cameras, resulting in suboptimal hardware utilization for scenarios with several cameras.
💡 Describe the solution you'd like
Implement cross-stream batching, allowing frames from multiple video streams (cameras) to be grouped and processed together for inference. This should make more efficient use of GPU resources and improve performance for users with many cameras (a rough sketch of the idea follows the list below).
- Ideally, batching should be transparent to users, requiring minimal manual configuration or stream interruptions.
- In the MVP phase, a simpler approach with some required manual configuration and stream interruptions (e.g., when adding or removing a camera) is acceptable, with a roadmap to reduce these drawbacks in the final implementation.
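
A minimal, hypothetical sketch of the idea in plain Python, not SceneScape's actual API (names such as `CrossStreamBatcher`, `submit`, and `register_stream` are illustrative): frames from all cameras feed one shared queue, a worker flushes a batch when it reaches a size limit or a timeout, runs a single inference call, and routes results back to per-stream queues.

```python
# Illustrative only: cross-stream batching with a shared input queue and
# per-stream result queues. Not SceneScape code.
import queue
import threading
import time
from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class Frame:
    stream_id: str   # which camera the frame came from
    data: Any        # decoded image (e.g., a numpy array)


class CrossStreamBatcher:
    def __init__(self, infer_fn: Callable[[List[Any]], List[Any]],
                 max_batch: int = 8, timeout_s: float = 0.01):
        self._infer_fn = infer_fn      # runs inference on a list of frames
        self._max_batch = max_batch    # upper bound on batch size
        self._timeout_s = timeout_s    # max wait before flushing a partial batch
        self._in_q: "queue.Queue[Frame]" = queue.Queue()
        self._out_qs: Dict[str, "queue.Queue[Any]"] = {}
        threading.Thread(target=self._run, daemon=True).start()

    def register_stream(self, stream_id: str) -> "queue.Queue[Any]":
        """Create and return a per-stream result queue; called once per camera."""
        self._out_qs[stream_id] = queue.Queue()
        return self._out_qs[stream_id]

    def submit(self, frame: Frame) -> None:
        """Called by each camera's capture loop; non-blocking."""
        self._in_q.put(frame)

    def _collect_batch(self) -> List[Frame]:
        """Block for the first frame, then gather more until max_batch or timeout."""
        batch = [self._in_q.get()]
        deadline = time.monotonic() + self._timeout_s
        while len(batch) < self._max_batch:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(self._in_q.get(timeout=remaining))
            except queue.Empty:
                break
        return batch

    def _run(self) -> None:
        while True:
            batch = self._collect_batch()
            # One inference call covers frames from all contributing streams.
            results = self._infer_fn([f.data for f in batch])
            # Route each result back to the queue of the stream it came from.
            for frame, result in zip(batch, results):
                self._out_qs[frame.stream_id].put(result)
```

In such a design, each camera's capture loop would call `submit()` and read detections from the queue returned by `register_stream()`; the batch size and flush timeout are the knobs that trade per-frame latency against GPU utilization.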
🔄 Describe alternatives you've considered
- Keeping per-stream (per-camera) inference: this leads to fragmented hardware utilization and lower throughput.
- Manual scripting or batch management outside the platform (not user-friendly).
📄 Additional context
- Should work for multi-camera deployments, especially at larger scale
- Should document trade-offs between MVP and final seamless batching
- This issue is a sub-issue of #683 (run inference on GPU)