39 | 39 |
40 | 40 | AlphaRTC is a fork of Google's WebRTC project that uses ML-based bandwidth estimation, delivered by the OpenNetLab team. By equipping WebRTC with a more accurate bandwidth estimator, our mission is to improve the quality of transmission. |
41 | 41 |
42 | | -AlphaRTC replaces Google Congestion Control (GCC) with ONNXInfer, an ML-powered bandwidth estimator, which takes in an ONNX model to make bandwidth estimation more accurate. ONNXInfer is proudly powered by Microsoft's [ONNXRuntime](https://github.com/microsoft/onnxruntime). |
| 42 | +AlphaRTC replaces Google Congestion Control (GCC) with two customized congestion control interfaces, PyInfer and ONNXInfer. PyInfer loads an external bandwidth estimator written in Python; this estimator can be built on an ML framework such as PyTorch or TensorFlow, or be a pure Python algorithm with no dependencies. ONNXInfer is an ML-powered bandwidth estimator that takes in an ONNX model to make bandwidth estimation more accurate. ONNXInfer is proudly powered by Microsoft's [ONNXRuntime](https://github.com/microsoft/onnxruntime). |
43 | 43 |
44 | 44 | ## Environment |
45 | 45 |
| 46 | +**We recommend that you directly fetch the pre-provided Docker image from the [GitHub release](https://github.com/OpenNetLab/AlphaRTC/releases/latest/download/alphartc.tar.gz).** |
| 47 | + |
46 | 48 | Ubuntu 18.04 is the only officially supported distro at this moment. For other distros, you may be able to compile your own binary or use our pre-provided Docker images. |
47 | 49 |
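For example, the released image tarball can be fetched and loaded into Docker as follows (a minimal sketch: it assumes `wget` and Docker are available, and that the tarball contains an image tagged `alphartc`, matching the run commands later in this document):

```shell
# Download the pre-built image from the latest GitHub release.
wget https://github.com/OpenNetLab/AlphaRTC/releases/latest/download/alphartc.tar.gz

# Load it into the local Docker daemon; docker load accepts gzipped tarballs.
sudo docker load -i alphartc.tar.gz
```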
48 | 50 | ## Compilation |
@@ -107,7 +109,7 @@ Note: all commands below work for both Linux (sh) and Windows (pwsh), unless oth |
107 | 109 | gn gen out/Default |
108 | 110 | ``` |
109 | 111 |
110 | | -5. Comile |
| 112 | +5. Compile |
111 | 113 | ```shell |
112 | 114 | ninja -C out/Default peerconnection_serverless |
113 | 115 | ``` |
@@ -151,9 +153,6 @@ This section describes required fields for the json configuration file. |
151 | 153 |
152 | 154 | - **bwe_feedback_duration**: The interval at which the receiver sends its estimated target rate (*in milliseconds*) |
153 | 155 |
154 | | -- **onnx** |
155 | | - - **onnx_model_path**: The path of the [onnx](https://www.onnxruntime.ai/) model |
156 | | - |
157 | 156 | - **video_source** |
158 | 157 | - **video_disabled**: |
159 | 158 | - **enabled**: If set to `true`, the client will not take any video source as input |
@@ -188,11 +187,57 @@ This section describes required fields for the json configuration file. |
188 | 187 | - **fps**: Frames per second of the output video file |
189 | 188 | - **file_path**: The file path of the output video file in YUV format |
190 | 189 |
| 190 | +#### Use PyInfer or ONNXInfer |
| 191 | + |
| 192 | +##### PyInfer |
| 193 | + |
| 194 | +The default bandwidth estimator is PyInfer. You should implement a Python class named `Estimator` with the required methods `report_states` and `get_estimated_bandwidth` in a file named `BandwidthEstimator.py`, and put this file in your workspace. |
| 195 | +The example below always returns a fixed estimated bandwidth of 1 Mbps; the full file is available at [BandwidthEstimator.py](examples/peerconnection/serverless/corpus/BandwidthEstimator.py). |
| 196 | + |
| 197 | +```python |
| 198 | +class Estimator(object): |
| 199 | +    def report_states(self, stats: dict): |
| 200 | +        ''' |
| 201 | +        stats is a dict with the following items |
| 202 | +        { |
| 203 | +            "send_time_ms": uint, |
| 204 | +            "arrival_time_ms": uint, |
| 205 | +            "payload_type": int, |
| 206 | +            "sequence_number": uint, |
| 207 | +            "ssrc": int, |
| 208 | +            "padding_length": uint, |
| 209 | +            "header_length": uint, |
| 210 | +            "payload_size": uint |
| 211 | +        } |
| 212 | +        ''' |
| 213 | +        pass |
| 214 | +
| 215 | +    def get_estimated_bandwidth(self) -> int: |
| 216 | +        return int(1e6)  # 1 Mbps |
| 217 | +
| 218 | +``` |
| 219 | + |
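As a slightly more realistic, though still purely illustrative, sketch (this is not the project's shipped estimator), an `Estimator` can accumulate the packet sizes reported through `report_states` over a sliding window and report the observed receive rate; the 500 ms window length here is an arbitrary assumption:

```python
import collections


class Estimator(object):
    def __init__(self, window_ms: int = 500):  # window length is an illustrative choice
        self.window_ms = window_ms
        self.packets = collections.deque()  # (arrival_time_ms, size_bytes) per packet

    def report_states(self, stats: dict):
        # Record the arrival time and total on-wire size of each packet.
        size = stats["header_length"] + stats["payload_size"] + stats["padding_length"]
        self.packets.append((stats["arrival_time_ms"], size))

    def get_estimated_bandwidth(self) -> int:
        if not self.packets:
            return int(1e6)  # fall back to 1 Mbps before any packets are reported
        # Keep only the packets that fall inside the sliding window.
        now_ms = self.packets[-1][0]
        while self.packets and now_ms - self.packets[0][0] > self.window_ms:
            self.packets.popleft()
        # Convert bytes received over the window into bits per second.
        received_bytes = sum(size for _, size in self.packets)
        return int(received_bytes * 8 * 1000 / self.window_ms)
```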
| 220 | +##### ONNXInfer |
| 221 | + |
| 222 | +If you want to use ONNXInfer as the bandwidth estimator, you should specify the path of the ONNX model in the config file. Here is an example configuration: [receiver.json](examples/peerconnection/serverless/corpus/receiver.json) |
| 223 | + |
| 224 | +- **onnx** |
| 225 | +  - **onnx_model_path**: The path of the [ONNX](https://www.onnxruntime.ai/) model |
| 226 | + |
| 227 | + |
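For instance, the onnx fragment of a receiver configuration might look like the sketch below (the model path is a placeholder, and all other required fields described above are omitted):

```json
{
    "onnx": {
        "onnx_model_path": "/app/model.onnx"
    }
}
```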
191 | 228 | #### Run peerconnection_serverless |
192 | 229 | - Dockerized environment |
193 | 230 |
194 | 231 | To better demonstrate the usage of peerconnection_serverless, we provide an all-inclusive corpus in `examples/peerconnection/serverless/corpus`. You can use the following commands to execute a tiny example. After these commands terminate, you will get `outvideo.yuv` and `outaudio.wav`. |
195 | | - |
| 232 | + |
| 233 | + |
| 234 | + PyInfer: |
| 235 | + ```shell |
| 236 | + sudo docker run -d --rm -v `pwd`/examples/peerconnection/serverless/corpus:/app -w /app --name alphartc alphartc peerconnection_serverless receiver_pyinfer.json |
| 237 | + sudo docker exec alphartc peerconnection_serverless sender_pyinfer.json |
| 238 | + ``` |
| 239 | + |
| 240 | + ONNXInfer: |
196 | 241 | ``` shell |
197 | 242 | sudo docker run -d --rm -v `pwd`/examples/peerconnection/serverless/corpus:/app -w /app --name alphartc alphartc peerconnection_serverless receiver.json |
198 | 243 | sudo docker exec alphartc peerconnection_serverless sender.json |