Explain the new software camera synchronisation feature #4019
Conversation
Very minor comments. Looks good otherwise!
**Clients**

Clients listen out for server timing messages and, when they receive one, will shorten or lengthen a camera frame by the required amount so that subsequent frames will start, as far as possible, at the same moment as the server's.
s/camera frame/camera frame duration/
All done!
@nathan-contino are we ok to merge this update?
Also add related options, plus a few more options that seem to have been undocumented for a while.
f89d3c2 to 8c05c55
Taking a look at this today, will make some minor copy edits and merge. Looks solid so far but I want to make sure everything complies with the style guide.
$ rpicam-vid -n -t 20s --camera 0 --codec libav -o server.mp4 --sync server
----

This will run for 20 seconds but with the default settings (100 frames at 30fps) will give clients just over 3 seconds to get synchronised before anything is recorded. So the final video file will contain slightly under 17 seconds of video.
"default settings" -> "default synchronisation settings" ?
Yep.
In practical operation there are a few final points to be aware of:

* The fixed framerate needs to be below the maximum framerate at which the camera can operate (in the camera mode that is being used). This is because the synchronisation algorithm may need to _shorten_ camera frames so that clients can catch up with the server, and this will fail if it is already running as fast as it can.
* Whilst cameras frames should be correctly synchronised, at higher framerates, or depending on system load, it is possible for frames, either on the clients or server, to be dropped. In these cases the frame timestamps will help an application to work out what has happened, though it's usually easier simply to try and avoid frame drops - perhaps by lowering the framerate, increasing the number of buffers being allocated to the camera queues, or reducing system load (see the xref:camera_software.adoc#buffer-count[`--buffer-count` option].)
Should "cameras frames" be "camera's frames" or "camera frames"?
Indeed, "camera frames", I think!
"usually easier simply to try" -> "usually easier to try" ?
Yep, will re-word a bit. Maybe "usually simpler to try".
I'm just going to convert this to "draft" for a bit. I'd like to fix the timer issue with rpicam-vid synchronisation (that the timer doesn't count from when the sync happens) first, then I'll update the PR again.

All done now.
@@ -50,19 +50,19 @@ First we should start the client:
$ rpicam-vid -n -t 20s --camera 1 --codec libav -o client.mp4 --sync client
----

Note the `--sync client` parameter. This will record for 20 seconds in total but note that this _includes_ the time to start the server and achieve synchronisation. So while the start of the recordings, and all the frames, will be synchronised, the end of the recordings is not.
Note the `--sync client` parameter. This will record for 20 seconds but _only_ once the sychronisation point has been reached. If necessary, it will wait indefinitely for the first server message.
"sychronisation" typo
Yeah, I keep doing that. A search for "sych" found a couple more!
Sick! 😆
* Whilst camera frames should be correctly synchronised, at higher framerates or depending on system load, it is possible for frames, either on the clients or server, to be dropped. In these cases the frame timestamps will help an application to work out what has happened, though it's usually simpler to try and avoid frame drops - perhaps by lowering the framerate, increasing the number of buffers being allocated to the camera queues, or reducing system load (see the xref:camera_software.adoc#buffer-count[`--buffer-count` option].)
Would it make sense to move the "(see the xref:camera_software.adoc#buffer-count[--buffer-count option])" to appear just after "increasing the number of buffers being allocated to the camera queues", rather than after "reducing system load"?
Agree.
@@ -38,7 +38,7 @@ Raspberry Pi OS recognises the following overlays in `/boot/firmware/config.txt`

To use one of these overlays, you must disable automatic camera detection. To disable automatic detection, set `camera_auto_detect=0` in `/boot/firmware/config.txt`. If `config.txt` already contains a line assigning an `camera_auto_detect` value, change the value to `0`. Reboot your Raspberry Pi with `sudo reboot` to load your changes.

If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or CM4, for example), then you can specify which one you are referring to by adding `,cam0` or `,cam1` (don't add any spaces) to the `dtoverlay` that you used from the table above. If you do not add either of these, it will default to checking camera connector 1 (`cam1`). But note that for official Raspberry PI camera modules, auto-detection will correctly identify all the cameras connected to your device.
If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or one of the compute modules, for example), then you can specify which one you are referring to by adding `,cam0` or `,cam1` (don't add any spaces) to the `dtoverlay` that you used from the table above. If you do not add either of these, it will default to checking camera connector 1 (`cam1`). But note that for official Raspberry Pi camera modules, auto-detection will correctly identify all the cameras connected to your device.
Minor nitpick, but I think we tend to capitalise them as "Compute Modules" rather than "compute modules"?
Done!
Also some other minor corrections.
67c19d7 to 9088335
@nathan-contino I think this is ready to be merged now.
LGTM with some small fixes!
[WARNING]
====
This guide no longer covers the _legacy camera stack_ which was available in Bullseye and earlier Raspberry Pi OS releases. The legacy camera stack, using applications like `raspivid`, `raspistill` and the original `Picamera` (_not_ `Picamera2`) Python library, has been deprecated for many years, and is now unsupported. If you are using the legacy camera stack, it will only have support for the Camera Module 1, Camera Module 2 and the High Quality Camera, and will never support any newer camera modules. Nothing in this document is applicable to the legacy camera stack.
This guide no longer covers the _legacy camera stack_ which was available in Bullseye and earlier Raspberry Pi OS releases. The legacy camera stack, using applications like `raspivid`, `raspistill` and the original `Picamera` (_not_ `Picamera2`) Python library, has been deprecated for many years, and is now unsupported. If you are using the legacy camera stack, it will only have support for the Camera Module 1, Camera Module 2 and the High Quality Camera, and will never support any newer camera modules. Nothing in this document is applicable to the legacy camera stack.
This guide covers the current version of the camera stack provided in Raspberry Pi OS Bookworm and later. This guide does not cover the legacy camera stack, which used applications like `raspivid`, `raspistill`, and `Picamera` (which has been replaced by `Picamera2`).
@@ -12,3 +12,8 @@ Raspberry Pi produces several official camera modules, including:
For more information about camera hardware, see the xref:../accessories/camera.adoc#about-the-camera-modules[camera hardware documentation].

First, xref:../accessories/camera.adoc#install-a-raspberry-pi-camera[install your camera module]. Then, follow the guides in this section to put your camera module to use.

[WARNING]
[WARNING]
[IMPORTANT]
@@ -89,9 +89,19 @@ Alias: `-t`
Default value: 5000 milliseconds (5 seconds)
Specify how long the application runs before closing. This applies to both video recording and preview windows. When capturing a still image, the application shows a preview window for `timeout` milliseconds before capturing the output image.
Specify how long the application runs before closing. This value is interpreted as a number of milliseconds unless an optional suffix is used to change the unit. The suffix may be one of:
Specify how long the application runs before closing. This value is interpreted as a number of milliseconds unless an optional suffix is used to change the unit. The suffix may be one of:
Specify how long the application runs before closing. This value is interpreted as a number of milliseconds unless an optional suffix is used to change the unit. The suffix may be one of the following:
@@ -132,3 +132,10 @@ Records exactly the specified number of frames. Any non-zero value overrides xre

Records exactly the specified framerate. Accepts a nonzero integer.

==== `low-latency`

On a Pi 5, the `--low-latency` option will reduce the encoding latency, which may be beneficial for real-time streaming applications, in return for (slightly) less good coding efficiency (for example, B frames and arithmethic coding will no longer be used).
On a Pi 5, the `--low-latency` option will reduce the encoding latency, which may be beneficial for real-time streaming applications, in return for (slightly) less good coding efficiency (for example, B frames and arithmethic coding will no longer be used).
On Raspberry Pi 5, use the `--low-latency` option to reduce the encoding latency at the expense of encoding efficiency (removes B frames and arithmetic coding). This can be beneficial for real-time streaming applications.
The reason for specifying "baseline" profile on a Pi 5 is that MediaMTX doesn't support B frames, so we need to stop the encoder from producing them. On earlier devices, with hardware encoders, B frames are never generated so there is no issue. On a Pi 5 you could alternatively remove this option and replace it with `--low-latency` which will also prevent B frames, and produce a (slightly less well compressed) stream with reduced latency.
The reason for specifying "baseline" profile on a Pi 5 is that MediaMTX doesn't support B frames, so we need to stop the encoder from producing them. On earlier devices, with hardware encoders, B frames are never generated so there is no issue. On a Pi 5 you could alternatively remove this option and replace it with `--low-latency` which will also prevent B frames, and produce a (slightly less well compressed) stream with reduced latency.
On Raspberry Pi 5, always specify the "baseline" profile. This stops the encoder from producing B frames, which MediaMTX doesn't support. Earlier devices with hardware encoders never generate B frames. Alternatively, pass the `--low-latency` flag to disable B frames and limit compression.
==== Low latency video with the Pi 5

Pi 5 uses software video encoders. These generally output frames with a longer latency than the old hardware encoders, and this can sometimes be an issue for real-time streaming applications.

In this case, please add the option `--low-latency` to the `rpicam-vid` command. This will alter certain encoder options to output the encoded frame more quickly.

The downside is that coding efficiency is (slightly) less good, and that the processor's multiple cores may be used (slightly) less efficiently. The maximum framerate that can be encoded may be slightly reduced (though it will still easily achieve 1080p30).
==== Low latency video with the Pi 5
Pi 5 uses software video encoders. These generally output frames with a longer latency than the old hardware encoders, and this can sometimes be an issue for real-time streaming applications.
In this case, please add the option `--low-latency` to the `rpicam-vid` command. This will alter certain encoder options to output the encoded frame more quickly.
The downside is that coding efficiency is (slightly) less good, and that the processor's multiple cores may be used (slightly) less efficiently. The maximum framerate that can be encoded may be slightly reduced (though it will still easily achieve 1080p30).
==== Low latency video with Raspberry Pi 5
Raspberry Pi 5 uses software video encoders instead of hardware encoders. As a result, frames typically output with higher latency than the hardware encoders on Raspberry Pi 4 and earlier. This can sometimes cause issues for real-time streaming applications.
To reduce the encoder latency, pass the `--low-latency` flag to the `rpicam-vid` command. This slightly reduces encoding efficiency and removes B frames to limit latency. The maximum framerate that can be encoded may be slightly reduced, but you should expect to output at least 1080p resolution at 30 FPS.
On Raspberry Pi 5, you can output to the MP4 container format directly by specifying the `mp4` file extension for your output file:

[source,console]
----
$ rpicam-vid -t 10s -o test.mp4
----

On Raspberry Pi 4, or earlier devices, you can save MP4 files using:
On Raspberry Pi 4, or earlier devices, you can save MP4 files using:
On Raspberry Pi 4 or earlier, run the following command to save MP4 files:
[WARNING]
====
Older versions of vlc were able to play H.264 files correctly, but recent versions do not - displaying only a few, or possibly garbled, frames. You should either use a different media player, or save your files in a more widely supported container format - such as MP4 (see below).
====
[WARNING]
====
Older versions of vlc were able to play H.264 files correctly, but recent versions do not - displaying only a few, or possibly garbled, frames. You should either use a different media player, or save your files in a more widely supported container format - such as MP4 (see below).
====
[IMPORTANT]
====
Older versions of VLC could play H.264 files correctly, but recent versions do not. Instead, they display only a few, possibly garbled, frames. You should either use a different media player, or save your files in a more widely supported container format such as MP4 (see below).
====
@@ -38,6 +38,8 @@ Raspberry Pi OS recognises the following overlays in `/boot/firmware/config.txt`

To use one of these overlays, you must disable automatic camera detection. To disable automatic detection, set `camera_auto_detect=0` in `/boot/firmware/config.txt`. If `config.txt` already contains a line assigning an `camera_auto_detect` value, change the value to `0`. Reboot your Raspberry Pi with `sudo reboot` to load your changes.

If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or one of the Compute Modules, for example), then you can specify which one you are referring to by adding `,cam0` or `,cam1` (don't add any spaces) to the `dtoverlay` that you used from the table above. If you do not add either of these, it will default to checking camera connector 1 (`cam1`). But note that for official Raspberry Pi camera modules, auto-detection will correctly identify all the cameras connected to your device.
If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or one of the Compute Modules, for example), then you can specify which one you are referring to by adding `,cam0` or `,cam1` (don't add any spaces) to the `dtoverlay` that you used from the table above. If you do not add either of these, it will default to checking camera connector 1 (`cam1`). But note that for official Raspberry Pi camera modules, auto-detection will correctly identify all the cameras connected to your device.
If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or a Compute Module connected to an IO Board, for example), specify which one you are referring to by adding `,cam0` or `,cam1` (no spaces) to the `dtoverlay` that you used from the table above. If you do not add either of these, your device will default to camera connector 1 (`cam1`). Official Raspberry Pi camera modules support auto-detection, making this step unnecessary for official camera modules unless you connect multiple cameras simultaneously.
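As a concrete illustration of the passage under review (this fragment is a sketch, not part of the PR; `imx477` is just an example overlay, for the High Quality Camera - substitute the overlay for your own sensor), the relevant `/boot/firmware/config.txt` lines might look like:

```ini
# /boot/firmware/config.txt (illustrative sketch)
# Disable auto-detection so the manual overlay takes effect
camera_auto_detect=0
# Example overlay (imx477 = High Quality Camera) on camera connector 0
dtoverlay=imx477,cam0
```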
==== Software Camera Synchronisation

Raspberry Pi's _libcamera_ implementation has the ability to synchronise the frames of different cameras using only software. This will cause one camera to adjust its frame timing so as to coincide as closely as possible with the frames of another camera. No soldering or hardware connections are required, and it will work with all of Raspberry Pi's camera modules, and even third party ones so long as their drivers implement frame duration control correctly.

**How it works**

The scheme works by designating one camera to be the _server_. The server will broadcast timing messages onto the network at regular intervals, such as once a second. Meanwhile other cameras, known as _clients_, can listen to these messages whereupon they may lengthen or shorten frame times slightly so as to pull them into sync with the server. This process is continual, though after the first adjustment, subsequent adjustments are normally small.

The client cameras may be attached to the same Raspberry Pi device as the server, or they may be attached to different Raspberry Pis on the same network. The camera model on the clients may match the server, or they may be different.

Clients and servers need to be set running at the same nominal framerate (such as 30fps). Note that there is no back-channel from the clients back to the server. It is solely the clients' responsibility to be up and running in time to match the server, and the server is completely unaware whether clients have synchronised successfully, or indeed whether there are any clients at all.

In normal operation, running the same make of camera on the same Raspberry Pi, we would expect the frame start times of the camera images to match within "several tens of microseconds". When the camera models are different this could be significantly larger as the cameras will probably not be able to match framerates exactly and will therefore be continually drifting apart (and brought back together with every timing message).

When cameras are on different devices, the system clocks should be synchronised using NTP (normally the case by default for Raspberry Pi OS), or if this is insufficiently precise, another protocol like PTP might be used. Any discrepancy between system clocks will feed directly into extra error in frame start times - even though the advertised timestamps on the frames will not tell you.

**The Server**

The server, as previously explained, broadcasts timing messages onto the network, by default every second. The server will run for a fixed number of frames, by default 100, after which it will inform the camera application on the device that the "synchronisation point" has been reached. At this moment, the application will start using the frames, so in the case of `rpicam-vid`, they will start being encoded and recorded. Recall that the behaviour and even existence of clients has no bearing on this.

If required, there can be several servers on the same network so long as they are broadcasting timing messages to different network addresses. Clients, of course, will have to be configured to listen for the correct address.

**Clients**

Clients listen out for server timing messages and, when they receive one, will shorten or lengthen a camera frame duration by the required amount so that subsequent frames will start, as far as possible, at the same moment as the server's.

The clients learn the correct "synchronisation point" from the server's messages, and just like the server, will signal the camera application at the same moment that it should start using the frames. So in the case of `rpicam-vid`, this is once again the moment at which frames will start being recorded.

Normally it makes sense to start clients _before_ the server, as the clients will simply wait (the "synchronisation point" has not been reached) until a server is seen broadcasting onto the network. This obviously avoids timing problems where a server might reach its "synchronisation point" even before all the clients have been started!

**Usage in `rpicam-vid`**

We can use software camera synchronisation with `rpicam-vid` to record videos that are synchronised frame-by-frame. We're going to assume we have two cameras attached, and we're going to use camera 0 as the server, and camera 1 as the client. `rpicam-vid` defaults to a fixed 30 frames per second, which will be fine for us.

First we should start the client:
[source,console]
----
$ rpicam-vid -n -t 20s --camera 1 --codec libav -o client.mp4 --sync client
----

Note the `--sync client` parameter. This will record for 20 seconds but _only_ once the synchronisation point has been reached. If necessary, it will wait indefinitely for the first server message.

To start the server:
[source,console]
----
$ rpicam-vid -n -t 20s --camera 0 --codec libav -o server.mp4 --sync server
----

This too will run for 20 seconds counting from when the synchronisation point is reached and the recording starts. With the default synchronisation settings (100 frames at 30fps) this means there will be just over 3 seconds for clients to get synchronised.

The server's broadcast address and port, the frequency of the timing messages and the number of frames to wait for clients to synchronise, can all be changed in the camera tuning file. Clients only pay attention to the broadcast address here which should match the server's; the other information will be ignored. Please refer to the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Raspberry Pi Camera tuning guide] for more information.

In practical operation there are a few final points to be aware of:

* The fixed framerate needs to be below the maximum framerate at which the camera can operate (in the camera mode that is being used). This is because the synchronisation algorithm may need to _shorten_ camera frames so that clients can catch up with the server, and this will fail if it is already running as fast as it can.
* Whilst camera frames should be correctly synchronised, at higher framerates or depending on system load, it is possible for frames, either on the clients or server, to be dropped. In these cases the frame timestamps will help an application to work out what has happened, though it's usually simpler to try and avoid frame drops - perhaps by lowering the framerate, increasing the number of buffers being allocated to the camera queues (see the xref:camera_software.adoc#buffer-count[`--buffer-count` option]), or reducing system load.
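Since the section leans on frame timestamps for diagnosing sync quality, here is a small sketch (not part of the PR) of how the per-frame offset between two recordings could be inspected. It assumes timestamp files as written by `rpicam-vid`'s `--save-pts` option (one timestamp in milliseconds per line, with `#` header lines); the sample data below is fabricated purely for illustration:

```shell
#!/bin/sh
# Sketch: compare per-frame start times from two timestamp files.
# In real use these could be written by rpicam-vid's --save-pts option,
# e.g. server: rpicam-vid ... --sync server --save-pts server.pts
#      client: rpicam-vid ... --sync client --save-pts client.pts
# The sample files below are fabricated purely for illustration.

cat > server.pts <<'EOF'
# timecode format v2
0.000
33.333
66.667
EOF

cat > client.pts <<'EOF'
# timecode format v2
0.012
33.341
66.680
EOF

# Pair the files line by line, drop header lines, and print the
# client-minus-server offset for each frame in milliseconds.
paste server.pts client.pts | grep -v '^#' |
  awk '{ printf "frame %d: offset %+.3f ms\n", NR - 1, $2 - $1 }'
```

With well-synchronised cameras of the same model, the offsets printed this way should stay within the "several tens of microseconds" mentioned above.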
This too will run for 20 seconds counting from when the synchronisation point is reached and the recording starts. With the default synchronisation settings (100 frames at 30fps) this means there will be just over 3 seconds for clients to get synchronised. | |
The server's broadcast address and port, the frequency of the timing messages and the number of frames to wait for clients to synchronise, can all be changed in the camera tuning file. Clients only pay attention to the broadcast address here which should match the server's; the other information will be ignored. Please refer to the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Raspberry Pi Camera tuning guide] for more information. | |
In practical operation there are a few final points to be aware of: | |
* The fixed framerate needs to be below the maximum framerate at which the camera can operate (in the camera mode that is being used). This is because the synchronisation algorithm may need to _shorten_ camera frames so that clients can catch up with the server, and this will fail if it is already running as fast as it can. | |
* Whilst camera frames should be correctly synchronised, at higher framerates or depending on system load, it is possible for frames, either on the clients or server, to be dropped. In these cases the frame timestamps will help an application to work out what has happened, though it's usually simpler to try and avoid frame drops - perhaps by lowering the framerate, increasing the number of buffers being allocated to the camera queues (see the xref:camera_software.adoc#buffer-count[`--buffer-count` option]), or reducing system load. | |
==== Software Camera Synchronisation | |
Raspberry Pi's `libcamera` implementation can synchronise the frames of different cameras using only software. Using this synchronisation, one camera adjusts its frame timing to coincide as closely as possible with the frames of another camera. This technique requires no soldering or additional hardware connections, and works with all official Raspberry Pi camera modules as well as third party modules, as long as their drivers correctly implement frame duration control. | |
The scheme works by designating one camera to be the **server**. The server broadcasts timing messages onto the network at regular intervals (e.g. once a second). Other cameras, known as **clients**, listen to these messages. Clients lengthen or shorten frame times slightly to gradually synchronise with the server. | |
Clients may be attached to the same Raspberry Pi device as the server, or they may be attached to separate Raspberry Pis on the same network. Clients can use different camera module hardware than the server. | |
Clients and servers must run at the same nominal framerate (e.g. 30 FPS). Clients do not communicate back to the server; the server is completely unaware whether clients have synchronised successfully, or indeed whether there are any clients at all. | |
In normal operation, running the same make of camera on the same Raspberry Pi, we would expect the frame start times of the camera images to match within several tens of microseconds. When the camera models differ, this could be significantly larger, since the cameras may not be able to match framerates exactly and will therefore continually drift apart and re-synchronise. | |
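To get a feel for the scale of this drift, consider a client whose sensor clock runs slightly fast relative to the server's. The figures below are assumed purely for illustration, not measured values:

```python
# Illustrative arithmetic only: how quickly two free-running cameras
# drift apart when their clocks differ by a small fractional error.
# The 50 ppm error and the 1 s message interval are assumed examples.

def drift_us(clock_error_ppm: float, interval_s: float) -> float:
    """Accumulated frame-start drift, in microseconds, over one
    timing-message interval (1 ppm over 1 s equals 1 us)."""
    return clock_error_ppm * interval_s

# A 50 ppm mismatch accumulates 50 us of drift per second, which the
# next timing message then corrects.
```

This is why more frequent timing messages bound the worst-case drift more tightly when the cameras cannot match framerates exactly.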
When using cameras connected to separate Raspberry Pis on the same network, synchronise the system clocks using NTP. Raspberry Pi OS uses NTP to set the system time by default. If NTP is insufficiently precise, you could use another protocol, such as PTP. Any discrepancy between system clocks feeds directly into extra frame start time error, even though the timestamps advertised on the frames will not reveal it.
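The correction a client applies can be pictured with a little arithmetic. The sketch below is purely illustrative: the function names and the wrap-around logic are assumptions for explanation, not the actual `libcamera` implementation.

```python
# Simplified sketch of the kind of phase correction a client could
# apply when it receives a server timing message. Names and logic
# here are illustrative assumptions, not the real libcamera code.

NOMINAL_DURATION_US = 33333  # frame duration at ~30 fps

def phase_error(server_ts_us, client_ts_us, nominal_us=NOMINAL_DURATION_US):
    """Offset of a client frame start from the nearest server frame
    start, wrapped to within about half a frame either way."""
    err = (client_ts_us - server_ts_us) % nominal_us
    if err > nominal_us // 2:
        err -= nominal_us
    return err

def corrected_duration(server_ts_us, client_ts_us, nominal_us=NOMINAL_DURATION_US):
    """Duration for the client's next frame: shorten or lengthen it
    so that the frame after it starts in step with the server."""
    return nominal_us - phase_error(server_ts_us, client_ts_us, nominal_us)
```

For example, a client frame starting 1000 microseconds late would have its next frame shortened by 1000 microseconds, after which subsequent frames line up with the server's.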
===== The Server | |
By default, the server broadcasts one timing message every second. The server runs for a fixed number of frames, by default 100, after which it informs the camera application that the **synchronisation point** has been reached. Once the server reaches the synchronisation point, the application starts consuming frames (e.g. `rpicam-vid` will start encoding and recording). | |
If required, you can run multiple servers on the same network as long as they broadcast timing messages to different network addresses. When running multiple servers, you must configure each client to listen to the correct address. | |
===== Clients | |
Clients listen for server timing messages. When a client receives a message, the client shortens or lengthens a camera frame duration by the required amount so that its subsequent frames start, as closely as possible, at the same moment as the server's.
The clients learn the correct synchronisation point from the server's messages. Just like the server, clients signal the camera application at the same moment that it can consume frames. | |
For the best results, start clients before the server. Clients will wait until a server broadcasts onto the network. This avoids timing problems where a server might reach its synchronisation point before the clients have even started. | |
===== Usage in `rpicam-vid` | |
We can use software camera synchronisation with `rpicam-vid` to record videos that are synchronised frame-by-frame. Consider the following: | |
* we have two cameras attached | |
* camera 0 is the server | |
* camera 1 is the client | |
* `rpicam-vid` defaults to a fixed 30 frames per second | |
First, run the following command to start the client: | |
[source,console] | |
---- | |
$ rpicam-vid -n -t 20s --camera 1 --codec libav -o client.mp4 --sync client | |
---- | |
Note the `--sync client` parameter. This records for 20 seconds _once the synchronisation point has been reached_. If necessary, this client will wait indefinitely for the first server message. | |
To start the server, run the following command:
[source,console] | |
---- | |
$ rpicam-vid -n -t 20s --camera 0 --codec libav -o server.mp4 --sync server | |
---- | |
This too will run for 20 seconds _once the synchronisation point has been reached_. The default synchronisation settings (100 frames at 30 FPS) provide just over 3 seconds for clients to synchronise. | |
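The "just over 3 seconds" figure follows directly from the defaults:

```python
# Worked example: time available for clients to synchronise before
# the server reaches its synchronisation point, using the defaults.
frames_before_sync = 100   # default number of frames
framerate_fps = 30         # rpicam-vid default framerate
wait_s = frames_before_sync / framerate_fps  # just over 3 seconds
```

Raising the frame count, or lowering the framerate, lengthens the window in which clients can synchronise.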
You can customise the following settings in the camera tuning file: | |
* server broadcast address | |
* server broadcast port | |
* frequency of the timing messages | |
* the number of frames to wait for clients to synchronise | |
Clients only pay attention to the broadcast address specified in the tuning file, which should match the server's; the other settings are ignored on clients. For more information about tuning files, see the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Raspberry Pi Camera tuning guide].
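As a purely hypothetical illustration of the kind of fragment involved, a sync section in a tuning file might look something like the snippet below. The key names here are placeholders invented for this example; the real schema is defined in the tuning guide linked above.

```json
{
    "sync": {
        "group_address": "239.255.255.250",
        "group_port": 10000,
        "message_interval_frames": 30,
        "frames_before_sync": 100
    }
}
```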
When configuring your tuning file, keep the following tips in mind: | |
* The fixed framerate must not exceed the maximum framerate at which the camera can operate in the camera mode used. The synchronisation algorithm may need to _shorten_ camera frames so that clients can catch up with the server, and this will fail if it is already running as fast as it can. | |
* Whilst camera frames should be correctly synchronised, at higher framerates or depending on system load, the clients or server could drop frames. In these cases the frame timestamps will help an application to work out what has happened, though it's usually simpler to try and avoid frame drops - perhaps by lowering the framerate, increasing the number of buffers being allocated to the camera queues (see the xref:camera_software.adoc#buffer-count[`--buffer-count` option]), or reducing system load. |
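For instance, a client that drops frames at 30 FPS could be retried with a lower fixed framerate and a larger buffer allocation. The values below are arbitrary starting points for experimentation, not recommendations:

```console
$ rpicam-vid -n -t 20s --camera 1 --codec libav -o client.mp4 \
    --sync client --framerate 25 --buffer-count 12
```

Remember that the server must be run at the same fixed framerate as its clients.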
I've also added some other rather overdue updates, notably to the streaming section.
Also a warning banner at the top that this has nothing to do with the legacy stack. I still get folks complaining about this, all these years later...
@naushir You might want to read through the camera sync stuff too!