Explain the new software camera synchronisation feature #4019

Merged · 5 commits · Mar 6, 2025
5 changes: 5 additions & 0 deletions documentation/asciidoc/computers/camera/camera_usage.adoc
@@ -12,3 +12,8 @@ Raspberry Pi produces several official camera modules, including:
For more information about camera hardware, see the xref:../accessories/camera.adoc#about-the-camera-modules[camera hardware documentation].

First, xref:../accessories/camera.adoc#install-a-raspberry-pi-camera[install your camera module]. Then, follow the guides in this section to put your camera module to use.

[WARNING]
Suggested change
[IMPORTANT]

====
This guide no longer covers the _legacy camera stack_ which was available in Bullseye and earlier Raspberry Pi OS releases. The legacy camera stack, using applications like `raspivid`, `raspistill` and the original `Picamera` (_not_ `Picamera2`) Python library, has been deprecated for many years, and is now unsupported. If you are using the legacy camera stack, it will only have support for the Camera Module 1, Camera Module 2 and the High Quality Camera, and will never support any newer camera modules. Nothing in this document is applicable to the legacy camera stack.
Suggested change
This guide covers the current version of the camera stack provided in Raspberry Pi OS Bookworm and later. This guide does not cover the legacy camera stack, which used applications like `raspivid`, `raspistill`, and `Picamera` (which has been replaced by `Picamera2`).

====
@@ -8,4 +8,61 @@

To list all the cameras available on your platform, use the xref:camera_software.adoc#list-cameras[`list-cameras`] option. To choose which camera to use, pass the camera index to the xref:camera_software.adoc#camera[`camera`] option.

NOTE: `libcamera` does not yet provide stereoscopic camera support. When running two cameras simultaneously, they must be run in separate processes. This means there is no way to synchronise sensor framing or 3A operation between them. As a workaround, you could synchronise the cameras through an external sync signal for the HQ (IMX477) camera, and switch the 3A to manual mode if necessary.
NOTE: `libcamera` does not yet provide stereoscopic camera support. When running two cameras simultaneously, they must be run in separate processes, meaning there is no way to synchronise 3A operation between them. As a workaround, you could synchronise the cameras through an external sync signal for the HQ (IMX477) camera or use the software camera synchronisation support that is described below, switching the 3A to manual mode if necessary.
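
As a sketch of the manual-3A half of that workaround, you could run the two cameras in separate processes with exposure, gain and white balance fixed by hand (the values below are illustrative placeholders, not recommendations):

[source,console]
----
$ rpicam-vid -n -t 30s --camera 0 --shutter 8000 --gain 2.0 --awbgains 1.8,1.5 -o cam0.h264 &
$ rpicam-vid -n -t 30s --camera 1 --shutter 8000 --gain 2.0 --awbgains 1.8,1.5 -o cam1.h264 &
----

Fixing the shutter time, analogue gain and AWB gains takes auto-exposure and auto-white-balance out of the loop, so the two processes cannot drift apart in their exposure or colour decisions.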

==== Software Camera Synchronisation

Raspberry Pi's _libcamera_ implementation has the ability to synchronise the frames of different cameras using only software. This will cause one camera to adjust it's frame timing so as to coincide as closely as possible with the frames of another camera. No soldering or hardware connections are required, and it will work with all of Raspberry Pi's camera modules, and even third party ones so long as their drivers implement frame duration control correctly.

**How it works**

The scheme works by designating one camera to be the _server_. The server will broadcast timing messages onto the network at regular intervals, such as once a second. Meanwhile other cameras, known as _clients_, can listen to these messages whereupon they may lengthen or shorten frame times slightly so as to pull them into sync with the server. This process is continual, though after the first adjustment, subsequent adjustments are normally small.

The client cameras may be attached to the same Raspberry Pi device as the server, or they may be attached to different Raspberry Pis on the same network. The camera model on the clients may match the server, or they may be different.

Clients and servers need to be set running at the same nominal framerate (such as 30fps). Note that there is no back-channel from the clients back to the server. It is solely the clients' responsibility to be up and running in time to match the server, and the server is completely unaware whether clients have synchronised successfully, or indeed whether there are any clients at all.

In normal operation, running the same make of camera on the same Raspberry Pi, we would expect the frame start times of the camera images to match within "several tens of microseconds". When the camera models are different this could be significantly larger as the cameras will probably not be able to match framerates exactly and will therefore be continually drifting apart (and brought back together with every timing message).

When cameras are on different devices, the system clocks should be synchronised using NTP (normally the case by default for Raspberry Pi OS), or if this is insufficiently precise, another protocol like PTP might be used. Any discrepancy between system clocks will feed directly into extra error in frame start times - even though the advertised timestamps on the frames will not tell you.
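
On Raspberry Pi OS you can quickly confirm that NTP synchronisation is active with `timedatectl` (output abbreviated here):

[source,console]
----
$ timedatectl
...
System clock synchronized: yes
              NTP service: active
----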

**The Server**

The server, as previously explained, broadcasts timing messages onto the network, by default every second. The server will run for a fixed number of frames, by default 100, after which it will inform the camera application on the device that the "synchronisation point" has been reached. At this moment, the application will start using the frames, so in the case of `rpicam-vid`, they will start being encoded and recorded. Recall that the behaviour and even existence of clients has no bearing on this.

If required, there can be several servers on the same network so long as they are broadcasting timing messages to different network addresses. Clients, of course, will have to be configured to listen for the correct address.

**Clients**

Clients listen out for server timing messages and, when they receive one, will shorten or lengthen a camera frame duration by the required amount so that subsequent frames will start, as far as possible, at the same moment as the server's.

The clients learn the correct "synchronisation point" from the server's messages, and just like the server, will signal the camera application at the same moment that it should start using the frames. So in the case of `rpicam-vid`, this is once again the moment at which frames will start being recorded.

Normally it makes sense to start clients _before_ the server, as the clients will simply wait (the "synchronisation point" has not been reached) until a server is seen broadcasting onto the network. This obviously avoids timing problems where a server might reach its "synchronisation point" even before all the clients have been started!

**Usage in `rpicam-vid`**

We can use software camera synchronisation with `rpicam-vid` to record videos that are synchronised frame-by-frame. We're going to assume we have two cameras attached, and we're going to use camera 0 as the server, and camera 1 as the client. `rpicam-vid` defaults to a fixed 30 frames per second, which will be fine for us.

First we should start the client:
[source,console]
----
$ rpicam-vid -n -t 20s --camera 1 --codec libav -o client.mp4 --sync client
----

Note the `--sync client` parameter. This will record for 20 seconds but _only_ once the synchronisation point has been reached. If necessary, it will wait indefinitely for the first server message.

To start the server:
[source,console]
----
$ rpicam-vid -n -t 20s --camera 0 --codec libav -o server.mp4 --sync server
----

This too will run for 20 seconds counting from when the synchronisation point is reached and the recording starts. With the default synchronisation settings (100 frames at 30fps) this means there will be just over 3 seconds for clients to get synchronised.

The server's broadcast address and port, the frequency of the timing messages and the number of frames to wait for clients to synchronise, can all be changed in the camera tuning file. Clients only pay attention to the broadcast address here which should match the server's; the other information will be ignored. Please refer to the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Raspberry Pi Camera tuning guide] for more information.
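
As a purely illustrative sketch of what such a tuning-file entry might look like (the algorithm name and field names below are assumptions, not the documented schema), the synchronisation block could resemble:

[source,json]
----
{
    "rpi.sync":
    {
        "group": "239.255.255.250",
        "port": 10000,
        "frames": 100
    }
}
----

Whatever the actual field names, the broadcast address configured on the clients must match the one used by the server; consult the tuning guide above for the real schema.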

In practical operation there are a few final points to be aware of:

* The fixed framerate needs to be below the maximum framerate at which the camera can operate (in the camera mode that is being used). This is because the synchronisation algorithm may need to _shorten_ camera frames so that clients can catch up with the server, and this will fail if it is already running as fast as it can.
* Whilst camera frames should be correctly synchronised, at higher framerates or depending on system load, it is possible for frames, either on the clients or server, to be dropped. In these cases the frame timestamps will help an application to work out what has happened, though it's usually simpler to try and avoid frame drops - perhaps by lowering the framerate, increasing the number of buffers being allocated to the camera queues (see the xref:camera_software.adoc#buffer-count[`--buffer-count` option]), or reducing system load.
Comment on lines +13 to +68
Suggested change
==== Software Camera Synchronisation
Raspberry Pi's `libcamera` implementation can synchronise the frames of different cameras using only software. Using this synchronisation, one camera adjusts its frame timing to coincide as closely as possible with the frames of another camera. This technique requires no soldering or additional hardware connections, and works with all official Raspberry Pi camera modules as well as third party modules, as long as their drivers correctly implement frame duration control.
The scheme works by designating one camera to be the **server**. The server broadcasts timing messages onto the network at regular intervals (e.g. once a second). Other cameras, known as **clients**, listen to these messages. Clients lengthen or shorten frame times slightly to gradually synchronise with the server.
Clients may be attached to the same Raspberry Pi device as the server, or they may be attached to separate Raspberry Pis on the same network. Clients can use different camera module hardware than the server.
Clients and servers must run at the same nominal framerate (e.g. 30 FPS). Clients do not communicate back to the server; the server is completely unaware whether clients have synchronised successfully, or indeed whether there are any clients at all.
In normal operation, running the same make of camera on the same Raspberry Pi, we would expect the frame start times of the camera images to match within several tens of microseconds. When the camera models differ, this could be significantly larger, since the cameras may not be able to match framerates exactly and will therefore continually drift apart and re-synchronise.
When using cameras connected to separate Raspberry Pis on the same network, the system clocks should be synchronised using NTP. Raspberry Pi uses NTP to set the system time by default. If NTP is insufficiently precise, you could use another protocol, like PTP. Any discrepancy between system clocks feeds directly into extra error in frame start times, even though the timestamps advertised on the frames will not reveal it.
===== The Server
By default, the server broadcasts one timing message every second. The server runs for a fixed number of frames, by default 100, after which it informs the camera application that the **synchronisation point** has been reached. Once the server reaches the synchronisation point, the application starts consuming frames (e.g. `rpicam-vid` will start encoding and recording).
If required, you can run multiple servers on the same network as long as they broadcast timing messages to different network addresses. When running multiple servers, you must configure each client to listen to the correct address.
===== Clients
Clients listen for server timing messages. When a client receives a message, the client shortens or lengthens a camera frame duration by the required amount so that subsequent frames start, as closely as possible, at the same moment as the corresponding frames on the server.
The clients learn the correct synchronisation point from the server's messages. Just like the server, clients signal the camera application at the same moment that it can consume frames.
For the best results, start clients before the server. Clients will wait until a server broadcasts onto the network. This avoids timing problems where a server might reach its synchronisation point before the clients have even started.
===== Usage in `rpicam-vid`
We can use software camera synchronisation with `rpicam-vid` to record videos that are synchronised frame-by-frame. Consider the following:
* we have two cameras attached
* camera 0 is the server
* camera 1 is the client
* `rpicam-vid` defaults to a fixed 30 frames per second
First, run the following command to start the client:
[source,console]
----
$ rpicam-vid -n -t 20s --camera 1 --codec libav -o client.mp4 --sync client
----
Note the `--sync client` parameter. This records for 20 seconds _once the synchronisation point has been reached_. If necessary, this client will wait indefinitely for the first server message.
To start the server, run the following command:
[source,console]
----
$ rpicam-vid -n -t 20s --camera 0 --codec libav -o server.mp4 --sync server
----
This too will run for 20 seconds _once the synchronisation point has been reached_. The default synchronisation settings (100 frames at 30 FPS) provide just over 3 seconds for clients to synchronise.
You can customise the following settings in the camera tuning file:
* server broadcast address
* server broadcast port
* frequency of the timing messages
* the number of frames to wait for clients to synchronise
Clients only pay attention to the broadcast address specified in the tuning file, which should match the server's. For more information about tuning files, see https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Raspberry Pi Camera tuning guide].
When configuring your tuning file, keep the following tips in mind:
* The fixed framerate must not exceed the maximum framerate at which the camera can operate in the camera mode used. The synchronisation algorithm may need to _shorten_ camera frames so that clients can catch up with the server, and this will fail if it is already running as fast as it can.
* Whilst camera frames should be correctly synchronised, at higher framerates or depending on system load, the clients or server could drop frames. In these cases the frame timestamps will help an application to work out what has happened, though it's usually simpler to try and avoid frame drops - perhaps by lowering the framerate, increasing the number of buffers being allocated to the camera queues (see the xref:camera_software.adoc#buffer-count[`--buffer-count` option]), or reducing system load.

@@ -38,6 +38,8 @@ Raspberry Pi OS recognises the following overlays in `/boot/firmware/config.txt`

To use one of these overlays, you must disable automatic camera detection. To disable automatic detection, set `camera_auto_detect=0` in `/boot/firmware/config.txt`. If `config.txt` already contains a line assigning a `camera_auto_detect` value, change the value to `0`. Reboot your Raspberry Pi with `sudo reboot` to load your changes.

If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or one of the Compute Modules, for example), then you can specify which one you are referring to by adding `,cam0` or `,cam1` (don't add any spaces) to the `dtoverlay` that you used from the table above. If you do not add either of these, it will default to checking camera connector 1 (`cam1`). But note that for official Raspberry Pi camera modules, auto-detection will correctly identify all the cameras connected to your device.
Suggested change
If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or a Compute Module connected to an IO Board, for example), specify which one you are referring to by adding `,cam0` or `,cam1` (no spaces) to the `dtoverlay` that you used from the table above. If you do not add either of these, your device will default to camera connector 1 (`cam1`). Official Raspberry Pi camera modules support auto-detection, making this step unnecessary for official camera modules unless you connect multiple cameras simultaneously.
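
For example, a `/boot/firmware/config.txt` fragment that disables auto-detection and selects the HQ camera (IMX477) on connector 0 might look like this (assuming the `imx477` overlay from the table above):

[source,ini]
----
camera_auto_detect=0
dtoverlay=imx477,cam0
----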


[[tuning-files]]
==== Tweak camera behaviour with tuning files

22 changes: 20 additions & 2 deletions documentation/asciidoc/computers/camera/rpicam_options_common.adoc
@@ -89,9 +89,19 @@ Alias: `-t`

Default value: 5000 milliseconds (5 seconds)

Specify how long the application runs before closing. This applies to both video recording and preview windows. When capturing a still image, the application shows a preview window for `timeout` milliseconds before capturing the output image.
Specify how long the application runs before closing. This value is interpreted as a number of milliseconds unless an optional suffix is used to change the unit. The suffix may be one of:
Suggested change
Specify how long the application runs before closing. This value is interpreted as a number of milliseconds unless an optional suffix is used to change the unit. The suffix may be one of the following:


* `min` - minutes
* `s` or `sec` - seconds
* `ms` - milliseconds (the default if no suffix used)
* `us` - microseconds
* `ns` - nanoseconds

This time applies to both video recording and preview windows. When capturing a still image, the application shows a preview window for the length of time specified by the `timeout` parameter before capturing the output image.

To run the application indefinitely, specify a value of `0`. Floating point values are also permitted.

Example: `rpicam-hello -t 0.5min` would run for 30 seconds.

==== `preview`

@@ -553,3 +563,11 @@ Flushes output files to disk as soon as a frame finishes writing, instead of wai
Specifies a JSON file that configures the post-processing applied by the imaging pipeline. This applies to camera images _before_ they reach the application. This works similarly to the legacy `raspicam` "image effects". Accepts a file name path as input.

Post-processing is a large topic and admits the use of third-party software like OpenCV and TensorFlowLite to analyse and manipulate images. For more information, see xref:camera_software.adoc#post-processing-with-rpicam-apps[post-processing].
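
For example, to see post-processing in action (assuming `negate.json`, one of the sample files shipped with `rpicam-apps`, is in the current directory):

[source,console]
----
$ rpicam-hello -t 5s --post-process-file negate.json
----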

==== `buffer-count`

The number of buffers to allocate for still image capture or for video recording. The default value of zero lets each application choose a reasonable number for its own use case (1 for still image capture, and 6 for video recording). Increasing the number can sometimes help to reduce the number of frame drops, particularly at higher framerates.
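
For example, a high-framerate recording that allocates extra buffers to reduce the chance of frame drops (the values are illustrative):

[source,console]
----
$ rpicam-vid -t 10s --framerate 60 --buffer-count 12 -o test.h264
----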

==== `viewfinder-buffer-count`

Like the `buffer-count` option, but applies when running in preview mode (that is, `rpicam-hello`, or the preview phase, not the capture phase, of `rpicam-still`).
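
For example, an indefinite preview with a larger buffer allocation (again, the value is illustrative):

[source,console]
----
$ rpicam-hello -t 0 --viewfinder-buffer-count 8
----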
@@ -132,3 +132,10 @@ Records exactly the specified number of frames. Any non-zero value overrides xre

Records exactly the specified framerate. Accepts a nonzero integer.

==== `low-latency`

On a Pi 5, the `--low-latency` option will reduce the encoding latency, which may be beneficial for real-time streaming applications, in return for (slightly) less good coding efficiency (for example, B frames and arithmethic coding will no longer be used).
Suggested change
On Raspberry Pi 5, use the `--low-latency` option to reduce the encoding latency at the expense of encoding efficiency (removes B frames and arithmetic coding). This can be beneficial for real-time streaming applications.


==== `sync`

Run the camera in software synchronisation mode, where multiple cameras synchronise frames to the same moment in time. The `sync` mode can be set to either `client` or `server`. For more information, please refer to the detailed explanation of xref:camera_software.adoc#software-camera-synchronisation[how software synchronisation works].
24 changes: 22 additions & 2 deletions documentation/asciidoc/computers/camera/rpicam_vid.adoc
@@ -11,20 +11,32 @@ For example, the following command writes a ten-second video to a file named `test.h264`:
$ rpicam-vid -t 10s -o test.h264
----

You can play the resulting file with VLC and other video players:
You can play the resulting file with ffplay and other video players:

[source,console]
----
$ vlc test.h264
$ ffplay test.h264
----

[WARNING]
====
Older versions of vlc were able to play H.264 files correctly, but recent versions do not - displaying only a few, or possibly garbled, frames. You should either use a different media player, or save your files in a more widely supported container format - such as MP4 (see below).
====
Comment on lines +21 to +24
Suggested change
[IMPORTANT]
====
Older versions of VLC could play H.264 files correctly, but recent versions do not. Instead, they display only a few, possibly garbled, frames. You should either use a different media player, or save your files in a more widely supported container format such as MP4 (see below).
====


On Raspberry Pi 5, you can output to the MP4 container format directly by specifying the `mp4` file extension for your output file:

[source,console]
----
$ rpicam-vid -t 10s -o test.mp4
----

On Raspberry Pi 4, or earlier devices, you can save MP4 files using:
Suggested change
On Raspberry Pi 4 or earlier, run the following command to save MP4 files:


[source,console]
----
$ rpicam-vid -t 10s --codec libav -o test.mp4
----

==== Encoders

`rpicam-vid` supports motion JPEG as well as both uncompressed and unformatted YUV420:
@@ -76,3 +88,11 @@ To enable the `libav` backend, pass `libav` to the xref:camera_software.adoc#cod
----
$ rpicam-vid --codec libav --libav-format avi --libav-audio --output example.avi
----

==== Low latency video with the Pi 5

Pi 5 uses software video encoders. These generally output frames with a longer latency than the old hardware encoders, and this can sometimes be an issue for real-time streaming applications.

In this case, please add the option `--low-latency` to the `rpicam-vid` command. This will alter certain encoder options to output the encoded frame more quickly.

The downside is that coding efficiency is (slightly) less good, and that the processor's multiple cores may be used (slightly) less efficiently. The maximum framerate that can be encoded may be slightly reduced (though it will still easily achieve 1080p30).
Comment on lines +92 to +98
Suggested change
==== Low latency video with Raspberry Pi 5
Raspberry Pi 5 uses software video encoders instead of hardware encoders. As a result, frames typically output with higher latency than the hardware encoders on Raspberry Pi 4 and earlier. This can sometimes cause issues for real-time streaming applications.
To reduce the encoder latency, pass the `--low-latency` flag to the `rpicam-vid` command. This slightly reduces encoding efficiency and removes B frames to limit latency. The maximum framerate that can be encoded may be slightly reduced, but you should expect to output at least 1080p resolution at 30 FPS.
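
As a sketch of a typical real-time use (the port number is an illustrative placeholder), `--low-latency` can be combined with a TCP network stream:

[source,console]
----
$ rpicam-vid -t 0 --low-latency --inline --listen -o tcp://0.0.0.0:8888
----

Here `--inline` repeats the H.264 stream headers with each intra frame, and `--listen` makes `rpicam-vid` wait for a client to connect before streaming.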
