documentation/asciidoc/computers/camera/rpicam_apps_multicam.adoc (+5 -5)
@@ -12,7 +12,7 @@ NOTE: `libcamera` does not yet provide stereoscopic camera support. When running
==== Software Camera Synchronisation
-Raspberry Pi's _libcamera_ implementation has the ability to synchronise the frames of different cameras using only software. This will cause one camera to adjust it's frame timing so as to coincide as closely as possible with the frames of another camera. No soldering or hardware connections are required, and it will work with all Raspberry Pi's camera modules, and even third party ones so long as their drivers implement frame duration control correctly.
+Raspberry Pi's _libcamera_ implementation has the ability to synchronise the frames of different cameras using only software. This will cause one camera to adjust its frame timing so as to coincide as closely as possible with the frames of another camera. No soldering or hardware connections are required, and it will work with all of Raspberry Pi's camera modules, and even third party ones so long as their drivers implement frame duration control correctly.
**How it works**
@@ -38,7 +38,7 @@ Clients listen out for server timing messages and, when they receive one, will s
The clients learn the correct "synchronisation point" from the server's messages, and just like the server, will signal the camera application at the same moment that it should start using the frames. So in the case of `rpicam-vid`, this is once again the moment at which frames will start being recorded.
-Normally it makes sense to start clients _before_ the server, as the clients will simply wait (the "syncrhonisation point" has not been reached) until a server is seen broadcasting onto the network. This obviously avoids timing problems where a server might reach its "synchronisation point" even before all the clients have been started!
+Normally it makes sense to start clients _before_ the server, as the clients will simply wait (the "synchronisation point" has not been reached) until a server is seen broadcasting onto the network. This obviously avoids timing problems where a server might reach its "synchronisation point" even before all the clients have been started!
**Usage in `rpicam-vid`**
@@ -50,19 +50,19 @@ First we should start the client:
-Note the `--sync client` parameter. This will record for 20 seconds in total but note that this _includes_ the time to start the server and achieve synchronisation. So while the start of the recordings, and all the frames, will be synchronised, the end of the recordings is not.
+Note the `--sync client` parameter. This will record for 20 seconds but _only_ once the synchronisation point has been reached. If necessary, it will wait indefinitely for the first server message.
-This will run for 20 seconds but with the default settings (100 frames at 30fps) will give clients just over 3 seconds to get synchronised before anything is recorded. So the final video file will contain slightly under 17 seconds of video.
+This too will run for 20 seconds counting from when the synchronisation point is reached and the recording starts. With the default synchronisation settings (100 frames at 30fps) this means there will be just over 3 seconds for clients to get synchronised.
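For orientation, a minimal client/server pair matching the behaviour described above might look like the sketch below. The filenames, the 20-second duration and the fixed 30fps framerate are illustrative choices, not values taken from this page; `--sync`, `-t`, `--framerate`, `-n` and `-o` are standard `rpicam-vid` options.

[source,console]
----
# Start the client first; it waits for the server's broadcast timing messages.
$ rpicam-vid -n -t 20000 --framerate 30 --sync client -o client.h264

# Then start the server; both recordings begin at the shared synchronisation point.
$ rpicam-vid -n -t 20000 --framerate 30 --sync server -o server.h264
----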
The server's broadcast address and port, the frequency of the timing messages and the number of frames to wait for clients to synchronise, can all be changed in the camera tuning file. Clients only pay attention to the broadcast address here which should match the server's; the other information will be ignored. Please refer to the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Raspberry Pi Camera tuning guide] for more information.
In practical operation there are a few final points to be aware of:
* The fixed framerate needs to be below the maximum framerate at which the camera can operate (in the camera mode that is being used). This is because the synchronisation algorithm may need to _shorten_ camera frames so that clients can catch up with the server, and this will fail if it is already running as fast as it can.
-* Whilst cameras frames should be correctly synchronised, at higher framerates, or depending on system load, it is possible for frames, either on the clients or server, to be dropped. In these cases the frame timestamps will help an application to work out what has happened, though it's usually easier simply to try and avoid frame drops - perhaps by lowering the framerate, increasing the number of buffers being allocated to the camera queues, or reducing system load (see the xref:camera_software.adoc#buffer-count[`--buffer-count` option].)
+* Whilst camera frames should be correctly synchronised, at higher framerates or depending on system load, it is possible for frames, either on the clients or server, to be dropped. In these cases the frame timestamps will help an application to work out what has happened, though it's usually simpler to try and avoid frame drops - perhaps by lowering the framerate, increasing the number of buffers being allocated to the camera queues, or reducing system load (see the xref:camera_software.adoc#buffer-count[`--buffer-count` option].)
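As a hedged illustration of the two points above, the sketch below picks a fixed framerate comfortably below the mode's maximum and raises the buffer count; the specific numbers are arbitrary examples rather than recommended values.

[source,console]
----
# Run at 30fps in a mode that can go faster, and allocate extra buffers to absorb brief stalls.
$ rpicam-vid -n -t 20000 --framerate 30 --buffer-count 12 --sync client -o client.h264
----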
documentation/asciidoc/computers/camera/rpicam_configuration.adoc (+1 -1)
@@ -38,7 +38,7 @@ Raspberry Pi OS recognises the following overlays in `/boot/firmware/config.txt`
To use one of these overlays, you must disable automatic camera detection. To disable automatic detection, set `camera_auto_detect=0` in `/boot/firmware/config.txt`. If `config.txt` already contains a line assigning a `camera_auto_detect` value, change the value to `0`. Reboot your Raspberry Pi with `sudo reboot` to load your changes.
-If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or CM4, for example), then you can specify which one you are referring to by adding `,cam0` or `,cam1` (don't add any spaces) to the `dtoverlay` that you used from the table above. If you do not add either of these, it will default to checking camera connector 1 (`cam1`). But note that for official Raspberry PI camera modules, auto-detection will correctly identify all the cameras connected to your device.
+If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or one of the compute modules, for example), then you can specify which one you are referring to by adding `,cam0` or `,cam1` (don't add any spaces) to the `dtoverlay` that you used from the table above. If you do not add either of these, it will default to checking camera connector 1 (`cam1`). But note that for official Raspberry Pi camera modules, auto-detection will correctly identify all the cameras connected to your device.
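A minimal `config.txt` sketch of these two settings is shown below; `imx477` is only an example, so substitute the overlay for your sensor from the table above.

[source,ini]
----
# /boot/firmware/config.txt - illustrative sketch
camera_auto_detect=0
# HQ camera (imx477) on connector cam0; drop ",cam0" to default to cam1
dtoverlay=imx477,cam0
----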
documentation/asciidoc/computers/camera/rpicam_options_common.adoc (+1 -1)
@@ -556,7 +556,7 @@ Post-processing is a large topic and admits the use of third-party software like
==== `buffer-count`
-The number of buffers to allocate for still image capture or for video recording. The default value of zero lets each application choose a value for itself (1 for still image capture, and 6 for video recording). Increasing the number can sometimes help to reduce the number of frame drops, particularly at higher framerates.
+The number of buffers to allocate for still image capture or for video recording. The default value of zero lets each application choose a reasonable number for its own use case (1 for still image capture, and 6 for video recording). Increasing the number can sometimes help to reduce the number of frame drops, particularly at higher framerates.
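As a sketch of overriding that default, the command below asks for 12 buffers at a higher framerate; the numbers are illustrative and the right value depends on available memory and the framerate you need.

[source,console]
----
# Twice the video default of 6 buffers, to ride out short scheduling stalls at 60fps.
$ rpicam-vid --buffer-count 12 --framerate 60 -t 10000 -o test.h264
----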
documentation/asciidoc/computers/camera/rpicam_vid.adoc (+1 -1)
@@ -20,7 +20,7 @@ $ ffplay test.h264
[WARNING]
====
-Older versions of vlc used to play H.264 files correctly, but recent versions do not - displaying only a few, or possibly garbled, frames. You should either use a different media player, or save your files in a more widely supported container format - such as MP4 (see below).
+Older versions of vlc were able to play H.264 files correctly, but recent versions do not - displaying only a few, or possibly garbled, frames. You should either use a different media player, or save your files in a more widely supported container format - such as MP4 (see below).
====
On Raspberry Pi 5, you can output to the MP4 container format directly by specifying the `mp4` file extension for your output file:
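A minimal sketch of such a capture, with an illustrative 10-second duration and filename, might be:

[source,console]
----
# The .mp4 extension selects the MP4 container directly (Raspberry Pi 5).
$ rpicam-vid -t 10000 -o test.mp4
----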
-As noted previously, `vlc` no longer handles unencapsulated h264 streams.
+As noted previously, `vlc` no longer handles unencapsulated H.264 streams.
-In fact, support for unencapsulated h264 can generally be quite poor so it is often better to send an MPEG-2 Transport Stream instead. Making use of `libav`, this can be accomplished with:
+In fact, support for unencapsulated H.264 can generally be quite poor so it is often better to send an MPEG-2 Transport Stream instead. Making use of `libav`, this can be accomplished with:
[source,console]
----
@@ -35,7 +35,7 @@ $ vlc udp://@:<port>
=== TCP
-You can also stream video over TCP. As before, we can send an unencapsulated h264 stream over the network. To use a Raspberry Pi as a server:
+You can also stream video over TCP. As before, we can send an unencapsulated H.264 stream over the network. To use a Raspberry Pi as a server:
https://gstreamer.freedesktop.org/[GStreamer] is a Linux framework for reading, processing and playing multimedia files. We can also use it in conjunction with `rpicam-vid` for network streaming.
-This setup uses `rpicam-vid` to output an encoded h.264 bitstream to stdout. As we've done previously, we're going to encapsulate this in an MPEG-2 Transport Stream for better downstream compatibility.
+This setup uses `rpicam-vid` to output an H.264 bitstream to stdout, though as we've done previously, we're going to encapsulate it in an MPEG-2 Transport Stream for better downstream compatibility.
Then, we use the GStreamer `fdsrc` element to receive the bitstream, and extra GStreamer elements to send it over the network. On the server, run the following command to start the stream, replacing the `<ip-addr>` placeholder with the IP address of the client or multicast address and replacing the `<port>` placeholder with the port you would like to use for streaming: