Apply point cloud from RealSense into OpenCV Surface matching algorithm #13689
-
So I have tried to export the point cloud from the RealSense camera (I am using the D435i model). I am basing my approach on this page and trying to replicate it with the point cloud from the D435i: https://docs.opencv.org/4.x/d9/d25/group__surface__matching.html The reference I am working from is the opencv-contrib Python sample, here: https://github.com/opencv/opencv_contrib/blob/4.x/modules/surface_matching/samples/ppf_load_match.py
but I am stuck at the step where I try to load the …
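For context, a minimal pyrealsense2 export sketch along these lines (the stream settings and output file name are only placeholders) is:

```python
import pyrealsense2 as rs

# Grab one frame set from the D435i and write the point cloud to a .ply file.
pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
pipe.start(cfg)
try:
    frames = pipe.wait_for_frames()
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()

    pc = rs.pointcloud()
    pc.map_to(color)               # texture-map the cloud to the colour frame
    points = pc.calculate(depth)   # rs.points object
    points.export_to_ply("d435i_cloud.ply", color)  # placeholder file name
finally:
    pipe.stop()
```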
-
Hi @anewworl, I have not seen an example of a RealSense ply file being imported directly into an OpenCV script. My research indicates that imported ply data is typically converted into an OpenCV Mat, as in the discussion at the link below.
https://stackoverflow.com/questions/29435273/pointcloud-data-to-mat-data
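In Python, cv2.ppf_match_3d.loadPLYSimple() reads a ply file straight into a NumPy array (which plays the role of the Mat), so one possible adaptation of ppf_load_match.py for RealSense-exported clouds is sketched below. The file names are placeholders, the normal-estimation step is only needed if the exported ply does not already contain normals, and a RealSense ply that stores colour or faces may need to be simplified first.

```python
import cv2 as cv
import numpy as np

# Placeholder file names -- substitute your own exported RealSense clouds.
model_file = "model_from_d435i.ply"   # cloud of the object to find
scene_file = "scene_from_d435i.ply"   # cloud of the scene to search in

# loadPLYSimple returns an Nx3 array (withNormals=0) or Nx6 (withNormals=1).
model = cv.ppf_match_3d.loadPLYSimple(model_file, 0)
scene = cv.ppf_match_3d.loadPLYSimple(scene_file, 0)

# PPF matching needs per-point normals; estimate them if the ply lacks them
# (viewpoint at the camera origin).
viewpoint = np.array([0.0, 0.0, 0.0])
_, model = cv.ppf_match_3d.computeNormalsPC3d(model, 6, False, viewpoint)
_, scene = cv.ppf_match_3d.computeNormalsPC3d(scene, 6, False, viewpoint)

# Same detector and matching parameters as the ppf_load_match.py sample.
detector = cv.ppf_match_3d_PPF3DDetector(0.025, 0.05)
detector.trainModel(model)
results = detector.match(scene, 1.0 / 40.0, 0.05)

# Refine the best candidate poses with ICP, as the sample does.
icp = cv.ppf_match_3d_ICP(100)
_, results = icp.registerModelToScene(model, scene, results[:2])
for result in results:
    print(result.pose)
```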
-
On the RealSense Viewer depth image, the background is likely missing because the Viewer has the threshold post-processing filter enabled by default, which restricts the maximum distance of displayed depth to 4 meters from the camera. Going to Stereo Module > Post-Processing, expanding the list of post-processing filters and disabling the Threshold filter will allow depth up to 10 meters away to be included in the depth image. The amount of detail on the depth image may also increase, and the number of holes / gaps decrease, if you maximize the value of the Laser Power setting to '360'. This setting can be found under Stereo Module > Controls. Calculating roll, pitch and yaw with pyrealsense2 code by accessing the camera's IMU component is discussed at #4391
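The same two adjustments can also be made in pyrealsense2 code rather than in the Viewer; a minimal sketch (the stream settings are only an example) might look like this:

```python
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
profile = pipe.start(cfg)

# Maximise laser power (the D400-series range is 0-360).
depth_sensor = profile.get_device().first_depth_sensor()
depth_sensor.set_option(rs.option.laser_power, 360)

# The threshold filter caps depth at 4 m by default; raising max_distance
# keeps far geometry, and skipping the filter entirely removes the cap.
threshold = rs.threshold_filter()
threshold.set_option(rs.option.max_distance, 10.0)

try:
    frames = pipe.wait_for_frames()
    depth = threshold.process(frames.get_depth_frame())
    # 'depth' now retains values out to roughly 10 m.
finally:
    pipe.stop()
```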
-
If #4391 is not suitable for your requirements for calculating roll-pitch-yaw with Python code, the official OpenCV documentation provides a Python pose estimation tutorial that uses OpenCV's solvePnP algorithm to obtain pose.
https://docs.opencv.org/4.x/d7/d53/tutorial_py_pose.html
If ROS is an option for you then you could consider the DOPE method. Such a system uses an RGB image to recognize the object as a box and then calculates its 6 Degrees of Freedom (6DOF) pose. A couple of RealSense-compatible example projects of this type are in the links below.
https://github.com/pauloabelha/Deep_Object_Pose
https://github.com/avinashsen707/AUBOi5-D435-ROS-DOPE
Another way to obtain the pose of an object with a 400 Series camera is to attach an ArUco image tag to the object.
https://github.com/Jphartogi/ipa_marker_detection
However, if you already have a Python solution that can obtain the pose of an object and it works well, then I would recommend continuing with it.
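As a rough illustration of the solvePnP route, the sketch below recovers roll, pitch and yaw from four hypothetical 2D-3D correspondences. The intrinsics, object corners and pixel locations are placeholder values that would come from your own camera and detection step.

```python
import cv2
import numpy as np

# Hypothetical 3D corners of a known box (metres, object frame) and the
# pixel locations where they were detected -- replace with real data.
object_points = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                          [0.1, 0.05, 0.0], [0.0, 0.05, 0.0]])
image_points = np.array([[320.0, 240.0], [400.0, 238.0],
                         [402.0, 290.0], [322.0, 292.0]])

# Assumed camera intrinsics; for a RealSense stream these can be read from
# profile.as_video_stream_profile().get_intrinsics() (fx, fy, ppx, ppy).
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix

# Roll / pitch / yaw from the rotation matrix (Z-Y-X convention).
roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
pitch = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))
yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print("roll %.1f  pitch %.1f  yaw %.1f (degrees)" % (roll, pitch, yaw))
```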