KITTI with Ground Truth Poses #15
Comments
Hi, thank you for your interest, and sorry for this late reply. Currently, the ground truth msg is only used to align the first pose of SLAM with it, so that we can visualize the drift of SLAM online. We also save the pose from the ground truth msg for offline evaluation. I understand what you need. If you want to do so, I think you may try to use the ground truth msg as the odometry msg (remap the topic, set |
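For anyone trying this, a minimal roslaunch sketch along these lines may work; the bag path, the ground-truth topic name (/ground_truth_pose), and the odometry topic name (/odometry) are assumptions, so substitute whatever your bag and the SLAMesh node actually use:
```xml
<launch>
    <!-- Hypothetical sketch: play the recorded data and expose its ground-truth
         pose topic under the name the SLAM node expects for odometry.
         The bag path and all topic names below are placeholders. -->
    <node pkg="rosbag" type="play" name="bag_player" args="--clock /path/to/your_data.bag">
        <!-- Remap the bag's ground-truth topic onto the odometry input topic -->
        <remap from="/ground_truth_pose" to="/odometry"/>
    </node>
</launch>
```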
Thanks for the reply. I will try that and get back to you with more questions if I have any. |
Hi, thank you so much for providing this perfect program.
I am trying to use my own vertical radar scan data for model reconstruction, which should give a cleaner mesh, but the matching process always drifts. I followed the solution from issue #15, and then tried to use the pose data obtained from the horizontal radar of the same machine to correct it.
It runs without errors but never draws the mesh. The output of the run looks like this:
[ INFO] [1709195543.923199405]: PointCloud seq: [3428]
[ INFO] [1709195543.973204881]: PointCloud seq: [3429]
[ INFO] [1709195544.024206786]: PointCloud seq: [3430]
[ INFO] [1709195544.074742817]: PointCloud seq: [3431]
Hopefully I can get some luck with your help. |
Do you mean radar rather than lidar? Can you show me a frame of your data, both vertical and horizontal?
|
Sorry for the delayed response. If you see [ INFO] [1709195543.923199405]: PointCloud seq: [3428] printed repeatedly, SLAM hasn't started; that message comes from the point cloud callback function: SLAMesh/src/slamesher_node.cpp, line 729 (commit 2002165).
I think your odometry topic may not be set correctly and the algorithm is waiting for it. You can uncomment this line and check whether you receive the print from the odometry callback function: SLAMesh/src/slamesher_node.cpp, line 723 (commit 2002165).
You should remap the odometry topic in the launch file like:
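For example, something along these lines (the package, node, and topic names here are assumptions; adjust them to match the actual launch file and your bag):
```xml
<!-- Hypothetical sketch: point the SLAM node's odometry subscription at the
     topic your horizontal-radar odometry is actually published on.
     Package, node, and topic names are placeholders. -->
<node pkg="slamesh" type="slamesher_node" name="slamesher_node" output="screen">
    <remap from="/odometry" to="/your_horizontal_radar_odom_topic"/>
</node>
```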
By the way, I have made some changes to the code. Would you please test the new code again using the vertical lidar directly and check whether it still drifts? |
Hi! Thank you for such awesome work.
I saw that you have a grt_available parameter and a GroundTruthCallback, but I didn't see any interface or any mention in the README for feeding ground truth poses to SLAMesh to test only the meshing part. I'm wondering if I have to feed the gt poses through rosbag play, or load them the way you load the lidar points for KITTI. Any help is much appreciated. Thanks!