Commit 133a579

Update paper link
1 parent 00ac208 commit 133a579

File tree

1 file changed: +1 −1 lines changed

src/pages/publications.mdx

+1 −1
@@ -25,7 +25,7 @@ import * as features from '@site/src/components/PublicationFeatures';
 </li>
 <li>
 <features.ConferenceItem conference="NeurIPS Workshop OWA (Oral)"/>
-<features.PaperTitle paperLink="https://arxiv.org/abs/2410.01273" title="Integrating Visual and Linguistic Instructions for Context-Aware Navigation Agents"/>
+<features.PaperTitle paperLink="https://openreview.net/forum?id=U6wyOnPt1U" title="Integrating Visual and Linguistic Instructions for Context-Aware Navigation Agents"/>
 <features.AuthorItem authors={["Suhwan Choi", "Yongjun Cho", "Minchan Kim", "Jaeyoon Jung", "Myunchul Joe", "Yubeen Park", "Minseo Kim", "Sungwoong Kim", "Sungjae Lee", "Hwiseong Park", "Jiwan Chung", "Youngjae Yu"]} numFirstAuthor={4} isBrainTeam={[true, true, true, true, true, false, false, false, false, false, false, false]}/>
 <features.PaperDescription preview="Real-life robot navigation involves more than simply reaching a destination; it requires optimizing movements while considering scenario-specific goals. "
 description="Humans often express these goals through abstract cues, such as verbal commands or rough sketches. While this guidance may be vague or noisy, we still expect robots to navigate as intended. For robots to interpret and execute these abstract instructions in line with human expectations, they need to share a basic understanding of navigation concepts with humans. To address this challenge, we introduce CANVAS, a novel framework that integrates both visual and linguistic instructions for commonsense-aware navigation. CANVAS leverages imitation learning, enabling robots to learn from human navigation behavior. We also present COMMAND, a comprehensive dataset that includes human-annotated navigation results spanning over 48 hours and 219 kilometers, specifically designed to train commonsense-aware navigation systems in simulated environments. Our experiments demonstrate that CANVAS outperforms the strong rule-based ROS NavStack system across all environments, excelling even with noisy instructions. In particular, in the orchard environment where ROS NavStack achieved a 0% success rate, CANVAS reached a 67% success rate. CANVAS also closely aligns with human demonstrations and commonsense constraints, even in unseen environments. Moreover, real-world deployment of CANVAS shows impressive Sim2Real transfer, with a total success rate of 69%, highlighting the potential of learning from human demonstrations in simulated environments for real-world applications."/>

0 commit comments