Commit abdf578
adding projects
1 parent af7a273 commit abdf578

12 files changed (+160, -6 lines)

_data/people.yml

Lines changed: 53 additions & 0 deletions
@@ -665,6 +665,59 @@ james:
   webpage: https://www.mcgill.ca/mecheng/james-forbes
 
 
+yxiu:
+  display_name: Yuliang Xiu
+  role: collab
+  webpage: https://xiuyuliang.cn/
+
+
+scholkopf:
+  display_name: Bernhard Schölkopf
+  role: collab
+  webpage: https://is.mpg.de/~bs
+
+corban:
+  display_name: Corban Rivera
+  role: collab
+  webpage: https://www.jhuapl.edu/work/our-organization/research-and-exploratory-development/red-staff-directory/corban-rivera
+
+william_paul:
+  display_name: William Paul
+  role: collab
+  webpage: https://scholar.google.com/citations?user=92bmh84AAAAJ
+
+
+rama_chellappa:
+  display_name: Rama Chellappa
+  role: collab
+  webpage: https://engineering.jhu.edu/faculty/rama-chellappa/
 
 
+chuang_gan:
+  display_name: Chuang Gan
+  role: collab
+  webpage: https://people.csail.mit.edu/ganchuang/
+
+roger_girgis:
+  display_name: Roger Girgis
+  role: collab
+  webpage: https://mila.quebec/en/person/roger-girgis/
+
+anthony_gosselin:
+  display_name: Anthony Gosselin
+  role: collab
+  webpage: https://www.linkedin.com/in/anthony-gosselin-098b7a1a1/?originalSubdomain=ca
+
+
+bruno_carrez:
+  display_name: Bruno Carrez
+  role: collab
+  webpage: https://mila.quebec/en/person/bruno-carrez/
+
+felix_heide:
+  display_name: Felix Heide
+  role: collab
+  webpage: https://www.cs.princeton.edu/~fheide/
+
 
+
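The keys added above are referenced by the collaborator lists in the project pages later in this commit. A minimal sketch (hypothetical helper, not part of this repository) of a consistency check that would catch a collaborator key with no matching `people.yml` entry:

```python
# Hypothetical consistency check: every collaborator key referenced in a
# project's front matter should exist as a top-level key in _data/people.yml.
# The key set below mirrors the entries added in this commit.
people = {"yxiu", "scholkopf", "corban", "william_paul", "rama_chellappa",
          "chuang_gan", "roger_girgis", "anthony_gosselin", "bruno_carrez",
          "felix_heide"}

def missing_keys(project_collaborators, people_keys):
    """Return the collaborator keys that have no people.yml entry."""
    return [k for k in project_collaborators if k not in people_keys]

# A misspelled key such as 'anothony_gosselin' is flagged immediately:
print(missing_keys(["roger_girgis", "anothony_gosselin"], people))
```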

_projects/01-gradslam.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: gradslam
+title: "∇SLAM: Dense SLAM meets Automatic Differentiation"
 
 notitle: false
 
_projects/conceptgraphs.md

Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
+---
+title: "ConceptGraphs: Open-Vocabulary 3D Scene Graphs for Perception and Planning"
+
+# status: active
+
+notitle: false
+
+description: |
+  ConceptGraphs builds an open-vocabulary scene graph from a sequence of posed RGB-D images. Compared to our previous approach, ConceptFusion, this representation is sparser and better captures the relationships between entities and objects in the graph.
+
+people:
+- ali-k
+- sacha
+- bipasha
+- aditya
+- kirsty
+- liam
+
+collaborators:
+- qiao
+- krishna
+- corban
+- william_paul
+- rama_chellappa
+- chuang_gan
+- celso
+- tenenbaum
+- torralba
+- shkurti
+
+layout: project
+image: /img/papers/concept-graphs.png
+link: https://concept-graphs.github.io/
+last-updated: 2024-09-23
+---
+
+## ConceptGraphs: Open-Vocabulary 3D Scene Graphs for Perception and Planning
+
+For robots to perform a wide variety of tasks, they require a 3D representation of the world that is semantically rich, yet compact and efficient for task-driven perception and planning. Recent approaches have attempted to leverage features from large vision-language models to encode semantics in 3D representations. However, these approaches tend to produce maps with per-point feature vectors, which do not scale well in larger environments, nor do they contain semantic spatial relationships between entities in the environment, which are useful for downstream planning. In this work, we propose ConceptGraphs, an open-vocabulary graph-structured representation for 3D scenes. ConceptGraphs is built by leveraging 2D foundation models and fusing their output to 3D by multi-view association. The resulting representations generalize to novel semantic classes, without the need to collect large 3D datasets or finetune models. We demonstrate the utility of this representation through a number of downstream planning tasks that are specified through abstract (language) prompts and require complex reasoning over spatial and semantic concepts.
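The multi-view association step mentioned above can be illustrated with a toy sketch (illustrative only, not the authors' implementation; the features, threshold, and fusion rule are made up):

```python
# Toy sketch of multi-view object association: each per-frame detection carries
# a semantic feature vector; a detection merges into an existing 3D object node
# when cosine similarity exceeds a threshold, otherwise it starts a new node.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def associate(frames, threshold=0.9):
    objects = []  # each node keeps a running-mean feature and a detection count
    for detections in frames:
        for feat in detections:
            best, best_sim = None, threshold
            for obj in objects:
                sim = cosine(feat, obj["feature"])
                if sim > best_sim:
                    best, best_sim = obj, sim
            if best is None:
                objects.append({"feature": list(feat), "count": 1})
            else:  # fuse: running-mean update of the node's feature
                n = best["count"]
                best["feature"] = [(f * n + x) / (n + 1)
                                   for f, x in zip(best["feature"], feat)]
                best["count"] = n + 1
    return objects

# Two views of the same object fuse into one node; a dissimilar one stays separate.
frames = [[(1.0, 0.0)], [(0.99, 0.05)], [(0.0, 1.0)]]
print(len(associate(frames)))  # 2 object nodes
```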

_projects/ctcnet.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: Self-supervised visual odometry estimation
+title: Geometric Consistency for Self-Supervised End-to-End Visual Odometry
 
 description: |
   A self-supervised deep network for visual odometry estimation from monocular imagery.

_projects/ctrl-sim.md

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
+---
+title: "CtRL-Sim: Reactive and Controllable Driving Agents with Offline Reinforcement Learning"
+# status: active
+
+notitle: false
+
+description: |
+  CtRL-Sim is a framework that leverages return-conditioned offline reinforcement learning (RL) to enable reactive, closed-loop, and controllable behaviour simulation within a physics-enhanced Nocturne environment.
+
+people:
+- luke
+- liam
+
+collaborators:
+- roger_girgis
+- anthony_gosselin
+- bruno_carrez
+- florian
+- felix_heide
+- chris
+
+
+layout: project
+image: /img/papers/ctrl-sim.png
+link: https://montrealrobotics.ca/ctrlsim/
+last-updated: 2024-09-25
+---
+
+## CtRL-Sim: Reactive and Controllable Driving Agents with Offline Reinforcement Learning
+
+Evaluating autonomous vehicle (AV) stacks in simulation typically involves replaying driving logs from real-world recorded traffic. However, agents replayed from offline data are not reactive and are hard to control intuitively. Existing approaches address these challenges with methods that rely on heuristics or on generative models of real-world data, but these approaches either lack realism or necessitate costly iterative sampling procedures to control the generated behaviours. In this work, we take an alternative approach and propose CtRL-Sim, a method that leverages return-conditioned offline reinforcement learning to efficiently generate reactive and controllable traffic agents. Specifically, we process real-world driving data through a physics-enhanced Nocturne simulator to generate a diverse offline reinforcement learning dataset, annotated with various reward terms. With this dataset, we train a return-conditioned multi-agent behaviour model that allows for fine-grained manipulation of agent behaviours by modifying the desired returns for the various reward components. This capability enables the generation of a wide range of driving behaviours beyond the scope of the initial dataset, including adversarial behaviours. We demonstrate that CtRL-Sim can generate diverse and realistic safety-critical scenarios while providing fine-grained control over agent behaviours.
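The return-conditioning idea described above can be illustrated with a toy sketch (hypothetical action names and return values, not the actual CtRL-Sim model):

```python
# Toy sketch of return conditioning: the behaviour model receives a desired
# return per reward component and favours actions whose predicted per-component
# returns best match that target. All names and numbers are invented.
def pick_action(actions, target_returns):
    """Choose the action whose per-component returns are closest (L1) to target."""
    def dist(a):
        return sum(abs(a["returns"][k] - v) for k, v in target_returns.items())
    return min(actions, key=dist)["name"]

actions = [
    {"name": "yield",  "returns": {"progress": 0.2, "no_collision": 1.0}},
    {"name": "cut_in", "returns": {"progress": 0.9, "no_collision": 0.3}},
]

# A high desired safety return elicits cautious driving; lowering it elicits
# adversarial behaviour, as in the final sentences of the abstract above.
print(pick_action(actions, {"progress": 0.5, "no_collision": 1.0}))  # yield
print(pick_action(actions, {"progress": 1.0, "no_collision": 0.0}))  # cut_in
```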

_projects/gradsim.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: gradsim
+title: "∇Sim: Differentiable Simulation for System Identification and Visuomotor Control"
 
 notitle: false
 

_projects/gshell3d.md

Lines changed: 30 additions & 0 deletions
@@ -0,0 +1,30 @@
+---
+title: "Ghost on the Shell: An Expressive Representation of General 3D Shapes"
+
+# status: active
+
+notitle: false
+
+description: |
+  G-Shell models both watertight and non-watertight meshes of varying topology in a differentiable way. Mesh extraction with G-Shell is stable: no MLP gradients are needed, only sign checks on grid vertices.
+
+people:
+- zhen
+- liam
+
+collaborators:
+- yfeng
+- yxiu
+- wyliu
+- mjb
+- scholkopf
+
+layout: project
+image: /img/papers/gshell.png
+link: https://gshell3d.github.io/
+last-updated: 2024-09-24
+---
+
+## Ghost on the Shell: An Expressive Representation of General 3D Shapes
+
+The creation of photorealistic virtual worlds requires the accurate modeling of 3D surface geometry for a wide range of objects. For this, meshes are appealing since they 1) enable fast physics-based rendering with realistic material and lighting, 2) support physical simulation, and 3) are memory-efficient for modern graphics pipelines. Recent work on reconstructing and statistically modeling 3D shape, however, has critiqued meshes as being topologically inflexible. To capture a wide range of object shapes, any 3D representation must be able to model solid, watertight shapes as well as thin, open surfaces. Recent work has focused on the former, and methods for reconstructing open surfaces do not support fast reconstruction with material and lighting or unconditional generative modelling. Inspired by the observation that open surfaces can be seen as islands floating on watertight surfaces, we parameterize open surfaces by defining a manifold signed distance field on watertight templates. With this parameterization, we further develop a grid-based and differentiable representation that parameterizes both watertight and non-watertight meshes of arbitrary topology. Our new representation, called Ghost-on-the-Shell (G-Shell), enables two important applications: differentiable rasterization-based reconstruction from multiview images and generative modelling of non-watertight meshes. We empirically demonstrate that G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks, while also performing effectively for watertight meshes.
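The sign-check mesh extraction mentioned in the description can be illustrated in one dimension (a toy sketch under simplified assumptions, not the G-Shell code):

```python
# Toy sketch of sign-check extraction: with a signed distance value stored at
# each grid vertex, a surface crossing on an edge is detected purely by
# comparing the signs of its endpoint values -- no gradients are required.
def crossed_edges(values, edges):
    """Return the edges whose endpoint SDF values have opposite signs."""
    return [(i, j) for i, j in edges if values[i] * values[j] < 0]

# A 1D toy grid: the zero level set lies between vertices 1 and 2.
values = [-0.8, -0.1, 0.3, 0.9]
edges = [(0, 1), (1, 2), (2, 3)]
print(crossed_edges(values, edges))  # [(1, 2)]
```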

_projects/lamaml.md

Lines changed: 4 additions & 3 deletions
@@ -1,5 +1,5 @@
 ---
-title: La-MAML
+title: "La-MAML: Look-ahead Meta Learning for Continual Learning"
 
 notitle: false
 
@@ -16,10 +16,11 @@ collaborators:
 
 
 layout: project
-image: "https://mila.quebec/wp-content/uploads/2020/11/lamaml_jpg.gif"
+image: /img/papers/lamaml.png
 link: https://mila.quebec/en/article/la-maml-look-ahead-meta-learning-for-continual-learning/
 last-updated: 2020-11-19
 ---
 
-## La-MAML
+## La-MAML: Look-ahead Meta Learning for Continual Learning
 
+The continual learning problem involves training models with limited capacity to perform well on an unknown number of sequentially arriving tasks. While meta-learning shows great potential for reducing interference between old and new tasks, current training procedures tend to be either slow or offline, and sensitive to many hyper-parameters. In this work, we propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online continual learning, aided by a small episodic memory. Our proposed modulation of per-parameter learning rates in the meta-learning update allows us to draw connections to prior work on hypergradients and meta-descent. This provides a more flexible and efficient way to mitigate catastrophic forgetting compared to conventional prior-based methods. La-MAML achieves performance superior to other replay-based, prior-based and meta-learning based approaches for continual learning on real-world visual classification benchmarks.
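The per-parameter learning-rate modulation mentioned above can be sketched minimally (illustrative numbers only, not the La-MAML algorithm itself):

```python
# Minimal sketch of per-parameter learning rates: instead of one scalar step
# size, each parameter carries its own (in La-MAML, learnable) rate, so updates
# can be damped where they would interfere with previously learned tasks.
def sgd_step(params, grads, lrs):
    """One update applying an individual learning rate to each parameter."""
    return [p - lr * g for p, lr, g in zip(params, lrs, grads)]

params = [1.0, 1.0]
grads = [0.5, 0.5]
lrs = [0.1, 0.0]  # a zero rate freezes the second parameter entirely
print(sgd_step(params, grads, lrs))  # first parameter moves, second stays 1.0
```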

img/papers/concept-graphs.png

759 KB

img/papers/ctrl-sim.png

76.9 KB
