OpenSUN3D

3rd Workshop on
Open-Vocabulary 3D Scene Understanding

in conjunction with ECCV 2024 in Milan, Italy.

Sunday, September 29, 14:00 - 17:30 - Room: Amber 4

Introduction

The ability to perceive, understand, and interact with arbitrary 3D environments is a long-standing research goal with applications in AR/VR, healthcare, robotics, and beyond. Current 3D scene understanding models are largely limited to low-level recognition tasks such as object detection or semantic segmentation, and do not generalize well beyond a pre-defined set of training labels. More recently, large vision-language models (VLMs) such as CLIP, trained solely on internet-scale image-language pairs, have demonstrated impressive capabilities. Initial works have shown that these models have the potential to extend 3D scene understanding not only to open-set recognition, but also to additional applications such as affordances, materials, activities, and properties of unseen environments. The goal of this workshop is to bring these efforts together and to discuss and establish clear task definitions, evaluation metrics, and benchmark datasets.
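To make the open-vocabulary mechanism concrete, here is a minimal sketch of zero-shot recognition with CLIP's public reference implementation (https://github.com/openai/CLIP). It is illustrative only, not a method from the workshop: the image path and label list are placeholder assumptions, and in an open-vocabulary 3D pipeline such image-text scores would typically be computed on rendered views or crops of a scene and aggregated onto 3D geometry.

```python
# Minimal open-vocabulary recognition sketch with CLIP.
# Assumes `pip install git+https://github.com/openai/CLIP.git` and a local
# image "scene_crop.png" (hypothetical placeholder, e.g. a crop of a scene view).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Arbitrary, user-defined vocabulary -- no fixed set of training labels.
labels = ["a chair", "a sofa", "a potted plant", "something soft to sit on"]

image = preprocess(Image.open("scene_crop.png")).unsqueeze(0).to(device)
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between the image and each candidate phrase,
    # turned into a distribution over the open vocabulary.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2f}")
```

Because the label set is just a list of strings, it can include affordances or properties ("something soft to sit on") as easily as object categories, which is precisely what makes such models attractive for open-vocabulary 3D scene understanding.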

Schedule

14:00 - 14:15 Welcome & Introduction
14:15 - 14:45 Keynote 1 Tim Meinhardt (NVIDIA)
14:45 - 15:15 Keynote 2 Or Litany (Technion)
15:15 - 15:30 Spotlight Mihai Dusmanu (Microsoft)
15:30 - 16:30 Poster Session & Coffee Break
16:30 - 17:00 Keynote 3 Alex Bewley (Google)
17:00 - 17:25 Keynote 4 Krishna Murthy (Meta)
17:25 - 17:30 Concluding Remarks

Keynote Speakers

Tim Meinhardt is a research scientist in the NVIDIA Dynamic Vision and Learning (DVL) group. His work focuses on video and scene understanding, specifically multi-object tracking and video segmentation. His academic journey started in 2009 with a B.Sc. in Physics at the Ludwig Maximilian University of Munich (LMU). After spending a semester abroad at Boğaziçi University in Istanbul and finishing his Bachelor's degree in 2013, he worked in Berlin and Munich as a software engineer in web development and robotics. He returned to academia in 2014 for an M.Sc. in Computer Science at LMU Munich. For his master's thesis, he joined the Computer Vision & Artificial Intelligence (CVAI) chair at the Technical University of Munich (TUM). Under the supervision of Prof. Dr. Laura Leal-Taixé, and as a member of her TUM DVL group, he continued his research as a Ph.D. candidate until 2023.

Or Litany is a Senior Research Scientist at NVIDIA and an Assistant Professor at the Technion, specializing in 3D computer vision and generative AI. He is honored to be a 2023 Azrieli Faculty Fellow and a Taub Fellow. Previously, he conducted postdoctoral research at Stanford University under Prof. Leonidas Guibas and at FAIR, hosted by Prof. Jitendra Malik. He received his PhD from Tel-Aviv University, advised by Prof. Alex Bronstein, and his B.Sc. in Physics and Mathematics from the Hebrew University.

Alex Bewley, a Senior Researcher at Google DeepMind, works at the intersection of computer vision and robotics, focusing on combining language, 3D scene understanding, and methods that leverage spatial or temporal structure for high-speed agile robotics. He has explored the use of large datasets in an open-vocabulary setting, notably through his work on Scene Graph ViT and Video OWL-ViT. He also co-authored the award-winning Open X-Embodiment dataset, contributing to the advancement of large-scale robotic learning. Alex obtained his PhD in robot vision from the Queensland University of Technology in Australia. His goal is to enable multi-purpose robots that can handle everyday tasks and adapt to new ones described in natural language.

Krishna Murthy Jatavallabhula is an AI research scientist at Meta. Previously, he was a postdoc at MIT CSAIL with Antonio Torralba and Josh Tenenbaum. Prior to that, he received his PhD from the Université de Montréal and Mila, advised by Liam Paull. His research focuses on building world models for robots and other physical agents that enable them to perceive, reason, and act just as humans do. His work spans the robotics, computer vision, and machine learning communities, and has been recognized with PhD fellowship awards from NVIDIA and Google and a best-paper award from IEEE RA-L.

Accepted Papers

The following papers are accepted for poster presentation during the workshop.

SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding
Baoxiong Jia, Yixin Chen, Huangyue Yu, Yan Wang, Xuesong Niu, Tengyu Liu, Qing Li, Siyuan Huang

Unifying 3D Vision-Language Understanding via Promptable Queries
Ziyu Zhu, Zhuofan Zhang, Xiaojian Ma, Xuesong Niu, Yixin Chen, Baoxiong Jia, Siyuan Huang, Qing Li

OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation
Zhening Huang, Xiaoyang Wu, Xi Chen, Hengshuang Zhao, Lei Zhu, Joan Lasenby

Task-oriented Sequential Grounding in 3D Scenes
Zhuofan Zhang, Ziyu Zhu, Pengxiang Li, Tengyu Liu, Xiaojian Ma, Yixin Chen, Baoxiong Jia, Siyuan Huang, Qing Li

Space3D-Bench: Spatial 3D Question Answering Benchmark
Emilia Szymanska, Mihai Dusmanu, Mahdi Rad, Marc Pollefeys

Call for Papers

Paper Track: We accept two types of submissions: (i) full 14-page papers presenting novel work, which will be published in the proceedings, and (ii) 4-page extended abstracts or 14-page papers of novel or previously published work, which will not be included in the proceedings. Full papers must use the official ECCV 2024 template. Extended abstracts are not subject to the ECCV rules and may use any template; however, so that they are not considered publications under double-submission policies, they should be 4 pages in the CVPR template format.

Organizers

This website is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
It borrows the source code of an existing website template. We would like to thank Utkarsh Sinha and Keunhong Park.