Deep learning models are usually developed and tested under the implicit assumption that the training and test data are drawn independently from an identical distribution (IID). Overlooking out-of-distribution (OOD) images can result in poor performance under unseen or adverse viewing conditions, which are common in real-world scenarios.

In this workshop, we are interested in discussing the performance of computer vision models on OOD images, which follow a different distribution than the training images. Given the current trend toward web-scale pretrained computer vision models, it is also of interest to better understand their performance in OOD or rare scenarios.

Our workshop will feature three competitions: OOD generalization on the OOD-CV dataset, open-set recognition, and generalized category discovery on the Semantic-Shift Benchmark.

Submission Information

We invite submissions of both long and short papers on the topic of out-of-distribution generalization in computer vision. Long papers are limited to 8 pages, with a submission deadline of July 28th, 2023 (AoE). Short papers are limited to 4 pages, with a submission deadline of August 28th, 2023 (AoE). Both should use the ICCV template. Only accepted long papers will be included in the ICCV 2023 proceedings. Both accepted long and short papers will be presented as either orals or posters. At least one author of each accepted submission must present the paper at the workshop. Topics include but are not limited to:
  • Discussion of OOD generalization in the context of internet scale pretrained models
  • Improving generalization of computer vision systems in OOD scenarios
  • Research at the intersection of biological and machine vision
  • Generative causal models for image analysis
  • Domain generalization
  • Novel architectures with robustness to occlusion, viewpoint and other real-world domain shifts
  • Domain adaptation techniques for robust vision systems in the real world
  • Datasets for evaluating model robustness
Please see here for submission information.

Challenge Information

This workshop will feature two challenges; please see here for more information.

Each challenge will have two leaderboards: one limits the use of pretrained models to self-supervised models pretrained on ImageNet-1k, while the other allows any self-supervised pretrained model.

The top three teams are required to open-source their code and models after the competition to ensure reproducibility.

Workshop Program

Please join us via this Zoom meeting (passcode: 5t7fUumn).

Tuesday, October 3rd, Time Zone UTC+2, Room S01
9:00 Opening
9:10 Invited talk: Xiaojuan Qi 30mins
9:40 Invited talk: Florian Tramer 30mins
10:10 Invited talk: Wieland Brendel 30mins
10:40 Break
11:00 Oral Presentation #1 LORD: Leveraging Open-Set Recognition with Unknown Data
11:10 Oral Presentation #2 Confusing Large Models by Confusing Small Models
11:20 Oral Presentation #3 Raising the Bar on the Evaluation of Out-of-Distribution Detection
11:30 Poster Session — all poster papers have a 2-minute presentation slot; remote presentations are available. The paper list and presentation order are released [here].
12:30 Lunch Break
13:30 Invited talk: Mario Fritz 30mins
14:00 Invited talk: Kate Saenko 30mins
14:30 Invited talk: Alan Yuille 30mins
15:00 Coffee Break
15:30 Challenge Introduction 30mins
16:00 Challenge Winners Presentation 30mins
16:30 Oral Presentation #4 Intriguing Properties of Generative Classifiers
16:40 Oral Presentation #5 Language Plays a Pivotal Role in the Object-Attribute Compositional Generalization of CLIP
16:50 Oral Presentation #6 Group-Balanced Mixup for Out of Distribution Generalization
17:00 Poster Session All Papers

Organizing Committee

Program Committee

Kibok Lee, Yulong Cao, Shuo Chen, Lifeng Huang, Haoran Wang, Yuxiang Lai, Angtian Wang, Salah Ghamizi, Xin Wen, Xiaoding Yuan, Pengliang Ji, Zexin He, Umar Khalid, Jiahui Liu, Guofeng Zhang, Zihao Xiao, Jike Zhong, Junfei Xiao, Alexander Robey, Wei Hao, Junbo Li, Ziyun Li
Please contact the organizers if you have any questions.