Challenge Instructions

Overview: Our workshop features two challenges: OOD-CV and SSB, both hosted on CodaLab. For each challenge track (unless noted otherwise) we maintain two separate leaderboards: one for models pretrained only on ImageNet-1k, and one for self-supervised pretrained models (models may be pretrained on any dataset, but only with self-supervised learning). The challenge tracks are described below.

OOD-CV
The OOD-CV challenge focuses on out-of-distribution generalization in computer vision, where the distribution shift arises from nuisance factors such as object pose, shape, texture, context, and weather. The data can be accessed from https://github.com/OOD-CV/OOD-CV or [here]. Additionally, a tool and a baseline for 3D pose estimation are available from https://github.com/OOD-CV/OOD-CV-Pose. You can learn more about the dataset [here]. If you wish to access the full dataset and replicate the experiments in our paper, please see [here].
Track-1: Object Classification: This track evaluates the performance of a classification model on out-of-distribution (OOD) data using top-1 accuracy. The ranking will be determined by the average top-1 accuracy across the OOD test sets.
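
As an illustration, here is a minimal sketch of the ranking metric, assuming per-dataset logits and labels are available as NumPy arrays. The split names in the comment are placeholders, not the official split names; the official evaluation is performed by the challenge server.

```python
import numpy as np

def top1_accuracy(logits, labels):
    """Top-1 accuracy: fraction of samples whose highest-scoring class is correct.

    logits: (N, C) array of class scores; labels: (N,) ground-truth class ids.
    """
    return float((logits.argmax(axis=1) == labels).mean())

# Hypothetical per-split predictions; the official splits come from the server.
# per_dataset = {"ood_split_a": (logits_a, labels_a), "ood_split_b": (logits_b, labels_b)}
# avg_top1 = np.mean([top1_accuracy(l, y) for l, y in per_dataset.values()])
```
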
Track-2: Object Detection: This track evaluates the performance of a detection model on OOD data using the mAP metric; the final ranking will be determined by the average mAP across the OOD test sets.
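
For intuition, below is a minimal sketch of average precision for a single class at an IoU threshold of 0.5; mAP averages this quantity over classes (and the final score averages mAP over the OOD test sets). The exact IoU thresholds and matching rules are defined by the challenge server, so treat this as illustrative only.

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def average_precision(dets, gts, iou_thr=0.5):
    """AP for one class. dets: list of (image_id, score, box);
    gts: dict mapping image_id -> list of ground-truth boxes."""
    n_gt = sum(len(boxes) for boxes in gts.values())
    matched = {img: [False] * len(boxes) for img, boxes in gts.items()}
    tp, fp = [], []
    # Greedily match detections to unmatched ground truth, highest score first.
    for img, score, box in sorted(dets, key=lambda d: -d[1]):
        ious = [iou(box, g) for g in gts.get(img, [])]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thr and not matched[img][best]:
            matched[img][best] = True
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(n_gt, 1)
    precision = tp / np.maximum(tp + fp, 1e-12)
    # Area under the precision-recall curve (101-point interpolation).
    ap = 0.0
    for t in np.linspace(0, 1, 101):
        mask = recall >= t
        ap += (precision[mask].max() if mask.any() else 0.0) / 101
    return ap
```
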
Track-3: 3D Pose Estimation: This track evaluates the performance of a 3D pose estimation model on OOD data. The ranking will be determined by the average Acc@π/6 on the OOD datasets.
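
Acc@π/6 is the fraction of predictions whose rotation error is below π/6. A minimal sketch, assuming predicted and ground-truth poses are given as 3x3 rotation matrices (the official scorer may differ in input format):

```python
import numpy as np

def rotation_error(R_pred, R_gt):
    """Geodesic distance (radians) between two 3x3 rotation matrices."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def acc_at_pi_over_6(preds, gts):
    """Fraction of pose predictions with rotation error below pi/6."""
    errs = [rotation_error(p, g) for p, g in zip(preds, gts)]
    return float(np.mean([e < np.pi / 6 for e in errs]))
```
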

SSB
The Semantic Shift Benchmark (SSB) challenge focuses on open-set recognition and the generalized category discovery (GCD) problem. The SSB benchmark can be accessed from [here] or [here]. For background on the generalized category discovery problem, readers can refer to the works [here] and [here].
Track-1: Open-Set Recognition: This track evaluates the ability of a model to identify open-set examples. This track has only one leaderboard; only models that are not pretrained on ImageNet-22k may be submitted. The ranking will be determined by the average of the FPR and AUROC scores. A baseline is provided [here].
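
A minimal sketch of the two scores using scikit-learn, assuming (as is common in open-set work) that FPR refers to the false positive rate at 95% true positive rate; check the challenge server for the exact definition. Higher AUROC and lower FPR are better.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def osr_metrics(scores, is_known):
    """scores: higher = more likely closed-set; is_known: 1 for closed-set, 0 for open-set."""
    auroc = float(roc_auc_score(is_known, scores))
    fpr, tpr, _ = roc_curve(is_known, scores)
    # FPR at the first operating point where TPR reaches 0.95.
    idx = int(np.searchsorted(tpr, 0.95))
    fpr_at_95 = float(fpr[min(idx, len(fpr) - 1)])
    return auroc, fpr_at_95
```
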
Track-2: Generalized Category Discovery: This track evaluates the ability of a model to discover and recognize novel concepts within an unlabeled dataset. This track has two leaderboards: one for models self-supervised pretrained on ImageNet-1k, and one for self-supervised models pretrained on any dataset. The ranking will be determined by the average clustering accuracy across the three FGVC datasets in the SSB benchmark. We provide a baseline for GCD [here].
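
Clustering accuracy in GCD is conventionally computed with an optimal one-to-one matching between predicted cluster ids and ground-truth class ids (the Hungarian algorithm). A minimal sketch of that convention, not the official scorer:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_pred, y_true):
    """y_pred: (N,) predicted cluster ids; y_true: (N,) class ids. Returns accuracy in [0, 1]."""
    n = int(max(y_pred.max(), y_true.max())) + 1
    # Contingency table: counts[i, j] = samples assigned to cluster i with true class j.
    counts = np.zeros((n, n), dtype=np.int64)
    for p, t in zip(y_pred, y_true):
        counts[p, t] += 1
    # Hungarian matching maximizes the total matched count.
    rows, cols = linear_sum_assignment(counts.max() - counts)
    return counts[rows, cols].sum() / y_pred.size
```
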


Please find the challenge report template [here], and submit the finished challenge report to this email.

Challenge Reports


Track | Links
OOD-CV: Classification | [ImageNet-1k only] [Any Self-Supervised]
OOD-CV: Detection | [ImageNet-1k only] [Any Self-Supervised]
OOD-CV: Pose Estimation | [ImageNet-1k only] [Any Self-Supervised]
SSB: Open-Set Recognition | [ImageNet-1k only]
SSB: Generalized Category Discovery | [ImageNet-1k only] [Any Self-Supervised]

Challenge Servers


Track | Links
OOD-CV: Classification | [ImageNet-1k only] [Any Self-Supervised]
OOD-CV: Detection | [ImageNet-1k only] [Any Self-Supervised]
OOD-CV: Pose Estimation | [ImageNet-1k only] [Any Self-Supervised]
SSB: Open-Set Recognition | [ImageNet-1k only]
SSB: Generalized Category Discovery | [ImageNet-1k only] [Any Self-Supervised]

Important Dates

Description | Date
Phase-1 starts | June 20th, 2023
Phase-2 ends | September 12th, 2023
Challenge report and code deadline | September 18th, 2023