Object detection is of significant value to the Computer Vision and Pattern Recognition communities, as it is one of the fundamental vision problems. In this workshop, we introduce two new benchmarks for the object detection task: Objects365 and CrowdHuman, both of which are designed and collected in the wild. The Objects365 benchmark targets large-scale detection with 365 object categories. There will be two tracks: a full track covering all 365 object categories on the 600K training images, and a tiny track addressing 65 challenging categories on a subset of the training images. CrowdHuman, on the other hand, targets the problem of human detection in crowds. We hope these two datasets will provide diverse and practical benchmarks to advance object detection research. This workshop can also serve as a platform to push the upper bound of object detection research.

[News] The workshop website is now online.


Important Dates

Challenge Launch Date April 16, 2019
Testing Data Release May 10, 2019
Result Submissions Deadline June 12, 2019
Workshop date June 17, 2019


Workshop Overview

The competition platform is provided by Biendata. Registration and submission will open soon.

Objects365 Challenge Track

Objects365 is a brand-new dataset, designed to spur object detection research with a focus on diverse objects in the wild. Objects365 has 365 object classes annotated on 638K images, with more than 10 million bounding boxes in total in the training set. The annotations thus cover common objects occurring across all kinds of scene categories. The Objects365 challenge comprises two tracks:

  • Full Track. The goal of the Full Track is to explore the upper-bound performance of object detection systems, given all 365 classes and 600K+ training images. 30K images are used for validation and another 100K images are used for testing. To evaluate detection performance, the COCO evaluation criteria (AP averaged over IoU thresholds from 0.5 to 0.95) will be adopted.
  • Tiny Track. The Tiny Track aims to lower the entry threshold, accelerate the speed of algorithm iteration, and study the long-tail category detection problem. 65 categories are selected from the Objects365 dataset, and contestants can train models using 10K training images.
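The COCO criterion mentioned above averages average precision (AP) over ten IoU thresholds rather than using a single 0.5 cutoff. The following is a minimal illustrative sketch of the two ingredients involved: box IoU and the averaging over thresholds. It is not the official evaluator (for submissions, the `pycocotools` `COCOeval` implementation is the reference); the helper names here are our own.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

# The ten COCO IoU thresholds: 0.50, 0.55, ..., 0.95.
COCO_THRESHOLDS = [0.50 + 0.05 * i for i in range(10)]

def coco_map(ap_at_threshold):
    """Average per-threshold AP values into the single COCO AP number.

    ap_at_threshold: dict mapping each IoU threshold to the AP computed
    with that threshold as the match criterion.
    """
    return sum(ap_at_threshold[t] for t in COCO_THRESHOLDS) / len(COCO_THRESHOLDS)
```

A detector that is strong at loose localization (IoU 0.5) but weak at tight localization (IoU 0.95) is therefore penalized, which rewards precise boxes rather than rough hits.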

CrowdHuman Challenge Track

The CrowdHuman dataset is large, richly annotated, and highly diverse. It contains 15,000, 4,370, and 5,000 images for training, validation, and testing, respectively. The training set contains a total of 340K human instances, averaging 22.6 persons per image, with various kinds of occlusions throughout the dataset. Each human instance is annotated with a head bounding box, a human visible-region bounding box, and a human full-body bounding box. We believe this dataset will serve as a solid baseline and help promote future research on human detection tasks, especially in crowded environments.
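The three-box annotation scheme described above can be read from the released annotation files, which are JSON-lines (`.odgt`) files with one record per image. The sketch below assumes the field names used in the released files (`gtboxes`, `fbox`, `vbox`, `hbox`, all in (x, y, w, h) form); treat these as assumptions and check them against your copy of the dataset.

```python
import json

def parse_odgt_line(line):
    """Parse one CrowdHuman-style annotation record (one JSON object per line).

    Returns (image_id, persons), where each person is a dict holding the
    full-body, visible-region, and head boxes in (x, y, w, h) form.
    Field names ("gtboxes", "fbox", "vbox", "hbox") are assumptions based
    on the released .odgt files.
    """
    record = json.loads(line)
    persons = []
    for gt in record.get("gtboxes", []):
        if gt.get("tag") != "person":
            continue  # skip ignore regions
        persons.append({
            "full": gt["fbox"],
            "visible": gt["vbox"],
            "head": gt["hbox"],
        })
    return record["ID"], persons

# A hand-made example record, not taken from the dataset itself.
sample = ('{"ID": "example", "gtboxes": ['
          '{"tag": "person", "fbox": [10, 10, 50, 120],'
          ' "vbox": [10, 10, 50, 80], "hbox": [25, 10, 20, 20]}]}')
image_id, persons = parse_odgt_line(sample)
```

Keeping the three boxes per instance lets evaluation target full-body, visible-region, or head detection independently, which is what makes the benchmark useful for studying occlusion.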


Organizers

Contact us at info@objects365.org.

Gang Yu

Megvii Technology

Shuai Shao

Megvii Technology

Jian Sun

Megvii Technology

Committee

Xiangyu Zhang

Megvii Technology

Zeming Li

Megvii Technology

Yichen Wei

Megvii Technology

Jifeng Dai

Microsoft

Shaoqing Ren

Momenta.ai

Junsong Yuan

SUNY at Buffalo

Gang Hua

Wormpex AI Research

Jie Tang

Tsinghua University

Tiejun Huang

Peking University


Sponsors

Megvii Technology
Beijing Academy of Artificial Intelligence