Object detection is of significant value to the Computer Vision and Pattern Recognition communities, as it is one of the fundamental vision problems. In this workshop, we introduce two new benchmarks for the object detection task: Objects365 and CrowdHuman, both designed and collected in the wild. The Objects365 benchmark aims to address large-scale detection with 365 object categories. There will be two tracks: a full track covering all 365 object categories on the 600K training images, and a tiny track addressing 100 challenging categories on a subset of the training images. CrowdHuman, on the other hand, targets the problem of human detection in crowds. We hope these two datasets provide diverse and practical benchmarks to advance object detection research, and that this workshop serves as a platform to push the upper bound of object detection.

[News] The workshop website is now online.


Important Dates

Challenge Launch Date April 16, 2019
Testing Data Release May 10, 2019
Result Submission Deadline June 12, 2019
Workshop date June 17, 2019


Schedule

June 17, 2019
Hyatt Regency E

We invite the top three teams of each track to share their experience from the competition.

13:30-13:50 Introduction to DIW 2019 Gang Yu, MEGVII
13:50-14:10 Outstanding team talk of DIW 2019-Objects365 track. [slides] Hengkai Guo, Bytedance AI Lab
14:10-14:30 Invited talk: Feature Selective Anchor Free Module for Single Shot Object Detection. [slides] Chenchen Zhu, CMU
14:30-14:50 Invited talk: Bounding Box Regression with Uncertainty for Accurate Object Detection. [slides] Yihui He, CMU
14:50-15:15 Outstanding team talk of DIW 2019-Objects365 track. [slides] Dongliang He, Baidu VIS
15:15-16:00 Coffee Break
16:00-16:40 Invited talk: Deformation Modeling in Convnets. [slides] Jifeng Dai, MSRA
16:40-17:05 Outstanding team talk of DIW 2019-CrowdHuman Track. [slides] Zequn Jie, Tencent AI Lab


Invited Speakers

Chenchen Zhu is a Ph.D. student in the Department of Electrical & Computer Engineering (ECE) at Carnegie Mellon University (CMU). He works with Prof. Marios Savvides at the CyLab Biometrics Center. His research interests mainly lie in computer vision and deep learning, with applications to general object detection and facial analysis.

Yihui He is a master's student at CMU whose interests focus on Computer Vision and Deep Learning. During his undergraduate studies he was fortunate to intern with Jian Sun (Megvii/Face++), Song Han (MIT), and Alan Yuille (JHU). He has a track record of contributions to efficient CNN inference. In particular, he designed channel pruning to effectively prune channels, and further proposed AMC to sample the design space of channel pruning via reinforcement learning, which greatly improved performance. He has served as a reviewer for ICCV'19, CVPR'19, ICLR'19, NIPS'18, TIP, and IJCV.

Jifeng Dai is a Senior Researcher in the Visual Computing Group of Microsoft Research Asia. He received his B.S. and Ph.D. degrees with honors from Tsinghua University in 2009 and 2014, respectively, and was a visiting student at the University of California, Los Angeles (UCLA) from 2012 to 2013. He is the author of R-FCN and Deformable ConvNets, and his Google Scholar citations exceed 4,000. His team won the COCO competition championship in 2015 and 2016 and took third place in 2017. He served as a Senior PC Member for AAAI 2018.


Workshop Overview

The competition platform is provided by Biendata. Registration and submission will open shortly.

DIW 2019-Objects365 Challenge Track

Objects365 is a large-scale dataset designed to spur object detection research with a focus on diverse objects in the wild. It has 365 object classes annotated on 638K images, with more than 10 million bounding boxes in the training set in total, so the annotations cover common objects occurring in all kinds of scene categories. The DIW 2019-Objects365 challenge has two tracks:

  • Full Track. The goal of the Full Track is to explore the upper-bound performance of object detection systems, given all 365 classes and 600K+ training images. 30K images are used for validation and another 100K images are used for testing. Performance is measured with the COCO evaluation criterion (mAP averaged over IoU thresholds from 0.5 to 0.95); a sketch of this evaluation is given after this list.
  • Tiny Track. The Tiny Track aims to lower the entry barrier, speed up algorithm iteration, and study the long-tail category detection problem. From the Objects365 dataset, 65 categories are selected, and contestants can train models using 10K training images.
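
For reference, below is a minimal sketch of how detections could be scored with the COCO-style metric using pycocotools. The file names are placeholders rather than official challenge files, and the ground truth is assumed to be in COCO-format JSON.

    # Hypothetical evaluation sketch using pycocotools; file names are placeholders.
    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    coco_gt = COCO("objects365_val.json")              # ground truth in COCO-format JSON (assumed)
    coco_dt = coco_gt.loadRes("my_detections.json")    # list of {"image_id", "category_id", "bbox", "score"}

    coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()                              # reports AP averaged over IoU 0.50:0.95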

DIW 2019-CrowdHuman Challenge Track

The CrowdHuman dataset is large, richly annotated, and highly diverse. It contains 15,000, 4,370, and 5,000 images for training, validation, and testing, respectively. The training set has a total of 340K human instances, with an average of 22.6 persons per image and various kinds of occlusions. Each human instance is annotated with a head bounding box, a human visible-region bounding box, and a human full-body bounding box. We believe this dataset will serve as a solid baseline and help promote future research on human detection, especially in crowded environments.
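
As a reference for working with these annotations, below is a minimal sketch that parses CrowdHuman-style annotations, assuming the released line-by-line JSON (.odgt) format; the field and file names follow the public dataset description and should be verified against the actual release.

    # Hypothetical loader for CrowdHuman annotations; field names (ID, gtboxes,
    # tag, fbox, vbox, hbox) follow the public .odgt description, and boxes are
    # assumed to be [x, y, w, h].
    import json

    def load_odgt(path):
        records = []
        with open(path) as f:
            for line in f:
                rec = json.loads(line)                    # one JSON object per image
                persons = []
                for gt in rec.get("gtboxes", []):
                    if gt.get("tag") != "person":
                        continue                          # skip ignored/mask regions
                    persons.append({
                        "fbox": gt["fbox"],               # full-body bounding box
                        "vbox": gt["vbox"],               # visible-region bounding box
                        "hbox": gt["hbox"],               # head bounding box
                    })
                records.append({"id": rec["ID"], "persons": persons})
        return records

    # Example usage (file name is a placeholder):
    # annotations = load_odgt("annotation_train.odgt")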


Leader Board

Please send us an email to add or modify the content.

DIW 2019-Objects365 Full Track

DIW 2019-Objects365 Tiny Track

DIW 2019-CrowdHuman Track


Organizers

Contact us at info@objects365.org.

Gang Yu

Megvii Technology

Shuai Shao

Megvii Technology

Jian Sun

Megvii Technology

Committee

Xiangyu Zhang

Megvii Technology

Zeming Li

Megvii Technology

Yichen Wei

Megvii Technology

Jifeng Dai

Microsoft

Shaoqing Ren

Momenta.ai

Junsong Yuan

SUNY at Buffalo

Gang Hua

Wormpex AI Research

Jie Tang

Tsinghua University

Tiejun Huang

Peking University


Sponsors

Megvii Technology
Beijing Academy of Artificial Intelligence