Object detection is of significant value to the Computer Vision and Pattern Recognition communities, as it is one of the fundamental vision problems. In this workshop, we introduce two new benchmarks for the object detection task: Objects365 and CrowdHuman, both designed and collected in the wild. The Objects365 benchmark targets large-scale detection with 365 object categories. There are two tracks: a full track covering all 365 object categories on the 600K training images, and a tiny track addressing 100 challenging categories on a subset of the training images. CrowdHuman, on the other hand, targets the problem of human detection in crowds. We hope these two datasets provide diverse and practical benchmarks to advance object detection research, and that this workshop serves as a platform to push the upper bound of object detection research.
[News] The workshop website is now online.
Challenge Launch Date | |
Testing Data Release | |
Result Submission Deadline | |
Workshop Date | |
We invite the top three teams of each track to share their experience from the competition.
13:30-13:50 | Introduction for DIW2019 | Gang Yu, MEGVII |
13:50-14:10 | Outstanding team talk of DIW 2019-Objects365 Track. [slides] | Hengkai Guo, Bytedance AI Lab |
14:10-14:30 | Invited talk: Feature Selective Anchor Free Module for Single Shot Object Detection. [slides] | Chenchen Zhu, CMU |
14:30-14:50 | Invited talk: Bounding Box Regression with Uncertainty for Accurate Object Detection. [slides] | Yihui He, CMU |
14:50-15:15 | Outstanding team talk of DIW 2019-Objects365 Track. [slides] | Dongliang He, Baidu VIS |
15:15-16:00 | Coffee Break | |
16:00-16:40 | Invited talk: Deformation Modeling in Convnets. [slides] | Jifeng Dai, MSRA |
16:40-17:05 | Outstanding team talk of DIW 2019-CrowdHuman Track. [slides] | Zequn Jie, Tencent AI Lab |
Chenchen Zhu is a Ph.D. student in the Department of Electrical & Computer Engineering (ECE) at Carnegie Mellon University (CMU). He works with Prof. Marios Savvides at the CyLab Biometrics Center. His research interests lie mainly in computer vision and deep learning, with applications in general object detection and facial analysis.
Yihui He is a master's student at CMU, with research interests in Computer Vision and Deep Learning. During his undergraduate study he was fortunate to intern with Jian Sun (Megvii/Face++), Song Han (MIT), and Alan Yuille (JHU). He has a track record of contributing to efficient CNN inference. In particular, he designed channel pruning to effectively prune channels, and further proposed AMC to sample the design space of channel pruning via reinforcement learning, which greatly improved performance. He has served as a reviewer for ICCV'19, CVPR'19, ICLR'19, NIPS'18, TIP, and IJCV.
Jifeng Dai is a Senior Researcher in the Visual Computing Group of Microsoft Research Asia. He received his B.S. and Ph.D. degrees from Tsinghua University with honors in 2009 and 2014, respectively. He was also a visiting student at the University of California, Los Angeles (UCLA) from 2012 to 2013. He is the author of R-FCN and Deformable ConvNets, and his Google Scholar citations exceed 4,000. His team won the COCO competition championship in 2015 and 2016 and took third place in 2017. He also served as a Senior PC Member for AAAI 2018.
Objects365 is a large-scale dataset designed to spur object detection research, with a focus on diverse objects in the wild. It has 365 object classes annotated on 638K images, with more than 10 million bounding boxes in the training set, so the annotations cover common objects occurring in all kinds of scene categories. The DIW 2019-Objects365 challenge has two tracks: the full track with all 365 object categories, and the tiny track focusing on 100 challenging categories. A minimal annotation-loading sketch is given below.
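As a rough illustration only, the sketch below assumes the Objects365 annotations are distributed in a COCO-style JSON file (the file path is hypothetical); it simply loads the annotations and reports category, image, and box counts.

```python
# Minimal sketch, assuming a COCO-style annotation JSON for Objects365;
# the file path "annotations/objects365_train.json" is hypothetical.
from pycocotools.coco import COCO

coco = COCO("annotations/objects365_train.json")
print("categories:", len(coco.getCatIds()))  # expected: 365
print("images:", len(coco.getImgIds()))      # expected: ~638K for training
print("boxes:", len(coco.getAnnIds()))       # expected: >10M for training
```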
The CrowdHuman dataset is large, richly annotated, and highly diverse. It contains 15,000, 4,370, and 5,000 images for training, validation, and testing, respectively. The training set has a total of 340K human instances, an average of 22.6 persons per image, and various kinds of occlusions. Each human instance is annotated with a head bounding-box, a human visible-region bounding-box, and a human full-body bounding-box. We believe this dataset will serve as a solid baseline and help promote future research on human detection, especially in crowded scenes.
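For illustration, here is a minimal parsing sketch assuming the annotations are shipped as one JSON record per line, with per-instance head, visible-region, and full-body boxes under the keys "hbox", "vbox", and "fbox"; the file name and key names are assumptions, so adapt them to the actual release.

```python
# Minimal sketch, assuming one JSON record per line with per-instance
# "hbox" (head), "vbox" (visible region), and "fbox" (full body) boxes.
# The file name "annotation_train.odgt" and key names are assumptions.
import json

def load_crowdhuman(path):
    """Yield (image_id, instance list) for each annotation line."""
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            yield record["ID"], record.get("gtboxes", [])

for image_id, instances in load_crowdhuman("annotation_train.odgt"):
    for inst in instances:
        head = inst.get("hbox")     # head bounding-box
        visible = inst.get("vbox")  # visible-region bounding-box
        full = inst.get("fbox")     # full-body bounding-box
        # each box is assumed to be [x, y, w, h] in pixel coordinates
```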
Please send us an email to add or modify the content.
DIW 2019-Objects365 Full Track
Contact us at info@objects365.org.
Megvii Technology
Megvii Technology
Megvii Technology
Megvii Technology
Megvii Technology
Megvii Technology
Microsoft
Momenta.ai
SUNY at Buffalo
Wormpex AI Research
Tsinghua University
Peking University