Introduction

Weakly supervised learning refers to a broad range of studies that address challenging pattern recognition tasks by learning from weak or imperfect supervision. Supervised learning methods, including Deep Convolutional Neural Networks (DCNNs), have significantly improved performance on many computer vision problems, thanks to the rise of large-scale annotated datasets and advances in computing hardware. However, these supervised approaches are notoriously "data hungry", which often makes them impractical in real-world industrial applications. In practice, it is frequently impossible to acquire a sufficient amount of perfect annotations (e.g., object bounding boxes and pixel-wise masks) to train reliable models. To address this problem, many weakly supervised learning approaches have been developed that deviate from the traditional supervised learning paradigm and train DCNNs from imperfect data, for instance by proposing new loss functions or novel training schemes. Weakly supervised learning is a popular research direction in the computer vision and machine learning communities, and the many works devoted to related topics have led to rapid growth of publications in top-tier conferences and journals such as CVPR, ICCV, ECCV, T-IP, and T-PAMI. We organize this workshop to investigate current ways of building industry-level AI systems that rely on learning from imperfect data, and we hope it will attract attention and discussion from both industry and academia.

Call for Papers


The workshop addresses learning from data in a weakly supervised manner and covers, but is not limited to, the following topics:

  • Weakly supervised learning algorithms
  • One-/few-shot learning for computer vision
  • Learning from noisy web data
  • Weakly supervised learning for medical images
  • Real-world image applications, e.g., object semantic segmentation/detection/localization, scene parsing, etc.
  • Real-world video applications, e.g., action recognition, event detection, object tracking, video object detection/segmentation, etc.
  • New datasets and metrics for evaluating the benefit of weakly supervised approaches on specific vision problems

Format: Papers are limited to 8 pages, including figures and tables, in the CVPR style. Additional pages containing only cited references are allowed.
Location: Long Beach, United States
Submission Site: https://cmt3.research.microsoft.com/LID2019/Submission/Index
Latex/Word Templates: http://cvpr2019.thecvf.com/files/cvpr2019AuthorKit.zip
*Note: Papers should be prepared using the blind-submission, review-formatted template.

Challenge

We will organize the first Learning from Imperfect Data (LID) challenge on object semantic segmentation and scene parsing, which includes two competition tracks (challenge deadline: June 1, 2019):

Track 1: Object semantic segmentation with image-level supervision

In this track, only image-level annotations are provided for supervision, and the target is to perform pixel-level classification. The dataset is built upon the image detection track of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). We provide pixel-level annotations of 15K images (validation/testing: 5K/10K) from 200 basic-level categories for evaluation. To the best of our knowledge, this is the most diverse dataset for evaluating semantic segmentation in the weakly supervised setting.
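For concreteness, the sketch below illustrates one common way this setting is approached in the literature: a classifier is trained from image-level labels alone, and its class activation maps (CAMs) are thresholded into pseudo pixel labels. It is only an illustrative sketch, not an official baseline of the challenge; the ResNet-50 backbone, the 0.3 threshold, and the 255 ignore value are assumptions made for the example.

```python
# Minimal sketch of a common image-level-supervision baseline (illustrative only,
# not an official challenge baseline): train a classifier with image-level labels,
# then read pseudo pixel labels off its class activation maps (CAMs).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

NUM_CLASSES = 200        # basic-level categories in Track 1
CAM_THRESHOLD = 0.3      # assumed foreground threshold (hypothetical)
IGNORE_LABEL = 255       # pixels left unlabeled in the pseudo mask

class CAMClassifier(nn.Module):
    """ResNet-50 trunk + 1x1 conv head, so per-class activation maps fall out directly."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        self.trunk = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
        self.head = nn.Conv2d(2048, num_classes, kernel_size=1)

    def forward(self, x):
        feat = self.trunk(x)                               # (B, 2048, h, w)
        cam = self.head(feat)                              # (B, C, h, w) activation maps
        logits = F.adaptive_avg_pool2d(cam, 1).flatten(1)  # (B, C) image-level logits
        return logits, cam

def pseudo_masks(cam, image_labels, size):
    """Convert CAMs into hard pseudo masks, keeping only classes present in the image."""
    cam = F.interpolate(cam, size=size, mode="bilinear", align_corners=False)
    cam = F.relu(cam)
    cam = cam / cam.flatten(2).max(dim=2)[0].clamp(min=1e-5)[..., None, None]
    cam = cam * image_labels[..., None, None]              # zero out absent classes
    score, mask = cam.max(dim=1)                           # best class per pixel
    mask[score < CAM_THRESHOLD] = IGNORE_LABEL             # low-confidence pixels ignored
    return mask

# Training uses only the image-level labels, via a multi-label BCE loss.
model = CAMClassifier()
images = torch.randn(2, 3, 224, 224)
labels = torch.zeros(2, NUM_CLASSES)
labels[0, 3] = 1.0
labels[1, 17] = 1.0
logits, cam = model(images)
loss = F.binary_cross_entropy_with_logits(logits, labels)
masks = pseudo_masks(cam.detach(), labels, size=images.shape[-2:])
```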

Track 2: Scene parsing with point-based supervision

Beyond object regions, background categories such as wall, road, and sky need to be further specified for scene parsing, which is a much more challenging task than object semantic segmentation. Manually annotating pixel-level masks for this task is therefore more difficult and expensive. In this track, we instead provide a small number of labeled points, which are much easier to obtain, to guide the training process. The dataset is built upon the well-known ADE20K, which includes 20,210 training images from 150 categories. We provide point-based annotations on the training set, and evaluation is performed on the original validation (2K) and testing (2K) sets.
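For concreteness, the sketch below illustrates one common way point annotations are used during training: a partial cross-entropy loss that is computed only at the labeled points and ignores all other pixels. It is an illustrative sketch, not the official baseline of this track; the ignore index and the toy class indices are assumptions made for the example.

```python
# Minimal sketch of point-based supervision (illustrative only, not the official
# baseline of this track): partial cross-entropy evaluated only at annotated points.
import torch
import torch.nn.functional as F

NUM_CLASSES = 150    # ADE20K categories
IGNORE_INDEX = 255   # every pixel without a point annotation

def partial_cross_entropy(logits, point_labels):
    """logits: (B, C, H, W); point_labels: (B, H, W), IGNORE_INDEX almost everywhere."""
    return F.cross_entropy(logits, point_labels, ignore_index=IGNORE_INDEX)

# Toy example: two labeled points, everything else ignored by the loss.
logits = torch.randn(2, NUM_CLASSES, 64, 64, requires_grad=True)
points = torch.full((2, 64, 64), IGNORE_INDEX, dtype=torch.long)
points[0, 10, 12] = 3    # hypothetical class index for "wall"
points[1, 40, 33] = 21   # hypothetical class index for "road"
loss = partial_cross_entropy(logits, points)
loss.backward()
```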

Important Dates

Description Date
Paper Submission Deadline May 1, 2019
Notification to Authors May 5, 2019
Camera-Ready Deadline June 1, 2019
Challenge Deadline June 1, 2019

Workshop Schedule

Time Description
08:20-08:30 Opening remarks, welcome, and LID challenge summary
08:30-09:15 Invited talk 1: Andrea Vedaldi, Associate Professor, University of Oxford
09:15-10:00 Invited talk 2: Rogerio Feris, Research Manager, IBM
10:00-10:20 Coffee break
10:20-10:40 Invited talk 3: Jiashi Feng, Assistant Professor, National University of Singapore
10:40-11:00 Invited talk 4: Bolei Zhou, Assistant Professor, The Chinese University of Hong Kong
11:00-11:20 Invited talk 5: Hakan Bilen, Assistant Professor, University of Edinburgh
11:20-11:40 Invited talk 6: Yanping Huang, Software Engineer, Google Brain
11:40-12:00 Invited talk 7: Lu Jiang, Research Scientist, Google Cloud AI
12:00-14:00 Lunch
14:00-14:20 Invited talk 8: Bernardino Romera-Paredes, Research Scientist, Google DeepMind
14:20-14:40 Invited talk 9: Anurag Arnab, PhD student, University of Oxford
14:40-15:00 Coffee break
15:00-15:30 Oral talk 1: Winner of Track 1: Object semantic segmentation with image-level supervision
15:30-16:00 Oral talk 2: Winner of Track 2: Scene parsing with point-based supervision
16:00-16:15 Awards & Future Plans

Invited Speakers

Andrea Vedaldi
Associate Professor, University of Oxford
Rogerio Feris
Research Manager, IBM T.J. Watson Research Center
Jiashi Feng
Assistant Professor, National University of Singapore
Bolei Zhou
Assistant Professor, The Chinese University of Hong Kong
Yanping Huang
Software Engineer, Google Brain
Lu Jiang
Research Scientist, Google Cloud AI
Bernardino Romera-Paredes
Research Scientist, Google DeepMind
Hakan Bilen
Assistant Professor, University of Edinburgh
Anurag Arnab
PhD Student, University of Oxford

Organizers

Webmaster

Zheng Lin
Ting Liu

Contact

yunchao@illinois.edu, szhengcvpr@gmail.com