Bridging the Gap Between Computational Photography and Visual Recognition:
UG2+ Prize Challenge
CVPR 2019
$60K in prizes

The advantages of collecting images from outdoor camera platforms, like UAVs, surveillance cameras, and outdoor robots, are clear. For instance, man-portable UAV systems can be launched from safe positions to survey difficult or dangerous terrain, acquiring hours of video without putting human lives at risk. What is unclear is how to automate the interpretation of these images: a necessary measure in the face of millions of frames containing artifacts unique to the operation of the sensor and optics platform in outdoor, unconstrained, and usually visually degraded environments.

Continuing the success of the 1st UG2 Prize Challenge workshop held at CVPR 2018, UG2+ provides an integrated forum for researchers to review recent progress in handling various adverse visual conditions in real-world scenes in robust, effective, and task-oriented ways. Beyond human-vision-driven restoration, we pay particular attention to degradation models and the related inverse recovery processes that may benefit subsequent machine vision tasks. We embrace the most advanced deep learning systems, but remain open to classical physically grounded models, as well as any well-motivated combination of the two streams. The workshop will consist of four invited talks, peer-reviewed regular papers (oral and poster), and talks associated with winning prize challenge contributions.

Original high-quality contributions are solicited on the following topics:
  • Novel algorithms for robust object detection, segmentation or recognition on outdoor mobility platforms, such as UAVs, gliders, autonomous cars, outdoor robots, etc.
  • Novel algorithms for robust object detection and/or recognition in the presence of one or more real-world adverse conditions, such as haze, rain, snow, hail, dust, underwater environments, low illumination, low resolution, etc.
  • Models and theories for explaining, quantifying, and optimizing the mutual influence between low-level computational photography tasks (image reconstruction, restoration, or enhancement) and various high-level computer vision tasks.
  • Novel physically grounded and/or explanatory models of the underlying degradation and recovery processes for real-world images captured under complicated adverse visual conditions.
  • Novel evaluation methods and metrics for image restoration and enhancement algorithms, with a particular emphasis on no-reference metrics, since for most real outdoor images captured under adverse visual conditions it is hard to obtain a clean “ground truth” to compare against.
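As a toy illustration of what a no-reference metric looks like, the sketch below scores image sharpness as the variance of a simple 4-neighbour Laplacian response, computed from the degraded image alone with no clean reference. The function and example patches are hypothetical, not part of the challenge's evaluation protocol.

```python
# Toy no-reference sharpness score: variance of a 3x3 Laplacian response.
# Illustrative stand-in only -- not an official UG2+ challenge metric.

def laplacian_variance(img):
    """img: 2-D list of grayscale values. Higher score suggests a sharper image."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian: measures local intensity change.
            r = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                 - img[y][x - 1] - img[y][x + 1])
            responses.append(r)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A flat (blurry-looking) patch scores zero; a patch with a sharp edge scores higher.
flat = [[100] * 5 for _ in range(5)]
edge = [[0, 0, 0, 255, 255] for _ in range(5)]
```

No clean reference enters the computation, which is exactly the constraint such metrics must satisfy for real outdoor imagery.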
This prize challenge has two components that have been combined for a unified workshop at CVPR:
  1. Video object classification and detection from unconstrained mobility platforms:
    • Sponsor:
      • Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA)
    • Prizes:
      • $50,000
  2. Object Detection in Poor Visibility Environments:
    • Sponsors:
      • Kwai
      • Meitu Imaging & Vision Lab
      • NEC Laboratories America
      • Walmart
    • Prizes:
      • $10,000


Available Challenges

Can the application of image enhancement and restoration algorithms as a pre-processing step improve the interpretability of images for automatic visual recognition of scene content?

The UG2+ Challenge seeks to advance the analysis of "difficult" imagery by applying image restoration and enhancement algorithms to improve analysis performance. Participants are tasked with developing novel algorithms to improve the analysis of imagery captured under problematic conditions.

Video object classification and detection from unconstrained mobility platforms

Image restoration and enhancement algorithms that remove corruptions like blur, noise, and mis-focus, or manipulate images to gain resolution, change perspective, and compensate for lens distortion, are now commonplace in photo editing tools. Such operations are necessary to improve the quality of images for recognition purposes. But they must be compatible with the recognition process itself, and not adversely affect feature extraction or decision making.
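The point above, that enhancement should be judged by its effect on the downstream recognition process rather than by visual quality alone, can be sketched with a minimal pipeline. The contrast stretch and the threshold "detector" below are hypothetical stand-ins for real restoration and recognition models.

```python
# Sketch: enhancement as a pre-processing step, evaluated by the downstream
# task. The "detector" is a toy threshold rule, not a real recognition model.

def contrast_stretch(img, lo=0, hi=255):
    """Linearly rescale pixel values to span the full [lo, hi] range."""
    mn, mx = min(img), max(img)
    if mx == mn:
        return list(img)
    return [lo + (p - mn) * (hi - lo) // (mx - mn) for p in img]

def detect_bright_object(img, threshold=200):
    """Toy 'recognizer': reports whether any pixel exceeds the threshold."""
    return any(p > threshold for p in img)

# A low-contrast frame: the object (value 120) barely exceeds background (100).
frame = [100] * 20 + [120] * 5

detected_raw = detect_bright_object(frame)                         # False: too dim
detected_enhanced = detect_bright_object(contrast_stretch(frame))  # True
```

The same stretch applied before a detector with a different decision boundary could just as easily hurt performance, which is why the challenge evaluates restoration by task metrics rather than image quality alone.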

Sub-Challenges

  1. Object Detection Improvement on Video
  2. Object Classification Improvement on Video

Sponsor for this track:

Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA)

Object Detection in Poor Visibility Environments

While most current vision systems are designed to perform in environments where subjects are well observable without significant attenuation or alteration, a dependable vision system must reckon with the entire spectrum of complex, unconstrained, dynamic, and degraded outdoor environments. It is highly desirable to study to what extent, and in what sense, such challenging visual conditions can be coped with, with the goal of achieving robust visual sensing.

Sub-Challenges

  1. (Semi-)Supervised Object Detection in Haze Conditions
  2. (Semi-)Supervised Face Detection in Low Light Conditions
  3. Zero-Shot Object Detection with Raindrop Occlusions

Sponsors for this track:

Kwai, Meitu Imaging & Vision Lab, NEC Laboratories America, Walmart

Keynote speakers


Rama Chellappa holds a Minta Martin Professorship in the A. J. Clark School of Engineering and served as Chair of the Electrical and Computer Engineering Department from 2011 to 2018.

Rama Chellappa
University of Maryland

Lars Ericson is currently a Program Manager at the Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence, with a focus on biometrics, computer vision, sensors, and nanotechnology.

Lars Ericson
Program Manager, IARPA

Heesung Kwon is a Senior Researcher and the Image Analytics Team lead at the U.S. Army Research Laboratory (ARL).

Heesung Kwon
U.S. Army Research Laboratory

Manmohan Chandraker is an Assistant Professor in the CSE Department of the University of California, San Diego, and is the Computer Vision Lead at NEC Labs America.

Manmohan Chandraker
NEC Labs America

Important Dates

Challenge Registration

January 31 - April 1, 2019

Challenge Result Submission

May 1, 2019

Paper Submission

May 15, 2019

Challenge Winners Announcement

May 20, 2019

Notification of Paper Acceptance

May 30, 2019

Camera Ready

June 10, 2019

CVPR Workshop

June 16, 2019

Organizers

Walter J. Scheirer
University of Notre Dame
Zhangyang Wang
Texas A&M University
Jiaying Liu
Peking University
Wenqi Ren
Chinese Academy of Sciences
Wenhan Yang
National University of Singapore
Kevin Bowyer
University of Notre Dame
Thomas S. Huang
University of Illinois
Sreya Banerjee
University of Notre Dame
Rosaura G. VidalMata
University of Notre Dame
Ye Yuan
Texas A&M University