
Panoptic Segmentation

Overview

The COCO Panoptic Segmentation Task is designed to push the state of the art in scene segmentation. Panoptic segmentation addresses both stuff and thing classes, unifying the typically distinct semantic and instance segmentation tasks. The aim is to generate coherent scene segmentations that are rich and complete, an important step toward real-world vision systems such as those used in autonomous driving or augmented reality. For full details of the panoptic segmentation task, please see the panoptic evaluation page.

In a bit more detail: things are countable objects such as people, animals, and tools, while stuff classes are amorphous regions of similar texture or material such as grass, sky, and road. Previous COCO tasks addressed stuff and thing classes separately; see the stuff segmentation and instance segmentation tasks, respectively. To encourage the study of stuff and things in a unified framework, we introduce the COCO Panoptic Segmentation Task. The definition of 'panoptic' is "including everything visible in one view"; in our context, panoptic refers to a unified, global view of segmentation. The panoptic segmentation task involves assigning a semantic label and an instance id to each pixel of an image, which requires generating dense, coherent scene segmentations. The stuff annotations for this task come from the COCO-Stuff project described in this paper. For more details about the panoptic task, including evaluation metrics, please see the panoptic segmentation paper.
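To make the per-pixel labeling concrete, here is a minimal sketch of decoding an annotation in the COCO panoptic format, in which each pixel's segment id is stored in a PNG's RGB channels as id = R + 256*G + 256^2*B and a JSON segments_info list maps segment ids to categories. The file names and the annotation index below are placeholders, not paths from the dataset.

```python
import json

import numpy as np
from PIL import Image

# Placeholder file names; substitute your panoptic PNG and JSON file.
rgb = np.array(Image.open("example_panoptic.png"), dtype=np.uint32)

# Recover the per-pixel segment id from the RGB encoding.
segment_ids = rgb[..., 0] + 256 * rgb[..., 1] + 256 ** 2 * rgb[..., 2]

with open("panoptic_annotations.json") as f:
    ann = json.load(f)

# Each annotation entry carries a segments_info list mapping segment
# ids to semantic categories, so (category_id, segment id) per pixel
# is exactly the panoptic labeling described above. Index 0 stands in
# for whichever entry corresponds to the PNG loaded above.
for seg in ann["annotations"][0]["segments_info"]:
    mask = segment_ids == seg["id"]
    print(seg["category_id"], seg["id"], int(mask.sum()), "pixels")
```

The official panopticapi repository provides equivalent helpers (rgb2id / id2rgb) for this conversion.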

The panoptic segmentation task is part of the Joint COCO and LVIS Recognition Challenge Workshop at ECCV 2020. For further details about the joint workshop please visit the workshop page. Please also see the related COCO detection, keypoint, and stuff tasks.

The panoptic task uses all the annotated COCO images and includes the 80 thing categories from the detection task and a subset of the 91 stuff categories from the stuff task, with any overlaps manually resolved. The Panoptic Quality (PQ) metric is used for performance evaluation; for details, see the panoptic evaluation page.
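For reference, PQ matches predicted and ground-truth segments at an IoU threshold of 0.5 (which makes the matching unique) and is computed per category as PQ = (sum of matched IoUs) / (TP + 0.5*FP + 0.5*FN), then averaged over categories. The sketch below illustrates the single-category computation on segments represented as pixel-index sets; it omits the void-region and crowd handling of the official evaluation, and its names are illustrative rather than part of any API.

```python
def iou(a, b):
    """Intersection-over-union of two pixel-index sets."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def panoptic_quality(pred_segments, gt_segments):
    """PQ = (sum of IoU over matches) / (TP + 0.5*FP + 0.5*FN).

    A predicted segment matches a ground-truth segment iff their
    IoU exceeds 0.5; this threshold guarantees the match is unique.
    """
    matched_gt, tp, iou_sum = set(), 0, 0.0
    for p in pred_segments:
        for j, g in enumerate(gt_segments):
            if j in matched_gt:
                continue
            overlap = iou(p, g)
            if overlap > 0.5:
                matched_gt.add(j)
                tp += 1
                iou_sum += overlap
                break
    fp = len(pred_segments) - tp
    fn = len(gt_segments) - tp
    denom = tp + 0.5 * fp + 0.5 * fn
    return iou_sum / denom if denom else 0.0

# One true positive with IoU 3/5, one FP, one FN:
# PQ = 0.6 / (1 + 0.5 + 0.5) = 0.3
pred = [{0, 1, 2, 3}, {10, 11}]
gt = [{0, 1, 2, 4}, {20, 21}]
print(panoptic_quality(pred, gt))  # 0.3
```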



Challenge Guidelines

Participants must submit a technical report that includes a detailed ablation study of their submission via CMT. For the technical report, use the following ECCV-based template. The suggested report length is 2-7 pages; the reports will be made public. This report replaces the short text description that we requested previously. Only submissions accompanied by a report will be considered for awards and listed on the COCO leaderboard.

This year, each challenge track will have two awards: a best result award and a most innovative award. The most innovative award will be decided by the COCO award committee based on the method descriptions in the submitted technical reports. The committee will invite teams to present at the workshop based on the innovativeness of their submissions rather than on the best scores.