
Multi-Object Tracking

Overview

This task requires the evaluated algorithm to recover the trajectories of objects across video sequences. We provide 96 challenging sequences for this task. Each algorithm is evaluated based on the intersection over union (IoU) between predicted tracklets and the ground truth.
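To make the IoU criterion concrete, below is a minimal sketch of bounding-box IoU, assuming boxes are given as (x, y, width, height) in pixel coordinates; the actual evaluation toolkit may use a different box convention or matching procedure.

```python
def box_iou(box_a, box_b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah
    bx2, by2 = bx1 + bw, by1 + bh

    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0


if __name__ == "__main__":
    print(box_iou((10, 10, 50, 50), (30, 30, 50, 50)))  # ~0.22
```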

Given an input video sequence, multi-object tracking aims to recover the trajectories of objects in the video. The challenge provides 96 challenging sequences: 56 video sequences for training (24,201 frames in total), 7 sequences for validation (2,819 frames in total), and 33 sequences for testing (12,968 frames in total), all available on the download page. We manually annotate the bounding boxes of objects from different categories in each video frame.

In addition, we provide two kinds of useful annotations, i.e., the occlusion ratio and the truncation ratio. The occlusion ratio is defined as the fraction of an object that is occluded. The truncation ratio indicates the degree to which parts of an object appear outside the frame. If an object is not fully captured within a frame, we annotate the bounding box across the frame boundary and estimate the truncation ratio from the region outside the image. Note that a target is skipped during evaluation if its truncation ratio is larger than 50%; a sketch of this rule follows below. Annotations for the training and validation sets are publicly available.
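The following is a minimal sketch of the 50% truncation rule described above, assuming a hypothetical annotation record with `occlusion` and `truncation` fields in [0, 1]; the field names and record layout are illustrative, not the official annotation format.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Annotation:
    frame_id: int
    target_id: int
    bbox: Tuple[float, float, float, float]  # (x, y, w, h)
    category: int
    occlusion: float   # fraction of the object that is occluded
    truncation: float  # fraction of the object outside the frame


def keep_for_evaluation(annotations: List[Annotation]) -> List[Annotation]:
    """Drop targets whose truncation ratio exceeds 50%."""
    return [a for a in annotations if a.truncation <= 0.5]
```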



Challenge Guidelines

The multi-object tracking evaluation page lists detailed information on how submissions are scored. To limit overfitting while giving researchers more flexibility to test their algorithms, the test set is divided into two splits: test-challenge and test-dev.