Product Detection in Densely Packed Scenes


Challenge Overview

The world of retail takes the detection scenario into largely unexplored territory, with millions of possible facets and hundreds of heavily crowded objects per image. This challenge is based on the SKU-110K dataset, collected from Trax’s data of supermarket shelves, and pushes the limits of detection systems.

A typical image from the SKU-110K dataset, showing densely packed objects.

Dataset

The SKU-110K dataset contains 11,762 images of densely packed supermarket shelves, collected from thousands of stores around the world, including locations in the United States, Europe, and East Asia. The dataset can be downloaded from here or here and is provided solely for academic and non-commercial purposes.
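For orientation, the public release of SKU-110K ships its box annotations as plain CSV files. The sketch below loads one split with pandas; the column ordering is an assumption based on the dataset's public release, and the file path is illustrative, so verify both against your local copy.

```python
import pandas as pd

# Assumed column layout of the SKU-110K annotation CSVs (the files carry
# no header row); check this against your local copy of the dataset.
COLUMNS = ["image_name", "x1", "y1", "x2", "y2",
           "class", "image_width", "image_height"]

def load_annotations(csv_path):
    """Load one annotation split and group its boxes per image."""
    df = pd.read_csv(csv_path, names=COLUMNS)
    return {
        name: group[["x1", "y1", "x2", "y2"]].to_numpy()
        for name, group in df.groupby("image_name")
    }

boxes = load_annotations("annotations/annotations_train.csv")  # illustrative path
print(f"{len(boxes)} images, "
      f"{sum(len(b) for b in boxes.values()) / len(boxes):.1f} boxes per image on average")
```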

Comparison of related benchmarks. #Img.: number of images; #Obj./img.: average number of objects per image; #Cls.: number of object classes (more classes imply a harder detection problem due to greater appearance variation); #Cls./img.: average number of classes per image; Dense: objects are typically densely packed; Idnt.: images contain multiple identical objects or hard-to-separate object sub-regions; BB: bounding-box labels are available.

Challenge Info

This challenge includes a single track, in which participants are invited to develop and train their methods using the data in the SKU-110K dataset; methods will be tested on a yet-to-be-released test set. Challenge winners will receive prizes and may be invited to give a presentation at the workshop.

Procedure and Evaluation

All the data in the SKU-110K dataset may be used for training, including the validation and test sets. Methods will be evaluated on a new test set to be released later (see the dates on the main workshop page). The test set will be published without annotations. Detection results will be evaluated using the code in densely_packed_eval_2020-02-06.zip.
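The zip above is the authoritative scorer, so nothing here replaces it. Purely for intuition, the following sketch shows the core step of standard detection evaluation: detections, ranked by confidence, are greedily matched to as-yet-unmatched ground-truth boxes at an IoU threshold, yielding the true/false-positive flags from which precision, recall, and AP are computed. All names and the 0.5 threshold are illustrative.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one (4,) box and an (N, 4) array, all as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_box = (box[2] - box[0]) * (box[3] - box[1])
    area_all = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_box + area_all - inter)

def match_detections(dets, scores, gts, iou_thr=0.5):
    """Greedily match score-ranked detections to unmatched ground-truth boxes.

    Returns a boolean array flagging each detection as a true positive;
    precision/recall and AP follow from these flags in the usual way.
    """
    taken = np.zeros(len(gts), dtype=bool)
    tp = np.zeros(len(dets), dtype=bool)
    for i in np.argsort(-scores):
        if len(gts) == 0:
            break
        overlaps = iou(dets[i], gts)
        overlaps[taken] = 0.0  # each ground-truth box may be matched only once
        j = int(np.argmax(overlaps))
        if overlaps[j] >= iou_thr:
            taken[j] = True
            tp[i] = True
    return tp

dets = np.array([[0, 0, 10, 10], [20, 20, 30, 30]], dtype=float)
gts = np.array([[1, 1, 10, 10]], dtype=float)
print(match_detections(dets, np.array([0.9, 0.8]), gts))  # [ True False]
```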


Clarifications (posted on May 13)

Test-set images without annotations will be published by May 28. To participate in the challenge, please submit your results and preferred contact details to the email address below by June 4, 10:00 UTC+8. There is no need to register. The results file should have the same structure as the provided file "example_results.csv". Please use the train-set annotations and the provided code to verify that your results file is structured properly.
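The provided example_results.csv defines the exact expected format, so the sketch below is only a plausible shape: it mirrors the train-annotation layout with a detection confidence appended. The column order and the score column are assumptions to check against the provided file and evaluation code before submitting.

```python
import csv

def write_results(detections, out_path="results.csv"):
    """Write detections as CSV rows, one box per row.

    `detections` maps image_name -> list of (x1, y1, x2, y2, score) tuples.
    The column order here is modelled on the train-set annotations;
    example_results.csv is the authoritative reference.
    """
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        for image_name, boxes in detections.items():
            for x1, y1, x2, y2, score in boxes:
                writer.writerow([image_name, x1, y1, x2, y2, score])

write_results({"test_0.jpg": [(10, 20, 110, 220, 0.93)]})
```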

Results will be announced by email on June 6. Winners must provide a document presenting their method by June 13 to be eligible for the prize. Other participants are encouraged to present their methods as well. Please post your documents on arXiv and send us a link. Received methods will be published on the challenge website. Authors may also be invited to give a talk at the workshop (the details of the virtual conference are still unclear, but there will probably be a slot in the schedule that links to pre-recorded talks presented on the website).

Test data released (posted on May 27)

Test-set images without annotations can be found here. Please review the clarifications above. All emails will be answered, so you will know they were not missed.

Questions and Submissions

For questions about the challenge or to submit your results for evaluation please contact Ehud Barnea (ehudb at traxretail dot com).

Challenge results and winners

After evaluating all participants' methods on the new test set, we are proud to present the winners:

1st place: USTC-NELSLIP group from the University of Science and Technology of China. The group members include Jun Yu, Haonian Xie, Guochen Xie, Mengyan Li, and Qiang Ling (technical report).

2nd place: Artem Kozlov (technical report).

Summary of results
