Product Pricing


Challenge Overview

The world of retail takes image understanding into unexplored territory, with hundreds of crowded elements per image, such as products, price labels, shelves, and more. This challenge is based on Trax’s data of supermarket shelves and introduces the problem of product pricing: given product boxes, the price of each product should be determined by considering the nearby price labels. The challenge involves both reading price text and linking prices to the relevant products.

A typical image in our pricing dataset

Dataset

The TraxPricing dataset contains box and price annotations for 15063 products in shelf images from many supermarkets around the world. The dataset can be downloaded from here and is provided solely for academic and non-commercial purposes.

Challenge Info

This challenge includes a single track, in which participants are invited to develop and train their methods using the data in the TraxPricing dataset and be evaluated on a yet-to-be-released test set. Challenge winners will receive prizes and may be invited to give a presentation at the workshop.

Procedure and Evaluation

Methods will be evaluated on a new test set that will be published by the 24th of May. The test set will be published with the same annotations but without the prices. To participate in the challenge, please run your model on the test data and submit your results and preferred contact details to the email below by May 31. There is no need to register. The results file should have the same structure as the test annotations file, with an additional "confidence" column. The final score is the area under the precision-coverage curve, where a true positive is a product whose predicted price exactly matches the ground-truth price, and coverage at a given confidence threshold is the fraction of products (correct or not) whose confidence exceeds that threshold. A Python file with this calculation will be published soon (update: the calculation file is provided below).
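For a concrete sense of the metric, below is a minimal sketch of how such a precision-coverage score could be computed. It is not the official scoring code (use the provided eval_code_v1.zip for that); the exact-match comparison and the trapezoidal integration are assumptions.

```python
import numpy as np

def precision_coverage_auc(pred_prices, gt_prices, confidences):
    """Approximate area under the precision-coverage curve.

    A prediction counts as correct when it exactly matches the ground-truth
    price; coverage at a confidence threshold is the fraction of all products
    whose confidence exceeds that threshold. The official eval_code_v1.zip
    may integrate the curve differently.
    """
    pred_prices = np.asarray(pred_prices)
    gt_prices = np.asarray(gt_prices)
    confidences = np.asarray(confidences)

    # Rank predictions by decreasing confidence and mark exact matches.
    order = np.argsort(-confidences)
    correct = (pred_prices[order] == gt_prices[order]).astype(float)

    n = len(correct)
    coverage = np.arange(1, n + 1) / n                    # fraction of products kept
    precision = np.cumsum(correct) / np.arange(1, n + 1)  # precision at each cutoff

    # Trapezoidal integration of precision over coverage.
    return float(np.sum(np.diff(coverage) * (precision[1:] + precision[:-1]) / 2.0))

if __name__ == "__main__":
    # Toy example with three products (values are illustrative only).
    print(precision_coverage_auc([1.99, 2.49, 0.99], [1.99, 2.99, 0.99], [0.9, 0.4, 0.7]))
```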

Results will be announced by email on June 1. Winners must provide a document presenting their method by June 8 to be eligible for the reward. Other participants are encouraged to present their methods as well. Received methods will be published on the challenge website. Authors may also be invited to present a talk at the workshop.

Clarifications (April 25)

Allowed and not allowed:
  • It is allowed to add annotations for the price labels.
  • It is not allowed to add annotations for product detections or product-price linkage.
For example:
  • You may use publicly available models and train them on publicly available data in order to better detect or read price labels.
  • You may use publicly available paid services, such as cloud APIs provided by providers like Google or Amazon.
  • You may not use algorithms to detect additional products or to obtain additional product data.

Update (May 2)

Please use the evaluation code provided in eval_code_v1.zip to evaluate your results on the training data. The same calculation will be used to evaluate results on the test set. The provided test set will include a set of images and an annotations file with the coordinates of the product boxes, but without the price and confidence of each box.

The published annotations used for training are accurate in most cases. However, as in any dataset, there were still some noisy cases with incorrect annotations. To make the evaluation as accurate as possible, we further annotated the test set to ensure it is especially clean. All prices were annotated by several annotators, and the test set will only include prices that the majority of annotators agreed upon. The test set should therefore be as clean and accurate as possible.

Update - test set (May 24)

Test data is now available. Please run your method on the test set images, and use the product boxes in the annotations file to determine the price of each box. To submit your results, add the price to each row in the annotations file (in the price column) and send the file to the email below. All emails will be answered.
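As a rough illustration of the submission format, the following sketch fills the price and confidence columns of an annotations file using pandas. The file name, column names, and the predict_price helper are hypothetical placeholders; follow the structure of the released annotations file.

```python
import pandas as pd

def predict_price(image_name, box):
    """Hypothetical stand-in for your own model.

    It should return (predicted_price, confidence) for the product in the
    given box of the given image.
    """
    return 0.0, 0.0  # replace with a real prediction

# File and column names below are assumptions; keep the structure of the
# released test annotations file.
annotations = pd.read_csv("test_annotations.csv")

prices, confidences = [], []
for _, row in annotations.iterrows():
    box = (row["x1"], row["y1"], row["x2"], row["y2"])
    price, conf = predict_price(row["image_name"], box)
    prices.append(price)
    confidences.append(conf)

annotations["price"] = prices
annotations["confidence"] = confidences
annotations.to_csv("results.csv", index=False)
```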

Questions and Submissions

For questions about the challenge or to submit your results for evaluation, please contact Ehud Barnea (ehud.barnea at gmail dot com).

Challenge results and winners

After evaluating all participants' methods on the held-out test set, we are proud to present the winners:

1st place: USTC-NELSLIP group. Group members include Jun Yu, Liwen Zhang, Zeyu Cui, Haonian Xie, Zhong Zhang, Ye Yu, Wen Su, Fang Gao, and Feng Shuang. (technical report).

2nd place: Artem Kozlov (technical report).

4th place: Raghul Asokan (technical report).

Summary of results
