Grand Challenges

The following Grand Challenges are to be held in conjunction with ICME 2015.

Please refer to the individual Grand Challenge web pages for the call for papers and submission deadlines.



MSR-Bing Grand Challenge: Image Retrieval

Following the success of the 1st, 2nd, and 3rd MSR-Bing Image Retrieval Challenges (MSR-Bing IRC) at ACM Multimedia 2013/2014 and ICME 2014, Microsoft Research, in partnership with Bing, is pleased to announce the 4th MSR-Bing IRC at ICME 2015.

Do you have what it takes to build the best image retrieval system? Enter this Challenge to develop a system that scores images against search queries.

The topic of the Challenge is web image retrieval. Contestants are asked to develop systems that assess how effectively query terms describe images crawled from the web for image search purposes. A contesting system must produce a floating-point score for each image-query pair reflecting how well the query describes the given image, with higher scores indicating higher relevance. The dynamic range of the scores does not matter, so long as, for any query, sorting its associated images by their scores yields the best retrieval ranking of those images.
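
In code, the required contract could look like the following minimal Python sketch. The names score_pair and rank_images are hypothetical; only the input (a query-image pair) and the output (a floating-point score used for per-query sorting) come from the Challenge description.

    from typing import List, Tuple

    def score_pair(query: str, image_bytes: bytes) -> float:
        """Return a relevance score; higher means the query better
        describes the image. Replace the body with your own model."""
        raise NotImplementedError

    def rank_images(query: str, images: List[Tuple[str, bytes]]) -> List[str]:
        """Sort a query's candidate images by descending score. Only this
        per-query ordering matters, not the absolute score range."""
        scored = [(key, score_pair(query, img)) for key, img in images]
        scored.sort(key=lambda kv: kv[1], reverse=True)
        return [key for key, _ in scored]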

The reference dataset is available on the challenge website.

Evaluation Criteria

Each entry to the Challenge is ranked by its Discounted Cumulative Gain (DCG) measure against the test set. In the evaluation stage, you will be asked to download one compressed file (the evaluation set) containing two text files: one is a list of key-query pairs, and the other is a list of key-image pairs. You will run your algorithm to produce a relevance score for each pair in the first file; the corresponding image content, stored as Base64-encoded JPEG files, can be looked up by key in the second file.
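
As a concrete illustration, the following Python sketch reads the two text files and decodes the image content. It assumes tab-separated lines with the key in the first column (the exact file layout is specified on the challenge website), and the file names are hypothetical.

    import base64

    def load_pairs(path):
        """Read (key, query) rows from the key-query file (assumed
        tab-separated, one pair per line)."""
        with open(path, encoding="utf-8") as f:
            return [line.rstrip("\n").split("\t", 1) for line in f if line.strip()]

    def load_images(path):
        """Read (key, Base64-encoded JPEG) rows from the key-image file
        and decode each image to raw JPEG bytes."""
        images = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                if line.strip():
                    key, b64 = line.rstrip("\n").split("\t", 1)
                    images[key] = base64.b64decode(b64)
        return images

    pairs = load_pairs("key_query.tsv")    # hypothetical file names
    images = load_images("key_image.tsv")
    for key, query in pairs:
        jpeg_bytes = images[key]           # image content looked up by key
        # ... feed (query, jpeg_bytes) to your scoring system ...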

The evaluation set, which is encrypted, will be available for download two to three days before the challenge starts. A password for decrypting it will be delivered to all participants when the challenge starts.

Participants will have one full day (24 hours) to produce predictions for all query-image pairs in the evaluation set. Before the end of the challenge, participants must submit their results (a TSV file containing a list of triples: key, query, score) to a CMT system (to be announced at http://research.microsoft.com/irc/). The order of the triples in the file does not matter. Running prediction in parallel is encouraged.
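
A minimal sketch of producing the submission file, reusing the hypothetical score_pair, pairs, and images from the sketches above; multiprocessing.Pool is one way to parallelize prediction (this form assumes fork-style process start, e.g. on Linux, so that workers inherit the loaded data).

    import multiprocessing as mp

    def score_row(item):
        """Worker: score one (key, query) pair against its image."""
        key, query = item
        return key, query, score_pair(query, images[key])

    if __name__ == "__main__":
        with mp.Pool() as pool:
            results = pool.map(score_row, pairs)
        # One tab-separated (key, query, score) triple per line;
        # row order in the file does not matter.
        with open("run1.tsv", "w", encoding="utf-8") as out:
            for key, query, score in results:
                out.write(f"{key}\t{query}\t{score:.6f}\n")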

The evaluation set will be different from the one used last year, and the number of query-image pairs will be increased significantly. A trial set will be available about one week before the start of the challenge.

Submission Guidelines

Each team may submit up to three runs as three zipped text files, each file corresponding to the results of one run. The team must clearly designate one run as the “master” run, which will be used for the final ranking. The results for the other runs will also be returned to the teams for their reference.
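
Packaging could be as simple as the following sketch, one zip archive per run; the run file names are hypothetical, and how the master run is designated will be specified with the submission instructions.

    import zipfile

    # Hypothetical run file names; each run is zipped separately.
    for run in ("run1.tsv", "run2.tsv", "run3.tsv"):
        with zipfile.ZipFile(run.replace(".tsv", ".zip"), "w",
                             compression=zipfile.ZIP_DEFLATED) as zf:
            zf.write(run)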

Detailed information, including dates and the submission entrance, will be announced on the challenge website.

Schedule

Grand Challenge Coordinator

Xian-Sheng Hua (xshua@microsoft.com), Yuxiao Hu (yuxiao.hu@microsoft.com)