Image data for Challenge 1 can be retrieved by visiting the Notre Dame Computer Vision Research Lab website and selecting the "Data Sets" tab. Look for the IJCB 2017 Challenge set. You must follow the licensing instructions before proceeding with the download.
Masks and Sigsets for each data set:
To simplify analysis, participants' biometric matchers will be required to generate similarity scores (a larger value indicates greater similarity). If a participant's matcher natively generates a dissimilarity score, the scores should be negated or otherwise monotonically transformed so that the resulting value is a similarity measure.
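For matchers that natively produce distances, any strictly decreasing transform yields a valid similarity score. A minimal sketch follows; the function name and method labels are illustrative, not part of the protocol.

```python
import numpy as np

def distance_to_similarity(d, method="negate"):
    """Convert dissimilarity (distance) scores to similarity scores.

    Any strictly decreasing transform works; negation and a reciprocal
    mapping are two common choices.
    """
    d = np.asarray(d, dtype=float)
    if method == "negate":
        return -d                  # larger value => more similar
    if method == "reciprocal":
        return 1.0 / (1.0 + d)     # maps [0, inf) into (0, 1]
    raise ValueError(f"unknown method: {method}")
```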
Participants in the competition will be provided with target and query sigsets (lists of biometric signatures or samples) for each of the five verification experiments described below.
From the licensed data and the provided sigsets, participants are required to generate and submit five similarity matrices: one for FRGC version 2, Experiment 4; one each for the Good, Bad, and Ugly partitions of the GBU; and one for the still-image portion of the PaSC. Each similarity matrix shall have Nt rows and Nq columns, where Nt and Nq are the sizes of the target and query sigsets, respectively. The (i,j) entry of a similarity matrix is the similarity score generated by the algorithm when supplied target sigset entry i as a gallery sample and query sigset entry j as a probe sample.
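As a concrete illustration of that layout, the sketch below assembles an Nt x Nq matrix from a generic matcher; the matcher interface and names are assumptions for illustration, not the competition's submission format.

```python
import numpy as np

def build_similarity_matrix(targets, queries, match_fn):
    """Return an Nt x Nq matrix whose (i, j) entry is the similarity
    between target sigset entry i (gallery) and query sigset entry j (probe).

    `targets` and `queries` are lists of sample identifiers (e.g., file paths)
    and `match_fn(gallery_sample, probe_sample)` is the participant's matcher.
    """
    nt, nq = len(targets), len(queries)
    sim = np.empty((nt, nq), dtype=np.float32)
    for i, t in enumerate(targets):
        for j, q in enumerate(queries):
            sim[i, j] = match_fn(t, q)
    return sim
```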
Participants will also be required to supply the companion ROC curve data for each similarity matrix.
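One way to derive ROC data from a similarity matrix, assuming the provided mask files identify which (target, query) pairs are same-subject comparisons, is sketched below; the function and variable names are illustrative rather than a required format.

```python
import numpy as np
from sklearn.metrics import roc_curve

def roc_from_similarity(sim, match_mask):
    """Compute ROC points (false accept rate, verification rate) from a
    similarity matrix and a boolean mask of genuine (same-subject) pairs.

    `sim` and `match_mask` are both Nt x Nq arrays.
    """
    scores = sim.ravel()
    labels = match_mask.ravel().astype(int)  # 1 = genuine, 0 = impostor
    far, tar, thresholds = roc_curve(labels, scores)
    return far, tar, thresholds
```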
For additional guidelines about allowable training and normalization of scores, see the section below on the protocol.
Results on the experiments will be further divided into two categories: results obtained using the participant's own face detection and localization, and results obtained using the competition-supplied eye coordinates.
To support this division, participants are required to state whether their results were obtained using the competition-supplied eye coordinates or not.
The second category is included in recognition of the fact that face finding and localization in the video data is itself a hard problem, and our goal in organizing this competition is to encourage participation. Participants in the first category, performing their own detection and localization, will be invited to describe their process and, should they choose, to share their face localization metadata.
As has become common for competitions such as this, at least one paper will be written and submitted to IJCB summarizing the findings of the competition. The purpose of this summary paper is three-fold. First, it will describe the scope and aims of the competition to the broader community. Second, it will provide, in one place, a record of how different approaches associated with different participants performed. Third, it will provide an opportunity for the organizers to report some analysis of these results across the various participants.
Performance across algorithms will be summarized in terms of ROC curves as well as a core performance value on those curves, namely the verification rate at a fixed false accept rate of 0.001. To allow comparison between algorithms and human accuracy, the area under the ROC curve (AUC) will also be computed. All reported performance will be computed from the submitted similarity matrices.
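As a rough illustration of those two summary numbers, the sketch below interpolates the verification rate at a false accept rate of 0.001 and integrates the ROC curve with the trapezoidal rule; the organizers' scoring code may handle interpolation and ties differently.

```python
import numpy as np

def verification_rate_at_far(far, tar, target_far=0.001):
    """Interpolate the verification rate (TAR) at a fixed false accept rate."""
    return float(np.interp(target_far, far, tar))

def roc_auc(far, tar):
    """Area under the ROC curve via the trapezoidal rule."""
    return float(np.trapz(tar, far))
```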
Beyond the summary report, it is expected that many participants will write up their own efforts and submit them for publication, hopefully to IJCB. Depending on the pace of the competition and when participant results become available, the competition organizers and the organizers of IJCB will consider a special session or even a workshop organized around the competition and its participants.
All results can be submitted to Notre Dame via email to walter.scheirer@nd.edu.
The competition will follow the GBU and PaSC protocols, which in particular require that the similarity score s(q,t) returned by an algorithm for query image q and target image t may not in any way change or be influenced by the other images in the target and query sets. The protocol therefore requires that training, as well as steps such as cohort normalization, use a disjoint set of images. Here disjoint means that the training or cohort normalization sets used by an algorithm share NO subjects (people) with the biometric data collected at the University of Notre Dame. In addition, to test generalization to the benchmarks, the protocol prohibits training on any imagery collected at the University of Notre Dame.
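A minimal sanity check for the subject-disjointness requirement, assuming each training sample carries a subject identifier, might look like the following; the function and argument names are hypothetical.

```python
def assert_subject_disjoint(training_subject_ids, notre_dame_subject_ids):
    """Raise if any training subject also appears in the Notre Dame collections."""
    overlap = set(training_subject_ids) & set(notre_dame_subject_ids)
    if overlap:
        raise ValueError(
            f"training set shares {len(overlap)} subjects with Notre Dame data"
        )
```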