UoM-SGFS Database

Contents:

Overview

Additional Information

How to get the database

Benchmarks

UPDATE February 2018: The UoM-SGFS database has been enlarged with twice the number of subjects, now containing 1200 images of 600 subjects.

Overview

Problem Definition

Numerous algorithms that can identify suspects depicted in sketches created from eyewitness descriptions are currently being developed because of their potential importance in forensic investigations. Yet, despite the prevalent use of software-generated composite sketches by law enforcement agencies, few such sketches are available to researchers for adequately evaluating face photo-sketch recognition algorithms on these composites. Hence, the publicly available University of Malta Software-Generated Face Sketch (UoM-SGFS) database has been created to enable researchers to evaluate the performance of face photo-sketch recognition algorithms on software-generated sketches as well.

What’s special about the UoM-SGFS database?

The UoM-SGFS database contains the largest number of viewed software-generated sketches; these also exhibit several deformations and exaggerations to mimic sketches obtained in real-world investigations. Furthermore, in contrast to sketches found in other databases, all sketches in the UoM-SGFS database are in full colour.

Database contents

The database contains two viewed sketches for each of the 600 subjects obtained from the Color FERET database and is thus partitioned into two sets, where each set contains one sketch per subject.

Set A contains the sketches created using EFIT-V, where the number of steps performed in the program (e.g. the number of evolutions and the selection and modification of components) was minimised so as to lower the risk of producing composites that are overly similar to the original photo. The time taken to create each sketch ranged from approximately 30 to 45 minutes.

The sketches in Set A were then edited with the Corel PaintShop Pro X7 image-editing software to fine-tune details that cannot easily be modified in EFIT-V, yielding Set B. Consequently, sketches in Set B are generally closer in appearance to the original face photos. Editing was limited to approximately 15 to 30 minutes per sketch so as to retain the kinds of inaccuracies found in real-life forensic sketches.

The provided database also contains the file names of the photos used (from the Color FERET database, which must be obtained separately), the fiducial points (eye and mouth centres) of both the sketches and photographs, and some metadata of the subjects.
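The fiducial points allow photos and sketches to be geometrically normalised before matching. Purely as an illustration (the canonical eye positions below are placeholder values, not part of the database specification), a similarity transform mapping the annotated eye centres onto canonical positions can be computed as:

```python
import numpy as np

def eye_alignment_transform(left_eye, right_eye,
                            canon_left=(30.0, 45.0), canon_right=(70.0, 45.0)):
    """Return a 2x3 similarity-transform matrix (rotation + uniform scale +
    translation) that maps the detected eye centres onto canonical positions."""
    left_eye, right_eye = np.asarray(left_eye, float), np.asarray(right_eye, float)
    cl, cr = np.asarray(canon_left, float), np.asarray(canon_right, float)
    src_vec = right_eye - left_eye          # detected inter-ocular vector
    dst_vec = cr - cl                       # canonical inter-ocular vector
    # Scale: ratio of canonical to detected inter-ocular distance.
    scale = np.linalg.norm(dst_vec) / np.linalg.norm(src_vec)
    # Rotation: angle between the two inter-ocular vectors.
    angle = np.arctan2(dst_vec[1], dst_vec[0]) - np.arctan2(src_vec[1], src_vec[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    # Translation: send the left eye exactly to its canonical position.
    t = cl - R @ left_eye
    return np.hstack([R, t[:, None]])       # usable with e.g. cv2.warpAffine
```

The resulting 2×3 matrix can be passed to any affine-warping routine to crop both modalities to a common geometry before feature extraction.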

It should be noted that, as the sole EFIT-V software operator, I received training from a qualified forensic scientist of the Malta Police Force to ensure that practices adopted in real-life investigations were also used in the creation of the UoM-SGFS database.

Examples


Above: Eight subjects from the Color FERET database and the corresponding sketches from the two sets of the UoM-SGFS database

Face photographs taken from the Color FERET database:


National Institute of Standards and Technology (NIST), “The Color FERET
Database version 2,” [Online]. Available:
http://www.nist.gov/itl/iad/ig/colorferet.cfm

Additional Information

More information about the database may be found in the paper ‘A Large-Scale Software-Generated Face Composite Sketch Database’. The enlarged database with 600 subjects was also used and described in the paper ‘Matching Software-Generated Sketches to Face Photos with a Very Deep CNN, Morphed Faces, and Transfer Learning’. Both papers must be referenced in any work using this database. Link to Papers

The protocols to split the images into training and test sets have also been published here, enabling the comparison of new methods with all approaches evaluated in the above-mentioned papers.
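The published protocols should be used for any comparable evaluation. Purely as an illustration of how such protocols are typically structured (the fold count and training-set size below are assumed placeholders, not the published protocol), subject-disjoint repeated random splits can be generated so that no subject's photo or sketch appears in both sets:

```python
import numpy as np

def random_subject_splits(n_subjects=600, n_train=150, n_folds=5, seed=0):
    """Generate repeated random train/test partitions at the subject level.

    Splitting by subject index (rather than by image) guarantees that a
    subject's photo and sketches never straddle the train/test boundary.
    """
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(n_folds):
        perm = rng.permutation(n_subjects)
        train = np.sort(perm[:n_train])
        test = np.sort(perm[n_train:])
        splits.append((train, test))
    return splits
```

Repeating the split several times is what produces the mean ± standard-deviation figures reported in the benchmark tables below.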

Additional information about the database may also be found here.

How to get the database

You may fill in and submit the resource request form.

Benchmarks

Several methods proposed in the literature have been evaluated on the UoM-SGFS database, details of which may be found in the paper entitled ‘Matching Software-Generated Sketches to Face Photos with a Very Deep CNN, Morphed Faces, and Transfer Learning’ (click here to see publications). The protocol used has also been published here.

Results for algorithms evaluated on this database are shown below, ranked according to the Equal Error Rate (EER). If you wish the results of your method to appear on this website, please get in touch using the Contact form.
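For reference, the verification metrics (EER, TAR@FAR) and identification metrics (Rank-N) reported in the tables below can all be derived from a probe-by-gallery similarity matrix. The following is a minimal sketch with simplified threshold handling, not the exact evaluation code used for these benchmarks:

```python
import numpy as np

def verification_metrics(scores, labels_gallery, labels_probe):
    """Compute EER (%) and TAR@FAR=1% (%) from a similarity matrix where
    scores[i, j] is the similarity of probe i to gallery item j."""
    same = labels_probe[:, None] == labels_gallery[None, :]
    genuine, impostor = scores[same], scores[~same]
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    # EER: operating point where FAR and FRR are (approximately) equal.
    i = np.argmin(np.abs(far - frr))
    eer = 100 * (far[i] + frr[i]) / 2
    # TAR at the loosest threshold whose FAR does not exceed 1%.
    ok = far <= 0.01
    tar_at_far1 = 100 * (1 - frr[ok][0]) if ok.any() else 0.0
    return eer, tar_at_far1

def rank_n_accuracy(scores, labels_gallery, labels_probe, n=1):
    """Percentage of probes whose true match is among the top-n gallery scores."""
    top_n = np.argsort(-scores, axis=1)[:, :n]
    hits = (labels_gallery[top_n] == labels_probe[:, None]).any(axis=1)
    return 100 * hits.mean()
```

Note that with a finite threshold grid the FAR and FRR curves rarely cross exactly, so the EER here is approximated as the mean of the two rates at their closest point.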

Set A:

Type Method Rank-1 (%) Rank-10 (%) Rank-50 (%) Rank-100 (%) TAR@FAR=0.1% TAR@FAR=1.0% EER (%)
Inter DEEPS [1] 31.60 +/- 1.12 66.13 +/- 2.47 86.00 +/- 1.25 93.47 +/- 1.85 41.87 +/- 3.11 73.47 +/- 2.08 6.26 +/- 0.60
Inter D-RS + CBR [2] 25.87 +/- 4.43 56.00 +/- 3.80 76.27 +/- 3.90 84.93 +/- 1.92 32.27 +/- 3.25 62.67 +/- 3.89 8.84 +/- 0.82
Inter D-RS [3], [4] 22.13 +/- 1.45 49.33 +/- 4.24 69.87 +/- 78.67 78.67 +/- 2.26 28.53 +/- 2.38 53.60 +/- 3.35 12.30 +/- 0.93
Inter LGMS [5] 21.87 +/- 5.06 51.20 +/- 4.01 72.40 +/- 3.82 80.80 +/- 2.88 28.00 +/- 6.53 55.20 +/- 4.46 12.34 +/- 1.07
FRS VGG-Face [6] 9.33 +/- 2.45 31.07 +/- 3.73 59.73 +/- 2.52 73.60 +/- 3.58 11.87 +/- 2.47 37.33 +/- 4.40 14.18 +/- 1.05
Intra EP (+PCA) [7] 12.53 +/- 2.08 35.60 +/- 2.19 62.80 +/- 2.88 74.40 +/- 3.61 17.20 +/- 1.66 40.00 +/- 1.70 14.49 +/- 1.29
Intra ET [8] 8.40 +/- 2.14 30.00 +/- 3.62 54.53 +/- 5.82 67.47 +/- 2.28 11.73 +/- 2.81 34.13 +/- 4.33 16.13 +/- 2.06
Inter CBR [9] 5.73 +/- 2.09 18.80 +/- 1.28 43.33 +/- 1.94 52.40 +/- 2.14 6.67 +/- 2.45 24.27 +/- 2.65 21.62 +/- 1.29
Inter HAOG [10] 13.60 +/- 2.29 37.33 +/- 1.94 52.67 +/- 2.62 60.27 +/- 1.67 16.93 +/- 2.29 39.07 +/- 3.73 23.12 +/- 1.78
Intra LLE (+PCA) [11] 6.93 +/- 1.92 24.67 +/- 2.98 43.60 +/- 2.34 57.60 +/- 3.79 8.93 +/- 1.80 24.13 +/- 2.64 23.68 +/- 1.00
FRS PCA [12] 2.80 +/- 1.19 8.40 +/- 2.03 17.73 +/- 3.22 25.20 +/- 3.35 3.33 +/- 1.56 9.33 +/- 1.94 37.13 +/- 0.64


Set B:

Type Method Rank-1 (%) Rank-10 (%) Rank-50 (%) Rank-100 (%) TAR@FAR=0.1% TAR@FAR=1.0% EER (%)
Inter DEEPS [1] 52.17 +/- 2.69 82.67 +/- 0.94 94.00 +/- 0.94 97.50 +/- 1.48 66.67 +/- 3.22 88.33 +/- 1.15 3.82 +/- 0.74
Inter D-RS + CBR [2] 42.93 +/- 1.38 75.87 +/- 1.59 90.13 +/- 1.45 96.27 +/- 1.21 56.80 +/- 0.87 83.07 +/- 1.80 4.35 +/- 1.11
Inter D-RS [3], [4] 40.80 +/- 1.45 70.80 +/- 1.85 86.40 +/- 0.89 93.07 +/- 1.12 50.93 +/- 1.38 77.20 +/- 1.97 6.42 +/- 0.51
Inter LGMS [5] 43.47 +/- 4.77 73.60 +/- 3.96 86.93 +/- 1.80 90.40 +/- 2.14 53.87 +/- 5.32 79.47 +/- 2.28 6.62 +/- 1.08
FRS VGG-Face [6] 16.13 +/- 2.72 48.00 +/- 3.30 72.80 +/- 2.84 83.73 +/- 2.24 23.60 +/- 2.73 56.00 +/- 4.83 10.24 +/- 1.13
Intra EP (+PCA) [7] 15.20 +/- 1.28 48.27 +/- 3.04 70.67 +/- 2.49 81.60 +/- 2.97 23.33 +/- 1.33 52.27 +/- 2.14 12.04 +/- 1.11
Intra ET [8] 12.13 +/- 1.10 39.07 +/- 4.73 63.47 +/- 3.18 75.07 +/- 3.55 16.80 +/- 1.37 43.20 +/- 6.04 13.04 +/- 1.14
Inter CBR [9] 7.60 +/- 1.98 25.47 +/- 2.38 48.27 +/- 1.30 57.73 +/- 2.03 10.67 +/- 3.13 30.53 +/- 0.99 20.11 +/- 1.21
Inter HAOG [10] 21.60 +/- 3.67 42.27 +/- 2.89 57.07 +/- 3.39 64.80 +/- 2.23 24.40 +/- 3.90 45.47 +/- 1.37 21.49 +/- 1.35
Intra LLE (+PCA) [11] 10.53 +/- 1.59 31.60 +/- 1.01 54.53 +/- 2.02 67.60 +/- 3.42 13.07 +/- 0.76 34.40 +/- 2.34 19.13 +/- 1.42
FRS PCA [12] 5.33 +/- 2.11 9.87 +/- 0.99 18.67 +/- 3.43 26.93 +/- 3.08 5.87 +/- 1.85 11.07 +/- 1.98 36.05 +/- 1.08

[1] C. Galea and R. A. Farrugia, “Matching Software-Generated Sketches to Face Photos with a Very Deep CNN, Morphed Faces, and Transfer Learning,” IEEE Transactions on Information Forensics and Security, vol. 13, no. 6, pp. 1421-1431, Jun. 2018

[2] S. J. Klum, H. Han, B. F. Klare, and A. K. Jain, “The FaceSketchID System: Matching Facial Composites to Mugshots,” IEEE Trans. Inf. Forensics Security, vol. 9, no. 12, pp. 2248–2263, Dec 2014.

[3] B. Klare and A. K. Jain, “Heterogeneous Face Recognition Using Kernel Prototype Similarities,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1410–1422, Jun 2013.

[4] B. Klare and A. K. Jain, “Heterogeneous Face Recognition: Matching NIR to Visible Light Images,” in Proc. Int. Conf. Pattern Recognition, Aug 2010, pp. 1513–1516.

[5] C. Galea and R. A. Farrugia, “Face Photo-Sketch Recognition using Local and Global Texture Descriptors,” in Proc. European Signal Processing Conference (EUSIPCO), Budapest, Hungary, Aug. 2016.

[6] O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep face recognition,” in British Machine Vision Conference, 2015.

[7] C. Galea and R. A. Farrugia, “Fusion of intra- and inter-modality algorithms for face-sketch recognition,” in Proc. Computer Analysis of Images and Patterns, vol. 9257, 2015, pp. 700–711.

[8] X. Tang and X. Wang, “Face sketch recognition,” in IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, January 2004, pp. 50–57.

[9] H. Han, B. F. Klare, K. Bonnen, and A. K. Jain, “Matching composite sketches to face photos: A component-based approach,” IEEE Trans. Inf. Forensics Security, vol. 8, no. 1, pp. 191–204, Jan 2013.

[10] H. Galoogahi and T. Sim, “Inter-modality face sketch recognition,” in IEEE Int. Conf. Multimedia and Expo (ICME), July 2012, pp. 224–229.

[11] H. Chang, D.-Y. Yeung, and Y. Xiong, “Super-resolution through neighbor embedding,” in Proc. IEEE Conf. Comput. Vision Pattern Recog., vol. 1, June 2004, pp. I–I.

[12] M. Turk and A. Pentland, “Eigenfaces for recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

Related Work

Click here for a summary of work done in the field of face photo-sketch recognition.
