Face Photo-Sketch Recognition using Local and Global Texture Descriptors

Abstract:

The automated matching of mug-shot photographs with sketches drawn from eyewitness descriptions of criminals is a Heterogeneous Face Recognition (HFR) problem that has received much attention in recent years due to its importance in law enforcement. Although much work has been proposed in the literature to tackle this scenario, most algorithms have been evaluated either on small datasets or using sketches that closely resemble the corresponding photos. In this paper, a method that extracts Multi-scale Local Binary Pattern (MLBP) descriptors from overlapping patches of log-Gabor-filtered images is used to derive cross-modality templates for each face or sketch image. The Spearman Rank-Order Correlation Coefficient (SROCC) is then used for template matching. The contributions of this paper therefore include (i) the combination of local and global texture descriptors, (ii) the use of SROCC as a similarity measure, and (iii) an extensive evaluation. Experimental results over a large gallery of 2474 unique subjects and 952 probe sketches show that the proposed method outperforms state-of-the-art methods by over 10% at Rank-1, with a retrieval rate of 81.4%. The fusion of the proposed method with the intra-modality approach Eigenpatches (EP) improves the Rank-1 rate to 85.5%.

Details of the implementation, the evaluation methodology used, and the main results may be found in the paper ‘Face Photo-Sketch Recognition with Local and Global Texture Descriptors’ [accepted for publication]. It is recommended that you read this paper before continuing with this page.
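As a rough illustration of the SROCC matching step described above, the following Python sketch (not the authors’ MATLAB implementation; the templates are hypothetical random vectors) computes the Spearman Rank-Order Correlation Coefficient between a probe template and each gallery template, ranking the gallery by correlation:

```python
import numpy as np

def srocc(a, b):
    """Spearman rank-order correlation: Pearson correlation of the ranks.
    The double-argsort ranking assumes no tied values, which holds for
    generic real-valued templates."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)
probe = rng.standard_normal(256)           # hypothetical sketch template
gallery = rng.standard_normal((5, 256))    # hypothetical photo templates

# Score the probe against every gallery template; the highest-scoring
# gallery entry is the Rank-1 match.
scores = np.array([srocc(probe, g) for g in gallery])
rank1_match = int(np.argmax(scores))
```

Matching a template against itself yields a score of 1, while reversing the ranks yields -1, so scores always lie in [-1, 1].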

 

 

Additional Data:

 

Additional evaluation metrics

ANOVA results

Computation Times

LGMS Code

 

Algorithms considered:

1 = Eigenfaces (Principal Component Analysis (PCA))

2 = Eigentransformation (ET) + PCA

3 = Eigenpatches (EP) + PCA

4 = Locally-Linear Embedding-based approach (LLE) + PCA

5 = Histogram of Averaged Orientation Gradients (HAOG)

6 = Direct Random Subspaces (D-RS)

7 = Prototype Random Subspaces (P-RS)

8 = P-RS + D-RS

9 = ET + EP + HAOG

10 = P-RS + D-RS + EP

11 = MLBP & SROCC

12 = log-Gabor & SROCC

13 = log-Gabor & MLBP & Euclidean distance

14 = log-Gabor & MLBP & Cosine similarity

15 = LGMS

16 = LGMS + EP

 


 

 

Additional evaluation metrics considered

Apart from the True Accept Rate (TAR) at fixed False Accept Rate (FAR) values commonly used to evaluate Face Recognition Systems (FRSs), two other metrics are employed: the area under the first 200 ranks of the Cumulative Match Curve (CMC), denoted by AuC-Ranks-200, and the area under the entire CMC, denoted by AuC-Ranks-All. AuC-Ranks-All measures the entire area under the CMC and therefore considers all ranks. However, in forensic investigations, usually only the top 50-200 matches are considered; the area under the first 200 ranks is therefore more important than the performance over all ranks.
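One plausible way to compute these areas is sketched below (illustrative Python, assuming the area is normalised by the number of ranks so that a perfect CMC scores 1.0; the paper may use a different normalisation, and the CMC here is synthetic):

```python
import numpy as np

# Synthetic CMC: cumulative match rate at ranks 1..2474, clipped to 1.
cmc = np.minimum(1.0, 0.6 + 0.002 * np.arange(1, 2475))

# Normalised area under the first 200 ranks and under all ranks;
# averaging the per-rank match rates makes a perfect CMC score 1.0.
auc_ranks_200 = cmc[:200].mean()
auc_ranks_all = cmc.mean()
```

With this rising synthetic CMC, AuC-Ranks-All exceeds AuC-Ranks-200, since the match rate saturates at 1 beyond rank 200.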

 

Values for Area under the first 200 ranks of the Cumulative Match Curve (AuC-Ranks-200), Area under the Cumulative Match Curve (AuC-Ranks-All), TAR@FAR=0.1%, TAR@FAR=1.0%:

Algorithm # AuC-Ranks-200 AuC-Ranks-All TAR@FAR=0.1% TAR@FAR=1.0%
1 0.6714±0.0092 0.6306±0.0035 2.3319±0.2395 5.2941±0.3608
2 0.8615±0.0053 0.9551±0.0042 29.8109±2.2448 56.5336±2.3197
3 0.8824±0.0057 0.9639±0.0034 38.2773±1.7618 63.4454±0.8566
4 0.8814±0.0044 0.9633±0.0046 38.6975±1.3341 60.9874±1.3174
5 0.9219±0.0026 0.9635±0.0018 60.4832±1.2460 77.2059±0.7861
6 0.9720±0.0017 0.9958±0.0004 83.2353±1.2420 93.4034±1.4391
7 0.9172±0.0099 0.9794±0.0021 44.4958±2.9267 58.2563±2.7498
8 0.9798±0.0035 0.9973±0.0003 85.1050±2.1391 93.1092±1.1967
9 0.9463±0.0035 0.9848±0.0016 66.9118±2.1668 85.8403±0.8643
10 0.9906±0.0012 0.9986±0.0002 93.3403±0.6991 97.2269±0.5387
11 0.9002±0.0070 0.9677±0.0030 50.5252±1.2692 71.0084±1.2429
12 0.9679±0.0012 0.9882±0.0014 82.9412±0.4848 92.6261±0.3515
13 0.9237±0.0060 0.9675±0.0015 62.2479±1.5413 78.2983±1.1181
14 0.9800±0.0021 0.9967±0.0007 87.5630±1.2921 95.5462±0.7669
15 0.9854±0.0011 0.9973±0.0006 92.7941±0.3902 97.9412±0.4555
16 0.9922±0.0011 0.9986±0.0003 95.6092±0.4419 99.0336±0.1151

 

As can be observed in the results above, the proposed LGMS algorithm achieves the best performance among the intra- and inter-modality algorithms considered. At worst, it is joint best with other methods, primarily the fusion of P-RS with D-RS (P-RS + D-RS). In terms of TARs at fixed FARs, LGMS achieves performance comparable even to the method fusing P-RS + D-RS with EP (algorithm 10). This is remarkable given that algorithm 10 consists of the fusion of three intra- or inter-modality algorithms, thereby requiring more computation time than LGMS. When combining LGMS with EP, the resultant performance is superior to that of all the other algorithms, including algorithm 10. Since the proposed method is superior to D-RS and P-RS, which in turn outperformed the FaceVACS Commercial Off-the-Shelf (COTS) FRS [1], it could be argued by extension that the proposed approach also outperforms this FRS, which is considered one of the best commercial face matchers, especially in HFR scenarios [1].

 


 

 

Multiple-comparison Analysis of Variance (ANOVA) using the Tukey method, where -1/0/1 indicate that the algorithm in the row is statistically inferior/identical/superior to the algorithm in the column, respectively, at the 95% confidence level. The row and column numbers correspond to the algorithm numbers listed above.

Since an algorithm in a row is deemed inferior with a value of -1, superior with a value of 1, and statistically identical with a value of 0, the simple sum of each row indicates the statistical performance of that algorithm; a high value is desirable since it indicates that the algorithm is statistically superior to a large number of other methods. These values, denoted by Normalised Sum, are shown in the results after normalisation by the maximal value achievable by each algorithm (number of algorithms - 1 = 15), such that they lie within [-1, 1], where -1 indicates that the algorithm is inferior to all the other algorithms and 1 indicates that it is superior to all the other algorithms.
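The computation of the Normalised Sum can be sketched as follows (illustrative Python with a hypothetical 4-algorithm comparison matrix, not the paper’s 16-algorithm results):

```python
import numpy as np

# Hypothetical Tukey comparison matrix for 4 algorithms: entry (i, j) is
# 1 if algorithm i is statistically superior to algorithm j, -1 if
# inferior, and 0 if identical (the diagonal is 0 by definition).
M = np.array([[ 0, -1, -1, -1],
              [ 1,  0,  0, -1],
              [ 1,  0,  0, -1],
              [ 1,  1,  1,  0]])

# Row sums normalised by (number of algorithms - 1) yield values in
# [-1, 1]: -1 = inferior to all others, 1 = superior to all others.
normalised_sum = M.sum(axis=1) / (M.shape[0] - 1)
# → [-1.0, 0.0, 0.0, 1.0]
```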

Note that multiple-comparison tests are generally known to be conservative, such that Type I errors are reduced at the expense of higher Type II errors. A Type I error occurs when an algorithm A is deemed to be statistically different from (e.g. superior to) another algorithm B when, in reality, no significant difference exists between them [2, 3]. Future work will include the analysis of other multiple-comparison testing methods to determine better approaches for evaluating the statistical significance of the evaluation metrics used in face photo-sketch recognition.

 

Area under the Receiver Operating Characteristics (ROC) curve (AuC)

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Normalised Sum
1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1.0000
2 1 0 -1 0 -1 -1 1 -1 -1 -1 0 -1 -1 -1 -1 -1 -0.6000
3 1 1 0 1 0 -1 1 -1 -1 -1 0 -1 -1 -1 -1 -1 -0.3333
4 1 0 -1 0 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
5 1 1 0 1 0 -1 1 -1 -1 -1 1 -1 0 -1 -1 -1 -0.2000
6 1 1 1 1 1 0 1 0 0 0 1 0 1 0 0 0 0.5333
7 1 -1 -1 -1 -1 -1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.8667
8 1 1 1 1 1 0 1 0 0 0 1 0 1 0 0 -1 0.4667
9 1 1 1 1 1 0 1 0 0 0 1 0 1 0 0 -1 0.4667
10 1 1 1 1 1 0 1 0 0 0 1 0 1 0 0 0 0.5333
11 1 0 0 1 -1 -1 1 -1 -1 -1 0 -1 -1 -1 -1 -1 -0.4667
12 1 1 1 1 1 0 1 0 0 0 1 0 1 0 0 0 0.5333
13 1 1 1 1 0 -1 1 -1 -1 -1 1 -1 0 -1 -1 -1 -0.1333
14 1 1 1 1 1 0 1 0 0 0 1 0 1 0 0 0 0.5333
15 1 1 1 1 1 0 1 0 0 0 1 0 1 0 0 0 0.5333
16 1 1 1 1 1 0 1 1 1 0 1 0 1 0 0 0 0.6667

 

AuC-Ranks-200

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Normalised Sum
1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1.0000
2 1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.8667
3 1 1 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
4 1 1 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
5 1 1 1 1 0 -1 0 -1 -1 -1 1 -1 0 -1 -1 -1 -0.2000
6 1 1 1 1 1 0 1 0 1 -1 1 0 1 0 -1 -1 0.4000
7 1 1 1 1 0 -1 0 -1 -1 -1 1 -1 0 -1 -1 -1 -0.2000
8 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 -1 0.6000
9 1 1 1 1 1 -1 1 -1 0 -1 1 -1 1 -1 -1 -1 0.0667
10 1 1 1 1 1 1 1 0 1 0 1 1 1 0 0 0 0.7333
11 1 1 1 1 -1 -1 -1 -1 -1 -1 0 -1 -1 -1 -1 -1 -0.4667
12 1 1 1 1 1 0 1 -1 1 -1 1 0 1 -1 -1 -1 0.2667
13 1 1 1 1 0 -1 0 -1 -1 -1 1 -1 0 -1 -1 -1 -0.2000
14 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 -1 0.6000
15 1 1 1 1 1 1 1 0 1 0 1 1 1 0 0 0 0.7333
16 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 0.8667

 

AuC-Ranks-All

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Normalised Sum
1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1.0000
2 1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.8667
3 1 1 0 0 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.4667
4 1 1 0 0 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.4667
5 1 1 0 0 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.4667
6 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
7 1 1 1 1 1 -1 0 -1 -1 -1 1 -1 1 -1 -1 -1 -0.0667
8 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
9 1 1 1 1 1 -1 1 -1 0 -1 1 0 1 -1 -1 -1 0.1333
10 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
11 1 1 0 0 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.4667
12 1 1 1 1 1 -1 1 -1 0 -1 1 0 1 -1 -1 -1 0.1333
13 1 1 0 0 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.4667
14 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
15 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
16 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667

 

TAR@FAR=0.1%

  1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Normalised Sum
1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1.0000
2 1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.8667
3 1 1 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
4 1 1 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
5 1 1 1 1 0 -1 1 -1 -1 -1 1 -1 0 -1 -1 -1 -0.1333
6 1 1 1 1 1 0 1 0 1 -1 1 0 1 -1 -1 -1 0.3333
7 1 1 1 1 -1 -1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.4667
8 1 1 1 1 1 0 1 0 1 -1 1 0 1 0 -1 -1 0.4000
9 1 1 1 1 1 -1 1 -1 0 -1 1 -1 1 -1 -1 -1 0.0667
10 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 0.8667
11 1 1 1 1 -1 -1 1 -1 -1 -1 0 -1 -1 -1 -1 -1 -0.3333
12 1 1 1 1 1 0 1 0 1 -1 1 0 1 -1 -1 -1 0.3333
13 1 1 1 1 0 -1 1 -1 -1 -1 1 -1 0 -1 -1 -1 -0.1333
14 1 1 1 1 1 1 1 0 1 -1 1 1 1 0 -1 -1 0.5333
15 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 0.8667
16 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 0.8667

 

TAR@FAR=1.0%

  1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Normalised Sum
1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1.0000
2 1 0 -1 -1 -1 -1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.8000
3 1 1 0 0 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.5333
4 1 1 0 0 -1 -1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6000
5 1 1 1 1 0 -1 1 -1 -1 -1 1 -1 0 -1 -1 -1 -0.1333
6 1 1 1 1 1 0 1 0 1 -1 1 0 1 0 -1 -1 0.4000
7 1 0 -1 0 -1 -1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.7333
8 1 1 1 1 1 0 1 0 1 -1 1 0 1 0 -1 -1 0.4000
9 1 1 1 1 1 -1 1 -1 0 -1 1 -1 1 -1 -1 -1 0.0667
10 1 1 1 1 1 1 1 1 1 0 1 1 1 0 0 0 0.8000
11 1 1 1 1 -1 -1 1 -1 -1 -1 0 -1 -1 -1 -1 -1 -0.3333
12 1 1 1 1 1 0 1 0 1 -1 1 0 1 -1 -1 -1 0.3333
13 1 1 1 1 0 -1 1 -1 -1 -1 1 -1 0 -1 -1 -1 -0.1333
14 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 -1 0.6000
15 1 1 1 1 1 1 1 1 1 0 1 1 1 0 0 0 0.8000
16 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 0.8667

 

 

Matching rates at Rank-N:

N=1

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Normalised Sum
1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1.0000
2 1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.8667
3 1 1 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
4 1 1 0 0 -1 -1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6000
5 1 1 1 1 0 -1 1 -1 0 -1 1 -1 0 -1 -1 -1 -0.0667
6 1 1 1 1 1 0 1 0 1 -1 1 0 1 -1 -1 -1 0.3333
7 1 1 1 0 -1 -1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.5333
8 1 1 1 1 1 0 1 0 1 -1 1 0 1 0 -1 -1 0.4000
9 1 1 1 1 0 -1 1 -1 0 -1 1 -1 0 -1 -1 -1 -0.0667
10 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 -1 0.8000
11 1 1 1 1 -1 -1 1 -1 -1 -1 0 -1 -1 -1 -1 -1 -0.3333
12 1 1 1 1 1 0 1 0 1 -1 1 0 1 0 -1 -1 0.4000
13 1 1 1 1 0 -1 1 -1 0 -1 1 -1 0 -1 -1 -1 -0.0667
14 1 1 1 1 1 1 1 0 1 -1 1 0 1 0 -1 -1 0.4667
15 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 -1 0.8000
16 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1.0000

 

N=10

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Normalised Sum
1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1.0000
2 1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.8667
3 1 1 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
4 1 1 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
5 1 1 1 1 0 -1 1 -1 -1 -1 1 -1 0 -1 -1 -1 -0.1333
6 1 1 1 1 1 0 1 -1 1 -1 1 0 1 -1 -1 -1 0.2667
7 1 1 1 1 -1 -1 0 -1 -1 -1 0 -1 -1 -1 -1 -1 -0.4000
8 1 1 1 1 1 1 1 0 1 -1 1 1 1 0 -1 -1 0.5333
9 1 1 1 1 1 -1 1 -1 0 -1 1 -1 1 -1 -1 -1 0.0667
10 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 0.8667
11 1 1 1 1 -1 -1 0 -1 -1 -1 0 -1 -1 -1 -1 -1 -0.4000
12 1 1 1 1 1 0 1 -1 1 -1 1 0 1 -1 -1 -1 0.2667
13 1 1 1 1 0 -1 1 -1 -1 -1 1 -1 0 -1 -1 -1 -0.1333
14 1 1 1 1 1 1 1 0 1 -1 1 1 1 0 -1 -1 0.5333
15 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 0.8667
16 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 0.8667

 

N=50

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Normalised Sum
1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1.0000
2 1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.8667
3 1 1 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
4 1 1 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
5 1 1 1 1 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.3333
6 1 1 1 1 1 0 1 0 1 -1 1 1 1 0 0 -1 0.5333
7 1 1 1 1 1 -1 0 -1 -1 -1 1 -1 1 -1 -1 -1 -0.0667
8 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
9 1 1 1 1 1 -1 1 -1 0 -1 1 -1 1 -1 -1 -1 0.0667
10 1 1 1 1 1 1 1 0 1 0 1 1 1 0 0 0 0.7333
11 1 1 1 1 0 -1 -1 -1 -1 -1 0 -1 -1 -1 -1 -1 -0.4000
12 1 1 1 1 1 -1 1 -1 1 -1 1 0 1 -1 -1 -1 0.2000
13 1 1 1 1 0 -1 -1 -1 -1 -1 1 -1 0 -1 -1 -1 -0.2667
14 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
15 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
16 1 1 1 1 1 1 1 0 1 0 1 1 1 0 0 0 0.7333

 

N=100

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Normalised Sum
1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1.0000
2 1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.8667
3 1 1 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
4 1 1 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
5 1 1 1 1 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.3333
6 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
7 1 1 1 1 1 -1 0 -1 -1 -1 1 -1 1 -1 -1 -1 -0.0667
8 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
9 1 1 1 1 1 -1 1 -1 0 -1 1 -1 1 -1 -1 -1 0.0667
10 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
11 1 1 1 1 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.3333
12 1 1 1 1 1 -1 1 -1 1 -1 1 0 1 -1 -1 -1 0.2000
13 1 1 1 1 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.3333
14 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
15 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
16 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667

 

N=150

  1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Normalised Sum
1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1.0000
2 1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.8667
3 1 1 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
4 1 1 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.6667
5 1 1 1 1 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.3333
6 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
7 1 1 1 1 1 -1 0 -1 0 -1 1 -1 1 -1 -1 -1 0.0000
8 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
9 1 1 1 1 1 -1 0 -1 0 -1 1 -1 1 -1 -1 -1 0.0000
10 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
11 1 1 1 1 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.3333
12 1 1 1 1 1 -1 1 -1 1 -1 1 0 1 -1 -1 -1 0.2000
13 1 1 1 1 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.3333
14 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
15 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
16 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667

 

N=200

  1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Normalised Sum
1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1.0000
2 1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -0.8667
3 1 1 0 0 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.4667
4 1 1 0 0 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.4667
5 1 1 0 0 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.4667
6 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
7 1 1 1 1 1 -1 0 -1 0 -1 1 -1 1 -1 -1 -1 0.0000
8 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
9 1 1 1 1 1 -1 0 -1 0 -1 1 -1 1 -1 -1 -1 0.0000
10 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
11 1 1 0 0 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.4667
12 1 1 1 1 1 -1 1 -1 1 -1 1 0 1 -1 -1 -1 0.2000
13 1 1 0 0 0 -1 -1 -1 -1 -1 0 -1 0 -1 -1 -1 -0.4667
14 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
15 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667
16 1 1 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0.6667

 

From the tables above, the proposed LGMS algorithm is at worst statistically identical to the other algorithms, i.e. no algorithm statistically outperforms LGMS. In fact, LGMS and LGMS + EP achieve the highest ‘Normalised Sum’ values of all algorithms considered, indicating that they are statistically superior to most other algorithms. At Rank-1, LGMS + EP is statistically superior to all algorithms considered, achieving a ‘Normalised Sum’ value of 1. At worst, the ‘Normalised Sum’ values are joint best with those of other algorithms, which is mainly due to the conservative nature of multiple-comparison tests as described above.

 


 
 
Average computation times (in seconds) of D-RS, P-RS and LGMS, to substantiate the claim in the paper that LGMS requires less computation time than P-RS + D-RS. The filtering and feature extraction times of P-RS, D-RS and D-RS + P-RS (indicated with an asterisk) are identical since they use the same filtering and feature extraction processes; hence, the features can be computed once and used for all three algorithms.

LGMS D-RS P-RS D-RS + P-RS
Filtering + feature extraction time (per image) 30 4.4* 4.4* 4.4*
Training and matching time 1120 586 32682 33268

As shown in the table above, LGMS requires more computation time to filter each image and extract the MLBP features, since 32 images are generated from which MLBP features must be extracted. In comparison, D-RS, P-RS and therefore D-RS + P-RS use 3 filters from which MLBP and SIFT descriptors are computed. However, the features can be computed once for all images and stored for later retrieval. The most significant computation time is therefore that required to train the algorithms and perform matching. In this case, it is evident that P-RS takes a long time due to the use of RS-LDA and the prototype representation. D-RS also takes a comparatively long time given that it computes only 6 matching scores in contrast to LGMS’s 32; this is largely because D-RS uses RS-LDA while LGMS uses LDA. Although LDA can be shown to be inferior to RS-LDA, LGMS still outperforms D-RS and P-RS, as well as D-RS + P-RS, which requires the most computation time.

Note that all algorithms were implemented in un-optimised MATLAB code, and the time of D-RS + P-RS was obtained by evaluating D-RS and P-RS individually; hence, run times could be improved using optimised code and parallel implementations (which would also improve LGMS’s running times).

 

 


 

 

 

 

LGMS Code

Since the original photographs provided in the Color FERET database cannot be redistributed, the full LGMS code, which performs feature extraction from all photos and sketches, subspace learning, feature projection, and comparison, cannot be provided. Instead, two packages are provided: one containing a small number of subjects (10) with photos and corresponding sketches from the UoM-SGFS database, from which features are extracted and a pre-trained model is used to project the features into the PCA+LDA subspace for comparison with SROCC; and another containing the LGMS features of all sketches in the UoM-SGFS database and the corresponding Color FERET photos, on which the PCA+LDA subspace projection is learned using a training set of images.
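The subspace-learning step can be sketched roughly as follows (illustrative Python on random features, not the released MATLAB code; the LDA stage is omitted for brevity, and all dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
train = rng.standard_normal((50, 64))    # hypothetical training feature vectors

# Learn a PCA subspace via SVD of the mean-centred training set; the
# released code additionally applies LDA on top of the PCA projection.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
W = Vt[:20].T                            # keep the top 20 principal components

def project(x):
    """Project feature vectors into the learned PCA subspace."""
    return (x - mean) @ W

projected = project(train)               # 50 x 20 subspace representations
```

Projected probe and gallery templates would then be compared with SROCC, as in the provided demo.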

To request the LGMS MATLAB code download link and the password needed to use it, please fill in the request form by clicking here.

Note: Kindly run the ‘LGMS_demo.m’ file first, since it contains the code to automatically download some of the required files, and then run it again as elaborated in the included readme file.

 

 

References:

[1] B. F. Klare and A. K. Jain, “Heterogeneous Face Recognition using Kernel Prototype Similarities,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1410-1422, June 2013.

[2] MathWorks, Multiple Comparisons, last visited 1 February 2016. [Online]. Available: http://www.mathworks.com/help/stats/multiple-comparisons.html#bum7ugv-1

[3] M. L. McHugh, Multiple Comparison Analysis testing in ANOVA, last visited 1 February 2016. [Online]. Available: http://www.biochemia-medica.com/2011/21/203

 

 
