An Ensemble Approach To Face Recognition In Access Control Systems

Volodymyr Mykolaevich Opanasenko1,*, Shavkat Khayrullaevich Fazilov2, Olimjon Nomazovich Mirzaev2 and Shukrullo Sa’dullo ugli Kakharov2,3

1Department of Microprocessor Technology No. 205, V.M. Glushkov Institute of Cybernetics of National Academy of Sciences of Ukraine, Ukraine
2Laboratory of Biometric Systems, Research Institute for the Development of Digital Technologies and Artificial Intelligence, Republic of Uzbekistan
3Department of Digital Technologies and Mathematics, Faculty of Economics and Tourism, Kokand University, Republic of Uzbekistan
E-mail: Opanasenkoincyb@gmail.com; sh.fazilov@yahoo.com; omirzaev@gmail.com; sh.kaxarov93@gmail.com
*Corresponding Author

Received 01 November 2023; Accepted 29 February 2024

Abstract

The article proposes a method for face recognition in mobile devices based on an ensemble approach to the pattern recognition problem, which ensures high accuracy of the results. According to this approach, the basic algorithm is decomposed into two operators: a recognition operator and a decision rule. The recognition operator calculates estimates of the proximity of the tested object to the given classes. The decision rule, based on these estimates, determines whether the tested object belongs to one of the given classes. The ensemble of recognition operators is formed as a linear polynomial, whose parameter values are calculated by solving a multiparameter optimization problem. Experimental studies were carried out using open databases of facial images; two options for using the basic algorithms were implemented: separate and ensemble. The accuracy of recognizing objects in the control sample using the ensemble of recognition operators turned out to be higher than the accuracy of the best basic recognition algorithm.

The proposed face recognition method can be used in mobile devices, in particular, to verify users when they remotely access information resources with restricted access status.

Keywords: Face image, face recognition, person identification, support operators, ensemble of recognition algorithms.

1 Introduction

There are many computer vision methods proposed for creating mobile recognition systems used in various areas of human activity [1, 2]. Among them, automatic face recognition methods are widely used. Rapid advances in technologies such as digital cameras and portable video recording devices, as well as the increased demand for security, have made face recognition a major biometric technology. There are many applications for facial recognition, including access control using mobile identity verification devices, mobile active video surveillance systems, and rapid retrieval of records from remote facial databases. Facial recognition is defined as a process that identifies and compares a requested face image with all sample images in a face database and verifies the identity of the person [3]. In identification, the person being tested is compared to a set of individuals to find the most likely match; in verification, the person being tested is compared with a known person in the database to decide whether to accept or reject the person being tested [4].

There are two main approaches to face recognition, differing in the way faces are represented: holistic (Figure 1,a) and component-based (Figure 1,b).


Figure 1 Two approaches to face recognition: a) holistic; b) component-based.

As shown in Figure 1, with a holistic representation, all information is extracted from the entire face image as a single vector, whereas with a component-based representation, local features are extracted that describe selected components of the face (eyes, nose, mouth, etc.).

A holistic approach to face recognition is mainly represented by Eigenface and Fisherface methods [5], independent component analysis (ICA) [6], moment invariants [7], discrete cosine transform (DCT) [8] etc. The component-based approach is implemented in the following widely used methods: support vector machine (SVM) [9], linear discriminant analysis (LDA) [10], statistical methods [11, 12] etc.

This work aims to develop a component-based approach to facial recognition for access control systems. When using these systems in mission-critical applications, it is necessary to ensure high accuracy and reliability of identification of the person being verified.

As noted in [13], a system built on the basis of a correct recognition method does not by itself provide one hundred percent reliability of identification. Therefore, if high reliability is required, especially in critical applications of recognition systems, then a combination of several recognition methods (algorithms) should be used. It is this approach to solving the problem of face recognition that is used in this work.

The joint use of several methods (algorithms) to make a final decision on an identified object is called a combination of algorithms, a fusion of algorithms, a mixture of experts, ensemble-based classification, a composition of algorithms, etc. in the scientific literature on pattern recognition.

Figure 2 shows one of the possible schemes for component-wise face recognition using an algorithmic ensemble. The fundamental difference between this scheme and the classical scheme of component-wise face recognition (Figure 1,b) is that each component of the face is recognized by a corresponding basic algorithm, which generates estimates of the proximity of the component in question to the given classes. These estimates are then fed to the input of the integrator, which, based on them, makes the final decision on the recognized face as a whole (Figure 2).


Figure 2 Proposed face recognition scheme using algorithmic ensembles.

2 Related Works

To date, various schemes for combining algorithms have been developed, and it has been experimentally demonstrated that in some cases a set of basic algorithms is superior in accuracy to the best algorithm in that set [14–18]. There are essentially two scenarios for combining algorithms [19].

In the first scenario, the algorithms may be different but use the same set of features as input. A typical example of this scenario is a set of algorithms based on the k-nearest neighbours method. Each algorithm in this set uses the same feature vector but differs from the others in its parameters (the value of the parameter k, which determines the number of nearest neighbours, or the type of distance metric used to find them). Another example is a set of algorithms represented by neural networks of a fixed architecture but with different sets of weights obtained using different training strategies. Each algorithm determines, for the input object, its degree of membership in the given classes, for example, in the form of estimates of the posterior probabilities of the classes. A minimal sketch of this scenario is given below.
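For illustration only (scikit-learn and its digits dataset are assumed here; they are not part of the method described in this article), the sketch builds several k-nearest-neighbour classifiers that share one feature vector but differ in the parameter k and in the distance metric, each producing soft class-membership scores:

```python
# Sketch of the first combination scenario: several k-NN algorithms use the
# same feature vector but differ in their parameters (k and distance metric).
from itertools import product

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each (k, metric) pair defines one algorithm of the set.
ensemble = [
    KNeighborsClassifier(n_neighbors=k, metric=metric).fit(X_train, y_train)
    for k, metric in product([1, 3, 5], ["euclidean", "manhattan"])
]

# Every algorithm outputs soft scores (class-membership estimates) for the
# same test objects; these scores are what a combiner would later fuse.
scores = [clf.predict_proba(X_test) for clf in ensemble]
print(len(scores), scores[0].shape)   # 6 algorithms, (n_test, n_classes)
```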

In the second scenario, each algorithm in the set (the algorithms themselves may even be identical) uses its own feature description of the input object. In other words, the feature vector describing the input object is unique for each algorithm. An important application of sets of algorithms combined in this scenario is the ability to integrate physically different types of feature measurements. In this case, the calculated posterior probabilities cannot be considered estimates of the same functional quantity, since the classification systems operate in different feature spaces [20].

An important issue when combining algorithms is that individual algorithms should not make the same erroneous decisions for the same recognized objects, that is, they should provide additional information regarding these objects. A successful combination of a set of individual algorithms should improve the overall accuracy of the recognition system, so this concept of combining algorithms is widely used in various applications where high accuracy is required.

According to [21], the combination of algorithms can be carried out at three levels: sensor data level, feature level, and decision-making level.

At the sensor-data level, fusion produces a combined set of data collected from two or more sensors; data fusion is carried out before methods for extracting features from these data are applied. At the feature level, features with equal or different weights are concatenated. At the decision-making level, the local decisions of the basic algorithms are integrated based on one of three approaches: abstract, rank-based and score-based. In the abstract approach, each basic algorithm produces one class label, which is then fed to the integrator input; the integrator forms the final decision based on the output labels of the basic algorithms. In the rank-based approach, each basic algorithm produces multiple labels ranked from most likely to least likely, and these labels are then used to make the final decision. The score-based approach assumes that each algorithm outputs the n best labels along with their confidence scores. Score fusion can be accomplished in several ways: density-based, transform-based, and algorithm-based (e.g., neural network). As noted in [22], the score-based approach is the most informative compared to the other approaches to combining algorithms.

It should be noted that the above abstract approach to combining algorithms at the decision level is based on the use of hard decisions of the basic algorithms, while the score-based approach uses the soft decisions produced by these algorithms, for example, in the form of estimates of posterior probabilities.

Combining the hard decisions of the basic algorithms can be carried out on the basis of majority voting, while combining their soft decisions can be implemented using the sum, product, maximum, minimum, average and median rules. These rules operate on the algorithms’ output posterior probabilities, or scores. The product rule quantifies the probability of a hypothesis by combining the posterior probabilities of the underlying algorithms with a multiplication operation, while the sum rule uses summation of the posterior probabilities. The maximum rule is an approximation of the sum rule and takes the maximum of the posterior probabilities; likewise, the minimum rule is an approximation of the product rule. These rules for combining the decisions of basic algorithms are discussed and studied in detail in [23, 24]. A sketch of these fusion rules is given below.
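A minimal sketch of these soft-decision fusion rules, assuming each base algorithm returns a vector of posterior probability estimates for the N classes (the numbers below are invented for illustration):

```python
import numpy as np

def fuse(scores, rule="sum"):
    """Combine the soft decisions of M base algorithms for one object.

    scores: array of shape (M, N) -- posterior estimates of M algorithms
            for N classes.
    Returns the index of the winning class under the chosen rule.
    """
    scores = np.asarray(scores, dtype=float)
    rules = {
        "sum":     scores.sum(axis=0),
        "product": scores.prod(axis=0),
        "max":     scores.max(axis=0),
        "min":     scores.min(axis=0),
        "average": scores.mean(axis=0),
        "median":  np.median(scores, axis=0),
    }
    return int(np.argmax(rules[rule]))

# Three algorithms, four classes: different rules may pick different classes.
s = [[0.6, 0.2, 0.1, 0.1],
     [0.1, 0.5, 0.2, 0.2],
     [0.4, 0.3, 0.2, 0.1]]
for r in ("sum", "product", "max", "min", "average", "median"):
    print(r, fuse(s, r))
```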

The majority voting rule, which combines the hard decisions of the underlying algorithms, can use three voting options: unanimous voting, more than half the votes, and the largest number of votes [22].

Consider a binary vector of output labels of the algorithm $\mathcal{A}_i$:

$(a_{i1}, \ldots, a_{iN})^{T} \in \{0,1\}^{N}, \quad i = 1, \ldots, M,$

where $M$ is the total number of basic algorithms and $N$ is the number of classes. Here $a_{ij} = 1$ if the algorithm $\mathcal{A}_i$ assigns the given object to the class $\omega_j$, and $a_{ij} = 0$ otherwise.

Majority voting leads to a decision in favour of the class $\omega_{\ell}$ if

$\sum_{i=1}^{M} a_{i\ell} = \max_{j=1,\ldots,N} \sum_{i=1}^{M} a_{ij}.$

In this case, majority voting produces the correct class label when at least $\lfloor M/2 \rfloor + 1$ of the algorithms produce this class label [23]. A sketch of this voting rule is given below.
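A minimal sketch of the majority-voting rule over the hard labels of the basic algorithms (a plurality vote, with an optional $\lfloor M/2 \rfloor + 1$ threshold for the stricter variant); the labels below are invented for illustration:

```python
import numpy as np

def majority_vote(labels, n_classes, threshold=None):
    """Hard-decision fusion by majority voting.

    labels:    class indices produced by the M base algorithms for one object
               (the binary vectors a_i collapsed to single labels).
    threshold: minimal number of votes required for a decision; if None,
               the class with the largest number of votes wins (plurality).
    Returns the winning class index, or None if the vote is rejected.
    """
    votes = np.bincount(labels, minlength=n_classes)
    winner = int(np.argmax(votes))
    if threshold is not None and votes[winner] < threshold:
        return None  # fewer than the required number of votes
    return winner

M, N = 5, 3  # five base algorithms, three classes
print(majority_vote([0, 2, 2, 1, 2], N))                        # -> 2 (plurality)
print(majority_vote([0, 2, 1, 1, 2], N, threshold=M // 2 + 1))  # -> None (rejected)
```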

Concluding the consideration of the main approaches to combining pattern recognition algorithms, we can draw an important conclusion: combining makes it possible to obtain a quality of recognition unattainable by the individual basic algorithms. Analysis of these approaches shows that although one strategy may be superior to others for a given application, the results of implementing this strategy may not be the best for another application. The most general approach to combining recognition algorithms is proposed in the algebraic theory of pattern recognition [24–29]. As noted in [13], this theory allows one to construct correct algorithmic compositions using purely algebraic methods.

3 Basic Concepts and Notation

Let there be a set $\mathbb{F}$ of objects presented in the form of face images. The initial data about each object $F$ are given in the form of a matrix $X$ of size $m \times n$ (where $m$ and $n$ are the numbers of rows and columns, respectively) [8]:

$X = \|x_{ij}\|_{m \times n}.$

It is assumed that the elements of the set $\mathbb{F}$ form disjoint classes $K_1, \ldots, K_j, \ldots, K_l$, each of which is represented by face images of one person. In this case we have:

$\mathbb{F} = \bigcup_{j=1}^{l} K_j, \quad K_i \cap K_j = \varnothing, \; i \neq j, \; i, j \in \{1, \ldots, l\}. \quad (1)$

Expression (1) is not fully defined, and there is only some initial information $I_0$ about the classes $K_1, \ldots, K_j, \ldots, K_l$. Let there be some sample $\tilde{F}_m$ ($\tilde{F}_m \subset \mathbb{F}$) consisting of $m$ objects:

$\tilde{F}_m = \{F_1, \ldots, F_u, \ldots, F_m\}, \quad (2)$

where $F_u \in \mathbb{F}$, $u = \overline{1, m}$.

Let us introduce the following notation for the objects in (2):

$\tilde{K}_j = \tilde{F}_m \cap K_j, \quad \tilde{K}'_j = \tilde{F}_m \setminus \tilde{K}_j.$

Then the initial information $I_0$ about the classes, according to [23], can be specified in the form:

$I_0 = \{(F_1, \tilde{\alpha}(F_1)), \ldots, (F_u, \tilde{\alpha}(F_u)), \ldots, (F_m, \tilde{\alpha}(F_m))\},$

where $\tilde{\alpha}(F_u)$ is the information vector of the object $F_u$ ($F_u \in \tilde{F}_m$): $\tilde{\alpha}(F_u) = (\alpha_{u1}, \ldots, \alpha_{uj}, \ldots, \alpha_{ul})$. Here $\alpha_{uj}$ is the value of the predicate $P_j(F_u)$, which has the following form:

$P_j(F_u) = \begin{cases} 1, & \text{if } F_u \in \tilde{K}_j; \\ 0, & \text{if } F_u \in \tilde{K}'_j. \end{cases} \quad (3)$

It is known [26] that an arbitrary recognition algorithm $A$ can be represented as the sequential execution of the operators $B$ (recognition operator) and $C$ (decision rule):

$A = B \circ C. \quad (4)$

From (4) it follows that the algorithm $A$ is implemented in two stages. At the first stage, the operator $B$ maps an admissible object $F_u$ into a numerical vector of estimates $\tilde{b}_u$:

$B(F_u) = \tilde{b}_u, \quad (5)$

where $\tilde{b}_u = (b_{u1}, \ldots, b_{uv}, \ldots, b_{ul})$ and $b_{uv}$ is a numerical estimate.

At the second stage, based on the numerical estimate $b_{uv}$, the decision rule $C$ determines the membership of the object $F_u$ in the classes $K_1, \ldots, K_j, \ldots, K_l$:

$C(b_{uv}) = \begin{cases} 0, & \text{if } b_{uv} < c_1; \\ \Delta, & \text{if } c_1 \leq b_{uv} \leq c_2; \\ 1, & \text{if } b_{uv} > c_2, \end{cases} \quad (6)$

where $c_1, c_2$ are the parameters of the decision rule and the estimate $b_{uv}$ is calculated by the recognition operator (5). The first case in (6) means that the object $F_u$ does not belong to the class $K_v$, the second case means that the algorithm refuses recognition, and the third case means that the object $F_u$ belongs to the class $K_v$. A sketch of this rule is given below.
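A minimal sketch of decision rule (6), with hypothetical threshold values $c_1$ and $c_2$ chosen only for illustration:

```python
REJECT = "Delta"  # the algorithm refuses recognition (the Delta answer in (6))

def decision_rule(b_uv, c1, c2):
    """Decision rule (6): map the proximity estimate b_uv of object F_u to
    class K_v onto {0, Delta, 1} using the thresholds c1 <= c2."""
    if b_uv < c1:
        return 0       # F_u does not belong to K_v
    if b_uv > c2:
        return 1       # F_u belongs to K_v
    return REJECT      # c1 <= b_uv <= c2: recognition is refused

print(decision_rule(0.15, 0.3, 0.7))  # 0
print(decision_rule(0.50, 0.3, 0.7))  # Delta
print(decision_rule(0.90, 0.3, 0.7))  # 1
```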

Various decision rules are considered in the literature; however, as shown in [23], it is sufficient to consider only rule (6).

Having introduced the basic concepts and notation, we can now formulate the problem of person recognition from a face image.

4 Statement of the Problem

Let there be some recognition algorithm $\mathcal{A}$ ($\mathcal{A} = \mathcal{B} \circ \mathcal{C}$) which, based on the initial information $I_0$, calculates the values of the information vector $\tilde{\alpha}(F_u)$ for an arbitrary object $F_u$ ($F_u \in \tilde{F}_m$):

$\mathcal{A}(I_0, F_u) = \tilde{\beta}_u, \quad \tilde{\beta}_u = (\beta_{u1}, \ldots, \beta_{uj}, \ldots, \beta_{ul}), \quad \beta_{uj} = P_j(F_u). \quad (7)$

Here $\beta_{uj}$ is interpreted as follows:

$\beta_{uj} = \begin{cases} 1, & \text{if the object } F_u \text{ belongs to the class } K_j; \\ 0, & \text{if the object } F_u \text{ does not belong to the class } K_j; \\ \Delta, & \text{if the algorithm did not calculate the value of the predicate } P_j(F_u). \end{cases}$

In this case, it is assumed that: (i) the algorithm $\mathcal{A}$ implements a certain rule characterizing the dependence between the answer $\tilde{\beta}_u$ and the object $F_u$; (ii) the rule characterizing this dependence is unknown; (iii) there are $k$ elementary recognition algorithms, the set of which is denoted by $\mathbb{A}$:

$\mathbb{A} = \{A_1, \ldots, A_i, \ldots, A_k\}, \quad (8)$

where $A_i(F_u) = \tilde{\alpha}(F_u)$, $F_u \in \mathbb{F}$, $i = 1, 2, \ldots, k$. The algorithms in (8) solve the recognition problem with varying accuracy. Taking (4) into account, instead of the recognition algorithms (8) we consider the set of corresponding recognition operators, denoted by $\mathbb{B}$:

$\mathbb{B} = \{B_1, \ldots, B_i, \ldots, B_k\}. \quad (9)$

The recognition operators presented in (9) will be called first-level recognition operators. Let the recognition operator $\mathfrak{B}(F_u)$ consist of a certain composition of the $k$ elementary recognition operators (9):

$\mathfrak{B}(F_u) = \mathfrak{B}(B_1, \ldots, B_i, \ldots, B_k). \quad (10)$

The recognition operator $\mathfrak{B}(F_u)$ in (10) will be called the second-level recognition operator.

Then problem (7) can be reformulated as follows:

$\mathfrak{B}^{*} = \arg\max_{\mathfrak{B}} f_{A}(\tilde{F}_m),$
$f_{A}(\tilde{F}_m) = \frac{1}{m} \sum_{F_u \in \tilde{F}_m} \mathfrak{y}\bigl(\tilde{\beta}_u - \mathfrak{B}(F_u) \circ \mathcal{C}(c_1, c_2)\bigr),$
$\mathfrak{y}(x) = \begin{cases} 1, & \text{if } x = 0; \\ 0, & \text{if } x \neq 0. \end{cases}$

As an example, consider a first-order polynomial $\mathfrak{B}(B_1, \ldots, B_i, \ldots, B_k)$. Then the model of recognition operators (10) is given as

$\mathfrak{A}(\mathbf{w}, F_u) = \sum_{B_i \in \mathbb{B}} w_i B_i(F_u).$

Thus, the main task is to find the optimal value of the parameter vector $\mathbf{w} = (w_1, \ldots, w_i, \ldots, w_k)$:

$\mathbf{w}^{*} = \arg\max_{\mathbf{w}} \mathcal{Q}_{\mathfrak{A}}(\mathbf{w}), \quad (11)$

where

$\mathcal{Q}_{\mathfrak{A}}(\mathbf{w}) = \frac{1}{m} \sum_{F_u \in \tilde{F}_m} \mathfrak{y}\bigl(\tilde{\beta}_u - \mathfrak{A}(\mathbf{w}, F_u) \circ \mathcal{C}(c_1, c_2)\bigr).$
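To illustrate problem (11), the sketch below searches for the weight vector $\mathbf{w}$ of the first-order polynomial by plain random search over synthetic estimates. The data are invented, the quality functional is simplified to an argmax decision instead of the decision rule $\mathcal{C}(c_1, c_2)$, and the article itself does not fix a particular optimization procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: k base recognition operators, m objects, l classes.
k, m, l = 4, 200, 10
B = rng.random((k, m, l))              # B[i, u, :] = estimates of operator B_i for object F_u
true_labels = rng.integers(0, l, size=m)

def quality(w):
    """Share of objects whose combined estimate vector points to the true
    class -- a simplified stand-in for the functional Q_A(w)."""
    combined = np.tensordot(w, B, axes=1)          # shape (m, l)
    return float(np.mean(combined.argmax(axis=1) == true_labels))

# Plain random search over normalized weight vectors as one possible way to
# approximate w* = argmax Q_A(w).
best_w, best_q = None, -1.0
for _ in range(2000):
    w = rng.random(k)
    w /= w.sum()
    q = quality(w)
    if q > best_q:
        best_w, best_q = w, q

print(best_w, best_q)
```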

5 Proposed Method of Solution

An ensemble approach is proposed to solve problem (11). This approach is a logical continuation of [25], which considered theoretical issues of constructing compositions of recognition algorithms. On its basis, a model of recognition operators is proposed, built on the integration of first-level recognition operators. The main idea of the proposed model is to form representative (support) recognition operators within the set of first-level recognition operators. The model includes the following main stages.

At the first stage, a set of $k'$ ($k' < k$) “independent” subsets of interconnected recognition operators is formed. Let $W$ denote all possible disjoint subsets of the recognition operators under consideration $\{B_1, \ldots, B_i, \ldots, B_k\}$. From $W$ we select a set of subsets of interconnected recognition operators, denoted by $W'$ ($W' \subseteq W$, $k' = |W'|$).

As a result of this stage, $k'$ subsets of interrelated recognition operators are determined:

$W' = \{\mathbf{B}_1, \ldots, \mathbf{B}_v, \ldots, \mathbf{B}_{k'}\}. \quad (12)$

The subsets in (12) satisfy the following conditions:

$\bigcap_{v=1}^{k'} \mathbf{B}_v = \varnothing; \quad \bigcup_{v=1}^{k'} \mathbf{B}_v = \mathbb{B}.$

Thus, at this stage the parameter $k'$ is set; its value is determined during the learning process when solving specific problems.

At the second stage, a set of support (reference) recognition operators is determined. When defining this set, the following conditions are used: (1) each selected support recognition operator must be strongly connected with the recognition operators of its own subset $\mathbf{B}_v$, $v = \overline{1, k'}$; (2) the support operators, one selected from each subset $\mathbf{B}_v$, $v = \overline{1, k'}$, must be independent of each other. During this stage, the cardinality of the considered set of recognition operators $\mathbb{B}$ decreases. We denote the formed set of support recognition operators by $\mathbb{B}'$:

$\mathbb{B}' = (B_{i_1}, \ldots, B_{i_v}, \ldots, B_{i_{k'}}),$

where $k' < k$, $k' = |\mathbb{B}'|$, $k = |\mathbb{B}|$.

Thus, as a result of this stage, the support recognition operators are determined; they correspond to a $k$-dimensional Boolean vector $\mathbf{r} = (r_1, \ldots, r_i, \ldots, r_k)$, $k = |\mathbf{r}|$. Here $r_i = 1$ if the recognition operator $B_i$ is a support operator, and $r_i = 0$ otherwise.

At the third stage, a generalized recognition operator is defined, which calculates a numerical proximity estimate characterizing the similarity of an object $F_u$ to the objects of the class $K_j$. Let each estimate for the class $K_j$ calculated by the support recognition operator $B_i$ ($B_i \in \mathbb{B}'$) correspond to a numerical parameter $\mathfrak{g}_i$. Then the estimate for the class $K_j$ is calculated using all support recognition operators $\mathbb{B}'$ as follows:

$\mathfrak{B}(K_j, F_u) = \sum_{B_i \in \mathbb{B}'} \mathfrak{g}_i B_i(K_j, F_u), \quad (13)$

where $\mathfrak{g}_i$ is the parameter of the support recognition operator $B_i$. We denote the set of such parameters for all support operators $\mathbb{B}'$ by $\mathbf{g} = (\mathfrak{g}_1, \ldots, \mathfrak{g}_i, \ldots, \mathfrak{g}_{k'})$.

Thus, we have defined a model of two-level recognition operators built on the basis of an ensemble approach. An arbitrary recognition operator $\mathfrak{B}$ from this model is completely determined by specifying a set of parameters $\tilde{\pi}$:

$\tilde{\pi} = (k', \mathbf{r}, \mathbf{g}).$

We denote the set of all recognition operators of the proposed model by $\mathfrak{B}(\tilde{\pi}, F)$. The search for the best recognition operator is carried out in the parameter space $\tilde{\pi}$ according to [28–30]. A sketch of the second-level operator defined by these parameters is given below.
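A minimal sketch of the two-level operator defined by the parameters $(k', \mathbf{r}, \mathbf{g})$: the Boolean vector $\mathbf{r}$ selects the support operators and the weights $\mathbf{g}$ combine their class estimates according to (13); the numbers are invented for illustration:

```python
import numpy as np

def second_level_operator(base_scores, r, g):
    """Second-level recognition operator built from the support operators.

    base_scores: array (k, l) -- estimates of the k first-level operators
                 B_1..B_k for one object over l classes.
    r:           Boolean vector of length k marking the support operators.
    g:           weights of length k; only entries with r_i = 1 are used,
                 as in formula (13).
    Returns the combined estimate vector of length l.
    """
    base_scores = np.asarray(base_scores, dtype=float)
    r = np.asarray(r, dtype=bool)
    g = np.asarray(g, dtype=float)
    # Weighted sum over the support operators only.
    return np.tensordot(g[r], base_scores[r], axes=1)

# Hypothetical example: 4 first-level operators, 3 classes,
# operators 1 and 3 selected as support operators.
scores = [[0.7, 0.2, 0.1],
          [0.3, 0.4, 0.3],
          [0.6, 0.1, 0.3],
          [0.2, 0.5, 0.3]]
print(second_level_operator(scores, r=[1, 0, 1, 0], g=[0.6, 0.0, 0.4, 0.0]))
```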

6 Experimental Studies

Experimental studies were carried out using training and control samples, which were formed during the experiments for each facial component separately using the k-fold cross-validation method.
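As an illustration of how such per-component training and control splits could be formed (scikit-learn's KFold is assumed here; the component names and synthetic data are hypothetical and not taken from the original experiments):

```python
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Hypothetical feature matrices, one per facial component; in the experiments
# each component is split into training/control samples separately.
components = {name: rng.random((100, 64)) for name in ("eyes", "nose", "mouth")}
labels = rng.integers(0, 10, size=100)

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for name, X in components.items():
    for train_idx, test_idx in kfold.split(X):
        X_train, X_control = X[train_idx], X[test_idx]
        y_train, y_control = labels[train_idx], labels[test_idx]
        # ... train and evaluate the base recognition operator for this component ...
    print(name, "split into", kfold.get_n_splits(), "folds")
```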

In order to ensure the representativeness and diversity of the source data, images from both the ORL and LABDPS databases were used to form the initial samples of facial images.

The ORL (Olivetti Research Laboratory) database includes 400 frontal images of the faces of 40 people, differing in facial expression, lighting conditions, and the presence of a beard, moustache, or glasses [13]. All samples were obtained against a dark, uniform background, with the faces in an upright pose with slight lateral rotation. Each face image has a size of 92×112 pixels with 256 shades of grey. The permissible tilt and rotation of the head is up to 20°. Figure 3 shows examples of facial images from the ORL database.


Figure 3 Examples of facial images presented in the ORL database.

The LABDPS (Laboratory Data Processing Systems) database was created in the Biometric Systems laboratory of the Research Institute for the Development of Digital Technologies and Artificial Intelligence as part of this project. This database includes 415 colour images of the faces of 34 people, obtained at different periods and differing in lighting conditions, head rotation and tilt. Each face image has a size of 210×250 pixels. Figure 4 shows examples of facial images from the LABDPS database.


Figure 4 Examples of facial images presented in the LABDPS database.

As the basic recognition algorithms used to create the algorithmic ensemble, four algorithms belonging to the models of algorithms for calculating estimates (ACE), well known in image recognition, were selected. According to [23], these models are based on the principle of partial precedent. The main idea of this principle is to assess the “closeness” between parts of the previously described classified objects and the object being recognized. The presence of such closeness constitutes a partial precedent and is assessed according to a given rule.

The operating principle of ACE is to calculate a degree of similarity that characterizes the “closeness” of the recognized and reference objects over a system of reference feature sets, which are subsets of a given set of features. In these algorithms, the recognition object is considered simultaneously in a variety of subspaces of the feature space. As stated earlier, the main idea of the ACE class is to compare objects in parts, called ω-parts. However, it is not always known which combinations of features are the most informative. Therefore, in this model of algorithms, the degree of similarity of objects is calculated by comparing all possible or specified combinations of features included in the descriptions of the objects. The problem of determining the similarity and difference of objects is formulated as a parametric one; therefore, a stage of tuning the ACE on the training set is distinguished, at which the optimal values of the introduced parameters are selected. The quality criterion is the recognition accuracy.

Within the ACE model, the recognition procedure is carried out in the following sequence:

– a system of reference sets of the algorithm is specified, according to which the recognized object is analysed;

– the concept of proximity is defined on the set of ω-parts of object descriptions;

– the scheme for calculating the proximity estimate between the reference and recognized objects is determined, i.e. a quantity called the estimate for a pair of objects is calculated;

– a method is specified for generating estimates for each of the classes over a fixed reference set, based on the estimates for pairs of objects;

– the method for generating the total estimate for each of the classes over all reference subsets is determined;

– a decision rule is specified which, based on the estimates for the classes, assigns the recognized object to one of the classes or refuses recognition.

Concluding the consideration of algorithms for calculating estimates, it should be noted that setting the values of the corresponding parameters determines a specific algorithm. By varying the selection method and the parameters, it is possible to construct an ACE model that provides the best solution to the recognition problem (for example, in terms of a minimum of errors and recognition refusals). A simplified sketch of such an estimate-calculation scheme is given below.
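As a simplified illustration of this scheme (not the exact algorithms A1–A4 used in the experiments), the sketch below counts partial precedents over a few reference feature subsets: a pair of objects is considered close on a subset if every feature of that subset differs by less than a threshold, and the counts are accumulated into per-class estimates. The data, reference subsets and threshold are invented:

```python
import numpy as np

def ace_estimates(x, train_X, train_y, reference_sets, eps, n_classes):
    """A simplified sketch of an estimate-calculation (ACE) scheme.

    x:              feature vector of the recognized object.
    train_X/y:      training objects and their class labels.
    reference_sets: list of reference feature subsets (index tuples).
    eps:            per-feature proximity threshold; a pair of objects is a
                    partial precedent on a subset if every feature of that
                    subset differs by less than eps.
    Returns a vector of estimates, one per class (larger means closer).
    """
    estimates = np.zeros(n_classes)
    for omega in reference_sets:                   # one reference set (omega-part)
        idx = np.asarray(omega)                    # feature indices of this omega-part
        close = np.all(np.abs(train_X[:, idx] - x[idx]) < eps, axis=1)
        for j in range(n_classes):
            # count partial precedents with the training objects of class K_j
            estimates[j] += np.count_nonzero(close & (train_y == j))
    # total estimate per class, normalized by class size and number of subsets
    class_sizes = np.bincount(train_y, minlength=n_classes)
    return estimates / (len(reference_sets) * np.maximum(class_sizes, 1))

# Tiny synthetic example: 2 classes, 3 features, two reference subsets.
rng = np.random.default_rng(1)
train_X = np.vstack([rng.normal(0, 0.1, (20, 3)), rng.normal(1, 0.1, (20, 3))])
train_y = np.array([0] * 20 + [1] * 20)
x = rng.normal(1, 0.1, 3)
print(ace_estimates(x, train_X, train_y, [(0, 1), (1, 2)], eps=0.3, n_classes=2))
```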

Thus, as noted above, to create the ensemble of algorithms used in the experimental studies, four algorithms for calculating estimates were selected, designated in the experiments as $A_1, A_2, A_3, A_4$. In accordance with the general structural representation of recognition algorithms in the form (4), these algorithms can be represented as $A_1 = B_1 \circ C_1$, $A_2 = B_2 \circ C_2$, $A_3 = B_3 \circ C_3$, $A_4 = B_4 \circ C_4$. Since these algorithms belong to the same family, in order to eliminate the correlation of their results, each algorithm was trained on a separate sample randomly generated from the ORL and LABDPS databases. As a result of training, the optimal parameter values were determined for the recognition operators $B_1, B_2, B_3, B_4$ and the decision rules $C_1, C_2, C_3, C_4$.

During the experiments, two options for using the basic algorithms were implemented: separately and as an ensemble. In the first option, according to (4), the recognition operator $B_i$ ($i = \overline{1,4}$) and the decision rule $C_i$ ($i = \overline{1,4}$) are executed sequentially. In other words, when the algorithms were used separately, hard recognition was carried out, that is, recognition with a final decision made on the face image in question. Testing of the basic algorithms implemented according to the first option showed that algorithm $A_1$ provides an accuracy of recognition of objects in the control sample of 93.57%, $A_2$ – 89.28%, $A_3$ – 95.31%, $A_4$ – 91.43%.

When using the basic algorithms as part of an ensemble, only the recognition operators $B_i$ ($i = \overline{1,4}$) are executed, which determine numerical estimates of the proximity of the recognized object $F_u$ to the classes $K_1, \ldots, K_j, \ldots, K_l$:

$b_u^{i} = (b_{u1}^{i}, \ldots, b_{uj}^{i}, \ldots, b_{ul}^{i}), \quad i = \overline{1,4}.$

Next, using the generalized recognition operator $\mathfrak{B}$ defined according to (13), the integral estimate of the proximity of the object $F_u$ to the class $K_j$ is calculated from the estimates $b_{uj}^{i}$, $i = \overline{1,4}$:

$b_{uj} = \sum_{i=1}^{4} \mathfrak{g}_i b_{uj}^{i},$

where $\mathfrak{g}_i$ is the parameter of the recognition operator $B_i$.

The membership of an object $F_u$ in a class $K_j$ is then determined using the decision rule (6). The accuracy of recognizing objects in the control sample using the ensemble of basic algorithms $A_1, A_2, A_3, A_4$ was 98.84%.

Thus, it can be stated that the joint use of basic algorithms in the form of an ensemble provided higher recognition accuracy compared to the results of the separate use of these algorithms.

To summarize: four algorithms ($A_1$–$A_4$) belonging to the class of algorithms for calculating estimates were used as the basic recognition algorithms forming the algorithmic ensemble [29, 30]. To eliminate the correlation of their results, they were trained on four training samples randomly generated from the ORL and LABDPS databases. Two options for using the basic algorithms were implemented: separate and ensemble. Separate use showed that $A_3$ provides a recognition accuracy of 95.31%, $A_1$ – 93.57%, $A_4$ – 91.43%, and $A_2$ – 89.28%. The ensemble of these algorithms provided an accuracy of 98.84%, which exceeds the accuracy of the best basic algorithm, $A_3$.

7 Conclusion

The main goal of this work, motivated by scientific research in the field of facial biometrics, is to demonstrate the great potential of an ensemble approach to face recognition for mission-critical applications of mobile biometric personal identification devices. In previous studies involving individual algorithms, difficulties arose in choosing the correct recognition algorithm for a particular problem. These difficulties can be eliminated by using algorithmic ensembles, which is confirmed, in particular, by the experimental results presented in this work. Using the basic algorithms in mobile devices as an ensemble provided higher face recognition accuracy than using these algorithms separately.

Thus, it should be concluded that a mobile access control system built on the basis of a single correct face recognition algorithm does not by itself provide one hundred percent reliable identity verification. Therefore, if high reliability of such systems is required, especially in critical applications, in particular in mobile devices, then a combination of several face recognition algorithms should be used in the form of an ensemble. It is this approach to solving the face recognition problem that is considered in this work.

One of the main directions for further research into the ensemble approach to face recognition is the wider use of the mathematical apparatus of the algebraic theory of pattern recognition, which provides a general approach to constructing correct algorithmic compositions using purely algebraic methods. The results of such studies are especially important for creating mobile devices intended, in particular, for verifying users when they remotely access information resources with restricted access status. This expands the scope of practical application, in particular, to active video surveillance in on-board systems [31, 32] and cloud-based systems [33].

References

[1] Kondratenko, Y., and Mokhor, V. (2023). Guest Editorial Column: Special Issue of Journal of Mobile Multimedia “Artificial Intelligence in Automation with Mobile Applications”. Journal of Mobile Multimedia, 19(03), 1–3.

[2] Kuntsevich, V., Gubarev V., Kondratenko Y., Lebedev D. and Lysenko V. (Eds). Control Systems: Theory and Applications. River Publishers, Gistrup, Delft, 2018.

[3] Purahong, B., Chutchavong, V., Aoyama, H., and Pintavirooj, C. (2020). Hybrid Facial Features with Application in Person Identification. Journal of Mobile Multimedia, 16(1–2), 245–266. https://doi.org/10.13052/jmm1550-4646.161212.

[4] Kortli, Yassin, Maher Jridi, Ayman Al Falou, and Mohamed Atri. 2020. Face Recognition Systems: A Survey. Sensors 20, no. 2: 342. https://doi.org/10.3390/s20020342.

[5] Belhumeur PN, Hespanha JP, Kriegman DJ (1997) Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Trans Pattern Anal Mach Intell 19(7):711–720.

[6] Comon P (1994) Independent component analysis; A new concept? Signal Process 36(3):287–314.

[7] Nabatchian A, Abdel-Raheem E, Ahmadi M (2008) Human face recognition using different moment invariants: a comparative study, image and signal processing, 2008. In: CISP ’08. Congress on, 27–30 May 2008, China.

[8] Narzillo M, Bakhtiyor A, Shukrullo K, Bakhodirjon O, Gulbahor A (2021) Peculiarities of face detection and recognition. 2021 International Conference on Information Science and Communications Technologies (ICISCT). Tashkent, Uzbekistan, 2021, pp. 1–5, https://doi.org/10.1109/ICISCT52966.2021.9670086.

[9] Heisele B, Serre T, Poggio T (2007) A component-based framework for face detection and identification. Int J Comput Vis 74(2):167–181.

[10] Zhang W, Shan S, Gao W, Chang Y, Cao B (2005) Component-based cascade Linear Discriminant Analysis for face recognition. In: Advances in biometric person authentication lecture notes in computer science, Vol. 3338, pp. 19–79.

[11] Paul SK, Bouakaz S, Rahman CM, Uddin MS (2021) Component-based face recognition using statistical pattern matching analysis. Pattern Anal Appl 24(1):299–319. https://doi.org/10.1007/s10044-020-00895-4.

[12] S. Takeda, T. Terada, and M. Tsukamoto. (2012). Implicit context awareness by face recognition. Journal of Mobile Multimedia, Vol. 8, No. 2, pp. 132–148.

[13] Voroncov K.V. Mashinnoe obuchenie (kurs lekcij). Chast’ pervaja. – Shkola analiza dannyh, 2019. – URL: http://www.machinelearning.ru [in Russian].

[14] Adjabi I., Ouahabi A., Benzaoui A., Taleb-ahmed A. (2020) Past, Present, and Future of Face Recognition: A Review. Electronics, 9(8), 1188; https://doi.org/10.3390/electronics9081188.

[15] Striuk, O., and Kondratenko, Y. (2023). Implementation of Generative Adversarial Networks in Mobile Applications for Image Data Enhancement. Journal of Mobile Multimedia, 19(03), 823–838. https://doi.org/10.13052/jmm1550-4646.1938.

[16] Mohd Naved, V. Ajantha Devi, Loveleen Gaur, Ahmed A. Elngar (Eds). IoT-enabled Convolutional Neural Networks: Techniques and Applications. River Publishers, Gistrup, Delft, 2023.

[17] Kondratenko, Y., Gerasin, O., Kozlov, O., Topalov, A, Topalov, A, Kilimanov, B. (2021). Inspection mobile robot’s control system with remote iot-based data transmission. Journal of Mobile Multimedia, 17(04), 499–526. https://doi.org/10.13052/jmm1550-4646.1742.

[18] Adhikari S., Saha S., “Multiple classifier combination technique for sensor drift compensation using ANN & KNN,” 2014 IEEE International Advance Computing Conference (IACC), Gurgaon, India, 2014, pp. 1184–1189, https://doi.org/10.1109/IAdCC.2014.6779495.

[19] Kittler J., Hatef M., Robert P., Duin W., Matas J. On combining classifiers. IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 3, pp. 226–239, Mar. 1998.

[20] Mohandes M., Deriche M., Aliyu S. Classifiers Combination Techniques: A Comprehensive Review. IEEE Access. pp. 1–14., Apr. 2018.

[21] Chitroub S. Classifier combination and score level fusion: Concepts and practical aspects. Int. J. Image Data Fusion., vol. 1, no. 2, pp. 113–135.

[22] Polikar R. Ensemble based systems decision making. IEEE Circuits Syst. Mag., vol. 6, no. 3, pp. 21–45.

[23] Kuncheva L. Combining Pattern Classifier: Methods and Algorithms. Hoboken, NJ, USA: Wiley, 2004.

[24] Zhuravljov Ju.I. Ob algebraicheskom podhode k resheniju zadach raspoznavanija ili klassifikacii // Problemy kibernetiki. – Moskva,1978. – T.33 – S. 5 – 68. [in Russian].

[25] Zhuravljov Ju. I., Rudakov K. V. Ob algebraicheskoj korrekcii procedur obrabotki (preobrazovanija) informacii // Problemy prikladnoj matematiki i informatiki. – 1987. – pp. 187–198. [in Russian].

[26] Voroncov K. V. Optimizacionnye metody linejnoj i monotonnoj korrekcii v algebraicheskom podhode k probleme raspoznavanija // ZhVM i MF. – 2000. – T. 40, 1. – pp. 166–176. [in Russian].

[27] Rudakov K. V. Polnota i universal’nye ogranichenija v probleme korrekcii jevristicheskih algoritmov klassifikacii // Kibernetika. – 1987. – 3. – pp. 106–109. [in Russian].

[28] Fazilov, Sh.Kh., Mirzaev, O.N., Kakharov, S.S. (2023). Building a Local Classifier for Component-Based Face Recognition. Lecture Notes in Computer Science, vol. 13741. Springer, Cham. https://doi.org/10.1007/978-3-031-27199-1_19.

[29] Yu.I. Zhuravlev, V.V. Ryazanov, and O.V. Senko, Recognition. Mathematical Methods. Software System. Practical Applications. Moscow: Fazis, 2006.

[30] Kamilov M.M., Fazylov Sh.H., Mirzaev N.M., Radzhabov S.S. Modeli algoritmov raspoznavanija, osnovannyh na ocenke vzaimosvjazannosti priznakov – Tashkent: Nauka i tehnologija, 2020. – 149 p. [in Russian].

[31] Opanasenko, V., Palahin, A., and Zavyalov, S. “The FPGA–Based Problem-Oriented On–Board Processor”, in Proceedings of the 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, vol. 1, (IDAACS’2019), 18–21 September 2019. – Metz, France. – pp. 152–157. https://doi.org/10.1109/IDAACS.2019.8924360.

[32] Opanasenko V.M., Fazilov Sh. Kh, Radjabov S.S., Kakharov S.S. “Multilevel Face Recognition System,” Cybern Syst Anal, 60, pp. 146–151, 2024. https://doi.org/10.1007/s10559-024-00655-w.

[33] Malakhov, K. S. (2023). Letter to the Editor – Update from Ukraine: Development of the Cloud–based Platform for Patient–centered Telerehabilitation of Oncology Patients with Mathematical–related Modeling. International Journal of Telerehabilitation, 15(01). 1–3. https://doi.org/10.5195/ijt.2023.6562.

Biographies


Volodymyr Mykolaevich Opanasenko, Doctor of Science, Professor. He received the master’s degree in computer engineering from the Kazan Aviation Institute (1979), the Ph.D. degree (1988) in Elements and Devices of Computers and Control Systems from the V.M. Glushkov Institute of Cybernetics of NAS of Ukraine, and the Dr.Sc. degree (2007). He is currently working as a Leading Researcher of the Department of Microprocessor Devices at the V.M. Glushkov Institute of Cybernetics of NAS of Ukraine. His research interests include pattern recognition, artificial intelligence systems, and reconfigurable computing.


Shavkat Khayrullaevich Fazilov, Doctor of Science, Professor, Research Institute for the Development of Digital Technologies and Artificial Intelligence, 17A, Buz-2, Mirzo Ulugbek, 100125 Tashkent, Republic of Uzbekistan. He is currently working as a Head Researcher in the Laboratory of Biometric Systems of this institute. He received the Doctor of Science degree in Technical Sciences (Computer Science) from the Institute of Cybernetics of the NAS of Uzbekistan. His research interests include pattern recognition, artificial intelligence systems and image processing.


Olimjon Nomazovich Mirzaev, Research Institute for the Development of Digital Technologies and Artificial Intelligence, 17A, Buz-2, Mirzo Ulugbek, 100125 Tashkent, Republic of Uzbekistan. He is currently working as a Senior Researcher in the Laboratory of Biometric Systems of this institute. He holds a Ph.D. in Technical Sciences (Computer Science), which he received from the Research Institute for the Development of Digital Technologies and Artificial Intelligence in Uzbekistan. His research interests include pattern recognition, artificial intelligence systems and image processing.


Shukrullo Sa’dullo ugli Kakharov, Kokand University, 28A, Turkistan, 150700 Kokand, Republic of Uzbekistan. He is currently working as an Associate Professor in the Department of Digital Technologies and Mathematics, Faculty of Economics and Tourism, Kokand University. He holds a Ph.D. in Technical Sciences (Computer Science), which he received from the Research Institute for the Development of Digital Technologies and Artificial Intelligence in Uzbekistan. His research interests include pattern recognition, artificial intelligence systems and image processing.
