Boosting Based Implementation of Biometric Authentication in IoT
B. Thilagavathi1 and K. Suthendran2
1Assistant Professor, Department of Electrical Sciences, Karunya Institute of Technology and Sciences, Coimbatore, India
2Associate Professor, Department of Information Technology, Kalasalingam Academy of Research and Education, Krishnankoil, India
E-mail: thilagavathib@karunya.edu; k.suthendran@klu.ac.in
Received 22 January 2018; Accepted 05 April 2018;
Publication 26 April 2018
In security and access-control applications, biometric authentication plays an important role in identifying a person, and face recognition analysis is the prerequisite process for the entire authentication chain. This paper presents an automatic real-time implementation of a face recognition system in a high-level language, Python. Compared with biometrics that use still images, video-based biometrics hold far more information than a single image. The paper provides a solution for automatic real-time face recognition from video using the AdaBoost algorithm, the Haar cascade classifier, and the Local Binary Pattern Histogram (LBPH). From the video stream, the input images are trained by the AdaBoost algorithm, which is implemented in a cascade classifier. AdaBoost, short for adaptive boosting, is a learning algorithm that combines weak classifiers to form a strong classifier. Here, the real-time test image is compared with 627 frames of training images obtained from a 47.7 MB (56088648 bytes) video. A hardware implementation of face recognition is realized, and the result can be stored and viewed at http://169.254.108.24. The described procedure thus authenticates users through the combined performance of AdaBoost and the cascade classifier.
Biometric recognition depends on an individual's biological and behavioural characteristics [6]. Several biometric traits have been proposed for person recognition, such as fingerprint, face, and iris. The disadvantage of fingerprint recognition is that it is error-prone under dryness, dirt on the finger, and ageing [9]. The merit of face recognition is the non-invasive, unique identification of an individual from video [10, 19]. The parameters that affect the performance of the system can be categorized into physical and external factors: physical factors include pose and expression, while external factors include ageing, scale, and occlusion [30]. Image sequences from video are used in several research areas that apply face recognition, such as surveillance, biometrics, and computer and embedded applications [12, 32]. Face recognition analysis depends on locating and classifying face and non-face regions in images irrespective of size, position, and illumination conditions [5]. Many early techniques focused either on reducing the dimensionality of the facial image or on extracting a particular feature from the image and then classifying the output. These methods include Eigenfaces, which is based on principal component analysis, and Fisherfaces, which is based on linear discriminant analysis [16, 20]. The main challenges of any face recognition system are coping with changes in light intensity and pose; a method based on local binary patterns [21] has been shown to be relatively robust to changes in light intensity. Most recently, deep learning techniques have been applied to face recognition with a high level of success [22, 23]. Numerous approaches have been proposed for face detection in images. Early research considered colour, motion, and texture features [3, 7], and the limitations of this early work have paved the way for new research in the field.
The Viola and Jones detector is the most common of the statistical face detection approaches [17, 18]. Image classification is one of the classical problems in image processing [13]: the goal is to predict the category of an input image from its features. Several approaches are used to solve this problem, such as K-Nearest Neighbour (K-NN), Artificial Neural Networks (ANN), Adaptive Boosting (AdaBoost) [14], and Support Vector Machines (SVM). Viola and Jones used a variant of the AdaBoost algorithm [1, 2] that attains rapid and robust face detection in images. This paper provides a solution for automatic real-time face recognition from video using the AdaBoost algorithm, a cascade classifier, and the Local Binary Pattern Histogram (LBPH). The AdaBoost algorithm is implemented with a cascade classifier to train faces from the video stream.
In this paper, the real-time test image is compared with 627 frames of training images obtained from a 47.7 MB (56088648 bytes) video. A hardware implementation of face recognition is realized, and the result can be stored and viewed at http://169.254.108.24. With the processor working on 640 × 480 pixel images, faces are detected at 32 frames per second. Among face detection algorithms, the AdaBoost-based method [1–4] generates a strong learner by iteratively adding weak learners; in this paper the simple AdaBoost learning algorithm is used. AdaBoost uses a selection scheme over the weak classifiers hj(x), where x represents the input image and t the iteration number. In the boosting algorithm, each weak classifier ht(x) is trained with a single feature as in [17, 6]. The input images xn, n ∈ {1, 2, …, N} (with N the number of samples), and their class labels yn ∈ {1, –1} are used to find the weak classifiers hj(x) [31]. These weak classifiers are then used to construct a strong classifier H(x). When the iteration number t exceeds the desired number of boosting iterations T, the output is the strong classifier H(x); otherwise t is incremented, the weights of the training samples are updated, and a new hj(x) is selected. AdaBoost-based [3, 4] LBPH [28] face recognition can be implemented effectively in hardware. LBPH [28] is a structure that compares each pixel of an image with its adjacent pixels: a bit is set to 1 when the adjacent pixel value is higher than the threshold, and to 0 otherwise, forming a binary matrix.
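The boosting loop just described can be sketched as follows. This is a minimal illustrative Python sketch, not the paper's implementation: the decision-stump weak classifiers, the toy one-dimensional features, and the function names are hypothetical, but the weight-update and weighted-vote structure follows the standard AdaBoost scheme the text outlines.

```python
import math

def stump(theta, parity):
    """Weak classifier h(x): +1 when parity * x < parity * theta, else -1.
    (A hypothetical 1-D stand-in for a single-feature classifier.)"""
    return lambda x: 1 if parity * x < parity * theta else -1

def adaboost(samples, labels, candidates, T):
    """Run T boosting rounds; return [(alpha_t, h_t)] for the strong classifier."""
    n = len(samples)
    w = [1.0 / n] * n                      # uniform initial sample weights
    ensemble = []
    for _ in range(T):
        # pick the candidate weak classifier with the lowest weighted error
        best_h, best_err = None, float("inf")
        for h in candidates:
            err = sum(wi for wi, x, y in zip(w, samples, labels) if h(x) != y)
            if err < best_err:
                best_h, best_err = h, err
        best_err = max(best_err, 1e-10)    # avoid division by zero
        alpha = 0.5 * math.log((1 - best_err) / best_err)
        ensemble.append((alpha, best_h))
        # re-weight: misclassified samples gain weight, correct ones lose it
        w = [wi * math.exp(-alpha * y * best_h(x))
             for wi, x, y in zip(w, samples, labels)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def strong_classify(ensemble, x):
    """H(x) = sign(sum_t alpha_t * h_t(x))."""
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```

On toy data that one stump already separates, the loop converges immediately; on harder data, successive rounds add stumps that concentrate on the previously misclassified samples.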
A test image is captured from the video stream, and the face is detected using the AdaBoost algorithm. The program loads the Haar features, which encode the characteristics of faces, into memory and uses them to detect human faces. Training is done with the AdaBoost algorithm on the 47.7 MB video, and LBPH together with the Haar cascade classifier is used to recognize the face. The recognized face, with its name, is authenticated and displayed at the IP address http://169.254.108.24.
The algorithm uses features reminiscent of Haar basis functions. For every pixel in the image, a few operations are performed to calculate the integral image [18]. To build a cascade classifier, a small set of important features is required [5]. Although the complete set of Haar features is far larger than the number of pixels, fast classification requires that only a small subset be evaluated; the learning process therefore depends on a limited set of critical features rather than the large majority of available features [17, 18]. Feature selection is achieved by constraining the weak learner [15]: a new weak classifier is selected at each stage of the boosting process. AdaBoost provides an effective learning algorithm with strong generalization bounds [8, 9, 12]. The component classifiers are combined in a cascade structure to increase detection speed. Haar features were introduced to obtain a quick response in the boosted implementation [18].
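The integral-image computation mentioned above admits a short sketch (the function names and the toy image are illustrative, not taken from the paper). Each entry stores the sum of all pixels above and to the left, so any rectangle, and hence any rectangular Haar feature, can be summed with four look-ups:

```python
def integral_image(img):
    """Return the integral image of a 2-D list of pixel values,
    padded with a zero row and column for easy corner look-ups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), width w, height h,
    computed from four corners of the integral image."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """A two-rectangle Haar feature: sum of the left half minus the right half."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)
```

Because each feature evaluation is a constant number of array accesses regardless of rectangle size, thousands of Haar features per sub-window remain affordable at video rates.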
Different techniques for face detection have been available since the 1970s, and face detection methods fall into two important streams: feature-based and image-based. The feature-based method extracts human facial features, which depend on geometry and distances; its main advantage is a higher detection rate, but its drawback is that it is not applicable to non-frontal faces. The image-based approach [25] differentiates face and non-face components; machine learning algorithms are preferred to learn better face features, and these can be categorized as appearance-based or boosting-based [26]. In the appearance-based method the classification stage need not be considered separately; neural networks [9, 10], Support Vector Machines (SVM) [11], and Bayesian classifiers [12, 13] are used for classification. The demerit of the appearance-based approach is that it is not suitable for real-time performance and takes time to process an image. Viola and Jones [17] were the first researchers to use the boosting method, achieving a detection rate of 15 frames/second; in this paper, 32 frames/second is achieved using the AdaBoost algorithm.
A weak classifier hj(x) consists of a feature fj, a threshold θj, and a parity pj indicating the direction of the inequality sign:

hj(x) = 1 if pj fj(x) < pj θj, and 0 otherwise.
AdaBoost algorithms make a strong classifier from a combination of weak classifiers; each threshold value is chosen from a finite or infinite set. The strong classifier takes the form

H(x) = 1 if Σ_{t=1}^{T} αt ht(x) ≥ θ, and 0 otherwise,

where θ represents the stage threshold, αt denotes the weak classifier's weight, and T represents the total number of weak classifiers. The AdaBoost learning algorithm [5] is used for feature extraction and to construct the classifiers. Its result depends on the array references for the features, the constant coefficients of the classifiers, and the threshold values; these are stored and can be transferred to another location. When an image is given as input, the algorithm calculates a threshold for the test image, and this threshold value is used throughout the algorithm to produce the respective output. Hence the threshold is static for a still image, but this is not the case for video processing. A video is a set of images taken as continuous frames of moving objects; therefore, when a video is processed, many threshold values are acquired at the various stages, ranging from –5.0425500869750977e+00 to –2.9928278923034668e+00, whereas for a still image a single threshold value of –0.0138 is acquired. In video processing the threshold is therefore dynamic, which leads to dynamic output from the algorithm.
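The stage decision just described can be written out directly. The sketch below is hypothetical: the weights, votes, and threshold are made-up values rather than the trained coefficients from the paper, but the decision rule is the weighted vote against a stage threshold θ given above.

```python
def stage_decision(votes, alphas, theta):
    """H(x) = 1 if sum_t alpha_t * h_t(x) >= theta, else 0.

    votes  -- list of weak-classifier outputs h_t(x), each 0 or 1
    alphas -- list of weak-classifier weights alpha_t
    theta  -- stage threshold (static for a still image,
              re-derived per stage when processing video)
    """
    score = sum(a * v for a, v in zip(alphas, votes))
    return 1 if score >= theta else 0
```

Lowering θ makes a stage more permissive (fewer faces missed, more background passed on); raising it does the opposite, which is why per-stage thresholds matter when frame content keeps changing.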
The Local Binary Pattern Histogram (LBPH) features [11, 21] have high discriminative power and tolerance. The number of pixels decides the number of LBPH features. For a high detection rate, Haar features are combined with LBPH features, which also gives a faster training stage. The Haar cascade classifier is built so that the initial stages of the cascade have fewer features than the final stages. The training error theoretically decreases with the number of iterations during the cascade learning process [7]. A fast feature selection approach was implemented by Jianxin Wu [8].
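The LBP operator summarized earlier can be sketched in a few lines of Python. This is an illustrative implementation under stated assumptions: the clockwise 3 × 3 neighbourhood ordering and the toy images are choices made here, not details taken from the paper.

```python
def lbp_code(img, y, x):
    """8-bit LBP code of the pixel at (y, x): each of the eight neighbours
    contributes a 1 when it is >= the centre pixel, else a 0."""
    c = img[y][x]
    neighbours = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
                  img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
                  img[y + 1][x - 1], img[y][x - 1]]          # clockwise order
    code = 0
    for n in neighbours:
        code = (code << 1) | (1 if n >= c else 0)
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels;
    this histogram is the LBPH feature vector for the region."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

In LBPH face recognition the image is typically split into a grid of regions, one histogram per region, and the concatenated histograms are compared between the test face and the trained faces.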
A Haar cascade is defined by its Haar features and cascade classifiers. A Haar feature considers adjacent rectangular regions at a specific location, sums the pixel intensities in each region, and computes the difference between these sums; this difference is used for face detection [24, 29]. During detection, a window is scanned over the input image; at every sub-window the Haar features are calculated and the difference is compared with the threshold value. To train this algorithm, a large number of positive and negative samples is required, and for every feature the threshold that best classifies all the training images is calculated. Viola and Jones [18] introduced the cascade detector, which is organized as a sequence of increasingly complicated classifiers. The promising zones of the image are focused on in order to increase the performance of the face detector: the first stage of the cascade specifies which sub-windows must be evaluated by the next stage. If a sub-window is labelled as non-face, it is excluded and its processing is terminated; if it is labelled as a face, it is evaluated by the next classifier. The sub-window scan continues over the whole region, labelling face components; increasing the number of sub-windows improves performance.
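The cascade's early-rejection behaviour described above can be sketched as follows. The stage functions here are hypothetical stand-ins (simple predicates on a window value); a real detector would use trained boosted classifiers over Haar features at each stage.

```python
def cascade_classify(window, stages):
    """Return True only if every stage accepts the window."""
    for stage in stages:
        if not stage(window):
            return False        # rejected early; later stages never run
    return True

def scan(sub_windows, stages):
    """Keep the sub-windows that the full cascade labels as faces."""
    return [w for w in sub_windows if cascade_classify(w, stages)]
```

The design choice this illustrates: because most sub-windows contain background, putting the cheapest, most permissive stages first means the expensive final stages run on only a small fraction of windows, which is what makes frame-rate detection feasible.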
Automatic real-time face recognition is achieved from video using the AdaBoost algorithm, the Haar cascade classifier, and the Local Binary Pattern Histogram (LBPH). From the video stream, the input images are trained by the AdaBoost algorithm, implemented in a cascade classifier. A test image is captured from the video stream, and the face is detected using the AdaBoost algorithm. The program loads the Haar features, which encode the characteristics of faces, into memory and uses them to detect human faces. Training is done with the AdaBoost algorithm on the 47.7 MB video, and LBPH together with the Haar cascade classifier is used to recognize the face.
Faces are detected at 32 frames per second, with the algorithm implemented on an ARM Cortex processor running at a 1.2 GHz clock frequency. The recognized face, with its name, is authenticated and displayed at the IP address http://169.254.108.24 by a PHP (hypertext preprocessor) script that runs on an Apache web server.
In this paper, automatic real-time face recognition is achieved from video using the AdaBoost algorithm, the Haar cascade classifier, and the Local Binary Pattern Histogram (LBPH). The AdaBoost algorithm in a cascade classifier for training faces from the video stream is implemented successfully. The recognized face is authenticated and displayed at the IP address http://169.254.108.24. In future work, when a new individual for whom the algorithm has not been trained appears, the algorithm will automatically learn the training features and recognize the face for biometric authentication.
[1] Gao, C., Li, P., Zhang, Y., Liu, J., and Wang, L. (2016). People counting based on head detection combining Adaboost and CNN in crowded surveillance environment. Neurocomputing, 208, 108–116.
[2] Zhao, Y., Gong, L., Zhou, B., Huang, Y., and Liu, C. (2016). Detecting tomatoes in greenhouse scenes by combining AdaBoost classifier and colour analysis. Biosystems Engineering, 148, 127–137.
[3] Zhang, J., Yang, Y., and Zhang, J. (2016). A MEC-BP-Adaboost neural network-based color correction algorithm for color image acquisition equipments. Optik-International Journal for Light and Electron Optics, 127(2), 776–780.
[4] Wang, H., and Cai, Y. (2015). Monocular based road vehicle detection with feature fusion and cascaded Adaboost algorithm. Optik-International Journal for Light and Electron Optics, 126(22), 3329–3334.
[5] Gaber, T., Tharwat, A., Hassanien, A. E., and Snasel, V. (2016). Biometric cattle identification approach based on weber’s local descriptor and adaboost classifier. Computers and Electronics in Agriculture, 122, 55–66.
[6] Jain, A. K., Nandakumar, K., and Ross, A. (2016). 50 years of biometric research: Accomplishments, challenges, and opportunities. Pattern Recognition Letters, 79, 80–105.
[7] Zhuo, L., Zhang, J., Dong, P., Zhao, Y., and Peng, B. (2014). An SA–GA–BP neural network-based color correction algorithm for TCM tongue images. Neurocomputing, 134, 111–116.
[8] Razavian, A. S., Azizpour, H., Sullivan, J., and Carlsson, S. (2014). CNN features off-the-shelf: an astounding baseline for recognition. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), (pp. 512–519).
[9] Gragnaniello, D., Poggi, G., Sansone, C., and Verdoliva, L. (2013). Fingerprint liveness detection based on weber local image descriptor. In IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications (BIOMS), (pp. 46–50).
[10] Morerio, P., Marcenaro, L., and Regazzoni, C. S. (2012). People count estimation in small crowds. In IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance (AVSS), (pp. 476–480).
[11] Yu, J. (2011). The application of BP-Adaboost strong classifier to acquire knowledge of student creativity. In International Conference on Computer Science and Service System (CSSS), (pp. 2669–2672).
[12] Lan, J., and Zhang, M. (2010). A new vehicle detection algorithm for real-time image processing system. In International Conference on Computer Application and System Modeling (ICCASM), (Vol. 10, pp. V10-1).
[13] Chen, J., Shan, S., He, C., Zhao, G., Pietikainen, M., Chen, X., and Gao, W. (2010). WLD: A robust local image descriptor. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9), 1705–1720.
[14] Yang, M., Crenshaw, J., Augustine, B., Mareachen, R., and Wu, Y. (2010). AdaBoost-based face detection for embedded systems. Computer Vision and Image Understanding, 114(11), 1116–1125.
[15] Zhou, X., and Bhanu, B. (2006). Feature fusion of face and gait for human recognition at a distance in video. In 18th International Conference on Pattern Recognition (ICPR), (Vol. 4, pp. 529–532).
[16] Schneiderman, H., and Kanade, T. (2004). Object detection using the statistics of parts. International Journal of Computer Vision, 56(3), 151–177.
[17] Viola, P., and Jones, M. (2001). Robust real-time object detection. International Journal of Computer Vision, 57(2), 137–154.
[18] Viola, P., and Jones, M. (2001). Rapid object detection using a boosted cascade of simple features. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (Vol. 1, pp. 511–518).
[19] Chuah, C. S., and Leou, J. J. (2001). An adaptive image interpolation algorithm for image/video processing. Pattern Recognition, 34(12), 2383–2393.
[20] Belhumeur, P. N., Hespanha, J. P., and Kriegman, D. J. (1997). Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 711–720.
[21] Ahonen, T., Hadid, A., and Pietikainen, M. (2006). Face description with local binary patterns: Application to face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12), 2037–2041.
[22] Freytag, A., Rodner, E., Simon, M., Loos, A., Kühl, H. S., and Denzler, J. (2016). Chimpanzee faces in the wild: Log-euclidean cnns for predicting identities and attributes of primates. In German Conference on Pattern Recognition (pp. 51-63). Springer, Cham.
[23] Parkhi, O. M., Vedaldi, A., and Zisserman, A. (2015). “Deep face recognition”, in British Machine Vision Conference, 3, 6.
[24] Kim, D. H., Jung, S. U., and Chung, M. J. (2008). Extension of cascaded simple feature based face detection to facial expression recognition. Pattern Recognition Letters, 29(11), 1621–1631.
[25] Riahi, D., and Bilodeau, G. A. (2016). Online multi-object tracking by detection based on generative appearance models. Computer Vision and Image Understanding, 152, 88–102.
[26] Meynet, J., Popovici, V., and Thiran, J. P. (2007). Face detection with boosted Gaussian features. Pattern Recognition, 40(8), 2283–2291.
[27] Shen, S., and Liu, Y. (2008). Efficient multiple faces tracking based on Relevance Vector Machine and Boosting learning. Journal of Visual Communication and Image Representation, 19(6), 382–391.
[28] Yang, B., and Chen, S. (2013). A comparative study on local binary pattern (LBP) based face recognition: LBP histogram versus LBP image. Neurocomputing, 120, 365–379.
[29] Zhang, X., Gonnot, T., and Saniie, J. (2017). Real-Time Face Detection and Recognition in Complex Background. Journal of Signal and Information Processing, 8(2), 99–112.
[30] Pagano, C., Granger, E., Sabourin, R., Marcialis, G. L., and Roli, F. (2014). Adaptive ensembles for face recognition in changing video surveillance environments. Information Sciences, 286, 75–101.
[31] Louis, W. (2011). “Co-Occurrence of Local Binary Patterns Features for Frontal Face Detection in Surveillance Applications”, EURASIP Journal on Image and Video Processing, 11, 2818–2832.
[32] Arivazhagan, S., Sekar, J. R., and Priyadharshini, S. S. (2014). Curvelet and ridgelet-based multimodal biometric recognition system using weighted similarity approach. Defence Science Journal, 64(2), 106–114.
B. Thilagavathi is currently pursuing her PhD at Kalasalingam Academy of Research and Education. Her research work is in the areas of video processing, real-time processing, embedded implementation, and the Internet of Things. She received her M.E. (Embedded System Technology) from Arulmigu Kalasalingam College of Engineering, Anna University, in 2008 and her B.E. (Electrical and Electronics) from Madurai Kamaraj University in 2002. She is currently working as an Assistant Professor at Karunya Institute of Technology and Sciences, Coimbatore.
K. Suthendran received his B.E. in Electronics and Communication Engineering from Madurai Kamaraj University in 2002, his M.E. in Communication Systems from Anna University in 2006, and his Ph.D. in Electronics and Communication Engineering from Kalasalingam University in 2015. From 2007 to 2009 he was a Research and Development Engineer at Matrixview Technologies Private Limited, Chennai. He is currently the head of the Cyber Forensics research laboratory and an Associate Professor in the School of Computing at Kalasalingam Academy of Research and Education. His current research interests include communication systems, signal processing, image processing, and cyber security.
Journal of Cyber Security and Mobility, Vol. 7_1, 131–144. River Publishers
doi: 10.13052/jcsm2245-1439.7110
This is an Open Access publication. © 2018 the Author(s). All rights reserved.