Multi-Rhythm Capsule Network Recognition Structure for Motor Imagery Classification

Meiyan Xu1, 2, Junfeng Yao1, ∗, Yifeng Zheng2 and Yaojin Lin2

1Xiamen University, China

2Minnan Normal University, China

E-mail: yao0010@xmu.edu.cn

*Corresponding Author

Received 25 March 2021; Accepted 25 April 2021; Publication 02 June 2021

Abstract

Existing machine learning methods for the classification and recognition of EEG motor imagery usually suffer reduced accuracy when training data are limited. To address this problem, this paper proposes a multi-rhythm capsule network (FBCapsNet) that uses as little EEG information as possible, together with key features, to classify motor imagery and further improve classification efficiency. The network is a compact recognition model with only 3 acquisition channels, yet it can effectively use the limited data for feature learning. Based on the BCI Competition IV 2b dataset, experimental results show that the proposed network achieves 2.41% better performance than existing cutting-edge methods.

Keywords: Capsule network, deep learning, brain machine interface, motor imagery, classification.

1 Introduction

At present, machine learning research on brain-computer interaction focuses on Convolutional Neural Networks (CNN) [1–6] and Deep Belief Networks (DBN) [7, 8]. For Steady-State Visual Evoked Potentials (SSVEP), Kwak et al. [5] explored a CNN with a spatial convolutional layer and a temporal layer, which uses frequency band power features from two EEG channels. Many studies have explored DBNs for the classification of motor imagery (MI) [3, 5]. In addition, Lu et al. [9] used a restricted Boltzmann machine for classification, with FFT and wavelet packet decomposition for pre-training [10]. Correlation-based Feature Selection (CFS) combined with the K-Nearest-Neighbor (KNN) data mining algorithm has been proposed for recognizing attention during the learning process [11]: a greedy search is used to select features, all features are ranked according to their relevance, and the resulting subsets are tested with the KNN classifier to find the subset with the highest classification rate.

Representative achievements in recent years are Deep ConvNet and Shallow ConvNet [3] and EEGNet [12], a compact convolutional neural network for EEG. The former is designed as a general architecture not limited to specific feature types, while the latter uses as few parameters as possible. Both models can be applied to classification tasks across different brain-computer interface paradigms. FBCNet [13] applies a neurophysiologically motivated convolutional neural network to the rhythm segments of motor imagery. A traditional CNN usually consists of one or more groups of layers, each containing a convolutional layer and a pooling layer. Taking ERD and ERS recognition as an example, low-level CNN layers can detect features such as wave amplitude, while high-level layers can detect features such as waveform. The max pooling layer reduces the size of the feature vector by keeping only the maximum value within each window over a certain time span. However, max pooling ignores the spatial arrangement of features, so some important features may be discarded. The capsule network aims to solve this problem, since max pooling is only a crude way of preserving the most salient information [14].
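As a simple illustration (a toy NumPy example, not taken from the original work), max pooling keeps only the peak value in a window, so two patterns whose peaks occur at different positions become indistinguishable:

import numpy as np

# Two 1-D "feature" windows with the same peak amplitude
# but at different positions inside the pooling window.
window_a = np.array([0.1, 0.9, 0.2, 0.1])
window_b = np.array([0.1, 0.2, 0.9, 0.1])

# Max pooling over the whole window keeps only the peak value,
# so the positional difference between the two patterns is lost.
print(window_a.max(), window_b.max())  # 0.9 0.9 -> indistinguishable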

For the classification and recognition of EEG motor imagery, feature extraction by rhythm segment combined with machine learning has achieved remarkable results. Representative algorithms for rhythm-based feature extraction include SBCSP [15], FBCSP [16–18], DFBCSP [19, 20], and WPD-CSP [21]. In this work, the rhythm-segment strategy for capturing EEG characteristics is combined with a capsule network for motor imagery EEG. During training, the output route of each capsule (i.e., feature attribute) receives the distribution coefficients of the previous layer and iteratively outputs the distribution coefficient of each frequency band through EM (Expectation-Maximization). Compared with previous sub-band recognition methods, this approach has clear advantages.

In 2017, Sabour et al. [22] pointed out that the pooling operations in CNNs lose the spatial information between objects and lead to incorrect classification results, and therefore proposed the dynamic routing capsule network. Building on this, Zhang et al. [23] used a stored filter representation to improve interpretability. In 2018, Hinton et al. [14] proposed a new type of capsule network that uses the EM algorithm to update groups of neurons over the feature maps at each training iteration and propagates information between adjacent capsule layers; the EM-routing iterations achieved higher recognition accuracy, with 45% fewer errors on the smallNORB 3D object dataset. In 2019, Ha and Jeong applied the capsule network to distinguish left- and right-hand motor imagery on the BCI Competition IV 2b dataset [25]. In 2020, Liu et al. [24] proposed a multi-level feature-guided capsule network (MLFCapsNet) for EEG emotion recognition to overcome the inability of CNNs to describe the internal connections between different EEG channels. This paper focuses on the core algorithm and network architecture of the multi-rhythm capsule network for motor imagery classification. The data structure of the multi-rhythm segments used to preprocess the motor imagery EEG signal is first introduced. Then, the clustering algorithm of the multi-rhythm capsule network is elaborated. Finally, the entire network architecture is described.

2 Capsule Network

A disadvantage of CNNs is that they cannot model spatial relationships well and rely heavily on hyperparameters to grasp the data structure. In contrast, the brain performs image recognition as a kind of inverse graphics: recognition in a real environment builds a hierarchical representation of the world around us, and the visual information received by the eyes is deconstructed according to the learned model stored in the brain. The key idea is that the representation of objects in the brain does not depend on the viewing perspective, and a similar internal representation needs to be built in the neural network. A capsule network is a variant of CNN in which a group of neurons forms a capsule, and researchers are trying to overcome the above defects of CNNs through this structure. The activity vector of a capsule represents the parameters of a specific entity: the length of the activity vector represents the probability that the entity exists, and its direction represents the instantiation parameters. Therefore, attributes such as position, size, and rotation can all be represented by the activity vectors in a capsule network architecture.
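To make this concrete, the following minimal NumPy sketch (an illustration with a hypothetical 8-dimensional capsule, not code from the paper) shows how the squash non-linearity maps a capsule's activity vector to a length in (0, 1) that can be read as an existence probability, while the direction carries the instantiation parameters:

import numpy as np

def squash(v, eps=1e-8):
    # Squash a capsule vector so that its length lies in (0, 1).
    norm_sq = np.sum(v ** 2)
    return (norm_sq / (1.0 + norm_sq)) * v / (np.sqrt(norm_sq) + eps)

# A hypothetical 8-D capsule output: the direction encodes instantiation
# parameters (e.g. position, size, rotation), the length the probability
# that the entity is present.
capsule = np.array([0.4, -0.2, 0.9, 0.1, 0.0, 0.3, -0.5, 0.2])
v = squash(capsule)
print("existence probability:", np.linalg.norm(v))
print("direction (instantiation parameters):", v / np.linalg.norm(v))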


Figure 1 Schematic diagram of capsule algorithm process.

In the capsule network, each capsule has a logical unit that represents the existence of an object, and a 2D matrix is usually used to represent the values of the data attributes of each object. The core of the network is dynamic routing, which introduces a new iterative process between the capsule layers. After each iteration, the new capsule receives a set of weights for each entity attribute. This can be viewed as a "softened" K-means: K-means assigns each point directly to its nearest cluster, whereas dynamic routing obtains similarities through a softmax and assigns each point to every cluster with a corresponding weight.
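The following toy NumPy comparison (with made-up points and centres) illustrates the difference between the hard assignment of K-means and the soft, softmax-weighted assignment used by dynamic routing:

import numpy as np

points = np.random.randn(5, 2)            # lower-level votes
centers = np.random.randn(3, 2)           # cluster / upper-capsule centres
d = ((points[:, None, :] - centers[None]) ** 2).sum(-1)    # squared distances, shape (5, 3)

hard = d.argmin(axis=1)                                     # K-means: one nearest cluster per point
soft = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)   # routing-style softmax weights
print(hard)          # one cluster index per point
print(soft.sum(1))   # each row of soft weights sums to 1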

The dynamic routing of the capsule network is an iterative algorithm that encapsulates the given information and outputs it as a structured representation (see Figure 1). Each capsule has a logical unit that represents the existence of an entity, and its outputs represent different attributes of the same entity. In addition to the convolutional layers that cluster information, each capsule also has a voting mechanism: it predicts the features of the upper layer by multiplying its own feature matrix with a learnable relationship weight. In order to route the output of each capsule to the upper capsule that receives similar voting clusters, an expectation-maximization style procedure propagates feature information between the layers, so that each capsule obtains a feature matrix that carries discriminative information.
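As a concrete reference point, the following NumPy sketch implements the original routing-by-agreement procedure of Sabour et al. [22] with hypothetical shapes (16 lower capsules, 2 upper capsules of dimension 8); the multi-rhythm variant used in this paper is described in Section 3.1:

import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(s, eps=1e-8):
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / (np.sqrt(norm_sq) + eps)

def dynamic_routing(u_hat, iterations=3):
    # u_hat: (num_lower, num_upper, dim) prediction vectors from the lower capsules.
    num_lower, num_upper, _ = u_hat.shape
    b = np.zeros((num_lower, num_upper))           # routing logits
    for _ in range(iterations):
        c = softmax(b, axis=1)                     # coupling coefficients per lower capsule
        s = (c[..., None] * u_hat).sum(axis=0)     # weighted votes for each upper capsule
        v = squash(s)                              # upper-capsule outputs, length in (0, 1)
        b = b + (u_hat * v[None, ...]).sum(-1)     # raise logits where votes agree with outputs
    return v

u_hat = np.random.randn(16, 2, 8)                  # e.g. 16 primary capsules -> 2 class capsules
print(dynamic_routing(u_hat).shape)                # (2, 8)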

3 Proposed Multi-filter Bank Capsule Network Method

In this paper, spatial feature learning is integrated into the primary capsule of the capsule network, $M_q=\{m_{ijk}\}\in\mathbb{R}^{P\times S_1\times S_2}$, where $P$ is the number of primary channels (PC for short) and $S_1\times S_2$ is the shape of the primary capsule neurons (Primary Shape, PS for short). This paper targets a binary classification problem; when the batch size is included, the main capsule layer becomes a 4D structure, and the number of output capsules is set to 16. The advantage of the main capsule design in this work is that the clustering algorithm is replaced with a self-attention mechanism for better convolutional computation. The vector length of each capsule in the main capsule layer indicates the strength of the corresponding feature class. A feature recognition mechanism, dynamic routing by agreement [22], connects the main capsule layer and the digital capsule layer.
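The following NumPy shape sketch illustrates this layout with hypothetical values (a batch size of 32, P = 128 primary channels and a 4 × 4 primary shape as in the best setting of Table 3, and 16 output capsules of dimension 8); the grouping into capsules shown here is only one possible arrangement:

import numpy as np

batch_size = 32
P, S1, S2 = 128, 4, 4              # primary channels and primary capsule shape

# Primary capsule tensor M_q = {m_ijk} in R^(P x S1 x S2);
# with the batch dimension it becomes the 4D structure mentioned above.
M_q = np.random.randn(batch_size, P, S1, S2)

# One possible grouping into 16 output capsules of dimension 8.
num_capsules, capsule_dim = 16, 8
capsules = M_q.reshape(batch_size, -1, num_capsules, capsule_dim)
print(M_q.shape, capsules.shape)   # (32, 128, 4, 4) (32, 16, 16, 8)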

3.1 Multi-rhythm Capsule Algorithm

An iterative algorithm between the main capsule layer and the digital capsule layer establishes the dynamic routing mechanism. This process not only captures the spatial relationship of each capsule through the transformation matrix, but also connects the information between capsules through routing. Dynamic routing can capture the information allocated to different layers, which overcomes the limitation of spatial convolution in obtaining consistency across the capsule layer. The specific process of the multi-rhythm dynamic routing algorithm (MulFB Capsule) is as follows (a minimal sketch of these steps is given after the list):

1. The main capsule $M_q=\{m_{ijk}\}\in\mathbb{R}^{P\times S_1\times S_2}$ is input to the digital capsule layer, where $P$ is the number of capsules and $S_1\times S_2$ is the capsule matrix structure. First, a matrix transformation $\mu^{0}=T(M)\in\mathbb{R}^{S_2\times S_1\times P}$ is applied: $T(M)$ converts the capsule from the $[P\times S_1\times S_2]$ structure to $[S_2\times S_1\times P]$, which is convenient for the subsequent routing calculation. With $\mu^{0}=[\mu_1^{(0)},\mu_2^{(0)},\ldots,\mu_P^{(0)}]^{T}$, steps 2 to 6 are repeated for each $\mu_m^{(0)}$, $m\in[1,P]$, according to the number of routing iterations.

2. Dynamic routing is introduced to learn high-level features of the MI categories: $\mu_m^{t,1}=W_k\,\mu_m^{t,0}$, where $t\in[1,R]$ and $R$ is the number of routing iterations; $W_k$ is the conversion matrix between $\mu_m^{t,0}$ and $\mu_m^{t,1}$; $k\in[1,C]$, where $C$ is the number of digital capsules; and $W_k\in\mathbb{R}^{1\times S_2\times(C\cdot D)}$, where $D$ is the capsule dimension. Each rhythm segment has an independent dynamic route that extracts one aspect of the features and captures the projections $\mu_m^{t,1}$ of the different features.

3. New feature values are constructed. Assuming the digital capsule layer has $C$ categories, the distribution probabilities of $\mu_m^{t,1}$ after the softmax calculation are $(P_{1|m},P_{2|m},\ldots,P_{C|m})$, and the new feature can be represented as $\mu_m^{t,2}=(P_{1|m}\mu_m^{t,1},P_{2|m}\mu_m^{t,1},\ldots,P_{C|m}\mu_m^{t,1})$.

4. To compute the feature of each capsule, $\mu_m^{t,2}$ is first summed with the corresponding weights to obtain $S_m$, as shown in Equation (1). Then the squash operation is applied to $S_m$: this normalization keeps the output length of each capsule between 0 and 1, which helps convergence. The normalization is shown in Equation (2), with $\mu_m\in\mathbb{R}^{C\times D}$.

$S_m=\sum_{n=1}^{C}p_{n|m}\,\mu_m^{t,1}$   (1)
$\mu_m=\mathrm{squash}(S_m)=\frac{\|S_m\|^{2}}{1+\|S_m\|^{2}}\,\frac{S_m}{\|S_m\|}$   (2)

5. According to the similarity of $\mu_m$, a soft division is performed, and the feature value is obtained by computing the distance to the per-dimension mean $\bar{\mu}_m$, as shown in Equation (3). The feature weight learned by each capsule is then given as follows.

$\mu_m^{t,3}=\big\{\mu_{mn}^{t,3}\;\big|\;\mu_{mn}^{t,3}=\sum_{n=1}^{D}\big(\mu_{mn}-\bar{\mu}_m\big)^{2}\big\}\in\mathbb{R}^{C\times 1}$   (3)

6. Set $\mu_m^{t+1}=\mu_m^{t,3}$ and return to step 2 for the next round of routing feature training; the result is output after the final round.
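To make these steps concrete, the following NumPy sketch runs steps 1-6 for a single rhythm segment under simplifying assumptions: each primary capsule is flattened to a vector of length S1*S2, the conversion matrix W_k is a random stand-in for a learned weight, and the feedback of the step-5 distances into the next iteration is approximated by updating routing logits. It is an illustration of our reading of the algorithm, not the authors' implementation:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(s, eps=1e-8):
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / (np.sqrt(norm_sq) + eps)

def mulfb_routing(M, C=2, D=8, R=2, seed=0):
    # Sketch of steps 1-6; M is the primary capsule tensor of shape (P, S1, S2).
    rng = np.random.default_rng(seed)
    P, S1, S2 = M.shape
    mu = M.reshape(P, S1 * S2)                       # step 1: flatten each primary capsule (the paper first transposes to [S2, S1, P])
    W = rng.standard_normal((S1 * S2, C * D)) * 0.1  # conversion matrix W_k (random stand-in for a learned weight)
    u = (mu @ W).reshape(P, C, D)                    # step 2: project onto C digital capsules of dimension D
    b = np.zeros((P, C))                             # routing logits, refined over R iterations
    for _ in range(R):
        p = softmax(b, axis=-1)                      # step 3: soft class assignment per primary capsule
        u2 = p[..., None] * u                        #          re-weighted features mu^(t,2)
        s = u2.sum(axis=0)                           # step 4, Eq. (1): weighted sum S_m
        v = squash(s)                                # step 4, Eq. (2): squashed digital capsules, shape (C, D)
        dist = ((u - v[None]) ** 2).sum(axis=-1)     # step 5, Eq. (3): squared distance of each vote to each class capsule
        b = b - dist                                 # steps 5-6 (assumed): votes closer to a class capsule get larger weights
    return np.linalg.norm(v, axis=-1)                # capsule lengths as class scores

scores = mulfb_routing(np.random.randn(128, 4, 4))
print(scores)                                        # one score per MI class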

3.2 FBCapsNet

The processing of the whole multi-rhythm MI signal by the encoder and decoder of the capsule network is shown in Figure 2. First, the EEG information is band-pass filtered into rhythm segments between 4 and 40 Hz, with N segments of equal bandwidth; in the experiments, N ∈ {3, 6, 9}. Then, the EEG information of each rhythm segment uses the same data segmentation method introduced in Section 4. The Encoder performs feature learning on the dual input, covering both the time domain and the spatial domain, to obtain the main capsule features $M_q$, $q\in[1,N]$. Finally, the Decoder of the EEG features works in two stages. First, the capsule network is applied within each rhythm segment to capture different attributes of the same information source across different feature domains. Then, a soft attention mechanism adaptively re-weights and aggregates the classification attributes of each rhythm segment. Soft attention can isolate the characteristic attributes of the important frequency bands and increase their classification weights, reducing interference from unimportant information. Thus, the classification accuracy is improved.
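A minimal sketch of the filter-bank front end is shown below, assuming SciPy Butterworth band-pass filters, the 250 Hz sampling rate of BCI Competition IV 2b, and N = 9 equal-width bands between 4 and 40 Hz (the window and step parameters of the segmentation are covered in Section 4.1):

import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank(eeg, fs=250, f_low=4.0, f_high=40.0, n_bands=9, order=4):
    # Split raw EEG (channels x samples) into N equal-width rhythm segments.
    edges = np.linspace(f_low, f_high, n_bands + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        bands.append(filtfilt(b, a, eeg, axis=-1))
    return np.stack(bands)              # (n_bands, channels, samples)

eeg = np.random.randn(3, 1000)          # 3 channels (C3, Cz, C4), 4 s at 250 Hz
segments = filter_bank(eeg)
print(segments.shape)                   # (9, 3, 1000)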


Figure 2 Schematic diagram of multi-rhythm capsule network structure.

4 Experimental Results and Discussion

4.1 Data

The experimental data in this paper come from the BCI Competition IV 2b dataset [20], which contains 9 subjects and is divided into training and validation sets. In total, the dataset contains 6480 trials, amounting to 23,760 s of MI data. The following preprocessing strategy is adopted for the input data:

1. Band-pass filtering is used to divide the EEG signals into 9 equal-bandwidth rhythm segments between 4 and 40 Hz, and the three collection channels C3, Cz, and C4 are used for training.

2. According to the experimental paradigm of BCI Competition IV 2b, the data from 1.25 s to 5.25 s after the evoked event are extracted for sessions without visual feedback, and the data from 2 s to 5.5 s after the evoked event are extracted for sessions with visual feedback.

3. For the dual input, the time window is set to 1 s with a step length of 100 ms, and the second input is delayed by 100 ms relative to the first (see the sketch after this list).
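A minimal sketch of this segmentation is given below, assuming the 250 Hz sampling rate of BCI Competition IV 2b and the 4 s segment from item 2; pairing each window with the window 100 ms later to form the dual input is our reading of item 3:

import numpy as np

def sliding_windows(trial, fs=250, win_s=1.0, step_s=0.1):
    # Cut one trial (channels x samples) into 1 s windows with a 100 ms stride.
    win, step = int(win_s * fs), int(step_s * fs)
    starts = range(0, trial.shape[-1] - win + 1, step)
    return np.stack([trial[:, s:s + win] for s in starts])

trial = np.random.randn(3, int(4.0 * 250))      # e.g. the 1.25-5.25 s segment of one trial
windows = sliding_windows(trial)
# Dual input: the second stream is the same sequence delayed by one step (100 ms).
x1, x2 = windows[:-1], windows[1:]
print(windows.shape, x1.shape, x2.shape)        # (31, 3, 250) (30, 3, 250) (30, 3, 250)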

4.2 Hyperparameter Selection

Table 1 Classification accuracy for each parameter setting of the FBCapsNet network on the BCI Competition IV 2b dataset.


The training data for the FBCapsNet network come from the training set of BCI Competition IV 2b, which contains two sessions without visual feedback and one session with visual feedback. The influence on network performance of the number of channels in the main capsule layer, the structure of the capsule layer, and the number of routing iterations is then investigated. The experimental results are shown in Table 1. When the number of main capsule channels is 128, the classification accuracy is relatively high. For the same number of main capsule channels, larger primary capsules are needed to capture the features. Moreover, 2 routing iterations are more suitable than 1 or 3, as shown in Figure 3.


Figure 3 Classification accuracy of the FBCapsNet network with different numbers of routing iterations in different models.

4.3 Result

Based on the above experimental results, the FBCapsNet model uses a main capsule layer with 128 channels and 16 capsules with a dimension of 8, and the number of routing iterations is 2. The comparison with four existing representative methods is shown in Table 2. The classification accuracy of this model is higher than that of the other methods, with 2.41% better performance than the most recent method, CapsNet [25]. After learning the characteristics of each rhythm segment, the capsule network learns more discriminative attributes, which effectively improves the model's ability to recognize the target task. This also shows that the capsule network is effective in distinguishing the characteristics of EEG motor imagery.

Table 2 Classification accuracy (%) of five end-to-end algorithms on the BCI Competition IV 2b data

Subject   ShallowNet [3]   DeepNet [3]   EEGNet [12]   CapsNet [25]   FBCapsNet
No.1      71.56            67.25         67.18         78.75          79.47
No.2      53.57            56.10         58.21         55.71          58.34
No.3      53.12            54.87         55.62         55.00          59.59
No.4      95.93            94.52         95.31         95.93          96.40
No.5      85.00            84.59         86.87         83.12          84.06
No.6      76.87            74.46         77.50         83.43          88.09
No.7      76.56            77.03         76.87         75.62          82.12
No.8      85.93            87.75         89.68         91.25          90.47
No.9      82.18            79.25         80.00         87.18          89.12
Average   75.63            75.10         76.36         78.44          80.85

4.4 Discussion

Our analysis indicates that a capsule layer output category of 16 is the most suitable for the two-class, 4D-structure capsule. For a network with 2 routing iterations and 128 main capsule channels, the results are shown in Table 3. When the number of classification attributes in the capsule layer is 16, the classification accuracy is higher than with 8 or 32 capsules. Also, under the same capsule structure, the accuracy with the larger capsule dimension is roughly 4% higher, showing that the capsule layer is sensitive to the output dimension. The number of output categories of the capsule layer can be set to the number of classes raised to the power of the capsule dimension, namely $ClassNumber^{Dimension}$, where $Dimension$ refers to the dimension of the capsule structure matrix.
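As a worked example of this rule, for the two-class MI task with a 4-dimensional capsule structure the number of output categories is $2^{4}=16$, which matches the best-performing setting in Table 3.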

Compared with a traditional CNN, the capsule network achieves better classification accuracy, but the training process is much slower. Moreover, for more complex datasets with many classification categories, the capsule network is not recommended, as its performance may drop and the classification results may be poor. Nevertheless, the capsule network raises the level of modelling from individual neurons to routing between capsules, which is of great significance for machine learning applications.

Table 3 Classification accuracy (%) of the FBCapsNet model with different capsule layer structures when routing = 2

Primary Channel   Primary Shape   Capsule Number   Capsule Dimension   Accuracy (%)
128               8 × 2           8                4                   64.08
128               8 × 2           8                8                   69.59
128               8 × 2           16               4                   68.46
128               8 × 2           16               8                   74.66
128               8 × 2           32               4                   60.01
128               8 × 2           32               8                   71.15
128               4 × 4           8                4                   63.52
128               4 × 4           8                8                   70.59
128               4 × 4           16               4                   78.54
128               4 × 4           16               8                   80.85
128               4 × 4           32               4                   68.47
128               4 × 4           32               8                   76.35
128               2 × 8           8                4                   70.09
128               2 × 8           8                8                   70.65
128               2 × 8           16               4                   74.66
128               2 × 8           16               8                   79.41
128               2 × 8           32               4                   69.19
128               2 × 8           32               8                   76.78

5 Conclusion

Based on the MI dataset, the experimental results show that the method proposed in this work is significantly better than several of the latest methods in the literature. The sources of its superior performance are summarized as follows.

1. Compared with the handwritten digit images of the MNIST dataset used to evaluate the original capsule network [22], EEG signals have more complex internal representations related to MI. The proposed framework combines multi-layer feature maps at different levels before forming the main capsule, which enhances the feature representation ability. This makes our multi-rhythm-segment EEG method more powerful than other capsule networks for MI classification.

2. In addition to the strong correlation between the MI task and EEG brain function, the method exploits the spontaneous and rhythmic nature of EEG. It divides the targeted features into different sub-rhythm segments under the same task and classifies the features within each rhythm segment first, which avoids losing characteristic attributes. Specifically, the main capsule encodes the features, and the capsule layer encodes the specific attributes of each rhythm segment. The neurons in the capsule layer contain all the important information of the feature state, which is beneficial for extracting the distinguishing characteristics of EEG motor imagery. Moreover, the vector weights used in the network contribute to recognition efficiency and robustness.

Acknowledgment

This work was supported in part by the National Natural Science Foundation of China (No. 62072388), the Industry Guidance Project Foundation of Science Technology Bureau of Fujian Province in 2020 (No. 2020H0047), the Natural Science Foundation of the Science Technology Bureau of Fujian Province in 2019 (No. 2019J01601) and the Science Technology Bureau Project of Fujian Province in 2019 (No. 2019C0021).

References

[1] H. Cecotti, M. P. Eckstein, and B. Giesbrecht, “Single-trial classification of event-related potentials in rapid serial visual presentation tasks using supervised spatial filtering,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 11, pp. 2030–2042, 2014.

[2] X. Zhang and D. Wu, “On the vulnerability of cnn classifiers in eeg-based bcis,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 5, pp. 814–825, 2019.

[3] R. T. Schirrmeister, J. T. Springenberg, L. D. J. Fiederer, M. Glasstetter, K. Eggensperger, M. Tangermann, F. Hutter, W. Burgard, and T. Ball, “Deep learning with convolutional neural networks for eeg decoding and visualization,” Human Brain Mapping, vol. 38, no. 11, pp. 5391–5420, 2017.

[4] Hongzhi, Y. Xue, L. Xu, Y. Cao, and X. Jiao, “A speedy calibration method using riemannian geometry measurement and other-subject samples on a p300 speller,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 3, pp. 602–608, 2018.

[5] N.-S. Kwak, K.-R. Müller, and S.-W. Lee, “A convolutional neural network for steady state visual evoked potential classification under ambulatory environment,” PLoS one, vol. 12, no. 2, p. e0172578, 2017.

[6] Y. R. Tabar and Ugur Halici, “A novel deep learning approach for classification of eeg motor imagery signals,” Journal of Neural Engineering, vol. 14, no. 1, p. 016003, 2016.

[7] Y. Ren and Y. Wu, “Convolutional deep belief networks for feature extraction of eeg signal,” International joint conference on neural Networks (IJCNN), Beijing, China, pp. 2850–2853, 2014.

[8] J. Li and A. Cichocki, “Deep learning of multifractal attributes from motor imagery induced eeg,” International Conference on Neural Information Processing, Springer, Cham, pp. 503–510, 2014.

[9] N. Lu, T. Li, X. Ren, and H. Miao, “A deep learning scheme for motor imagery classification based on restricted boltzmann machines,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 6, pp. 566–576, 2016.

[10] P. Wang, A. Jiang, X. Liu, J. Shang, and L. Zhang, “Lstm-based eeg classification in motor imagery tasks,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 11, pp. 2086–2095, 2018.

[11] B. Hu, X. Li, S. Sun, and M. Ratcliffe, “Attention recognition in eeg-based affective learning research using cfs + knn algorithm,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 15, no. 1, pp. 38–45, 2016.

[12] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance, “Eegnet: a compact convolutional neural network for eeg-based brain–computer interfaces,” Journal of Neural Engineering, vol. 15, no. 5, p. 056013, 2018.

[13] R. Mane, N. Robinson, A. P. Vinod, S.-W. Lee, and C. Guan, “A multi-view cnn with novel variance layer for motor imagery brain computer interface,” The 42nd Annual International Conference of the IEEE Engineering in Medicine Biology Society (EMBC), pp. 2950–2953, 2020.

[14] G. Hinton, S. Sabour, and N. Frosst, “Matrix capsules with em routing,” International Conference on Learning Representations(ICLR), Vancouver, Canada, 2018.

[15] Z. Zhang, W. Liao, X.-N. Zuo, Z. Wang, C. Yuan, Q. Jiao, H. Chen, B. B. Biswal, G. Lu, and Y. Liu, “Resting-state brain organization revealed by functional covariance networks,” Plos One, vol. 6, no. 12, p. e28817, 2011.

[16] K. Ang, Y. Zheng, H. Zhang, and C. Guan, “Filter bank common spatial pattern (fbcsp) in brain-computer interface,” Proc. IEEE Int. Joint Conf. Neural Netw., pp. 2390–2397, 2008.

[17] K. Ang, Z. Chin, C. W. C, C. Guan, and H. Zhang, “Filter bank common spatial pattern algorithm on bci competition iv datasets 2a and 2b,” Frontiers in Neuroscience, vol. 6, no. 39, pp. 1–9, 2012.

[18] K. P. Thomas, C. Guan, L. C. Tong, and V. A. Prasad, “An adaptive filter bank for motor imagery based brain computer interface,” The 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC’08), British Columbia, Canada, pp. 1104– 1107, 2008.

[19] E. Gentile, A. Brunetti, K. Ricci, M. Delussi, and M. de Tommaso, “Mutual interaction between motor cortex activation and pain in fibromyalgia: Eeg-fnirs study,” PloS One, vol. 15, no. 1, p. e0228158, 2020.

[20] R. Leeb, C. Brunner, and G. Müller-Putz, “Bci competition 2008 – graz data set b,” Graz University of Technology, pp. 1–6, 2018.

[21] B. Yang, H. Li, Q. Wang, and Y. Zhang, “Subject-based feature extraction by using fisher wpd-csp in brain–computer interfaces,” Computer Methods and Programs in Biomedicine, vol. 129, pp. 21–28, 2016.

[22] S. Sabour, N. Frosst, and G. E. Hinton, “Dynamic routing between capsules,” 31st Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 2017.

[23] Q. Zhang, Y. N. Wu, and S. Zhu, “Interpretable convolutional neural networks,” IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 2018.

[24] Y. Liu, Y. Ding, C. Li, J. Cheng, R. Song, F. Wan, and X. Chen, “Multichannel eeg-based emotion recognition via a multi-level features guided capsule network,” Computers in Biology and Medicine, vol. 123, 2020.

[25] K.-W. Ha and J.-W. Jeong, “Motor imagery eeg classification using capsule networks,” Sensors, vol. 19, no. 13, p. 2854, 2019.

Biographies


Meiyan Xu received her B.A. and M.A. degrees in Mathematics and Applied Mathematics and in Software Engineering from Xiamen University, China, in 2006 and 2010, respectively. She pursued her PhD at the Software School of Xiamen University, China, from 2016 to 2020. In 2021, she joined the faculty of the School of Computer Science, Minnan Normal University, Zhangzhou, China. Her research interests include data analytics, data mining, machine learning, and brain-computer interfaces.


Junfeng Yao received his Ph.D. degree in thermal engineering from the Central South University, China in 2001. He conducted his post-doctoral research work in Tsinghua University in the area of Electrical Simulation and Controlling from 2001 to 2003. He was a visiting scholar in Southern Polytechnic State University, USA from 2009 to 2010 and in University of Washington, USA from 2016 to 2017. He is now a professor at the software school of Xiamen University, China. His research interests are wide-reaching but mainly involve the areas of machine learning, artificial intelligence and computer graphics.


Yifeng Zheng received the B.E. degree in computer science and technology from Minnan Normal University, Zhangzhou, China, in 2004, and the M.E. and PhD degrees in computer technology from the China University of Petroleum-Beijing, Beijing, China, in 2016 and 2020, respectively. In 2004, he joined the faculty of the School of Computer Science, Minnan Normal University, Zhangzhou, China. His research interests include artificial intelligence, machine learning, deep learning, and network communications.


Yaojin Lin received the Ph.D. degree from the School of Computer and Information, Hefei University of Technology. He is currently a professor with Minnan Normal University. His research interests include data mining and granular computing. He has published more than 80 papers in conferences and journals such as IJCAI, CVPR, TKDD, and PR.
