Comparative Techniques Using Hierarchical Modelling and Machine Learning for Procedure Recognition in Smart Hospitals
DOI: https://doi.org/10.13052/jicts2245-800X.1023

Keywords: Procedure recognition • Inside-out Vision • Machine Learning • Artificial Neural Network • 6G-enabled applications

Abstract
6G is a key cornerstone of the futuristic smart system setup, alongside cloud computing, big data, wearable devices and Artificial Intelligence. Smart offices and homes have also become more popular than ever, owing to advances in computer vision and Machine Learning (ML) technologies. Recognition of human actions and situations is a fundamental component of such systems, especially in complex environments such as healthcare; at a dental clinic, for example, cues such as eye movement are needed to distinguish the procedure being undertaken. In this work, we compare hierarchical modelling and machine learning models for identifying the dental procedure. We used the objects seen while following the eye trajectories, focussing on three types of cue: the material used for treatment, the equipment involved and the tooth condition, i.e. the symptom. Our experiments show that an Artificial Neural Network (ANN) predicts more accurately than hierarchical modelling, with an improvement for each constituent parameter: symptom (ANN: 95.58% vs. hierarchical: 45.68%), material (ANN: 86.32% vs. hierarchical: 45.18%) and equipment (ANN: 92.65% vs. hierarchical: 59.39%).
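To make the comparison concrete, the sketch below shows what the ANN side of such a pipeline could look like: a small feed-forward classifier mapping one-hot-encoded (symptom, material, equipment) cue triples to a procedure label. This is a minimal illustration only; the cue values, procedure labels, network size and library choice are assumptions for the sketch and are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): an ANN over the three
# categorical cue types named in the abstract -- symptom, material, equipment.
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import OneHotEncoder
import numpy as np

# Hypothetical training rows: (symptom, material, equipment) -> procedure.
X_raw = np.array([
    ["cavity",        "amalgam",   "condenser"],
    ["stained_teeth", "gel",       "whitening_tray"],
    ["tartar",        "polish",    "scaler"],
    ["cracked_tooth", "porcelain", "crown_driver"],
])
y = ["filling", "whitening", "scaling", "crown"]

# One-hot encode the categorical cues so the network receives numeric input.
enc = OneHotEncoder(handle_unknown="ignore")
X = enc.fit_transform(X_raw)

# A small feed-forward network with one hidden layer stands in for the
# paper's ANN; the real architecture and hyperparameters are not given here.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Predict the procedure for a new set of cues observed along the gaze path.
print(clf.predict(enc.transform([["cavity", "amalgam", "condenser"]])))
```

One-hot encoding is used here because the cues are categorical object labels rather than continuous measurements; any equivalent categorical encoding would serve the same purpose.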