Questioning the Scope of AI Standardization in Learning, Education, and Training

Authors

  • Jon Mason Charles Darwin University, Australia
  • Bruce E. Peoples Innovations LLC, USA
  • Jaeho Lee University of Seoul, South Korea

DOI:

https://doi.org/10.13052/jicts2245-800X.822

Keywords:

Learning, education, training, ITLET, adaptive systems, artificial intelligence, AI, learning technology, standards, knowledge representation, context

Abstract

Well-defined terminology and scope are essential in formal standardization work. In the broad domain of Information and Communications Technology (ICT), this necessity is even greater due to the proliferation and appropriation of terms from other fields and from public discourse – the term ‘smart’ is a classic example, as is ‘deep learning’. In reviewing the emerging impact of Artificial Intelligence (AI) on the field of Information Technology for Learning, Education, and Training (ITLET), this paper highlights several questions that might assist in developing scope statements for new work items.
While learners and teachers are very much foregrounded in past and present standardization efforts in ITLET, until recently little attention has been paid to whether these learners and teachers are necessarily human. Now that AI is a hot spot of innovation, it is receiving considerable attention from standardization bodies such as ISO/IEC and IEEE and from pan-European initiatives such as the Next Generation Internet. Thus, terminology such as ‘blended learning’ now necessarily spans not just humans in a mix of online and offline learning, but also mixed reality and AI paradigms developed to assist human learners in environments such as Adaptive Instructional Systems (AIS), which extend the scope and design of a learning experience by forming a symbiosis between humans and AI. Although the fields of LET and AI may utilize similar terms, the language of AI is mathematics, and terms can mean different things in each field. Nonetheless, in ‘symbiotic learning’ contexts where an AIS at times replaces a human teacher, a symbiosis between the human learner and the AIS arises such that each can act as both teacher and learner. While human ethics and values are preeminent in this new symbiosis, a shift towards a new ‘intelligence nexus’ is signalled, where ethics and values can also apply to AI in learning, education, and training (LET) contexts. In making sense of the scope of standardization efforts in the context of LET-based AI, issues for the human-computer interface become more complex than simply appropriating terminology such as ‘smart’ in the next era of standardization. Framed by ITLET perspectives, this paper focuses on detailing the implications for standardization and the key questions arising from developments in Artificial Intelligence. At a high level, we need to ask: do the scopes of current LET-related Standards Committees still apply, and if not, what scope changes are needed?

Author Biographies

Jon Mason, Charles Darwin University, Australia

Jon Mason is Assistant Dean International and a Senior Lecturer in the College of Education at Charles Darwin University, Australia. Specialising in the application of digital technologies, his research spans questioning, open educational practices, sense-making, data literacy, standardization, and digital learning futures. He holds Master’s degrees in Cognitive Science and Knowledge Management and a PhD in Education. He has been active in international standardization of information technology for learning, education, and training for over 20 years.

Bruce E. Peoples, Innovations LLC, USA

Bruce E. Peoples received his BSc in Cross Cultural Communications, and MSc in Instructional Design from Clarion University, USA; and a PhD in Information Technology Science from Université Paris 8, France. He is a Fellow at the International Institute of Informatics and Systemics and a recipient of the Raytheon 2006 Excellence in Technology award. He has over 25 years’ experience developing and leading international standards activities in IEEE and ISO/IEC. He is Chair Emeritus of ISO/IEC JTC1 SC36 Information Technology for Learning, Education and Training. He currently participates in ISO/IEC JTC1 SC42 Artificial Intelligence, ISO/IEC JTC1 SC36 ITLET, ISO/IEC JTC1 AG16 Brain-computer interface, and IEEE Learning Technology Standards Committee.

Jaeho Lee, University of Seoul, South Korea

Jaeho Lee has been a Professor in the School of Electrical and Computer Engineering at the University of Seoul, Korea, since 1998. He received BSc and MSc degrees in Computer Science from Seoul National University, Seoul, Korea, in 1985 and 1987, respectively, and a PhD in Computer Science and Engineering from the University of Michigan, Ann Arbor, Michigan, USA, in 1997. His interests are in artificial intelligence, intelligent service robots, and educational technology. He has over 20 years of experience developing international standards in ISO/IEC.

Published

2020-04-23

Section

Articles