Questioning the Scope of AI Standardization in Learning, Education, and Training

Jon Mason1,*, Bruce E. Peoples2 and Jaeho Lee3

1Charles Darwin University, Australia

2Innovations LLC, USA

3University of Seoul, South Korea

E-mail: jon.mason@cdu.edu.au; brucepeoples02@gmail.com; jaeho@uos.ac.kr

* Corresponding Author

Received 26 August 2019; Accepted 14 January 2020; Publication 23 April 2020

Abstract

Well-defined terminology and scope are essential in formal standardization work. In the broad domain of Information and Communications Technology (ICT) this necessity is even greater due to the proliferation and appropriation of terms from other fields and public discourse – the term ‘smart’ is a classic example, as is ‘deep learning’. In reviewing the emerging impact of Artificial Intelligence (AI) on the field of Information Technology for Learning, Education, and Training (ITLET), this paper highlights several questions that might assist in developing scope statements for new work items.

While learners and teachers are very much foregrounded in past and present standardization efforts in ITLET, until recently little attention has been paid to whether these learners and teachers are necessarily human. Now that AI is a hot spot of innovation, it is receiving considerable attention from standardization bodies such as ISO/IEC and IEEE, and from pan-European initiatives such as the Next Generation Internet. Thus, terminology such as ‘blended learning’ now necessarily spans not just humans in a mix of online and offline learning, but also mixed reality and AI paradigms developed to assist human learners in environments such as Adaptive Instructional Systems (AIS), which extend the scope and design of a learning experience so that a symbiosis is formed between humans and AI. Although the fields of LET and AI may utilize similar terms, the language of AI is mathematics, and terms can mean different things in each field. Nonetheless, in ‘symbiotic learning’ contexts where an AIS at times replaces a human teacher, a symbiosis between the human learner and the AIS occurs in such a way that both can act as teacher and learner. While human ethics and values are preeminent in this new symbiosis, a shift towards a new ‘intelligence nexus’ is signalled, where ethics and values can also apply to AI in learning, education, and training (LET) contexts. In making sense of the scope of standardization efforts in the context of LET-based AI, issues for the human-computer interface become more complex than simply appropriating terminology such as ‘smart’ in the next era of standardization. Framed by ITLET perspectives, this paper focuses on detailing the implications for standardization and the key questions arising from developments in Artificial Intelligence. At a high level, we need to ask: do the scopes of current LET-related standards committees still apply, and if not, what scope changes are needed?

Keywords: Learning, education, training, ITLET, adaptive systems, artificial intelligence, AI, learning technology, standards, knowledge representation, context.

1 Introduction

It is no accident that for many domains of practice, foundational standardization efforts involving IT and ICT have focused on issues of terminology, semantics, and semantic interoperability – both within a specific discipline’s semiotic system and between multiple disciplinary semiotic systems. In recent decades, initial outputs from both formal and informal Standards Setting Organizations thus typically manifested as metadata and vocabulary standards. In the case of ISO/IEC JTC 1/SC 36 – Information Technology for Learning, Education, and Training (ITLET) – the scope was well-defined two decades ago and focused on enabling ‘interoperability and reusability of resources and tools’ for all stakeholders. Excluded from the scope are: (1) standards or technical reports that define educational standards (competencies), cultural conventions, learning objectives, or specific learning content; and (2) work done by other ISO or IEC TCs, SCs, or WGs with respect to their component, specialty, or domain. Instead, when appropriate, normative or informative references to other standards shall be included. Examples include documents on special topics such as multimedia, web content, cultural adaptation, and security.

Standardization of information technology in the domain of learning, education, and training (LET) has now been proceeding for over three decades – the first formal standard on computing platform recommendations for Computer Based Training (CBT) was developed by the Aviation Industry Computer-Based Training Committee (AICC) in 1989. This foundational work was developed further, with key components of it incorporated into arguably the most successful ITLET standard to date – the Sharable Content Object Reference Model (SCORM). This standard, or more precisely a ‘reference model’ specifying several standards, was a collaborative effort that utilised input from several other standardization communities and was led by Advanced Distributed Learning (ADL) in the USA. Times have changed dramatically since then with subsequent waves of ICT innovation and disruption [1]. Several of the key players from the turn of the millennium continue to lead next-generation efforts.

At the change of the millennium, however, there was rapid growth of activity around IT (and ICT) standardization, and numerous other organizations and communities were likewise engaged, such as the Internet2 Middleware Architecture Committee for Education, the Internet Engineering Taskforce (IETF), and the Simulation Interoperability Standards Organization (SISO). Examples of broader efforts include the World Wide Web Consortium (W3C) and the Dublin Core Metadata Initiative (DCMI). It would therefore be a mistake to imagine that the only standards relevant to LET are those purpose-built for it. To highlight this point, one only needs to consider metadata standards. On the one hand, the DCMI defined a concise set of metadata elements that provided for cross-domain semantic interoperability because of their generality. On the other hand, the IEEE LTSC defined a more elaborate set of metadata elements specific to the domain of learning, known as Learning Object Metadata (LOM). Despite the fact that much of the underlying semantics of the data models of both standards were similar, it took almost a decade before a harmonized approach was finally standardized by ISO/IEC JTC 1/SC 36 (hereafter SC36) as a series of Metadata for Learning Resources (MLR) standards. These standards also addressed syntax issues associated with the standardized bindings. At least two key lessons can be taken from this example:

  1. Metadata standards require well-defined semantics;
  2. Interoperability depends on how the semantics are expressed (illustrated in the sketch below).
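
To make the second lesson concrete, the following sketch describes the same resource under two metadata models and maps one onto the other. The element names echo the published DCMI and IEEE LOM data models, but the record structures and the crosswalk function are simplified, hypothetical illustrations rather than conformant implementations of either standard.

```python
# Hypothetical, simplified records: the same resource described under
# a DCMI-style flat model and a LOM-style nested model.
dcmi_record = {
    "dc:title": "Introduction to Photosynthesis",
    "dc:language": "en",
}

lom_record = {
    "general": {
        "title": {"en": "Introduction to Photosynthesis"},
        "language": ["en"],
    }
}


def lom_to_dcmi(lom: dict) -> dict:
    """Map a (simplified) LOM-style record onto Dublin Core elements.

    The semantics overlap, but the expression differs: the LOM style
    nests a language-tagged title inside a 'general' category, whereas
    the DCMI style flattens it to a single element. Interoperability
    therefore depends on an explicit crosswalk such as this one.
    """
    general = lom.get("general", {})
    languages = general.get("language", [])
    lang = languages[0] if languages else None
    title_by_lang = general.get("title", {})
    return {
        "dc:title": title_by_lang.get(lang) if lang else None,
        "dc:language": lang,
    }


assert lom_to_dcmi(lom_record) == dcmi_record
```

Even in this toy form, the crosswalk must encode decisions (which language to prefer, how to flatten structure) that the shared semantics alone do not determine – precisely the kind of decision that took the MLR harmonization effort years to settle.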

A closely related issue for all standardization efforts is terminology, for it is terminology, together with the underlying concepts and semantics, that determines scope – not only for a standards committee, but also for the standards the committee produces.

This paper offers a critical perspective on what scope needs to be defined or modified for the next generation of ITLET standards arising from developments in AI. In doing so, we recognize there is a broader public discourse that brings together both anticipation of significant societal benefit and anxiety about AI, currently articulated in the development of social benefit, privacy, and ethical frameworks [6, 10, 11]. It is also important to make explicit that we consider the likely impact of AI on ITLET will be both disruptive and enabling. In this respect, it will be consistent with disruption associated with other features of the ongoing digital revolution, such as the emergence of ‘surveillance capitalism’ powered by big data and ‘surveillance socialism’ powered by social credit [12, 13]. More than this, we concur with Floridi et al., in their recent work on the AI4People project, that ‘AI is not another utility that needs to be regulated once it is mature. It is a powerful force, a new form of smart agency’ [10].

We use several vignettes, foregrounding some critical questions that could assist in shaping scope statements; these are elaborated in Section 4.

2 Scope Statements

Scope statements typically represent the primary focus of any standardization effort, at both macro and micro levels. At a macro level, an SDO committee’s scope, and the scopes of its working groups, study groups, ad-hoc groups, etc., set the boundaries for the focus of standards development within the committee and its groups.

At a micro level, each standard developed by an SDO committee usually contains a scope statement. At both levels, a scope’s function is to articulate explicitly what is included and excluded in the development of a standard, thereby setting the boundaries of its focus. In doing so, a standard’s scope relies on well-formed statements and unambiguous semantics. Scopes constrain both standards development activities and their outputs. Of course, where there are issues of terminology interpretation, the standard deals with them explicitly through the articulation of terms and definitions, usually in a dedicated section such as ‘Terms and Definitions’ that, much like a glossary, expresses specific contextual meanings.

In practice, however, committees and working groups in Standards Development Organizations (SDOs) sometimes unwittingly engage in ‘scope creep’. This is partly because the supporting rationale for the scope is either opaque or absent [2]. Scope creep can occur at both macro and micro levels. Sometimes it is necessary for an SDO committee to re-evaluate a current scope, at both levels, due either to evolving technologies or to clarification of existing technologies.

At a generalized high level of investigation, then, it is relatively easy to create what would seem to be a workable scope statement. For example, establishing a study group or working group focused on ‘Artificial Intelligence (AI)’ clearly defines the scope – as the entire field of AI. Where things become challenging is when we move from the general to the specific – identifying focused requirements that will enable a function and rendering them into technical reports, specifications, or standards.

In today’s rapidly evolving technology environment, it is sometimes difficult to determine whether a committee’s scope, at both macro and micro levels, is suitable. For example, with the advent of AI standardization, the main focus of how AI learns is Machine Learning (ML). ML can utilize a single algorithm, such as a Deep Learning Neural Network (DLNN), or an ensemble of algorithms, as in the stacking technique, where the output of one ML algorithm, such as a Support Vector Machine classifier, feeds into a prediction algorithm such as a DLNN (see the sketch below). In an ITLET context, the use of ML algorithms, singly or as an ensemble, could be considered a pedagogical model. The problem here is that the scope of SC36 considers standardization of pedagogies out of scope. This limitation of scope, however, was intended for standardization of pedagogies for humans, not necessarily for AI, an IT technology.
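
As a concrete illustration of stacking, the following sketch uses scikit-learn; the synthetic dataset, the parameter choices, and the use of a small multi-layer perceptron as a stand-in for a deep neural network are illustrative assumptions only, not a recommended configuration.

```python
# A minimal stacking ensemble: base classifiers feed their predictions
# into a neural-network final estimator, mirroring the SVM-to-DLNN
# arrangement described in the text.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for, say, learner-interaction features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners whose outputs become features for the final estimator.
base_learners = [
    ("svm", SVC(probability=True, random_state=0)),
    ("tree", DecisionTreeClassifier(random_state=0)),
]

# Final estimator: a small neural network trained on the base outputs.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=MLPClassifier(hidden_layer_sizes=(16,),
                                  max_iter=1000, random_state=0),
)
stack.fit(X_train, y_train)
print(f"Held-out accuracy: {stack.score(X_test, y_test):.2f}")
```

Whether such an ensemble, deployed to sequence content for a learner, counts as a ‘pedagogical model’ is exactly the scope question raised above.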

3 Evolution of Artificial Intelligence

Just as historical perspective provides insight into current challenges in ITLET, it also informs standardization activities in AI. Thus, while AI is now receiving mainstream media attention as something about to have a revolutionary impact on society, it is also a field that has been evolving for over 60 years [3, 16]. Indeed, many applications and services on smartphones that are now simply considered ‘smart’ (such as voice recognition and predictive text) were once considered the domain of AI. Additionally, it is now common in public discourse to refer to ‘an AI’ as an entity-in-itself, such as a robot (physical or virtual).

The evolution of the field of AI therefore has an impact on terminology, and it is dependent on definitions of both ‘artificial’ and ‘intelligence’. Consider, for example, two definitions of the field of AI – one from 1956 and one from 2019:

1956. The science and engineering of making intelligent machines [5].

2019. A collection of interrelated technologies used to solve problems autonomously and perform tasks to achieve defined objectives without explicit guidance from a human being [6].

Arguably, what is more significant about the commentary from 1956 is that the conference at which the term ‘artificial intelligence’ was first coined emerged from the scope definition of a research project:

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it [7].

Such a scope statement is both focused and versatile. In reviewing recent examples developed by ISO committees, however, we find AI defined in several ways:

capability of a functional unit to perform functions that are generally associated with human intelligence such as reasoning and learning (ISO/IEC 2382:2015(en), 2123770)

branch of computer science devoted to developing data processing systems that perform functions normally associated with human intelligence, such as reasoning, learning, and self-improvement (ISO/IEC 2382:2015(en), 212139)

interdisciplinary field, usually regarded as a branch of computer science, dealing with models and systems for the performance of functions generally associated with human intelligence, such as reasoning and learning (ISO/IEC 2382:2015(en), 2123769) [9]

<system> capability of an engineered system to acquire, process and apply knowledge and skills (ISO/IEC WD 22989:2019(E))

<engineering discipline> which studies the engineering of systems with the capability to acquire, process and apply knowledge and skills

Note 1 to entry: knowledge are facts, information, and skills acquired through experience or education (ISO/IEC WD 22989:2019(E))

While the above definitions might be ‘fit for purpose’ in specific contexts, it is contestable whether they are sufficient or appropriate for all situations in the field of ITLET. Thus, the first definition only refers to the field, while the second and fourth privilege a systems view; yet there is no reason why AI cannot be implemented simply as a service, or as a module at a component level. The multiple ISO definitions together convey the broad scope of AI, though we suspect they also require fine-tuning for certain ITLET scenarios, such as adaptive learning systems supporting human learning, and Symbiotic Learning (SL), where a human and an AI entity can each be learner and/or teacher. For standards supporting SL implementations, one must ask whether the scopes of ISO/IEC JTC 1/SC 36 Information Technology for Learning, Education, and Training and ISO/IEC JTC 1/SC 42 Artificial Intelligence allow for the production of standards for SL-type systems.

4 Key Questions

How questions are asked matters. As Alan Turing famously argued in 1950, the popularised question ‘Can a machine think?’ should be replaced by the more precise question ‘Can a machine be linguistically indistinguishable from a human?’ [8]. Questions are pivotal to driving scientific research and innovation. Whether they are straightforward or complicated to answer, they can also help in determining or articulating scope. In the context of contemporary developments with AI in LET, there is a growing list of questions that demand clarity. Importantly, the questions that require answers reach beyond the technical domain of LET and AI to the socio-legal-political domain.

Such questions may seem ‘out-of-scope’ for technically-focused ITLET work; however, they do stimulate a series of related questions that should be ‘in-scope’ for an ITLET or AI standards committee.

4.1 Is Interoperability Necessarily the Goal?

Standardization experts may see such a question as provocative and impractical, preferring instead to identify which aspects of a domain would benefit from standardization and which would not. Much of the discourse associated with learning technology standards over the past 30 years, however, has been explicitly focused on achieving interoperability (of IT systems, components, services, and structured learning content). With SCORM, the notion of interoperability was situated within a broader collection of ‘ilities’ (reusability, adaptability, accessibility, affordability, and durability). Once the era of service-oriented architectures and cloud computing arrived, ‘composability’, ‘flexibility’, ‘scalability’, ‘sustainability’ and ‘agility’ were added to the list [4]. Of course, not all systems should be interoperable, and the domains of privacy, security, and cryptography are becoming increasingly interrelated – indeed, interoperability on the one hand and privacy and security on the other are typically orthogonal concerns. With the inventor of the Web recently articulating angst regarding the loss of privacy and the use of the Web as a platform for misinformation [14], we must reconsider the pivotal role of interoperability from both a human and an AI perspective.

Thus, might the ethical and social imperatives for security and privacy, from both a human and an AI perspective, be such that non-interoperable systems could be a ‘smarter’ choice in some contexts? Following on, a question emerges aimed at a practical goal: in what ways can we realize the ‘trustworthiness of AI’?

4.2 What Standards are Necessary to Allow AI to Learn?

In many respects, such a question seems obvious at the beginning of any standards development project – what standards are necessary? In this case, however, there is an additional layer of complexity due to the presence of the term ‘learn’, because within ISO there are over 100 variants of this term associated with existing standards [9]. Of course, while the high-level scope of SC36 addresses ‘learning’, there is no assumed equivalence between learning in AI and human learning – very clear distinctions exist. Moreover, in international ITLET contexts, learning is commonly defined as ‘the acquisition of knowledge, skills, and attitudes’ – it is certainly defined this way by SC36. Thus, if we are to probe this question while diligently aligning with or leveraging related standards, the acquisition of knowledge and skills may seem relatively straightforward: we can decompose a set of requirements and specify them as components and relations within a service model (see the sketch below). However, with ‘attitudes’ and the ‘smart agency’ that AI can offer, numerous other issues intervene in an AI context – for example, standards for pedagogical models, learning objectives, cultural conventions, bias, trustworthiness, and symbiosis. This then leaves us to consider whether the scopes of certain ITLET standards committees need modification for producing learning standards for AI needs.
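
The contrast can be sketched as a data structure. In the hypothetical model below, every class and field name is our own illustration, not drawn from any standard: knowledge and skills decompose naturally into measurable components, while the attitudes component merely exposes the open question the text raises.

```python
# A minimal sketch of learning as 'the acquisition of knowledge,
# skills, and attitudes'; all names here are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class KnowledgeComponent:
    """A fact or concept an agent (human or AI) can acquire."""
    identifier: str
    description: str
    mastery: float = 0.0  # 0.0 (none) to 1.0 (full) - measurable


@dataclass
class SkillComponent:
    """A performable capability - observable, hence specifiable."""
    identifier: str
    description: str
    proficiency: float = 0.0


@dataclass
class AttitudeComponent:
    """Attitudes resist decomposition: there is no agreed scale for
    observing or measuring an AI's 'attitude', so this field merely
    marks the open standardization question."""
    identifier: str
    description: str
    disposition: Optional[str] = None


@dataclass
class LearnerModel:
    """An agent's state under the knowledge/skills/attitudes view."""
    agent_id: str
    knowledge: List[KnowledgeComponent] = field(default_factory=list)
    skills: List[SkillComponent] = field(default_factory=list)
    attitudes: List[AttitudeComponent] = field(default_factory=list)
```

The first two lists lend themselves to service interfaces and conformance testing; the third does not, which is one way of seeing why ‘attitudes’ pulls the scope question beyond current ITLET committee boundaries.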

4.3 Are New Trust and Privacy Infrastructure Standards for AI and Humans Required?

Trust is a pivotal enabler of Internet technologies and is embedded within several well-known services. In the education sector, schemas such as eduPerson and eduOrg are built on long-established protocols such as the Lightweight Directory Access Protocol (LDAP) and designed specifically for global higher education settings. Likewise, eduroam extends the trusted domain of identity management from a single institution to an international federation, while ORCID (Open Researcher and Contributor ID) now provides a trusted common source for researcher identity. At the same time, however, there is a decline in public trust of long-established institutions such as churches and banks, and, perhaps more concerning, a decline in trust of political systems and governance. Yet, ironically, there is also a resilience within the existing digital infrastructure, with a discernible augmentation of trust in the way that services from ‘strangers’ are brokered – examples include Uber and Airbnb. Fundamentally, trust requires transparency. In this era of big data and social credit, data governance becomes an issue requiring scrutiny, and therefore trust infrastructure requires careful attention when introducing AI into the mix of services and supporting technologies.

Because the question of trust, as phrased above, is binary and implicitly rhetorical, a consequent question becomes: what kinds of trust and privacy infrastructure standards need to be developed in an AI context? Or, as phrased in 4.1 above: in what ways can we realize the ‘trustworthiness of AI’? In answering such questions, we also need to be mindful to differentiate between standards and policies – and the guidelines or procedures associated with each. Each country and/or culture has its own regulation on trust and privacy from a human perspective, and standards have been developed to address the differing implementations. In an AI context, however, there are not many standards addressing trust and privacy, although research and guidelines have been developed in several jurisdictions for this purpose [17–19]. Even though trust and privacy needs are different for humans and AI, when both interact in today’s digital ecosystems there is a co-dependency between humans and AI. Thus, how can standards be developed not only for the human or the AI, but also for trust and privacy in a symbiotic relationship between the two?

Given that trust is often associated with transparency (of process, etc.) and that openness is semantically related, another question arises when probing the nature of trust and privacy infrastructure: does openness need recalibration to constrain anonymity? This question has arisen from the proliferation of misinformation on the Internet that is enabled by anonymous and non-identifiable sources. It also follows the lament of Tim Berners-Lee that we have ‘lost control of our privacy’ and need to find a way to curtail misinformation [14]. Thus, in an AI-powered era this question demands attention, given that in the evolution of the web anonymity has somehow become embedded as a feature of ‘openness’.

4.4 How Can Standards From Other SDOs Be Leveraged as Normative References for ITLET Standards Relating to AI?

Traditionally, when a standard developed by another organization is critical to implementing a standard under development, SDOs make use of it by normatively referencing it. Given today’s rapid evolution of technologies, developing standards that can be effectively utilized by other SDOs is key to the effective development of other standards. Developing standards that can be utilized across IT domains is becoming more difficult, especially if needs and requirements are not known to the SDO developing a standard. For example, in a LET context, the developers of a trustworthiness standard in ISO/IEC JTC 1/SC 42 Artificial Intelligence should have knowledge of the trustworthiness standards or requirements of SC36 systems that use AI technologies. SC42 can then develop the AI trustworthiness standard to allow effective implementation of trust components for LET systems, hence allowing SC36 standards to include the SC42 standard as an effective normative reference.

Traditionally, exchange of knowledge between SDOs is done at the macro level through Liaisons, for example a Category A Liaison at the ‘Plenary level’ in ISO/IEC JTC 1 Sub-Committees. What is needed in today’s evolving technologies and related standards development activities is the sharing of knowledge at the micro levels, such as a Category C Liaison at the ISO/IEC JTC 1 SC Working Group level. The key benefit is the sharing of knowledge at the requirement and technology level, rather than the policy level. However, an administrative problem then arises in creating and maintaining a micro-level Liaison, such as a Category C Liaison between ISO/IEC JTC 1 SC WGs. To overcome this problem, current ISO directives allow an ISO/IEC JTC 1 SC WG to ‘invite’ technical experts to a meeting, and this is used by several ISO/IEC JTC 1 SC WGs. A potentially better way could be to extend Liaisons, such as a Category A Liaison between ISO/IEC JTC 1 SCs, to cover both the macro (Plenary) and micro (Working Group) levels. This would of course dictate a change in how Liaisons function, and take additional time for Liaisons to digest and report needed knowledge to their own SDO.

4.5 How Can Vocabularies Between SDOs Be Harmonized?

During standards development within the ISO context, it is common practice to review standardized terms and definitions that already exist in related standards prior to ‘reinventing the wheel’. Where appropriate, terms and definitions are adopted and normative references are included within the new standards work. However, this does not always happen, because the context of a new work item can involve nuance not previously considered. Our experience in the ITLET domain has been that harmonization can be challenging work. In developing the MLR suite of standards, for example, harmonizing seemingly simple terms like ‘learning object’ from the IEEE LTSC and ‘learning resource’ from the broader library community took a significant amount of time to resolve. With entire standardized vocabularies, the effort can take considerably longer. If we think ahead, we can foresee a time when machine-understandable (AI) representations of standards may assist in this quest: with ontological representations of terminologies from diverse committees and machine-understandable representations of the technical specifications, AI becomes a logical extension of the enabling and supportive character of IT itself. Moreover, for over two decades the World Wide Web Consortium (W3C) has been developing and implementing similar solutions, a prominent example being the Web Ontology Language (OWL), which was developed as a common technical way to author and represent ontologies enabling shared semantics (see the sketch below). While it is also the case that W3C specifications support ongoing development of the ‘open’ web and linked open data – and arguably, therefore, serve less precise purposes than formal standards development – effective reuse of ontologies has been a key success factor for the Semantic Web [15].
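
To suggest what a machine-understandable harmonization record might look like, the sketch below uses OWL via the rdflib Python library (assumed installed). The namespaces and class names are hypothetical stand-ins for terms defined by two different committees; only owl:equivalentClass and the rdflib API are real.

```python
# Recording a vocabulary harmonization decision as an OWL axiom.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

LTSC = Namespace("http://example.org/ieee-ltsc#")  # hypothetical namespace
MLR = Namespace("http://example.org/iso-mlr#")     # hypothetical namespace

g = Graph()
g.bind("ltsc", LTSC)
g.bind("mlr", MLR)

# Each committee defines and labels its own term...
g.add((LTSC.LearningObject, RDF.type, OWL.Class))
g.add((LTSC.LearningObject, RDFS.label,
       Literal("learning object", lang="en")))
g.add((MLR.LearningResource, RDF.type, OWL.Class))
g.add((MLR.LearningResource, RDFS.label,
       Literal("learning resource", lang="en")))

# ...and a single axiom captures the harmonization decision in a
# machine-understandable, reusable form.
g.add((LTSC.LearningObject, OWL.equivalentClass, MLR.LearningResource))

print(g.serialize(format="turtle"))
```

An equivalence axiom is, of course, the easy case; most real harmonization decisions (like ‘learning object’ versus ‘learning resource’) involve partial overlaps that would need weaker relations such as subclass or SKOS-style close-match mappings.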

In today’s context of rapidly evolving technologies and related standards development, key to interoperability and integration is the use of vocabularies that share the same semantics. To achieve this, ISO/IEC has encouraged the formation of Terminology Coordination Groups (TCGs), both within a specific SDO committee, and between different SDO committees.

4.6 What Are the Standardization Impacts of ‘Joint Working Groups’ Formed From Multiple SDOs with Differing Scopes?

In today’s ISO/IEC standards development activities, where specific technology paradigms increasingly blur together, the use of Joint Working Groups (JWGs) is increasing. For example, ISO/IEC JTC 1/SC 42 Artificial Intelligence has recently formed a JWG with ISO/IEC JTC 1/SC 40 IT Service Management and IT Governance. A JWG develops standards under the authority of the ‘macro’ level scopes of both SCs, and can therefore adopt ‘micro’ level scopes for standards development that might be forbidden under the ‘macro’ level scope of just one of the SCs. The use of a JWG can also allow needed experts to collaborate more effectively when differing areas of expertise are required.

There are other SDO agreements that allow different SDOs to form JWGs. For example, the ISO/IEEE Partner Standards Development Organization (PSDO) Cooperation Agreement allows participating ISO or IEEE standards committees to form JWGs to develop standards. Again, this allows development of standards under the authority of the ‘macro’ level scopes of both committees, and the generation of appropriate ‘micro’ level scopes for specific projects.

5 Conclusion

In presenting a summary of some key features of the contemporary ITLET context, this paper has highlighted some questions that we consider representative of the inquiry that needs to accompany standardization where AI is implicated. Formulating concise and semantically precise scope statements has always been essential to robust standardization activities; however, we question whether this is sufficient within the broader standardization ecosystem, where there is a proliferation of terminology and associated definitions, much of which is questionable and may not endure. This new era of emerging AI and ML opens several frontiers at once and introduces new complexity into one of the basic foundations of all standards development: terminology. Achieving clarity of purpose and rationale where AI can deliver a net benefit is an important next step for the standardization of AI in ITLET.

References

[1] R. Robson, A. Barr, ‘The New Wave of Training Technology Standards’, Proc. Interservice/Industry Training, Simulation, and Education Conf. (I/ITSEC). 2018.

[2] T. Hoel, J. Mason, ‘Deficiencies of scope statements in ITLET standardization’, Proc. 20th Int. Conf. on Computers in Education (ICCE), Singapore, 2012.

[3] P. McCorduck, M. Minsky, O. G. Selfridge, H. A. Simon, ‘History of Artificial Intelligence’, in Proc. IJCAI, pp. 951–954, 1977.

[4] J. Mason, ‘Standards, Services, Models, and Frameworks: Trends in ICT Infrastructure for Education and Research’, in J. Yoshida & H. Sasabe (eds.), Next Generation Photonics and Media Technologies, 143–150, PWC Publishing: Chitose, Japan, 2007.

[5] B. Marr, ‘The Key Definitions of Artificial Intelligence (AI) That Explain Its Importance’, Forbes, 2018.

[6] D. Dawson, E. Schleiger, J. Horton, J. McLaughlin, C. Robinson, G. Quezada, J. Scowcroft, S. Hajkowicz, ‘Artificial Intelligence: Australia’s Ethics Framework’. Data61 CSIRO, Australia, 2019.

[7] J. McCarthy, M. Minsky, N. Rochester, C. E. Shannon, ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’. http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf, August 1955.

[8] S. Bringsjord, N. S. Govindarajulu, ‘Artificial Intelligence’, The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/fall2018/entries/artificial-intelligence/, 2018.

[9] ISO Online Browsing Platform, https://www.iso.org/obp/ui/, 2019.

[10] L. Floridi, J. Cowls, M. Beltrametti, ‘AI4People – An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’, Minds & Machines 28: 689, 2018.

[11] The IEEE Initiative on Ethics of Autonomous and Intelligent Systems, ‘Ethically Aligned Design, v2’, https://ethicsinaction.ieee.org, 2018.

[12] S. Zuboff, ‘In the age of the smart machine: The future of work and power.’ Basic Books, NY. 1988.

[13] S. Zuboff, ‘Big other: Surveillance capitalism and the prospects of an information civilization’, Journal of Info Tech, 30, 75–89, 2015.

[14] T. Berners-Lee, ‘Three challenges for the web. An Open Letter to the Web Foundation’, http://webfoundation.org/2017/03/web-turns-28-letter/, 2017.

[15] E. Simperl, ‘Reusing ontologies on the Semantic Web: A feasibility study’, Data & Knowledge Eng, 68:10, 905–25, 2009.

[16] E. Fast, E. Horvitz, ‘Long-term trends in the public perception of artificial intelligence’, Proc. 31st Conf. on Artificial Intelligence, 2017. https://arxiv.org/pdf/1609.04904.pdf

[17] F. Rossi, ‘Building trust in artificial intelligence’, Journal of Int Affairs, 72:1, 127–134, 2019. https://www.jstor.org/stable/26588348

[18] European Commission, ‘Ethics Guidelines for Trustworthy AI’, High-Level Expert Group on Artificial Intelligence, https://ec.europa.eu/futurium/en/ai-alliance-consultation.

[19] A. Ferrario, M. Loi, E. Viganò, ‘In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions’, Philosophy & Technology. 23:1–7, 2019.

Biographies


Jon Mason is Assistant Dean International and a Senior Lecturer in the College of Education at Charles Darwin University, Australia. Specialising in the application of digital technologies, his research spans questioning, open educational practices, sense-making, data literacy, standardization, and digital learning futures. He holds Master’s degrees in Cognitive Science and Knowledge Management and a PhD in Education. He has been active in international standardization of information technology for learning, education, and training for over 20 years.


Bruce E. Peoples received his BSc in Cross Cultural Communications, and MSc in Instructional Design from Clarion University, USA; and a PhD in Information Technology Science from Université Paris 8, France. He is a Fellow at the International Institute of Informatics and Systemics and a recipient of the Raytheon 2006 Excellence in Technology award. He has over 25 years’ experience developing and leading international standards activities in IEEE and ISO/IEC. He is Chair Emeritus of ISO/IEC JTC1 SC36 Information Technology for Learning, Education and Training. He currently participates in ISO/IEC JTC1 SC42 Artificial Intelligence, ISO/IEC JTC1 SC36 ITLET, ISO/IEC JTC1 AG16 Brain-computer interface, and IEEE Learning Technology Standards Committee.


Jaeho Lee has been a Professor in the School of Electrical and Computer Engineering at the University of Seoul, Korea, since 1998. He received BSc and MSc degrees in Computer Science from Seoul National University, Seoul, Korea, in 1985 and 1987, respectively, and a PhD in Computer Science and Engineering from the University of Michigan, Ann Arbor, Michigan, USA, in 1997. His interests are in artificial intelligence, intelligent service robots, and educational technology. He has over 20 years of experience developing international standards in ISO/IEC.
