Global Challenges in the Standardization of Ethics for Trustworthy AI


  • Dave Lewis ADAPT Centre, Trinity College Dublin, Ireland
  • Linda Hogan ADAPT Centre, Trinity College Dublin, Ireland
  • David Filip ADAPT Centre, Trinity College Dublin, Ireland
  • P. J. Wall ADAPT Centre, Trinity College Dublin, Ireland



Keywords: Artificial intelligence, ethics, Trustworthy AI, standards, stakeholders


In this paper, we examine the challenges of developing international standards for Trustworthy AI that aim both to be globally applicable and to address the ethical questions key to building trust at a commercial and societal level. We begin by examining the validity of grounding standards that aim for international reach in human rights agreements, and the need to accommodate variations in prioritization and trade-offs when implementing rights in different societal and cultural settings. We then examine the major recent proposals from the OECD, the EU and the IEEE on the ethical governance of Trustworthy AI systems in terms of their scope and use of normative language. From this analysis, we propose a preliminary minimal model of the functional roles relevant to Trustworthy AI as a framing for further standards development in this area. We also identify the different types of interoperability reference points that may exist between these functional roles and remark on the potential role they could play in future standardization. We then examine a current AI standardization effort under ISO/IEC JTC 1 to consider how future Trustworthy AI standards may build on existing standards in developing ethical guidelines, in particular the ISO standard on Social Responsibility. We conclude by proposing some future directions for research and development of Trustworthy AI standards.



Author Biographies

Dave Lewis, ADAPT Centre, Trinity College Dublin, Ireland

Dave Lewis is the head of the AI Discipline at the School of Computer Science and Statistics at Trinity College Dublin. He is also Deputy Director of the ADAPT SFI Research Centre for Digital Content Technology. He leads ADAPT’s programme of industry collaborative research and its multidisciplinary research theme on Data Governance. His research focuses on the use of open semantic models to manage the Data Protection and Data Ethics issues associated with digital content processing. He has led the development of international standards in AI-based linguistic processing of digital content at the W3C and OASIS and is currently active in the international standardization of Trustworthy AI at ISO/IEC JTC 1/SC 42, serving as an expert contributor to documents on an Overview of Trustworthy AI and Ethical and Societal Issues for AI.

Linda Hogan, ADAPT Centre, Trinity College Dublin, Ireland

Linda Hogan is an ethicist with extensive experience in research and teaching in pluralist and multi-religious contexts. Her primary research interests lie in the fields of inter-cultural and inter-religious ethics, social and political ethics, human rights and gender. Professor Hogan has lectured on a range of topics in ethics and religion, including Ethics in International Affairs; Ethics of Globalisation; Biomedical Ethics; Human Rights in Theory and Practice; and Comparative Social Ethics. She has been a member of the Irish Council for Bioethics and has been a Board member of the Coombe Hospital, Science Gallery and Marino Institute of Education. She has worked on a consultancy basis for a number of national and international organisations, focusing on developing ethical infrastructures.

David Filip, ADAPT Centre, Trinity College Dublin, Ireland

David Filip is a research fellow at Trinity College Dublin and is part of the ADAPT Centre. His research addresses interoperability in digital content technologies by developing and implementing open and transparent technical standards. He focuses on optimizing processes and making science and technology serve business needs, including ethical and societal concerns. He is: convenor of JTC 1/SG 1 Open Source Software; convenor of JTC 1/SC 42/WG 3 Trustworthiness of AI; national mirror chair for National Standards Authority of Ireland (NSAI) TC 02/SC 18 on AI; Head of the Irish national delegation to ISO/IEC JTC 1/SC 42 AI; Chair & Editor of the OASIS XLIFF OMOS TC; Secretary & Lead Editor of the OASIS XLIFF TC; and NSAI expert to ISO/IEC JTC 1/SC 38 Cloud Computing and to ISO TC 37/SC 3 Terminology management, SC 4 Language resources, and SC 5 Language technology.

P. J. Wall, ADAPT Centre, Trinity College Dublin, Ireland

P. J. Wall is a postdoctoral researcher at the ADAPT Centre in the School of Computer Science, Trinity College Dublin. His research focuses on technological innovation and the wider social, cultural, and political implications of the implementation and use of ICT in the Global South (ICT4D). His current research is based in Sierra Leone and examines the role of mobile technologies in reconfiguring health systems and practices (mHealth), exploring how such mobile devices are implemented, adopted, scaled, and sustained. His interest is in understanding the adoption and use of ICT, and specifically mHealth, from a variety of critical realist and interpretivist ontological perspectives.


References

European Commission High-Level Expert Group on Artificial Intelligence, “Ethics Guidelines for Trustworthy AI”, April 2019.

“Ethically Aligned Design – First Edition”, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, IEEE, 2019.

Borning, A. and Muller, M., “Next Steps for Value Sensitive Design”, Proceedings of CHI 2012, 1125–1134, New York, NY: ACM Press, 2012.

“Asilomar AI Principles”, Future of Life Institute, 2017.

OECD, “Recommendation of the Council on Artificial Intelligence”, OECD/LEGAL/0449, 2019.

UN General Assembly, Resolution 217A (Universal Declaration of Human Rights), 1948.

Pollis, A., “Human Rights: A Western Construct with Limited Applicability”, in Human Rights: Cultural and Ideological Perspectives, edited by Pollis, A. and Schwab, P., 1–18, New York: Praeger, 1979.

Dunne, T. and Wheeler, N. (eds.) “Human Rights in Global Politics”, New York: Cambridge University Press, 1999.

“Final Declaration of the Regional Meeting for Asia of the World Conference on Human Rights”, Bangkok, March 29 to April 2, 1993.

Bourgeois-Doyle, D., “Two-Eyed AI: A Reflection on Artificial Intelligence – A Reflection Paper prepared for the Canadian Commission for UNESCO”, Ottawa, Canada, March 2019.

ISO/IEC Directives Part 2: “Principles and rules for drafting and structuring of ISO and IEC documents”, 8th edition, 2018.

“Policy and Investment Recommendations for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 26 June 2019.

Hagendorff, T., “The Ethics of AI Ethics: An Evaluation of Guidelines”, 2019.

O’Keefe, K. and O Brien, D., “Ethical Data and Information Management: Concepts, Tools and Methods”, Kogan Page, 2018.

Freeman, R.E. and McVea, J., “A Stakeholder Approach to Strategic Management”, Darden Business School Working Paper No. 01-02, 2001.

Tutt, A., “An FDA for Algorithms”, SSRN Electronic Journal, 83–123, 2016.

Calo, R., “Artificial Intelligence Policy: A Roadmap”, SSRN Electronic Journal, 1–28, 2017.

Polonetsky, J., Tene, O., and Jerome, J., “Beyond the Common Rule: Ethical Structures for Data Research in Non-Academic Settings”, Colorado Technology Law Journal, 13, 333–368, 2015.

“Ethics assessment for research and innovation – Part 1: Ethics committee”, CEN Workshop Agreement CEN/CWA 17145-1:2017.

ISO Guide 82:2014, “Guidelines addressing sustainability in standards”, ISO, April 2014.

“Mapping Regulatory Proposals for Artificial Intelligence in Europe”, Access Now, November 2018.

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J.W., Wallach, H., Daumé, H., and Crawford, K., “Datasheets for Datasets”, in Proceedings of the 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning, Stockholm, Sweden, 2018.

Bender, E.M. and Friedman, B., “Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science”, Transactions of the Association for Computational Linguistics, vol. 6, pp. 587–604, 2018.

“Ethics assessment for research and innovation – Part 2: Ethical impact assessment framework”, CEN Workshop Agreement CWA 17145-2:2017.

“Guidance on social responsibility”, ISO 26000:2010(E), International Organization for Standardization, 2010.

Adamson, G., Havens, J.C., and Chatila, R., “Designing a Value-Driven Future for Ethical Autonomous and Intelligent Systems”, Proceedings of the IEEE, 107(3), 518–525, 2019.

Zeng, Y., Lu, E., and Huangfu, C., “Linking Artificial Intelligence Principles”, arXiv preprint arXiv:1812.04814, 2018.

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., and Srikumar, M., “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI”, Berkman Klein Center Research Publication No. 2020-1, January 15, 2020.

Calo, R., “Consumer Subject Review Boards: A Thought Experiment”, Stanford Law Review Online, 66, 2013.

European Union, Regulation (EU) 2016/679 of the European Parliament and of the Council (General Data Protection Regulation), Official Journal of the European Union, L 119, 1–88, 2016.

Daly, A., Hagendorff, T., Hui, L., Mann, M., Marda, V., Wagner, B., Wang, W., and Witteborn, S., “Artificial Intelligence Governance and Ethics: Global Perspectives”, 8 June 2019.




How to Cite

Lewis, D., Hogan, L., Filip, D., & Wall, P. J. (2020). Global Challenges in the Standardization of Ethics for Trustworthy AI. Journal of ICT Standardization, 8(2), 123–150.