Enhancing Communication for People with Autism Through the Design of a Context-aware Mobile Application for PECS

Fatima Ez Zahra El Arbaoui*, Kaoutar El Hari and Rajaa Saidi

SI2M Laboratory, National Institute of Statistics and Applied Economics (INSEA), Rabat, Morocco
E-mail: f.elarbaoui@insea.ac.ma; k.elhari@insea.ac.ma; r.saidi@insea.ac.ma

Received 31 March 2024; Accepted 24 September 2024

Abstract

Autism is a neurodevelopmental condition characterized by difficulties with social skills and communication. Autistic individuals require different types of assistance to cope with these challenges. The Picture Exchange Communication System (PECS) is a commonly used program for teaching nonverbal and symbolic communication skills, particularly for children with limited or no communication abilities. However, despite the development of various technology-based PECS systems, they lack features that simplify their use by children with autism. In this work, we propose the design of a personalized, context-aware PECS system. Our system not only presents pictures as content but also adapts and enhances that content using contextual information.

Keywords: Autism, IoT, PECS, context aware system, mobile application, sensing technology, autism spectrum disorder.

1 Introduction

Autism Spectrum Disorder (ASD) is a neurological and developmental disorder that affects communication and behavior [11]. Each person with autism has their own unique strengths and challenges, which is why autism is considered a spectrum disorder [10]. People with ASD commonly have difficulties with social interaction, communication, sensory issues, repetitive behaviors, and restricted interests [9], which can impact their performance at work, school, and in other areas of their lives. ASD has several subtypes, which are influenced by a combination of genetic and environmental factors, and some people with ASD may require more or less assistance, or even be able to live independently [10].

ASD affects people from all over the world, regardless of their race, nationality, culture, or economic background, and its prevalence is increasing [10]. It can also coexist with other developmental disorders such as perceptual and expressive developmental disorders, learning problems, and stuttering. The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) now includes social communication disorder and all the elements of pervasive developmental disorder and Asperger’s disorder under a single ASD category [10].

Communication deficits are one of the biggest challenges faced by people with autism [1]. Communication is the transfer of information between a communicator and a communicant, and it must be understandable to the receiver [8]. Communicators use both verbal and nonverbal communication to convey messages. Hence, enhancing speech and language abilities is a primary objective for children with ASD. Early intervention therapy in communication skills is recommended, as evidence-based treatments have been shown to be effective [2].

Assistive technology and IoT have infiltrated social skills education and offer opportunities for innovation, therapy, and education for people with ASD to improve their communication skills. PECS is a commonly used communication tool for children with ASD. PECS utilizes visual abilities and associates actions or objects with corresponding images. The progress of technology has made several programs available for people with ASD to improve their communication skills, including mobile applications for PECS.

This article discusses the use of IoT to improve communication skills for children with ASD and proposes the design of a context-aware mobile application for PECS. The proposed design considers contextual information to enhance picture management and presentation.

The remaining sections of this article are organized as follows: Section 2 presents related work and a relevant existing solution. Section 3 provides the solution requirements. In Section 4, we introduce our methodology. Section 5 presents the proposed system architecture. Finally, in Section 6, we present the conclusion and future works.

2 Related Work

The rapid growth of technology has the potential to improve living conditions for people affected by autism and their families. Researchers have explored the use of technology to support communication in people with ASD, particularly children. In this section, we provide a general overview of IoT- and technology-based solutions that can assist children with autism in communication. Then, we identify solutions dedicated to PECS, and finally, we discuss some limitations of similar work that proposes the integration of context into PECS.

2.1 Assistive IoT Systems for Communication

We collected relevant papers proposing the use of IoT to assist children with autism in communication. After analyzing the articles, we noticed that each one addresses a specific issue related to communication. As stated in [15], communication refers to the transmission of a message, which can be achieved through various means such as language, signs, actions, etc. Therefore, we reorganized the articles by classifying them into the following categories, as illustrated in Figure 1.


Figure 1 IoT-based assistance approaches for children with autism in communication.

The following part of this section presents the selected articles that aim to improve social skills and interaction among children with autism.

Social skills and interaction

McGuire and Priestly (1980) defined social skills as “those behaviors that are essential to effective face-to-face communication between individuals.” The significance of social skills in communication has been emphasized in numerous publications. In fact, various techniques have been utilized to implement the Internet of Things (IoT) and enhance social communication in children with autism.

The authors of [32] designed a system based on IoT and AI dedicated to children with autism to improve their social and daily skills. The system has two main features: monitoring heart rate and sending alarms to parents and caregivers, and developing communication skills by improving eye contact. For the second feature, the authors used IBM Watson for 3D modeling, which allows for illustrating people and interacting through dialogue. The accuracy of identifying a child’s emotion to send notifications reached 99.8%. The authors also reported the positive impact of the 3D graphical model provided by the application on the social interaction of children with autism. Improving social skills is also the purpose of the work presented in [27], in which the authors proposed the design of interactive activities on dual tablets to educate children with autism in verbal communication. The system was based on the SCoSS architecture, which enables peers to be aware of each other, and the two tablets were connected by wireless technology. The authors deployed five picture-sequencing activities; both children using the tablets needed to place the picture in the same position to move to the next round. During the sessions, the authors observed the children’s engagement, imitation, and awareness as the main aspects of social collaboration and interaction. The analysis of the results suggested that collaborative software is a useful tool for promoting communication skills in children with autism.

The authors of [26] and [25] built their research around augmented reality and virtual agents, respectively. In [26], the authors introduced four augmented-reality applications, two of them dedicated to enhancing communication skills, while the others targeted other cognitive skills. The first game, “Supporting Ships in the Air,” asks players to fuel ships to reach goals, and the game becomes easier when several players collaborate to accomplish the goal and move to the next level. The results obtained from observing children playing this game show an improvement in their attention. The second game introduced in this study, “Hunting Treasure,” requires closer collaboration between two players to achieve a specific goal by directing the robotic ball Sphero 2.0; the game includes gathering items, problem-solving, etc. The authors of [25] proposed the use of a virtual agent to teach children with autism social interaction. They proposed “AVATAR,” which stands for “Autism: Virtual Agents to Augment Relationships in children.” The system combines human intelligence and artificial intelligence. It provides two interfaces and an expert system that acts as a mediator. The first interface is designed for the child to interact with the agent during learning sessions. The second interface is a platform that keeps parents and therapists informed about the child’s progress. AVATAR promotes social interaction through learning by imitation; it is programmed with a variety of scenarios and features for virtual agents. No experiments or evaluation tests were reported in this study, but in future work the authors plan to develop features that will allow AVATAR to be deployed for groups of children as well.

The purpose of the study presented in [24] is to demonstrate how RFID/NFC technology can link digital and physical instructional materials, creating a hybrid approach that combines manipulatives and sensor technologies. To achieve this, the authors introduced various applications and prototypes, including WandBot, the Walden PECS Communicator (WPC), and Block Magic. They also introduced the concept of Class 2.0 as an educational context based on IoT technologies and games, which could facilitate the development of essential cognitive skills and social interaction and empower effective learning processes.

In addition to improving social skills and interaction, technology can also assist in recognizing and understanding emotions in children with autism. This is particularly crucial as individuals with autism may face challenges in recognizing and expressing emotions, which can lead to difficulties in social communication. Several studies have explored the use of technology, including IoT, to improve emotion recognition in children with autism. The following paragraph will provide an overview of some of these studies.

Emotion recognition

Efficient communication, according to [6], relies on the recognition of language, tone, facial expression, gaze, and posture. However, individuals with ASD may have deficits in this recognition, as noted in the diagnostic criteria of both the International Classification of Diseases (ICD-10) and the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). In [20], the authors proposed the design of IoT-based games to aid in the education and interaction of children with ASD, introducing the concept of emotional interaction design to engage the child’s emotions and establish a connection between the child and the game. The authors incorporated IoT technologies such as Radio-Frequency Identification Technology (RFID), Global Positioning System (GPS), and data processing. They also reported on intervention measures for children with autism, including music therapy, game therapy, Applied Behavior Analysis (ABA), and sensory training. However, the study did not involve the development of IoT toys; rather, the authors implemented the smart IoT model and analyzed the market.

To develop social and emotional skills in children with autism, the authors of [19] introduced the development of an immersive virtual reality system. They recruited two groups of children with High-Functioning Autism (HFA), one of which participated in 10 sessions with the developed tool, while the other did not participate in any sessions. The authors compared the performance of the two groups and reported that the tool could be used as an educational tool for the children. The developed virtual environment represents a virtual school with many contexts where children can interact with avatars and practice verbal and nonverbal communication skills. Data were collected during all phases of the deployment of the system, and the processing of these data showed an improvement in many aspects of the participants’ behaviors.

Similarly, the authors of [18] proposed “Buddy” as a virtual reality agent to assist children with autism in social and emotional skills. The system architecture is based on two different user interfaces. The first one is for the child affected by autism, and the second interface is used by an individualized educational placement (IEP) team. The main objectives of the system are to provide education in communication and social skills and to act as a friend for the child by providing emotional support. Moreover, the system is equipped with sensors that can track the body language and facial expressions to determine the child’s mental state. Based on the child’s emotions, the system produces relevant content.

It is important to note that while emotion recognition is a critical component of autism intervention, it is just one piece of the puzzle. Developing verbal and language skills is equally crucial for fostering effective communication and social interactions. Fortunately, numerous technologies have been developed to support language acquisition and improve communication abilities, as we will see in the next section.

Verbal and language skills

The study presented in [17] reports the impact of robot-based interaction on improving the verbal and nonverbal communication skills of children with autism. Four children participated in the study and were invited to take part in four sessions with the NAO robot over four weeks. The first session was an introduction; in the second session, questions and answers with the NAO robot were programmed; and in the third session, the interaction with the robot was based on physical activities. All feedback was collected during the final session, which was recorded using an Android phone camera and another camera attached to the robot. The analysis of the interactions shows that three of the four children reacted favorably to the robot and that this interaction helped them improve their communication skills.

A similar study was introduced in [16], where the authors aimed to improve the rehabilitation of children with autism by proposing a robot that therapists can use to help these children improve their communication skills. IoT is implemented through a smartphone that monitors the robot via Wi-Fi. The robot, called “Tomodachi,” can work in two different modes: an automatic mode, in which the robot recognizes the child through facial expression recognition and starts interacting with them, and a manual mode, in which it is managed from a smartphone. Numerous features of this robot were developed, including facial recognition, response to voice commands, and the ability to be controlled via the Touch OSC application. The proposed robot therefore offers useful features for improving the rehabilitation of communication skills.

In addition, robot-human interaction was reported in both [14] and [13]. In [14], the authors programmed an NAO robot to act as an assistant for teaching children with autism to spell words, read texts, and follow various lessons. The system is built upon an interactive framework, with many lessons uploaded to the robot under the supervision of a tutor. There are two ways to interact with the robot: by touching the robot’s screen or by voice. The robot offers a variety of lessons, including programming, mathematics, science, technology, etc. However, the developers did not evaluate the designed robot in real situations, and they plan to gather feedback data to improve the lessons.

On the other hand, the authors of [13] presented a platform built upon the NAO humanoid robot to teach children with ASD movements and verbal skills. The system requires the intervention of a teacher to control the communication between the child and the robot. The whole system consists of an NAO robot, Kinect sensors that track the child’s actions, and a laptop connected to the robot via Wi-Fi that transmits data over the TCP/IP protocol. The robot can either invite the child to imitate it or teach them movements. The system was tested with typically developing adults and children with ASD. From the results, the authors reported that further investigations are needed to validate the system’s performance on speech and movement interactions.

The authors of [12] developed ASPECT, an Alexa-based skill to evaluate the performance of digital voice assistants (DVAs) in recognizing and assessing the verbalizations of children with ASD. ASPECT operates in three modes: startup, practice, and assessment. In the startup mode, a therapist is asked to enter a unique ID for the user (the autistic child) and select the assessment mode or the practice mode. In the practice mode, the therapist can select utterances that the child needs to practice to improve verbal skills. Similarly, in the assessment mode, the therapist selects utterances that the child will practice during a session, but in this mode ASPECT provides feedback on the child’s performance. ASPECT was built upon the Amazon Web Services (AWS) framework, and the speech recognition engine was designed to match a child’s speech.

In Table 1, we present a summary of all the mentioned assistive IoT systems.

Table 1 Assistive IoT systems for communication

Category | Ref | Description | Technology associated with IoT
Social skills and interaction | [32] | An AI-enabled IoT system aimed at enhancing the cognitive abilities of children with autism. | AI
Social skills and interaction | [27] | This work explores how the use of dual tablets can enhance communication and other-awareness in learning disabled children with autism. | Mobile technology
Social skills and interaction | [26] | The use of augmented reality to improve the appeal and effectiveness of learning. The authors discuss the benefits and potential of this technology in enhancing the learning experience. | Augmented reality
Social skills and interaction | [25] | The study investigates the potential benefits of virtual agents and their impact on promoting social interaction and communication skills in ASD children. | Virtual reality
Social skills and interaction | [24] | The integration of RFID/NFC technologies in educational games to bridge the gap between digital and physical learning experiences. The study examines the potential of these technologies and their impact on enhancing the educational outcomes of students. | RFID/NFC technologies
Emotion recognition | [20] | The design of an emotional interaction system for children with ASD using smart toys connected to the IoT. The study focuses on developing an app that facilitates emotional engagement and communication skills. | IoT
Emotion recognition | [19] | The study aims to enhance communication skills and problem-solving abilities for ASD children by using virtual reality. The research highlights the potential of virtual reality as an educational tool for ASD children. | Virtual reality
Emotion recognition | [18] | This work presents “Buddy,” a virtual life coaching system designed to support children and adolescents with HFA, providing personalized assistance and guidance. | Virtual reality
Verbal and language skills | [17] | This work explores the enhancement of communication skills in children with ASD through human-robot interaction, aiming to improve both verbal and non-verbal communication abilities. | Human-robot interaction
Verbal and language skills | [16] | This work presents an interactive robotic platform designed for both education and language skill rehabilitation purposes. It explores the potential of using robotics to enhance learning experiences and aid in language development. | Human-robot interaction
Verbal and language skills | [13] | The authors developed an interactive training system that combines imitation and speech instructions to facilitate motor learning in children with autism. The system aims to improve their motor skills through a multimodal approach, providing a promising intervention method for children on the autism spectrum. | NAO humanoid robot
Verbal and language skills | [14] | This study explores the use of a humanoid robot as an assistant tutor for autistic children, aiming to enhance their learning experience. The researchers investigate the effectiveness of this approach and discuss its potential benefits for improving the educational support provided to autistic individuals. | NAO humanoid robot
Verbal and language skills | [12] | This study explores the use of IoT devices to help children with autism spectrum disorder develop echoic skills. The researchers investigate the potential benefits of these devices in supporting the language development of children with autism. | IoT

Through the use of technology, researchers have developed innovative ways to implement and enhance PECS, such as digital versions and mobile applications. These technologies have the potential to increase the accessibility and effectiveness of PECS for children with autism. In the following section, we will discuss some of these innovations and their impact on communication and social skills in children with autism.

2.2 PECS

Augmentative and alternative communication (AAC) is used by people who, some or all of the time, cannot rely on their speech. It incorporates the individual’s full communication abilities and can include any speech or vocalization, gestures, hand signals, and assisted communication. It is truly multimodal, allowing individuals to use every possible mode to communicate. More than 2 million individuals with significant expressive language impairment use it [4]. AAC includes various types of systems, such as sign language, picture cards, speech-generating devices, etc. Its primary goal is to enable the user to communicate interactively in the most effective and efficient manner. Among the methods that autistic children can follow with their parents and caregivers to make considerable progress are social scenarios, Makaton, and PECS [3]. In our research, we have focused on PECS since it is frequently used with children with autism.

Many studies have investigated the improvement of PECS using assistive technologies. According to [21], iCAN is a teaching-assistive tablet application that helps children with autism improve their communication skills. The program is based on PECS, a well-known communication training method for non-verbal children with ASD [7]. PECS is not only used for children with autism but can also be deployed for children with other disabilities. It begins with the child expressing what they need through a picture and promotes interaction. Using paper cards has several drawbacks, including the management and loss of cards, time consumption, and difficulty in preparing pictures. iCAN addresses these issues by integrating digital support, virtualization, voice recognition, and other features that simplify the use of PECS as a training tool to promote communication for children with autism. The program facilitates the creation of pictures and sentences and organizes the storage and selection of sentences using categories. In the experiments, eleven children with ASD and verbal communication deficits participated in a study to investigate the impact of iCAN. The analysis of the data collected during the sessions shows significant improvement in the children’s learning capabilities. Additionally, caregivers were satisfied, and the program reduced the effort required to manage cards and improved the training process.

In [28], a Personal Digital Assistant (PDA) was proposed as a technology that may replace PECS binders. The architecture of the proposed system is based on two components: a standalone communication device and an internet portal that administers and manages images. The PDA system was evaluated by comparing it to PECS binders and by assessing whether the communicated messages are easy to understand. The authors found no difference in message comprehension between the PDA and the PECS binders. However, the PDA was rated higher in quality, ease of use, and pleasantness compared to the paper-based PECS approach. The system needs further investigation, including user feedback and improvements to the user interface.

In [22], the researchers developed a software system that incorporates PECS. The system, called “PixTalk,” consists of two components: a smartphone application that runs on Windows Mobile smartphones and allows children to navigate and select pictures to express their needs, and a website that acts as an administration portal for managing and maintaining each child’s portfolio of pictures. The users of the website are teachers and caregivers. The application was developed following a user-centered design process and the PECS model, and it can operate in two different modes: operational and display. Three children and their teachers participated in evaluating PixTalk. They noticed some limitations, such as battery life and the small size of the images, but reported a preference for using PixTalk over paper cards.

Augmentative technology was the foundation of the work presented in [29]. The authors proposed a mobile application based on an augmentative technology method to help autistic children communicate their needs and emotions. The application addresses the issues that children with autism, their parents, and caregivers face when using a card-based approach for communication, and it can be used by both Arabic-speaking and non-Arabic-speaking children. The system includes four modules: a search engine module that retrieves additional pictures from the internet to extend the content, a data store module responsible for managing data in the database, a parser module that selects the accurate image according to the input data, and a multimedia loader that loads multimedia content. A similar mobile application, called “AutiSay,” was presented in [30] as a communication tool for children with autism who have difficulties with verbal communication. After a customized welcome page, the application’s screen presents three main features: life skills, activities, and communication. The life skills feature classifies daily living skills such as selecting clothes and can be extended with new skills beyond those the platform proposes. The communication feature groups feelings and needs; when the child taps an icon, a voice is generated to express the corresponding need or emotion. The activities feature allows parents and caregivers to include other activities that are not covered by life skills. The developers describe the application as low-cost, portable, and personalized. Similarly, “MyVoice” was proposed in [31] as an iOS application for supporting nonverbal Emirati children with autism. The application offers features for children as well as their parents and caregivers, including a large library of images that the child can use to express their needs or feelings; tapping a suitable picture generates a voice. Parents and caregivers, in turn, can manage the image library and receive alerts and notifications if the child formulates a sentence that indicates a negative emotion. Two therapists tested the application, gave positive feedback, and recommended the use of “MyVoice” in autism centers. In addition, an autistic child participated in a two-month evaluation of the application and quickly adapted to “MyVoice,” since he was already comfortable with the PECS approach.

Smartphone apps can be very helpful for those using PECS. For example, Abilipad is a program that adapts to people’s needs and helps them communicate effectively using images and symbols. Meanwhile, Avaz French and ChatAble French use predictive text and speech output features to help users communicate more efficiently. Go Speak Now and Niki Words are also useful assistive communication apps that use symbols and images to help individuals with speech and language difficulties communicate effectively.

Other picture-based solutions that allow people with communication challenges to express themselves include Comm’ Pictures and Emauti’Causes, both of which use symbols and images. Dis-moi! is a speech therapy app designed to improve language and communication skills, while Découvrons les émotions – PRO is a learning tool that helps individuals identify and understand various emotions. Où ai-je mal? is a medical app that helps users pinpoint the source of their discomfort. Finally, Predictable and Proloquo2go are AAC apps with simple and user-friendly interfaces.

These mobile applications offer a variety of features and functions that can assist individuals with autism and improve their quality of life. However, it is important to note that they have limitations. Some users may find them too complex or difficult to use, especially those with severe communication or learning difficulties. Additionally, the cost of these apps may make them inaccessible to some individuals who could benefit from their use.

All of these solutions have the potential to greatly improve communication and interaction for individuals with ASD who use PECS. With the development of innovative solutions such as augmented and virtual reality and mobile applications, the possibilities for enhancing communication and engagement are expanding rapidly. However, there are still challenges that need to be addressed in terms of customization, accessibility, and usability. On the other hand, IoT-based systems can incorporate real-time data from a variety of sensors and devices to provide a more responsive and adaptable communication platform. By integrating IoT and context-aware technology, we can create a more dynamic and interactive version of PECS that adapts to the specific needs and preferences of the individual user. This could include personalized prompts and reminders, real-time feedback on communication effectiveness, and the ability to track progress over time.

For example, in [42], the authors presented how context-aware technology can be utilized in a service-based system. By leveraging context-aware technology, service-based smart systems can be designed to better meet the needs of users. This could include adapting to changes in the user’s environment or behavior, or providing personalized recommendations based on user preferences.

Hence, such an approach can provide valuable insights to therapists and caregivers, allowing them to tailor their support and intervention strategies to better meet the individual’s needs. Overall, the integration of IoT and context-aware technology has the potential to significantly enhance the effectiveness and usability of PECS for individuals with autism.

2.3 PECS and IoT

The implementation of an IoT- and context-aware system for PECS in autism has the potential to revolutionize communication for individuals on the autism spectrum. By leveraging the power of connected devices and real-time data analysis, such a system can help autistic individuals better understand and interact with their environment, while also enabling caregivers and therapists to monitor and adjust their communication strategies in real time. As evidenced by previous reviews [40, 41], there is a clear lack of similar solutions currently available, making this an important area of innovation and research. We were able to find only one article that proposes the integration of contextual elements into the PECS system.

Existing solution

The authors of [23] presented a system that enhances and customizes PECS by including three key features: location, event detection, and on-demand request via WeChat. WeChat is a popular messaging app in China. For example, when the child is in a restaurant, the system uses location and event detection to suggest images appropriate for the situation, enabling the child to quickly select their preferred items. The system design revolves around four modules: adaptive content display, behavior tracking, location sensing, and event sensing. Additionally, a messaging module has been created for children’s tablets. The system contains 400 images in four categories.

Limitations

Throughout the analysis of [23], we identified some potential limitations and threats to the validity of the conducted research. These limitations can be summarized as follows:

• The manuscript lacks detail: the proposed solution is described in a single paragraph without further information.

• The system includes a message-sending module that is supposed to make PECS affordable for Chinese children in rural areas. The request can be sent to a child-specified individual in two modes: (1) as a direct WeChat message on a mobile phone, or (2) as a vibration signal if the individual wears a special bracelet. However, this module does not consider children with Low-Functioning Autism (LFA), who commonly use PECS for communication: they cannot write messages and will not benefit from this functionality. In fact, this module is not coherent with the PECS approach. Why present pictures if the child can communicate their needs directly through a message?

• According to the proposed solution, the system presents only the pictures related to a specific context and hides pictures considered irrelevant. However, this approach may hinder a child who has a need unrelated to the detected situation.

• The system has a collection of 400 images divided into four categories, which is a limited number of pictures and categories for the user.

• The authors did not identify any methodology or framework for building such a system.

• Only location and event sensing are integrated as contextual information, which is not sufficient for a context-aware application.

In the next section, we will address these limitations and propose the design of a new context-aware mobile application for PECS. First, we will determine the system requirements, and then we will explain our methodology for building the design.

3 System Requirements

In order to achieve the core objectives of the system, requirements were established for the proposed mobile application. These requirements outline the necessary abilities that the application must possess to accurately and efficiently detect situations and recommend appropriate pictures and categories.

The first requirement of the system is the ability to collect data from the smartphone’s embedded sensors, which offer a wide range of possibilities for the application. By utilizing these sensors, the application can provide various services such as environmental sensing, context awareness, and location tracking. Additionally, the system should be able to accurately detect situations without user intervention, increasing user convenience and satisfaction. Users should be able to trust the application to provide accurate information and recommendations without constant user input. The third requirement of the system is to provide highly personalized and relevant content that enhances user engagement and retention. This involves continuous improvement based on the user’s profile.

The fourth requirement ensures that the application is optimized for performance and stability, providing users with a smooth and reliable experience.

Finally, the fifth requirement ensures that the application focuses solely on recommending pictures and categories, streamlining the user experience and avoiding overwhelming the user with unnecessary content.

In Table 2, we summarize the mobile application requirements together with their respective motivations.

Table 2 System requirements

N | Requirement | Motivation
Req1 | The mobile application should collect data from the smartphone’s embedded sensors. | From our prior research, we have discovered that wearable devices may not be well tolerated by all autistic children. Additionally, we have determined that we can collect all necessary contextual information for our study using only the sensors embedded in smartphones. As a result, we will rely exclusively on smartphone sensors to gather the required data.
Req2 | The mobile application should accurately detect the user’s situation without any intervention required. | In our case, the user cannot intervene in the detection of situations. Moreover, the detected situations impact the recommendation of pictures. Therefore, this requirement is of high importance.
Req3 | The mobile application should continuously improve the content based on the user profile. | Autism is variable; the system should adapt and improve the content according to each user profile.
Req4 | The mobile application should not allow sharing of content between users. | Building on the motivation stated for Req3, the most accurate pictures can vary from one user to another, even if their situations are the same.
Req5 | The mobile application should only recommend pictures and categories of pictures. It should not support additional media content or other types of recommendations. | The system is dedicated to PECS.

In the next section, we will select a framework that best fulfills these requirements to design the proposed solution.

4 Methodology

To select the appropriate framework for building the system architecture, we analyzed multiple reviews and surveys that explore frameworks for building context-aware systems. We selected the survey in [39], which presents context-aware frameworks, based on the following considerations:

• It provides a comprehensive survey of existing frameworks, techniques, and applications for building context-aware systems.

• It provides valuable insights into the current trends and future directions in this field.

• It covers a wide range of topics, including the types of contextual information that can be used, the different approaches to modeling user preferences and behaviors, and the different techniques for making recommendations.

• It categorizes the frameworks based on their underlying approaches, such as rule-based, model-based, and hybrid approaches. This classification provides a clear understanding of the strengths and weaknesses of each approach and helps readers choose the most appropriate framework for their specific needs.

We compare these frameworks against the requirements stated in the previous section, as illustrated in Table 3 below, and then select the framework that best meets them.

Table 3 Comparison of context aware framework according to requirements

Framework | Description | Req1 | Req2 | Req3 | Req4 | Req5
Context-aware media recommendations for smart devices [5] | The article presents a framework for a context-aware media recommendation system for smart devices. The framework takes into account user preferences, location, and time of day to make personalized recommendations. The effectiveness of the framework is evaluated and demonstrated through a case study. | 1 | 1 | 1 | 1 | 1
Mobile platform for affective context-aware systems [35] | The article presents a mobile platform for developing affective context-aware systems. The framework combines sensors and machine learning techniques to capture user affective states and contextual information, and then uses this information to personalize user experiences. The effectiveness of the framework is evaluated and demonstrated through a case study. | 0 | 1 | 1 | 1 | 1
Personalized recommender system for resource sharing based on context-aware in ubiquitous environments [36] | The article presents a personalized recommender system for resource sharing in ubiquitous environments. The framework uses context awareness to take into account the user’s current situation and preferences to make personalized recommendations. The effectiveness of the framework is evaluated and demonstrated through a case study. | 1 | 1 | 1 | 0 | 0
A Context-Aware Mashup Recommender Based on Social Networks Data Mining and User Activities [37] | The article presents a context-aware mashup recommender system based on social network data mining and user activities. The framework combines social network data with user activity data to make personalized recommendations for mashup services. The effectiveness of the framework is evaluated and demonstrated through a case study. | 1 | 1 | 1 | 0 | 1
A proactive multi-type context-aware recommender system in the environment of Internet of Things [38] | The article presents a proactive multi-type context-aware recommender system for the Internet of Things environment. The framework takes into account multiple types of contextual information, including user location, weather conditions, and personal preferences, to make personalized recommendations. The effectiveness of the framework is evaluated and demonstrated through a case study. | 1 | 1 | 1 | 1 | 0

The comparison of these frameworks shows that Context-Aware Media Recommendations for smart devices (CAMR) [5] is the most suitable for our system. In the next section, we adopt this framework and build the architecture of the mobile application.

5 System Architecture

The proposed framework is composed of five components. In the following sections, we will describe each component that will form the architecture of the proposed mobile application.

5.1 Context Recognition

Context recognition is an approach that captures unfiltered contextual information from the sensors of a smartphone and converts it into data that can be used by mobile applications. Device sensors generate low-level data, which are not sufficient for mobile applications. The context recognition module collects this data and transforms it into usable data for mobile applications. As described in the following sections, the context recognition service relies on four key processes to achieve this data transformation.

The collection and preprocessing of sensor data

For the proposed mobile application, we plan to integrate mobile sensors that can collect data from three categories: motion, environment, and position. Table 4 describes each category with the corresponding sensor and the type of data that can be obtained from each sensor, as explained in [33].

Table 4 The collection of sensor data

Category | Sensors | Data
Motion | Accelerometer; Gyroscope | Acceleration; orientation and angular velocity of the device
Environment | Camera; Ambient Light Sensor (ALS); Proximity Sensor (PS); Temperature Sensor (TS); Humidity Sensor; Atmospheric Pressure Sensor (APS) | Pictures; luminance; distance between the sensor and an object; environmental temperature; humidity; atmospheric pressure
Position | GPS; Compass Sensor | Localization; compass heading

The next step after collecting sensor data is preprocessing, which involves removing outliers from the events. Low-pass and high-pass filters can be used to achieve this goal. Low-pass filters reduce the effect of high-frequency noise and preserve low-frequency signals, thereby improving the detection of gradual changes in the device’s orientation. High-pass filters, on the other hand, emphasize higher frequencies or transient components while de-emphasizing static or slowly changing sensory information; applying this filter enhances the detection of sudden movements.
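To make this preprocessing step concrete, the following Python sketch applies a simple exponential low-pass filter, and its high-pass complement, to raw accelerometer samples. The smoothing factor and the sample values are illustrative assumptions, not parameters prescribed by CAMR.

```python
# Minimal sketch of sensor preprocessing, assuming raw accelerometer samples
# arrive as (x, y, z) tuples. The smoothing factor alpha is illustrative.

def low_pass(samples, alpha=0.8):
    """Exponential low-pass filter: keeps slowly varying components
    (e.g., the gravity/orientation component of acceleration)."""
    filtered = []
    prev = samples[0]
    for s in samples:
        prev = tuple(alpha * p + (1 - alpha) * v for p, v in zip(prev, s))
        filtered.append(prev)
    return filtered

def high_pass(samples, alpha=0.8):
    """High-pass component: raw signal minus its low-pass estimate,
    which emphasizes sudden movements."""
    smooth = low_pass(samples, alpha)
    return [tuple(v - p for v, p in zip(s, f)) for s, f in zip(samples, smooth)]

raw = [(0.1, 0.0, 9.8), (0.3, -0.1, 9.7), (2.5, 0.4, 9.9), (0.2, 0.0, 9.8)]
gravity = low_pass(raw)   # slowly varying orientation component
motion = high_pass(raw)   # transient component (sudden movements)
```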

Segmentation and feature extraction

The purpose of this module is to gather sufficient data to describe the user’s activities in a dynamic context. CAMR recognizes activities using a time-window basis instead of a sampling rate basis. It utilizes simple, labeled statistical features such as range, maximum, minimum, mean, and standard deviation, which are effective in distinguishing between time windows. These features are extracted into feature vectors and are subsequently used in the context classification process.
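As an illustration of this segmentation step, the sketch below splits a filtered signal into fixed-size windows and computes the statistical features listed above. The window length of 100 samples (roughly two seconds at an assumed 50 Hz sampling rate) is an illustrative choice, not a value taken from CAMR.

```python
import statistics

def extract_features(signal, window_size=100):
    """Split a 1-D signal (e.g., filtered acceleration magnitude) into
    fixed-size windows and compute one feature vector per window:
    range, maximum, minimum, mean, and standard deviation."""
    vectors = []
    for start in range(0, len(signal) - window_size + 1, window_size):
        window = signal[start:start + window_size]
        vectors.append({
            "range": max(window) - min(window),
            "max": max(window),
            "min": min(window),
            "mean": statistics.mean(window),
            "std": statistics.pstdev(window),
        })
    return vectors
```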

Classification algorithm

Obtaining contextual knowledge from raw sensor data is crucial for generating relevant information for context-aware applications. To extract high-level context from the statistical features, supervised machine learning methods such as Support Vector Machines (SVM), Neural Networks (NN), Decision Trees, k-Nearest Neighbors (KNN), and BayesNet can be employed. These models are integrated into the context recognition process to predict the contexts and likely future actions of mobile phone users.
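As a minimal sketch, one of the mentioned classifiers (k-Nearest Neighbors, here via scikit-learn) could map feature vectors to context labels as follows. The feature values and activity labels are hypothetical training data for illustration, not results from our system.

```python
from sklearn.neighbors import KNeighborsClassifier

# Illustrative feature vectors (range, mean, std of acceleration magnitude per
# window) with hypothetical context labels; a real deployment would train on
# labeled sensor recordings.
X_train = [
    [0.2, 9.8, 0.05],   # device nearly still
    [0.3, 9.8, 0.08],
    [6.5, 10.4, 2.10],  # strong, irregular motion
    [7.1, 10.6, 2.40],
]
y_train = ["sitting", "sitting", "walking", "walking"]

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

new_window = [[5.9, 10.2, 1.95]]
print(clf.predict(new_window))  # e.g., ['walking']
```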

5.2 Context Inference

The context information collected from the previous step may not be semantically appropriate for use in recommendations. For instance, it is crucial to recognize the user’s activities based on time and location, but this information may not be helpful to the recommendation application without correlating it with other context information.

To address this issue, CAMR proposes the application of a knowledge-based model that uses an ontology on top of the context recognition and classification process to relate different atomic context information and obtain contextual information at a higher semantic level with respect to the user’s preferences. For example, if we know that the user is running outside (obtained from the motion and environment sensors), it is essential to link this information with the ambient conditions at the user’s location and conclude that the user may feel cold. The user’s feeling of cold can be inferred through the ontology-based knowledge inference process. Hence, by integrating this module, we can identify information in complex contexts and offer an appropriate, customized context-aware recommendation system.
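CAMR relies on an ontology for this inference step; as a simplified stand-in that conveys the same idea, the sketch below combines atomic context values with explicit rules to derive a higher-level context. The rules, thresholds, and labels are illustrative assumptions.

```python
# Simplified stand-in for the ontology-based inference step: explicit rules
# that combine atomic contexts (activity, location type, temperature) into a
# higher-level context. Thresholds and labels are illustrative assumptions.

def infer_high_level_context(activity, location_type, temperature_c):
    if activity == "running" and location_type == "outdoor" and temperature_c < 10:
        return "user_may_feel_cold"
    if location_type == "restaurant":
        return "meal_time"
    return "unspecified"

print(infer_high_level_context("running", "outdoor", 5))  # user_may_feel_cold
```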

5.3 Contextual User Profiling Service

In general, a user profile combines preferences depending on the history of the user’s behaviors. This module categorizes the user’s material consumptions into a small number of genres, each of which is defined by a variety of criteria. Moreover, it integrates contextual dimensions by connecting one or more inferred contexts to each category-genre-property concept.

The user profile is represented as a four-level tree, with the root of the tree representing the user’s optional demographic information. The first-level nodes correspond to the content category, the second level represents the content genre, and the third level contains the properties of a given category-genre. In our case, this level offers the metadata for the picture, describing the user preferences and the consumed items in more depth.

The leaf nodes provide information about the contexts where the user preferences have been observed. These leaf nodes have four fields: type, weight, intensity, and lifetime, whereas all other nodes have only the type field. In the leaves, the types represent the type of context. The weight represents the user’s preference for a category-genre-property, which is obtained by tracking the number of times the user has consumed an item that matches a given category-genre-property.

The newly introduced concepts in the user profile, the intensity and the lifetime, track the user’s contextual consumption history to enhance media personalization. The lifetime is the time (e.g., in hours, days, or months) since the target user last consumed an item of the category-genre-property. Using these weighted parameters, the system is able to dynamically determine the media content that is relevant to target users based on their contextual preferences.

Figure 2 illustrates the four-level tree structure of the contextual user profiling service.


Figure 2 Contextual user profiling service.
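The sketch below shows one possible in-memory representation of this four-level profile tree. The field names mirror the description above (type, weight, intensity, lifetime), while the example category, genre, property, and context values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContextLeaf:
    type: str          # type of context (e.g., "location:home")
    weight: float      # preference strength for this category-genre-property
    intensity: float   # how strongly this context shaped recent consumption
    lifetime_h: float  # hours since the item was last consumed in this context

@dataclass
class PropertyNode:
    type: str                                      # picture metadata (e.g., "snack")
    contexts: List[ContextLeaf] = field(default_factory=list)

@dataclass
class GenreNode:
    type: str                                      # e.g., "food"
    properties: List[PropertyNode] = field(default_factory=list)

@dataclass
class CategoryNode:
    type: str                                      # e.g., "needs"
    genres: List[GenreNode] = field(default_factory=list)

@dataclass
class UserProfile:
    demographics: dict                             # optional demographic info (root)
    categories: List[CategoryNode] = field(default_factory=list)

# Hypothetical profile: category "needs" -> genre "food" -> property "snack",
# observed in the context "location:home".
profile = UserProfile(
    demographics={"age": 7},
    categories=[CategoryNode("needs", [GenreNode("food", [
        PropertyNode("snack", [ContextLeaf("location:home", 0.8, 0.6, 12.0)])])])],
)
```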

5.4 Picture Recommendation Service

For this module, the framework recommends incorporating three algorithms for context-aware media recommendations: the content-based (CBF), collaborative-based (CF), and hybrid recommendation algorithms. However, in our case, CBF, CF, and even hybrid recommendations are not applicable. Recommendations based on what the user has already selected are not feasible, as their needs may vary widely. Additionally, we have specified in our requirements that users are independent, so the content of one user cannot influence another.

The more appropriate recommendation service in our case is knowledge-based [34]. This method incorporates information about users and items (pictures and categories) and then uses this information to make recommendations. Furthermore, this approach relates the user’s needs, preferences, and the available content to address special needs by relying on knowledge, without generalizing across users or content.

The identification of pictures to recommend to the user relies on the contextual user profile model. Thus, the system should integrate the contextual user profile into the proposed knowledge-based recommendation process. By associating user preferences with contextual information, the system remains useful even when contextual information is not available.
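A minimal sketch of such a knowledge-based recommendation step is given below: it scores each picture by matching its metadata against the inferred context and the weight stored in the contextual user profile, and falls back on profile weights alone when no context is available. The data layout, weights, and scoring rule are illustrative assumptions, not the CAMR algorithm itself.

```python
# Illustrative knowledge-based recommendation: each picture carries metadata
# (property and the contexts in which it is relevant); the score combines an
# explicit context match with the weight stored in the user profile for that
# category-genre-property. All values and the scoring rule are assumptions.

PICTURES = [
    {"id": "p1", "property": "snack",  "contexts": {"meal_time"}},
    {"id": "p2", "property": "juice",  "contexts": {"meal_time"}},
    {"id": "p3", "property": "jacket", "contexts": {"user_may_feel_cold"}},
]

PROFILE_WEIGHTS = {"snack": 0.8, "juice": 0.3, "jacket": 0.5}  # from the profile tree

def recommend(current_context, top_k=2):
    def score(picture):
        context_match = 1.0 if current_context in picture["contexts"] else 0.0
        preference = PROFILE_WEIGHTS.get(picture["property"], 0.1)
        return 0.6 * context_match + 0.4 * preference
    ranked = sorted(PICTURES, key=score, reverse=True)
    return [p["id"] for p in ranked[:top_k]]

print(recommend("meal_time"))           # ['p1', 'p2']
print(recommend("user_may_feel_cold"))  # ['p3', 'p1']
print(recommend(None))                  # falls back to profile weights only
```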

5.5 Picture Presentation Adaptation

The proposed architecture includes an additional layer that adapts media, especially audio and video, based on certain conditions such as battery power and network bandwidth. While this layer is optional, we have chosen not to implement it for our specific case. This is because the images we are working with are of the same size, which does not impact network bandwidth or battery consumption.

6 Conclusion and Future Work

This paper presents the design of a context-aware mobile application for PECS, aiming to improve the effectiveness of interventions. Our work involved reviewing a substantial body of research and existing mobile applications, analyzing design issues, and identifying key aspects to consider when designing mobile context-aware systems. We have also classified the observed trends in the literature and proposed a new design for a mobile application for PECS.

To the best of our knowledge, this is the first design for a context-aware mobile application for PECS. Our proposed design can serve as a starting point for further development and refinement of mobile context-aware systems for PECS. Collaboration between developers and designers is crucial to enhance the application, and user testing and feedback are necessary to ensure it is intuitive, effective, and meets the satisfaction of the users, their parents, and their caregivers. Such efforts have the potential to significantly enhance communication and quality of life for individuals with ASD.

Declarations

Ethical Approval

Not applicable

Funding

None

Conflicts of Interest/Competing Interests

None

Availability of Data and Material

Authors can confirm that all relevant data are included in the article and/or its supplementary information files.

Code Availability (Software Application or Custom Code)

Not applicable

References

[1] American Psychiatric Association (APA) (Ed.) Diagnostic and Statistical Manual of Mental Disorders, 5th ed.; American Psychiatric Association: Arlington, VA, USA, 2013.

[2] Organization for Autism Research, Inc. (OAR) Life Journey through Autism: Navigating the Special Education System. 2012. Available online: http://www.researchautism.org/resources/reading/documents/SPEDGuide.pdf (accessed on 16 November 2018).

[3] Europe 1, “Cinq méthodes pour aider les autistes,” [Online]. Available: https://www.europe1.fr/sante/Cinq-methodes-pour-aider-les-autistes-496156.

[4] American Speech-Language-Hearing Association, “Augmentative and Alternative Communication (AAC),” [Online]. Available: https://www.asha.org/njc/aac/. [Accessed 12 July 2022].

[5] Otebolaku, A.M., Andrade, M.T. Context-aware media recommendations for smart devices. J Ambient Intell Human Comput 6, 13–36 (2015). https://doi.org/10.1007/s12652-014-0234-y.

[6] Brumback, R.A., Harper, C.R. and Weinberg, W.A. (1996) ‘Non-Verbal Learning Disabilities, Asperger’s Syndrome, Pervasive Developmental Disorder: Should We Care?’, Journal of Child Neurology 11 (6): 427–9.

[7] Bondy, A. S., and Frost, L. A. (1994). The Picture Exchange Communication System. Focus on Autistic Behavior, 9(3), 1–19. https://doi.org/10.1177/108835769400900301.

[8] F. Fatimayin, “What is Communication?,” ResearchGate, 2018.

[9] J. Richler, M. Huerta, S. L. Bishop, and C. Lord, “Developmental Trajectories of Restricted and Repetitive Behaviors and Interests in Children with Autism Spectrum Disorders,” vol. 22, pp. 55–69, 2010.

[10] Hodges H, Fealko C, Soares N. Autism spectrum disorder: definition, epidemiology, causes, and clinical evaluation. Transl Pediatr. 2020 Feb;9(Suppl 1):S55-S65. doi: 10.21037/tp.2019.09.09. PMID: 32206584; PMCID: PMC7082249.

[11] National Institute of Mental Health, “Autism Spectrum Disorder,” [Online]. Available: https://www.nimh.nih.gov/health/topics/autism-spectrum-disorders-asd.

[12] Rechowicz KJ, Shull JB, Hascall MM, Diallo SY, O’Brien KJ. Internet-of-Things Devices in Support of the Development of Echoic Skills among Children with Autism Spectrum Disorder. Sensors (Basel). 2021 Jul 5;21(13):4621. doi: 10.3390/s21134621. PMID: 34283166; PMCID: PMC8272129.

[13] Xiaofeng Liu et al., “An interactive training system of motor learning by imitation and speech instructions for children with autism,” 2016 9th International Conference on Human System Interactions (HSI), Portsmouth, 2016, pp. 56–61, doi: 10.1109/HSI.2016.7529609.

[14] Yousif, Jabar and Yousif, Mohammed. (2020). Humanoid Robot as Assistant Tutor for Autistic Children. International Journal of Computation and Applied Sciences. 8. 8–13.

[15] Fatimayin, Foluke. (2018). What is Communication?

[16] N. I. Ishak, H. M. Yusof, S. N. Sidek and Z. Jaalan, “Interactive robotic platform for education and language skill rehabilitation,” 2017 IEEE 4th International Conference on Smart Instrumentation, Measurement and Application (ICSIMA), Putrajaya, Malaysia, 2017, pp. 1–5, doi: 10.1109/ICSIMA.2017.8312031.

[17] S. A. Farhan, M. N. Rahman Khan, M. R. Swaron, R. N. Saha Shukhon, M. M. Islam and M. A. Razzak, “Improvement of Verbal and Non-Verbal Communication Skills of Children with Autism Spectrum Disorder using Human Robot Interaction,” 2021 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA, 2021, pp. 0356–0359, doi: 10.1109/AIIoT52608.2021.9454193.

[18] X. Liu and W. Zhao, “Buddy: A Virtual Life Coaching System for Children and Adolescents with High Functioning Autism,” 2017 IEEE 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech), Orlando, FL, USA, 2017, pp. 293–298, doi: 10.1109/DASC-PICom-DataCom-CyberSciTec.2017.62.

[19] Herrero, J.F., Lorenzo, G. An immersive virtual reality educational intervention on people with autism spectrum disorders (ASD) for the development of communication skills and problem solving. Educ Inf Technol 25, 1689–1722 (2020). https://doi.org/10.1007/s10639-019-10050-0.

[20] Zhang, Bingchen, Wang, Yanqun, Yang, Yuling and Song, Lishu. (2021). ASD Children’s APP Emotional Interaction Design Based on Smart Toys of Internet of Things. Mobile Information Systems. 2021. 1–7. doi: 10.1155/2021/1342538.

[21] Chien, M.-E., Jheng, C.-M., Lin, N.-M., Tang, H.-H., Taele, P., Tseng, W.-S., Chen, M. Y. (2015). iCAN: A tablet-based pedagogical system for improving communication skills of children with autism. International Journal of Human-computer Studies, 73, 79–90. https://doi.org/10.1016/j.ijhcs.2014.06.001.

[22] De Leo, G., Gonzales, C. H., Battagiri, P., and Leroy, G. (2011). A smart-phone application and a companion website for the improvement of the communication skills of children with autism: Clinical rationale, technical development and preliminary results. Journal of Medical Systems, 35(4), 703–711. https://doi.org/10.1007/s10916-009-9407-1.

[23] Tang, Tiffany and Winoto, Pinata. (2018). An Interactive Picture Exchange Communication System (PECS) Embedded with Augmented Aids Enabled by IoT and Sensing Technologies for Chinese Individuals with Autism. 299–302. doi: 10.1145/3267305.3267629.

[24] Miglino, Orazio, Di Ferdinando, Andrea, Di Fuccio, Raffaele, Rega, Angelo and Ricci, Carlo. (2014). Bridging Digital and Physical Educational Games Using RFID/NFC Technologies. Journal of E-Learning and Knowledge Society. 3. 83–104. doi: 10.20368/1971-8829/959.

[25] L. F. Guerrero-Vásquez, J. F. Bravo-Torres and M. López-Nores, “AVATAR “autism: Virtual agents to augment relationships in children”,” 2017 IEEE XXIV International Conference on Electronics, Electrical Engineering and Computing (INTERCON), Cusco, Peru, 2017, pp. 1–4, doi: 10.1109/INTERCON.2017.8079705.

[26] Adrian Iftene, Diana Trandabăţ, Enhancing the Attractiveness of Learning through Augmented Reality, Procedia Computer Science, Volume 126, 2018, Pages 166–175, ISSN 1877-0509, https://doi.org/10.1016/j.procs.2018.07.220.

[27] Holt, S., and Yuill, N. (2017). Tablets for two: How dual tablets can facilitate other-awareness and communication in learning disabled children with autism. International Journal of Child-computer Interaction, 11, 72–82. https://doi.org/10.1016/j.ijcci.2016.10.005.

[28] Miller, T., Leroy, G., Huang, J., Chuang, S., Charlop-Christy, M. H., 2006. Using a digital library of images for communication: comparison of a card-based system to PDA software. In: Proceedings of the 1st International Conference on Design Science Research in Information Systems and Technology (DESRIST). Claremont Colleges Library, Claremont, CA, USA, pp. 454–460.

[29] M. S. A. El-Seoud, A. Karkar, J. M. Al Ja’am and O. H. Karam, “A pictorial mobile-based communication application for non-verbal people with autism,” 2014 International Conference on Interactive Collaborative Learning (ICL), 2014, pp. 529–534, doi: 10.1109/ICL.2014.7017828.

[30] Voon, N.H., Bazilah, S.N., Maidin, A., Jumaat, H., Ahmad, M.Z. (2015). AutiSay: A Mobile Communication Tool for Autistic Individuals. In: Phon-Amnuaisuk, S., Au, T. (eds) Computational Intelligence in Information Systems. Advances in Intelligent Systems and Computing, vol. 331. Springer, Cham. https://doi.org/10.1007/978-3-319-13153-5_34.

[31] Mohammad, H. and Abu-Amara, F. (2019). A Mobile Social and Communication Tool for Autism. International Journal of Emerging Technologies in Learning (iJET), 14(19), 159–167. Kassel, Germany: International Journal of Emerging Technology in Learning. Retrieved October 10, 2022, from https://www.learntechlib.org/p/217005/.

[32] Mohamed Abdel Hameed, M. Hassaballah, Mosa E. Hosney, Abdullah Alqahtani, “An AI-Enabled Internet of Things Based Autism Care System for Improving Cognitive Ability of Children with Autism Spectrum Disorders”, Computational Intelligence and Neuroscience, vol. 2022, Article ID 2247675, 12 pages, 2022. https://doi.org/10.1155/2022/2247675.

[33] Macias E, Suarez A, Lloret J. Mobile sensing systems. Sensors (Basel). 2013 Dec 16;13(12):17292–321. doi: 10.3390/s131217292. PMID: 24351637; PMCID: PMC3892889.

[34] Tarus, J.K., Niu, Z. and Mustafa, G. Knowledge-based recommendation: a review of ontology-based recommender systems for e-learning. Artif Intell Rev 50, 21–48 (2018).

[35] Nalepa, G.J., et al., Mobile platform for affective context-aware systems, Future Generation Computer Systems, Volume 92, 2019, Pages 490–503, ISSN 0167-739X, https://doi.org/10.1016/j.future.2018.02.033.

[36] J.-H. Park, W.-I. Park, Young-Kuk Kim and Ji-Hoon Kang, “Personalized recommender system for resource sharing based on context-aware in ubiquitous environments,” 2008 International Symposium on Information Technology, Kuala Lumpur, Malaysia, 2008, pp. 1–5, doi: 10.1109/ITSIM.2008.4631888.

[37] P. Suppa and E. Zimeo, “A Context-Aware Mashup Recommender Based on Social Networks Data Mining and User Activities,” 2016 IEEE International Conference on Smart Computing (SMARTCOMP), St. Louis, MO, USA, 2016, pp. 1–6, doi: 10.1109/SMARTCOMP.2016.7501672.

[38] Y. Salman, A. Abu-Issa, I. Tumar and Y. Hassouneh, “A Proactive Multi-type Context-Aware Recommender System in the Environment of Internet of Things,” 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, Liverpool, UK, 2015, pp. 351–355, doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.50.

[39] H. F. Abdulkarem, G. Y. Abozaid and M. I. Soliman, “Context-Aware Recommender System Frameworks, Techniques, and Applications: A Survey,” 2019 International Conference on Innovative Trends in Computer Engineering (ITCE), Aswan, Egypt, 2019, pp. 180–185, doi: 10.1109/ITCE.2019.8646564.

[40] El Arbaoui F.E.Z., El Hari K., Saidi R. A Survey on the Application of the Internet of Things in the Diagnosis of Autism Spectrum Disorder. In: Advanced Technologies for Humanity. ICATH 2021. Lecture Notes on Data Engineering and Communications Technologies, vol. 110. Springer, Cham. (2022).

[41] El Arbaoui F.E.Z., El Hari K., Saidi R. A Review on The Application of The Internet of Things in Monitoring Autism and Assisting Parents and Caregivers. In: Computational intelligence for medical internet of things MIoT application systems. Elsevier. (2022).

[42] Faieq, S., Saidi, R., El Ghazi, H., Front, A., and Rahmani, M. D. (2021). Building adaptive context-aware service-based smart systems. Service Oriented Computing and Applications, 15(1), 21–42.
