Immersive Mobile Telepresence Systems: A Systematic Literature Review

Lohan Rodrigues Narcizo Ferreira, Lidiane Teixeira Pereira, and Rodrigo Luis de Souza da Silva*

Computer Science Department, Federal University of Juiz de Fora, Brazil

E-mail: lohanext@gmail.com; lidianepereira@ice.ufjf.br; rodrigoluis@gmail.com

* Corresponding Author

Received 31 January 2019; Accepted 10 March 2020; Publication 30 April 2020

Abstract

Telepresence can be defined as a system that provides remote collaboration between people in different locations, creating the feeling that the users actually share the same environment. Advances in communication, media, and the internet have made the popularization of these systems possible. Smartphones have become increasingly powerful in processing, less expensive, and more widespread. A single device combines various sensors, one or more cameras, and an internet connection, making it potential hardware for telepresence applications. The main objective of this paper is to present a Systematic Literature Review to identify the main characteristics of immersive telepresence systems designed for the mobile environment and to analyze research opportunities that can be further exploited or optimized. This research revealed that the development of immersive telepresence systems for mobile devices has increased in recent years, but is not yet widespread.

Keywords: Mobile devices, telepresence systems.

1 Introduction

The advance of globalization has led to an increased demand for immersive telepresence solutions, as this technology allows geographically distant people to perform tasks collaboratively as if they were together in the same place. Its use can save time and money while reducing environmental damage by providing an alternative to traveling to attend in-person meetings. Mobile devices are one kind of hardware that can serve this purpose. Mobile devices, such as smartphones and tablets, are low-cost devices with high processing power and are already widespread, so it is desirable to have solutions easily accessible through them.

Telepresence was defined in 1983 by Akin et al. [1] as a highly realistic teleoperation system in which a teleoperator receives instructions from a remote human and performs actions, based on these instructions, at the location where he is. Telepresence systems therefore have the potential to facilitate communication between geographically distant individuals. They can be useful in business, education, and medicine, for example, enabling laypeople to perform complex tasks while being instructed by experts. Regarding mobile devices, the communication that already exists on this platform can become more immersive with telepresence, improving the user experience.

The study presented in this paper is organized as a Systematic Literature Review (SLR). An SLR is a way to summarize and evaluate relevant works in a specific topic or area [2]. The objective of this SLR is to identify the main characteristics of immersive telepresence systems designed for the mobile environment and to find research opportunities that can be further exploited or optimized.

This paper is organized as follows. In Section 2 we discuss telepresence systems, presenting basic definitions, telepresence applications, and the feasibility of using mobile devices as the basis of these systems. In Section 3 we describe the conduct of the Systematic Literature Review, presenting the research questions, search string extraction, results selection, and the analysis of the selected results. Section 4 addresses the major similarities between the evaluated systems, their strengths, and possible improvements; aspects not covered by the evaluated telepresence systems that may be of interest for future work are also discussed. Finally, Section 5 presents the conclusions of this paper.

2 Telepresence Systems

In a NASA Contractor Report, Akin et al. [1] define telepresence as a highly realistic teleoperation system in which a teleoperator receives instructions from a remote operator and performs actions, based on these instructions, at the location where he is. The manipulators are the people responsible for performing the task in person, on site. The operators are the people responsible for passing instructions remotely. According to the definition given by Akin et al., in a telepresence system the operators should receive sensory information about the manipulators in sufficient quantity and quality to have the feeling of being on site. This way, they can perform tasks remotely as they would in person. It is also necessary that the sense of presence is maximized so that the differences between the telepresence system and reality do not interfere with the execution of tasks. The manipulators must have enough dexterity to allow the operators to exercise human functions remotely.

Some applications of telepresence systems are identified in [3]. In equipment maintenance and installation, the manipulator could follow the instructions of the operator, who has the technical knowledge, without requiring the operator to travel to the location; the instructions can be passed more clearly and quickly than by reading technical documents. Telemedicine is also an important telepresence application area. Through telepresence systems, it would be possible to provide expert medical care in remote locations where in-person service is not possible, such as rural surgeries, hospital operating theatres, or scenes of accidents. Telepresence can also decrease the cost of medical treatment, because neither the doctor nor the patient would need to travel.

As mentioned in [4], there are applications of telepresence systems in education and in medical training, where a high level of similarity to actual surgery is necessary. There are also applications in the business world, where meetings can be held with the help of telepresence, and in research, where meetings or experiments could be carried out between groups of geographically dispersed scientists.

Regarding mobile devices, advances in technology have made possible the popularization of smartphones and tablets that can support telepresence systems. However, many of the features currently available on these devices cannot provide the feeling of being present. Zhang [5] lists three features required for a good telepresence system: the user must be able to view the entire environment and the people present in it; must clearly know who is talking in the environment; and must have access to the data shown in loco in the environment. With the aid of virtual reality and the sensors and processing power of current devices, it is possible to create a low-cost telepresence system using mobile devices that meets these criteria.

3 Systematic Literature Review

A Systematic Literature Review is a way to identify, summarize, and evaluate relevant works in a given area [2]. To conduct the review, a protocol with guidelines for the review is defined. This step is necessary so that the same review can be reproduced in an unbiased way using the protocol [6]. The elaboration of this protocol started with the planning of the review, which is presented below.

3.1 Planning the Systematic Review

In this work, the planning of the proposed SLR was guided by the suggestions presented in [7] and has the following steps:

  1. Definition of the research questions of this SLR.
  2. Extraction of the keywords from research questions.
  3. Definition of the search strings with the keywords.
  4. Definition of the search bases where the search strings were used.
  5. Download of the results returned from the bases.
  6. Definition of inclusion and exclusion criteria.
  7. Selection of works after title and abstract analysis.
  8. Selection of works after additional reading of the introduction and conclusion.
  9. Extraction of the answers for the research questions.

3.2 Research Questions

In this SLR we want to answer two questions:

Q1: What are the characteristics of immersive solutions for telepresence systems using smartphones and tablets?

Q2: What techniques have been used to create immersive solutions for telepresence systems using smartphones and tablets?

3.3 Extraction of Keywords and Search Strings Definition

The keywords immersive, telepresence, and mobile were extracted from the research questions. The search strings were built by joining keywords with the AND operator and synonyms with the OR operator. The final search string was: immersive AND mobile AND (telepresence OR teleconferencing OR teleimmersive). This string was used to search the following bases: Scopus, IEEE Xplore, Science Direct, Springer, Web of Science, ACM and Compendex.
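As a minimal sketch of this composition rule (not tooling used by the authors), the string can be produced by joining each keyword's synonyms with OR and then joining the resulting groups with AND:

```python
# Minimal sketch: composing the boolean search string from the
# extracted keywords and their synonyms.

keyword_groups = [
    ["immersive"],
    ["mobile"],
    ["telepresence", "teleconferencing", "teleimmersive"],  # synonyms
]

def build_search_string(groups):
    """Join synonyms with OR, parenthesize, then join groups with AND."""
    parts = []
    for group in groups:
        clause = " OR ".join(group)
        parts.append(f"({clause})" if len(group) > 1 else clause)
    return " AND ".join(parts)

print(build_search_string(keyword_groups))
# immersive AND mobile AND (telepresence OR teleconferencing OR teleimmersive)
```

In practice each base has its own query syntax, so the string usually needs small per-base adaptations.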

Table 1 Number of returned articles for each search base

Base             With Duplicates   After First Removal   After Second Removal
Scopus           37                36                    34
IEEE Xplore      67                67                    53
Science Direct   135               130                   129
Springer         115               105                   103
Web of Science   21                21                    2
ACM              16                15                    11
Compendex        59                58                    18

The searches were conducted in the first half of May 2016, and an update was conducted in the second half of February 2019. An initial removal of duplicate items was performed with the aid of the JabRef tool [8]. However, some articles were not marked as duplicates, so a second removal was made manually using a spreadsheet editor. Table 1 shows the number of items returned by each base before and after removing duplicate items.

Book chapters were not considered among Springer's results. The Web of Science results include only papers from the Web of Science – Main Collection (Thomson Reuters Scientific).

3.4 Results Selection

For the results selection, the following exclusion criteria were applied:

  1. Does not present an immersive telepresence system.
  2. Presents an immersive telepresence system, but does not use mobile devices.

The selected articles should also meet the following inclusion criterion:

  1. Presents an immersive telepresence system using mobile devices as part of the system.

After the removal of duplicates, we began to select results. In an initial phase, their titles and abstracts were read and they were classified according to the inclusion and exclusion criteria.

When no exclusion criterion applied, the result was marked as selected. When one or more exclusion criteria applied, the result was classified as excluded. Table 2 shows how many articles were selected and excluded in each search base.

A second phase was conducted considering titles, abstracts, introductions and conclusions of the previously selected articles. As in the first phase, results were classified as selected or excluded. The result is shown in Table 3.

Table 2 Number of selected and excluded articles considering titles and abstracts

Base             Selected   Excluded
Scopus           26         8
IEEE Xplore      32         21
Science Direct   13         116
Springer         17         86
Web of Science   1          1
ACM              8          3
Compendex        5          13

Table 3 Number of selected and excluded articles from the previous phase considering introductions and conclusions

Base             Selected   Excluded
Scopus           10         16
IEEE Xplore      4          28
Science Direct   1          12
Springer         3          14
Web of Science   0          1
ACM              4          4
Compendex        1          4

For the third phase, four quality criteria were designed. For each criterion the following scale was applied: Yes (Y) = 1 point; No (N) = 0 points; Partially (P) = 0.5 points. The maximum value was 4 points and 2.5 was the passing score. The quality criteria (C1–C4) are as follows.

  1. There is a usable prototype.
  2. The system was evaluated.
  3. Techniques used were clearly reported.
  4. A smartphone or tablet is part of the system.

The scores were assigned after a full reading of the articles, and the articles that reached the passing score are presented in Table 4.
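The following minimal sketch (not the authors' tooling) applies the scale above to a list of Y/N/P marks, one per quality criterion:

```python
# Minimal sketch: third-phase quality scoring.
# Y = 1 point, P = 0.5 points, N = 0 points; passing score is 2.5 of 4.

SCALE = {"Y": 1.0, "P": 0.5, "N": 0.0}
PASSING_SCORE = 2.5

def quality_score(marks):
    """marks: one Y/N/P mark per quality criterion (four in this SLR)."""
    return sum(SCALE[m] for m in marks)

# Example matching [A1] in Table 4: prototype (Y), evaluation (N),
# techniques reported (P), mobile device in the system (Y).
marks = ["Y", "N", "P", "Y"]
score = quality_score(marks)
print(score, "selected" if score >= PASSING_SCORE else "excluded")
# 2.5 selected
```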

3.5 Analysis of Results

After the selection of articles by the quality criteria, this section presents the answers to the research questions of this Systematic Literature Review. The main elements we wanted to find in this evaluation were the techniques and characteristics of the telepresence systems, with emphasis on how mobile devices were used and what their main role was in each solution.

Table 4 Scores of the articles analyzed in the third phase (quality criteria C1–C4)

Article   C1    C2    C3    C4    Total
[A1]      1     0     0.5   1     2.5
[A2]      1     0.5   0.5   1     3
[A3]      1     0.5   0.5   0.5   2.5
[A4]      1     0     1     1     3
[A5]      1     1     1     1     4
[A6]      1     1     0.5   1     3.5
[A7]      1     0     0.5   1     2.5
[A8]      1     0     1     0.5   2.5
[A9]      1     1     0.5   0.5   3
[A10]     1     0.5   1     1     3.5
[A11]     1     1     1     0.5   3.5
[A12]     1     1     1     1     4

Q1: What are the characteristics of immersive solutions for telepresence systems using smartphones and tablets?

De Greef et al. present in [A1] a video conference system designed for smartphones and based on common video chat systems. Extra resources were included: background replacement for the local user, use of Google Maps and Street View to give better location information about the remote user, and the building of a virtual scrapbook.

The study [A2] shows a system in which the local user controls a remote movable avatar by moving a tablet. The local user's image is displayed on top of the avatar. An acoustic zoom was added, increasing the sound clarity of a selected source.

Takács presents in [A4] a system where the local user receives a 360° stream and controls the point of view with the smartphone's keyboard. There is a special component named Clickable Content, with which the local user can interact with specific regions of the video to display related content.

In [A3] the author added to the system shown in [A4] a feature to obtain facial feedback from the users of the system in order to increase interactions in a shared virtual space.

Muller et al. introduce in [A5] a system in which the remote user transmits images from his smartphone's camera to the local user, and the local user gets a panoramic view of the remote user's location by rotating his own smartphone. Both points of view are highlighted in the video to inform each user about the other's view.

Khan et al. present in [A6] a system where a local user controls the position of a remote tablet in a conference room by moving his head. Images are captured by the tablet's front camera and the tablet's screen displays the local user's face.

In [A7] Xia shows a system in which the user is inserted into a virtual environment and interacts with virtual objects. Two smartphones are used: one works like a joystick and the other captures the position of the user's head. The head orientation is used to determine what is shown on the user's HMD (Head-Mounted Display).

Jang-Jaccard presents in [A8] a web-based teleconference system that allows sick patients to establish conferences with their care coordinators. The patient accesses the system from a tablet.

Kratz et al. in [A9] introduce a telepresence system based on Mobile Telepresence Robots (MTR) that allows visualizing the remote environment in an immersive way using an HMD. The system allows three types of viewing: 2D, from the image of a single fixed camera displayed on a screen; monocular vision through an HMD, with an increased field of view through control of the camera movement; and stereo vision through an HMD, also with control of the camera movement.

The work [A10] presents a system fully based on smartphone/tablet rotation to control a four-wheeled robot with a camera that streams, in real time, the captured images to the controller device.

The system presented in [A11] is a mobile application where users can interact with environments transmitted by other users, using the device's sensors and other features such as reaction buttons.

Finally, [A12] presents a tourism system that gathers information from various sources and generates a navigable virtual environment of places around the world, with extra information such as explanatory texts and weather.

Q2: What techniques have been used to create immersive solutions for telepresence systems using smartphones and tablets?

The system presented in [A1] requires a smartphone for the remote user; the local user can join the video chat from a desktop. A common video conference program was used as a base, and Google's Maps and Street View APIs were used. These were the only system construction details presented.

The study [A2] shows a system where the local user needs a tablet to control a remote robot. Tablet movements are captured by its own sensors and replicated by the remote robot: the local user moves the robot by tilting the tablet back and forth, and turns the robot by turning the tablet right and left. The acoustic zooming feature reduces noise by zooming into the acoustic information of interest and is automatically controlled by the system when the user moves the avatar.
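As an illustration of this kind of tilt-to-motion mapping (a sketch only, not [A2]'s implementation; the gains and dead zone are made-up values), device pitch and yaw can be converted into velocity commands for the remote robot:

```python
# Illustrative sketch: mapping device pitch (tilt forward/back) and yaw
# (turn left/right) to robot motion commands. Angles are in radians;
# gains and dead zone are hypothetical values, not from [A2].

import math

DEAD_ZONE = math.radians(5)   # ignore small unintended tilts
LINEAR_GAIN = 0.5             # m/s per radian of pitch
ANGULAR_GAIN = 1.0            # rad/s per radian of yaw

def tilt_to_command(pitch, yaw):
    """Return (linear_velocity, angular_velocity) for the remote robot."""
    linear = -LINEAR_GAIN * pitch if abs(pitch) > DEAD_ZONE else 0.0
    angular = -ANGULAR_GAIN * yaw if abs(yaw) > DEAD_ZONE else 0.0
    return linear, angular

# Tablet tilted 15 degrees forward and turned 10 degrees to the right:
print(tilt_to_command(math.radians(-15), math.radians(10)))
```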

The study presented in [A4] uses specific hardware to capture images in 360°. It consists of six cameras fixed on a base that is placed at the remote location. Images are captured and sent to a server, where a spherical video with a virtual camera is generated. The server distributes to each user only what they should currently see instead of the entire scene, which reduces the computational load on the receiver side. A local user can control the view from a smartphone by controlling the virtual camera. The Clickable Content feature was implemented using real-time image processing with OpenCV. As a continuation and improvement of the project [A4], the study presented in [A3] uses the same structure.
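The paper does not detail the Clickable Content implementation beyond its use of OpenCV, but the core interaction can be sketched as hit-testing a tap against per-frame bounding boxes that an image-processing stage would produce (all names below are hypothetical):

```python
# Illustrative sketch: hit-testing a user's tap against bounding boxes
# of clickable regions detected for the current video frame.

# Hypothetical detector output: region name -> (x, y, width, height).
regions = {
    "painting": (120, 80, 200, 150),
    "statue": (400, 60, 90, 220),
}

def hit_test(tap_x, tap_y, regions):
    """Return the name of the tapped region, or None if no hit."""
    for name, (x, y, w, h) in regions.items():
        if x <= tap_x <= x + w and y <= tap_y <= y + h:
            return name
    return None

print(hit_test(150, 100, regions))  # painting
print(hit_test(10, 10, regions))    # None
```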

Muller et al. present in [A5] a system that needs two smartphones connected through a mobile network, over which the images captured by the cameras are transmitted; the OpenGL ES library is used in the process of assembling the panoramic image, for optimization purposes.

Khan et al. in [A6] show a system where a wearable used by the local user captures the user's head orientation through a head-mounted inertial measurement unit (IMU) and transmits this data over the internet to a base that holds a tablet and is controlled by servos. Head movements are reproduced by this base, and the local user's face is displayed on the tablet while the tablet's front camera captures images of the remote place.

In the system presented in [A7] the user interacts with a virtual world through two smartphones and one HMD. One smartphone is fixed to one hand and works like a joystick. The other is fixed to the back of the user's head and captures head movements to determine the field of view shown in the HMD.

Jang-Jaccard shows in [A8] a system that requires a tablet, from which the local user accesses a web-based system created with the WebRTC framework. The local user sees the image of his care coordinator on the tablet's screen. The tablet's front camera captures the local user's face, and this image is shown on the screen of the care coordinator's desktop or tablet.

In the system presented by Kratz et al. in [A9], the robot is a modified Pioneer P3-DX. A 1.20-meter aluminum bar was fixed at the top of the robot, carrying two cameras, the servo motor, and the Arduino that controls the pan/tilt movement. The cameras used were two GoPro Hero 4 units, and the HMD used for viewing was the Oculus Rift DK2. Also attached to the bar was a tablet responsible for 2D image capture and for capturing and playing audio in all types of visualization.

The system in [A10] controls the robot's rotation using the device's gyroscope and magnetic compass sensors, which map the device's rotation with respect to the center of the user's body; the robot can also be controlled with touch-based controls. To increase immersion, the on-robot camera sends low-resolution images, decreasing transmission latency.

The application in [A11] uses the Google Cardboard SDK for rotational tracking and WebRTC to create a multi-user system. A user can upload spherical images of his or her location and let other users interact with them using the device's rotation sensors, audio-video communication, drawing, and reaction buttons.

The tourism system in [A12] gathers data in two ways: manually (books, Excel sheets, forms) or semi-automatically, with the use of a mobile application. To store the collected information, a non-relational database (MongoDB) is used because it allows storing information without a preset data structure. The virtual reality app was developed with Unity and uses the collected data to create text windows, images, 360° videos, and ambient effects, with the behavior of the virtual environment obeying the real-time weather conditions of the user's place of interest.
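The flexible storage the authors describe is the standard document-database pattern: records from different sources can carry different fields in the same collection. A minimal sketch with pymongo follows (collection and field names are hypothetical, not from [A12]):

```python
# Illustrative sketch: documents with different shapes sharing one
# MongoDB collection, which suits data gathered from heterogeneous
# sources. Collection and field names are hypothetical.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
places = client["tourism"]["places"]

# A manually transcribed entry (e.g. from a book)...
places.insert_one({
    "name": "Crater Lake",
    "description": "Volcanic crater lake.",
    "source": "book",
})

# ...and a richer entry from the mobile collection app, with fields the
# first document does not have. No schema migration is needed.
places.insert_one({
    "name": "Old Town",
    "media": {"photos": 12, "videos360": 2},
    "weather_station_id": "station-42",
    "source": "mobile_app",
})
```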

4 Discussion

In this study we evaluated telepresence solutions that use mobile devices. We highlight here some aspects of these devices that were exploited well and others that were not exploited.

In [A2], [A5], and [A7] the devices' own sensors are used: in [A2] a tablet sets the orientation of the remote avatar; in [A5] the smartphone's orientation drives the view of the panorama of the remote user's location; and in [A7] two phones capture the head and hand positions of the user operating the system. In [A2] the front camera of the tablet was also exploited to capture the face of the local user, and the tablet screen displays the images captured by the remote avatar. In [A5] the smartphone's sensors and back camera were used. The system is completely based on mobile devices; they are the only hardware needed for its operation, which is a positive point. However, the system was built with a focus on transmitting scenes where the two contacts are not visible, as a result of using the smartphone's back camera. In [A7] only the sensors of the two phones are used; a Kinect sensor and an HMD were also necessary for the system to work. Mobile devices are therefore only components of that system, not its main part.

In [A10] and [A11] the device's orientation sensors are also used, and no other mobile device is strictly needed for the system to work. In [A11], however, the idea of a multi-connection system, like a social network, creates the need for more than one device to fully operate it; a positive point is that it is also completely based on mobile devices, as in [A5]. In [A10], a robot is needed to provide the telepresence feature, which may make the solution expensive.

Mobile devices can also be seen as one small component of the system in the works presented in [A1], [A3], [A6], [A8], [A9], and [A12]. In [A3], the smartphone is used only as a screen to watch the transmission, and the visualization system is Flash-based, a platform being discontinued on current mobile devices; the sensors and cameras of the device are not used. In [A6] the front camera and the screen of a tablet located in the remote environment are used, but the data that governs the orientation of this tablet is captured by hardware created exclusively for the application, which must be used by the local user. In [A1] the back camera and the screen of the remote user's smartphone are used, but none of the device's sensors. In [A8] only the front camera and the screen of the tablet are used, similar to the system presented in [A1]. In [A9] the tablet screen is used to display the image of the remote user and the tablet's microphone is used to capture the audio from the remote environment; neither the cameras nor the sensors of the tablet were used. In [A12] a smartphone is one of the available options that can be integrated into the system in order to navigate through the virtual environment, and nothing else of it is used except for data collection. Figure 1 summarizes the use of sensors in the selected articles.


Figure 1 Use of mobile devices' sensors.

With the data presented, we can see that there are few systems centralized on mobile devices in which all of their available resources, such as sensors, cameras, screen, and possible software features, are used. The development of such applications tends to be facilitated by hardware upgrades and the inclusion of new resources on these devices, and should be regarded as promising.

Another aspect to be considered is that only a few of the selected articles had their systems evaluated. It would be interesting to have a better-guided evaluation that also considered the opinion of users about the utility, immersion, and advantages of the systems.

5 Final Considerations

In this paper, we presented an SLR to identify techniques and characteristics of telepresence systems for mobile devices such as smartphones and tablets. Through this review, it was possible to identify the main uses of mobile devices in this scenario and the most important resources that can be exploited in these systems.

Regarding the characteristics of immersive telepresence systems, the review revealed that mobile devices are used for the most varied functions, ranging from the simple use of the device's screen to display remote images to remote robot control using the device's motion sensors. Through the analysis of the techniques explored in Q2, it was possible to identify that in most cases (75%) motion sensors and cameras were used. One of the papers used an HMD instead of a mobile device to capture the user's head orientation; with the falling cost of these devices, this can indicate a tendency toward the use of HMDs, due to their higher quality (lenses and screens, for example) and higher processing capacity.

This research revealed that the development of immersive telepresence systems for mobile devices has increased in recent years, but is not yet widespread. In some cases in the literature, systems that use some kind of robot or unique hardware that cannot be easily acquired are considered immersive mobile systems. The development of systems in which the only hardware requirement is a smartphone or tablet is interesting because it reaches a larger number of users, since most people currently have at least one of these devices and they are, in general, low-cost hardware.

As future work, this paper suggests the study of telepresence systems with a focus on the multi-user experience, exploring the use of 360° camera streaming for multiple users on heterogeneous devices, ranging from mobile devices inside low-cost virtual reality headsets to high-end head-mounted displays.

References

[1] D. L. Akin et al. Space Applications of Automation, Robotics and Machine Intelligence Systems (ARAMIS) – Phase II. Volume 3, NASA Contractor Report. National Aeronautics and Space Administration, Scientific and Technical Information Branch, 1983.

[2] B. Kitchenham and S. Charters. Guidelines for performing systematic literature reviews in software engineering, 2007.

[3] P. Cochrane, D. J. T. Heatley, and K. H. Cameron. Telepresence-visual telecommunications into the next century. In Telecommunications, 1993. Fourth IEE Conference on, pages 175–180, Apr. 1993.

[4] S. Glasenhardt, M. Cicin-Sain, and Z. Capko. Tele-immersion as a positive alternative of the future. In Information Technology Interfaces, 2003. ITI 2003. Proceedings of the 25th International Conference on, pages 243–248, June 2003.

[5] Z. Zhang. Immersive telepresence: Transcending space and time. In Ubiquitous Virtual Reality (ISUVR), 2012 International Symposium on, pages 6–9. IEEE, 2012.

[6] M. Usman, E. Mendes, F. Weidt, and R. Britto. Effort estimation in agile software development: A systematic literature review. In Proceedings of the 10th International Conference on Predictive Models in Software Engineering, PROMISE ’14, pages 82–91, New York, NY, USA, 2014. ACM.

[7] F. Neiva and R.L.S. Silva. Systematic literature review in computer science – a practical guide. Technical Report 1, Federal University of Juiz de Fora, November 2016.

[8] JabRef. https://www.jabref.org/. Accessed: 2016-09-30.

Appendix

[A1] L. De Greef, M. Morris and K. Inkpen (2016), TeleTourist: Immersive telepresence tourism for mobility-restricted participants, vol. 26-February-2016, pp. 273–276, doi: 10.1145/2818052.2869082.

[A2] M. Izumi, T. Kikuno, Y. Tokuda, A. Hiyama, T. Miura and M. Hirose (2014), Practical use of a remote movable avatar robot with an immersive interface for seniors, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8515 LNCS, pp. 648–659.

[A3] B. Takács (2011), Immersive interactive reality: Internet-based on-demand VR for cultural presentation, Virtual Reality, vol. 15, pp. 267–278, doi: 10.1007/s10055-010-0157-7.

[A4] B. Takács (2007), PanoMOBI: Panoramic Mobile entertainment system, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 4740 LNCS, pp. 219–224.

[A5] J. Muller, T. Langlotz and H. Regenbrecht (2016), PanoVC: Pervasive telepresence using mobile phones, in 2016 IEEE International Conference on Pervasive Computing and Communications (PerCom), pp. 1–10, doi: 10.1109/PERCOM.2016.7456508.

[A6] M. S. L. Khan, S. ur Réhman, P. L. Hera, F. Liu and H. Li (2014), A pilot user’s prospective in mobile robotic telepresence system, in Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific, pp. 1–4, doi: 10.1109/APSIPA.2014.7041635.

[A7] P. Xia, K. Nahrstedt and M. A. Jurik (2012), TEEVE-Remote: A Novel User-Interaction Solution for 3D Tele-immersive System, in Multimedia (ISM), 2012 IEEE International Symposium on, pp. 378–379, doi: 10.1109/ISM.2012.77.

[A8] J. Jang-Jaccard, S. Nepal, B. Celler and B. Yan (2016), WebRTC-based Video Conferencing Service for Telehealth, Computing, vol. 98, pp. 169–193, ISSN 0010-485X, doi: 10.1007/s00607-014-0429-2.

[A9] S. Kratz and F. R. Ferriera (2016), Immersed remotely: Evaluating the use of Head Mounted Devices for remote collaboration in robotic telepresence, in Robot and Human Interactive Communication (ROMAN), 2016 25th IEEE International Symposium on, pp. 638–645, IEEE.

[A10] J. Ahn and G. J. Kim (2018), SPRinT: A Mixed Approach to a Hand-Held Robot Interface for Telepresence, International Journal of Social Robotics, pp. 1–16.

[A11] B. Ryskeldiev, M. Cohen and J. Herder (2017), Applying rotational tracking and photospherical imagery to immersive mobile telepresence and live video streaming groupware, in SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications, p. 5, ACM.

[A12] A. F. Acosta, W. X. Quevedo, V. H. Andaluz, C. Gallardo, J. Santana, J. C. Castro, M. Quisimalin and V. H. Córdova (2017), Tourism Marketing Through Virtual Environment Experience, in Proceedings of the 2017 9th International Conference on Education Technology and Computers, ICETC 2017, pp. 262–267, ACM, New York, NY, USA, ISBN 978-1-4503-5435-6, doi: 10.1145/3175536.3176651, URL http://doi.acm.org/10.1145/3175536.3176651.

Biographies


Lohan Rodrigues Narcizo Ferreira has been an M.Sc. student at the University of São Paulo since March 2020. He attended the Federal University of Juiz de Fora, Brazil, where he received his B.Sc. in 2019. Lohan has worked with telepresence and multimedia systems for studying and learning. Between 2019 and 2020 he collaborated with RNP (the Brazilian network for higher education, research and innovation) on the GT-SADI project, focused on the development of a system for network monitoring and diagnosis. In his Master's, Lohan is working with Item Response Theory and Computerized Adaptive Testing for application in distance education.


Lidiane Teixeira Pereira is an M.Sc. student in Computer Science at the Federal University of Juiz de Fora, the same institution where she received her B.Sc. in Computer Science. During her undergraduate studies, she was a student researcher on Virtual and Augmented Reality projects.


Rodrigo Luis de Souza da Silva is an Associate Professor in the Department of Computer Science at Federal University of Juiz de Fora. He has a B.S. in Computer Science from the Catholic University of Petropolis (1999), M.S. in Computer Science from Federal University of Rio de Janeiro (2002), Ph.D. in Civil Engineering from Federal University of Rio de Janeiro (2006) and a postdoc in Computer Science from the National Laboratory for Scientific Computing (2008). His main research interests are Augmented Reality, Virtual Reality, Scientific Visualization and Computer Graphics.
