See also: Student projects and Previous projects
Immersive Dental Training through Virtual Reality
Training invasive medical procedures at the pre-clinical stage is a major educational challenge. The traditional approach, based on animals and cadavers, presents ethical and practical problems. A promising alternative is virtual reality (VR), which enables low-cost, realistic simulations with real-time feedback. This motivated the development of the VIDA Odonto environment, which combines advanced VR technologies, realistic patient modeling, and immersive three-dimensional interaction. A prototype of the system's virtual environment module was tested and evaluated both by professionals with clinical experience and by beginners, demonstrating its potential to become a relevant educational resource in dentistry courses. The VIDA Odonto project is being developed at Interlab / USP, with FAPESP funding and collaboration from LApIS (EACH / USP) and LaSIT (FOB / USP).
Keywords: 3D interaction, Virtual Reality, Dental training.
Romero Tori, Gustavo Ziyu Wang, Lucas Henna Sallaberry, Allan Amaral Tori, Elen Collaço de Oliveira and Maria Ap. de A. M. Machado.
Project poster: Vida Odonto.
Realistic Haptic Interaction in Virtual Training System for Application of Dental Anesthesia
Virtual Reality has made important contributions to healthcare, mainly in training: it supports the acquisition of knowledge and skills for performing certain procedures while minimizing or eliminating risks to patients. However, developing a VR-based computational system is a complex task, especially regarding the realism of a virtual training session and the devices adopted for tactile simulation. One procedure not yet simulated is the application of local anesthesia for dental treatment. Training for this procedure has a high failure rate, increasing both the risk to patients and the insecurity of trainees. The literature describes a number of VR simulators for anesthesia in other areas of the human body and for operations such as biopsy and endoscopy, involving the manipulation of medical instruments (needle, endoscope, catheter, and syringe). In this context, the present work aims to develop a VR-based computational system for simulating anesthesia, initially the inferior alveolar nerve block, emphasizing the stage of the procedure that involves correct manipulation and insertion of the needle, that is, the part of the simulation consisting of haptic (tactile) interaction. An important aspect of haptic interaction is the force model for haptic feedback, which involves equations, properties, and the behavior of dental tissues and instruments. For human tissues, the thickness, elasticity, and resistance of each layer must be considered; for instruments such as the needle, its movement, speed, position, and insertion angle are important for a realistic reproduction of inferior alveolar nerve block training.
A requirements survey was conducted at a dental school to support the proper development of the training system. The expected result is a haptic interface for a computer-based training system for dental anesthesia, validated by dentistry specialists.
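The layered force model described above can be illustrated with a minimal one-dimensional needle-insertion sketch. The layer names, thicknesses, stiffness, and friction values below are illustrative placeholders, not validated biomechanical data, and the model is far simpler than the one the project proposes:

```python
# Sketch of a piecewise force model for needle insertion through
# layered tissue (1-D, along the needle axis). All tissue parameters
# are illustrative placeholders, not measured properties.

# Each layer: (name, thickness in mm, stiffness in N/mm, friction in N)
LAYERS = [
    ("mucosa",   2.0, 0.30, 0.05),
    ("muscle",  10.0, 0.15, 0.10),
    ("fascia",   1.0, 0.60, 0.08),
]

def needle_force(depth_mm):
    """Resistance force (N) felt at insertion depth `depth_mm`.

    Inside a layer the needle meets elastic resistance that grows
    with penetration; layers already traversed still contribute
    friction along the shaft.
    """
    force = 0.0
    top = 0.0
    for name, thickness, k, friction in LAYERS:
        bottom = top + thickness
        if depth_mm <= top:
            break                          # layer not reached yet
        if depth_mm < bottom:
            force += k * (depth_mm - top)  # elastic resistance in layer
            force += friction * 0.5        # partial shaft friction
        else:
            force += friction              # fully traversed: friction only
        top = bottom
    return force

if __name__ == "__main__":
    for d in (0.0, 1.0, 2.0, 6.0, 12.5):
        print(f"depth {d:5.1f} mm -> force {needle_force(d):.3f} N")
```

In a haptic loop, a function like this would be evaluated at the device update rate (typically around 1 kHz) to drive the force rendered along the needle axis.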
Keywords: Haptic interaction, Virtual Reality, dental training.
Cléber Gimenez Corrêa and Romero Tori.
Project poster: PDF
Real-Time Three-Dimensional Reconstruction of Humans Based on Registration and Accumulation of Depth Maps
The insertion of a realistic, real-time graphical representation of the user into a three-dimensional virtual environment is a widely explored problem in augmented reality research. One way to achieve this insertion is through reconstruction methods. However, many methods depend on the capture setup, which may require complex equipment configurations and may not allow three-dimensional or real-time reconstruction. This project aims to create a complete real-time method for three-dimensional reconstruction of humans, targeted at immersive telecommunication systems, based on the registration and accumulation of depth maps captured and processed over time. For this task, a reconstruction model of articulated bodies with deformable surfaces will be used, a model capable of representing dynamic objects such as humans. For mesh registration and the accumulation of new data, a model based on first-order Bayesian networks will be used, in order to enable real-time reconstruction.
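The accumulation step can be illustrated with a minimal per-pixel fusion of noisy depth maps into a running weighted average. This is a toy stand-in for the articulated Bayesian model the project proposes; the confidence weighting and the use of zero as a missing-data marker are assumptions for illustration:

```python
# Minimal per-pixel accumulation of depth maps: each new measurement
# is fused into a running weighted average, filling holes as data
# arrives over time. Toy stand-in for the project's Bayesian model.

def fuse(accum, weight, new_depth, new_weight=1.0):
    """Fuse one new depth map into the accumulated map.

    accum, weight, new_depth: 2-D lists of equal shape.
    A depth of 0 marks a missing measurement and is skipped.
    """
    for i in range(len(accum)):
        for j in range(len(accum[0])):
            d = new_depth[i][j]
            if d == 0:      # no measurement at this pixel
                continue
            w = weight[i][j]
            accum[i][j] = (accum[i][j] * w + d * new_weight) / (w + new_weight)
            weight[i][j] = w + new_weight
    return accum, weight

if __name__ == "__main__":
    accum  = [[0.0, 0.0], [0.0, 0.0]]
    weight = [[0.0, 0.0], [0.0, 0.0]]
    frames = [[[1.0, 0.0], [2.0, 4.0]],
              [[1.2, 3.0], [2.0, 0.0]]]
    for f in frames:
        fuse(accum, weight, f)
    print(accum)   # noise averaged out; holes filled as data arrives
```

A real system would first register each incoming map against the accumulated model (here the maps are assumed pre-aligned) and would track per-measurement confidence rather than a flat weight.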
Keywords: Video Avatar, Augmented Reality, Surface Geometric Reconstruction.
Daniel Makoto Tokunaga and Romero Tori.
Realism in Deformable Virtual Objects for Medicine Teaching Applications
Virtual and Augmented Reality are increasingly used in computational applications for medical training. In this type of application, visual and haptic realism are necessary to give the user sensations similar to those of the actual procedure. The deformation of three-dimensional objects is one of the behaviors that increases this realism. To achieve it, physics-based methods are the most commonly used; however, these methods often cannot provide real-time interaction. Thus, the objective of this work is to propose a method to simulate the deformation of three-dimensional objects representing soft tissues. In this method, the three-dimensional objects will use layered surface meshes to simulate volume. Each layer will represent part of an organ and/or tissue, and physical parameters can be assigned per layer to improve the realism of the physical behavior. Since the three-dimensional object will be complex, with different physical characteristics, different deformation methods may be combined. In this way, we intend to achieve visual realism, haptic realism, and real-time interaction at minimal computational cost. The method was proposed after conducting a Systematic Review (SR) to identify which methods and approaches have been used in applications for medical training and what level of visual and haptic realism they achieve. The SR showed that the Mass-Spring method has been widely used to simulate deformation in computational applications for medical training. There are several techniques to solve the Mass-Spring system; in this work we intend to use the Gauss-Seidel iterative method to solve the linear system, together with an approach that separates active, passive, and fixed vertices. In this way, the stiffness matrix becomes sparse and only the vertices affected by the applied external force are updated.
We intend to use this adaptation of the Mass-Spring method in the outermost layer of the three-dimensional model, where visual realism matters because it is the part seen by the user. In the inner layers, where force feedback is the main concern, methods that compute the reaction force will be used. Performance and interaction tests will be carried out with an object-oriented framework, Virtual Medical Training (ViMeT), developed at the Laboratory of Interactive Technologies of the Polytechnic School and at the Laboratory of Health Informatics Applications of the School of Arts, Sciences and Humanities, both of the University of São Paulo. In the current version of ViMeT, it is possible to load one three-dimensional model that simulates a human organ and another that simulates a medical instrument; it also offers three functionalities: (i) a deformation method based on Mass-Spring; (ii) three collision detection methods; and (iii) a stereoscopic visualization method using the anaglyph technique. The loading of the three-dimensional models, as well as the functionalities and their respective parameters, is configured through an automatic instantiation tool called ViMeTWizard.
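The Gauss-Seidel idea above, updating each vertex in place from its neighbors' most recent positions while fixed vertices anchor the mesh, can be sketched on a tiny two-spring patch. The geometry, rest length, and stiffness factor are illustrative assumptions, and this relaxation is a much-simplified stand-in for solving the full Mass-Spring linear system:

```python
# Sketch of a mass-spring patch relaxed with Gauss-Seidel-style
# sweeps: each vertex is updated in place using its neighbours'
# latest positions, so only vertices reached by the applied force
# actually move. Fixed vertices anchor the mesh. Geometry, rest
# length and stiffness below are illustrative assumptions.

REST = 1.0   # spring rest length (assumed uniform)
K = 0.5      # fraction of the stretch removed per update (illustrative)

def relax(positions, springs, fixed, iters=50):
    """positions: dict vertex -> [x, y]; springs: iterable of (a, b)."""
    for _ in range(iters):
        for a, b in springs:
            ax, ay = positions[a]
            bx, by = positions[b]
            dx, dy = bx - ax, by - ay
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            corr = K * (dist - REST) / dist   # signed stretch correction
            if a not in fixed:
                positions[a] = [ax + corr * dx, ay + corr * dy]
            if b not in fixed:
                positions[b] = [bx - corr * dx, by - corr * dy]
    return positions

if __name__ == "__main__":
    # Two pre-tensioned springs between fixed anchors; the middle
    # vertex has been pushed down 0.4 units by a virtual instrument.
    pos = {0: [0.0, 0.0], 1: [1.2, -0.4], 2: [2.4, 0.0]}
    relax(pos, [(0, 1), (1, 2)], fixed={0, 2})
    print(pos[1])   # the indentation relaxes: y returns close to 0
```

In the proposed method, a sweep like this would run only over the active vertex set around the contact point, which is what keeps the effective stiffness matrix sparse.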
Keywords: Deformation, Teaching, Mass-Spring, Medicine, Soft tissue.
Ana Cláudia M. T. de Oliveira and Romero Tori.
Project poster: PDF
Architectural Proposal for Virtual and Augmented Reality Applications in the Cloud
Virtual reality and augmented reality are used to build complex virtual environments with non-trivial input and output devices, providing users with a sense of immersion in real-time synthetic worlds. The development of applications involving virtual and augmented reality is usually a complex, high-cost undertaking, since it spans several areas of knowledge. It is therefore fundamentally important to research and develop methods and techniques that enable the reuse of software artifacts, with the goals of reducing cost and delivery time and increasing the quality of the software produced. In this scenario, software reuse is not an alternative but a necessity. This work presents a study of the main approaches to software reuse, focusing on two of them: component-based development and service-oriented development. The survey showed a high percentage of works using the component-based approach for reusing software artifacts in virtual and augmented reality applications, about 80% of them, evidencing the strong adoption of components as a means of distribution, flexibility, and reuse of software elements. The service-oriented approach also appeared at an expressive rate, approximately 40%. However, the works that presented this approach did not adhere to the precepts of service-oriented computing: the vast majority treat "services" as features that certain software artifacts expose, not as a service-oriented development paradigm. The objective is to produce an architectural model of a platform for producing and sharing software components for virtual and augmented reality applications.
The proposed logical architectural model brings together the paradigms of component-based development, service-oriented computing, and cloud computing, with the goal of producing a platform for building and running collaborative virtual and augmented reality applications in the cloud. The proposed architecture will be validated through two strategies, the first of which is a formal software architecture evaluation method, the Architecture Tradeoff Analysis Method (ATAM).
Keywords: Virtual Reality, Augmented Reality, Reuse, Component-Based Development, Service Oriented Computing and Cloud Computing.
Evandro César Freiberger and Romero Tori.
Project poster: PDF.
Real-Time Video Segmentation Method for Tele-Immersion Applications
Segmenting video to extract a person in the foreground is a common need in many applications, including Augmented Reality (AR) systems. When this task must be performed in uncontrolled environments, where a homogeneous background color or lighting cannot be expected, segmentation methods suited to these conditions must be used. Two main approaches can be identified among the segmentation methods used in AR and similar systems: those based on background subtraction and those based on energy-minimization frameworks. The second approach is more robust but computationally more expensive. Although sophisticated, such methods are still more error-prone than traditional methods based on subtracting a homogeneous background color. For these methods to be feasible in certain applications, such as the one addressed in this work, it is important to study their errors and their impact on user perception, so that development effort can be concentrated on segmentation algorithms that reduce the highest-impact errors, which may vary by application. Although there is research on segmentation quality, few works have tried to identify which artifacts cause the most annoyance to the user. Subjective image- and video-quality assessment methods, traditionally used to evaluate television images and, more recently, multimedia applications, have proved efficient for evaluating segmentation quality. The most common segmentation errors produced by state-of-the-art methods can be simulated, using videos that display images in the context of a particular application, and the results can be evaluated through a subjective procedure. In this way, one can identify the artifacts (or combinations of artifacts) that most annoy users and apply this knowledge in the development of a new algorithm.
In this context, we propose the development of a segmentation method that takes into account the results of subjective evaluations of segmentation quality and prevents the most perceptible errors from being displayed to the user. The method will be applied to an AR system used for immersive teleconferencing, in which video is captured in an uncontrolled environment and the segmented image is used to generate a billboard-style avatar.
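The simpler of the two approaches discussed above, background subtraction, can be sketched per pixel on grayscale frames. The threshold value is an illustrative assumption; real systems model the background statistically rather than as a single reference frame:

```python
# Minimal background-subtraction segmentation on grayscale frames:
# a pixel is foreground when it differs from the stored background
# model by more than a threshold. The threshold is illustrative.

THRESHOLD = 25  # intensity difference (0-255) marking foreground

def segment(frame, background, threshold=THRESHOLD):
    """Return a binary mask: 1 = foreground (person), 0 = background."""
    return [
        [1 if abs(p - b) > threshold else 0
         for p, b in zip(frame_row, bg_row)]
        for frame_row, bg_row in zip(frame, background)
    ]

if __name__ == "__main__":
    background = [[10, 10, 10],
                  [10, 10, 10]]
    frame      = [[12, 200, 11],      # bright pixels: person entering
                  [10, 180, 190]]
    mask = segment(frame, background)
    print(mask)   # [[0, 1, 0], [0, 1, 1]]
```

The segmentation errors this work studies subjectively (holes in the person, background fragments kept, flickering edges) are exactly the failure modes of a per-pixel rule like this when lighting or the background changes.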
Keywords: Segmentation; Subjective Evaluation; Augmented Reality.
Silvio Ricardo Rodrigues Sanches and Romero Tori.
Project poster: PDF.
Quantitative Assessment of Presence Perception in Teaching Activities Using Eye Tracking.
Despite the strong growth of distance education in Brazil and worldwide, neither distance nor in-person education yet has a tool that gives the teacher real-time information on students' perception (meaningful interpretation) of an ongoing activity. Presence perception (PP, short for telepresence), defined as a perceptual illusion of non-mediation, can be used as an indicator of attention and engagement, since these are related. The most common method for evaluating PP uses questionnaires applied to subjects after the experience. Besides not providing real-time information, this method suffers from noise introduced both by the subjects and by the questionnaire evaluators. The most effective real-time PP evaluation methods use physiological signals such as heart rate, electrocardiogram, electroencephalogram, and skin resistivity and moisture, since these signals vary independently of the subject's will. Physiological signals, however, only vary significantly in stressful situations, which makes them unusable in normal teaching activities. Another way to evaluate PP is through eye tracking systems, which have been studied and developed since the 19th century. These systems map eye movement and, besides indicating where subjects are looking, can also monitor pupil dilation and blinking. Current eye tracking systems consist of hardware and software, and the main obstacle to their wide use is the high cost of the hardware; to circumvent this, a low-cost eye tracker is being developed at Interlab. Like physiological signals, eye movement occurs independently of the subject's will; it is not smooth and continuous, but proceeds in saccades and fixations.
Saccades are fast movements lasting 20 to 100 ms, and fixations are pauses lasting 100 to 400 ms. When looking at a scene (a picture, for example), our brain determines regions of interest, uses saccades to aim the eyes at those regions, and uses fixations to capture information; the eyes capture only small regions of the scene, leaving to the brain the construction and interpretation of the whole. Studies relate eye movement to behavior, perception, and cognition. It has recently been shown that the conditional entropy of eye movement is related to PP in a static three-dimensional scenario: the greater the conditional entropy of the movement, the lower the PP. Based on this relationship, a method will be developed that can indicate, quantitatively and in real time, a value for PP using eye tracking in a teaching activity (in-person or at a distance) that has visual and sound stimuli.
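The conditional-entropy indicator mentioned above can be sketched by discretizing gaze points into screen regions and computing the entropy of the next region given the current one. The 4x4 grid and the test trajectories are illustrative assumptions, not parameters from the project:

```python
# Sketch: conditional entropy H(next region | current region) of a
# gaze trajectory discretized on a coarse screen grid. Per the
# relationship above, lower conditional entropy (more predictable
# scanning) corresponds to higher presence perception.
import math
import random
from collections import Counter, defaultdict

GRID = 4   # screen divided into GRID x GRID regions (illustrative)

def region(x, y):
    """Map a normalized gaze point (0..1, 0..1) to a region index."""
    col = min(int(x * GRID), GRID - 1)
    row = min(int(y * GRID), GRID - 1)
    return row * GRID + col

def conditional_entropy(gaze_points):
    """H(R_{t+1} | R_t) in bits over the visited-region sequence."""
    regions = [region(x, y) for x, y in gaze_points]
    transitions = defaultdict(Counter)
    for cur, nxt in zip(regions, regions[1:]):
        transitions[cur][nxt] += 1
    total = len(regions) - 1
    h = 0.0
    for nexts in transitions.values():
        n_cur = sum(nexts.values())
        for count in nexts.values():
            h -= (count / total) * math.log2(count / n_cur)
    return h

if __name__ == "__main__":
    # Gaze held in one region vs. gaze jumping at random.
    focused = [(0.1, 0.1), (0.12, 0.11), (0.11, 0.12)] * 10
    random.seed(42)
    wandering = [(random.random(), random.random()) for _ in range(200)]
    print("focused  :", conditional_entropy(focused))    # 0.0 bits
    print("wandering:", conditional_entropy(wandering))  # several bits
```

A real-time monitor would compute this over a sliding window of recent fixations and map the entropy value onto a PP scale calibrated against questionnaire data.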
Keywords: perception of presence, attention, engagement, teaching, quantitative evaluation and eye tracking.
Fernando Yoiti Obana and Romero Tori
Project poster: PDF
Ae3D: Connecting Virtual Worlds to LCMSs
Cilene Ap. Mainente Lora and Romero Tori.
Online virtual worlds have potential uses in teaching. Users, represented in the virtual world by avatars, can interact with each other and with environments and objects, often developed specifically for teaching. Students' engagement with the course, as well as a sense of belonging to a community, can help motivate them and consequently broaden their learning experience. However, evidence of such environments' efficacy in improving learning still requires further study. One of the main barriers to deeper studies is the burden that the technical preparation of these environments places on educators. One possible solution is to use preconfigured, ready-made environments to facilitate educators' experiments. This work proposes the construction of an interoperability layer, called Ae3D, between traditional LCMSs and a virtual world, providing an interactive and attractive interface for students increasingly accustomed to video games. Such a layer would eliminate teachers' rework in preparing and publishing materials, which may contribute to studies that confirm or refute the gains of using three-dimensional environments in education.
Keywords: Virtual Worlds, MUVE, Teaching.
Project poster: PDF
Open Virtual 3D World as Interface for Learning Environments
Fábio Martins do Carmo and Romero Tori.
This work is a follow-up study on the development and feasibility of three-dimensional virtual worlds as alternative interfaces for accessing Learning Management Systems (LMSs). Information and Communication Technologies (ICTs), used in LMSs, and Virtual Reality (VR) technologies, used in 3D Virtual Worlds (3DVWs), are discussed with a focus on their development and application in teaching-learning processes. The Ae-3D project, related to this work, is also discussed, in order to analyze its proposed model integrating the Second Life (SL) 3D virtual world and the Tidia-Ae LMS. The research described here is thus a continuation and improvement of the Ae-3D project; it suggests an alternative for building virtual worlds that allows the mapping of LMS functionality and greater control over the components of the hybrid system. This alternative takes the form of an open-source platform whose features are not far from those offered by SL. As a proof of concept, a software module was implemented connecting the OpenSimulator 3D virtual world platform to the Tidia-Ae LMS, enabling interoperability between the systems regardless of the server where the LMS runs. This module allows LMS functionality to be modeled in different ways, as described in the mapping performed by the Open Ae-3D project. The resulting environment works as a 3D interface for accessing LMS features and functionality. The study's premise is that educators and instructional designers should not have to learn a new environment and can continue using a conventional LMS, while interested students can join and participate in online activities through this new interface. This integration of technologies seeks to provide the interactivity of immersive 3D environments while taking advantage of the knowledge, skills, and experience educators have acquired in implementing and using electronic learning systems.
Keywords: Computer Teaching, Virtual Reality, Distance Education, Technology Integration.
Project poster: PDF
Open Source Hardware Applied to Interactivity
Diego Spinola and Romero Tori.
This project, currently under development, aims to study the main open-source hardware (OSHW) platforms available today, with a view to their applicability in projects involving interactivity. It discusses how recent advances in "personal fabrication" (low-cost 3D printing) and low-cost prototyping can contribute, by effectively lowering the entry barrier, to hardware proofs of concept in academic environments. To this end, a tool (a development platform consisting of an RTOS, a set of compatible architectures, and server software) is being developed to make it easier and cheaper to develop projects with hardware components in academic laboratories. The tool is already being tested and will be applied to existing Interlab projects that require specialized hardware components. So far, the effectiveness of the platform is being measured by its application in the VIMPHIN project (http://hackeneering.com/vimphin), a low-cost, non-conventional haptic interaction device that can be replicated openly and without limitations. In addition, platform tests are planned for at least two more projects: a cloud of intelligent infrared (IR) markers (Diego Spinola, Interlab USP), and the embedded gaze-tracking hardware to be used in the Interlab/UNEMAT partnership project "Evaluation of the perception of presence in learning activities using eye tracking", by Fernando Yoiti Obana, Universidade do Estado de Mato Grosso. At the end of the master's project, the tool will be published under the OSHW license (available at http://freedomdefined.org/OSHW), with software and schematics freely distributed online.
Keywords: OSHW, Haptic Interactivity, Haptic Device.
Project poster: PDF
A Method to Evaluate Knowledge Acquisition in Interactive Three-Dimensional Virtual Learning Environments
Eunice P. Santos Nunes and Fatima L. S. Nunes.
Many researchers have argued that Virtual Reality (VR) and Augmented Reality (AR) systems are trends in educational training. This is because VR and AR systems offer opportunities for immersive and non-immersive experiences, realistic contexts, activities for experiential learning, training simulations, complex scenario modeling, and multi-user collaboration within the same virtual world. VR and AR applications are found on a large scale in three-dimensional virtual training systems in the most diverse fields of knowledge, such as Medicine, Engineering, Industry, Science, and Mathematics. Thus, with the popularization of three-dimensional Virtual Learning Environments (3D VLEs) based on simulations of real situations, students' knowledge acquisition (cognition) in 3D VLEs has become a research subject, so that this aspect can be considered from the design phase of these environments onward. In general, research evaluating the application of 3D VLEs in learning activities has identified engagement and motivation among participants, among other positive aspects. However, studies show that although much research addresses 3D VLEs, it remains incipient from the standpoint of evaluating the learner's knowledge acquisition. There is therefore a gap to be filled: providing models and tools that enable educational researchers who use this type of virtual environment to evaluate learning. Considering this scenario, the main objective of this work is to establish a method for evaluating learning in 3D VLEs, from the perspective of cognitive theories, so as to make feasible experiments that verify how, and to what extent, 3D VLEs contribute to the knowledge acquisition process.
To this end, we first examined the knowledge acquisition process in Virtual Learning Environments under the three-dimensional paradigm, as well as existing methods for evaluating whether this acquisition actually occurs and to what degree. From this study, a conceptual model for evaluating knowledge acquisition was defined, using a cognitive model in an innovative way to complement the questionnaires widely used as evaluation instruments. The model is currently in the phase of computational implementation and experiment definition. To validate the proposed method, applications will be built from frameworks developed at the Interactive Technologies Laboratory (Interlab) of the Polytechnic School of USP and at LApIS (Laboratory of Computer Applications in Health) of the School of Arts, Sciences and Humanities of USP, and will be used in experiments in real learning situations.
Keywords: Virtual Reality, Augmented Reality, Evaluation, Knowledge Acquisition.
Project poster: PDF
3D Reconstruction Technique Based on Structured Light for Video Avatar Applications
Daniel de Andrade Lemeszenski and Ricardo Nakamura
Recent technological advances allow a conventional videoconference to evolve into an immersive teleconferencing system in which the presenter is scanned and a three-dimensional geometric representation (a realistic avatar) is inserted into a 3D virtual environment. The objective of this work is to propose a real-time three-dimensional reconstruction method for a dynamic geometric model representing a moving human being, for use in an immersive teleconferencing system.
Project poster: PDF
Methodology for Evaluation of Sensory-Motor Abilities in Virtual Medical Training Applications
Alexandre Martins dos Anjos and Fatima L. S. Nunes
The use of Virtual Reality (VR) environments is an increasingly common practice in applications developed for education and training. A special case is its use in the acquisition of Sensory-Motor Skills (SMS) in VR and medical training contexts. In this sense, it is necessary to measure an individual's capacity to produce movements toward objectives expressed in domain-specific tasks, using applications developed for VR contexts. To develop such a measurement, a model must be built and validated that abstracts the possible methods and parameters of an SMS acquisition evaluation process. Considering the possibilities that VR technologies offer the SMS learning field, the medical area in particular lacks research consolidating a methodology with which health professionals can evaluate skills acquisition activities focused on motor learning in VR contexts. The present work proposes a methodology to measure SMS acquisition using VR environments. To achieve this objective, a Systematic Review (SR) was conducted to identify studies addressing methods and parameters used in SMS acquisition assessment processes. Next, a conceptual model for evaluating SMS acquisition was defined. The computational implementation of this conceptual model is currently under way, along with the design of experiments to validate the model after its implementation. As a proof of concept, we intend to capture interactions performed in medical training applications that simulate biopsy exams. These applications are generated by adapting the Virtual Medical Training (ViMeT) framework, developed by LApIS (Laboratory of Computer Applications in Health) at EACH (School of Arts, Sciences and Humanities) and by Interlab (Laboratory of Interactive Technologies at the Polytechnic School), both of the University of São Paulo (USP).
ViMeT is an object-oriented framework that implements a set of classes in the Java language, generating applications that simulate biopsy exams. The framework allows the construction of virtual environments for manipulating synthetic models that represent human organs and medical instruments. It also offers functionality common to such applications, including stereoscopy, precise collision detection, deformation, and interaction with non-conventional devices. The framework's modular structure enables new methods for these features, or even entirely new features, to be seamlessly coupled.
Keywords: Evaluation of acquisition of sensory-motor skills; Virtual reality; Medical training; Virtual Environments; Virtual Learning Environments.
Project poster: PDF