91

The Effects of the Combination of Interview Practice in a Mixed-Reality Environment and Coaching on the Interview Performance of Young Adults with Intellectual Disabilities

Walker, Zachary M 01 January 2012 (has links)
The purpose of this study was to determine whether a functional relationship exists between a treatment combination of interview practice in a mixed-reality learning modality (TLE TeachLivE™) with individualized coaching sessions and the interview performance of young adults with intellectual disabilities (ID). Student participants took part in live pre-interviews with the University of Central Florida (UCF) Office of Career Services to measure their current levels of employment interview performance. They then engaged in interviews with avatars in the TLE TeachLivE™ lab, and after each treatment interview received individualized coaching sessions to help them improve their interview performance. Interview performance was rated to determine whether the combination of interview practice and coaching increased participant performance as measured on an interview rubric. Finally, student participants took part in live post-interviews with the Office of Career Services to determine whether the two-step instructional training intervention resulted in improved interview performance in a natural, live setting. In addition, student participants, parents/primary caregivers, and an employee expert panel completed a survey rating the goals, procedures, and outcomes of the study. Results indicated that the combination of interview practice in the TLE TeachLivE™ setting and coaching was associated with immediate gains in participants' interview performance, and performance also improved in live interview settings. Social validity data indicated that this combination intervention was both valuable and appropriate in preparing individuals with ID for employment interviews.
92

Direct Manipulation of Virtual Objects

Nguyen, Long 01 January 2009 (has links)
Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities--proprioception, haptics, and audition--and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum--Immersive Virtual Environment (IVE) and Reality Environment (RE). This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.
93

Development of the sustained-pain treatment through augmented-reality occupation-based protocol (STAR-OP)

Fride, Yaara 19 January 2022 (has links)
Chronic pain (CP) significantly affects participation in meaningful occupations. It is a public health problem that carries substantial social and economic costs (Dagenais et al., 2008; Dahlhamer et al., 2018; Geurts et al., 2018; Treede et al., 2015; Willems et al., 2018). Creating a successful intervention for CP is challenging due to the subjectivity of the pain experience and the complexity of factors associated with pain behavior (Newton et al., 2013; Polacek et al., 2020; Van Huet et al., 2012). This doctoral project details the development of the Sustained-pain Treatment through Augmented Reality Occupation-based Protocol (STAR-OP), a novel treatment protocol that offers practical solutions for an outpatient occupation-based CP intervention. The STAR-OP addresses critical issues for the CP population, including expectation management, adherence to home assignments, and generalization from clinical practice to the client's home environment. The program uses Augmented Reality technology to facilitate a gradual generalization process, Motivational Interviewing techniques to enhance the effectiveness of the therapeutic relationship, and the educational content of the Lifestyle Redesign® protocols presented through an occupational perspective (A. Simon & Collins, 2017). The program evaluation will examine the effectiveness of the STAR-OP via a multiple-baseline, single-subject design study, approved by an institutional review board (IRB), to be conducted at Lowenstein Rehabilitation Center in Israel.
94

From E-Learning to M-Learning – the use of Mixed Reality Games as a New Educational Paradigm

Fotouhi-Ghazvini, Faranak, Earnshaw, Rae A., Moeini, A., Robison, David J., Excell, Peter S. January 2011 (has links)
This paper analyses different definitions of mobile learning which have been proposed by various researchers. The most distinctive features of mobile learning are extracted to propose a new definition for Mobile Educational Mixed Reality Games (MEMRG). A questionnaire and a quantifying scale are designed to assist the game developers in designing MEMRG. A new psycho-pedagogical approach to teaching is proposed for MEMRG. This methodology is based on the theme of "conversation" between different actors of the learning community with the objective of building the architectural framework for MEMRG.
95

The Augmented Worker

Becerra-Rico, Josue January 2022 (has links)
Augmented Reality (AR) and Mixed Reality (MR) have attracted increasing attention recently, with implementations not only in video games and entertainment but also in work-related applications. The technology can be used to guide workers so that they complete their tasks faster and with less human error. The potential of this kind of technology is evaluated in this thesis through a proof-of-concept prototype that guides a novice in the kitchen through following a recipe and completing a dish. The thesis compares five different object detection algorithms, selecting the best in terms of time performance, energy performance, and detection accuracy; the selected algorithm is then implemented in the prototype application.
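The abstract does not name the candidate detectors or the exact selection procedure beyond time, energy, and accuracy. Purely as an illustration, the following sketch shows one way such a comparison could be scored, assuming each detector is a callable that runs one inference and that accuracy (here mean average precision) has been measured in a separate evaluation step; energy measurement is omitted.

```python
import time
from statistics import mean

# Hypothetical harness for ranking object detectors by speed and accuracy.
# `detectors` maps a model name to a callable taking an image and returning
# detections; mAP values are assumed to come from a separate evaluation run.

def benchmark(detectors, images, map_scores, speed_weight=0.5):
    """Rank detectors by a weighted blend of accuracy (mAP) and inference time."""
    results = {}
    for name, detect in detectors.items():
        times = []
        for img in images:
            start = time.perf_counter()
            detect(img)                      # run one inference
            times.append(time.perf_counter() - start)
        results[name] = {"mean_time_s": mean(times), "mAP": map_scores[name]}

    fastest = min(r["mean_time_s"] for r in results.values())
    for r in results.values():
        # Normalize speed: 1.0 for the fastest model, lower for slower ones.
        speed = fastest / r["mean_time_s"]
        r["score"] = speed_weight * speed + (1 - speed_weight) * r["mAP"]
    return sorted(results.items(), key=lambda kv: kv[1]["score"], reverse=True)

if __name__ == "__main__":
    dummy_images = [bytes(64) for _ in range(10)]            # placeholder frames
    detectors = {"model_a": lambda img: [], "model_b": lambda img: []}  # stubs
    map_scores = {"model_a": 0.62, "model_b": 0.48}           # assumed values
    for name, stats in benchmark(detectors, dummy_images, map_scores):
        print(name, stats)
```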
96

The early stages of extended reality: An analysis of the opportunities and challenges faced by early stage businesses within the extended reality (XR) industry

Johannesson, Philip, Karlsson, Julia January 2023 (has links)
In recent years, the extended reality (XR) industry has witnessed remarkable growth, revolutionizing various sectors. The potential of XR to reshape industries and create new business opportunities has captured the attention of entrepreneurs and investors alike, leading to the emergence of numerous early stage businesses venturing into this field. Despite the promising prospects, the XR industry remains in a dynamic and evolving state, presenting both opportunities and challenges for early stage businesses seeking to establish themselves within this competitive landscape. This master thesis explores the experiences of early stage businesses within the XR industry, with the goal of understanding the opportunities and challenges they encountered during the period of study, spring 2023. The study takes a qualitative research approach, employing observations of early stage XR businesses and semi-structured interviews with professionals from XR industry and academia. The analysis is based on thematic analysis, applying Disruptive Innovation theory and Adoption Curve theory combined with Gartner's Hype Cycle. Together with existing literature, the analysis creates a picture of the current opportunities and challenges in the XR field, visualized in a prototype of a 3D mind map. These range from recruitment, financing, and technology development to user adoption, ethics, and inclusion, and they reflect what businesses face in the early stages of a relatively new industry as well as some of the political, economic, and sociocultural factors at play. Among the opportunities are the high potential to transform and disrupt industries and markets, new ways of interacting in virtual worlds, and new revenue streams. Among the challenges are low adoption, funding issues, and competition with leading players. Ultimately, the research provides valuable insights into the XR industry that professionals and stakeholders can use when making strategic decisions.
97

Enhancing human-robot interaction using mixed reality

Molina Morillas, Santiago January 2023 (has links)
Industry 4.0 is a new phase of industrial growth ushered in by the rapid development of digital technologies such as the Internet of Things (IoT), artificial intelligence (AI), and robotics. Collaborative robotic products have appeared in this changing environment, enabling robots to work alongside people in open workspaces. The paradigm shift away from autonomous robotics and toward collaborative human-robot interaction (HRI) has made it necessary to look at novel ways to improve output, effectiveness, and safety. Many benefits, including greater autonomy and flexibility, have been made possible by the introduction of Automated Guided Vehicles (AGVs) and, more recently, Autonomous Mobile Robots (AMRs) for material handling. However, this incorporation of robots into communal workspaces also raises safety issues that must be taken into account. This thesis aims to address potential threats arising from the increasing automation of shopfloors and workplaces shared between AMRs and human operators by exploring the capabilities of Mixed Reality (MR) technologies. By harnessing MR's capabilities, the aim is to mitigate safety concerns and optimize the effectiveness of collaborative environments. To achieve this, the research is structured around the following sub-objectives: the development of a communication network enabling interaction among all devices in the shared workspace, and the creation of an MR user interface promoting accessibility for human operators. A comprehensive literature review was conducted to analyse existing proposals aimed at improving HRI through various techniques and approaches. The objective was to leverage MR technologies to enhance collaboration and address safety concerns, thereby ensuring the smooth integration of AMRs into shared workspaces. While the literature review revealed limited research utilizing MR for data visualization in this specific domain, the goal of this thesis was to go beyond existing solutions by developing a comprehensive approach that prioritizes safety and facilitates operator adaptation. The research findings highlight the strength of MR in displaying critical information about robot intentions and in identifying safe zones with reduced AMR activity. The use of HoloLens 2 devices, known for their ergonomic design, ensures operator comfort during extended use while enhancing the accuracy of tracking positions and intentions in highly automated environments. The presented information is designed to be concise, customizable, and easily comprehensible, preventing information overload for operators. The implementation of MR technologies within shared workspaces necessitates ethical considerations, including transparent data collection and user consent. Building trust is essential to establish MR as a reliable tool that enhances operator working conditions and safety. Importantly, the integration of MR technologies does not pose a threat of job displacement; rather, it facilitates the smooth adaptation of new operators to collaborative environments. The implemented features augment existing safety protocols without compromising efficacy, resulting in an overall improvement in safety within the collaborative workspace. In conclusion, this research demonstrates the effectiveness of MR technologies in strengthening HRI, addressing safety concerns, and improving operator working conditions within collaborative shopfloor environments.
Despite limitations in terms of time, complexity, and available information, the developed solution shows potential for further improvement. The chosen methodology and philosophical paradigm supported the attainment of the research objectives, and crucial ethical considerations have been addressed. Ultimately, this thesis proposes and thoroughly explains potential future implementations aimed at expanding the solution's current capabilities.
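The abstract does not disclose the message format or safety logic exchanged between the AMRs and the HoloLens 2 interface, so the following is only a hypothetical sketch: an AMR state message (position plus next waypoint) is assumed to arrive as JSON, and a floor zone is flagged unsafe when it falls within an assumed clearance radius of a robot's current or intended position. In the actual system such a check would presumably run inside the headset application; Python is used here purely to keep the illustration compact.

```python
import json
import math
from dataclasses import dataclass

CLEARANCE_M = 1.5  # assumed minimum human-robot separation, in metres

@dataclass
class AmrState:
    x: float
    y: float
    goal_x: float
    goal_y: float

def parse_state(payload: str) -> AmrState:
    # Hypothetical JSON schema; the thesis does not publish its message format.
    msg = json.loads(payload)
    return AmrState(msg["x"], msg["y"], msg["goal_x"], msg["goal_y"])

def zone_is_safe(zone_xy, amrs, clearance=CLEARANCE_M) -> bool:
    """A zone is safe if every AMR and every AMR goal is farther than `clearance`."""
    zx, zy = zone_xy
    for amr in amrs:
        for px, py in ((amr.x, amr.y), (amr.goal_x, amr.goal_y)):
            if math.hypot(zx - px, zy - py) < clearance:
                return False
    return True

if __name__ == "__main__":
    amrs = [parse_state('{"x": 2.0, "y": 1.0, "goal_x": 5.0, "goal_y": 1.0}')]
    for zone in [(0.5, 0.5), (2.5, 1.2), (8.0, 3.0)]:
        print(zone, "safe" if zone_is_safe(zone, amrs) else "unsafe")
```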
98

Faktoren zur Akzeptanz von Virtual Reality Anwendungen / Factors for the acceptance of virtual reality applications

von Eitzen, Ingo Martin January 2024 (has links) (PDF)
Immersive technologies, such as augmented and virtual reality, can either improve or endanger existing business models. However, their beneficial potential can only unfold if users accept the technologies and ultimately use them. This dissertation describes what acceptance is and which influencing variables (factors) are particularly relevant for the acceptance of virtual reality. Subsequently, a novel, holistic acceptance model for virtual reality was designed based on the discussed literature and tested in three studies. In the first study, 129 subjects were asked to try out either a training scenario or a mini-game in augmented or virtual reality (2x2 design); in both applications, bottles had to be removed from a virtual assembly line. The study investigated immersion, usefulness, perceived pleasure (hedonism), and satisfaction. The results revealed that immersion differs between augmented and virtual reality, and that perceived pleasure and usefulness are significant predictors of satisfaction. In the second study, 62 persons participated. They were asked to complete the training scenario again, which was enriched with auditory content and animated figures and had slightly better graphics quality. The data were compared with the virtual reality scenarios from the first study to examine the impact of presence on hedonism. Although no relevant difference was found between the groups, presence was shown to significantly predict hedonism. A total of 35 subjects took part in the third study, which examined the virtual representation of oneself (embodiment) in virtual reality and its influence on hedonism. The subjects were asked to go through the training scenario again, this time using the input device (controller) of the head-mounted display for control; in the first study, gesture control had been used instead. The analysis of this manipulation revealed no effect on embodiment. However, embodiment significantly predicted hedonism. Following the studies, the model was assessed with the data from the virtual reality groups of the first study and was largely confirmed. Finally, the findings are placed in relation to the literature, possible causes for the results are discussed, and further research needs are identified.
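The dissertation reports predictor effects (for example, hedonism and usefulness predicting satisfaction) without the abstract showing the underlying computation. The sketch below, run on synthetic data, illustrates the kind of ordinary-least-squares regression such a claim rests on; the actual analyses may use different models (e.g., structural equation modelling), so this is purely illustrative.

```python
import numpy as np

# Synthetic data only: ratings are invented, not taken from the dissertation.
rng = np.random.default_rng(0)
n = 129                                          # sample size of study 1
hedonism = rng.normal(4.0, 1.0, n)               # fake Likert-style scores
usefulness = rng.normal(3.5, 1.0, n)
satisfaction = 0.5 * hedonism + 0.3 * usefulness + rng.normal(0, 0.8, n)

# OLS: satisfaction ~ intercept + hedonism + usefulness
X = np.column_stack([np.ones(n), hedonism, usefulness])
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((satisfaction - pred) ** 2) / np.sum((satisfaction - satisfaction.mean()) ** 2)
print(f"intercept={beta[0]:.2f}, b_hedonism={beta[1]:.2f}, "
      f"b_usefulness={beta[2]:.2f}, R^2={r2:.2f}")
```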
99

Automatic 3D human modeling: an initial stage towards 2-way inside interaction in mixed reality

Xiong, Yiyan 01 January 2014 (has links)
3D human models play an important role in computer graphics applications from a wide range of domains, including education, entertainment, medical care simulation, and military training. In many situations, we want the 3D model to have a visual appearance that matches that of a specific living person and to be controllable by that person in a natural manner. Among other uses, this approach supports the notion of human surrogacy, where the virtual counterpart provides a remote presence for the human who controls the virtual character's behavior. In this dissertation, a human modeling pipeline is proposed for the problem of creating a 3D digital model of a real person. Our solution involves reshaping a 3D human template with a 2D contour of the participant and then mapping the captured texture of that person to the generated mesh. Our method produces an initial contour of a participant by extracting the user image from a natural background. One particularly novel contribution in our approach is the manner in which we improve the initial vertex estimate. We do so through a variant of the ShortStraw corner-finding algorithm commonly used in sketch-based systems. Here, we develop improvements to ShortStraw, presenting an algorithm called IStraw, and then introduce adaptations of this improved version to create a corner-based contour segmentation algorithm. This algorithm provides significant improvements in contour matching over previously developed systems, and does so with low computational complexity. The system presented here advances the state of the art in the following aspects. First, the human modeling process is triggered automatically by matching the participant's pose with an initial pose through a tracking device and software; in our case, the pose capture and skeletal model are provided by the Microsoft Kinect and its associated SDK. Second, color image, depth data, and human tracking information from the Kinect and its SDK are used to automatically extract the contour of the participant and then generate a 3D human model with a skeleton. Third, using the pose and the skeletal model, we segment the contour into eight parts and then match the contour points on each segment to a corresponding anchor set associated with a 3D human template. Finally, we map the color image of the person to the 3D model as its corresponding texture map. The whole modeling process takes only a few seconds, and the resulting model resembles the real person: the geometry of the 3D model matches the contour of the real person, and the model has a photorealistic texture. Furthermore, the mesh of the human model is attached to the skeleton provided in the template, so the model can support programmed animations or be controlled by real people. This human control is commonly done through a literal mapping (motion capture) or a gesture-based puppetry system. Our ultimate goal is to create a mixed reality (MR) system in which the participants can manipulate virtual objects, and in which these virtual objects can affect the participant, e.g., by restricting their mobility. This MR system prototype design motivated the work of this dissertation, since a realistic 3D human model of the participant is an essential part of implementing this vision.
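For readers unfamiliar with the corner finder this work builds on, the sketch below shows the core of the original ShortStraw idea (the IStraw refinements and the contour-segmentation adaptation described above are not reproduced): each point of an evenly resampled path gets a "straw" length, and short straws that are local minima below a median-based threshold mark corner candidates. The window size and threshold factor used here are the values commonly cited for ShortStraw and should be treated as assumptions.

```python
import math
from statistics import median

def shortstraw_corners(points, window=3, threshold_factor=0.95):
    """Return indices of corner candidates in an evenly resampled point sequence."""
    n = len(points)
    if n < 2 * window + 1:
        return []

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # A "straw" is the chord length spanning the local window around a point;
    # short straws indicate high curvature, i.e. a likely corner.
    straws = {i: dist(points[i - window], points[i + window])
              for i in range(window, n - window)}
    threshold = threshold_factor * median(straws.values())

    corners = []
    for i, straw in straws.items():
        is_local_min = all(straw <= straws.get(j, float("inf"))
                           for j in (i - 1, i + 1))
        if straw < threshold and is_local_min:
            corners.append(i)
    return corners

if __name__ == "__main__":
    # An L-shaped path: the bend at index 10 is reported as the corner.
    pts = [(i, 0.0) for i in range(11)] + [(10.0, j) for j in range(1, 11)]
    print(shortstraw_corners(pts))
```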
100

Remote collaboration within a Mixed Reality rehabilitation environment: The usage of audio and video streams for mixed platform collaboration

Eriksson, Hanna January 2022 (has links)
This thesis investigates methods for remote collaboration and communication within a Mixed Reality (MR) rehabilitation environment. Based on research into remote communication methods and an interview with an occupational therapist with previous experience in MR rehabilitation, a video and audio stream communication method was chosen for implementation. The implementation consists of two applications: a patient application developed for HoloLens 2 and a therapist application for Android devices. The latter was tested with professional occupational therapists to investigate the feasibility of the method. The results of the test indicated that the general attitude toward remote rehabilitation was positive. However, the chosen method did not allow the therapist to see the patient's face and surroundings, which was a problem for a majority of the test participants. The cognitive workload for the therapist when communicating with the patient was comparable in magnitude to that of similar tasks, and the application was relatively easy to navigate.
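The abstract does not describe the transport used for the video and audio streams between the HoloLens 2 and Android applications, and the real apps are not written in Python. Purely as a stand-in, the sketch below shows one minimal pattern such a link could follow: length-prefixed frames of already-encoded media sent over TCP.

```python
import socket
import struct

# Illustrative framing helpers; the actual thesis transport is unspecified.

def send_frame(sock: socket.socket, frame: bytes) -> None:
    """Send one encoded frame, prefixed with its 4-byte big-endian length."""
    sock.sendall(struct.pack(">I", len(frame)) + frame)

def recv_frame(sock: socket.socket) -> bytes:
    """Receive one length-prefixed frame (blocking)."""
    header = _recv_exact(sock, 4)
    (length,) = struct.unpack(">I", header)
    return _recv_exact(sock, length)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf

if __name__ == "__main__":
    # Loopback demo: a socket pair stands in for the network link between apps.
    a, b = socket.socketpair()
    send_frame(a, b"fake-encoded-frame")
    print(len(recv_frame(b)), "bytes received")
```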
