  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Vertex classification for non-uniform geometry reduction

Fernando dos Santos Fradinho Duarte de Oliveira, J. January 2008 (has links)
Complex models created from isosurface extraction or CAD, and highly accurate 3D models produced from high-resolution scanners, are useful, for example, in medical simulation, Virtual Reality and entertainment. Models often require some manual editing before they can be incorporated in a walkthrough, simulation, computer game or movie. The visualization challenges of a 3D editing tool may be regarded as similar to those of other applications that include an element of visualization, such as Virtual Reality. However, the rendering and interaction requirements of each of these applications vary according to their purpose. Whereas for rendering photo-realistic images in movies computer farms can render uninterrupted for weeks, a 3D editing tool requires fast access to a model's fine data. In Virtual Reality, rendering acceleration techniques such as level of detail (LoD) can temporarily render parts of a scene with alternative lower-complexity versions in order to meet a frame rate tolerable for the user. These alternative versions can be dynamic increments of complexity, or static models that were uniformly simplified across the model by minimizing some cost function. Scanners typically have a fixed sampling rate for the entire model being scanned, and may therefore generate large amounts of data in areas that are not of much interest or that contribute little to the application at hand. It is therefore desirable to simplify such models non-uniformly. Features such as very high curvature areas or borders can be detected automatically and simplified differently from other areas without any interaction or visualization. A problem arises, however, when one wishes to manually select features of interest in the original model to preserve, and to create stand-alone, non-uniformly reduced versions of large models, for example for medical simulation.
To inspect and view such models, the memory requirements of LoD representations can be prohibitive and prevent storage of a model in main memory. Furthermore, although asynchronous rendering of a base simplified model ensures a frame rate tolerable to the user whilst detail is paged, no guarantee can be made that what the user is selecting is at the original resolution of the model, or at an appropriate LoD, owing to disk lag or the complexity of a particular view selected by the user. This thesis presents an interactive method, in the context of a 3D editing application, for feature selection from any model that fits in main memory. We present a new compression/decompression technique for triangle normals and colours which does not require dedicated hardware, allows for 87.4% memory reduction with at most 1.3/2.5 degrees of error on triangle normals, and allows larger models to fit in main memory and be viewed interactively. To address scale and available hardware resources, we reference a hierarchy of volumes of different sizes. The distances of the volumes at each level of the hierarchy to the intersection point of the line of sight with the model are calculated and sorted. At startup, an appropriate level of the tree is automatically chosen by separating the time required for rendering from that required for sorting, and constraining the latter according to the resources available. A clustered navigation skin and depth-buffer strategy allows for the interactive visualisation of models of any size, ensuring that triangles from the closest volumes are rendered over the navigation skin even when the clustered skin may be closer to the viewer than the original model. We show results with scanned models, CAD, textured models and an isosurface.
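The abstract does not detail the normal-compression scheme itself; the following is a purely illustrative sketch of the underlying idea (quantizing unit normals into a few bits and accepting a bounded angular error), not the thesis's actual method:

```python
import math

def encode_normal(n, bits=8):
    """Quantize a unit normal to two 'bits'-bit spherical angles.

    Illustrative only: two bytes here versus twelve bytes for three
    32-bit floats, at the cost of a small angular error.
    """
    x, y, z = n
    theta = math.acos(max(-1.0, min(1.0, z)))   # polar angle in [0, pi]
    phi = math.atan2(y, x) % (2 * math.pi)      # azimuth in [0, 2*pi)
    scale = (1 << bits) - 1
    return (round(theta / math.pi * scale),
            round(phi / (2 * math.pi) * scale))

def decode_normal(code, bits=8):
    """Recover an approximate unit normal from the quantized angles."""
    scale = (1 << bits) - 1
    theta = code[0] / scale * math.pi
    phi = code[1] / scale * 2 * math.pi
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def angular_error_deg(a, b):
    """Angle in degrees between two unit vectors."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))
```

Round-tripping a normal through this encoding typically costs well under a degree of error at 8 bits per angle, which is the same memory/accuracy trade the thesis quantifies for its own scheme.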
This thesis addresses numerical issues arising from the optimisation of cost functions in LoD algorithms and presents a semi-automatic solution for selection of the threshold on the condition number of the matrix to be inverted for optimal placement of the new vertex created by an edge collapse. We show that the units in which a model is expressed may inadvertently affect the condition of these matrices, hence affecting the evaluation of different LoD methods with different solvers. We use the same solver with an automatically calibrated threshold to evaluate different uniform geometry reduction techniques. We then present a framework for non-uniform reduction of regular scanned models that can be used in conjunction with a variety of LoD algorithms. The benefits of non-uniform reduction are presented in the context of an animation system.
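The paragraph above describes guarding the linear solve for optimal vertex placement with a threshold on the matrix's condition number. A minimal QEM-style sketch of that guard follows; the quadric formulation and the threshold value are assumptions for illustration, not the thesis's calibrated settings:

```python
import numpy as np

def place_vertex(Q, v1, v2, cond_threshold=1e7):
    """Vertex placement for an edge collapse, quadric-error style.

    Q is the summed 4x4 quadric of the two endpoints.  The 3x3 system
    for the optimal position is solved only when its condition number is
    below `cond_threshold`; otherwise the matrix is treated as
    degenerate and we fall back to the edge midpoint.
    """
    A = Q[:3, :3]
    b = -Q[:3, 3]
    if np.linalg.cond(A) < cond_threshold:
        return np.linalg.solve(A, b)
    return 0.5 * (v1 + v2)   # near-singular quadric: midpoint fallback
```

Note how uniformly rescaling the model's coordinates rescales the entries of Q, which is one way a model's units can leak into the conditioning test, as the thesis observes.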
52

Developing assistive haptic guidelines for improving non-visual access to the web

Kuber, Ravi Anil January 2008 (has links)
Haptic technologies have the potential to help the blind community overcome many of the challenges experienced when accessing the Web. With limited design guidance available to web developers, haptic effects could be selected arbitrarily for use on a web page, with minimal consideration given as to how the sense of touch could assist a blind user. Poor interface design is known to reduce the quality of the subjective browsing experience. In this thesis, research has been conducted with the aim of developing effective spatial and navigational cues to address issues of accessibility on a web interface. Using a structured participatory design approach, force-feedback cues have been developed to represent objects commonly found on a web page (e.g. images and hyperlinks). The application of a modified version of the approach has led to the design of tactile pin-based stimuli, which provide similar levels of structural and navigational support to the force-feedback cues. Findings have informed a library of software, with accompanying guidelines for its application on a web page. These are housed within a haptic framework. This tool provides a vital reference for developers, allowing them to replicate effects on their own sites, and offers them support during both the design and evaluation processes. It is left to the discretion of the developer to include the mappings that are most appropriate to the context of the web-based task, and to ensure that these cues are targeted to the needs of a broad range of blind individuals using a tactile or force-feedback device.
53

The practice of everyday (virtual) life : a participatory and performative artistic enquiry

Gamble, R. January 2016 (has links)
In contemporary culture, human-to-human communication is increasingly mediated through digital screens and virtual communication. Our everyday lives are now lived in and between physical and virtual spaces, in a 'hybrid space' augmented with technologies, in which individuals increasingly perform online as digital versions of themselves: avatars. As a result, 'everyday life' has become 'everyday virtual life', in which new communication practices and social behaviours emerge. This research is a critique of everyday (virtual) life. As with Michel de Certeau's analysis of the practice of everyday life in the 1980s, in which the day-to-day practices of human behaviour were critiqued, the increased familiarity of 'everyday virtual life' necessitates new critical questioning: How do we live online? What are the common virtual communication practices? And how can this emergent 'hybrid space' be critically questioned through a participatory performance enquiry? This is an embodied practice, in which the contributions to knowledge are gained through the action and reflection of participatory performance, each raising new critical questions and an embodied understanding of the critique of everyday (virtual) life: specifically, the communication practices and human behaviours present in the digital, which are brought to the foreground through their re-framing and re-performance in a physical space. The research is presented as a textual-visual thesis and online platform, which together reveal the context, methodology, documentation and critical analysis of a body of practice-led research carried out by the author. The reader is invited to view both alongside each other: www.thepracticeofeverydayvirtuallife.com.
54

Effective delivery of believable behaviour for embodied conversational agents

Kamyab Tehrani, Kaveh Richard January 2006 (has links)
No description available.
55

Phenomenal regression as a potential metric of veridical perception in virtual environments

Elner, Kevin William January 2015 (has links)
It is known that limitations of the visual presentation and sense of presence in a virtual environment (VE) can result in deficits of spatial perception such as the documented depth compression phenomena. Investigating size and distance percepts in a VE is an active area of research, where different groups have measured the deficit by employing skill-based tasks such as walking, throwing or simply judging sizes and distances. A psychological trait called phenomenal regression (PR), first identified in the 1930s by Thouless, offers a measure that does not rely on either judgement or skill. PR describes a systematic error made by subjects when asked to match the perspective projections of two stimuli displayed at different distances. Thouless’ work found that this error is not mediated by a subject’s prior knowledge of its existence, nor can it be consciously manipulated, since it measures an individual’s innate reaction to visual stimuli. Furthermore he demonstrated that, in the real world, PR is affected by the depth cues available for viewing a scene. When applied in a VE, PR therefore potentially offers a direct measure of perceptual veracity that is independent of participants’ skill in judging size or distance. Experimental work has been conducted and a statistically significant correlation of individuals’ measured PR values (their ‘Thouless ratio’, or TR) between virtual and physical stimuli was found. A further experiment manipulated focal depth to mitigate the mismatch that occurs between accommodation and vergence cues in a VE. The resulting statistically significant effect on TR demonstrates that it is sensitive to changes in viewing conditions in a VE. Both experiments demonstrate key properties of PR that contribute to establishing it as a robust indicator of VE quality. The first property is that TR exhibits temporal stability during the period of testing and the second is that it differs between individuals. 
This is advantageous as it yields empirical values that can be investigated using regression analysis. This work contributes to VE domains in which it is desirable to replicate an accurate perception of space, such as training and telepresence, where PR would be a useful tool for comparing subjective experience between a VE and the real world, or between different VEs.
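Thouless's measure is commonly expressed as a log-ratio of the matched size against the perspective and constancy predictions; a small sketch using that usual textbook formulation (check the thesis for the exact variant of the Thouless ratio it adopts):

```python
import math

def thouless_ratio(matched, perspective, real):
    """Thouless ratio TR = (log S - log P) / (log R - log P).

    S: the stimulus size the subject selects as a match,
    P: the size predicted by pure perspective projection,
    R: the size predicted by full size constancy.
    TR = 0 indicates pure perspective matching; TR = 1 indicates full
    constancy; intermediate values quantify phenomenal regression.
    """
    return ((math.log(matched) - math.log(perspective)) /
            (math.log(real) - math.log(perspective)))
```

Because TR is a ratio of log differences, it is unaffected by the units the sizes are measured in, which is one reason it yields comparable empirical values across participants and viewing conditions.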
56

Perception of emotional body language displayed by animated characters

Beck, Aryel January 2011 (has links)
Virtual Environments have demonstrated effectiveness for social task training, such as medical training (Anolli, Vescovo, Agliati, Mantovani, & Zurloni, 2006). These types of Virtual Environments have used emotional animated characters. Even though emotions have a strong influence on human-human interactions (Gratch, Mao, & Marsella, 2006), typical system evaluation does not assess whether human and animated emotional displays are perceived similarly by observers. Moreover, the Uncanny Valley, a drop in believability as characters become more realistic, threatens the assumption that emotions displayed by an animated character and by a human would be interpreted similarly. Thus, it is not known how appropriately a realistic emotional animated character is perceived. This issue is especially important for social task training, which requires animated characters to be perceived as social and emotional partners so that trainees are confronted with situations comparable to real-life ones. Using an approach similar to the one proposed by Nass & Moon (2000) in their work on the Media Equation, this thesis investigates how emotional body language displayed by animated characters is interpreted. A psychological experiment was conducted to investigate whether emotional body language would be an appropriate way for animated characters to display emotion. This was done by comparing the interpretation of emotional body language displayed by animated characters with that displayed by real actors. The results showed that animated body language can be accurately interpreted. However, the videos of the actor were found to be more emotional, more believable and more natural than the animated characters, whilst displaying the same emotional body language. Moreover, there was a significant difference in the number of correctly interpreted negative emotions displayed, although there was no such difference for positive emotions.
This could be due to the physical appearance of the animated character or to the loss of micro-gestures inherent to Motion Capture technology. Thus, a second comparative study was conducted to investigate the potential causes for this drop in believability and recognition. It investigated the effect of changing the level of physical realism of the animation as well as deteriorating the quality of the emotional body language itself. Whilst no effect was found regarding the deterioration of the emotional body language, the results show that the videos of the Actor were found to be more emotional, more believable and more natural than the two animated characters. These findings have strong implications for the use of Virtual Environments for social task training.
57

Biometric identification using user interaction with virtual worlds

Al-Khazzar, Ahmed M. A. January 2012 (has links)
A virtual world is an interactive 3D virtual environment that visually resembles complex physical spaces and provides an online community through which users can connect, shop, work, learn, establish emotional relations, and explore different virtual environments. The use of virtual worlds is becoming popular in many fields such as education, economy, space, and games. With the widespread use of virtual worlds, establishing the security of these systems becomes more important. To date, there is no mechanism to identify users of virtual worlds based on their interactions with the virtual world. Current virtual worlds use knowledge-based authentication mechanisms such as passwords to authenticate users. However, these are not capable of distinguishing between genuine users and impostors who possess the knowledge needed to gain access to the virtual world. The aim of the research reported in this thesis is to develop a behavioural biometric system to identify the users of a virtual world based on their behaviour inside these environments. In this thesis, three unique virtual worlds are designed and implemented, with different 3D environments and avatars, simulating the different environments of virtual worlds. Two experiments are conducted to collect data from user interactions with the virtual worlds. In the first experiment 53 users participated, and in the second experiment, a year later, 66 different users participated. This research also studies the parameters of user behaviour inside virtual worlds and presents novel feature extraction methods to extract four main biometric features from the collected data, namely: action, time, speed, and entropy. A sample classification methodology is formulated. Using distance measure algorithms and based on the collected data, users are identified inside the virtual worlds.
This thesis also studies the application of biometric fusion to enhance the performance of the behavioural biometric system. The average equal error rates (EERs) achieved in this research were between 26% and 33%, depending on the virtual world environment and the freedom of movement inside the virtual world. It was found that avatar actions inside virtual worlds carry more identifying attributes than parameters such as the avatar's position in the virtual world. It was also found that virtual worlds with very open environments with respect to avatar movement showed higher EERs when using the biometric system implemented in this research.
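The equal error rate quoted above can be estimated by sweeping an acceptance threshold over genuine and impostor distance scores; a simple sketch of that standard evaluation (the thesis's exact protocol may differ):

```python
def equal_error_rate(genuine, impostor):
    """Estimate the equal error rate (EER) from distance scores.

    `genuine` holds distances for same-user comparisons and `impostor`
    for cross-user ones (smaller distance = better match).  We sweep an
    acceptance threshold over the observed scores and return the rate at
    the point where false rejection and false acceptance are closest.
    """
    best_gap, best_eer = float("inf"), 1.0
    for t in sorted(set(genuine) | set(impostor)):
        frr = sum(g > t for g in genuine) / len(genuine)     # genuines rejected
        far = sum(i <= t for i in impostor) / len(impostor)  # impostors accepted
        if abs(frr - far) < best_gap:
            best_gap, best_eer = abs(frr - far), (frr + far) / 2
    return best_eer
```

Perfectly separated score distributions give an EER of 0; heavily overlapping ones, as in the open-movement virtual worlds described above, push it toward the 26-33% range reported.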
58

Supporting collocated and at-a-distance experiences with TV and VR displays

McGill, Mark January 2016 (has links)
Televisions (TVs) and VR Head-Mounted Displays (VR HMDs) are used in shared and social spaces in the home. This thesis posits that these displays do not sufficiently reflect the collocated, social contexts in which they reside, nor do they sufficiently support shared experiences at-a-distance. This thesis explores how the role of TVs and VR HMDs can go beyond presenting a single entertainment experience, instead supporting social and shared use in both collocated and at-a-distance contexts. For collocated TV, this thesis demonstrates that the TV can be augmented to facilitate multi-user interaction, support shared and independent activities and multi-user use through multi-view display technology, and provide awareness of the multi-screen activity of those in the room, allowing the TV to reflect the social context in which it resides. For at-a-distance TV, existing smart TVs are shown to be capable of supporting synchronous at-a-distance activity, broadening the scope of media consumption beyond the four walls of the home. For VR HMDs, collocated proximate persons can be seamlessly brought into mixed reality VR experiences based on engagement, improving VR HMD usability. Applied to at-a-distance interactions, these shared mixed reality VR experiences can enable more immersive social experiences that approximate viewing together as if in person, compared to at-a-distance TV. Through an examination of TVs and VR HMDs, this thesis demonstrates that consumer display technology can better support users to interact, and share experiences and activities, with those they are close to.
59

Virtual reality for fixture design and assembly

Li, Qiang January 2009 (has links)
In today's increasingly competitive environment, manufacturing companies have to develop and employ new emerging technologies to increase productivity, reduce production costs, improve product quality, and shorten lead time. The domain of Virtual Reality (VR) has gained great attention during the past few years and is currently being explored for practical uses in various industrial areas, e.g. CAD, CAM, CAE, CIM, CAPP and computer simulation. Owing to the trend towards reducing lead time and the human effort devoted to fixture planning, the computerization of fixture design is required. Consequently, computer aided fixture design (CAFD) has come to play an important role in computer aided design/manufacture (CAD/CAM) integration. However, there is very little ongoing research specifically focused on using VR technology as a promising solution to enhance the capability and functionality of CAFD systems. This thesis reviews the possibility of using interactive VR technology to support the conventional fixture design and assembly process, and identifies and investigates the potential of VR to support the optimization of fixture design and assembly in a Virtual Environment (VE). The primary objective was to develop an interactive VR system, entitled the Virtual Reality Fixture Design & Assembly System (VFDAS), which allows fixture designers to complete the entire design process for modular fixtures within the VE, for instance: fixture element selection, fixture layout design, assembly, analysis and so on. The main advantage of VFDAS is its capability of simulating the various physical behaviours of virtual fixture elements according to Newtonian physical laws, which are taken into account throughout the fixture design and evaluation process: for example gravity, friction, collision detection, mass, applied force, reaction force and elasticity.
Almost the whole fixture design and assembly process is achieved as if in the real physical world, and this shows promise for computer aided fixture design (CAFD) in the future. The VFDAS system was validated in terms of collision detection, rendering speed, friction, mass, gravity, applied force, elasticity and toppling. These simulation results are presented and quantified by a series of simple examples to show what the system can achieve and what its limitations are. The research concluded that VR is a useful technology and that VFDAS has the potential to support education and practical application in fixture design. There is scope for further development to add more useful functionality to the VFDAS system.
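The Newtonian behaviours listed (gravity, friction, collision) can be illustrated with a toy point-mass integrator; this is only a sketch of the kind of simulation step involved, not VFDAS's implementation, which resolves these effects on full fixture geometry:

```python
def step(pos, vel, dt=0.01, g=9.81, mu=0.3, restitution=0.0):
    """One semi-implicit Euler step for a point mass above the plane y = 0,
    with gravity, an inelastic ground contact and Coulomb-style friction."""
    x, y, z = pos
    vx, vy, vz = vel
    vy -= g * dt                                  # gravity
    x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
    if y < 0.0:                                   # ground collision
        y = 0.0
        vy = -restitution * vy                    # absorb/reflect normal velocity
        # Coulomb friction decelerates tangential motion while in contact
        speed = (vx * vx + vz * vz) ** 0.5
        if speed > 0.0:
            drop = min(speed, mu * g * dt)
            vx -= drop * vx / speed
            vz -= drop * vz / speed
    return (x, y, z), (vx, vy, vz)
```

Dropping a mass onto the plane brings it to rest, and a mass sliding along the plane is braked to a halt by friction, mirroring two of the behaviours the thesis validates.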
60

Unwritten procedural modeling with the straight skeleton

Kelly, Tom January 2014 (has links)
Creating virtual models of urban environments is essential to a disparate range of applications, from geographic information systems to video games. However, the large scale of these environments ensures that manual modeling is an expensive option. Procedural modeling is an automatic alternative that is able to create large cityscapes rapidly, by specifying algorithms that generate streets and buildings. Existing procedural modeling systems rely heavily on programming or scripting, skills which many potential users do not possess. We therefore introduce novel user-interface and geometric approaches, particularly generalisations of the straight skeleton, to allow urban procedural modeling without programming. We develop the theory behind the types of degeneracy in the straight skeleton, and introduce a new geometric building block, the mixed weighted straight skeleton. In addition, we introduce a simplification of the skeleton event, the generalised intersection event. We demonstrate that these skeletons can be applied to two urban procedural modeling systems that do not require the user to write programs. The first application of the skeleton is to the subdivision of city blocks into parcels. We demonstrate how the skeleton can be used to create highly realistic city block subdivisions. The results are shown to be realistic by several measures when compared against the ground truth over several large data sets. The second application of the skeleton is the generation of buildings' mass models. Inspired by architects' use of plan and elevation drawings, we introduce a system that takes a floor plan and a set of elevations and extrudes a solid architectural model. We evaluate the interactive and procedural elements of the user interface separately, finding that the system is able to procedurally generate large urban landscapes robustly, as well as model a wide variety of detailed structures.
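The straight skeleton underlying both applications is traced by shrinking the input polygon inward at uniform speed. One shrink step for a convex polygon can be sketched as follows; a full implementation must also handle edge and split events (and per-edge speeds for the weighted skeleton), which this sketch omits:

```python
def shrink_step(poly, d):
    """Offset a convex, counter-clockwise polygon inward by distance d.

    Each edge is moved inward along its normal and consecutive offset
    edges are intersected, which is the basic wavefront move behind the
    straight skeleton.
    """
    n = len(poly)
    lines = []
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        length = (ex * ex + ey * ey) ** 0.5
        nx, ny = -ey / length, ex / length    # inward normal for a CCW polygon
        # offset edge: passes through the shifted point, same direction
        lines.append(((x1 + d * nx, y1 + d * ny), (ex, ey)))
    out = []
    for i in range(n):
        (px, py), (dx, dy) = lines[i - 1]     # offset of the edge ending at vertex i
        (qx, qy), (ex, ey) = lines[i]         # offset of the edge starting at vertex i
        det = dx * ey - dy * ex
        t = ((qx - px) * ey - (qy - py) * ex) / det
        out.append((px + t * dx, py + t * dy))
    return out
```

Tracking each vertex's path across successive shrink steps traces out the skeleton arcs; the events this sketch ignores occur exactly where those paths collide.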
