161

Improved control of photovoltaic interfaces

Xiao, Weidong 11 1900 (has links)
Photovoltaic (solar electric) technology has shown significant potential as a source of practical and sustainable energy; this study focuses on increasing the performance of photovoltaic systems through improved control and power interfaces. The main objective is to find an effective control algorithm and topology that are optimally suited to extracting the maximum power possible from photovoltaic modules. The thesis consists of the following primary subjects: photovoltaic modelling, the topological study of photovoltaic interfaces, the regulation of photovoltaic voltage, and maximum power tracking. In photovoltaic power systems both photovoltaic modules and switching-mode converters present non-linear and time-variant characteristics, resulting in a difficult control problem. This study applies in-depth modelling and analysis to quantify these inherent characteristics, specifically using successive linearization to create a simplified linear problem. Additionally, Youla parameterisation is employed to design a stable control system for regulating the photovoltaic voltage. Finally, the thesis focuses on two critical aspects to improve the performance of maximum power point tracking. One improvement is to accurately locate the position of the maximum power point by using centred differentiation. The second is to reduce the oscillation around the steady-state maximum power point by controlling active perturbations. Adopting the method of steepest descent for maximum power point tracking, which delivers faster dynamic response and a smoother steady state than the hill-climbing method, enables these improvements. Comprehensive experimental evaluations illustrate the effectiveness of the proposed algorithms, showing that the proposed control algorithm harvests about 1% more energy than the traditional method on the same evaluation platform and under the same weather conditions, without increasing hardware complexity.
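The two tracking refinements described in this abstract can be sketched together: centred differentiation estimates dP/dV from symmetric voltage perturbations, and a steepest-descent (here, ascent) update scales the voltage step with that gradient, so the active perturbation shrinks as the maximum power point is approached. A minimal sketch, assuming an illustrative PV power curve and step sizes — not the thesis's actual implementation:

```python
def mppt_step(v, power_at, dv=0.2, alpha=0.1):
    """One steepest-descent MPPT update (illustrative sketch, not the thesis's code)."""
    # centred differentiation: a symmetric perturbation gives an O(dv^2)
    # estimate of dP/dV, locating the maximum power point more accurately
    # than the one-sided difference used in hill climbing
    dp_dv = (power_at(v + dv) - power_at(v - dv)) / (2 * dv)
    # steepest-descent step: proportional to the gradient, so oscillation
    # around the steady-state maximum power point shrinks as dP/dV -> 0
    return v + alpha * dp_dv

# toy PV power curve with its maximum at 17 V (assumed shape for illustration)
power_at = lambda v: 60.0 - (v - 17.0) ** 2

v = 12.0
for _ in range(40):
    v = mppt_step(v, power_at)
# v converges towards the 17 V maximum power point
```

Because the step is gradient-proportional rather than fixed-size, the operating voltage settles smoothly instead of oscillating around the maximum, which matches the faster dynamic response and smoother steady state the abstract attributes to steepest descent.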
162

Djasa : a language, environment and methodology for interaction design

Sadun, Erica 05 1900 (has links)
No description available.
163

The use of ambient audio to increase safety and immersion in location-based games

Kurczak, John Jason 01 February 2012 (has links)
The purpose of this thesis is to propose an alternative type of interface for mobile software being used while walking or running. Our work addresses the problem of visual user interfaces for mobile software being potentially unsafe for pedestrians, and not being very immersive when used for location-based games. In addition, location-based games and applications can be difficult to develop when directly interfacing with the sensors used to track the user’s location. These problems need to be addressed because portable computing devices are becoming a popular tool for navigation, playing games, and accessing the internet while walking. This poses a safety problem for mobile users, who may be paying too much attention to their device to notice and react to hazards in their environment. The difficulty of developing location-based games and other location-aware applications may significantly hinder the prevalence of applications that explore new interaction techniques for ubiquitous computing. We created the TREC toolkit to address the issues with tracking sensors while developing location-based games and applications. We have developed functional location-based applications with TREC to demonstrate the amount of work that can be saved by using this toolkit. In order to have a safer and more immersive alternative to visual interfaces, we have developed ambient audio interfaces for use with mobile applications. Ambient audio uses continuous streams of sound over headphones to present information to mobile users without distracting them from walking safely. In order to test the effectiveness of ambient audio, we ran a study to compare ambient audio with handheld visual interfaces in a location-based game. We compared players’ ability to safely navigate the environment, their sense of immersion in the game, and their performance at the in-game tasks.
We found that ambient audio was able to significantly increase players’ safety and sense of immersion compared to a visual interface, while players performed significantly better at the game tasks when using the visual interface. This makes ambient audio a legitimate alternative to visual interfaces for mobile users when safety and immersion are a priority. / Thesis (Master, Computing) -- Queen's University, 2012-01-31 23:35:28.946
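One way to realise the ambient audio cue described above is to map a target's bearing to stereo pan and its distance to gain, so a continuous stream conveys direction and proximity without demanding visual attention. A minimal sketch, assuming a simple 2D coordinate convention and linear distance fade — the function name and parameters are illustrative, not TREC's API:

```python
import math

def ambient_cue(player_xy, target_xy, max_range=100.0):
    """Map a target's position to (pan, gain) for a continuous ambient stream.

    Illustrative assumptions: -1.0 pan is hard left, +1.0 is hard right, and
    gain fades linearly to silence at max_range metres.
    """
    dx = target_xy[0] - player_xy[0]
    dy = target_xy[1] - player_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dx, dy)                  # 0 rad = straight ahead (+y)
    pan = math.sin(bearing)                       # lateral component drives panning
    gain = max(0.0, 1.0 - distance / max_range)   # louder as the target nears
    return pan, gain
```

For example, a target 50 m straight ahead yields a centred pan at half gain, while a target 30 m to the right pans fully right at higher gain; fed to a stereo mixer each frame, this produces the kind of continuous, glanceless cue the study compared against handheld visual interfaces.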
164

Speech Perception in Virtual Environments

Verwey, Johan 01 January 2006 (has links)
Many virtual environments like interactive computer games, educational software or training simulations make use of speech to convey important information to the user. These applications typically present a combination of background music, sound effects, ambient sounds and dialog simultaneously to create a rich auditory environment. Since interactive virtual environments allow users to roam freely among different sound producing objects, sound designers do not always have exact control over what sounds a user will perceive at any given time. This dissertation investigates factors that influence the perception of speech in virtual environments under adverse listening conditions. A virtual environment was created to study hearing performance under different audio-visual conditions. The two main areas of investigation were the contribution of "spatial unmasking" and lip animation to speech perception. Spatial unmasking refers to the hearing benefit achieved when the target sound and masking sound are presented from different locations. Both auditory and visual factors influencing speech perception were considered. The capability of modern sound hardware to produce a spatial release from masking using real-time 3D sound spatialization was compared with the pre-computed method of creating spatialized sound. It was found that spatial unmasking could be achieved when using a modern consumer 3D sound card and either headphones or a surround-sound speaker display. Surprisingly, masking was less effective when using real-time sound spatialization, and subjects achieved better hearing performance than when the pre-computed method was used. Most research on the spatial unmasking of speech has been conducted in pure auditory environments. The influence of an additional visual cue was first investigated to determine whether this provided any benefit. No difference in hearing performance was observed when visible objects were presented at the same location as the auditory stimuli.
Because of inherent limitations of display devices, the auditory and visual environments are often not perfectly aligned, causing a sound-producing object to be seen at a different location from where it is heard. The influence of audio-visual integration between the conflicting spatial information was investigated to see whether it had any influence on the spatial unmasking of speech in noise. No significant difference in speech perception was found regardless of whether visual stimuli were presented at the correct location matching the auditory position or at a spatially disparate location from the auditory source. Lastly, the influence of rudimentary lip animation on speech perception was investigated. The results showed that correct lip animations significantly contribute to speech perception. It was also found that incorrect lip animation could result in worse performance than when no lip animation is used at all. The main conclusions from this research are: that the 3D sound capabilities of modern sound hardware can and should be used in virtual environments to present speech; that perfectly aligned auditory and visual environments are not essential for speech perception; and that even rudimentary lip animation can enhance speech perception in virtual environments.
165

Mobile Media Distribution in Developing Contexts

Smith, Graeme 01 January 2011 (has links)
There are a growing number of mobile phones being used in developing contexts, such as Africa. A large percentage of these phones can take photographs and transmit them freely using Bluetooth. Public displays are becoming more common as a way to provide people with media on their mobile phones. Three problems with current public displays – cost, security and mobility – are discussed, and a system is proposed that uses a mobile phone as a server. Media is displayed on specially designed paper posters, which users can photograph using their mobile phones. The resulting photographs are sent, via Bluetooth, to the server, which analyses them to locate a specially designed barcode representing the media; the barcode is then decoded and the requisite media returned to the user. Two barcoding systems are tested in laboratory conditions, and a binary system is found to perform best. The system is then deployed on a campus transportation system to test the effects of motion. The results show that the system is not yet ready for deployment on moving transport.
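The binary decoding step in a pipeline like this can be sketched as thresholding a scanline of pixel intensities and sampling the centre of each equal-width module. This is a sketch under stated assumptions — equal-width modules and a dark-means-one convention — and does not reproduce the thesis's actual barcode format:

```python
def decode_binary_row(intensities, n_bits):
    """Decode one scanline of a hypothetical binary barcode (illustrative sketch).

    Assumptions: the row spans exactly n_bits equal-width modules, and a
    dark module (intensity below the row's mean) encodes a 1 bit.
    """
    threshold = sum(intensities) / len(intensities)  # adaptive per-row threshold
    module_width = len(intensities) / n_bits
    bits = []
    for i in range(n_bits):
        # sample the centre of each module, where the photograph is least
        # affected by blur at module boundaries
        centre = int((i + 0.5) * module_width)
        bits.append(1 if intensities[centre] < threshold else 0)
    return bits
```

Using the row mean as an adaptive threshold, rather than a fixed cutoff, helps tolerate the uneven lighting typical of phone photographs of paper posters; motion blur, however, smears module boundaries, which is consistent with the degraded results reported on moving transport.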
166

Gesture Based Interface for Asynchronous Video Communication for Deaf People in South Africa

Ramuhaheli, Tshifhiwa 01 January 2011 (has links)
The preferred method of communication amongst Deaf people is sign language. There are problems with video quality when using the real-time video communication available on mobile phones. The alternative is text-based communication on mobile phones; however, findings from other research studies show that Deaf people prefer using sign language to communicate with each other rather than text. This dissertation looks at implementing a gesture-based interface for asynchronous video communication for Deaf people. The gesture interface was implemented on a store-and-forward video architecture, since this preserves video quality even when bandwidth is low. Three gesture-based video communication prototypes were designed and implemented using a user-centred design approach, on both computers and mobile devices. The first prototype was computer-based, and its evaluation showed that the gesture-based interface improved the usability of sign language video communication. The second prototype was set up on mobile devices and tested on several of them, but device limitations made it impossible to support all the features needed for the video communication. The different problems experienced on dissimilar devices made implementing the prototypes on the mobile platform challenging, and the prototype was revised several times before it was tested on a different mobile phone. The final prototype used both a mobile phone and a computer. The computer served to simulate a mobile device with greater processing power, standing in for a more powerful future mobile device capable of running the gesture-based interface. The computer was used for video processing, but to the user it was as if the whole system was running on the mobile phone.
The evaluation process was conducted with ten Deaf users in order to determine the efficiency and usability of the prototype. The results showed that the majority of the users were satisfied with the quality of the video communication. The evaluation also revealed usability problems, but the benefits of communicating in sign language outweighed the usability difficulties. Furthermore, the users were more interested in the video communication on the mobile devices than on the computer, as the mobile phone was a much more familiar technology and offered the convenience of mobility.
167

Designing Digital Storytelling for Rural African Communities

Reitmaier, Thomas 01 August 2011 (has links)
Chongilala – a long time ago – says Mama Rhoda of Adiedo, Kenya. She looks deeply into our eyes. We record her rhythms and rhymes as she sings and tells a story about her grandparents. She shows us the exact spot where her great-grandfather and his friends used to sit and drink and how her grandmother used to dance. This thesis situates digital storytelling in rural African communities to enable rural people, like Mama Rhoda, to record and share their stories and to express their imaginations digitally. We explore the role of design, and the methods and perspectives designers need to take on to design across cultures and to understand the forms and meanings behind rural African interpretations of digital storytelling. These perspectives allow us to 'unconceal' how our Western storytelling traditions have influenced design methods and obscure the voices of ‘other’ cultures. By integrating ethnographic insights with previous experiences of designing mobile digital storytelling systems, we implement a method using cell-phones to localize storytelling and involve rural users in design activities – probing ways to incorporate visual and audio media in storytelling. Products from this method help us to generate design ideas for our system, most notably flexibility. Leveraging this prototype as a probe and observing villagers using it in two villages in South Africa and Kenya, we report on situated use of our prototype and discuss, and relate to usage, the insights we gathered on our prototype, the users, their needs, and their context. We use these insights to uncover further implications for situating digital storytelling within those communities and reflect on the importance of spending time in-situ when designing across cultures. Deploying our prototype through an NGO, we stage first encounters with digital storytelling and show how key insiders can introduce the system to a wider community and make it accessible through their technical and social expertise.
Our mobile digital storytelling system proved to be both usable and useful, and its flexibility allowed users to form their own interpretations of digital storytelling and (re)appropriate our system to alternative ends. Results indicate that our system accommodates context and that storytelling activities around our system reflect identity. Our activities in communities across Africa also show that our system can be used as a digital voice that speaks to us, by allowing users to express themselves – through digital stories – in design.
168

Effects of an immediate feedback tool on designer productivity and design usability

Schneier, Carol A. January 1987 (has links)
No description available.
169

Model-based user interface design by demonstration and by interview

Frank, Martin Robert 12 1900 (has links)
No description available.
170

An investigation of the nature of the interaction of dicyandiamide cured epoxy resin adhesives with aluminium substrates, using model compounds

Robb, David Andrew January 2001 (has links)
No description available.
