1

Design for usability of interactive multimedia services to the home

McKay, Iain G. January 1999 (has links)
This thesis investigates the design of attractive and memorable on-screen artefacts for use within interactive multimedia services to the home and assesses the contribution that such artefacts and other factors make to overall service usability. A suite of demographic and socio-technical factors is used to perform statistical analysis and characterise the key usability issues from a user perspective after the participant cohort has been separated according to such parameters. Experimental results from research carried out in the UK and USA using internet and interactive television systems quantify user attitude to the usability of such services, measuring both explicit response and implicit choice. Systems under test include prototypes commissioned for the experiments, which allow comparisons with existing systems. Both traditional paper-based and novel screen-based questionnaires capture user attitude; the latter afford a more reliable data source for statistical analysis and experimentally secure presentation to participants. The relative impact of on-screen artefacts is investigated with regard to the relative salience and attraction between icons competing for user attention and retention in memory. Various experiments construct a 'league table of salience' for the 3D icon designs under test, covering effects with low-level psychological attractors and others with higher-level emotive effects applied in order to grab user attention. Having identified which 3D icons attract the attention of the casual browser, it is suggested how this 'power' of attraction may be used by on-line retailers to draw user attention towards products and services according to the vendor's priorities. To this end, a virtual 'shopping mall' and 'video store' are used to investigate the impact of such effects within a retail environment.
Because the services are internet-delivered, one of the key factors in users' perceived usability is system latency, a key component of which is the bandwidth available between client and server. Experiments are built to explore the relationship between available bandwidth and overall attitude for different experimental scenarios: one requiring a relatively large initial download and another using streamed digital video where the video quality (frame rate) depends on the underlying network characteristics.
2

A scenario based approach to speech-enabled computer assisted language learning based on automated speech recognition and virtual reality graphics

Morton, Hazel January 2007 (has links)
By using speech recognition technology, Computer Assisted Language Learning (CALL) programs can provide learners with opportunities to practise speaking in the target language and develop their oral language skills. This research is a contribution to the emerging and innovative area of speech-enabled CALL applications. It describes a CALL application, SPELL (Spoken Electronic Language Learning), which integrates software for speaker-independent continuous speech recognition with embodied virtual agents and virtual worlds to create an immersive environment in which learners can converse in the target language in contextualized scenarios. The design of the program is based on a communicative approach to second language acquisition, which posits that learning activities should give learners opportunities to communicate in the target language in meaningful contexts. In applying a communicative approach to the design of a CALL program, the speech recogniser is programmed to allow a variety of responses from the learner and to recognise grammatical and ungrammatical utterances so that the learner can receive relevant and immediate feedback on their utterance. Feedback takes two key forms: <i>reformulations</i>, where the system repeats or reformulates the agent's initial speech, and <i>recasts</i>, where the system repeats the learner's utterance, implicitly correcting any errors. This research claims that speech-enabled CALL systems which employ an open-ended approach to the recognition grammars and which adopt a communicative approach are usable, engaging and motivating conversational tools for language learners. In addition, by employing implicit feedback strategies in the design, speech recognition errors can be mitigated such that interactions between learners and embodied virtual agents can proceed while providing learners with valuable target language input during the interactions.
These claims are based on a series of three empirical studies conducted with end users of the system.
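The recast strategy described in this abstract can be illustrated with a minimal sketch. This is not the SPELL implementation: the grammar entries, utterances, and function names are hypothetical, standing in for a recognition grammar that accepts both grammatical and ungrammatical variants and replies with an implicit correction.

```python
# Hypothetical mini recognition grammar: each accepted utterance
# (grammatical or not) is paired with its corrected form, or None
# if no correction is needed.
GRAMMAR = {
    "je vais au cinéma": None,               # grammatical: no correction
    "je va au cinéma": "je vais au cinéma",  # ungrammatical variant
    "je aller au cinéma": "je vais au cinéma",
}

def recast(utterance):
    """Implicit corrective feedback in the style of a recast:
    repeat the learner's utterance with any error corrected."""
    if utterance not in GRAMMAR:
        return None  # out of grammar: the agent would ask the learner to repeat
    fix = GRAMMAR[utterance]
    return f"Ah, {fix}!" if fix else f"Ah, {utterance}!"

print(recast("je va au cinéma"))  # → "Ah, je vais au cinéma!"
```

Because the correction is embedded in a natural conversational turn rather than flagged as an error, the dialogue can continue even when recognition is imperfect, which is the mitigation strategy the abstract describes.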
3

The use of IT in enhancing the literacy and communication skills of deaf Chinese school pupils

Clubb, Orville Leverne January 2002 (has links)
No description available.
4

Visual words for automatic lip-reading

Hassanat, Ahmad Basheer January 2009 (has links)
No description available.
5

Nonlinear modelling of drum sounds

Hovell, Simon A. January 1994 (has links)
The aim of this work was to design a model of a simple drum that could reproduce all the nuances found in a real drum effectively and convincingly. In the past, this approach had often failed due to an inability to regenerate the very beginning of the sound - known as the percussive attack - successfully, possibly because of nonlinear information present in this part of the sound. One tool for detecting the presence of such nonlinear information is higher order spectral analysis. Detection of phase coupling between signals is one of the principal features of higher order signal analysis. It is shown that the presence of such phase coupling is a measure of dependence between different signal components. Furthermore, it is shown that examination of the power bispectrum can be used to detect the presence of nonlinear interactions between signals. Examination of the bispectra of a database of acoustic drum recordings gathered under strictly monitored conditions shows the presence of such interactions in the initial percussive attack. In order to exploit this information, it is necessary to use a nonlinear filter structure. Two such structures are examined, the Volterra filter, and the radial basis function network. It is found that the Volterra filter is capable of accurate reproduction of the percussive attack. Both filter structures suffer from a large degree of redundancy, and two techniques for reducing the size of the filters are successfully applied. It is seen that the simple least squares noise thresholding method performs better than the established orthogonal least squares algorithm, although at the cost of significant computational overhead.
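The bispectral test for phase coupling that motivates the nonlinear model can be sketched as follows. This is an illustrative toy, not the thesis's analysis pipeline: a synthetic signal with quadratic phase coupling between two components produces a strong bispectral peak at their frequency pair, while an uncoupled signal with the same power spectrum does not.

```python
import numpy as np

def bispectrum(segments):
    """Estimate |B(f1, f2)| = |E[X(f1) X(f2) conj(X(f1 + f2))]|
    by averaging the triple product over FFTs of the segments."""
    nfft = segments.shape[1]
    k = np.arange(nfft)
    idx = (k[:, None] + k[None, :]) % nfft   # bin index of f1 + f2 (mod nfft)
    acc = np.zeros((nfft, nfft), dtype=complex)
    for seg in segments:
        X = np.fft.fft(seg)
        acc += np.outer(X, X) * np.conj(X[idx])
    return np.abs(acc / len(segments))

rng = np.random.default_rng(0)
nfft, nseg, f1, f2 = 64, 200, 6, 15
t = np.arange(nfft)

def make_segments(coupled):
    segs = []
    for _ in range(nseg):
        p1, p2 = rng.uniform(0, 2 * np.pi, 2)
        # Coupled: the phase at f1 + f2 is the sum of the other two phases,
        # so the triple product has constant phase and survives averaging.
        p3 = p1 + p2 if coupled else rng.uniform(0, 2 * np.pi)
        segs.append(np.cos(2 * np.pi * f1 * t / nfft + p1)
                    + np.cos(2 * np.pi * f2 * t / nfft + p2)
                    + np.cos(2 * np.pi * (f1 + f2) * t / nfft + p3)
                    + 0.1 * rng.standard_normal(nfft))
    return np.array(segs)

peak_coupled = bispectrum(make_segments(True))[f1, f2]
peak_random = bispectrum(make_segments(False))[f1, f2]
print(peak_coupled > 3 * peak_random)  # True: coupling yields a clear peak
```

Note that both signals have identical power spectra; only the bispectrum, which retains phase relations, separates them. This is exactly why second-order analysis alone cannot capture the percussive attack.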
6

Developing new techniques for modelling crowd movement

Thompson, Peter A. January 1994 (has links)
This thesis describes the analysis and development of new systems for modelling the movement of individuals in crowded situations. A literature review of previous research in this field is presented, and is accompanied by an analysis and appraisal of the methods and findings of these studies. Specific areas for potential research are identified and discussed, and the subsequent investigations by the author are described in detail. Although some investigation into the potential use of hydraulic modelling is described, the majority of the research work is concerned with the computer simulation of the escape movement of individuals from a building. The computer program assigns a variety of attributes to each individual in the building population. These attributes include gender, age and body size. Specific algorithms that facilitate the simulation of escape movement include distance mapping, wayfinding, overtaking, route deviation, and adjustments to individual speeds due to the proximity of crowd members. These algorithms contribute to a computer package that displays the building plan and the position and progress of individual building occupants as they walk to the exits. Walking speeds, flow rates and movement parameters are compared to real-life data, and the success of applying the package to real-life problems is discussed. The thesis also describes the collection of new crowd data by the use of image analysis techniques.
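The distance-mapping algorithm mentioned above is, in essence, a flood fill outward from the exits; here is a minimal sketch (the floor plan and exit placement are invented for illustration, not taken from the thesis).

```python
from collections import deque

def distance_map(grid, exits):
    """Breadth-first flood fill from the exit cells: each free cell
    receives the number of steps to its nearest exit ('#' is a wall)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for (r, c) in exits:
        dist[r][c] = 0
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

plan = ["....#....",
        "....#....",
        ".........",
        "....#...."]
d = distance_map(plan, exits=[(2, 8)])  # single exit on the right wall
print(d[0][0])  # → 10: shortest route crosses the gap in the wall
```

An occupant at any cell then walks toward the neighbouring cell with the lowest distance value; wayfinding, overtaking and speed adjustment would be layered on top of this map.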
7

User participation in standardisation processes : impact, problems and benefits

Jakobs, Kai January 1998 (has links)
The thesis first provides an in-depth review of the relevant literature on innovation processes, the social shaping of technology, and on standardisation. In addition, the crucial term 'user' is thoroughly discussed. This review serves as the framework within which subsequent analyses will be placed. Subsequently, a brief account of the methodology applied to compile the primary data is given. As the major part of the survey was done via e-mail, this also includes a discussion of the pros and cons of this medium for doing survey research. Some rather more 'technical' background material is provided in chapter four. The formal processes adopted by the standards setting bodies represented in the study are briefly described, and the functionalities of the two messaging standards looked at (i.e. the ITU-T X.400 and X.500 series of recommendations on e-mail and the directory service, respectively) are outlined. The remaining chapters present an analysis of the compiled data, and offer some conclusions. In particular, I discuss to what extent corporate users' requirements on messaging services are met, and identify the remaining gaps. Different categories of 'strategies' for the introduction of an electronic mail service in an organisational environment are identified; these are reviewed as well. Subsequently, some issues surrounding the standardisation process are addressed. The initial idea of this process is developed into a more realistic model, largely based on comments made by committee members in the survey. User participation in this process is another focus; the associated pros and cons, as perceived by different stakeholders, are presented and discussed. Finally, I attempt to form a coherent picture out of the various topics addressed so far. The existing relations between innovation theory, user requirements, introduction strategies, and standardisation processes are pointed out, and some conclusions that can be drawn from these relations are presented.
8

Fingerprint comparison by template matching

Bruce, William Henry January 1993 (has links)
A technique for fingerprint comparison based on template matching is presented. A digitised greyscale image is initially pre-processed, from which the template is derived. The use of "<I>don't care</I>" states in the template, which inhibit pixel comparisons, prevents environmental variations and noise from adversely affecting correlation results with subsequent images. The novelty of this approach is that although template matching is a mature method of pattern recognition, there are no reported successful attempts that solve the problem of fingerprint comparison using this technique. The fingerprint reference comprises a set of sub-templates in order to overcome localised skin stretching. These are individually correlated with the processed binary image. Significant correlation scores of each of the sub-templates are posted in a voting area. After all the sub-templates have been correlated with the image, this area is then polled for clusters of votes, whose density determines the success of the comparison. It is seen that pattern matching techniques are dependent on the clarity of data they process, and a method for capturing fingerprint images of a consistently high quality is presented. A parallel template matching architecture comprising an array of 32 correlation cells is also presented. The array enables the simultaneous correlation of four sub-templates with eight areas of the image. This architecture makes use of industry standard byte wide random access memories (RAM) for storing the reference templates and the image. The algorithms that comprise the fingerprint comparison system are taken from concepts, through a stage of empirical development and extensive field trials, to an eventual compact and cost effective Very Large Scale Integration (VLSI) Application Specific Integrated Circuit (ASIC) based implementation.
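The use of "don't care" states to inhibit pixel comparisons can be sketched in software as follows. This toy is not the VLSI implementation the abstract describes: the score here is a simple normalised pixel-agreement count, and the image and template are synthetic.

```python
import numpy as np

def correlate_with_dont_care(image, template, mask):
    """Slide a binary template over a binary image; mask == 0 marks
    "don't care" pixels, which are excluded from the comparison."""
    ih, iw = image.shape
    th, tw = template.shape
    care = mask.astype(bool)
    n_care = care.sum()
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            window = image[r:r + th, c:c + tw]
            # Fraction of "care" pixels on which window and template agree.
            scores[r, c] = (window[care] == template[care]).sum() / n_care
    return scores

rng = np.random.default_rng(1)
image = rng.integers(0, 2, (32, 32))
template = image[10:18, 5:13].copy()   # sub-template cut from the image
mask = np.ones_like(template)
mask[0, :] = 0                         # top row treated as unreliable
scores = correlate_with_dont_care(image, template, mask)
peak = np.unravel_index(scores.argmax(), scores.shape)
print(peak)  # → (10, 5): the true location scores a perfect match
```

In the full scheme described above, significant scores from each sub-template would then be posted to a voting area and polled for clusters, so that localised skin stretching shifts individual votes without destroying the overall match.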
9

Network analysis of semantic spaces with application in computer supported collaborative learning

Tzoumakas, Vasileios January 2008 (has links)
No description available.
10

Photorealistic retrieval of occluded facial information using a performance-driven face model

Berisha, F. January 2009 (has links)
Facial occlusions can cause both human observers and computer algorithms to fail in a variety of important tasks such as facial action analysis and expression classification. This is because the missing information is not reconstructed accurately enough for the purpose of the task in hand. Most current computer methods that are used to tackle this problem implement complex three-dimensional polygonal face models that are generally time-consuming to produce and unsuitable for photorealistic reconstruction of missing facial features and behaviour. In this thesis, an image-based approach is adopted to solve the occlusion problem. A dynamic computer model of the face is used to retrieve the occluded facial information from the driver faces. The model consists of a set of orthogonal basis actions obtained by application of principal component analysis (PCA) on image changes and motion fields extracted from a sequence of natural facial motion (Cowe 2003). Examples of occlusion-affected facial behaviour can then be projected onto the model to compute coefficients of the basis actions and thus produce photorealistic performance-driven animations. Visual inspection shows that the PCA face model recovers aspects of expressions in those areas occluded in the driver sequence, but the expression is generally muted. To further investigate this finding, a database of test sequences affected by a considerable set of artificial and natural occlusions is created. A number of suitable metrics are developed to measure the accuracy of the reconstructions. Regions of the face that are most important for performance-driven mimicry and that seem to carry the best information about global facial configurations are revealed using Bubbles, thus in effect identifying facial areas that are most sensitive to occlusions. Recovery of occluded facial information is enhanced by applying an appropriate scaling factor to the respective coefficients of the basis actions obtained by PCA.
This method improves the reconstruction of the facial actions emanating from the occluded areas of the face. However, because PCA produces bases that encode composite, correlated actions, such an enhancement also tends to affect actions in non-occluded areas of the face. To avoid this, more localised controls for facial actions are produced using independent component analysis (ICA). Simple projection of the data onto an ICA model is not viable due to the non-orthogonality of the extracted bases. Thus occlusion-affected mimicry is first generated using the PCA model and then enhanced by accordingly manipulating the independent components that are subsequently extracted from the mimicry. This combination of methods yields significant improvements and results in photorealistic reconstructions of occluded facial actions.
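The core mechanism, projecting data onto an orthogonal PCA basis, computing coefficients, and scaling them before reconstruction, can be sketched on toy data. The face model itself is not reproduced here: the matrix is random low-rank data standing in for vectorised frames, and the `scale` parameter is an illustrative stand-in for the coefficient amplification the abstract describes.

```python
import numpy as np

# Toy stand-in for the face model: rows of X are vectorised "frames"
# lying in a 20-dimensional subspace of a 50-dimensional pixel space.
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 20)) @ rng.standard_normal((20, 50))
mean = X.mean(axis=0)
# PCA via SVD of the centred data; rows of `basis` are orthonormal actions.
U, S, basis = np.linalg.svd(X - mean, full_matrices=False)

def reconstruct(frame, n_components, scale=1.0):
    """Project a frame onto the leading basis actions and rebuild it.
    `scale` boosts the coefficients, analogous to amplifying the muted
    expressions recovered from occluded regions."""
    B = basis[:n_components]
    coeffs = B @ (frame - mean)        # orthogonal projection is just a dot product
    return mean + (scale * coeffs) @ B

frame = X[0]
out = reconstruct(frame, n_components=20)
err = np.linalg.norm(out - frame) / np.linalg.norm(frame)
print(err < 1e-6)  # True: 20 components span the 20-dimensional data
```

The simplicity of the projection step depends on the basis being orthonormal, which also shows why the same trick fails for ICA: its basis vectors are not orthogonal, so a plain dot product no longer yields valid coefficients, motivating the two-stage PCA-then-ICA procedure above.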
