191

Experimental studies of the interaction between people and virtual humans with a focus on social anxiety

Pan, X. January 2010
Psychotherapy has been one of the major applications of Virtual Reality technology; examples include fear of flying, heights, spiders, and post-traumatic stress disorder. Virtual reality has also been shown to be useful in exposure therapy for the treatment of social anxiety, such as fear of public speaking, where clients learn to overcome their anxiety through interactions with virtual characters (avatars). This thesis is concerned with the interaction between human participants and avatars in a Virtual Environment (VE), with the main focus being on social anxiety. Our hypothesis is that interactions between people and avatars can evoke in people behaviours that correspond to their degree of social anxiety or confidence. Moreover, people's responses to avatars will also depend on the avatar's degree of exhibited social anxiety: they will react differently to a shy avatar compared to a confident one. The research started with an experimental study of the reactions of shy and confident male volunteers to an approach by an attractive and friendly virtual woman in a VE. The results show that the participants responded to the avatar as expected at the emotional, physiological, and behavioural levels. The research then studied a particular cue that represents shyness: blushing. Experiments were carried out on how participants respond to a blushing avatar. The results suggested that, even without consciously noticing the avatar's blushing, the participants had an improved relationship with her when she was blushing. Finally, the research investigated how people respond to a shy avatar as opposed to a confident one. The results show that participants commented more positively on the personality of the avatar displaying signs of shyness.
192

Eye tracking and avatar-mediated communication in immersive collaborative virtual environments

Steptoe, W. A. H. January 2010
The research presented in this thesis concerns the use of eye tracking both to enhance and to understand avatar-mediated communication (AMC) performed by users of immersive collaborative virtual environment (ICVE) systems. AMC, in which users are embodied by graphical humanoids within a shared virtual environment (VE), is rapidly emerging as a prevalent and popular form of remote interaction. However, compared with video-mediated communication (VMC), which transmits interactants' actual appearance and behaviour, AMC fails to capture, transmit, and display many channels of nonverbal communication (NVC). This is a significant hindrance to the medium's ability to support rich interpersonal telecommunication. In particular, oculesics (the communicative properties of the eyes), including gaze, blinking, and pupil dilation, are central nonverbal cues during unmediated social interaction. This research explores the interactive and analytical application of eye tracking: to drive the oculesic animation of avatars during real-time communication, and as the primary method of experimental data collection and analysis, respectively. Three distinct but interrelated questions are addressed. First, the thesis considers the degree to which quality of communication may be improved through the use of eye tracking to increase the nonverbal, oculesic information transmitted during AMC. Second, the research asks whether users engaged in AMC behave and respond in a socially realistic manner in comparison with VMC. Finally, the degree to which behavioural simulations of oculesics can both enhance the realism of virtual humanoids and complement tracked behaviour in AMC is considered. These research questions were investigated through a series of telecommunication experiments based on scenarios common to computer-supported cooperative work (CSCW), and a further series of experiments on behavioural modelling for virtual humanoids. The first, exploratory, telecommunication experiment compared AMC with VMC in a three-party conversational scenario. Results indicated that users employ gaze similarly when faced with avatar and video representations of fellow interactants, and demonstrated how interaction is influenced by the technical characteristics and limitations of a medium. The second telecommunication experiment investigated the impact of varying methods of avatar gaze control on quality of communication during object-focused multiparty AMC. The main finding was that quality of communication is reduced when avatars demonstrate misleading gaze behaviour. The final telecommunication study investigated truthful and deceptive dyadic interaction in AMC and VMC over two closely related experiments. Results from the first experiment indicated that users demonstrate similar oculesic behaviour and response in both AMC and VMC, but that psychological arousal is greater following video-based interaction. Results from the second experiment showed that using eye tracking to drive the oculesic behaviour of avatars during AMC increased the richness of NVC to the extent that embodied users' states of veracity could be estimated more accurately. Rather than directly investigating AMC, the second series of experiments addressed behavioural modelling of oculesics for virtual humanoids.
Results from these experiments indicated that oculesic characteristics are highly influential on the perceived realism of virtual humanoids, and that behavioural models are able to complement the use of eye tracking in AMC. The research presented in this thesis explores AMC and eye tracking over a range of collaborative and perceptual studies. The overall conclusion is that eye tracking is able to enhance AMC towards a richer medium for interpersonal telecommunication, and that users' behaviour in AMC is no less socially 'real' than that demonstrated in VMC. However, there are distinct differences between the two communication mediums, and it is critical to match the characteristics of a planned communication with those of the medium itself.
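As a rough illustration of the interactive use of eye tracking described above - tracked gaze driving an avatar's oculesic animation - here is a minimal Python sketch that converts one gaze-direction sample into eye-joint rotations. The function name and the sample vector are illustrative assumptions, not details taken from the thesis.

    import math

    def gaze_to_eye_rotation(gaze_dir):
        """Convert a tracked unit gaze vector into yaw/pitch angles
        (degrees) that could be applied to an avatar's eye joints."""
        x, y, z = gaze_dir
        yaw = math.degrees(math.atan2(x, z))                   # left/right
        pitch = math.degrees(math.atan2(y, math.hypot(x, z)))  # up/down
        return yaw, pitch

    # One frame of hypothetical eye-tracker output.
    yaw, pitch = gaze_to_eye_rotation((0.17, -0.05, 0.98))
    print(f"avatar eye yaw={yaw:.1f} deg, pitch={pitch:.1f} deg")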
193

Algorithms for computational argumentation in artificial intelligence

Efstathiou, V. January 2011
Argumentation is a vital aspect of intelligent behaviour by humans. It provides the means for comparing information by analysing pros and cons when trying to make a decision. Formalising argumentation in a computational environment has become a topic of increasing interest in artificial intelligence research over the last decade. Computational argumentation involves reasoning with uncertainty by making use of logic in order to formalise the presentation of arguments and counterarguments and deal with conflicting information. A common assumption for logic-based argumentation is that an argument is a pair ⟨Φ, α⟩ where Φ is a consistent set of formulae that is minimal for entailing a claim α. Different logics provide different definitions for consistency and entailment and hence give different options for formalising arguments and counterarguments. The expressivity of classical propositional logic allows complicated knowledge to be represented, but its computational cost is an issue. This thesis is based on monological argumentation using classical propositional logic [12] and aims to develop algorithms that are viable despite the computational cost. The proposed solution adapts well-established techniques for automated theorem proving, based on resolution and connection graphs. A connection graph is a graph where each node is a clause and each arc denotes that the connected clauses contain complementary disjuncts. A connection graph allows a substantially reduced search space to be used when seeking all the arguments for a claim from a given knowledgebase. In addition, its structure provides information on how its nodes can be linked with each other by resolution, providing the basis for algorithms that search for arguments by traversing the graph. The correctness of this approach is supported by theoretical results, while experimental evaluation demonstrates the viability of the algorithms developed. In addition, an extension of the theoretical work from propositional logic to first-order logic is introduced.
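The connection graph construction described above is simple to illustrate. The following Python sketch - an illustration only, not the thesis's implementation - builds the arcs of a connection graph from a small clausal knowledgebase, with negation marked by '~':

    from itertools import combinations

    def build_connection_graph(clauses):
        """Each node is a clause (a frozenset of literals); an arc joins
        two clauses that contain complementary disjuncts."""
        def complement(lit):
            return lit[1:] if lit.startswith('~') else '~' + lit
        arcs = []
        for i, j in combinations(range(len(clauses)), 2):
            shared = {l for l in clauses[i] if complement(l) in clauses[j]}
            if shared:
                arcs.append((i, j, shared))
        return arcs

    # Knowledgebase in clausal form: {p}, {~p, q}, {~q}
    kb = [frozenset({'p'}), frozenset({'~p', 'q'}), frozenset({'~q'})]
    for i, j, lits in build_connection_graph(kb):
        print(f"clause {i} -- clause {j} via {sorted(lits)}")

Traversing such a graph by resolution along its arcs is what lets the search for arguments avoid exploring the full power set of the knowledgebase.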
194

The derivation of principled algorithms for systematic trading using generalisation bounds

Hosseini, D. January 2013
No description available.
195

Context flow architecture

Lees, Timothy January 1990
Good computer architecture is, in many ways, very similar to good building architecture. Its effectiveness can only be judged by the way in which the implementation of the design - whether a computer or a building - fulfils its given role. In computer architecture, this judgement is based on one criterion - speed. In the best computers, speed is obtained by coupling the best current technology and materials with the best design. This thesis presents a novel way in which to design pipelined computers. Speed is achieved by maximising the use of hardware resources to provide an environment in which many independent processes can execute concurrently in a single system. The design method is called context flow. Two different facets of context flow are discussed. An underlying theory of context flow is established and used to prove certain properties of context flow systems. These theoretical results show context flow machines to be implementable. Using these results, a practical approach to the creation of context flow systems is presented, leading to the design and analysis of an example context flow processor. The result is an architectural design technique with a formal foundation which can be used to build efficient pipelined computers.
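Context flow is a bespoke formalism with its own theory, so any short example is necessarily loose. Purely as an illustration of the underlying idea - keeping many independent process contexts in flight so that every pipeline stage stays busy - here is a toy Python simulation of round-robin context interleaving; the stage names and the scheduling policy are assumptions made for the example, not the thesis's design.

    from collections import deque

    STAGES = ["fetch", "decode", "execute", "write-back"]

    def interleave(contexts, cycles):
        """Issue one context per cycle into a linear pipeline, shifting
        the others along, so concurrent contexts occupy distinct stages."""
        queue = deque(contexts)
        pipeline = [None] * len(STAGES)   # pipeline[i]: context in stage i
        for cycle in range(cycles):
            pipeline = [queue[0]] + pipeline[:-1]
            queue.rotate(-1)
            busy = ", ".join(f"{s}={c}" for s, c in zip(STAGES, pipeline) if c)
            print(f"cycle {cycle}: {busy}")

    interleave(["P0", "P1", "P2", "P3"], cycles=6)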
196

Static allocation of computation to processors in multicomputers

Norman, Michael G. January 1993
In this thesis we address the static mapping problem - that is, the problem of allocating computation to processors - in a MIMD, distributed-memory architecture: a multicomputer. We are primarily interested in the way in which the computation and the multicomputer can be modelled: the features of the multicomputer and the computation that are included and left out, and the way in which that impacts upon the predictions made by the models for the performance of computations. We try to put the various published formulations of the mapping problem into the context of the multicomputer, and to identify correspondences between features of the models underlying the formulations and features of the multicomputer and the computation. The two types of models which we choose to consider in detail are precedence-constrained scheduling with interprocessor communication delay, and static process-based models. We review approaches to hybridising the two types of model and propose such a model of our own. We also consider the impact of message contention in the multicomputer. We analyse the models underlying formulations of the mapping problem in a number of ways. We look at the way in which performance gains can be achieved by adding more processors to the models. We consider the way in which the complexity of mapping problems depends upon the modelling of interprocessor communication. We compare bounds on performance given for approximation algorithms in different, but related, models. We show, for an example computation, how the predictions of the various models differ and how these differences might lead the multicomputer programmer to different conclusions. Finally, we relate the predicted performance of our example computation in some of the models to that observed when executing it on a real multicomputer.
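As a concrete illustration of the kind of model the thesis analyses, the following Python sketch implements a generic greedy list scheduler in which a task pays a fixed communication delay only when a predecessor ran on a different processor. It is a textbook-style heuristic under assumed costs, not one of the thesis's own models.

    def list_schedule(tasks, deps, cost, comm, n_procs):
        """Greedily schedule a task graph (tasks in topological order) onto
        homogeneous processors, with fixed inter-processor delay `comm`."""
        proc_free = [0] * n_procs          # time each processor becomes free
        finish, placed = {}, {}
        for t in tasks:
            best = None
            for p in range(n_procs):
                ready = max([proc_free[p]] +
                            [finish[d] + (0 if placed[d] == p else comm)
                             for d in deps.get(t, [])])
                if best is None or ready < best[0]:
                    best = (ready, p)
            start, p = best
            placed[t], finish[t] = p, start + cost[t]
            proc_free[p] = finish[t]
        return placed, finish

    # Fork-join graph: a -> {b, c} -> d
    deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
    cost = {"a": 2, "b": 3, "c": 3, "d": 1}
    print(list_schedule(["a", "b", "c", "d"], deps, cost, comm=2, n_procs=2))

Varying `comm` in such a model changes which placement wins, which is precisely the sensitivity to communication modelling that the thesis examines.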
197

Formal synthesis of control signals for systolic arrays

Xue, Jingling January 1992
The distinguishing features of systolic arrays are synchrony of computation, local and regular connections between processors, and massive decentralised parallelism. The potential of the systolic array lies in its suitability for VLSI fabrication and its practicality for a variety of application areas such as signal or image processing and numerical analysis. With the increasing possibilities promised by advances in VLSI technology and computer architecture, more and more complex problems are now solvable by systolic arrays. This thesis describes a systematic method for the synthesis of control signals for systolic arrays that are realised in hardware. Control signals ensure that the right computations are executed at the right processors at the right time. The proposed method applies to iterative algorithms defined over a domain that can be expressed as a convex set of integer coordinates. Algorithms that can be implemented as systolic arrays can be expressed this way; a large subclass can be phrased as affine (or uniform) recurrence equations in the functional style and as nested loops in the imperative style. The synthesis of control signals from a program specification is a process of program transformation and construction. The basic idea is to replace the domain predicates in the initial program specification, which constitute the abstract specification of control signals, by a system of uniform recurrence equations by means of data pipelining. Then, systolic arrays with a description of both data and control signals can be obtained by a direct application of the standard space-time mapping technique.
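The standard space-time mapping mentioned above can be stated concretely. In the following Python sketch, each point z of a hypothetical 3x3 iteration domain is executed at time t = λ·z on processor p = σ·z; the schedule vector λ = (1, 1) and allocation vector σ = (0, 1) are illustrative choices only.

    def space_time_map(domain, schedule, allocation):
        """Affine space-time mapping: iteration z runs at time
        schedule . z on processor allocation . z."""
        return {z: (sum(s * zi for s, zi in zip(schedule, z)),
                    sum(a * zi for a, zi in zip(allocation, z)))
                for z in domain}

    domain = [(i, j) for i in range(3) for j in range(3)]
    for z, (t, p) in sorted(space_time_map(domain, (1, 1), (0, 1)).items()):
        print(f"iteration {z} -> time {t}, processor {p}")

Under this mapping, anti-diagonals of the domain execute in parallel, one column per processor, and the control signals obtained by data pipelining travel through the same mapped space as the data.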
198

High-level synthesis of VLSI circuits

Yeung, Ping F. January 1992
Following the widespread acceptance and application of logic synthesis, we are on the way to establishing synthesis methodologies which can handle higher levels of abstraction. High-level synthesis is the focal point. It should be able to take a behavioural description of the design together with a set of constraints and goals, and then construct a structural implementation that performs the circuit function while satisfying the constraints. In order to ensure a smooth transformation and mapping of the high-level description onto hardware, a new strategy for high-level synthesis, flexibility damping, is introduced. It allows a large design space to be explored progressively and systematically. It facilitates the propagation of constraints and helps the introduction of user-specified information. To carry out the strategy, two algorithms, resource-restricted scheduling and integrated concurrent mapping, are developed. Resource-restricted scheduling handles complex control structures and schedules operations across basic blocks in order to utilise all the available resources. After scheduling has established the flexibility of the abstract elements, concurrent mapping is performed to bind operations, storage, and communications onto functional units, register files, and buses concurrently. By considering all the resources at the same time, this mapping process ensures an overall minimum cost of implementation.
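As a toy illustration of scheduling under a resource restriction - reduced here to a simple cap on the number of operations per control step, far cruder than the thesis's resource-restricted scheduling across basic blocks - consider the following Python sketch.

    def schedule_ops(ops, deps, limit):
        """Pack dataflow operations into successive control steps, at most
        `limit` per step (a stand-in for a bounded set of functional
        units). Assumes the dependence graph is acyclic."""
        step_of, step, remaining = {}, 0, list(ops)
        while remaining:
            ready = [o for o in remaining
                     if all(step_of.get(d, step) < step
                            for d in deps.get(o, []))]
            for o in ready[:limit]:        # the resource restriction
                step_of[o] = step
                remaining.remove(o)
            step += 1
        return step_of

    # (a*b) + (c*d), then a subtraction, with at most 2 operations per step
    deps = {"add": ["mul1", "mul2"], "sub": ["add"]}
    print(schedule_ops(["mul1", "mul2", "add", "sub"], deps, limit=2))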
199

Versatile communication cost modelling for multicomputer task scheduling

Boeres, Cristina January 1997
Programmers face daunting problems when attempting to design portable programs for multicomputers. This is mainly due to the huge variation in communication performance across the range of multicomputer platforms currently in use. These programmers require a computational model that is sufficiently abstract to allow them to ignore machine-specific performance features, and yet sufficiently versatile to allow the computational structure to be mapped efficiently to a wide range of multicomputer platforms. This dissertation focusses on parallel computations that can be expressed as task graphs: tasks that must be scheduled on the multicomputer's processors. In the past, scheduling models have considered only the message delay as the predominant communication parameter. In the current generation of parallel machines, however, latency is negligible compared to the CPU penalty of communication-related activity associated with inter-processor communication. This CPU penalty cannot be modelled by a latency parameter because the CPU activity consumes time otherwise available for useful computation. In view of this, we consider a model in which the CPU penalty is significant and is associated with the communication events incurred when applications execute in parallel. In this dissertation a new multi-stage scheduling approach that takes these communication parameters into account is proposed. In the first stage, the input task graph is transformed into a new structure that can be scheduled with a smaller number of communication events. Task replication is incorporated to produce clusters of tasks, but a different view of clusters is adopted: tasks are clustered so that messages are bundled and, consequently, the number of communication events is decreased. The communication event tasks are associated with the relationships between the clusters. More specifically, this stage comprises a family of scheduling heuristics that can be customised to classes of parallel machines, according to their communication performance characteristics, through parameterisation and by varying the order in which the heuristics are applied. A second stage is necessary, in which the actual schedule on the target machine is defined. The mechanisms implemented carefully analyse the clusters and their relationships so that communication costs are minimised and the degree of parallelism is exploited. The aim of the proposed approach is thus to tackle the min-max problem while considering realistic architectural issues.
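The payoff from bundling messages is easy to show in miniature. The following Python sketch - an illustration under assumed task placements, not the thesis's heuristics - counts communication events before and after tasks are grouped into clusters, with all messages from one cluster to another bundled into a single event.

    def communication_events(cluster_of, messages):
        """Intra-cluster messages cost nothing; all messages from one
        cluster to another are bundled into a single event."""
        events = set()
        for src, dst in messages:
            a, b = cluster_of[src], cluster_of[dst]
            if a != b:
                events.add((a, b))
        return len(events)

    messages = [("t0", "t2"), ("t1", "t2"), ("t0", "t3"), ("t1", "t3")]
    flat = {"t0": 0, "t1": 1, "t2": 2, "t3": 3}     # one task per cluster
    grouped = {"t0": 0, "t1": 0, "t2": 1, "t3": 1}  # senders/receivers clustered
    print(communication_events(flat, messages))     # 4 events
    print(communication_events(grouped, messages))  # 1 bundled event

When each event carries a fixed CPU penalty, the clustered placement incurs a quarter of the flat placement's communication overhead.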
200

Pulse-stream binary stochastic hardware for neural computation: the Helmholtz Machine

Astaras, Alexander January 2004
This thesis proposes a novel hardware implementation of a binary-state, probabilistic artificial neuron using the pulse-stream integrated circuit design methodology. The artificial neural network architecture targeted for implementation is the Helmholtz Machine, an auto-encoder trained by the unsupervised Wake-Sleep algorithm. A dual-layer network was implemented on the second of two prototype integrated circuits, intended for hardware-software comparative experiments in unsupervised probabilistic neural computation. Circuit modules have been designed to perform the synaptic multiplication and integration functions and the sigmoid activation function, and to provide probabilistic output. All circuit design is modular and scalable, with particular attention given to silicon area and power consumption. The neuron outputs the calculated probability as a mark-to-period modulated stream of pulses, which is then randomly sampled to determine the next state of the neuron. Implementation issues are discussed, such as a tendency for the probabilistic oscillators inside each neuron to phase-lock or become unstable at higher frequencies, and how to overcome these issues through careful analogue circuit design and low-power operation. Results from parallel hardware-software experiments clearly show that learning takes place consistently on both networks, verifying that the proposed hardware is capable of unsupervised probabilistic neural computation. As expected, owing to its superior mathematical precision, the software-simulated network learns more efficiently when using the same training sets and learning parameters. The hardware implementation, on the other hand, has the advantage of speed, particularly when full advantage is taken of its parallel processing potential. The developed hardware can also accept pulse-width analogue neural states, a feature that can be exploited for the implementation of other existing and future auto-encoder artificial neural network architectures.
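A software model of such a neuron is compact. The following Python sketch - an illustration only, with made-up weights and inputs - performs synaptic integration, applies the sigmoid activation, and randomly samples the resulting firing probability, the role played in hardware by sampling the mark-to-period modulated pulse stream.

    import math
    import random

    def stochastic_neuron(inputs, weights, bias=0.0):
        """Binary stochastic neuron: weighted sum, sigmoid, random sample."""
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        p_fire = 1.0 / (1.0 + math.exp(-activation))   # firing probability
        state = 1 if random.random() < p_fire else 0   # sampled binary state
        return state, p_fire

    random.seed(0)
    state, p = stochastic_neuron([1, 0, 1], [0.8, -0.4, 0.3])
    print(f"p(fire) = {p:.3f}, sampled state = {state}")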
