91
Creating emotionally aware performance environments: a phenomenological exploration of inferred and invisible data space. Povall, Richard Mark, January 2003.
The practical research undertaken for this thesis - the building of interactive and non-interactive environments for performance - posits a radical recasting of the performing body in physical and digital space. The choreographic and thematic context of the performance work has forced us, as makers, to ask questions about the nature of digital interactivity, which in turn feeds the work theoretically, technically and thematically. A computer views (and attempts to interpret) motion information through a video camera and, by way of a scripting language, converts that information into MIDI data. As the research has developed, our company has been able to design environments which respond sensitively to particular artistic/performance demands. I propose to show in this research that it is possible to design an interactive system that is part of a phenomenological performance space: a mechanical system with an ontological heart. This represents a significant shift in thinking from existing systems, is at the heart of the research developments, and is what I consider to be one of the primary outcomes of this research - outcomes that are original and contribute to the body of knowledge in this area. The phenomenal system allows me to use technology in a poetic way, where the poetic aesthetic is dominant: it responds to the phenomenal dancer, rather than merely to the 'physico-chemical' (Merleau-Ponty 1964, pp. 10-11) dancer. Other artists whose work attempts phenomenological approaches to working with technology and the human body are referenced throughout the writing.
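The camera-to-MIDI pipeline described in the abstract can be sketched in miniature. This is an illustrative assumption rather than the author's actual system (which was built with a scripting language inside an interactive environment): frames are modelled as flat lists of greyscale values, and the scaling constant is arbitrary.

```python
def motion_amount(prev_frame, curr_frame):
    """Sum of absolute pixel differences between two greyscale frames."""
    return sum(abs(a - b) for a, b in zip(prev_frame, curr_frame))

def motion_to_midi(prev_frame, curr_frame, note=60):
    """Map the amount of motion to a MIDI note-on velocity (0-127)."""
    diff = motion_amount(prev_frame, curr_frame)
    velocity = min(127, diff // len(curr_frame))  # scale into the MIDI range
    return {"type": "note_on", "note": note, "velocity": velocity}

# A still camera produces a silent note; a large change produces a loud one.
still = motion_to_midi([0] * 16, [0] * 16)
moving = motion_to_midi([0] * 16, [255] * 16)
```

A real system would read frames from a capture device and emit the message through a MIDI interface; here the message is a plain dictionary so the mapping itself stays visible.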
92
Analysis of images under partial occlusion. Ramakrishnan, Sowmya, January 2002.
In order to recognise objects from images of scenes that typically involve overlapping and partial occlusion, traditional computer vision systems have relied on domain knowledge to achieve acceptable performance. However, there is much useful structural information about the scene - for example, the resolution of figure-ground ambiguity - which can be recovered, or at least plausibly postulated, in advance of applying domain knowledge. This thesis proposes a generic information-theoretic approach to the recognition and attribution of such structure within an image. It reinterprets the grouping process as a model selection process with MDL (minimum description length) as its information criterion. Building on the Gestalt notion of whole-part relations, a computational theory for grouping is proposed, with the central idea that the description length of a suitably structured whole entity is shorter than that of its individual parts. The theory is applied in particular to form structural interpretations of images under partial occlusion, prior to the application of domain knowledge. An MDL approach is used to show that increasingly economical structural models (groups) are selected to describe the image data as lower-level primitives are combined to form higher-level structures. From initially fitted segments, progressive groups are formed, leading to closed structures that are eventually classified as foreground or background. The observed results conform well with human interpretations of the same scenes.
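The central MDL idea - that a well-structured whole is cheaper to describe than its parts - can be sketched with a toy grouping test for line segments. The bit costs below are assumptions for illustration, not the thesis's actual coding scheme:

```python
import math

PARAM_BITS = 16  # assumed fixed cost per model parameter

def fit_line_residuals(points):
    """Least-squares line through (x, y) points; return absolute residuals."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    slope = sxy / sxx if sxx else 0.0
    return [abs(y - (my + slope * (x - mx))) for x, y in points]

def description_length(points):
    """Bits for one line model (2 parameters) plus its residuals."""
    residual_bits = sum(math.log2(1 + r) for r in fit_line_residuals(points))
    return 2 * PARAM_BITS + residual_bits

def should_group(part_a, part_b):
    """Merge two segments into one whole iff the whole is cheaper to describe."""
    return description_length(part_a + part_b) < (
        description_length(part_a) + description_length(part_b))
```

Two collinear segments merge, because one line explains all the points; segments with very different slopes stay separate, because the merged model's residuals cost more bits than a second model would.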
93
Behavioural morphisms in virtual environments. Nee, Simon Peter, January 2001.
One of the largest application domains for Virtual Reality lies in simulating the real world. Contemporary applications of virtual environments include training devices for surgery, component assembly and maintenance, all of which require high-fidelity reproduction of psychomotor skills. An extremely important research question in this field is: "How closely does our facsimile of a real task in a virtual environment reproduce that task?" At present the field of Virtual Reality answers this question in subjective terms through the concept of presence, and in objective terms through measures of task performance or training-effectiveness ratios.
94
The specification, analysis and metrics of supervised feedforward artificial neural networks for applied science and engineering applications. Leung, Wing Kai, January 2002.
Artificial Neural Networks (ANNs) have been developed for many applications, but no detailed study has been made of measures of their quality, such as efficiency and complexity, using appropriate metrics. Without appropriate measurement it is difficult to tell how an ANN performs on a given application, to provide a measure of the algorithmic complexity of any given application, or to use the results obtained in one application to predict the ANN's quality in a similar application. This research was undertaken to develop metrics, named Neural Metrics, that can be used in the measurement, construction and specification of backpropagation-based supervised feedforward ANNs for applied science and engineering applications. A detailed analysis of backpropagation was carried out with a view to establishing the mathematical definitions of the proposed metrics, and variants of backpropagation using various optimisation techniques were evaluated with a similar computational and metric analysis. The research involved evaluating the proposed set of neural metrics, using computer implementations of the training algorithms, across a number of scientific and engineering benchmark problems with both binary and real-valued training data. The result of the evaluation, for each type of problem, was a specification of values for all neural metrics and network parameters that can be used to solve that type of problem successfully. With such a specification, users can reduce the uncertainty, and hence the time, involved in choosing appropriate network details for the same type of problem. The specified neural metric values can also serve as reference points for further experiments aimed at obtaining a better or sub-optimal solution to the problem.
In addition, the generalised results obtained in this study provide users not only with a better understanding of the algorithmic complexity of the problem but also with a useful guideline for predicting the values of metrics that are normally determined empirically. It must be emphasised that this study considers only metrics for assessing the construction and off-line training of neural networks; operational performance (e.g. on-line deployment of the trained networks) is outside its scope. Operational results (e.g. CPU time and run-time errors) from training the networks off-line were obtained and discussed for each type of application problem.
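As a minimal illustration of the kind of quantity such metrics capture, the structural complexity of a fully connected feedforward network can be summarised by its trainable parameter count. The helper below is an assumption for illustration, not one of the thesis's actual Neural Metrics:

```python
def parameter_count(layer_sizes):
    """Trainable parameters (weights + biases) of a fully connected
    feedforward network, e.g. [2, 3, 1] = 2 inputs, 3 hidden, 1 output."""
    weights = sum(n_in * n_out
                  for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
    biases = sum(layer_sizes[1:])
    return weights + biases

print(parameter_count([2, 3, 1]))  # → 13 (9 weights + 4 biases)
```

Metrics of this kind let two candidate architectures for the same benchmark problem be compared before any training is run.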
95
Reliable mobile agents for distributed computing. Wagealla, Waleed, January 2003.
The emergence of platform-independent mobile code technologies has created big opportunities for Internet-based applications. Mobile agents are being utilised to perform a variety of tasks, from personalised computing to business-critical transactions. Unfortunately, these advances have not been matched by corresponding research into the reliability of the new technologies, and this work was undertaken to investigate the fault tolerance of the new paradigm. The mobility and execution autonomy of agent programs introduce a class of failures different from those of traditional distributed systems; fault tolerance is therefore one of the main problems that must be resolved to improve the adoption of the agent paradigm. The investigation of mobile agent reliability in this thesis resulted in the development of REMA (REliable Mobile Agents), which guarantees the reliable execution, migration and communication of mobile agents in the presence of faults affecting the agents' hosts or their communication network. We introduced an algorithm for the transparent detection of faults that might affect agent execution, migration and communication. A decentralised structure was used to divide the agents' dynamic distributed system into spaces that are proof against network partitioning. Lightweight messaging was adopted as the basic error-detection engine which, together with loosely coupled detection managers, provided an efficient, low-overhead detection mechanism for agent-based distributed processing. Checkpointing agent execution is hampered by the inaccessibility of the underlying structure of the JVM; an alternative solution was therefore achieved through the REMA Checkpoint and Recovery (REMA-CR) package, which provides the developer with powerful classes and methods for capturing the critical data of an agent's execution.
The developed recovery protocol offers a low-cost, communication-pair-independent checkpointing strategy that covers all possible faults that might invalidate reliable agent execution, migration and communication, and that maintains the exactly-once execution property. The results and performance of REMA confirmed our objective of providing a fault-tolerant wrapper for agents and their applications at an acceptable overhead cost.
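The lightweight-messaging detection idea can be sketched as a heartbeat monitor: hosts periodically report liveness, and a detection manager suspects any agent whose last heartbeat is older than a timeout. The class below is an illustrative assumption, far simpler than REMA's actual detection managers:

```python
import time

class HeartbeatMonitor:
    """Suspect an agent host as faulty when its heartbeat is stale.
    Timestamps may be injected for testing; otherwise a monotonic
    clock is used so that wall-clock adjustments cannot cause false
    suspicions."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, agent_id, now=None):
        """Record a liveness message from an agent host."""
        self.last_seen[agent_id] = time.monotonic() if now is None else now

    def suspected(self, now=None):
        """Agents whose last heartbeat is older than the timeout."""
        now = time.monotonic() if now is None else now
        return {a for a, t in self.last_seen.items()
                if now - t > self.timeout}
```

In a full system, a suspicion would trigger the recovery protocol: restarting the agent from its last REMA-CR checkpoint on a live host.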
96
Design as interactions of problem framing and problem solving: a formal and empirical basis for problem framing in design. Dzbor, Martin, January 2002.
In this thesis I present, illustrate and empirically validate a novel approach to modelling and explaining the design process. The main outcomes of this work are a formal definition of problem framing and the formulation of a recursive model of framing in design. The model (code-named RFD) represents a formalisation of a grey area in the science of design, and sees the design process as a recursive interaction of problem framing and problem solving. The proposed approach is based upon a phenomenon introduced in cognitive science and known as (reflective) solution talkback. Previously there were no formalisations of the knowledge interactions occurring within this complex reasoning operation; the recursive model is thus an attempt to express the existing knowledge in a formal and structured manner. In spite of the rather abstract knowledge level at which the model is defined, it is a firm step in the clarification of the design process. The RFD model is applied to the knowledge-level description of the conducted experimental study, which is annotated and analysed in the defined terminology. Eventually, several schemas implied by the model are identified, exemplified and elaborated to reflect the empirical results. The model features the mutual interaction of the predicates ‘specifies’ and ‘satisfies’. The first asserts that a certain set of explicit statements is sufficient for expressing the relevant desired states the design is aiming to achieve. The validity of the predicate ‘specifies’ may not be provable directly in any problem-solving theory: a particular specification can be upheld or rejected only by drawing upon the validity of the complementary predicate ‘satisfies’ and the (un-)acceptability of the considered candidate solution (e.g. a technological artefact or product). It is the role of the predicate ‘satisfies’ to find and derive such a candidate solution.
The predicates ‘specifies’ and ‘satisfies’ are contextually bound and can be evaluated only within a particular conceptual frame. Thus, a solution to a design problem is sound and admissible with respect to an explicit commitment to a particular specification and design frame. The role of the predicate ‘acceptable’ is to compare the admissible solutions and frames against the ‘real’ design problem, as if answering the question: “Is this solution really what I wanted/intended?” Furthermore, I propose a set of principled schemas on the conceptual (knowledge) level with the aim of making the interactive patterns of the design process explicit. These conceptual schemas are elicited from rigorous experiments that utilised a structured and principled approach to recording the designer’s conceptual reasoning steps and decisions. They include:
• the refinement of an explicit problem specification within a conceptual frame;
• the refinement of an explicit problem specification using a re-framed reference; and
• conceptual re-framing (i.e. the identification and articulation of new conceptual terms).
Since the conceptual schemas reflect the sequence of ‘typical’ decisions the designer may make during the design process, there is no single, symbol-level method for implementing these conceptual patterns. Thus, when one decides to follow the abstract patterns and schemas, the abstract model alone can foster principled design at the knowledge level. It must be acknowledged that, for the purpose of computer-based support, these abstract schemas would need to be turned into operational models and consequently into suitable methods; however, such an operational perspective was beyond the time and resource constraints placed on this research.
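The interaction of these predicates can be sketched as a loop: a candidate is derived to satisfy the current specification, checked for acceptability against the ‘real’ problem (solution talkback), and the specification is re-framed on rejection. The interface below is an illustrative assumption, not the RFD formalisation itself:

```python
def design(problem, specify, satisfy, acceptable, reframe, max_iterations=10):
    """Recursive interaction of framing and solving, in miniature:
    derive a candidate within the current frame, test it against the
    'real' problem, and revise the specification when it is rejected."""
    spec = specify(problem)
    for _ in range(max_iterations):
        candidate = satisfy(spec)           # problem solving within the frame
        if acceptable(problem, candidate):  # solution talkback
            return candidate, spec
        spec = reframe(problem, spec, candidate)  # problem (re-)framing
    return None, spec
```

A toy instantiation (find a value no smaller than a target by widening the specification) shows the talkback loop terminating once the frame admits an acceptable solution.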
97
Studies on the theory and design space of memetic algorithms. Krasnogor, Natalio, January 2002.
No description available.
98
Monte Carlo methods for radiosity. Taft, Keith, January 2002.
No description available.
99
The application of genetic and evolutionary algorithms to spanning tree problems. Thompson, Evan Benjamin, January 2003.
No description available.
100
Planning with neural networks and reinforcement learning. Baldassarre, Gianluca, January 2001.
No description available.