
Engaging older adults with age-related macular degeneration in the design and evaluation of mobile assistive technologies

Hakobyan, Lilit January 2016
Ongoing advances in technology are undoubtedly increasing the scope for enhancing and supporting older adults' daily living. The digital divide between older and younger adults, however, raises concerns about the suitability of technological solutions for older adults, especially for those with impairments. Taking older adults with Age-Related Macular Degeneration (AMD) – a progressive and degenerative disease of the eye – as a case study, the research reported in this dissertation considers how best to engage older adults in the design and evaluation of mobile assistive technologies so as to achieve sympathetic design of such technologies. Recognising the importance of good nutrition and the challenges involved in designing for people with AMD, this research followed a participatory and user-centred design (UCD) approach to develop a proof-of-concept diet diary application for people with AMD. Findings from the initial knowledge elicitation activities contribute to the growing debate surrounding how older adults' participation is initiated, planned and managed. Reflections on the application of the participatory design method highlighted a number of key strategies for maintaining empathic participatory design rapport with older adults, and subsequently led to the formulation of participatory design guidelines for effectively engaging older adults in design activities. Taking a novel approach, the final evaluation study addressed a gap in knowledge about how to bring closure to the participatory process in as positive a way as possible, cognisant of the potential negative effect that its withdrawal may have on individuals. Based on the results of this study, we ascertain that (a) sympathetic design of technology with older adults maximises technology acceptance and shows strong indicators for effecting behaviour change; and (b) involvement in the design and development of such technologies has the capacity to significantly improve the quality of life of older adults (with AMD).

Making representations matter : understanding practitioner experience in participatory sensemaking

Selvin, Albert M. January 2011
Appropriating new technologies in order to foster collaboration and participatory engagement is a focus for many fields, but there is relatively little research on the experience of practitioners who do so. The role of technology-use mediators is to help make such technologies amenable and of value to the people who interact with them and each other. When the nature of the technology is to provide textual and visual representations of ideas and discussions, issues of form and shaping arise, along with questions of professional ethics. This thesis examines such participatory representational practice, specifically how practitioners make participatory visual representations (pictures, diagrams, knowledge maps) coherent, engaging and useful for groups tackling complex societal and organizational challenges. This thesis develops and applies a method to analyze, characterize, and compare instances of participatory representational practice in such a way as to highlight experiential aspects such as aesthetics, narrative, improvisation, sensemaking, and ethics. It extends taxonomies of such practices found in related research, and contributes to a critique of functionalist or techno-rationalist approaches to studying professional practice. It studies how fourteen practitioners using a visual hypermedia tool engaged participants with the hypermedia representations, and the ways they made the representations matter to the participants. It focuses on the sensemaking challenges that the practitioners encountered in their sessions, and on the ways that the form they gave the visual representations (aesthetics) related to the service they were trying to provide to their participants. Qualitative research methods such as grounded theory are employed to analyze video recordings of the participatory representational sessions. Analytical tools were developed to provide a multi-perspective view on each session. Conceptual and normative frameworks for understanding the practitioner experience in participatory representational practice in context, especially in terms of aesthetics, ethics, narrative, sensemaking, and improvisation, are proposed. The thesis places these concerns in the context of other kinds of facilitative and mediation practices as well as research on reflective practice, aesthetic experience, critical HCI, and participatory design.

A fault tolerant microarchitecture for safety-related automotive control

Touloupis, Emmanuel January 2005
The successful use of fly-by-wire systems in aviation, along with the positive experience of drive-by-wire systems with mechanical backup for braking and power steering, has led to the development of complete drive-by-wire systems that reduce the cost of a vehicle, are lighter and provide better passive safety to the passenger. These systems take the form of a distributed, real-time embedded system. Similar architectures can be found in other safety-critical and mission-critical applications in avionics (as noted above), medical equipment, and the industrial sector. Advances in embedded system technology have enabled designers to implement low-cost, small-form-factor electronics. However, shrinking CMOS technologies face considerable reliability problems, as they become more sensitive to transient faults. This thesis investigates the application of traditional methods for the development of safety-critical computer systems to single-chip devices. The contributions of this work are briefly summarised as follows:
• The development of a novel fault-tolerant architecture for protecting the processor core.
• Methods for performing fault-injection experiments on embedded processor architectures.
• Fault models for multiple faults on digital systems, using statistical distributions.
• An extensive study of a processor's behaviour in the presence of faults within its pipelined execution unit.
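
To make the fault-injection and statistical fault-modelling ideas concrete, here is a minimal Python sketch, assuming a 32-bit register file and a Poisson-distributed number of simultaneous bit flips (one plausible statistical model for multiple transient faults; the thesis's actual models and tooling may differ).

```python
import math
import random

WORD_BITS = 32

def sample_fault_count(lam=0.8):
    """Draw the number of simultaneous transient bit faults from a
    Poisson distribution (Knuth's algorithm; fine for small lambda)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def inject_bit_flips(word, n_faults):
    """Flip n_faults distinct random bits in a 32-bit word,
    modelling a transient (soft) error."""
    for bit in random.sample(range(WORD_BITS), n_faults):
        word ^= 1 << bit
    return word

# Example: corrupt one register of a simulated register file.
regfile = [0] * 16
regfile[3] = 0xDEADBEEF
n = min(sample_fault_count(), WORD_BITS)
if n:
    regfile[3] = inject_bit_flips(regfile[3], n)
```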

Composing requirements dependencies across architectural views for improving change impact analysis

Khan, Safoora Shakil January 2013
Due to the ever-changing needs of stakeholders and changes in environment and technology, requirements have a tendency to evolve during system development. It is essential that, prior to incorporating a change, the analyst determines its impact on the system; it is not desirable to incorporate a change without understanding and determining its impacts. Requirements traceability (traceability) is the ability to "follow the life of a requirement, in both a forwards and backwards direction (i.e., from its origins, through its development and specification, to its subsequent deployment and use, and through periods of on-going refinement and iteration in any of these phases)" (Gotel and Finkelstein 1994; Gotel and Finkelstein 1997). When requirements evolve, the analyst determines the impact by following the traces from the changing requirement to the dependent requirements, design, architecture, and source code. A few traceability approaches and tools capture requirements traces by referential links (traceFrom and traceTo, or source and target, id="1", "2" ... id="n") or hyperlinks. Such traces are not expressive, and they do not explicitly capture the rationale for the existence of dependencies between requirements. When a requirement evolves, the impact set includes all the requirements that trace from the changing requirement, and the analyst has to analyse each individual requirement to identify those actually impacted. For a large system with many requirements such trace information is not sufficient to rationalise and assess the impact of change. Hence, finer and more expressive traces will provide an effective and improved impact analysis. This thesis presents a requirements dependency taxonomy that comprises a minimal set of semantically expressive and architecturally significant dependencies. It captures the rationale for the existence of dependencies between artefacts through explicitly defined meta-models that provide guidelines for capturing requirements dependencies. The aim is to enrich the traces with additional information (metadata), which makes the traces less coupled to the structure of the artefacts and more meaningful independently. There is a large traceability gap between requirements and architecture (Grünbacher et al. 2001; Matthias 2006; Omoronyia et al. 2011). Existing approaches fall short in providing a better understanding of the system. A few deficiencies in the traceability approaches stem from the syntactic nature of the traces and from mapping requirements to a standalone architecture. Syntactic traces couple tightly to the structure of the artefact. The requirements trace to a standalone architecture that may not capture complete information and may not be adequate for change impact analysis. The requirements dependencies map syntactically to subsequent-phase artefacts, which causes a loss of useful traceability information.
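
To illustrate the difference between bare referential traces and the semantically enriched traces the taxonomy argues for, here is a minimal Python sketch; the dependency types and field names ("requires", "refines", rationale) are hypothetical placeholders, not the thesis's metamodel.

```python
from dataclasses import dataclass, field

@dataclass
class ReferentialTrace:
    """A bare syntactic trace: it records *that* two artefacts are
    linked, but not *why* -- the analyst must inspect both ends."""
    trace_from: str   # e.g. "R1"
    trace_to: str     # e.g. "R7"

@dataclass
class SemanticTrace(ReferentialTrace):
    """An enriched trace carrying the rationale for the dependency,
    so impact analysis can filter links without re-reading artefacts."""
    dependency_type: str = "unspecified"   # e.g. "requires", "refines"
    rationale: str = ""                    # why the dependency exists
    metadata: dict = field(default_factory=dict)

# A change to R1 need only propagate along links whose type implies impact.
links = [
    SemanticTrace("R1", "R7", "requires", "R7 consumes R1's output format"),
    SemanticTrace("R1", "R9", "refines", "R9 specialises R1 for mobile clients"),
]
impacted = [l.trace_to for l in links if l.dependency_type in ("requires", "refines")]
```

With rationale-bearing links, impact analysis can filter on dependency type instead of forcing the analyst to re-read every traced requirement.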

Exploiting development to enhance the scalability of hardware evolution

Gordon, Timothy Glennie Wilson January 2005
Evolutionary algorithms do not scale well to the large, complex circuit design problems typical of the real world. Although techniques based on traditional design decomposition have been proposed to enhance hardware evolution's scalability, they often rely on traditional domain knowledge that may not be appropriate for evolutionary search and might limit evolution's opportunity to innovate. It has been proposed that reliance on such knowledge can be avoided by introducing a model of biological development to the evolutionary algorithm, but this approach has not yet achieved its potential. Prior demonstrations of how development can enhance scalability used toy problems that are not indicative of evolving hardware, and prior attempts to apply development to hardware evolution have rarely been successful and have never explored its effect on scalability in detail. This thesis demonstrates that development can enhance scalability in hardware evolution, primarily through a statistical comparison of hardware evolution's performance with and without development using circuit design problems of various sizes. This is reinforced by proposing and demonstrating three key mechanisms that development uses to enhance scalability: the creation of modules, the reuse of modules, and the discovery of design abstractions. The thesis includes several minor contributions: hardware is evolved using a common reconfigurable architecture at a lower level of abstraction than reported elsewhere, and it is shown that this can allow evolution to exploit the architecture more efficiently and perhaps search more effectively. The benefits of several features of developmental models are also explored through the biases they impose on the evolutionary search. The features explored include the type of environmental context development uses and the constraints on symmetry and information transmission they impose, genetic operators that may improve the robustness of gene networks, and how development is mapped to hardware. Performance is also compared against contemporary developmental models.
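
A toy illustration of how a developmental genotype-to-phenotype mapping creates and reuses modules (a sketch only; the thesis's developmental model, mapped to reconfigurable hardware, is far richer): grammar-like rewrite rules "grow" a circuit, and any rule acts as a module the genotype can reuse cheaply.

```python
# Toy developmental genotype-to-phenotype mapping: rewrite rules act as
# reusable "modules". Names and structure are illustrative only.
genotype = {
    "S": ["M", "M", "M"],          # top level reuses module M three times
    "M": ["AND", "XOR", "wire"],   # one evolved sub-circuit ("module")
}

def develop(symbol, rules, depth=0, max_depth=8):
    """Recursively rewrite non-terminals into a flat list of primitives."""
    if depth >= max_depth or symbol not in rules:
        return [symbol]                      # terminal gate/primitive
    phenotype = []
    for s in rules[symbol]:
        phenotype.extend(develop(s, rules, depth + 1, max_depth))
    return phenotype

print(develop("S", genotype))
# ['AND', 'XOR', 'wire', 'AND', 'XOR', 'wire', 'AND', 'XOR', 'wire']
# A single mutation inside rule "M" changes all three instances at once --
# the module reuse that helps development scale.
```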

System software support for possible hardware deficiency

Kägi, Thomas January 2012
Today, computer systems are applied in safety-critical areas such as the military, aviation, intensive health care, industrial control and space exploration. All these areas demand the highest possible reliability of functional operation. However, the impact of ionised particles and radiation on current semiconductor hardware leads inevitably to faults in the system, and it is expected that such phenomena will be observed much more often in the future due to the ongoing miniaturisation of hardware structures. In this thesis we tackle the question of how system software should be designed in the presence of such faults, and which fault tolerance features it should provide for the highest reliability. We also show how the system software interacts with the hardware to tolerate these faults. In a first step, we analyse and further develop the theory of fault tolerance to understand the different ways of increasing the reliability of a system; ultimately, the key is to use redundancy in all its different appearances. We revise and further develop the general algorithm of fault tolerance (GAFT), with its three main processes of hardware testing, preparation for recovery, and the recovery procedure, as our approach to the design of a fault-tolerant system. For each of the three processes, we analyse the requirements and properties theoretically and give possible implementation scenarios. Based on the theoretical results, we derive an Oberon-based programming language with direct support for the three processes of GAFT. In the last part of the thesis, we analyse, feature-wise and performance-wise, a simulator-based proof-of-concept implementation of a novel fault-tolerant processor architecture (ERRIC) and its newly developed runtime system.
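
The three GAFT processes can be read as a control loop. The following Python sketch is purely illustrative, with hypothetical names, and stands in for what the thesis realises in hardware (ERRIC) and its runtime system.

```python
import random

class SimulatedHardware:
    """Trivial stand-in for a fault-tolerant processor; all names
    here are hypothetical placeholders, for illustration only."""
    def __init__(self):
        self.state = 0          # the computation's working state
        self.checkpoint = 0     # last known-good recovery point

    def hardware_test(self):
        """Process 1: hardware testing (a simulated transient fault)."""
        return random.random() < 0.1

    def save_recovery_point(self):
        """Process 2: preparation for recovery."""
        self.checkpoint = self.state

    def recover(self):
        """Process 3: recovery procedure -- roll back to the checkpoint."""
        self.state = self.checkpoint

hw = SimulatedHardware()
for step in range(100):
    hw.state += 1                   # useful computation
    if hw.hardware_test():          # fault detected this cycle?
        hw.recover()                # redo from the last good state
    else:
        hw.save_recovery_point()    # state verified good; checkpoint it
```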

Parameterisation of Markovian queueing models for IT systems

Pacheco-Sanchez, Sergio January 2012
Modern IT systems are continuously growing both in size and complexity, thus making performance analysis and modelling an increasingly difficult task, if not intractable. As a result, in some cases the task has to be significantly simplified by focusing on selected system components or by reducing it to a smaller scale. One of the biggest challenges in system performance modelling is the accurate determination of service requirements. This thesis presents an investigation into novel statistical inference methods to effectively parameterise queueing models from system traces which exhibit diverse characteristics. I present two case studies, namely one of an Enterprise Resource Planning (ERP) application and one of a web server. Firstly, I propose a modelling methodology to address the limitations of service demand estimation based on CPU utilisation measurements by directly calibrating queueing models based on response time measurements. Secondly, with the aim of capturing more representative web workloads, I propose a methodology to parameterise queueing models for web server performance analysis that extracts hidden Markov models from real HTTP traffic via fitting. Experimental results indicate that all the methods developed as part of this thesis are highly competitive against state-of-the-art techniques.
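
As a minimal example of calibrating a queueing model from response-time measurements rather than CPU utilisation (standard M/M/1 theory used here purely for illustration; the thesis's inference methods are more sophisticated): the mean response time of an M/M/1 queue is R = 1/(μ − λ), so a measured arrival rate λ and mean response time R determine the service rate μ.

```python
def mm1_service_rate(arrival_rate, mean_response_time):
    """Solve R = 1/(mu - lambda) for mu in an M/M/1 queue."""
    return arrival_rate + 1.0 / mean_response_time

# Example: 40 req/s arriving, 50 ms mean response time measured from traces.
lam, R = 40.0, 0.050
mu = mm1_service_rate(lam, R)   # 60 req/s
service_demand = 1.0 / mu       # ~16.7 ms mean service demand per request
utilisation = lam / mu          # ~0.67; must stay below 1 for stability
print(mu, service_demand, utilisation)
```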

Integrating formal verification and simulation of hybrid systems

Savicks, Vitaly January 2016
An increasing number of today's systems can be characterised as cyber-physical, or hybrid, systems that combine a concurrent continuous environment with discrete computational logic. In order to develop such systems to be safe and reliable, one needs to be able to model and verify them from the early stages of the development process. Current modelling technologies allow us to specify the abstractions of these systems in terms of procedural or declarative modelling languages and visual notations, and to simulate their behaviour over a period of time for analysis. Other means of modelling are formal methods, which define systems in terms of logics and enable rigorous analysis of system properties. While the first class of technologies provides a natural notation for describing physical processes but lacks formal proof, the second relies on mathematical abstractions to rationalise and automate the complex task of formal verification. The benefits of both technologies can be significantly enhanced by a collaborative methodology. Due to the complexity of the considered systems and of the formal proof process, it is critical that such a methodology is based on a top-down development process that fully supports abstraction and refinement. We develop this idea into a tool extension for the state-of-the-art Rodin platform for system-level formal modelling and analysis in the Event-B language. The developed tool enables integration of physical simulation with refinement-based formal verification in Event-B, thus enhancing the capabilities of Rodin with simulation-based validation that supports refinement. The tool utilises the Functional Mock-up Interface (FMI) standard for industrial-grade model exchange and co-simulation, and is based on a co-simulation principle between discrete models in Event-B and continuous physical models of FMI. It provides a graphical environment for model import, composition and co-simulation, and implements a generic simulation algorithm for discrete-continuous co-simulation.
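
The co-simulation principle can be sketched as a fixed-step master that alternates a discrete controller (standing in for the Event-B side) with a continuous model, exchanging values at communication points. This Python sketch only mimics the FMI doStep/get/set pattern; it uses no real FMI library, and the tank/controller models are invented for illustration.

```python
# Minimal fixed-step co-simulation master in the spirit of FMI:
# advance a continuous "FMU" and a discrete controller in lockstep,
# exchanging values at each communication point. Illustrative only.

class PlantFMU:
    """Continuous model: a water tank, integrated with explicit Euler."""
    def __init__(self):
        self.level, self.inflow = 0.0, 0.0
    def set_inflow(self, u):
        self.inflow = u
    def do_step(self, h):
        self.level += h * (self.inflow - 0.1 * self.level)  # dx/dt = u - 0.1x
    def get_level(self):
        return self.level

class DiscreteController:
    """Discrete model (the Event-B side in the thesis's setting):
    simple on/off control keeping the level near a setpoint."""
    def step(self, level):
        return 1.0 if level < 5.0 else 0.0

plant, ctrl = PlantFMU(), DiscreteController()
t, h = 0.0, 0.1                      # communication step size
while t < 20.0:
    u = ctrl.step(plant.get_level()) # discrete transition reads outputs
    plant.set_inflow(u)              # master propagates inputs
    plant.do_step(h)                 # continuous FMU advances by h
    t += h
print(round(plant.get_level(), 2))   # settles near the 5.0 setpoint
```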

The design philosophy of a small automatic digital computer

Thomas, Paul A. V. January 1961
No description available.

Self-organising service composition in open service-oriented architecture systems

Papadopoulos, Petros January 2012
No description available.
