121

Matching algorithms for interest management in distributed virtual environments

Liu, Sze-Yeung January 2012 (has links)
Interest management in distributed virtual environments (DVEs) is a data filtering technique designed to reduce bandwidth consumption and thereby enhance the scalability of the system. This technique usually involves a process called “interest matching”, which determines what data should be sent to the participants and what data should be filtered out. This thesis surveys the state of the art in interest management systems and defines three major design requirements. The requirement analysis shows that most existing interest matching approaches are developed to address the trade-off between runtime efficiency and filtering precision. Although these approaches have been shown to meet their runtime performance requirements, they share a fundamental disadvantage: they perform interest matching at discrete time intervals. As a result, they fail to report events that occur between discrete time-steps. If participants of the DVE ignore these missing events, they are likely to perform incorrect simulations. This thesis presents a new approach called space-time interest matching, which aims to capture the missing events between discrete time-steps. Although this approach requires additional matching effort, a number of novel algorithms are developed to significantly improve its runtime efficiency through the exploitation of parallelism.
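The missed-event problem described above can be illustrated with a minimal sketch (hypothetical names and a 2D axis-aligned region model assumed here, not the thesis's actual algorithms): discrete-time matching tests region overlap only at sampled instants, so an overlap that begins and ends between two samples is never reported, which is exactly what a space-time approach aims to recover.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned square area of interest (hypothetical simplification)."""
    x: float
    y: float
    half: float  # half-width of the square region

    def at(self, vx: float, vy: float, t: float) -> "Region":
        """Region position after moving with velocity (vx, vy) for time t."""
        return Region(self.x + vx * t, self.y + vy * t, self.half)

def overlaps(a: Region, b: Region) -> bool:
    """Single-instant overlap test used by discrete-time interest matching."""
    return abs(a.x - b.x) <= a.half + b.half and abs(a.y - b.y) <= a.half + b.half

def discrete_matching(a, va, b, vb, t_end, dt):
    """Report overlap only at sampled time-steps; overlaps between steps are missed."""
    t = 0.0
    while t <= t_end:
        if overlaps(a.at(*va, t), b.at(*vb, t)):
            return t
        t += dt
    return None

# Two fast-moving entities cross between t=0 and t=1; a coarse step misses the event.
a, b = Region(0.0, 0.0, 1.0), Region(10.0, 0.0, 1.0)
print(discrete_matching(a, (20.0, 0.0), b, (-20.0, 0.0), t_end=1.0, dt=1.0))   # None
print(discrete_matching(a, (20.0, 0.0), b, (-20.0, 0.0), t_end=1.0, dt=0.05))  # ~0.2
```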
122

The application of software product line engineering to energy management in the cloud and in virtualised environments

Murwantara, I. Made January 2016 (has links)
Modern software is created from components, each of which can often perform a large number of tasks. For a given task, there are often many component variants that can be used, so software with comparable functionality can be produced from a variety of components. The choice of components influences energy consumption. Software Product Line (SPL) engineering is a popular method of software reuse that supports the selection of component configurations. Although SPL has been used to investigate the energy consumption associated with combinations of software components, there has been no in-depth study of how to measure the energy consumed by a configuration of components, or of the extent to which individual components contribute to energy usage. This thesis investigates how component diversity affects energy consumption in virtualised environments and presents a method of identifying combinations of components that consume less energy. This work gives insight into the cultivation of green software components by identifying which components influence total energy consumption. Furthermore, the thesis investigates how to exploit component diversity dynamically, so as to manage energy consumption as the demand on the system changes.
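As a rough sketch of the idea of ranking component combinations by energy use (the feature names, variants, and energy figures below are invented for illustration and the measurement function is a placeholder, not the thesis's method):

```python
import itertools

# Hypothetical feature model: each feature offers alternative component variants.
feature_variants = {
    "web_server": ["apache", "nginx", "lighttpd"],
    "database":   ["mysql", "postgresql"],
    "php_engine": ["mod_php", "php_fpm"],
}

def measure_energy_joules(config: dict) -> float:
    """Placeholder for a real measurement of a deployed configuration under a
    fixed workload (e.g. via hardware counters or an external power meter)."""
    fake_cost = {"apache": 120.0, "nginx": 95.0, "lighttpd": 90.0,
                 "mysql": 80.0, "postgresql": 85.0,
                 "mod_php": 60.0, "php_fpm": 50.0}
    return sum(fake_cost[v] for v in config.values())

# Enumerate every product of the (small) product line and rank by measured energy.
names = list(feature_variants)
configs = [dict(zip(names, combo))
           for combo in itertools.product(*feature_variants.values())]
ranked = sorted(configs, key=measure_energy_joules)
print("lowest-energy configuration:", ranked[0], measure_energy_joules(ranked[0]))
```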
123

Taxonomies for software security

Corcalciuc, Horia V. January 2014 (has links)
A recurring problem with software security is that programmers are encouraged to reason about correctness either at the code level or at the design level, while attacks often take place at intermediary layers of abstraction. The code itself may seem correct and secure as long as its functionality has been demonstrated, for example by showing that some invariant has been maintained. From a high-level perspective, however, concurrently executing processes can be seen as one single large program consisting of smaller components that work together to accomplish a task and that, for the duration of that interaction, must maintain several smaller invariants. An attacker frequently manages to subvert the behavior of a program when the invariants of these intermediary steps can be invalidated. Such invariants become difficult to track, especially when the programmer does not explicitly have security in mind. This thesis explores the mechanisms of interaction between concurrent processes and tries to bring some order to synchronization by studying attack patterns, not only at the code level, but also from the perspective of abstract programming concepts.
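One familiar instance of an intermediate invariant breaking under interleaving is a check-then-act race; the sketch below is a generic, hypothetical example (not taken from the thesis) in which each statement looks correct in isolation, yet the interleaving of two threads can violate the intended invariant.

```python
import threading

balance = 100          # intended invariant: balance never goes negative
lock = threading.Lock()

def withdraw_racy(amount: int) -> None:
    """Check-then-act without holding a lock across both steps: each line is
    'correct', but another thread may interleave between check and act."""
    global balance
    if balance >= amount:        # check
        # a second thread may also pass the check here
        balance -= amount        # act

def withdraw_safe(amount: int) -> None:
    """Maintain the intermediate invariant by making check and act atomic."""
    global balance
    with lock:
        if balance >= amount:
            balance -= amount

threads = [threading.Thread(target=withdraw_racy, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("balance after racy withdrawals:", balance)   # may be -100
```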
124

A framework for the analysis and comparison of process mining algorithms

Weber, Philip January 2014 (has links)
Process mining algorithms use event logs to learn and reason about business processes. Although process mining is essentially a machine learning task, little work has been done on systematically analysing algorithms to understand their fundamental properties, such as how much data is needed for confidence in mining. Nor does any rigorous basis exist on which to choose between algorithms and representations, or compare results. We propose a framework for analysing process mining algorithms. Processes are viewed as distributions over traces of activities and mining algorithms as learning these distributions. We use probabilistic automata as a unifying representation to which other representation languages can be converted. To validate the theory we present analyses of the Alpha and Heuristics Miner algorithms under the framework, and two practical applications. We propose a model of noise in process mining and extend the framework to mining from ‘noisy’ event logs. From the probabilities and sub-structures in a model, bounds can be given for the amount of data needed for mining. We also consider mining in non-stationary environments, and a method for recovery of the sequence of changed models over time. We conclude by critically evaluating this framework and suggesting directions for future research.
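To make the "processes as distributions over traces" view concrete, here is a small, hypothetical sketch (simplified, with an invented log) that estimates the empirical trace distribution from an event log; a mined model could then be compared against this distribution.

```python
from collections import Counter

def trace_distribution(event_log):
    """Estimate the distribution over traces (activity sequences) from an event log."""
    counts = Counter(tuple(trace) for trace in event_log)
    total = sum(counts.values())
    return {trace: n / total for trace, n in counts.items()}

# Hypothetical log: each trace is the activity sequence of one process instance.
log = [["a", "b", "c"], ["a", "b", "c"], ["a", "c", "b"], ["a", "b", "c"]]
for trace, p in trace_distribution(log).items():
    print(trace, p)
# A mined model can be judged by how closely the distribution it generates
# matches this empirical one (e.g. by total variation distance).
```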
125

Learning deep representations for robotics applications

Aktaş, Ümit Ruşen January 2018 (has links)
In this thesis, two hierarchical learning representations are explored in computer vision tasks. First, a novel graph theoretic method for statistical shape analysis, called Compositional Hierarchy of Parts (CHOP), was proposed. The method utilises line-based features as its building blocks for the representation of shapes. A deep, multi-layer vocabulary is learned by recursively compressing this initial representation. The key contribution of this work is to formulate layerwise learning as a frequent sub-graph discovery problem, solved using the Minimum Description Length (MDL) principle. The experiments show that CHOP employs part shareability and data compression features, and yields state-of- the-art shape retrieval performance on 3 benchmark datasets. In the second part of the thesis, a hybrid generative-evaluative method was used to solve the dexterous grasping problem. This approach combines a learned dexterous grasp generation model with two novel evaluative models based on Convolutional Neural Networks (CNNs). The data- efficient generative method learns from a human demonstrator. The evaluative models are trained in simulation, using the grasps proposed by the generative approach and the depth images of the objects from a single view. On a real grasp dataset of 49 scenes with previously unseen objects, the proposed hybrid architecture outperforms the purely generative method, with a grasp success rate of 77.7% to 57.1%. The thesis concludes by comparing the two families of deep architectures, compositional hierarchies and DNNs, providing insights on their strengths and weaknesses.
126

Web page performance analysis

Chiew, Thiam Kian January 2009 (has links)
Computer systems play an increasingly crucial and ubiquitous role in human endeavour by carrying out or facilitating tasks and providing information and services. How much work these systems can accomplish, within a certain amount of time and using a certain amount of resources, characterises their performance, which is a major concern as the systems are planned, designed, implemented, deployed, and evolved. As one of the most popular computer systems, the Web is inevitably scrutinised in terms of performance analysis dealing with its speed, capacity, resource utilisation, and availability. Performance analyses of the Web are normally done from the perspective of the Web servers and the underlying network (the Internet). This research, on the other hand, approaches Web performance analysis from the perspective of Web pages. The performance metric of interest here is response time, which is studied as an attribute of Web pages rather than purely a result of network and server conditions. A framework consisting of the measurement, modelling, and monitoring (3Ms) of Web pages, revolving around response time, is adopted to support the performance analysis activity. The measurement module enables Web page response time to be measured and supports the modelling module, which in turn provides references for the monitoring module; the monitoring module estimates response time. The three modules are used in the software development lifecycle to ensure that developed Web pages deliver at worst a satisfactory response time (within a maximum acceptable time), or preferably a much better one, thereby maximising the efficiency of the pages. The framework proposes a systematic way to understand response time as it relates to specific characteristics of Web pages and explains how individual Web page response time can be examined and improved.
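A minimal sketch of page-level response time measurement in the spirit of the measurement module (this is an illustrative simplification, not the thesis's tool; the URL is a placeholder and resources are fetched sequentially rather than in parallel as a browser would):

```python
import time
import urllib.request

def fetch_time(url: str) -> float:
    """Return the elapsed time (seconds) to download one resource."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

def page_response_time(page_url: str, resource_urls=()) -> float:
    """Crude page-level response time: the HTML document plus each embedded
    resource, fetched one after another."""
    return fetch_time(page_url) + sum(fetch_time(u) for u in resource_urls)

if __name__ == "__main__":
    # Hypothetical example; a real tool would parse the HTML to find its resources.
    print(f"{page_response_time('https://example.com/'):.3f} s")
```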
127

Infrastructure support for adaptive mobile applications

Friday, Adrian January 1996 (has links)
Recent growth in the number and quality of wireless network technologies has led to an increased interest in mobile computing. Furthermore, these technologies have now advanced sufficiently to allow 'advanced applications' to be engineered. Applications such as these are characterised by complex patterns of distribution and interaction, support for collaboration and multimedia data, and are typically required to operate over heterogeneous networks and end-systems. Given these operating requirements, it is the author's contention that advanced applications must adapt their behaviour in response to changes in their environment in order to operate effectively. Such applications are termed adaptive applications. This thesis investigates the support required by advanced applications to facilitate operation in heterogeneous networked environments. A set of generic techniques is presented that enables existing distributed systems platforms to provide support for adaptive applications. These techniques are based on the provision of a QoS framework and a supporting infrastructure comprising a new remote procedure call package and supporting services. The QoS framework centres on the ability to establish explicit bindings between objects. Explicit bindings enable application requirements to be specified and provide a handle through which applications can exert control and, more significantly, be informed of violations of the requested QoS. These QoS violations enable the applications to discover changes in their underlying environment and offer them the opportunity to adapt. The proposed architecture is validated through an implementation of the framework based on an existing distributed systems platform. The resulting architecture is used to underpin a novel collaborative mobile application aimed at supporting field workers within the utilities industry. The application in turn is used as a measure to gauge the effectiveness of the support provided by the platform. In addition, the design, implementation and evaluation of the application are used throughout the thesis to illustrate various aspects of platform support.
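The idea of an explicit binding that notifies the application of QoS violations can be sketched as follows (the class, parameter names, and the adaptation action are hypothetical illustrations, not the platform's actual API):

```python
from typing import Callable

class ExplicitBinding:
    """Hypothetical sketch of an explicit binding: the application states a QoS
    requirement and registers a callback invoked on violation, giving it the
    opportunity to adapt (e.g. switch to a lower-fidelity media stream)."""

    def __init__(self, min_throughput_kbps: float,
                 on_violation: Callable[[float], None]):
        self.min_throughput_kbps = min_throughput_kbps
        self.on_violation = on_violation

    def report_measurement(self, throughput_kbps: float) -> None:
        """Called by a monitoring service with the observed QoS."""
        if throughput_kbps < self.min_throughput_kbps:
            self.on_violation(throughput_kbps)

def adapt(observed: float) -> None:
    print(f"QoS violation: {observed} kbps observed; switching codec")

binding = ExplicitBinding(min_throughput_kbps=64.0, on_violation=adapt)
binding.report_measurement(80.0)   # within requirement, no callback
binding.report_measurement(23.5)   # violation: application is told and adapts
```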
128

The impact of localized road accident information on road safety awareness

Zheng, Yunan January 2007 (has links)
The World Health Organization (WHO) estimates that road traffic accidents represent the third leading cause of ‘death and disease’ worldwide. Many countries have therefore launched safety campaigns intended to reduce road traffic accidents by increasing public awareness. In almost every case, however, a reduction in the total number of fatalities has not been matched by a comparable fall in the total frequency of road traffic accidents; low-severity incidents remain a significant problem. One possible explanation is that these road safety campaigns have had less effect than design changes: active safety devices, such as anti-lock braking, and passive measures, such as side impact protection, serve to mitigate the consequences of those accidents that do occur. A number of psychological phenomena, such as attribution error, explain the mixed success of road safety campaigns: most drivers believe that they are less likely to be involved in an accident than other motorists. Existing road safety campaigns do little to address this problem; they focus on national and regional statistics that often seem remote from the local experiences of road users. Our argument is that localized road accident information would have a greater impact on people’s safety awareness. This thesis therefore describes the design and development of a software tool to provide the general public with access to information on the location and circumstances of road accidents in a Scottish city. We also present the results of an evaluation to determine whether the information provided by this software has any impact on individual risk perception. A route planning experiment was also carried out; its results provide further evidence that road users would consider accident information if it were available to them.
129

Analysing accident reports using structured and formal methods

Burns, Colin Paul January 2000 (has links)
Formal methods are proposed as a means of improving accident reports, such as the report into the 1996 fire in the Channel Tunnel between the UK and France. The size and complexity of accident reports create difficulties for formal methods, which traditionally suffer from problems of scalability and poor readability. This thesis demonstrates that features of an engineering-style formal modelling process, particularly the structuring of activity and the management of information, reduce the impact of these problems and improve the accuracy of formal models of accident reports. This thesis also contributes a detailed analysis of the methodological requirements for constructing accident report models. Structured, methodical construction and mathematical analysis of the models elicit significant problems in the content and argumentation of the reports; once elicited, these problems can be addressed. This thesis demonstrates the benefits and limitations of taking a wider scope in the modelling process than is commonly adopted for formal accident analysis. We present a deontic action logic as a language for constructing models of accident reports. Deontic action models offer a novel view of the report, which highlights both the expected and the actual behaviour described in the report, and facilitates examination of the conflict between the two. This thesis contributes an objective analysis of the utility of both deontic and action logic operators for the task of modelling accident reports. A tool is also presented that executes a subset of the logic, including these deontic and action logic operators.
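As a rough, propositional illustration of contrasting expected with actual behaviour (the actions and sets below are invented for illustration; a deontic action logic is considerably richer than this):

```python
# Minimal sketch (hypothetical): obligations and prohibitions over actions,
# checked against the actions actually recorded in a report.
obliged    = {"sound_alarm", "stop_train"}      # O(a): a ought to occur
prohibited = {"open_doors_in_tunnel"}           # F(a): a ought not to occur
actual     = {"sound_alarm", "open_doors_in_tunnel"}

violations  = {a for a in obliged if a not in actual}   # obliged but not done
infractions = actual & prohibited                       # done but forbidden
print("unfulfilled obligations:", violations)
print("prohibited actions performed:", infractions)
```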
130

Embedding expert systems in semi-formal domains : examining the boundaries of the knowledge base

Whitley, Edgar A. January 1990 (has links)
This thesis examines the use of expert systems in semi-formal domains. The research identifies the main problems with semi-formal domains and proposes and evaluates a number of different solutions to them. The thesis considers the traditional approach to developing expert systems, which sees domains as being formal, and notes that it repeatedly faces problems that result from informal features of the problem domain. To circumvent these difficulties, experience or other subjective qualities are often used, but these are not supported by the traditional approach to design. The thesis examines the formal approach and compares it with a semi-formal approach to designing expert systems that is heavily influenced by the socio-technical view of information systems. From this basis it examines a number of problems that limit the construction and use of knowledge bases in semi-formal domains. These limitations arise from the nature of the problem being tackled, in particular problems of natural language communication and tacit knowledge, and also from the character of computer technology and the role it plays. The thesis explores the possible mismatch between a human user and the machine and models the various types of confusion that arise. The thesis describes a number of practical solutions to overcome the problems identified. These solutions are implemented in an expert system shell (PESYS), developed as part of the research. The resulting solutions, based on non-linear documents and other software tools that open up the reasoning of the system, support users of expert systems in examining the boundaries of the knowledge base to help them avoid and overcome any confusion that has arisen. In this way users are encouraged to use their own skills and experiences in conjunction with an expert system to successfully exploit this technology in semi-formal domains.
