31

Prediction learning in robotic manipulation

Kopicki, Marek January 2010
This thesis addresses an important problem in robotic manipulation: the ability to predict how objects behave under manipulative actions. This ability is useful for planning object manipulations. Physics simulators can be used to do this, but they model many kinds of object interactions poorly, and unless there is a precise description of an object's properties, their predictions may be unreliable. An alternative is to learn a model for objects by interacting with them. This thesis specifically addresses the problem of learning to predict the interactions of rigid bodies in a probabilistic framework, and demonstrates results in the domain of robotic push manipulation. During training, a robotic manipulator applies pushes to objects and learns to predict their resulting motions. The learning does not make explicit use of physics knowledge, nor is it restricted to domains with any particular physical properties. The prediction problem is posed in terms of estimating probability densities over the possible rigid body transformations of an entire object, as well as parts of an object, under a known action. Density estimation is useful in that it enables predictions with multimodal outcomes, but it also enables compromise predictions from multiple combined expert predictors in a product of experts architecture. It is shown that a product of experts architecture can be learned and that it can generalize to novel actions and object shapes, outperforming in most cases an approach based on regression. An alternative, non-learning method of prediction is also presented, in which a simplified physics approach uses the minimum energy principle together with a particle-based representation of the object. A probabilistic formulation enables this simplified physics predictor to be combined with learned predictors in a product of experts. The thesis experimentally compares the performance of the product of densities, regression, and simplified physics approaches. Performance is evaluated through a combination of virtual experiments in a physics simulator and real experiments with a 5-axis arm equipped with a simple rigid finger and a vision system used for tracking the manipulated object.
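A minimal sketch of the product-of-experts idea the abstract describes: several expert predictors each assign a density to candidate outcomes, and their renormalised product yields a compromise prediction. The Gaussian experts, the 1-D displacement space, and all numbers are illustrative assumptions, not the thesis's learned densities.

```python
# Product-of-experts sketch: combine two toy expert densities over a
# discretised set of candidate outcomes by multiplying and renormalising.
import math

def gaussian(x, mean, std):
    """Unnormalised Gaussian density, used here as a toy expert."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2)

# Candidate planar displacements (metres) an object might undergo after
# a push, discretised for illustration.
candidates = [i * 0.01 for i in range(-10, 11)]

# Expert 1: a predictor for whole-object motion.
# Expert 2: a predictor for the motion of a contacted part.
expert_global = lambda x: gaussian(x, mean=0.03, std=0.02)
expert_local  = lambda x: gaussian(x, mean=0.05, std=0.03)

# Product of experts: multiply densities and renormalise, so the combined
# prediction is a compromise that both experts consider likely.
raw = [expert_global(x) * expert_local(x) for x in candidates]
total = sum(raw)
posterior = [p / total for p in raw]

best = candidates[posterior.index(max(posterior))]
print(f"most likely displacement under the product: {best:.2f} m")
```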
32

Automated composition of sequence diagrams

Alwanain, Mohammed Ibrahim January 2016
Software design is a significant stage in the software development life cycle, as it creates a blueprint for the implementation of the software. Design errors lead to costly and deficient implementations, so it is crucial to discover them at an early stage of system development and resolve them. Inspired by various engineering disciplines, the software community proposed the concept of modelling in order to reduce these costly errors. Modelling provides a platform for creating an abstract representation of a software system, and has given rise to various modelling languages such as the Unified Modelling Language (UML), automata, and Petri nets. Because modelling raises the level of abstraction throughout the analysis and design process, it enables system designers to identify errors efficiently. As modern systems become more complex, models are often produced part-by-part to help reduce the complexity of the design. This often results in partial specifications captured in models focusing on a subset of the system. To produce an overall model of the system, such partial models must be composed together. Model composition is the process of combining partial models to create a single coherent model. Because manual model composition is error-prone, time-consuming, and tedious, it should be replaced by automated model composition. This thesis presents a novel automatic composition technique for creating behaviour models, such as a sequence diagram, from partial specifications captured in multiple sequence diagrams, with the help of constraint solvers.
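A minimal sketch of composing partial sequence diagrams with a constraint solver, in the spirit of the approach the abstract describes: message positions become integer variables, each diagram's internal ordering becomes inequality constraints, and the solver produces a merged ordering. The diagrams, message names, and the choice of Z3 are illustrative assumptions.

```python
# Encode message ordering from two partial diagrams as integer
# constraints and let a solver find a consistent composition.
from z3 import Int, Solver, Distinct, sat  # pip install z3-solver

# Two partial diagrams as ordered message lists; "login" is shared.
diagram_a = ["login", "query", "result"]
diagram_b = ["login", "audit"]
messages = sorted(set(diagram_a) | set(diagram_b))

pos = {m: Int(f"pos_{m}") for m in messages}
s = Solver()
s.add(Distinct(*pos.values()))
for d in (diagram_a, diagram_b):
    for earlier, later in zip(d, d[1:]):
        s.add(pos[earlier] < pos[later])  # preserve each diagram's order

if s.check() == sat:
    model = s.model()
    merged = sorted(messages, key=lambda m: model[pos[m]].as_long())
    print("composed ordering:", merged)  # login precedes query and audit
```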
33

Self-aware software architecture style and patterns for cloud-based applications

Faniyi, Funmilade Olugbenga January 2015
Modern cloud-reliant software systems are faced with the problem of cloud service providers violating their Service Level Agreement (SLA) claims. Given the large pool of cloud providers and their instability, cloud applications are expected to cope with these dynamics autonomously. This thesis investigates an approach for designing self-adaptive cloud architectures using a systematic methodology that guides the architect while designing cloud applications. The approach, termed the Self-aware Architecture Pattern, promotes fine-grained representation of architectural concerns to aid design-time analysis of risks and trade-offs. To support the coordination and control of architectural components in decentralised self-aware cloud applications, we propose a reputation-aware posted offer market coordination mechanism. The mechanism builds on the classic posted offer market mechanism and extends it to track the behaviour of unreliable cloud services. The self-aware cloud architecture and its reputation-aware coordination mechanism are quantitatively evaluated within the context of an Online Shopping application using synthetic and realistic workload datasets under various configurations (failure, scale, resilience levels, etc.). Additionally, we qualitatively evaluated our self-aware approach against two classic self-adaptive architecture styles using independent experts' judgment, to unveil its strengths and weaknesses relative to these styles.
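A minimal sketch of a reputation-aware posted offer market in the spirit described above: sellers post fixed prices, a buyer chooses among them, and a reputation score learned from observed SLA compliance steers future choices away from unreliable services. The scoring rule and the exponentially weighted update are illustrative assumptions.

```python
# Posted offer market with reputation tracking: buyers learn which
# sellers honour their SLAs and shift demand accordingly.
import random

random.seed(1)

class Seller:
    def __init__(self, name, price, reliability):
        self.name, self.price = name, price
        self.reliability = reliability  # true prob. of honouring the SLA
        self.reputation = 0.5           # buyer's belief, learned over time

    def serve(self):
        return random.random() < self.reliability

def choose(sellers):
    # Prefer cheap offers from sellers believed to honour their SLAs.
    return max(sellers, key=lambda s: s.reputation / s.price)

sellers = [Seller("A", price=1.0, reliability=0.95),
           Seller("B", price=0.8, reliability=0.50)]

for _ in range(200):
    s = choose(sellers)
    ok = s.serve()
    # Exponentially weighted update of the reputation estimate.
    s.reputation = 0.9 * s.reputation + 0.1 * (1.0 if ok else 0.0)

for s in sellers:
    print(s.name, round(s.reputation, 2))  # A ends well above B
```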
34

Theory grounded design of genetic programming and parallel evolutionary algorithms

Mambrini, Andrea January 2015
Evolutionary algorithms (EAs) have been successfully applied to many problems and applications. Their success comes from being general purpose, meaning that the same EA can be used to solve different problems. Nevertheless, many factors can affect the behaviour and performance of an EA, and it has been proven that no single EA can solve every problem efficiently. This raises the issue of understanding how different design choices affect the performance of an EA and how to design and tune one efficiently. This thesis has two main objectives. On the one hand, we advance the theoretical understanding of evolutionary algorithms, focusing particularly on genetic programming and parallel evolutionary algorithms. We do so by investigating how different design choices affect the performance of the algorithms and by providing rigorously proven bounds on the running time of different designs. This novel knowledge, built upon previous work on the theoretical foundations of EAs, then serves the second objective of the thesis: to provide theory-grounded designs for parallel evolutionary algorithms and genetic programming. This consists of drawing on the analysis of the algorithms to produce provably good algorithm designs.
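As a concrete instance of the kind of rigorously proven runtime bound the abstract mentions, the sketch below runs the textbook (1+1) EA on the OneMax problem, for which the expected optimisation time is known to be O(n log n). This is a standard example from the EA theory literature, not one of the thesis's own algorithms.

```python
# (1+1) EA on OneMax: maximise the number of one-bits in a bit string.
import random

random.seed(0)
n = 50
x = [random.randint(0, 1) for _ in range(n)]

def onemax(bits):
    return sum(bits)

steps = 0
while onemax(x) < n:
    steps += 1
    # Standard bit mutation: flip each bit independently with prob. 1/n.
    y = [b ^ (random.random() < 1.0 / n) for b in x]
    if onemax(y) >= onemax(x):  # elitist acceptance
        x = y

print(f"optimum reached in {steps} generations (expected O(n log n))")
```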
35

Users' trust in open learner models

Ahmad, Norasnita January 2014
This thesis investigates learner trust in an open learner model (OLM). Issues of trust become more important in an OLM because the model is available for learners to inspect, which may heighten their awareness of how a system evaluates their knowledge and updates the model. It is important to provide learners with a trustworthy environment because it encourages them to continue using the system. In this thesis we investigate learner trust from two main perspectives: the system as a whole and the OLM features. From the perspective of the system as a whole, we investigate the extent to which learners trust and accept the OLM on first use, the extent to which they continue to use the OLM optionally after their initial use, and the extent to which they trust and accept the OLM after long-term use. From the perspective of OLM features, we investigate learner trust based on the most common features: (i) the complexity of the model presentation; (ii) the level of learner control over the model; and (iii) the facility to view peer models and release one's own model to peers. Learners appear to hold different levels of trust in the OLM: they trusted the system more in the short term, and their trust varied across different model presentations and levels of learner control. In terms of peer models, a named peer model is trusted more than an anonymous one. Based on the findings, a set of requirements is established to help designers build more trustworthy OLMs.
36

Multi-touch and mobile technologies for galleries, libraries, archives and museums

Hakvoort, Gido Albert January 2016
Technological developments open new opportunities to meet the increasing expectations of visitors to galleries, libraries, archives and museums. Although these technologies provide many new possibilities, each brings its own challenges and limitations. Galleries, libraries, archives and museums should aim to unify many such technologies in order to capture visitor attention, engage interaction and facilitate both personal and social experiences. By incorporating objects, devices and people into a network of interconnected systems, new patterns, interaction types and social relations are expected to emerge. This thesis explores the unification of these technologies, identifies behavioural patterns emerging from visitor interactions, and examines how combining these technologies can contribute to engaging visitor interactions and the effects they have on both individuals and groups. The thesis argues that combining mobile devices and interactive displays offers new, engaging interactions for museum visitors, allowing them to spread their interactions throughout the museum and easily switch between private and social experiences. Museums should therefore adopt combinations of mobile devices and interactive displays to meet the increasing expectations of their visitors and offer both private and social experiences.
37

Matching algorithms for interest management in distributed virtual environments

Liu, Sze-Yeung January 2012
Interest management in distributed virtual environments (DVEs) is a data-filtering technique designed to reduce bandwidth consumption and thereby enhance the scalability of the system. This technique usually involves a process called "interest matching", which determines what data should be sent to the participants and what data should be filtered out. This thesis surveys the state of the art in interest management systems and defines three major design requirements. Based on this requirement analysis, most existing interest matching approaches can be summarised as attempts to resolve the trade-off between runtime efficiency and filtering precision. Although these approaches have been shown to meet their runtime performance requirements, they have a fundamental disadvantage: they perform interest matching at discrete time intervals. As a result, they fail to report events that occur between discrete time-steps. If participants of the DVE ignore these missing events, they will most likely perform incorrect simulations. This thesis presents a new approach called space-time interest matching, which aims to capture the missing events between discrete time-steps. Although this approach requires additional matching effort, a number of novel algorithms are developed to significantly improve its runtime efficiency through the exploitation of parallelism.
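A minimal sketch contrasting discrete interest matching with the space-time idea described above, using 1-D intervals as stand-ins for interest regions: two regions that cross between two time-steps are missed by discrete matching but caught when each region's motion over the step is swept into a bounding interval. All coordinates are illustrative assumptions.

```python
# Discrete vs. space-time (swept) interest matching in one dimension.

def overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def swept(interval_t0, interval_t1):
    # Bounding interval covering the region's whole motion over the step.
    return (min(interval_t0[0], interval_t1[0]),
            max(interval_t0[1], interval_t1[1]))

# Two entities' interest regions at t=0 and t=1; they cross mid-step.
a_t0, a_t1 = (0.0, 1.0), (8.0, 9.0)
b_t0, b_t1 = (4.0, 5.0), (4.0, 5.0)

discrete = overlap(a_t0, b_t0) or overlap(a_t1, b_t1)
space_time = overlap(swept(a_t0, a_t1), swept(b_t0, b_t1))

print("discrete matching sees the event:", discrete)      # False: missed
print("space-time matching sees the event:", space_time)  # True
```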
38

The application of software product line engineering to energy management in the cloud and in virtualised environments

Murwantara, I. Made January 2016
Modern software is created from components which can often perform a large number of tasks. For a given task, there are often many variant components that can be used, so software with comparable functionality can often be produced from a variety of components. The choice of software components influences energy consumption. Software Product Line (SPL) engineering is a popular method of software reuse that supports selecting component configurations. Although SPL has been used to investigate the energy consumption associated with combinations of software components, there has been no in-depth study of how to measure the energy consumed by a configuration of components or the extent to which individual components contribute to energy usage. This thesis investigates how software component diversity affects energy consumption in virtualised environments, and it presents a method for identifying combinations of components that consume less energy. This work gives insight into cultivating green software components by identifying which components influence total energy consumption. Furthermore, the thesis investigates how to use component diversity dynamically to manage energy consumption as the demand on the system changes.
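A minimal sketch of ranking SPL configurations by energy, under the loose assumption that each component variant has a known per-request energy cost: enumerate the combinations of a toy feature model and sort them by total cost. The feature model and cost figures are invented for illustration; the thesis derives such figures from measurements in virtualised environments rather than assuming them.

```python
# Enumerate SPL configurations and rank them by assumed energy cost.
from itertools import product

# Variants per feature, each with an assumed energy cost (joules/request).
feature_model = {
    "web_server": {"apache": 2.1, "nginx": 1.6},
    "database":   {"mysql": 3.0, "postgres": 2.7},
    "cache":      {"none": 0.0, "memcached": 0.9},
}

def configurations(model):
    names = list(model)
    for combo in product(*(model[f].items() for f in names)):
        choice = dict(zip(names, (variant for variant, _ in combo)))
        cost = sum(c for _, c in combo)
        yield choice, cost

ranked = sorted(configurations(feature_model), key=lambda pair: pair[1])
best, cost = ranked[0]
print(f"lowest-energy configuration: {best} ({cost:.1f} J/request)")
```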
39

Taxonomies for software security

Corcalciuc, Horia V. January 2014
A recurring problem with software security is that programmers are encouraged to reason about correctness either at the code level or at the design level, while attacks often take place at intermediate layers of abstraction. The code itself may seem correct and secure as long as its functionality has been demonstrated, for example by showing that some invariant has been maintained. From a high-level perspective, however, parallel executing processes can be seen as one single large program consisting of smaller components that work together to accomplish a task, and, for the duration of that interaction, several smaller invariants have to be maintained. An attacker frequently manages to subvert the behaviour of a program when the invariants of these intermediate steps can be invalidated. Such invariants become difficult to track, especially when the programmer does not explicitly have security in mind. This thesis explores the mechanisms of interaction between concurrent processes and tries to bring some order to synchronization by studying attack patterns, not only at the code level but also from the perspective of abstract programming concepts.
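A minimal sketch of the kind of intermediate-level invariant violation discussed above: a check-then-act withdrawal whose code looks correct in isolation, yet whose invariant (the balance never goes negative) is broken by interleaving between the check and the act. The banking scenario is an illustrative assumption, not an example from the thesis.

```python
# Check-then-act race: each step is locally correct, but the interleaving
# of two threads between check and act violates the global invariant.
import threading
import time

balance = 100
lock = threading.Lock()

def withdraw_unsafe(amount):
    global balance
    if balance >= amount:           # check ...
        time.sleep(0.01)            # widen the race window for the demo
        balance = balance - amount  # ... act: invariant can be violated

def withdraw_safe(amount):
    global balance
    with lock:                      # check and act as one atomic step
        if balance >= amount:
            balance = balance - amount

threads = [threading.Thread(target=withdraw_unsafe, args=(100,))
           for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print("balance after racy withdrawals:", balance)  # likely -100: broken
```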
40

A framework for the analysis and comparison of process mining algorithms

Weber, Philip January 2014
Process mining algorithms use event logs to learn and reason about business processes. Although process mining is essentially a machine learning task, little work has been done on systematically analysing algorithms to understand their fundamental properties, such as how much data is needed for confidence in mining. Nor does any rigorous basis exist on which to choose between algorithms and representations, or compare results. We propose a framework for analysing process mining algorithms. Processes are viewed as distributions over traces of activities and mining algorithms as learning these distributions. We use probabilistic automata as a unifying representation to which other representation languages can be converted. To validate the theory we present analyses of the Alpha and Heuristics Miner algorithms under the framework, and two practical applications. We propose a model of noise in process mining and extend the framework to mining from ‘noisy’ event logs. From the probabilities and sub-structures in a model, bounds can be given for the amount of data needed for mining. We also consider mining in non-stationary environments, and a method for recovery of the sequence of changed models over time. We conclude by critically evaluating this framework and suggesting directions for future research.
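A minimal sketch of the framework's central view as the abstract presents it: a process is a probability distribution over traces of activities, which can be estimated empirically from an event log. The toy log and activities are illustrative assumptions.

```python
# Estimate a process's trace distribution from an event log.
from collections import Counter

# An event log: each trace is the sequence of activities in one case.
log = [("a", "b", "c"), ("a", "b", "c"), ("a", "c", "b"),
       ("a", "b", "c"), ("a", "c", "b"), ("a", "b", "b", "c")]

counts = Counter(log)
total = sum(counts.values())
distribution = {trace: n / total for trace, n in counts.items()}

for trace, p in sorted(distribution.items(), key=lambda kv: -kv[1]):
    print("<" + ",".join(trace) + ">", f"p = {p:.2f}")

# A mining algorithm can then be judged by how close its mined model's
# distribution is to this empirical one; as more traces are observed the
# estimate converges, which is how confidence in mining can be bounded.
```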
