About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Engineering a Semantic Web trust infrastructure

Cobden, Marcus January 2014 (has links)
The ability to judge the trustworthiness of information is an important and challenging problem in the field of Semantic Web research. In this thesis, we take an end-to-end look at the challenges posed by trust on the Semantic Web, and present contributions in three areas: a Semantic Web identity vocabulary, a system for bootstrapping trust environments, and a framework for trust-aware information management. Typically, Semantic Web agents, which consume and produce information, are not described with sufficient information to permit those interacting with them to make good judgements of trustworthiness. A descriptive vocabulary for agent identity is required to enable effective inter-agent discourse, and the growth of trust and reputation within the Semantic Web; we therefore present such a foundational identity ontology for describing web-based agents. It is anticipated that the Semantic Web will suffer from a trust network bootstrapping problem. In this thesis, we propose a novel approach which harnesses open data to bootstrap trust in new trust environments. This approach brings together public records published by a range of trusted institutions in order to encourage trust in identities within new environments. Information integrity and provenance are both critical prerequisites for well-founded judgements of information trustworthiness. We propose a modification to the RDF Named Graph data model in order to address serious representational limitations of the named graph proposal, which affect the ability to cleanly represent claims and provenance records. Next, we propose a novel graph-based approach for recording the provenance of derived information. This approach offers computational and memory savings while maintaining the ability to answer graph-level provenance questions. In addition, it allows new optimisations such as strategies to avoid needless repeat computation, and a delta-based storage strategy which avoids data duplication.
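The delta-based storage idea can be sketched in a few lines. The classes, graph names and triples below are invented for illustration and are not the thesis's actual data model: a derived named graph is recorded as its base graph plus added and removed triples, so triples shared with the base are never duplicated.

```python
# Hypothetical sketch: named graphs stored as sets of triples, with a
# delta-based record for derived graphs that avoids data duplication.

class QuadStore:
    def __init__(self):
        self.graphs = {}          # graph name -> set of (s, p, o) triples
        self.provenance = {}      # derived graph name -> provenance record

    def add(self, graph, triple):
        self.graphs.setdefault(graph, set()).add(triple)

    def derive(self, new_graph, base_graph, added=(), removed=(), activity=""):
        # Record the derived graph only as a delta over its base graph.
        self.provenance[new_graph] = {
            "derivedFrom": base_graph,
            "activity": activity,
            "added": set(added),
            "removed": set(removed),
        }

    def triples(self, graph):
        # Materialise a graph, following the delta chain if necessary.
        if graph in self.graphs:
            return set(self.graphs[graph])
        prov = self.provenance[graph]
        base = self.triples(prov["derivedFrom"])
        return (base - prov["removed"]) | prov["added"]

store = QuadStore()
store.add("g:observations", ("ex:sensor1", "ex:reading", "21.5"))
store.add("g:observations", ("ex:sensor1", "ex:unit", "ex:celsius"))
store.derive("g:cleaned", "g:observations",
             removed={("ex:sensor1", "ex:unit", "ex:celsius")},
             activity="unit-normalisation")
print(len(store.triples("g:cleaned")))  # 1: only the reading survives the delta
```

The provenance record doubles as the answer to graph-level provenance questions (which graph, which activity), which is what the delta representation preserves while saving memory.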
82

A framework for the real-time analysis of musical events

Ibbotson, John Bryan January 2009 (has links)
In this thesis I propose a framework for the real-time creation of a harmonic structural model of music. Unlike most uses of computing in musicology, which are based on batch processing, the framework uses publish/subscribe messaging techniques found in business systems to create an interconnected set of collaborating applications within a network that process streamed events of the kind generated during a musical performance. These applications demonstrate the transformation of data in the form of MIDI commands into information and knowledge in the form of the music’s harmonic structure, represented as a model using semantic web techniques. With such a framework, collaborative performances over the network become possible, with a shared representation of the music being performed accessible to all performers, both human and, potentially, software agents. The framework demonstrates novel real-time implementations of pitch spelling, chord and key extraction algorithms interacting with semantic web and database technologies in a collaborative manner. It draws on relevant research in information science, musical cognition, the semantic web and business messaging technologies to implement a framework and set of software components for the real-time analysis of musical events, the output of which is a description of the music’s harmonic structure. Finally, it proposes a pattern-based approach to querying the generated model which suggests a visual query and navigation paradigm.
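The publish/subscribe pipeline can be illustrated with a toy broker and a chord-labelling subscriber. The broker, topic name and triad rules below are invented for illustration; the thesis's chord and key extraction algorithms are considerably more sophisticated.

```python
# Invented sketch of the pub/sub idea: a broker routes MIDI-like note
# events to a subscriber that labels the currently sounding triad.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

class Broker:
    def __init__(self):
        self.subscribers = {}
    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)
    def publish(self, topic, event):
        for handler in self.subscribers.get(topic, []):
            handler(event)

class ChordLabeller:
    def __init__(self):
        self.sounding = set()   # currently sounding pitch classes
    def on_note(self, event):
        pc = event["note"] % 12
        if event["type"] == "note_on":
            self.sounding.add(pc)
        else:
            self.sounding.discard(pc)
    def label(self):
        # Try each sounding pitch class as a root; match major/minor triads.
        for root in sorted(self.sounding):
            rel = {(pc - root) % 12 for pc in self.sounding}
            if rel == {0, 4, 7}:
                return NOTE_NAMES[root] + " major"
            if rel == {0, 3, 7}:
                return NOTE_NAMES[root] + " minor"
        return None

broker = Broker()
labeller = ChordLabeller()
broker.subscribe("midi", labeller.on_note)
for note in (60, 64, 67):  # C4, E4, G4
    broker.publish("midi", {"type": "note_on", "note": note})
print(labeller.label())  # C major
```

Because the broker decouples producers from consumers, further subscribers (key extraction, a shared semantic model) could be attached to the same stream without touching the publisher, which is the design point the abstract makes.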
83

On evidence gathering in 3D point clouds of static and moving objects

Abuzaina, Anas January 2015 (has links)
The recent and considerable progress in 3D sensing technologies mandates the development of efficient algorithms to process the sensed data. Many of these algorithms are based on computing and matching 3D feature descriptors in order to estimate point correspondences between 3D datasets. The dependency on 3D feature description and computation can be a significant limitation to many 3D perception tasks; the fact that there is a variety of criteria used to describe 3D features, such as surface normals and curvature, makes feature-based approaches sensitive to noise and occlusion. In many cases, such as smooth surfaces, computation of feature descriptors can be non-informative. Moreover, the process of computing and matching features requires more computational overhead than using points directly. On the other hand, there has not been much focus on employing evidence-gathering frameworks to obtain solutions for 3D perception problems. Evidence-gathering approaches, which use data directly, have proved to provide robust performance against noise and occlusion. More importantly, evidence-gathering approaches do not require initialisation or training, and avoid the need to solve the correspondence problem. The capability to detect, extract and reconstruct 3D objects without relying on feature matching and estimating correspondences between 3D datasets has not been thoroughly investigated, is certainly desirable, and has many practical applications. In this thesis we present theoretical formulations and practical solutions to 3D perceptual tasks that are based on evidence gathering. We propose a new 3D reconstruction algorithm for rotating objects that is based on motion-compensated temporal accumulation. We also propose two fast and robust Hough Transform-based algorithms for 3D static parametric object detection and 3D moving parametric object extraction.
Furthermore, we introduce two algorithms for 3D motion parameter estimation that are based on Reuleaux's and Chasles' kinematic theorems. The proposed algorithms estimate 3D motion parameters directly from the data by exploiting the geometry of rigid transformation. Moreover, they provide an alternative to both the local and global feature description and matching pipelines commonly used by numerous 3D object recognition and registration algorithms. Our objective is to provide new means for understanding static and dynamic scenes captured by new 3D sensing technologies, as we believe these technologies will be dominant in the perception field as they undergo rapid development. We provide alternative solutions to commonly used feature-based approaches by using new evidence-gathering methods for the processing of 3D range data.
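The Hough-style evidence gathering idea for static parametric objects can be sketched for a sphere of known radius: every point votes directly for candidate centres on an accumulator grid, and the peak wins, with no feature description or correspondence estimation. The coarse direction set, grid step and radius below are illustrative choices, not the thesis's parameters (a full implementation would sample voting directions much more densely).

```python
# Invented sketch of Hough voting for a known-radius sphere in a point cloud.
import itertools
import math
import random

def normalise(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def detect_sphere_centre(points, radius, step=0.25):
    # Coarse set of 26 voting directions (a real system samples a sphere).
    dirs = [normalise(d) for d in itertools.product((-1, 0, 1), repeat=3) if any(d)]
    votes = {}
    for p in points:
        for d in dirs:
            # Each surface point votes for a centre at distance `radius`.
            key = tuple(round((pi + radius * di) / step) for pi, di in zip(p, d))
            votes[key] = votes.get(key, 0) + 1
    best = max(votes, key=votes.get)   # accumulator peak
    return tuple(k * step for k in best)

rng = random.Random(0)
true_centre = (1.0, 2.0, 3.0)
points = []
for _ in range(500):   # sample points on a radius-1 sphere about true_centre
    v = normalise((rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)))
    points.append(tuple(c + vi for c, vi in zip(true_centre, v)))
print(detect_sphere_centre(points, radius=1.0))  # near (1.0, 2.0, 3.0)
```

Note how the data vote directly: there is no descriptor computation and no matching step, which is why such schemes tolerate noise and occlusion (missing surface merely lowers the peak).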
84

Multiple objective optimisation of data and control paths in a behavioural silicon compiler

Baker, Keith Richard January 1992 (has links)
The objective of this research was to implement an `intelligent' silicon compiler that provides the ability to automatically explore the design space and optimise a design, given as a behavioural description, with respect to multiple objectives. The objective has been met by the implementation of the MOODS Silicon Compiler. The user submits goals or objectives to the system, which automatically finds near-optimal solutions. As objectives may be conflicting, trade-offs between synthesis tasks are essential and consequently their simultaneous execution must occur. Tasks are decomposed into behaviour-preserving transformations which, due to their completeness, can be applied in any sequence to a multi-level representation of the design. An accurate evaluation of the design is ensured by feeding technology-dependent information up to a cost function. The cost function guides the simulated annealing algorithm in applying transformations to iteratively optimise the design. The simulated annealing algorithm provides an abstractness from the transformations and the designer's objectives. This abstractness avoids the construction of tailored heuristics which pre-program trade-offs into a system. Pre-programmed trade-offs, used in most systems, assume a particular shape to the trade-off curve and are inappropriate because trade-offs are technology-dependent. The lack of pre-programmed trade-offs in the MOODS system allows it to adapt to changes in technology or library cells. The choice of cells and their subsequent sharing are based on the user's criteria expressed in the cost function, rather than being pre-programmed into the system. The results show that implementations created by MOODS are better than or equal to those achieved by other systems. Comparisons with other systems highlighted the importance of specifying all of a design's data, since missing data misrepresents the design and leads to misleading comparisons.
The MOODS synthesis system includes an efficient method for automated design space exploration whereby a varied set of near-optimal implementations can be produced from a single behavioural specification. Design space exploration is an important aspect of designing by high-level synthesis and of the development of synthesis systems. It allows the designer to obtain a perspicuous characterisation of the design space and to investigate alternative designs.
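The annealing loop guided by a multi-objective cost function can be sketched as follows. The two transformations, the cost model and the weights are invented stand-ins for MOODS's behaviour-preserving transformations and technology-dependent costs; the point is only that changing the user's weights, not the algorithm, changes which trade-off is reached.

```python
# Invented sketch: simulated annealing applying design transformations
# under a weighted multi-objective cost function.
import math
import random

def cost(design, weights):
    # Weighted sum of competing objectives (here: area and delay).
    return weights["area"] * design["area"] + weights["delay"] * design["delay"]

def anneal(design, transforms, weights, t0=10.0, cooling=0.995, steps=4000, seed=1):
    rng = random.Random(seed)
    t = t0
    current = dict(design)
    for _ in range(steps):
        candidate = rng.choice(transforms)(dict(current))
        delta = cost(candidate, weights) - cost(current, weights)
        # Always accept improvements; accept worsening moves with a
        # probability that shrinks as the temperature falls.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = candidate
        t *= cooling
    return current

def share_unit(d):
    # Sharing a functional unit: smaller area, slightly longer delay.
    if d["units"] > 1:
        d["units"] -= 1
        d["area"] -= 4
        d["delay"] += 1
    return d

def duplicate_unit(d):
    # Duplicating a unit: more parallelism, larger area.
    d["units"] += 1
    d["area"] += 4
    d["delay"] = max(1, d["delay"] - 1)
    return d

start = {"units": 8, "area": 40, "delay": 10}
area_first = anneal(start, [share_unit, duplicate_unit], {"area": 1.0, "delay": 0.1})
speed_first = anneal(start, [share_unit, duplicate_unit], {"area": 0.1, "delay": 1.0})
print(area_first["area"] < speed_first["area"])  # True: weights steer the trade-off
```

No trade-off shape is pre-programmed: the same transformation set converges to a small, slow design or a large, fast one purely according to the submitted objectives.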
85

Extending Event-B with discrete timing properties

Sarshogh, Mohammad Reza January 2013 (has links)
Event-B is a formal language for systems modelling, based on set theory and predicate logic. It has the advantage of mechanized proof, and it is possible to model a system at several levels of abstraction by using refinement. Discrete timing properties are important in many critical systems. However, modelling of timing properties is not directly supported in Event-B. In this work, we identify three main categories of discrete timing properties for trigger-response patterns: deadline, delay and expiry. We introduce language constructs for each of these timing properties that augment the Event-B language. We describe how these constructs have been given a semantics in terms of the standard Event-B constructs. To ease the process of using timing properties in a refinement-based development, we introduce patterns for refining the timing constructs that allow timing properties on abstract models to be replaced by timing properties on refined models. The language constructs and refinement patterns are illustrated through some generic examples. We have developed a tool to support our approach. Our tool is a plug-in to the Rodin tool-set for Event-B and automates the translation of timing properties to Event-B as well as the generation of gluing invariants, required to verify the consistency of timing property refinement. Finally, we demonstrate the practicality of our approach by going through the modelling and verification process of two real-time case studies. The main focus will be the usefulness of the timing refinement patterns in a step-wise modelling and verification process of a real-time system.
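Operationally, the deadline flavour of a trigger-response property can be illustrated by checking a discrete event trace: once the trigger occurs, the response must occur within a bounded number of ticks. This is an invented illustration of what the property means, not the thesis's Event-B semantics or its proof-based treatment (and it covers only deadline, not delay or expiry).

```python
# Invented sketch: checking a discrete deadline trigger-response
# property over a timed event trace.

def satisfies_deadline(trace, trigger, response, deadline):
    """trace: list of (tick, event) pairs with non-decreasing ticks."""
    pending = None  # tick at which an unanswered trigger occurred
    for tick, event in trace:
        if pending is not None and tick - pending > deadline:
            return False            # deadline passed without a response
        if event == trigger and pending is None:
            pending = tick
        elif event == response and pending is not None:
            pending = None          # answered in time
    # An open trigger with time remaining is not yet a violation.
    return True

ok_trace = [(0, "tick"), (1, "request"), (3, "grant")]
bad_trace = [(0, "tick"), (1, "request"), (9, "grant")]
print(satisfies_deadline(ok_trace, "request", "grant", deadline=5))   # True
print(satisfies_deadline(bad_trace, "request", "grant", deadline=5))  # False
```

In the Event-B setting this obligation is not checked on traces but discharged by proof, with gluing invariants relating abstract and refined timing properties; the trace check above only conveys the intended meaning.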
86

Control of large distributed systems using games with pure strategy Nash equilibria

Chapman, Archie C. January 2009 (has links)
Control mechanisms for optimisation in large distributed systems cannot be constructed using traditional methods of control, because such systems are typically characterised by distributed information and costly and/or noisy communication. Furthermore, noisy observations and dynamism are also inherent to these systems, so their control mechanisms need to be flexible, agile and robust in the face of these characteristics. In such settings, a good control mechanism should satisfy the following four design requirements: (i) it should produce high quality solutions, (ii) it should be robust and flexible in the face of additions, removals and failures of components, (iii) it should operate by making limited use of communication, and (iv) its operation should be computationally feasible. Against this background, in order to satisfy these requirements, in this thesis we adopt a design approach based on dividing control over the system across a team of self-interested agents. Such multi-agent systems (MAS) are naturally distributed (matching the application domains in question), and by pursuing their own private goals, the agents can collectively implement robust, flexible and scalable control mechanisms. In more detail, the design approach we adopt is (i) to use games with pure strategy Nash equilibria as a framework or template for constructing the agents’ utility functions, such that good solutions to the optimisation problem arise at the pure strategy Nash equilibria of the game, and (ii) to derive distributed techniques for solving the games for their Nash equilibria. The specific problems we tackle can be grouped into four main topics. First, we investigate a class of local algorithms for distributed constraint optimisation problems (DCOPs).
We introduce a unifying analytical framework for studying such algorithms, and develop a parameterisation of the algorithm design space, which represents a mapping from the algorithms’ components to their performance according to each of our design requirements. Second, we develop a game-theoretic control mechanism for distributed dynamic task allocation and scheduling problems. The model in question is an expansion of DCOPs to encompass dynamic problems, and the control mechanism we derive builds on the insights from our first topic to address our four design requirements. Third, we elaborate a general class of problems including DCOPs with noisy rewards and state observations, which are realistic traits of great concern in real-world problems, and derive control mechanisms for these environments. These control mechanisms allow the agents either to learn their reward functions or to decide when to make observations of the world’s state and/or communicate their beliefs over the state of the world, in such a manner that they perform well according to our design requirements. Fourth, we derive an optimal algorithm for computing and optimising over pure strategy Nash equilibria in games with sparse interaction structure. By exploiting the structure present in many multi-agent interactions, this distributed algorithm can efficiently compute equilibria that optimise various criteria, thus reducing the computational burden on any one agent and operating using less communication than an equivalent centralised algorithm. For each of these topics, the control mechanisms that we derive are developed such that they perform well according to all four of our design requirements. In sum, by making the above contributions to these specific topics, we demonstrate that the general approach of using games with pure strategy Nash equilibria as a template for designing MAS produces good control mechanisms for large distributed systems.
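The core idea of reaching a pure strategy Nash equilibrium with a local rule can be illustrated by asynchronous best-response dynamics on a two-player coordination game. The payoffs below are invented; in the thesis the utility functions are engineered so that good solutions to the underlying optimisation problem arise at the equilibria.

```python
# Invented sketch: asynchronous best-response dynamics converging to a
# pure strategy Nash equilibrium of a small coordination game.

# payoff[(a, b)] = (utility to player 0, utility to player 1)
payoff = {
    ("A", "A"): (2, 2),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (1, 1),
}
actions = ["A", "B"]

def best_response(player, other_action):
    if player == 0:
        return max(actions, key=lambda a: payoff[(a, other_action)][0])
    return max(actions, key=lambda a: payoff[(other_action, a)][1])

def best_response_dynamics(profile, rounds=10):
    profile = list(profile)
    for _ in range(rounds):
        changed = False
        # Players update one at a time (asynchronously), each against the
        # other's current action.
        for player in (0, 1):
            br = best_response(player, profile[1 - player])
            if br != profile[player]:
                profile[player] = br
                changed = True
        if not changed:
            return tuple(profile)   # no one wants to deviate: a pure NE
    return tuple(profile)

print(best_response_dynamics(("B", "A")))  # ('A', 'A')
```

The asynchronous update order matters: simultaneous updates can cycle in coordination games, whereas one-at-a-time best responses converge in (exact) potential games, which is the class of games such design templates typically target.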
87

Enhancing retrieval and discovery of desktop documents

Mosweunyane, Gontlafetse January 2009 (has links)
Personal computers provide users with the ability to create, organize, store and access large amounts of information. Most of this information is in the form of documents in files, organized in the hierarchical folder structures provided by the operating system. Operating system-provided access to these data is mainly through structure-guided navigation, and more recently through keyword search. This thesis describes the author's research into the accessibility and utilization of personal documents stored and organized using the hierarchical file system provided by common operating systems. An investigation was carried out into how users currently store and access their documents in these structures. Access and utility problems triggered a need to reconsider the navigation methods currently provided. Further investigation into navigation of personal document hierarchies using semantic metadata derived from the documents was carried out. A more intuitive exploratory interface that exposes the metadata for browsing-style navigation was implemented. The underlying organization is based on a model for navigation whereby documents are represented using index terms and the associations between them are exposed to create a linked, similarity-based navigation structure. Exposure of metadata-derived index terms in an interface was hypothesized to reduce the user's cognitive load and enable efficient and effective retrieval, while also providing cues for the discovery and recognition of associations between documents. Evaluation results of the implementation support this hypothesis for retrieval of deeply located documents, as well as better overall effectiveness in the association and discovery of documents. The importance of semantic document metadata is also highlighted in demonstrations involving transfer of documents from the desktop to other organized document stores such as a repository.
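The similarity-based navigation structure can be sketched with index-term overlap. The documents, terms, Jaccard measure and threshold below are invented for illustration; the thesis derives its index terms from document metadata.

```python
# Invented sketch: documents represented by index-term sets, with term
# overlap exposing the links of a similarity-based navigation structure.

docs = {
    "thesis_draft.tex": {"semantic", "metadata", "navigation", "desktop"},
    "holiday_photos.txt": {"beach", "family", "photos"},
    "paper_notes.md": {"metadata", "retrieval", "navigation"},
    "grocery_list.txt": {"shopping", "family"},
}

def jaccard(a, b):
    # Overlap of two term sets, in [0, 1].
    return len(a & b) / len(a | b)

def related(doc, threshold=0.2):
    """Documents linked to `doc` in the navigation structure."""
    terms = docs[doc]
    return sorted(other for other, t in docs.items()
                  if other != doc and jaccard(terms, t) >= threshold)

print(related("thesis_draft.tex"))  # ['paper_notes.md']
```

Unlike folder navigation, such links cut across the hierarchy: two documents stored far apart in the tree become one click apart whenever their index terms overlap.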
88

An incremental refinement approach to a development of a flash-based file system in Event-B

Damchoom, Kriangsak January 2010 (has links)
Nowadays, many formal methods are used in the area of software development, accompanied by a number of advanced theories and tools. However, more experiments are still required in order to provide significant evidence that will convince and encourage users to use, and gain more benefits from, those theories and tools. Event-B is a formalism used for specifying and reasoning about systems. Rodin is an open and extensible tool for Event-B specification, refinement and proof. The flash file system is a complex system. Such systems are a challenge to specify and verify at this moment in time. This system was chosen as a case study for our experiments, carried out using Event-B and the Rodin tool. The experiments were aimed at developing a rigorous model of a flash-based file system, including an implementation of the model, and at providing useful evidence and guidelines to developers and the software industry. We believe that these would convince users and make formal methods more accessible. Incremental refinement was chosen as the strategy in our development. Refinement was used for two different purposes: feature augmentation and structural refinement (covering event and machine decomposition). Several techniques and styles of modelling were investigated and compared, to produce some useful guidelines for modelling, refinement and proof. The model of the flash-based file system we have completed covers three main issues: fault tolerance, concurrency and the wear-levelling process. Our model can deal with concurrent read/write operations and other processes such as block relocation and block erasure. The model tolerates faults that may occur during reading/writing of files. We believe our development acts as an exemplar that other developers can learn from. We also provide systematic rules for the translation of Event-B models into Java code. However, more work is required to make these rules more applicable and useful in the future.
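The wear-levelling idea can be illustrated with a toy allocator that always writes to the least-erased free block, so erasures spread evenly across the device. This is an invented sketch of the general mechanism, not the thesis's Event-B model.

```python
# Invented sketch: least-worn-first block allocation as a simple
# wear-levelling policy for flash memory.

class Flash:
    def __init__(self, n_blocks):
        self.erase_count = [0] * n_blocks
        self.free = set(range(n_blocks))

    def allocate(self):
        # Pick the free block with the fewest erasures so far.
        block = min(self.free, key=lambda b: self.erase_count[b])
        self.free.discard(block)
        return block

    def erase(self, block):
        # Erasure wears the block; reclaim it into the free pool.
        self.erase_count[block] += 1
        self.free.add(block)

flash = Flash(4)
for _ in range(20):          # write then immediately reclaim, 20 times
    b = flash.allocate()
    flash.erase(b)
print(flash.erase_count)     # [5, 5, 5, 5]: wear stays even
```

Because each allocation picks a minimum-wear block, the erase counts never differ by more than one; a model of the policy would state exactly that as an invariant to be preserved by every event.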
89

Investigating adoption of, and success factors for, agile software development in Malaysia

Asnawi, Ani Liza January 2012 (has links)
Agile methods are sets of software practices that can produce products faster and at the same time deliver what customers want. Despite these benefits, however, few studies can be found from the Southeast Asia region, particularly Malaysia. Furthermore, many software processes were developed and produced in the US and European countries, so they are tailored to those cultures, and most empirical evidence comes from those countries. In this research, the perception of Agile methods, the challenges in relation to Agile adoption, and how the methods can be used successfully (the impact/benefits) were investigated from the perspective of Malaysian software practitioners. Consequently, the research introduced two models which provide interaction and causality among the factors, and which can help software practitioners in Malaysia to determine and understand the aspects important for successful Agile adoption. Agile focuses on the ‘people aspect’, therefore cultural differences need to be addressed. Malaysia is a country with three main ethnic groups (Malay, Chinese and Indian), and the first language is Malay. English is the second language in the country and is the standard language used in the business environment, including the software business. This study started by investigating the awareness of software practitioners in Malaysia regarding Agile methods. Low awareness was identified and, interestingly, the language aspect and organisational structure/culture were found to have a significant association with awareness of Agile methods. Those using the English language were found to be more aware of Agile methods. The adoption of Agile methods in the country seems to be low, although this might be changing over time. Issues from the early adopters were qualitatively investigated (with seven organisations and 13 software practitioners) to understand Agile adoption in Malaysia. Customers’ education, mind set, people and management were found to be important from these interviews.
The initial results and findings served as background for further investigating the factors important to the adoption of Agile methods from the Malaysian perspective. The study continued with a survey and further interviews involving seven organisations (three local and four multinational companies) and 14 software practitioners. The survey received 207 responses; the language aspect was found significant for Agile usage and Agile beliefs. Agile usage was also found significant for organisation type (government/non-government), indicating a lack of adoption in the government sector. In addition, all factors investigated were found to be significant for obtaining the impact and benefits of Agile. The strongest relationship was identified for the organisational aspect, followed by knowledge and involvement from all parties. Qualitative investigation supported and explained the results obtained from the survey and, from here, the top factors for adoption and success in applying Agile were discovered to be involvement from all parties, which requires both the organisation and its people to make it happen. The most important factors (or dimensions) identified by both groups (Agile users and non-Agile) were in the dimensions of organisational and people-related aspects (including customers). Finally, the study introduced two models which capture causal relationships in predicting the impact and benefits (success) of Agile methods. This research is based on empirical investigation; hence the study suggests that Agile methods must be adjusted to the organisation and the people to get involvement from all parties. Agile is more easily adopted in an organisation with low power distance and low uncertainty avoidance. In addition, multinational companies and the private sector were found to facilitate Agile methods. In these organisations, the employees were found to be proficient in the English language.
90

On the analysis of structure in texture

Waller, Ben January 2014 (has links)
Until now, texture has been largely viewed within a statistical or holistic paradigm: textures are described as a whole and by summary statistics. In this thesis it is assumed that there is a structure underlying the texture, leading to models, to reconstruction and to scale-based analysis. Local Binary Patterns (LBPs) are used throughout as the basis functions for texture, and methods have been developed to reconstruct texture images from arrays of their LBP codes. The reconstructed images contain texture properties identical to the original, providing the same array of LBP codes. An evidence gathering approach has been developed to provide a model for each texture class based on the spatial structure of these elements throughout the image. This method, called Evidence Gathering Texture Segmentation, provides good results for segmentation, with smooth boundaries and minimal oversegmentation, when compared with existing methods. Analysing micro- and macro-structures confers the ability to include scale in texture analysis. A novel combination of lowpass and highpass filters produces images devoid of structures at certain scales, allowing both the micro- and macro-structures to be analysed without occlusion by other scales of texture within the image. A two-stage training process is used to learn the optimum filter sizes and to produce model histograms for each known texture class. The process, called Accumulative Filtering, gives superior results compared to the best multiresolution LBP configuration and to analysis using only lowpass filters. By reconstruction, by evidence gathering and by analysis of micro- and macro-structures, new capabilities are described to exploit structure within the analysis of texture.
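The 3x3 Local Binary Pattern code that serves as the basis function can be computed as below: each of the eight neighbours contributes one bit, set when that neighbour is at least as bright as the centre pixel. The clockwise-from-top-left neighbour ordering is one common convention, assumed here for illustration.

```python
# Minimal sketch of the 3x3 Local Binary Pattern code for one pixel.

# Offsets of the 8 neighbours, clockwise from the top-left corner.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(image, r, c):
    """8-bit LBP code of pixel (r, c); image is a 2D list of intensities."""
    centre = image[r][c]
    code = 0
    for bit, (dr, dc) in enumerate(OFFSETS):
        if image[r + dr][c + dc] >= centre:
            code |= 1 << bit      # neighbour as bright as centre: set bit
    return code

img = [[9, 9, 9],
       [1, 5, 1],
       [1, 1, 1]]
print(lbp_code(img, 1, 1))  # 7: only the three top-row bits are set
```

Reconstructing an image from an array of such codes, as the thesis does, amounts to finding intensities consistent with every pixel's inequality pattern at once, which is why the reconstruction reproduces the same LBP array.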
