  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Developing a model of mobile Web uptake in the developing world

Purwandari, Betty January 2013 (has links)
This research was motivated by the limited penetration of the Internet within emerging economies and the ‘mobile miracle’, which refers to a steep increase in mobile phone penetration. In the context of the developing world, harnessing the ‘mobile miracle’ to improve Internet access can leverage the potential of the Web. However, no comprehensive model exists that can identify and measure indicators of Mobile Web uptake. The absence of such a model creates problems in understanding the impact of the Mobile Web. This has generated the key question under study in this thesis: “What is a suitable model for Mobile Web uptake and its impact in the developing world?” To address the research question, the Model of Mobile Web Uptake in the Developing World (MMWUDW) was created. It was informed by a literature review, a pilot study in Kenya and expert reviews. The MMWUDW was evaluated using Structural Equation Modelling (SEM) on primary data consisting of questionnaire and interview data from Indonesia, and the SEM analysis was triangulated with the questionnaire results and interview findings. Examining the primary data to evaluate the MMWUDW was essential to understanding why people used mobile phones to make or follow links on the Web. The MMWUDW has three main factors: Mobile Web maturity, uptake and impact. The results of the SEM suggested that mobile networks, percentage of income spent on mobile credit, literacy and digital literacy did not affect Mobile Web uptake. In contrast, web-enabled phones, Web applications or content, and mobile operator services strongly indicated Mobile Web maturity, which was a prerequisite for Mobile Web uptake. Uptake in turn created Mobile Web impact, which included both positive and negative features: ease of access to information and a convenient way to communicate; being entertained and empowered; maintenance of social cohesion and economic benefits; as well as wasting time and money and being exposed to cyberbullying. Moreover, the research identified areas for improvement in the Mobile Web, together with regression equations to measure the factors and indicators of the MMWUDW. Possible future work comprises advancement of the MMWUDW and new Web Science research on the Mobile Web in developing countries.
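The abstract above refers to regression equations for measuring the MMWUDW's factors and indicators. As a rough, hypothetical illustration only (the indicator names, respondent data and coefficients below are invented, not taken from the thesis), an ordinary least-squares fit of an uptake score against three maturity indicators might look like this Python sketch:

```python
import numpy as np

# Hypothetical survey data: one row per respondent, columns are maturity
# indicators suggested by the abstract (web-enabled phone, availability of
# Web applications/content, mobile operator services). Values are invented.
X = np.array([
    [1, 4, 3],
    [1, 5, 4],
    [0, 2, 2],
    [1, 3, 5],
    [0, 1, 1],
], dtype=float)
y = np.array([4.2, 4.8, 1.9, 4.0, 1.2])  # self-reported Mobile Web uptake score

# Add an intercept column and solve the least-squares regression
# uptake ~ b0 + b1*phone + b2*apps + b3*operator_services
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients:", coef)
```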
82

Direct write printed flexible electronic devices on fabrics

Li, Yi January 2014 (has links)
This thesis describes direct write printing methods for achieving flexible electronic devices on fabrics by investigating low-temperature processes and functional conductor, insulator and semiconductor inks. The objective is to print flexible electronic devices onto fabrics solely by inkjet printing or pneumatic dispenser printing. Antennas and capacitors, as intermediate inkjet printed electronic devices, are addressed before transistor fabrication. There are many publications that report inkjet printed flexible electronic devices; however, none of the reported methods use fabrics as the target substrate or are processed at a sufficiently low temperature (≤150 °C) to enable the fabric to survive. The target substrate in this research, standard 65/35 polyester cotton fabric, has a maximum thermal curing condition of 180 °C for 15 minutes and 150 °C for 45 minutes. Therefore the total effective curing process is best kept below 150 °C and within 30 minutes to minimise any potential degradation of the fabric substrate. This thesis reports on an inkjet printed flexible half-wavelength fabric dipole antenna, an inkjet printed fabric patch antenna, an all inkjet printed SU-8 capacitor, an all inkjet printed fabric capacitor and an inkjet printed transistor on a silicon dioxide coated silicon wafer. The measured fabric dipole antenna peak operating frequency is 1.897 GHz with 74.1% efficiency and 3.6 dBi gain. The measured fabric patch antenna peak operating frequency is around 2.48 GHz with efficiency up to 57% and 5.09 dBi gain. The measured capacitance of the printed capacitor using the inkjet printed SU-8 dielectric is 48.5 pF (2.47 pF/mm²) at 100 Hz. The capacitance of an all inkjet printed flexible fabric capacitor is 163 pF (23.1 pF/mm²) at 100 Hz with the UV-curable PVP dielectric ink developed as part of this work.
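For orientation, some of the figures quoted above can be sanity-checked with elementary relations: the free-space half-wavelength at the dipole's measured 1.897 GHz peak, and the electrode areas implied by the reported capacitance and capacitance-per-area values. The Python sketch below is a back-of-the-envelope check under those simplifications (no fabric or dielectric effects modelled); it is not drawn from the thesis:

```python
# Back-of-the-envelope checks on the reported antenna and capacitor figures.
C_LIGHT = 2.998e8  # speed of light, m/s

# Free-space half-wavelength at the dipole's measured 1.897 GHz peak.
# (On a fabric substrate the physical length is shorter because of the
# effective permittivity, which is not modelled here.)
f_dipole = 1.897e9
half_wavelength_mm = C_LIGHT / (2 * f_dipole) * 1e3
print(f"free-space half-wavelength: {half_wavelength_mm:.1f} mm")  # ~79 mm

# Electrode areas implied by the reported capacitance densities.
area_su8_mm2 = 48.5 / 2.47    # 48.5 pF at 2.47 pF/mm^2 -> ~19.6 mm^2
area_pvp_mm2 = 163.0 / 23.1   # 163 pF at 23.1 pF/mm^2  -> ~7.1 mm^2
print(f"SU-8 capacitor area: {area_su8_mm2:.1f} mm^2")
print(f"PVP fabric capacitor area: {area_pvp_mm2:.1f} mm^2")
```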
83

Engineering a Semantic Web trust infrastructure

Cobden, Marcus January 2014 (has links)
The ability to judge the trustworthiness of information is an important and challenging problem in the field of Semantic Web research. In this thesis, we take an end-to-end look at the challenges posed by trust on the Semantic Web, and present contributions in three areas: a Semantic Web identity vocabulary, a system for bootstrapping trust environments, and a framework for trust-aware information management. Typically, Semantic Web agents, which consume and produce information, are not described with sufficient information to permit those interacting with them to make good judgements of trustworthiness. A descriptive vocabulary for agent identity is required to enable effective inter-agent discourse and the growth of trust and reputation within the Semantic Web; we therefore present a foundational identity ontology for describing web-based agents. It is anticipated that the Semantic Web will suffer from a trust network bootstrapping problem. In this thesis, we propose a novel approach which harnesses open data to bootstrap trust in new trust environments. This approach brings together public records published by a range of trusted institutions in order to encourage trust in identities within new environments. Information integrity and provenance are both critical prerequisites for well-founded judgements of information trustworthiness. We propose a modification to the RDF Named Graph data model in order to address serious representational limitations of the named graph proposal, which affect the ability to cleanly represent claims and provenance records. Next, we propose a novel graph-based approach for recording the provenance of derived information. This approach offers computational and memory savings while maintaining the ability to answer graph-level provenance questions. In addition, it allows new optimisations such as strategies to avoid needless repeat computation, and a delta-based storage strategy which avoids data duplication.
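The thesis proposes its own modification of the RDF Named Graph model, which is not reproduced here. The sketch below is only a generic rdflib illustration of the underlying idea of naming a graph of claims and attaching provenance statements to the graph identifier using the W3C PROV vocabulary; all URIs are invented examples:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")
PROV = Namespace("http://www.w3.org/ns/prov#")

# A named graph containing a claim made by some agent.
claims_graph_id = URIRef("http://example.org/graphs/claims-1")
claims = Graph(identifier=claims_graph_id)
claims.add((EX.alice, EX.memberOf, EX.someInstitution))

# A separate graph holding provenance about the claims graph itself,
# keyed on the graph identifier.
provenance = Graph()
provenance.add((claims_graph_id, RDF.type, PROV.Entity))
provenance.add((claims_graph_id, PROV.wasAttributedTo, EX.registryService))
provenance.add((claims_graph_id, PROV.generatedAtTime,
                Literal("2014-01-01T00:00:00Z", datatype=XSD.dateTime)))

print(provenance.serialize(format="turtle"))
```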
84

A framework for the real-time analysis of musical events

Ibbotson, John Bryan January 2009 (has links)
In this thesis I propose a framework for the real-time creation of a harmonic structural model of music. Unlike most uses of computing in musicology, which are based on batch processing, the framework uses publish/subscribe messaging techniques found in business systems to create an interconnected set of collaborating applications within a network that process streamed events of the kind generated during a musical performance. These applications demonstrate the transformation of data, in the form of MIDI commands, into information and knowledge in the form of the music’s harmonic structure, represented as a model using semantic web techniques. With such a framework, collaborative performances over the network become possible, with a shared representation of the music being performed accessible to all performers, both human and, potentially, software agents. The framework demonstrates novel real-time implementations of pitch spelling, chord and key extraction algorithms interacting with semantic web and database technologies in a collaborative manner. It draws on relevant research in information science, musical cognition, semantic web and business messaging technologies to implement a framework and set of software components for the real-time analysis of musical events, the output of which is a description of the music’s harmonic structure. Finally, it proposes a pattern-based approach to querying the generated model, which suggests a visual query and navigation paradigm.
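As a toy illustration of the kind of transformation described (streamed MIDI events turned into harmonic labels), and emphatically not the thesis's pitch-spelling or chord-extraction algorithms, the following Python sketch maps sounding MIDI note numbers to pitch classes and labels simple major and minor triads:

```python
# Toy chord labelling from MIDI note numbers (not the thesis algorithms).
PITCH_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def label_triad(midi_notes):
    """Return a crude label for a set of sounding MIDI notes, or None."""
    pitch_classes = sorted({n % 12 for n in midi_notes})
    if len(pitch_classes) != 3:
        return None
    # Try each pitch class as a candidate root and check the interval pattern.
    for root in pitch_classes:
        intervals = sorted((pc - root) % 12 for pc in pitch_classes)
        if intervals == [0, 4, 7]:
            return PITCH_NAMES[root] + " major"
        if intervals == [0, 3, 7]:
            return PITCH_NAMES[root] + " minor"
    return None

print(label_triad([60, 64, 67]))  # C major (C4, E4, G4)
print(label_triad([57, 60, 64]))  # A minor (A3, C4, E4)
```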
85

On evidence gathering in 3D point clouds of static and moving objects

Abuzaina, Anas January 2015 (has links)
The recent and considerable progress in 3D sensing technologies mandates the development of efficient algorithms to process the sensed data. Many of these algorithms are based on computing and matching 3D feature descriptors in order to estimate point correspondences between 3D datasets. The dependency on 3D feature description and computation can be a significant limitation to many 3D perception tasks; the fact that there is a variety of criteria used to describe 3D features, such as surface normals and curvature, makes feature-based approaches sensitive to noise and occlusion. In many cases, such as smooth surfaces, computation of feature descriptors can be non-informative. Moreover, the process of computing and matching features requires more computational overhead than using points directly. On the other hand, there has not been much focus on employing evidence gathering frameworks to obtain solutions for 3D perception problems. Evidence gathering approaches, which use data directly, have proved to provide robust performance against noise and occlusion. More importantly, evidence gathering approaches do not require initialisation or training, and avoid the need to solve the correspondence problem. The capability to detect, extract and reconstruct 3D objects without relying on feature matching and estimating correspondences between 3D datasets has not been thoroughly investigated, yet it is certainly desirable and has many practical applications. In this thesis we present theoretical formulations and practical solutions to 3D perceptual tasks that are based on evidence gathering. We propose a new 3D reconstruction algorithm for rotating objects that is based on motion-compensated temporal accumulation. We also propose two fast and robust Hough Transform based algorithms for 3D static parametric object detection and 3D moving parametric object extraction. Furthermore, we introduce two algorithms for 3D motion parameter estimation that are based on Reuleaux's and Chasles' kinematic theorems. The proposed algorithms estimate 3D motion parameters directly from the data by exploiting the geometry of rigid transformation. Moreover, they provide an alternative to both the local and global feature description and matching pipelines commonly used by numerous 3D object recognition and registration algorithms. Our objective is to provide new means of understanding static and dynamic scenes captured by new 3D sensing technologies, as we believe these technologies will become dominant in the perception field as they undergo rapid development. We provide alternative solutions to commonly used feature-based approaches by using new evidence-gathering methods for the processing of 3D range data.
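The following Python sketch illustrates evidence gathering in the Hough-transform style described above, simplified to detecting the centre of a sphere of known radius in a point cloud; it is an illustrative example, not one of the thesis's algorithms. Each point votes for candidate centres in a coarse accumulator grid, and the peak of the accumulator indicates the detected centre:

```python
import numpy as np

def hough_sphere_centre(points, radius, cell=0.1, n_dirs=200, seed=None):
    """Vote for the centre of a sphere of known radius in a 3D point cloud.

    Each point casts votes at positions `radius` away from it along sampled
    directions; the true centre accumulates votes from many points.
    """
    rng = np.random.default_rng(seed)
    # Sample roughly uniform directions on the unit sphere.
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

    votes = {}
    for p in points:
        for c in p + radius * dirs:          # candidate centres for this point
            key = tuple(np.floor(c / cell).astype(int))
            votes[key] = votes.get(key, 0) + 1

    best_cell, count = max(votes.items(), key=lambda kv: kv[1])
    centre = (np.array(best_cell) + 0.5) * cell   # cell midpoint
    return centre, count

# Synthetic test: noisy points on a sphere of radius 0.5 centred at (1, 2, 3).
rng = np.random.default_rng(0)
d = rng.normal(size=(500, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
cloud = np.array([1.0, 2.0, 3.0]) + 0.5 * d + rng.normal(scale=0.01, size=(500, 3))
print(hough_sphere_centre(cloud, radius=0.5))
```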
86

Multiple objective optimisation of data and control paths in a behavioural silicon compiler

Baker, Keith Richard January 1992 (has links)
The objective of this research was to implement an ‘intelligent’ silicon compiler that provides the ability to automatically explore the design space and optimise a design, given as a behavioural description, with respect to multiple objectives. The objective has been met by the implementation of the MOODS Silicon Compiler. The user submits goals or objectives to the system, which automatically finds near-optimal solutions. As objectives may be conflicting, trade-offs between synthesis tasks are essential and consequently their simultaneous execution must occur. Tasks are decomposed into behaviour-preserving transformations which, due to their completeness, can be applied in any sequence to a multi-level representation of the design. An accurate evaluation of the design is ensured by feeding technology-dependent information up to a cost function. The cost function guides the simulated annealing algorithm in applying transformations to iteratively optimise the design. The simulated annealing algorithm provides an abstraction from the transformations and the designer's objectives. This abstraction avoids the construction of tailored heuristics which pre-program trade-offs into a system. Pre-programmed trade-offs are used in most systems by assuming a particular shape to the trade-off curve, and are inappropriate because trade-offs are technology dependent. The lack of pre-programmed trade-offs in the MOODS system allows it to adapt to changes in technology or library cells. The choice of cells and their subsequent sharing are based on the user's criteria expressed in the cost function, rather than being pre-programmed into the system. The results show that implementations created by MOODS are better than or equal to those achieved by other systems. Comparisons with other systems highlighted the importance of specifying all of a design's data, as missing data misrepresents the design and leads to misleading comparisons. The MOODS synthesis system includes an efficient method for automated design space exploration whereby a varied set of near-optimal implementations can be produced from a single behavioural specification. Design space exploration is an important aspect of designing by high-level synthesis and of the development of synthesis systems; it allows the designer to obtain a perspicuous characterisation of a design's design space and to investigate alternative designs.
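The annealing-plus-cost-function loop described above follows the standard simulated annealing pattern. The sketch below is a generic Python illustration of that pattern with a user-weighted multi-objective cost; it is not MOODS itself, and the transformations and cost terms are placeholders:

```python
import math
import random

def anneal(initial, transformations, cost, t_start=10.0, t_end=0.01,
           cooling=0.95, steps_per_t=50, seed=0):
    """Generic simulated annealing over behaviour-preserving transformations.

    `transformations` is a list of functions mapping a design to a new design;
    `cost` folds the competing objectives (e.g. area, delay) into one number,
    typically as a weighted sum supplied by the user.
    """
    rng = random.Random(seed)
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t_start
    while t > t_end:
        for _ in range(steps_per_t):
            candidate = rng.choice(transformations)(current)
            delta = cost(candidate) - current_cost
            # Accept improvements always; accept worsenings with a
            # temperature-dependent probability to escape local optima.
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= cooling
    return best, best_cost

# Toy use: minimise a weighted sum of two conflicting "objectives" on an integer.
cost = lambda x: 0.7 * abs(x - 40) + 0.3 * abs(x - 90)   # e.g. area vs. delay
moves = [lambda x: x + 1, lambda x: x - 1]
print(anneal(65, moves, cost))
```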
87

Extending Event-B with discrete timing properties

Sarshogh, Mohammad Reza January 2013 (has links)
Event-B is a formal language for systems modelling, based on set theory and predicate logic. It has the advantage of mechanised proof, and it is possible to model a system at several levels of abstraction by using refinement. Discrete timing properties are important in many critical systems. However, modelling of timing properties is not directly supported in Event-B. In this work, we identify three main categories of discrete timing properties for trigger-response patterns: deadline, delay and expiry. We introduce language constructs for each of these timing properties that augment the Event-B language, and we describe how these constructs have been given a semantics in terms of the standard Event-B constructs. To ease the process of using timing properties in a refinement-based development, we introduce patterns for refining the timing constructs that allow timing properties on abstract models to be replaced by timing properties on refined models. The language constructs and refinement patterns are illustrated through some generic examples. We have developed a tool to support our approach. Our tool is a plug-in to the Rodin tool-set for Event-B and automates the translation of timing properties to Event-B as well as the generation of the gluing invariants required to verify the consistency of timing property refinement. Finally, we demonstrate the practicality of our approach by working through the modelling and verification of two real-time case studies. The main focus is the usefulness of the timing refinement patterns in a step-wise modelling and verification process for a real-time system.
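Event-B itself is not shown here. As a rough operational reading only, the three property categories named above (deadline, delay and expiry) can be pictured as checks over a timed trace of trigger and response events; the Python sketch below encodes that reading under our own simplifying assumptions and is not the thesis's Event-B semantics:

```python
def check_trigger_response(trace, trigger, response, deadline=None,
                           delay=None, expiry=None):
    """Check one trigger/response pair against a timed trace.

    `trace` is a list of (time, event_name) pairs in time order.
    - deadline: response must occur no later than trigger time + deadline
    - delay:    response must not occur before trigger time + delay
    - expiry:   response is only permitted up to trigger time + expiry
    This is an informal reading of the three categories, not Event-B semantics.
    """
    pending = None  # time of the most recent unanswered trigger
    for time, event in trace:
        if event == trigger:
            pending = time
        elif event == response and pending is not None:
            elapsed = time - pending
            if deadline is not None and elapsed > deadline:
                return False, f"deadline missed ({elapsed} > {deadline})"
            if delay is not None and elapsed < delay:
                return False, f"responded too early ({elapsed} < {delay})"
            if expiry is not None and elapsed > expiry:
                return False, f"response after expiry ({elapsed} > {expiry})"
            pending = None
    if pending is not None and deadline is not None:
        return False, "trigger left unanswered"
    return True, "ok"

trace = [(0, "request"), (3, "grant"), (10, "request"), (18, "grant")]
print(check_trigger_response(trace, "request", "grant", deadline=5))
```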
88

Control of large distributed systems using games with pure strategy Nash equilibria

Chapman, Archie C. January 2009 (has links)
Control mechanisms for optimisation in large distributed systems cannot be constructed based on traditional methods of control, because such systems are typically characterised by distributed information and costly and/or noisy communication. Furthermore, noisy observations and dynamism are also inherent to these systems, so their control mechanisms need to be flexible, agile and robust in the face of these characteristics. In such settings, a good control mechanism should satisfy the following four design requirements: (i) it should produce high-quality solutions, (ii) it should be robust and flexible in the face of additions, removals and failures of components, (iii) it should operate by making limited use of communication, and (iv) its operation should be computationally feasible. Against this background, in order to satisfy these requirements, in this thesis we adopt a design approach based on dividing control over the system across a team of self-interested agents. Such multi-agent systems (MAS) are naturally distributed (matching the application domains in question), and by pursuing their own private goals, the agents can collectively implement robust, flexible and scalable control mechanisms. In more detail, the design approach we adopt is (i) to use games with pure strategy Nash equilibria as a framework or template for constructing the agents’ utility functions, such that good solutions to the optimisation problem arise at the pure strategy Nash equilibria of the game, and (ii) to derive distributed techniques for solving the games for their Nash equilibria. The specific problems we tackle can be grouped into four main topics. First, we investigate a class of local algorithms for distributed constraint optimisation problems (DCOPs). We introduce a unifying analytical framework for studying such algorithms, and develop a parameterisation of the algorithm design space, which represents a mapping from the algorithms’ components to their performance according to each of our design requirements. Second, we develop a game-theoretic control mechanism for distributed dynamic task allocation and scheduling problems. The model in question is an expansion of DCOPs to encompass dynamic problems, and the control mechanism we derive builds on the insights from our first topic to address our four design requirements. Third, we elaborate a general class of problems including DCOPs with noisy rewards and state observations, which are realistic traits of great concern in real-world problems, and derive control mechanisms for these environments. These control mechanisms allow the agents either to learn their reward functions or to decide when to make observations of the world’s state and/or communicate their beliefs over the state of the world, in such a manner that they perform well according to our design requirements. Fourth, we derive an optimal algorithm for computing and optimising over pure strategy Nash equilibria in games with sparse interaction structure. By exploiting the structure present in many multi-agent interactions, this distributed algorithm can efficiently compute equilibria that optimise various criteria, thus reducing the computational burden on any one agent and operating with less communication than an equivalent centralised algorithm. For each of these topics, the control mechanisms that we derive are developed such that they perform well according to all four of our design requirements.
In sum, by making the above contributions to these specific topics, we demonstrate that the general approach of using games with pure strategy Nash equilibria as a template for designing MAS produces good control mechanisms for large distributed systems.
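As a minimal illustration of the template described above, and not one of the thesis's algorithms, the sketch below runs best-response dynamics on a small anti-coordination game whose agents' utilities are aligned with a global clash-avoidance objective, so the dynamics settle on a pure strategy Nash equilibrium:

```python
# Three agents each pick a channel; the global objective is to avoid clashes.
# Each agent's utility counts only its own local clashes, yet the pure
# strategy Nash equilibria of this game are exactly the clash-free assignments.
AGENTS = range(3)
CHANNELS = [0, 1]
NEIGHBOURS = {0: [1], 1: [0, 2], 2: [1]}   # sparse interaction structure

def utility(agent, assignment):
    return -sum(assignment[agent] == assignment[n] for n in NEIGHBOURS[agent])

def best_response_dynamics(assignment, max_rounds=20):
    assignment = dict(assignment)
    for _ in range(max_rounds):
        changed = False
        for agent in AGENTS:
            best = max(CHANNELS,
                       key=lambda c: utility(agent, {**assignment, agent: c}))
            if utility(agent, {**assignment, agent: best}) > utility(agent, assignment):
                assignment[agent] = best
                changed = True
        if not changed:   # no agent wants to deviate: a pure strategy Nash equilibrium
            return assignment
    return assignment

print(best_response_dynamics({0: 0, 1: 0, 2: 0}))   # clash-free: {0: 1, 1: 0, 2: 1}
```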
89

Enhancing retrieval and discovery of desktop documents

Mosweunyane, Gontlafetse January 2009 (has links)
Personal computers provide users with the ability to create, organize, store and access large amounts of information. Most of this information is in the form of documents in files organized in the hierarchical folder structures provided by the operating system. Operating system-provided access to these data is mainly through structure-guided navigation, and more recently through keyword search. This thesis describes the author's research into the accessibility and utilization of personal documents stored and organized using the hierarchical file system provided by common operating systems. An investigation was carried out into how users currently store and access their documents in these structures. Access and utility problems triggered a need to reconsider the navigation methods currently provided. Further investigation into navigation of personal document hierarchies using semantic metadata derived from the documents was carried out, and a more intuitive exploratory interface that exposes the metadata for browsing-style navigation was implemented. The underlying organization is based on a model for navigation whereby documents are represented using index terms, with associations between them exposed to create a linked, similarity-based navigation structure. Exposing metadata-derived index terms in an interface was hypothesized to reduce the user's cognitive load and enable efficient and effective retrieval, while also providing cues for the discovery and recognition of associations between documents. Evaluation results of the implementation support this hypothesis for retrieval of deeply located documents, as well as showing better overall effectiveness in associating and discovering documents. The importance of semantic document metadata is also highlighted in demonstrations involving the transfer of documents from the desktop to other organized document stores such as a repository.
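A very small sketch of the underlying idea (documents represented by index terms, with similarity links exposed for navigation) is given below. The term extraction and weighting are deliberately naive and the file names are invented; this is not the thesis's implementation:

```python
import math
from collections import Counter

def index_terms(text):
    """Naive term extraction: lower-cased words of four or more letters."""
    return Counter(w for w in text.lower().split() if len(w) >= 4 and w.isalpha())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = {
    "report.txt": "quarterly budget report with travel expenses",
    "notes.txt":  "meeting notes about the travel budget",
    "photos.txt": "holiday photos from the beach",
}
vectors = {name: index_terms(text) for name, text in documents.items()}

# Link each document to its most similar neighbours to form a browsable graph.
for name, vec in vectors.items():
    links = sorted(((cosine(vec, other), o)
                    for o, other in vectors.items() if o != name), reverse=True)
    print(name, "->", [(o, round(s, 2)) for s, o in links if s > 0])
```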
90

An incremental refinement approach to a development of a flash-based file system in Event-B

Damchoom, Kriangsak January 2010 (has links)
Nowadays, many formal methods are used in the area of software development, accompanied by a number of advanced theories and tools. However, more experiments are still required in order to provide significant evidence that will convince and encourage users to use, and gain more benefit from, those theories and tools. Event-B is a formalism used for specifying and reasoning about systems. Rodin is an open and extensible tool for Event-B specification, refinement and proof. The flash file system is a complex system, and such systems are currently a challenge to specify and verify. This system was chosen as a case study for our experiments, carried out using Event-B and the Rodin tool. The experiments were aimed at developing a rigorous model of a flash-based file system, including an implementation of the model, and at providing useful evidence and guidelines to developers and the software industry; we believe these would convince users and make formal methods more accessible. Incremental refinement was chosen as the development strategy. Refinement was used for two different purposes: feature augmentation and structural refinement (covering event and machine decomposition). Several techniques and styles of modelling were investigated and compared in order to produce some useful guidelines for modelling, refinement and proof. The model of the flash-based file system we have completed covers three main issues: fault tolerance, concurrency and the wear-levelling process. Our model can deal with concurrent read/write operations and other processes such as block relocation and block erasure, and it tolerates faults that may occur during the reading or writing of files. We believe our development acts as an exemplar from which other developers can learn. We also provide systematic rules for the translation of Event-B models into Java code; however, more work is required to make these rules more applicable and useful in the future.
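The sketch below is a toy illustration of the wear-levelling idea mentioned above, written in Python rather than in the Event-B or Java of the thesis: when a block is needed, the allocator prefers free blocks with the lowest erase count, so wear spreads evenly across the flash:

```python
# Toy wear-levelling allocator (illustrative only; not the thesis model).
class FlashBlocks:
    def __init__(self, n_blocks):
        self.erase_counts = [0] * n_blocks
        self.free = set(range(n_blocks))

    def allocate(self):
        """Pick the free block with the lowest erase count."""
        block = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(block)
        return block

    def erase(self, block):
        """Erase a block, bump its wear counter and return it to the free pool."""
        self.erase_counts[block] += 1
        self.free.add(block)

flash = FlashBlocks(4)
for _ in range(12):                 # repeated write/erase cycles
    b = flash.allocate()
    flash.erase(b)
print(flash.erase_counts)           # wear spreads evenly: [3, 3, 3, 3]
```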
