  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Practical approaches to mining of clinical datasets : from frameworks to novel feature selection

Poolsawad, Nongnuch January 2014 (has links)
Clinical data embed numerous complexities and uncertainties in the form of missing values, class imbalance and high dimensionality. The research in this thesis was motivated by these challenges: to minimise these problems whilst, at the same time, maximising the classification performance of the data and selecting a significant subset of variables. This led to the proposal of a data mining framework and a feature selection method. The proposed framework, the Handling Clinical Data Framework (HCDF), has a simple algorithmic structure and uses a modified form of existing frameworks to address a variety of data issues. The assessment of data mining techniques reveals that missing-value imputation and resampling for class balancing can improve classification performance. Next, the proposed feature selection method, based on projection onto principal components (FS-PPC), is introduced; it draws on ideas from both feature extraction and feature selection to select a significant subset of features from the data. The method selects features that are highly correlated with the principal components, as measured by symmetrical uncertainty (SU), and removes irrelevant and redundant features using mutual information (MI). As a result, the method gives confidence that the selected subset of features will yield realistic results with less time and effort. FS-PPC retains classification performance and meaningful features while keeping the selected subset free of redundancy. The proposed methods have been applied to the analysis of real clinical data and their effectiveness assessed. The results show that the proposed methods are able to minimise the clinical data problems whilst, at the same time, maximising classification performance.
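The abstract does not spell out the algorithm; purely as an illustration of the general idea (relevance scored by symmetrical uncertainty against a principal component, redundancy removed with mutual information), the Python sketch below uses assumed discretisation, thresholds and function names rather than the thesis's actual FS-PPC procedure.

```python
# Sketch of an FS-PPC-style selector (illustrative; not the thesis implementation).
# Features are discretised into equal-width bins so that symmetrical uncertainty (SU)
# and mutual information (MI) can be estimated from counts.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import mutual_info_score

def discretise(x, bins=10):
    edges = np.histogram_bin_edges(x, bins=bins)
    return np.digitize(x, edges[1:-1])

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def symmetrical_uncertainty(a, b):
    # SU(A, B) = 2 * I(A; B) / (H(A) + H(B))
    denom = entropy(a) + entropy(b)
    return 2.0 * mutual_info_score(a, b) / denom if denom > 0 else 0.0

def fs_ppc_sketch(X, su_threshold=0.1, mi_threshold=0.5):
    """Keep features correlated (via SU) with the first principal component,
    then drop features that are mutually redundant (via pairwise MI)."""
    pc1 = discretise(PCA(n_components=1).fit_transform(X).ravel())
    cols = [discretise(X[:, j]) for j in range(X.shape[1])]
    relevance = [symmetrical_uncertainty(c, pc1) for c in cols]

    selected = []
    for j in sorted(range(X.shape[1]), key=lambda j: relevance[j], reverse=True):
        if relevance[j] < su_threshold:
            break  # remaining features are irrelevant to the principal component
        if all(mutual_info_score(cols[j], cols[k]) < mi_threshold for k in selected):
            selected.append(j)  # not redundant with anything already selected
    return selected

# Example: feature 1 is a near-duplicate of feature 0, so it should be dropped as redundant.
rng = np.random.default_rng(0)
f0 = rng.normal(size=200)
X = np.column_stack([f0, f0 + 0.01 * rng.normal(size=200), rng.normal(size=200)])
print(fs_ppc_sketch(X))
```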
12

Model transformation for multi-objective architecture optimisation for dependable systems

Mian, Zhibao January 2014 (has links)
Model-based engineering (MBE) promises a number of advantages for the development of embedded systems. It depends on a common model of the system, which is refined as the system is developed. The use of a common model promises a consistent and systematic analysis of dependability, correctness, timing and performance properties. These benefits are potentially available early and throughout the development life cycle. An important part of model-based engineering is the use of analysis and design languages. The Architecture Analysis and Design Language (AADL) is a new modelling language which is increasingly being used for the development of high-dependability embedded systems. AADL is ideally suited to model-based engineering, but the use of a new language threatens to isolate existing tools which use different languages. This is a particular problem when these tools provide an important development or analysis function, for example system optimisation. System designers seek an optimal trade-off between high dependability and low cost. For large systems, the design space of alternatives with respect to both dependability and cost is enormous and too large to investigate manually. For this reason automation is required to produce optimal or near-optimal designs. There is, however, a lack of analysis techniques and tools that can perform a dependability analysis and optimisation of AADL models. Some analysis tools are available in the literature, but they are not able to accept AADL models since they use a different modelling language. A cost-effective way of adding system dependability analysis and optimisation to models expressed in AADL is to exploit the capabilities of existing tools. Model transformation is a useful technique to maximise the utility of model-based engineering approaches because it provides a route for the exploitation of mature and tested tools in a new model-based engineering context. By using model transformation techniques, one can automatically translate between AADL models and other models. The advantage of this approach is that it opens a path by which AADL models may exploit existing non-AADL tools. There is little published work which gives a comprehensive description of a method for transforming AADL models: although transformations from AADL into other models have been reported, only one comprehensive description has been published, a transformation of AADL to Petri net models; detailed guidance for the transformation of AADL models is otherwise lacking. This thesis investigates the transformation of AADL models into the HiP-HOPS modelling language, in order to provide dependability analysis and optimisation. HiP-HOPS is a mature, state-of-the-art dependability analysis and optimisation tool, but it has its own model. A model transformation is defined from the AADL model to the HiP-HOPS model. In addition to the model-to-model transformation, it is necessary to extend the AADL modelling attributes: for cost and dependability optimisation, a new AADL property set is developed for modelling component and system variability. This solves the problem of describing, within an AADL model, the design space of alternative designs. The transformation, with transformation rules written in the ATLAS Transformation Language (ATL), has been implemented as a plug-in for the AADL model development tool OSATE (Open Source AADL Tool Environment). To illustrate the method, the plug-in is used to transform some AADL model case studies.
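The abstract gives no concrete transformation rules; as a toy illustration of the rule-based model-to-model transformation idea (the real mapping is written in ATL against the AADL and HiP-HOPS metamodels), the Python sketch below maps a simplified component tree with an assumed failure-rate property onto a simplified target model. All element and attribute names are hypothetical.

```python
# Toy illustration of a rule-based model-to-model transformation, in the spirit of
# the AADL -> HiP-HOPS mapping described above. Element and attribute names are
# hypothetical simplifications, not the actual metamodels.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AadlComponent:               # simplified source element
    name: str
    category: str                  # e.g. "process", "device"
    failure_rate: float            # from a (hypothetical) AADL property set
    subcomponents: List["AadlComponent"] = field(default_factory=list)

@dataclass
class HipHopsComponent:            # simplified target element
    name: str
    unavailability_model: str
    children: List["HipHopsComponent"] = field(default_factory=list)

def transform(src: AadlComponent) -> HipHopsComponent:
    """One 'rule': each source component maps to a target component, and the
    failure-rate property becomes a constant-failure-rate unavailability model."""
    return HipHopsComponent(
        name=f"{src.category}.{src.name}",
        unavailability_model=f"lambda={src.failure_rate}",
        children=[transform(sub) for sub in src.subcomponents],  # recurse over containment
    )

if __name__ == "__main__":
    system = AadlComponent("brake_controller", "system", 0.0, [
        AadlComponent("primary", "process", 1e-5),
        AadlComponent("backup", "process", 2e-5),
    ])
    print(transform(system))
```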
13

Quantitative analysis of dynamic safety-critical systems using temporal fault trees

Edifor, Ernest Edem January 2014 (has links)
Emerging technological systems present complexities that pose new risks and hazards. Some of these systems, called safety-critical systems, can have disastrous effects on human life and the environment if they fail. For this reason, such systems may feature multiple modes of operation, which may make use of redundant components, parallel architectures, and the ability to fall back to a degraded state of operation without failing completely. However, the introduction of such features poses new challenges for systems analysts, who need to understand how such systems behave and estimate how reliable and safe they really are. Fault Tree Analysis (FTA) is a technique widely accepted and employed for analysing the reliability of safety-critical systems. With FTA, analysts can perform both qualitative and quantitative analyses of safety-critical systems. Unfortunately, traditional FTA is unable to efficiently capture some of the dynamic features of modern systems. This problem is not new; various efforts have been made to develop techniques to solve it. Pandora is one such technique to enhance FTA. It uses new 'temporal' logic gates, in addition to some existing ones, to model dynamic sequences of events and eventually produce combinations of basic events necessary and sufficient to cause a system failure. Until now, however, Pandora was not able to quantitatively evaluate the probability of a system failure. This is the motivation for this thesis. This thesis proposes and evaluates various techniques for the probabilistic evaluation of the temporal gates in Pandora, enabling quantitative temporal fault tree analysis. It also introduces a new logical gate called the 'parameterised Simultaneous-AND' (pSAND) gate. The proposed techniques include both analytical and simulation-based approaches: the analytical solution supports only component failures with exponential distribution, whilst the simulation approach is not restricted to any specific component failure distribution. Other techniques have also been formulated for evaluating higher-order component combinations, which result from the propagation of individual gates towards a system failure. These mathematical expressions for the evaluation of individual gates and combinations of components have enabled the evaluation of total system failure and of importance measures, which are of great interest to system analysts.
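The abstract does not reproduce the formulas; as an illustration of the kind of quantification involved, the sketch below evaluates a Priority-AND relationship (component X fails before component Y and both fail within the mission time) for exponentially distributed failures, comparing the closed-form integral with a Monte Carlo estimate. This is a standard construction shown for illustration, not Pandora's own implementation.

```python
# Sketch: P(X fails before Y and both fail within mission time t) for exponential
# failures with rates lam_x and lam_y, analytically and by Monte Carlo simulation.
import numpy as np

def pand_analytical(lam_x, lam_y, t):
    # P = (1 - e^{-ly t}) - ly/(lx + ly) * (1 - e^{-(lx + ly) t})
    return (1 - np.exp(-lam_y * t)) - lam_y / (lam_x + lam_y) * (1 - np.exp(-(lam_x + lam_y) * t))

def pand_monte_carlo(lam_x, lam_y, t, n=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.exponential(1 / lam_x, n)   # sampled failure times of X
    y = rng.exponential(1 / lam_y, n)   # sampled failure times of Y
    return float(np.mean((x < y) & (y <= t)))

if __name__ == "__main__":
    lam_x, lam_y, t = 1e-3, 5e-4, 1000.0
    print(pand_analytical(lam_x, lam_y, t))    # closed-form probability
    print(pand_monte_carlo(lam_x, lam_y, t))   # same value, within sampling noise
```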
14

Simplifying co-located collaboration in screen-based virtual environments

García Estrada, José Fernando January 2015 (has links)
Co-located virtual environments (VEs) are shared physical spaces where several participants collaborate in a virtual world as they would do in a real-world group meeting, using a shared display and visualization. However, implementation of co-located VEs with equal interaction by all participants is complex and expensive. The main issue is the difficulty of providing perspective-correct images for each individual when several participants are co-located. This limits the ability of participants to interact directly with objects in the VE. Implementing a form of interaction that accepts this limitation remains a significant challenge. Research approaches have made trade-offs striving to eliminate or mitigate the restriction on providing perspective-correct images to enable co-located work. However, some of those approaches may compromise the benefits of individual perception in a VE against a common averaged perspective, or adopt complex, high-cost solutions. This thesis investigates user interaction for collaborative work in a screen-based co-located VE. The work presented in this thesis aims to make interaction in co-located VEs simpler than existing hardware- and software-based approaches by applying an under-investigated approach where one user has a perspective-correct view (active) whilst other users receive images that are not perspective-correct (passive). Collaboration was investigated through a controlled co-located work study, adopting two-user interaction using the active-passive approach. The findings of the study indicate that the configuration of the object geometry may be an important factor when using the active-passive approach. This result extends existing approaches that focus on the relation between active and passive users’ locations. The investigation was facilitated by an innovative method in which the behaviour of a subject during a coordinated task is automated using a virtual participant. The findings also show limitations of the active-passive approach that can be optimized to enable co-located work. The conclusions of this thesis provide quantitative and qualitative data on how two-user interaction performance varies during coordinated interaction using an active-passive approach. The results presented in this thesis provide an improved understanding of two-user collaboration in virtual environments and should be useful for the design and development of co-located VE systems.
15

Machine learning in compilers

Leather, Hugh January 2011 (has links)
Tuning a compiler so that it produces optimised code is a difficult task because modern processors are complicated; they have a large number of components operating in parallel and each is sensitive to the behaviour of the others. Building analytical models on which optimisation heuristics can be based has become harder as processor complexity increased, and this trend is bound to continue as the world moves towards further heterogeneous parallelism. Compiler writers need to spend months to get a heuristic right for any particular architecture, and these days compilers often support a wide range of disparate devices. Whenever a new processor comes out, even if derived from a previous one, the compiler’s heuristics will need to be retuned for it. This is, typically, too much effort and so, in fact, most compilers are out of date. Machine learning has been shown to help: by running example programs, compiled in different ways, and observing how those ways affect program run-time, automatic machine learning tools can predict good settings with which to compile new, as yet unseen programs. The field is nascent, but has demonstrated significant results already and promises a day when compilers will be tuned for new hardware without the need for months of compiler experts’ time. Many hurdles still remain, however, and while experts no longer have to worry about the details of heuristic parameters, they must spend their time on the details of the machine learning process instead to get the full benefits of the approach. This thesis aims to remove some of the aspects of machine learning based compilers for which human experts are still required, paving the way for a completely automatic, retuning compiler. First, we tackle the most conspicuous area of human involvement: feature generation. In all previous machine learning works for compilers, the features, which describe the important aspects of each example to the machine learning tools, must be constructed by an expert. Should that expert choose features poorly, they will miss crucial information without which the machine learning algorithm can never excel. We show not only that we can automatically derive good features, but that these features outperform those of human experts. We demonstrate our approach on loop unrolling, and find we do better than previous work, obtaining XXX% of the available performance, more than the XXX% of the previous state of the art. Next, we demonstrate a new method to efficiently capture the raw data needed for machine learning tasks. The iterative compilation on which machine learning in compilers depends is typically time consuming, often requiring months of compute time. The underlying processes are also noisy, so that most prior works fall into two categories: those which attempt to gather clean data by executing a large number of times and those which ignore the statistical validity of their data to keep experiment times feasible. Our approach, on the other hand, guarantees clean data while adapting to the experiment at hand, needing an order of magnitude less work than prior techniques.
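As a small illustration of the learning setup the abstract describes (it is not the thesis's feature-generation method, and the feature names and data are hypothetical), the sketch below trains a classifier on static loop features to predict a loop-unroll factor, as would be done from examples gathered by iterative compilation.

```python
# Sketch of predicting a good loop-unroll factor from static loop features
# (illustrative only; features and data are hypothetical stand-ins).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One row per training loop: [trip_count, body_instructions, memory_ops, has_branch]
X = np.array([
    [128, 12,  4, 0],
    [  8, 40, 10, 1],
    [1024, 6,  2, 0],
    [ 16, 25,  8, 1],
])  # in practice: thousands of loops, each compiled and timed at every unroll factor
y = np.array([8, 1, 8, 2])  # best unroll factor found for each loop by measurement

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[256, 10, 3, 0]]))  # predicted unroll factor for an unseen loop
```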
16

Programming support for CSCW : using X windows

Winnett, Maria E. January 1995 (has links)
This thesis presents a model for programming support for synchronous, distributed CSCW (Computer Supported Co-operative Work). Synchronous, distributed CSCW aims to allow groups of people separated by distance to work together in real time as if they were at the same location. The model proposed in the thesis allows an application program to be constructed using user interface components known as “shared widgets”. A shared widget displays underlying application data on multiple screens and processes input from multiple users distributed over a network. The distribution of data to and from the users and the underlying network communication is hidden from the application program within the shared widget. The model describes a shared widget as comprising a single “Artefact” and a number of “Views”. The Artefact contains the underlying data and the actions that can be performed on it. A View is the presentation of the Artefact on a user's screen. Shared widgets contain a View for each user in the group. Each user can provide input to the Artefact via their own View, and any change made to the Artefact is reflected synchronously in all the Views. The Artefact can also impose a floor control policy to restrict input to a particular user or group of users, by checking each input event against a known floor control value. The model differs from previous approaches to programming support for CSCW in that the distributed nature of the users is hidden from the application programmer within the shared widgets. As a result, the application programmer does not have to be concerned with the processing of input events or the distribution of output to multiple users. The hiding of these implementation details within the shared widgets allows the CSCW application to be constructed in a similar way to a single-user application. An implementation of the shared widget model, using X Windows, is also described in the thesis. Experimental results and observations are given and used to suggest future directions for further research.
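As a minimal illustration of the shared-widget model described above (a single Artefact, multiple Views, input checked against a floor-control value), the Python sketch below mimics the behaviour in a few classes; it is a toy stand-in, not the thesis's X Windows implementation.

```python
# Minimal sketch of the shared-widget idea: one Artefact, one View per user,
# synchronous refresh of all Views, and simple floor control on input.
class Artefact:
    """Shared application data plus the actions allowed on it."""
    def __init__(self, value="", floor_holder=None):
        self.value = value
        self.floor_holder = floor_holder   # None means any user may provide input
        self.views = []

    def attach(self, view):
        self.views.append(view)
        view.refresh(self.value)

    def handle_input(self, user, new_value):
        # Floor control: reject input from users who do not hold the floor.
        if self.floor_holder is not None and user != self.floor_holder:
            return False
        self.value = new_value
        for view in self.views:            # change is reflected in every View
            view.refresh(self.value)
        return True

class View:
    """Presentation of the Artefact on one user's screen."""
    def __init__(self, user):
        self.user = user

    def refresh(self, value):
        print(f"[{self.user}'s screen] {value}")

# Usage: two users share one Artefact; only the floor holder's edits are applied.
doc = Artefact("draft 1", floor_holder="alice")
for name in ("alice", "bob"):
    doc.attach(View(name))
doc.handle_input("bob", "bob's edit")      # rejected: bob does not hold the floor
doc.handle_input("alice", "draft 2")       # accepted: both Views refresh
```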
17

Motion segmentation of semantic objects in video sequences

Thirde, David J. January 2007 (has links)
The extraction of meaningful objects from video sequences is becoming increasingly important in many multimedia applications such as video compression or video post-production. The goal of this thesis is to review, evaluate and build upon the wealth of recent work on the problem of video object segmentation in the context of probabilistic techniques for generic video object segmentation. Methods are suggested that solve this problem using formal probabilistic learning techniques, allowing principled justification of the methods applied to the problem of segmenting video objects. By applying a simple but effective evaluation methodology, the impact of all aspects of the video object segmentation process is quantitatively analysed. This research investigates the application of feature spaces and probabilistic models to video object segmentation. Subsequently, an efficient region-based approach to object segmentation is described, along with an evaluation of mechanisms for updating such a representation. Finally, a hierarchical Bayesian framework is proposed to allow efficient implementation and comparison of combined region-level and object-level representational schemes.
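As a loosely related illustration only (not the framework developed in this thesis), the sketch below shows one common probabilistic route to motion segmentation: per-pixel Gaussian-mixture background modelling via OpenCV, with foreground pixels taken as moving-object candidates.

```python
# Illustrative sketch: per-pixel Gaussian-mixture background modelling for motion
# segmentation. Requires OpenCV (pip install opencv-python); not the thesis's method.
import cv2

def segment_moving_objects(video_path):
    cap = cv2.VideoCapture(video_path)
    # Each pixel is modelled by a mixture of Gaussians; poorly explained pixels are foreground.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = subtractor.apply(frame)    # 0 = background, 255 = moving-object candidate
        fg_mask = cv2.medianBlur(fg_mask, 5)  # simple cleanup of speckle noise
        masks.append(fg_mask)
    cap.release()
    return masks
```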
18

Object modelling of temporal changes in Geographical Information Systems

Adamu, Abdul T. January 2003 (has links)
Changes in current temporally enabled GIS systems that have been successfully implemented are based on the snapshot approach, which consists of sequences of discrete images. This approach does not allow either the pattern of changes to be shown or the complexities within the changes to be examined. Also, existing GIS database models cannot represent effectively the history of geographical phenomena. The aim of this research is to develop an object-oriented GIS model (OOGIS) that will represent detailed changes of geographical objects and track the evolution of objects. The detailed changes include the spatial, thematic and temporal aspects, and the events and processes involved in the changes. These have been addressed, but not implemented, by a number of previous GIS projects. Object tracking and evolution includes not only attribute changes to homogeneous objects, but also major changes that lead to transforming or destroying existing objects and creating new ones. This allows the pattern of changes of geographical phenomena to be examined by tracking the evolution of geographical objects. The OOGIS model was designed using an object-oriented visual modelling tool and was implemented using an object-oriented programming environment (OOPE) and an object-oriented database system (OODBS). The visual modelling tool for designing the OOGIS model was the Unified Modelling Language (UML), the OOPE for implementing the OOGIS model was Microsoft Visual C++ and the OODBS was Objectivity/DB. The prototype has been successfully implemented using a case study of the Royal Borough of Kingston-upon-Thames in the United Kingdom. This research addresses in particular the deficiencies in two existing GIS models that are related to this work. The first model, the triad model, represents the spatial, thematic and temporal aspects but fails to represent the events and processes connected to the changes. The second model, the event-oriented model, though it represents the events (or processes) related to the changes, stores the changes as attributes of the object. This model is limited to temporally stable (static) changes and cannot be applied to the evolution of geographical phenomena or to changes that involve several objects sharing common properties and temporal relationships. Moreover, the model does not take into account the evolution (e.g. splitting, transformation) of a specific object, which can involve more than changes to its attributes. Both models are unable to tackle, for instance, a situation in which an object such as a park disappears to make way for new objects (i.e. roads and new buildings), an agricultural piece of land becomes an industrial lot, or a village becomes a city. In this work the construction of a new approach which overcomes these deficiencies is presented. The approach also takes into account associations and relationships between objects, such as inheritance, which are reflected in the object-oriented database. For example, a road can be regarded as a base class from which other classes such as motorways, streets and dual roads can be derived, which might reflect the evolution of objects in non-homogeneous ways. The object versioning technique in this work allows the versions of a geographical object to be related, thereby creating temporal relationships between them. It requires less data storage, since only the changes are recorded. The association between the versions allows continuous forward and backward movement within the versions, and promotes optimum query mechanisms.
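As an illustration of the object-versioning idea described above, the Python sketch below chains versions of a geographical object so that attribute changes, splits into new objects, and backward traversal of the history are all possible; class and attribute names are hypothetical simplifications, not the OOGIS schema.

```python
# Toy illustration of object versioning for temporal GIS: each change creates a new
# version linked to its predecessors, so the evolution of an object can be traced.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class GeoObjectVersion:
    name: str
    land_use: str
    valid_from: date
    predecessors: List["GeoObjectVersion"] = field(default_factory=list)

    def evolve(self, valid_from: date, **changes) -> "GeoObjectVersion":
        """Attribute change: a new version linked back to this one."""
        attrs = {"name": self.name, "land_use": self.land_use, **changes}
        return GeoObjectVersion(valid_from=valid_from, predecessors=[self], **attrs)

    def split(self, valid_from: date, parts: List[dict]) -> List["GeoObjectVersion"]:
        """Evolution beyond attributes: the object is transformed into several new ones."""
        return [GeoObjectVersion(valid_from=valid_from, predecessors=[self], **p) for p in parts]

    def history(self) -> List["GeoObjectVersion"]:
        """Backward movement through the version chain."""
        out, current = [self], self
        while current.predecessors:
            current = current.predecessors[0]
            out.append(current)
        return out

# Usage: a park is renamed, then disappears to make way for a road and a building.
park = GeoObjectVersion("Riverside Park", "park", date(1990, 1, 1))
renamed = park.evolve(date(1995, 3, 1), name="Riverside Gardens")
road, building = renamed.split(date(2003, 6, 1), [
    {"name": "Riverside Road", "land_use": "road"},
    {"name": "Riverside House", "land_use": "building"},
])
print([v.name for v in road.history()])  # ['Riverside Road', 'Riverside Gardens', 'Riverside Park']
```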
19

An energy expert advisor and decision support system for aluminium melting and casting

Yoberd, Belmond January 1994 (has links)
The aim of this project was to develop and implement an expert advisor system to provide information for selecting and scheduling several items of small foundry plant using electric resistance bale-out furnaces, to optimise metal use and reduce energy costs. This involved formulating the procedures and developing a “foundry user friendly” expert system for giving advice to unskilled operatives in what was a complex multi-variable process. This system (FOES) included the investigation and development of an advising system for the casting of a large number of different objects cast under different operating conditions and electricity tariffs. Knowledge elicitation techniques were developed and used during the complicated knowledge elicitation process. Since this research programme intended to look at the complete process of melting, holding and pouring of the aluminium alloy, complex electricity tariffs were incorporated into the expert system in order to accurately calculate the energy cost of each process. A sub-section of the FOES system (DAD) could help the unskilled foundry operative identify and eliminate the seven most common aluminium alloy casting defects by using a novel technique of incorporating actual defect photographs which were digitally scanned into the system.
20

A mobile and cloud-based framework for plant stress detection from crowdsourced visual and infrared imagery

Zukowski, Daniel 29 July 2016 (has links)
A cloud infrastructure and Android-based system were developed to enable amateurs and professionals to make use of laboratory techniques for remote plant disease detection. The system allows users to upload and analyze plant data as citizen scientists, helping to improve models for remote disease detection in horticultural settings by greatly increasing the quantity and diversity of data available for analysis by the community. Techniques used in research laboratories for remote disease detection are generally not available to home gardeners and small commercial farmers. Lab equipment is cost-prohibitive and experiments highly controlled, leading to models that are not necessarily transferable to the user's environment. Plant producers rely on expert knowledge from training, experience, and extension service professionals to accurately and reliably diagnose and quantify plant health. Techniques for disease detection using visible and infrared imagery have been proven in research studies and can now be made available to individuals due to advancements in smartphones and low-cost thermal imaging devices. The framework presented in this paper provides an internet-accessible data pipeline for image acquisition, preprocessing, stereo rectification, disparity mapping, registration, feature extraction, and machine learning, designed to support research efforts and to make plant stress detection technology readily available to the public. A system of this kind has the potential to benefit both researchers and plant growers: producers can collectively create large labeled data sets which researchers can use to build and improve detection models, returning value to growers in the form of generalizable models that work in real-world horticultural settings. We demonstrate the components of the framework and show data from a water stress experiment on basil plants performed using the mobile app and cloud-based services.
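As an illustration of the kind of analysis such a pipeline could run once visible and thermal images are registered (not the system's actual model; thresholds, reference temperatures and array shapes are assumptions), the sketch below masks plant pixels from an RGB image and computes a simple canopy-temperature stress index from the thermal image.

```python
# Illustrative sketch: a greenness mask from the visible image and a simple
# CWSI-style stress index from the registered thermal image.
import numpy as np

def plant_mask(rgb):
    """Rough plant segmentation: excess-green thresholding on an RGB image."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    return (2 * g - r - b) > 20.0          # hypothetical threshold

def canopy_temperature(thermal_c, mask):
    """Mean leaf temperature (deg C) over plant pixels of a registered thermal image."""
    return float(thermal_c[mask].mean())

def stress_index(canopy_t, wet_ref_t, dry_ref_t):
    """CWSI-style index in [0, 1]: 0 ~ well watered, 1 ~ fully stressed."""
    return (canopy_t - wet_ref_t) / (dry_ref_t - wet_ref_t)

# Usage with synthetic arrays standing in for one uploaded observation.
rgb = np.zeros((120, 160, 3), dtype=np.uint8)
rgb[40:80, 60:100, 1] = 200                          # a green (plant) patch
thermal = np.full((120, 160), 24.0)
thermal[40:80, 60:100] = 27.5                        # canopy warmer than background
mask = plant_mask(rgb)
print(stress_index(canopy_temperature(thermal, mask), wet_ref_t=22.0, dry_ref_t=32.0))
```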
