61. Distributed on-line safety monitor based on safety assessment model and multi-agent system. Dheedan, Amer Abdaladeem. January 2012.
On-line safety monitoring, i.e. the tasks of fault detection and diagnosis, alarm annunciation, and fault control, is essential in the operational phase of critical systems. Over the last 30 years, considerable work in this area has resulted in approaches that exploit models of the normal operational behaviour and failure of a system. Typically, these models incorporate on-line knowledge of the monitored system and enable qualitative and quantitative reasoning about the symptoms, causes and possible effects of faults. Recently, monitors that exploit knowledge derived from the application of off-line safety assessment techniques have been proposed. The motivation for that work has been the observation that, in current practice, vast amounts of knowledge derived from off-line safety assessments cease to be useful following the certification and deployment of a system. The concept is potentially very useful. However, the monitors that have been proposed so far are limited in their potential because they are monolithic and centralised, and therefore have limited applicability in systems that have a distributed nature and incorporate large numbers of components that interact collaboratively in dynamic cooperative structures. On the other hand, recent work on multi-agent systems shows that the distributed reasoning paradigm can cope with the nature of such systems. This thesis proposes a distributed on-line safety monitor which combines the benefits of using knowledge derived from off-line safety assessments with the benefits of the distributed reasoning of a multi-agent system. The monitor consists of a multi-agent system incorporating a number of Belief-Desire-Intention (BDI) agents which operate on a distributed monitoring model that contains reference knowledge derived from off-line safety assessments. Guided by the monitoring model, agents are hierarchically deployed to observe the operational conditions across various levels of the hierarchy of the monitored system and work collaboratively to integrate and deliver the safety monitoring tasks. These tasks include detection of parameter deviations, diagnosis of underlying causes, alarm annunciation and application of fault corrective measures. In order to avoid alarm avalanches and latent misleading alarms, the monitor optimises alarm annunciation by suppressing unimportant and false alarms, filtering spurious sensory measurements and incorporating helpful alarm information that is announced at the correct time. The thesis discusses the relevant literature, describes the structure and algorithms of the proposed monitor and, through experiments, shows the benefits of the monitor, which range from increased composability, extensibility and flexibility of on-line safety monitoring to, ultimately, an effective and cost-effective monitoring solution. The approach is evaluated in two case studies, and in the light of the results the thesis discusses both its limitations and its merits relative to earlier safety monitoring concepts.
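To make the flavour of the monitoring tasks concrete, the sketch below shows, in plain Python, a single monitoring agent that watches one parameter against a normal operating envelope, filters one-sample spikes and raises an alarm only on a sustained deviation. It is an illustration of the ideas described in the abstract, not the thesis's BDI architecture or monitoring model; all names, thresholds and the debounce rule are invented for the example.

```python
# Illustrative sketch only: a minimal monitoring agent in the spirit described
# above (detect parameter deviations, filter spurious readings, raise an alarm
# only when a deviation persists). Names and thresholds are assumptions.
from collections import deque

class MonitoringAgent:
    def __init__(self, parameter, low, high, window=3):
        self.parameter = parameter        # monitored parameter name
        self.low, self.high = low, high   # normal operating envelope
        self.readings = deque(maxlen=window)

    def observe(self, value):
        """Record a sensor reading and return an alarm dict, or None."""
        self.readings.append(value)
        # Suppress spurious single-sample spikes: alarm only if every reading
        # in the window deviates in the same direction.
        if len(self.readings) == self.readings.maxlen:
            if all(v > self.high for v in self.readings):
                return {"parameter": self.parameter, "deviation": "high"}
            if all(v < self.low for v in self.readings):
                return {"parameter": self.parameter, "deviation": "low"}
        return None

# Example: a pressure agent ignores a one-sample spike but reports a
# sustained deviation.
agent = MonitoringAgent("pump_outlet_pressure", low=2.0, high=6.0)
for reading in [4.1, 7.3, 4.0, 6.8, 6.9, 7.1]:
    alarm = agent.observe(reading)
    if alarm:
        print("ALARM:", alarm)
```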

62. Flexible virtual learning environments : a schema-driven approach using semantic web concepts. Wen, Lipeng. January 2008.
Flexible e-learning refers to an intelligent educational mechanism that focuses on simulating and improving traditional education as far as possible on the Web by integrating various electronic approaches, technologies and equipment. This mechanism aims to promote the personalized development and management of e-learning Web services and applications. The main value of this method is that it provides a high degree of individualization in pedagogy for students and staff. The thesis mainly studied three problems in meeting the practical requirements of users in education. The first question is how a range of teaching styles (e.g. command and guided discovery) can be supported. The second is how a variety of instructional processes can be authored. The third is how these processes can be controlled by learners and educators according to their personalized needs during the execution of instruction. In this research, through investigating existing e-learning approaches and technologies, the main technical problems of current virtual learning environments (VLEs) were analyzed. Next, by using Semantic Web concepts as well as relevant standards, a schema-driven approach was created. This method supports users' individualized operations in Web-based education. Then, a flexible e-learning system based on the approach was designed and implemented to map an extensive range of didactic paradigms. Finally, a case study was completed to evaluate the research results. The main findings of the assessment were that the flexible VLE implemented a range of teaching styles and supported the personalized creation and control of educational processes.

63. Quantum recurrent neural networks for filtering. Ahamed, Woakil Uddin. January 2009.
The essence of stochastic filtering is to compute the time-varying probability density function (pdf) for the measurements of the observed system. In this thesis, a filter is designed based on the principles of quantum mechanics, where the Schrödinger wave equation (SWE) plays the key part. This equation is transformed to fit into the neural network architecture. Each neuron in the network mediates a spatio-temporal field with a unified quantum activation function that aggregates the pdf information of the observed signals. The activation function is the result of the solution of the SWE. The incorporation of the SWE into the field of neural networks provides a framework that is called the quantum recurrent neural network (QRNN). A filter based on this approach is categorized as an intelligent filter, as the underlying formulation is based on the analogy to a real neuron.

In a QRNN filter, the interaction between the observed signal and the wave dynamics is governed by the SWE. A key issue, therefore, is achieving a solution of the SWE that ensures the stability of the numerical scheme. Another important aspect in designing this filter is the way the wave function transforms the observed signal through the network. This research has shown that there are two different ways (a normal wave and a calm wave, Chapter 5) this transformation can be achieved, and these wave packets play a critical role in the evolution of the pdf. In this context, this thesis has investigated the following issues: existing filtering approaches to the evolution of the pdf, the architecture of the QRNN, the method of solving the SWE, the numerical stability of the solution, and the propagation of the waves in the well. The methods developed in this thesis have been tested with relevant simulations. The filter has also been tested with some benchmark chaotic series along with applications to real-world situations. Suggestions are made for the scope of further developments.
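For reference, the time-dependent Schrödinger wave equation that the abstract refers to has the standard one-dimensional form below, with the squared magnitude of the wave function interpreted as the tracked pdf. How the observed signal is coupled into the potential V is specific to the thesis and is not reproduced here.

```latex
% Standard 1-D time-dependent Schrödinger wave equation; the coupling of the
% observed signal into the potential V(x,t) is defined in the thesis itself.
i\hbar \frac{\partial \psi(x,t)}{\partial t}
  = -\frac{\hbar^{2}}{2m} \frac{\partial^{2} \psi(x,t)}{\partial x^{2}}
    + V(x,t)\,\psi(x,t),
\qquad p(x,t) = \lvert \psi(x,t) \rvert^{2}.
```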

64. Predicting cardiovascular risks using pattern recognition and data mining. Nguyen, Thuy Thi Thu. January 2009.
This thesis presents the application of pattern recognition and data mining techniques to risk prediction models in the clinical domain of cardiovascular medicine. The data is modelled and classified using a number of alternative pattern recognition and data mining techniques, covering both supervised and unsupervised learning methods. Specific techniques investigated include multilayer perceptrons, radial basis functions and support vector machines for supervised classification, and self-organizing maps and the KMIX and WKMIX algorithms for unsupervised clustering. The Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity (POSSUM) and the Portsmouth POSSUM (PPOSSUM) are introduced as the risk scoring systems used in British surgery, which provide a tool for risk adjustment and comparative audit. These systems cannot detect all possible interactions between predictor variables, whereas this may be possible through the use of pattern recognition techniques. The thesis presents KMIX and WKMIX as improvements of the K-means algorithm; both use Euclidean and Hamming distances to measure the dissimilarity between patterns and their centres. WKMIX improves on the KMIX algorithm by utilising attribute weights derived from mutual information values, calculated from a combination of Bayes' theorem, entropy, and the Kullback-Leibler divergence. The research in this thesis suggests that a decision support system for cardiovascular medicine can be built utilising the studied risk prediction models and pattern recognition techniques. The same may be true for other medical domains.
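The sketch below illustrates the kind of mixed dissimilarity the abstract attributes to KMIX/WKMIX: Euclidean distance over numeric attributes plus Hamming distance over categorical ones, with optional per-attribute weights such as those derived from mutual information. The exact combination rule and weighting scheme of KMIX/WKMIX are not stated in the abstract, so this particular formula is an assumption, not the thesis's definition.

```python
# Illustrative mixed dissimilarity (Euclidean over numeric attributes plus
# Hamming over categorical ones, with optional attribute weights). The exact
# KMIX/WKMIX combination rule is an assumption here, not the thesis's formula.
import math

def mixed_distance(x, centre, numeric_idx, categorical_idx, weights=None):
    if weights is None:
        weights = {i: 1.0 for i in numeric_idx + categorical_idx}
    euclidean_sq = sum(weights[i] * (x[i] - centre[i]) ** 2 for i in numeric_idx)
    hamming = sum(weights[i] * (x[i] != centre[i]) for i in categorical_idx)
    return math.sqrt(euclidean_sq) + hamming

# Example: a patient record with two numeric and two categorical attributes
# compared against a cluster centre.
patient = [63.0, 142.0, "male", "smoker"]
centre  = [58.0, 130.0, "male", "non-smoker"]
print(mixed_distance(patient, centre, numeric_idx=[0, 1], categorical_idx=[2, 3]))
```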

65. Multi-objective optimisation of safety-critical hierarchical systems. Parker, David James. January 2010.
Achieving high reliability, particularly in safety-critical systems, is an important and often mandatory requirement. At the same time, costs should be kept as low as possible. Finding an optimum balance between maximising a system's reliability and minimising its cost is a hard combinatorial problem. As the size and complexity of a system increases, so does the scale of the problem faced by the designers. To address these difficulties, meta-heuristics such as Genetic Algorithms and Tabu Search have been applied in the past to automatically determine the optimal allocation of redundancies in a system as a mechanism for optimising its reliability and cost characteristics. In all cases, simple reliability block diagrams with restrictive assumptions, such as failure independence and limited two-state failure modes, were used for evaluating the reliability of the candidate designs produced by the various algorithms. This thesis argues that a departure from this restrictive evaluation model is possible by using a new model-based reliability evaluation technique called Hierarchically Performed Hazard Origin and Propagation Studies (HiP-HOPS). HiP-HOPS can overcome the limitations imposed by reliability block diagrams by providing automatic analysis of complex engineering models with multiple failure modes. The thesis demonstrates that, used as the fitness-evaluating component of a multi-objective Genetic Algorithm, HiP-HOPS can solve the problem of redundancy allocation effectively and with relative efficiency. Furthermore, the ability of HiP-HOPS to model and automatically analyse complex engineering models with multiple failure modes allows the Genetic Algorithm to optimise systems using more flexible strategies, not just series-parallel redundancy. The results of this thesis show the feasibility of the approach and point to a number of directions for future work.
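To illustrate the underlying optimisation problem, the sketch below sets up a classical series-parallel redundancy allocation with a reliability/cost trade-off and keeps a Pareto front of non-dominated designs. It deliberately uses the simple independent-failure block model that the thesis moves beyond (in the thesis, candidate designs are instead evaluated by HiP-HOPS on full fault-propagation models), a random search stands in for the Genetic Algorithm, and the component figures are made up.

```python
# Sketch of series-parallel redundancy allocation with a reliability/cost
# trade-off. Simple independent-failure block model and random search only;
# the thesis uses HiP-HOPS as the fitness evaluator of a multi-objective GA.
import random

COMPONENT_RELIABILITY = [0.90, 0.95, 0.85]   # per-stage component reliability (invented)
COMPONENT_COST        = [4.0, 7.0, 3.0]      # per-component cost (invented)

def evaluate(allocation):
    """Return (unreliability, cost) for a redundancy allocation vector."""
    reliability = 1.0
    for r, n in zip(COMPONENT_RELIABILITY, allocation):
        reliability *= 1.0 - (1.0 - r) ** n   # n parallel copies in a series stage
    cost = sum(c * n for c, n in zip(COMPONENT_COST, allocation))
    return 1.0 - reliability, cost

def dominates(a, b):
    """Pareto dominance: a is no worse in both objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

random.seed(1)
front = {}
for _ in range(2000):
    alloc = tuple(random.randint(1, 4) for _ in COMPONENT_RELIABILITY)
    obj = evaluate(alloc)
    if not any(dominates(o, obj) for o in front.values()):
        front = {a: o for a, o in front.items() if not dominates(obj, o)}
        front[alloc] = obj

for alloc, (unrel, cost) in sorted(front.items(), key=lambda kv: kv[1][1]):
    print(alloc, f"unreliability={unrel:.4f}", f"cost={cost:.1f}")
```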

66. Integrated application of compositional and behavioural safety analysis. Sharvia, Septavera. January 2011.
To address challenges arising in the safety assessment of critical engineering systems, research has recently focused on automating the synthesis of predictive models of system failure from design representations. In one approach, known as compositional safety analysis, system failure models such as fault trees and Failure Modes and Effects Analyses (FMEAs) are constructed from component failure models using a process of composition. Another approach has looked into automating system safety analysis via application of formal verification techniques, such as model checking, on behavioural models of the system represented as state automata. So far, compositional safety analysis and formal verification have been developed separately and seen as two competing paradigms for model-based safety analysis. This thesis shows that it is possible to move the terms of this debate forward and use the two paradigms synergistically in the context of an advanced safety assessment process. The thesis develops a systematic approach in which compositional safety analysis provides the basis for the systematic construction and refinement of state automata that record the transition of a system from normal to degraded and failed states. These state automata can be further enhanced and then model-checked to verify the satisfaction of safety properties. The development of such models in current practice is ad hoc and relies only on expert knowledge; in the proposed approach it is rationalised and systematised, which is a key contribution of this thesis. Overall, the approach combines the advantages of compositional safety analysis, such as simplicity, efficiency and scalability, with the benefits of formal verification, such as the ability to automatically verify safety requirements on dynamic models of the system, and leads to an improved model-based safety analysis process. In the context of this process, a novel generic mechanism is also proposed for modelling the detectability of errors which typically arise as a result of component faults and then propagate through the architecture. This mechanism is used to derive analyses that can aid decisions on appropriate detection and recovery mechanisms in the system model. The thesis starts with an investigation of the potential for useful integration of compositional and formal safety analysis techniques. The approach is then developed in detail and guidelines for analysis and refinement of system models are given. Finally, the process is evaluated in three case studies, performed iteratively on increasingly refined and improved models of aircraft and automotive braking and cruise control systems. In the light of the results of these studies, the thesis concludes that integration of compositional and formal safety analysis techniques is feasible and potentially useful in the design of safety-critical systems.
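The sketch below gives a minimal illustration of the kind of degradation automaton and exhaustive check described above: a two-channel system moves from normal to degraded to failed states as component faults occur, and an invariant is verified over all reachable states. The automaton, events and property are invented for illustration; in the thesis such models are derived systematically from compositional safety analysis and verified with a model checker rather than a hand-rolled search.

```python
# Minimal degradation automaton plus exhaustive reachability check of a safety
# property. All modelling details are invented for illustration only.
from collections import deque

# A state is (mode, primary_ok, backup_ok).
def transitions(state):
    mode, primary_ok, backup_ok = state
    if primary_ok:
        yield "primary_fault", ("Degraded" if backup_ok else "Failed", False, backup_ok)
    if backup_ok:
        yield "backup_fault", ("Degraded" if primary_ok else "Failed", primary_ok, False)

def invariant(state):
    mode, primary_ok, backup_ok = state
    # Safety property: the system is declared Failed only when both channels are lost.
    return mode != "Failed" or (not primary_ok and not backup_ok)

initial = ("Normal", True, True)
seen, queue = {initial}, deque([initial])
while queue:
    state = queue.popleft()
    assert invariant(state), f"property violated in {state}"
    for _event, nxt in transitions(state):
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)
print(f"checked {len(seen)} reachable states; property holds")
```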

67. Specification and use of component failure patterns. Wolforth, Ian Philip. January 2010.
Safety-critical systems are typically assessed for their adherence to specified safety properties. They are studied down to the component level to identify the root causes of any hazardous failures. Most recent work on model-based safety analysis has focused on improving system modelling techniques and the algorithms used for automatic analyses of failure models. However, few developments have been made to improve the scope of reusable analysis elements within these techniques. The failure behaviour of components is typically specified in a way that limits the applicability of such specifications across applications. The thesis argues that, by allowing more general expressions of failure behaviour, identifiable patterns of failure behaviour could be specified for use within safety analyses and reused across systems and applications where the conditions that allow such reuse are present.

This thesis presents a novel Generalised Failure Language (GFL) for the specification and use of component failure patterns. Current model-based safety analysis methods are investigated to examine the scope and limits of achievable reuse within their analyses. One method, HiP-HOPS, is extended to demonstrate the application of GFL and the use of component failure patterns in the context of automated safety analysis. A managed approach to performing reuse is developed alongside the GFL to create a method for more concise and efficient safety analysis. The method is then applied to a simplified fuel supply system and a vehicle braking system, as well as to a set of legacy models that had previously been analysed using classical HiP-HOPS. The proposed GFL method is finally compared against classical HiP-HOPS, and in the light of this study the benefits and limitations of the approach are discussed in the conclusions.
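The sketch below illustrates the reuse idea only: a failure-behaviour pattern written once over generic ports and instantiated for components with different numbers of inputs. The real GFL syntax and semantics are defined in the thesis and are not reproduced here; the pattern and the expression format below are invented for the example.

```python
# Illustration of pattern-based failure specification: one parameterised
# pattern instantiated per component. The expression format is invented and
# is not the GFL syntax defined in the thesis.
def instantiate_omission_pattern(component, inputs, outputs):
    """For each output, omission is caused by an internal failure or by
    omission of every input (a common propagation pattern)."""
    cause = " AND ".join(f"Omission-{component}.{i}" for i in inputs) or "false"
    return {
        f"Omission-{component}.{o}": f"InternalFailure-{component} OR ({cause})"
        for o in outputs
    }

# The same pattern reused across components with different port counts:
print(instantiate_omission_pattern("Valve",  ["in"],         ["out"]))
print(instantiate_omission_pattern("Merger", ["in1", "in2"], ["out"]))
```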

68. Offshore marine visualization. Chapman, Paul M. January 2003.
In 85 B.C. a Greek philosopher called Posidonius set sail to answer an age-old question: how deep is the ocean? By lowering a large rock tied to a very long length of rope he determined that the ocean was 2 km deep. These line-and-sinker methods were used until the 1920s, when oceanographers developed the first echo sounders that could measure the water's depth by reflecting sound waves off the seafloor. The subsequent increase in sonar depth soundings resulted in oceanographers finally being able to view the alien underwater landscape. Paper printouts and records dominated the industry for decades until the mid 1980s, when new digital sonar systems enabled computers to process and render the captured data streams. In the last five years, the offshore industry has been particularly slow to take advantage of the significant advancements made in computer and graphics technologies. Contemporary marine visualization systems still use outdated 2D representations of vessels positioned on digital charts, and the potential for using 3D computer graphics for interacting with multidimensional marine data has not been fully investigated. This thesis is concerned with the issues surrounding the visualization of offshore activities and data using interactive 3D computer graphics. It describes the development of a novel 3D marine visualization system and a subsequent study of marine visualization techniques through a number of offshore case studies that typify the marine industry. The results of this research demonstrate that presenting the offshore engineer or office-based manager with a more intuitive and natural 3D computer-generated viewing environment enables complex offshore tasks, activities and procedures to be more readily monitored and understood. The marine visualizations presented in this thesis take advantage of recent advancements in computer graphics technology and our extraordinary ability to interpret 3D data. These visual enhancements have improved offshore staff's spatial and temporal understanding of marine data, resulting in improved planning, decision making and real-time situation awareness of complex offshore data and activities.

69. Hybrid framework for dynamic position determination in multisensor environments. Liimatainen, Saana Pauliina. January 2009.
Information about a user's context is crucial to achieving the goal of ubiquitous computing. This thesis introduces a new approach to a special case of context: location information. Making devices location-aware is the first step in providing context-based services. Existing technologies for position determination are ill suited in terms of interoperability and heterogeneity. Furthermore, they either rely on vast and often expensive infrastructures to perform the position estimation or, alternatively, the mobile device is burdened with the responsibility of localising itself. Both of the current approaches have their trade-offs. The basis of this work is to maximise the availability of positioning services, allowing mobility between different environments and surroundings, while minimising the vulnerabilities of existing approaches. The work presents a managed positioning environment for indoor and outdoor surroundings, in which accuracy and precision can be improved by using a mix of fixed sensors and the sensing capabilities of mobile devices in a way that allows the transformation of proximity data into absolute coordinates. It is believed that this also improves the availability of the positioning service, as partial, imprecise or incomplete data is utilised rather than discarded. The use of wireless local area networks along with PDAs, mobile phones and similar devices, as opposed to custom sensors, ensures that maintenance and administrative costs are kept to a minimum. Furthermore, a dynamic feedback system is proposed in order to minimise deployment and initialisation effort by allowing the refinement of location information in fixed sensors.
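As a simple illustration of turning proximity observations into absolute coordinates, the sketch below computes a weighted centroid of known beacon positions, giving nearer (stronger) beacons more weight and simply ignoring missing observations rather than discarding the whole estimate. This is one common baseline, not the hybrid fusion framework developed in the thesis, and the access-point layout and weights are invented.

```python
# Weighted-centroid positioning from proximity data: an illustrative baseline,
# not the thesis's hybrid framework. Beacon layout and weights are invented.
def weighted_centroid(beacons, observations):
    """beacons: name -> (x, y); observations: name -> proximity weight."""
    pairs = [(beacons[n], w) for n, w in observations.items() if n in beacons and w > 0]
    if not pairs:
        return None                      # no usable data: position unknown
    total = sum(w for _, w in pairs)
    x = sum(p[0] * w for p, w in pairs) / total
    y = sum(p[1] * w for p, w in pairs) / total
    return (x, y)

access_points = {"AP1": (0.0, 0.0), "AP2": (10.0, 0.0), "AP3": (0.0, 10.0)}
# Partial, imprecise data (AP3 not heard at all) still yields an estimate.
print(weighted_centroid(access_points, {"AP1": 0.7, "AP2": 0.3}))
```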

70. Global inference for sentence compression : an integer linear programming approach. Clarke, James. January 2008.
In this thesis we develop models for sentence compression. This text rewriting task has recently attracted a lot of attention due to its relevance for applications (e.g., summarisation) and its simple formulation by means of word deletion. Previous models for sentence compression have been inherently local and thus fail to capture the long-range dependencies and complex interactions involved in text rewriting. We present a solution by framing the task as an optimisation problem with local and global constraints and recast existing compression models into this framework. Using the constraints we instil syntactic, semantic and discourse knowledge that the models otherwise fail to capture. We show that the addition of constraints allows relatively simple local models to reach state-of-the-art performance for sentence compression. The thesis provides a detailed study of sentence compression and its models. The differences between automatically and manually created compression corpora are assessed, along with how compression varies across written and spoken text. We also discuss various techniques for automatically and manually evaluating compression output against a gold standard. Models are reviewed based on their assumptions, training requirements, and scalability. We introduce a general method for extending previous approaches to allow for more global models. This is achieved through the optimisation framework of Integer Linear Programming (ILP). We reformulate three compression models, an unsupervised model, a semi-supervised model and a fully supervised model, as ILP problems and augment them with constraints. These constraints are intuitive for the compression task and are both syntactically and semantically motivated. We demonstrate how they improve compression quality and reduce the requirements on training material. Finally, we delve into document compression, where the task is to compress every sentence of a document and use the resulting summary as a replacement for the original document. For document-based compression we investigate discourse information and its application to the compression task. Two discourse theories, Centering and lexical chains, are used to automatically annotate documents. These annotations are then used in our compression framework to impose additional constraints on the resulting document. The goal is to preserve the discourse structure of the original document and most of its content. We show how a discourse-informed compression model can outperform a discourse-agnostic state-of-the-art model using a question answering evaluation paradigm.
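The sketch below shows a toy version of the ILP formulation described above: one binary variable per word (1 = keep), an objective that rewards keeping high-scoring words, and hard constraints on compression length and on keeping a dependent only if its head is kept. The word scores, the tiny dependency structure and the particular constraints are invented for illustration; the thesis's models and constraint sets are far richer. The example assumes the PuLP library is available as the ILP solver.

```python
# Toy ILP sentence compression: binary keep/delete variables, an importance
# objective, a length limit and head-dependent constraints. Scores and the
# dependency structure are invented. Requires PuLP (pip install pulp).
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

words  = ["the", "very", "old", "committee", "finally", "approved", "the", "plan"]
scores = [0.1,   0.2,    0.4,   0.9,         0.3,       1.0,        0.1,   0.8]
# (dependent_index, head_index): a dependent may be kept only if its head is.
dependencies = [(1, 2), (2, 3), (0, 3), (4, 5), (6, 7), (7, 5), (3, 5)]

prob = LpProblem("sentence_compression", LpMaximize)
keep = [LpVariable(f"keep_{i}", cat="Binary") for i in range(len(words))]
prob += lpSum(s * k for s, k in zip(scores, keep))   # maximise importance of kept words
prob += lpSum(keep) <= 5                             # global compression length limit
prob += keep[5] == 1                                 # keep the main verb
for dep, head in dependencies:
    prob += keep[dep] <= keep[head]                  # syntactically motivated constraint

prob.solve()
print(" ".join(w for w, k in zip(words, keep) if value(k) == 1))
```

With these invented scores the solver keeps "old committee finally approved plan", dropping determiners and the intensifier while respecting the dependency constraints.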