91. The use of formal methods for safety-critical systems
Trafford, Paul Joseph, January 1997
An investigation is presented into the use of formal methods for the production of safety-critical systems with embedded software. New theory and procedures are tested on an industrial case study, the formal specification and refinement of a communications protocol for medical devices (the Universal Flexport protocol [copyright]). On reviewing the current literature, a strong case emerges for grounding any work within an overall perspective that integrates the experience of safety engineering and the correctness of formal methods. Such a basis, it is argued, is necessary for an effective contribution to the delivery with assurance of life-critical software components. Hence, a safety-oriented framework is proposed which facilitates a natural flow from safety analysis of the entire system through to formal requirements, design, verification and validation for a software model undergoing refinement towards implementation. This framework takes a standard safety lifecycle model and considers where and how formal methods can play a part, resulting in procedures which emphasise the activities most amenable to formal input. Next, details of the framework are instantiated, based upon the provision of a common formal semantics to represent both the safety analysis and software models. A procedure, FTBuild, is provided for deriving formal requirements as part of the process of generating formalised fault trees. Work is then presented on establishing relations between formalised fault trees and models, extending results of other authors. Also given are some notions of (property) conformance with respect to the given requirements. The formal approach itself is supported by the enhancement of the theory of conformance testing that has been developed for communication systems. The basis of this work is the detailed integration of already established theories: a testing system for process algebra (the Experimental System due to Hennessy and de Nicola) and a more general observation framework (developed by the LOTOSphere consortium). Notions of conformance and robustness are then examined in the context of refinement for the process algebra (Basic) LOTOS, resulting in the adoption of the commonly accepted 'reduction' relation, for which a proof is given that it is testable. Then a new algorithm is developed for a single (canonical) tester for reduction, which is unified in that it tests simultaneously for both conformance and robustness. It also allows, in certain cases, a straightforward implementation as a Full LOTOS process with the ability to give some diagnostics in the case of failure. The text is supported by examples and some guidelines for use. Finally, having established these foundations, the methodology is demonstrated on the Flexport protocol through two iterations of FTBuild, which demonstrate how the activities of specification, safety analysis, validation and refinement are all brought together.
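The 'reduction' relation mentioned above combines trace inclusion with a conformance condition on what may be refused. As a much-simplified illustration of the trace-inclusion ingredient only (not the thesis' LOTOS-based testing theory), the sketch below checks that every trace of a small finite implementation is also a trace of its specification; the transition-dictionary encoding and the example processes are hypothetical.

```python
from collections import deque

def traces_included(impl, spec, impl_init, spec_init):
    """Check traces(impl) is a subset of traces(spec) for finite labelled transition systems.

    impl, spec: dict mapping state -> list of (action, next_state) pairs.
    The specification is determinised on the fly (sets of states) and explored
    breadth-first alongside the implementation.
    """
    start = (impl_init, frozenset([spec_init]))
    seen, queue = {start}, deque([start])
    while queue:
        i_state, s_states = queue.popleft()
        for action, i_next in impl.get(i_state, []):
            # Specification states reachable by the same action from the current set.
            s_next = frozenset(t for s in s_states
                               for a, t in spec.get(s, []) if a == action)
            if not s_next:
                return False  # the implementation offers a trace the spec cannot perform
            pair = (i_next, s_next)
            if pair not in seen:
                seen.add(pair)
                queue.append(pair)
    return True

# Tiny example: the implementation omits the 'retry' branch the specification allows.
spec = {'s0': [('send', 's1')], 's1': [('ack', 's0'), ('retry', 's0')]}
impl = {'i0': [('send', 'i1')], 'i1': [('ack', 'i0')]}
print(traces_included(impl, spec, 'i0', 's0'))  # True
```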
92. A framework for semantic information search and discovery in enterprise e-commerce applications
John, Biju, January 2011
The semantic web is an extension of the current web which is envisaged to offer a machine-readable information space, where intelligent engines perform sophisticated tasks that can help in managing the massive and complex information on the web. The shift towards the use of the semantic web and semantic information is important in many areas including E-commerce enterprise applications. However, major issues remain, particularly in terms of semantic information grouping, maintenance, storage and the automated reasoning and retrieval support that is required for the verification, ranking and validation of E-commerce products and services. Moreover, many E-business and E-commerce approaches are deficient in a number of ways. These include the lack of semantic guidelines for building websites into a business and the lack of semantic information search and real-time information, leading to a lack of appropriate levels of interaction between the customers and the entrepreneur. Hence, the main aim of this research is to develop a framework for semantic information search and discovery in E-commerce enterprise applications. The proposed framework offers a new approach for the semantic grouping, storing, retrieval and ranking of information. Thus, to achieve its aims, the research work started with an investigation into existing E-commerce websites and their evaluation using a software quality model, which showed that many sites suffer from a lack of appropriate information linkage and real-time interaction that are required for building semantic E-commerce systems and for information searches. To address these issues a semantic architecture including a semantic layer with an intelligent network-based engine is proposed and implemented in this thesis. The architecture supports an automated semantic grouping and retrieval system and produces semantic ranking of information while continuously checking the information semantically and suggesting possible enhancements to the user. The experimental evaluation results, using a case study based on an E-commerce business application, demonstrated the benefits of the approach particularly in terms of semantic search and semantic ranking of the products. Moreover, although this information search approach was applied in the context of E-commerce, it is generic, which makes it appropriate for many online applications and websites.
93. Ultrasound image analysis of the carotid artery
Loizou, Christos P., January 2005
Stroke is one of the most important causes of death in the world and the leading cause of serious, long-term disability. There is an urgent need for better techniques to diagnose patients at risk of stroke based on the measurements of the intima media thickness (IMT) and the segmentation of the atherosclerotic carotid plaque. The objective of this work was to carry out a comparative evaluation of despeckle filtering on ultrasound images of the carotid artery, and to develop a new segmentation system for detecting the IMT of the common carotid artery and the borders of the atherosclerotic carotid plaque in longitudinal ultrasound images of the carotid artery. To the best of our knowledge no similar system has been developed for segmenting the atherosclerotic carotid plaque, although a number of techniques have been proposed for IMT segmentation. A total of 11 despeckle filtering methods were evaluated based on texture analysis, image quality evaluation metrics, and visual evaluation made by two experts, on 440 ultrasound images of the carotid artery bifurcation. Furthermore, the proposed IMT and plaque segmentation techniques were evaluated on 100 and 80 longitudinal ultrasound images of the carotid bifurcation respectively, based on receiver operating characteristic (ROC) analysis. The despeckle filtering results showed that a despeckle filter based on local statistics (lsmv) improved the class separation between asymptomatic and symptomatic classes, gave only a marginal improvement in the percentage of correct classifications, and improved the visual assessment carried out by the experts. It was also found that the lsmv despeckle filter can be used for despeckling asymptomatic images where the expert is interested mainly in the plaque composition and texture analysis, whereas a geometric despeckle filter (gf4d) can be used for despeckling of symptomatic images where the expert is interested in identifying the degree of stenosis and the plaque borders. The IMT snakes segmentation results showed that no significant difference was found between the manual and the snakes segmentation measurements. Better segmentation results were obtained for the normalized despeckled images. The plaque segmentation results showed that the Lai & Chin snakes segmentation method gives results comparable to the manual delineation procedure. The IMT and plaque snakes segmentation method may therefore be used to complement and assist the final expert's evaluation. The proposed despeckling and segmentation methods will be further evaluated on a larger number of ultrasound images and with multiple experts' evaluations. Furthermore, it is expected that both methods will be incorporated into an integrated system enabling the texture analysis of the segmented plaque, providing an automated system for the early diagnosis and the assessment of the risk of stroke.
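As a rough illustration of a local-statistics despeckle filter of the lsmv kind evaluated above, the sketch below applies the classic Lee-type weighting f = m + k(g - m), where the local mean m and variance are taken over a sliding window and the speckle noise variance is estimated globally; the exact weighting, window size and noise estimate used in the thesis may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_statistics_despeckle(image, window=5):
    """Lee-type local-statistics filter: f = mean + k * (pixel - mean)."""
    img = image.astype(np.float64)
    mean = uniform_filter(img, window)
    sq_mean = uniform_filter(img ** 2, window)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    noise_var = np.mean(var)               # crude global estimate of speckle variance
    k = var / (var + noise_var + 1e-12)    # small weight in flat regions, near 1 at edges
    return mean + k * (img - mean)

# Example on a synthetic speckled image (multiplicative gamma-distributed noise).
rng = np.random.default_rng(0)
clean = np.full((128, 128), 0.2)
clean[32:96, 32:96] = 1.0
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
filtered = local_statistics_despeckle(speckled, window=7)
```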
94. A distributed imaging framework for the analysis and visualization of multi-dimensional bio-image datasets, in high content screening applications
Mclay, Colin Anthony, January 2015
This research presents the DFrame, a modular and extensible distributed framework that simplifies and thus encourages the use of parallel processing, and that is especially targeted at the analysis and visualization of multi-dimensional bio-image datasets in high content screening applications. These applications typically apply pipelines of complex and time consuming algorithms to multiple bio-image dataset streams, and it is highly desirable to use parallel resources to exploit the inherent concurrency, in order to achieve results in much reduced time scales. The DFrame allows pluggable extension and reuse of models implementing parallelizing patterns, and similarly provides for application extensibility. This facilitates the composition of novel parallelized 3D image processing applications. A client server architecture is adopted to support both batch and long running interactive sessions. The DFrame client provides functions to author applications as workflows, and mediates interaction with the server. The DFrame server runs as multiple cooperating distributed instances that together orchestrate the execution of tasks according to a workflow's implied order. An inversion of control paradigm is used to drive the loading and running of the models that themselves then coordinate to load and parallelize the running of each task specified in a workflow. The design opens up the opportunity to incorporate advanced management features, including parallel pattern selection based on application context, dynamic 'in application' resource allocation, and adaptable partitioning and composition strategies. Generic partitioning and composition mechanisms for supporting both task and data parallelism are provided, with specific implementation support applicable to the domain of 3D image processing. Evaluations of the DFrame are conducted at the component level, where specific parallelizing models are applied to discrete 3D image filtering and segmentation operators and to a ray tracing implementation. A complete integrated case study is then presented that composes component entities into multiple image processing pipelines to more fully demonstrate the power and utility of the DFrame, not only in terms of performance, but also to highlight the extensibility and adaptability that permeates the design, and its applicability to the domain of multi-dimensional image processing. Results are discussed that evidence the utility of the approach, and avenues of future work are considered.
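The partition-and-compose pattern behind the DFrame's data parallelism can be sketched, in a greatly reduced form, with Python's standard multiprocessing pool: a 3D image stack is split into chunks of slices, a filtering task runs on each chunk in parallel, and the results are composed back into a single volume. The chunking scheme and the Gaussian filter are illustrative choices only, not the framework's actual models, and chunk-boundary effects are ignored here.

```python
import numpy as np
from multiprocessing import Pool
from scipy.ndimage import gaussian_filter

def partition(volume, n_chunks):
    """Split a 3D stack into contiguous chunks along the z-axis."""
    return np.array_split(volume, n_chunks, axis=0)

def process_chunk(chunk):
    """Per-chunk task: a simple 3D Gaussian smoothing of the sub-volume."""
    return gaussian_filter(chunk, sigma=1.0)

def compose(chunks):
    """Reassemble processed chunks into one volume."""
    return np.concatenate(chunks, axis=0)

if __name__ == "__main__":
    volume = np.random.rand(64, 256, 256)            # (z, y, x) image stack
    with Pool(processes=4) as pool:
        results = pool.map(process_chunk, partition(volume, 4))
    smoothed = compose(results)
    print(smoothed.shape)                            # (64, 256, 256)
```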
95. Multiple Action Recognition for Video Games (MARViG)
Bloom, Victoria, January 2015
Action recognition research historically has focused on increasing accuracy on datasets in highly controlled environments. Perfect or near perfect offline action recognition accuracy on scripted datasets has been achieved. The aim of this thesis is to deal with the more complex problem of online action recognition with low latency in real world scenarios. To fulfil this aim, two new multi-modal gaming datasets were captured and three novel algorithms for online action recognition were proposed. Two new gaming datasets, G3D and G3Di, for real-time action recognition with multiple actions and multi-modal data were captured and publicly released. Furthermore, G3Di was captured using a novel game-sourcing method so the actions are realistic. Three novel algorithms for online action recognition with low latency were proposed. Firstly, Dynamic Feature Selection, which combines the discriminative power of Random Forests for feature selection with an ensemble of AdaBoost classifiers for dynamic classification. Secondly, Clustered Spatio-Temporal Manifolds, which modelled the dynamics of human actions with style-invariant action templates that were combined with Dynamic Time Warping for execution rate invariance. Finally, a Hierarchical Transfer Learning framework, comprised of a novel transfer learning algorithm to detect compound actions in addition to hierarchical interaction detection to recognise the actions and interactions of multiple subjects. The proposed algorithms run in real-time with low latency, ensuring they are suitable for a wide range of natural user interface applications including gaming. State-of-the-art results were achieved for online action recognition. Experimental results indicate the higher complexity of the G3Di dataset in comparison to existing gaming datasets, highlighting the importance of this dataset for designing algorithms suitable for realistic interactive applications. This thesis has advanced the study of realistic action recognition and is expected to serve as a basis for further study within the research community.
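A minimal sketch of the feature-selection idea behind Dynamic Feature Selection, pairing Random Forest feature importances with an AdaBoost classifier, is given below using scikit-learn; the per-frame features, the selection threshold and the way the ensemble is applied online in the thesis are more elaborate, and the synthetic data here is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-frame skeletal features (e.g. joint positions and velocities).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 60))
y = rng.integers(0, 5, size=2000)              # five hypothetical action classes
X[:, :10] += y[:, None] * 0.5                  # make the first ten features informative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Step 1: a Random Forest ranks the features by importance.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
selected = np.argsort(rf.feature_importances_)[-15:]     # keep the 15 strongest features

# Step 2: an AdaBoost ensemble classifies using only the selected features.
ada = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr[:, selected], y_tr)
print("accuracy:", ada.score(X_te[:, selected], y_te))
```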
96. The acquisition and representation of knowledge about complex multi-dynamic processes
Grau, Ron, January 2009
This thesis is concerned with the acquisition, representation, modelling and discovery of knowledge in ill-structured domains. In the context of this work, these are referred to as domains that involve "complex multi-dynamic (CMD) processes". A CMD process is an abstract concept for thinking about combinations of different processes where any specification and explanation involves large amounts of heterogeneous knowledge. Due to manifold cognitive and representational problems, this particular knowledge is currently hard to acquire from experts and difficult to integrate in process models. The thesis focuses on two problems in the context of modelling, discovery and design of CMD processes, a knowledge representation problem and a knowledge acquisition problem. The thesis outlines a solution by drawing together different theoretical and technological developments related to the fields of Artificial Intelligence, Cognitive Science and Computer Science, including research on computational models of scientific discovery, process modelling, and representation design. An integrative framework of knowledge representations and acquisition methods has been established, underpinning a general paradigm of CMD processes. The framework takes a compositional, collaborative approach to knowledge acquisition by providing methods for the decomposition of complex process combinations into systems of process fragments and the localisation of structural change, process behaviour and function within these systems. Diagrammatic representations play an important role, as they provide a range of representational, cognitive and computational properties that are particularly useful for meeting many of the difficulties that CMD processes pose. The research has been applied to Industrial Bakery Product Manufacturing, a challenging domain that involves a variety of physical, chemical and biochemical process combinations. A software prototype (CMD SUITE) has been implemented that integrates the developed theoretical framework to create novel, interactive knowledge-based tools which are aimed towards ill-structured domains of knowledge. The utility of the software workbench and its underlying CMD Framework has been demonstrated in a case study. The bakery experts collaborating in this project were able to successfully utilise the software tools to express and integrate their knowledge in a new way, while overcoming limits of previously used models and tools.
97. Sequential frame synchronization over binary symmetrical channel for unequally distributed data symbols
Isawhe, Boladale Modupe, January 2017
Frame synchronization is a critical task in digital communications receivers as it enables the accurate decoding and recovery of transmitted information. Information transmitted over a wireless channel is represented as a bit stream. The bit stream is typically organized into groups of bits which can be of the same or variable length, known as frames, with frames being demarcated prior to transmission by a known bit sequence. The task of the frame synchronizer in the receiver is then to correctly determine frame boundaries, given that the received bit stream is a possibly corrupted version of the transmitted bit stream due to the error-prone nature of the wireless channel. Bearing in mind that the problem of frame synchronization has been studied extensively for wireless communications where frames have a known, constant length, this thesis examines and seeks to make a contribution to the problem of frame synchronization where frames are of variable, unknown lengths. This is a common occurrence in the transmission of multimedia information and in packet or burst mode communications. Furthermore, a uniform distribution of data symbols is commonly assumed in frame synchronization works as this simplifies analysis. In many practical situations, however, this assumption may not hold true. An example is in bit streams generated in video sequences encoded through the discrete cosine transform (DCT) and also in more recent video coding standards (H.264). In this work, we therefore propose a novel, optimal frame synchronization metric for transmission over a binary symmetric channel (BSC) with a known, unequal source data symbol distribution, and where frames are of unknown, varying lengths. We thus extend prior studies carried out for the additive white Gaussian noise (AWGN) channel. We also provide a performance evaluation for the derived metric, using simulations and by exact mathematical analysis. In addition, we provide an exact analysis for the performance evaluation of the commonly used hard correlation (HC) metric in the case where data symbols have a known, unequal distribution, which hitherto has not been available in the literature. We thus compare the performance of our proposed metric with that of the HC metric. Finally, the results of our study are applied to the investigation of cross-layer frame synchronization in the transmission of H.264 video over a Worldwide Interoperability for Microwave Access (WiMAX) system. We thus demonstrate that prior knowledge of the source data distribution can be exploited to enhance frame synchronization performance for the cases where hard decision decoding is desirable.
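The hard correlation (HC) metric referred to above can be sketched as a simple bit-agreement count between the known sync word and every candidate offset of the received stream, with the highest-scoring offset declared the frame boundary. The sketch below is illustrative only; the optimal metric proposed in the thesis additionally accounts for the known, unequal distribution of the data symbols around each candidate position, which is not reproduced here.

```python
import numpy as np

def hard_correlation_sync(received, sync_word):
    """Return the offset whose bits agree most with the sync word, plus all scores."""
    received = np.asarray(received)
    sync_word = np.asarray(sync_word)
    n, L = len(received), len(sync_word)
    scores = [int(np.sum(received[k:k + L] == sync_word)) for k in range(n - L + 1)]
    return int(np.argmax(scores)), scores

# Example: a 16-bit sync word embedded at offset 37, observed through a BSC with 5% bit flips.
rng = np.random.default_rng(1)
sync = rng.integers(0, 2, 16)
stream = rng.integers(0, 2, 200)
stream[37:37 + 16] = sync
received = stream ^ (rng.random(200) < 0.05).astype(int)
offset, _ = hard_correlation_sync(received, sync)
print(offset)   # 37 with high probability
```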
98. Pedestrian detection and re-identification in surveillance video
Pedagadi, Sateesh, January 2016
The detection and re-identification of pedestrians is an important component of the automated analysis of surveillance video. The main challenge addressed herein is the development of accurate and efficient algorithms for these two tasks, using oversampled, redundant sets of features to which machine learning algorithms may be applied. For pedestrian detection, Integral Line Scan Features (ILF) are developed as a means of generating such a pool of features. A machine learning selection procedure can be used to derive an appropriately weighted subset of features. One advantage provided by the integral design is that dense sampling of the image is relatively efficient, since the integral features are calculated only once for the entire image. Another advantage provided by the feature characteristics is a relatively consistent performance across different feature scales, which obviates the need to generate a scaled image pyramid. Methods for pedestrian re-identification, using redundant feature sets, are also investigated. It is hypothesised that performance can be improved by simultaneously expressing features in two alternative colour spaces, using both to learn a transformation into a metric space well suited to the task. Experiments are presented that confirm this hypothesis, using a novel adaptation of the Local Fisher (LF) machine learning approach. A separate contribution to the re-identification problem is the development of a method, SELF, that uses multiple classifiers. Each classifier is assigned to a given category of difficulty (of re-identification). It was hypothesised that such a method would improve performance, and experiments were devised to verify this idea. A final contribution is the analysis of pedestrian re-identification performance metrics from an information-theoretic perspective, and the proposal of a metric that measures the proportion of uncertainty removed (PUR). This metric can be applied to represent pedestrian re-identification performance. The thesis concludes with a discussion of implications and future extensions.
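The efficiency argument for integral features rests on cumulative sums being computed once per image, after which the sum over any rectangle (or, for line-scan features, any line segment) is available in constant time. A generic integral-image sketch is given below; the actual Integral Line Scan Features of the thesis are a specific feature design and are not reproduced here.

```python
import numpy as np

def integral_image(img):
    """Cumulative sums over rows and columns, zero-padded so ii[y, x] = sum of img[:y, :x]."""
    ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)), mode="constant")

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom, left:right] from four integral-image lookups (O(1))."""
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]

img = np.arange(24, dtype=float).reshape(4, 6)
ii = integral_image(img)
assert rect_sum(ii, 1, 2, 3, 5) == img[1:3, 2:5].sum()
```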
99. Novel game theoretic frameworks for security risk assessment in cloud environments
Maghrabi, Louai, January 2017
No description available.
100. K-point group-signatures in curve analysis
Aghayan, Reza, January 2013
Geometric invariants play a vital role in the field of object recognition where the objects of interest are affected by a group of transformations. However, designing robust algorithms that are tolerant to noise and image occlusion still remains an open problem. In particular, numerically invariant signature curves in terms of joint invariants, as an approximation to differential invariant signature curves, suffer from instability, bias, noise and indeterminacy in the resulting signatures. The expression presented in previous works to approximate the affine arc length along the given mesh is only in terms of the approximating ellipse and, in consequence, the current formulae cannot be correctly demonstrated on data containing non-elliptical boundaries. We also prove that the current formulation depends on the viewpoint, which may change the curve-orientation and the signature-direction, and results in different numerical signatures for congruent ordinary meshes - in other words, the Signature Theorem is not correct in Mesh group-planes. In addition, we show that the current expressions do not support a numerical version of the Signature-Inverse Theorem - in other words, non-congruent approximating meshes may have the same numerically invariant signatures. Prior to addressing the above-mentioned issues (except the Signature-Inverse Theorem, which is not within the scope of this thesis), we first undertake to modify Calabi et al.'s numerically invariant scheme and refine the current methodology by adding new terminology which improves the clarity and extends the methodology to planar ordinary meshes. To address the issue of stability in the Euclidean formulae, Heron's formula is replaced by the accurate area, which improves the numerical stability and, in terms of mean square error (MSE), results in a closer approximation in comparison with the current formulation. In affine geometry, we will introduce a general formulation for the full conic sections to find a numerically invariant approximation of the equiaffine arc length, measured along the given plane curve between each pair of points. In addition, closer numerical expressions will be presented that also need fewer points of the given mesh to approximate the first-order derivative of the affine curvature compared with the current formulae. Next, we will introduce the first-order finite n-difference quotients in both Euclidean and affine signature calculus, which not only approximate the first derivatives of the corresponding differential curvatures, but can also be used to minimize the effects of noise and indeterminacy in the resulting outputs. The arising numerical biases in the resulting numerical signatures will be classified as Bias Type-1 and Bias Type-2, and it will be shown how they can be removed. To parameterize numerically invariant signatures independently of the viewpoint, we will introduce an orientation-free version which results in all congruent planar ordinary meshes having the same orientation-free numerically invariant signature, so that the Signature Theorem then holds in Mesh and Digital group-planes. Finally, we will present our experimental results. First, the results of applying the proposed scheme to generate numerically invariant signatures will be presented. In this experiment the sensitivity of the parameters used in the algorithm will be examined. These parameters include mesh regularity factors as well as the effect of selecting different mesh resolutions to represent the boundary of the object of interest.
To reduce noise in the resulting numerical signatures, the n-difference technique and the m-mean signature method will be introduced, and it will be illustrated that these methods are capable of noise reduction by more than 90%. The n-difference technique will also be applied to eliminate indeterminacy in the resulting outputs. Next, the potential applications of the proposed scheme in object description will be presented in two ways: applying the plots of numerically invariant group-signatures to discriminate between objects of interest by visual judgment, and quantifying them by introducing the associated group-signature energy. This numerically invariant energy will be demonstrated using a medical image example.
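The role played by the 'accurate area' above can be illustrated with the standard three-point (joint-invariant) approximation of Euclidean curvature that underlies numerically invariant signatures, kappa(P, Q, R) = 4*Area(PQR) / (a*b*c), where a, b and c are the side lengths of the triangle PQR. The sketch below computes the area directly from the cross product rather than from Heron's formula; the thesis' refined Euclidean and affine formulae are more involved than this illustration.

```python
import numpy as np

def three_point_curvature(P, Q, R):
    """Joint-invariant curvature estimate at Q: kappa = 4 * Area(PQR) / (a * b * c)."""
    P, Q, R = (np.asarray(p, dtype=float) for p in (P, Q, R))
    a = np.linalg.norm(Q - P)
    b = np.linalg.norm(R - Q)
    c = np.linalg.norm(R - P)
    v1, v2 = Q - P, R - P
    area = 0.5 * (v1[0] * v2[1] - v1[1] * v2[0])   # signed area from the cross product
    return 4.0 * area / (a * b * c)

# Points sampled on a circle of radius 2 should give a curvature close to 1/2.
theta = np.array([0.1, 0.2, 0.3])
pts = np.column_stack((2 * np.cos(theta), 2 * np.sin(theta)))
print(three_point_curvature(*pts))   # approximately 0.5
```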