  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world.
1

Pre-processing and Feature Extraction Methods for Smart Biomedical Signal Monitoring : Algorithms and Applications

Chahid, Abderrazak 11 1900
Human health is monitored through several physiological measurements such as heart rate, blood pressure, and brain activity. These measurements are taken at predefined points on the body and recorded as temporal signals or color images for diagnostic purposes. During diagnosis, physicians analyze these recordings, sometimes visually, to make treatment decisions. The recordings are usually contaminated with noise caused by factors such as physiological artifacts or the electronic noise of the electrodes and instruments used. Pre-processing these signals and images is therefore crucial to provide clinicians with reliable information for making the right decisions. This Ph.D. work proposes and discusses several biomedical signal processing algorithms and their applications. It develops novel signal/image pre-processing algorithms, based on the Semi-Classical Signal Analysis (SCSA) method, to enhance the quality of biomedical signals and images. The SCSA method decomposes the input signal or image using the squared eigenfunctions of a semi-classical Schrödinger operator. Compared to existing methods, this approach shows great potential in denoising and residual water-peak suppression for Magnetic Resonance Spectroscopy (MRS) signals. It also shows very promising noise removal, particularly from pulse-shaped signals and Magnetic Resonance (MR) images. In clinical practice, extracting informative characteristics, or features, from these pre-processed recordings is essential for advanced analysis and diagnosis. Therefore, new features are proposed and extracted based on the SCSA and fed to machine learning models for smart biomedical diagnosis, such as predicting epileptic spikes in Magnetoencephalography (MEG).
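The SCSA decomposition described above can be sketched numerically: discretize the semi-classical Schrödinger operator on the signal's grid, keep its negative eigenvalues, and rebuild the signal from the squared eigenfunctions. The function below is a minimal illustration of the idea for a positive, pulse-shaped 1-D signal, not the thesis implementation; the function name, unit-interval grid, and default value of h are our assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def scsa_denoise(y, h=0.05):
    """Minimal SCSA sketch for a positive, pulse-shaped signal.

    Discretizes H = -h^2 d^2/dx^2 - y(x) on a uniform grid, keeps the
    negative eigenvalues -kappa_n^2, and rebuilds the signal as
        y_h(x) = 4 * h * sum_n kappa_n * psi_n(x)^2.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    dx = 1.0 / (n - 1)
    # Second-difference Laplacian with Dirichlet boundaries.
    lap = (np.diag(np.full(n, -2.0))
           + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / dx**2
    H = -h**2 * lap - np.diag(y)        # discretized Schrodinger operator
    vals, vecs = eigh(H)
    neg = vals < 0                      # discrete (bound-state) spectrum
    kappa = np.sqrt(-vals[neg])
    psi = vecs[:, neg] / np.sqrt(dx)    # L2-normalize on the grid
    return 4.0 * h * (psi**2 @ kappa)
```

Smaller values of h recruit more eigenfunctions and give a finer reconstruction; noise is suppressed because it contributes little to the discrete spectrum.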
Moreover, a new Quantization-based Position Weight Matrix (QuPWM) feature extraction method is proposed for other biomedical classification tasks, such as predicting true Poly(A) regions in a DNA sequence and predicting multiple hand gestures. These features can be used to understand complex systems, such as the hand gesture/motion mechanism, and to support smart decision-making. Finally, combining such features with reinforcement learning models will help automate diagnoses and enhance decision-making, accelerating the digitization of different industrial sectors. For instance, these features can help to study and understand fish growth in an end-to-end system for aquaculture environments, where preliminary results show very encouraging insights into optimally controlling feeding while preserving the desired growth profile.
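The abstract does not detail QuPWM, but a quantization-based position weight matrix can be sketched as follows: quantize fixed-length signals into discrete amplitude levels, count how often each level occurs at each position across a class's training set, and score new signals against the smoothed log-frequency matrix. All names and parameters below are illustrative assumptions, not the thesis API.

```python
import numpy as np

def qupwm_features(signals, n_levels=4, eps=1e-6):
    """Sketch of a quantization-based position weight matrix (QuPWM).

    signals: array of shape (n_samples, length), all the same length.
    Returns the smoothed PWM and a scoring closure for new signals.
    """
    X = np.asarray(signals, dtype=float)
    lo, hi = X.min(), X.max()
    # Uniform quantization of amplitudes into integer levels.
    q = np.clip(((X - lo) / (hi - lo + eps) * n_levels).astype(int),
                0, n_levels - 1)
    n, length = q.shape
    pwm = np.zeros((n_levels, length))
    for lvl in range(n_levels):
        pwm[lvl] = (q == lvl).sum(axis=0)   # level counts per position
    pwm = (pwm + 1.0) / (n + n_levels)      # Laplace-smoothed frequencies
    log_pwm = np.log(pwm)

    def score(signal):
        """Log-likelihood of one signal under the PWM."""
        s = np.clip(((np.asarray(signal) - lo) / (hi - lo + eps)
                     * n_levels).astype(int), 0, n_levels - 1)
        return log_pwm[s, np.arange(length)].sum()

    return pwm, score
```

A PWM built per class turns each signal into a vector of class scores, which can then feed the machine learning models mentioned above.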
2

Analysis of feature interactions and generation of feature precedence network for automated process planning

Arumugam, Jaikumar January 2004
No description available.
3

Flight Data Processing Techniques to Identify Unusual Events

Mugtussids, Iossif B. 26 June 2000
Modern aircraft are capable of recording hundreds of parameters during flight. This fact not only facilitates the investigation of an accident or a serious incident, but also provides the opportunity to use the recorded data to predict future aircraft behavior. It is believed that, by analyzing the recorded data, one can identify precursors to hazardous behavior and develop procedures to mitigate the problems before they actually occur. Because of the enormous amount of data collected during each flight, it becomes necessary to identify the segments of data that contain useful information. The objective is to distinguish between typical data points that are present in the majority of flights, and unusual data points that can only be found in a few flights. The distinction between typical and unusual data points is achieved by using classification procedures. In this dissertation, the application of classification procedures to flight data is investigated. A Bayesian classifier is proposed that tries to identify the flight from which a particular data point came. If the flight from which the data point came is identified with a high level of confidence, then the data point can be concluded to be unusual within the investigated flights. The Bayesian classifier uses the overall and conditional probability density functions together with a priori probabilities to make a decision. Estimating probability density functions is a difficult task in multiple dimensions. Because many of the recorded signals (features) are redundant, highly correlated, or very similar in every flight, feature selection techniques are applied to identify the signals with the most discriminatory power. Within the limited amount of data available to this research, twenty-five features were identified as the set exhibiting the best discriminatory power. Additionally, the number of signals is reduced by applying feature generation techniques to similar signals.
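The Bayesian test described above — asking which flight a data point most plausibly came from — can be sketched with kernel density estimates standing in for the class-conditional densities. The dissertation's actual density-estimation and feature-selection machinery is richer; `flight_posteriors`, the Gaussian KDE choice, and the length-proportional priors are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def flight_posteriors(flights, point):
    """Posterior probability that `point` came from each flight.

    flights: list of arrays, each of shape (n_features, n_samples),
             one per flight (the gaussian_kde convention).
    point:   array of shape (n_features,).

    A point whose posterior concentrates on a single flight is a
    candidate "unusual" point for that flight.
    """
    n_total = sum(f.shape[1] for f in flights)
    priors = np.array([f.shape[1] / n_total for f in flights])
    # Class-conditional likelihoods p(point | flight) via kernel density.
    likes = np.array([gaussian_kde(f)(point)[0] for f in flights])
    post = priors * likes
    return post / post.sum()            # Bayes rule, normalized
```

A high maximum posterior means the point is well explained by exactly one flight's density, which is the evidence used to flag it as unusual.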
To make the approach applicable in practice, when many flights are considered, a very efficient and fast sequential data clustering algorithm is proposed. The order in which the samples are presented to the algorithm is fixed according to the probability density function value. Accuracy and reduction level are controlled using two scalar parameters: a distance threshold value and a maximum compactness factor. / Ph. D.
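A stripped-down version of the sequential clustering step might look like the following: each sample joins the nearest existing cluster if it lies within the distance threshold, and otherwise seeds a new cluster. The dissertation additionally fixes the presentation order by density value and enforces a maximum compactness factor, both of which this sketch omits.

```python
import numpy as np

def sequential_cluster(points, dist_threshold):
    """One-pass clustering controlled by a single distance threshold.

    Assigns each point to the nearest cluster centroid if it is within
    dist_threshold; otherwise opens a new cluster.  Centroids are
    updated as running means.
    """
    centroids, counts, labels = [], [], []
    for x in np.asarray(points, dtype=float):
        if centroids:
            d = np.linalg.norm(np.asarray(centroids) - x, axis=1)
            j = int(d.argmin())
            if d[j] <= dist_threshold:
                counts[j] += 1
                centroids[j] += (x - centroids[j]) / counts[j]
                labels.append(j)
                continue
        centroids.append(x.copy())      # seed a new cluster at x
        counts.append(1)
        labels.append(len(centroids) - 1)
    return labels, centroids
```

A larger threshold gives fewer, coarser clusters (more reduction, less accuracy), which is exactly the trade-off the two scalar parameters control.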
4

Expert-in-the-loop supervised learning for computer security detection systems

Beaugnon, Anaël 25 June 2018
The overall objective of this thesis is to foster the deployment of supervised learning in detection systems to strengthen detection.
To that end, we consider the whole machine learning pipeline (data annotation, feature extraction, training, and evaluation) with security experts at its core, since their involvement is crucial for real-world impact. First, we provide methodological guidance to help security experts build supervised detection models that suit their operational constraints. Moreover, we design and implement DIADEM, an interactive visualization tool that helps security experts apply the methodology set out. DIADEM deals with the machine learning machinery to let security experts focus mainly on detection. Besides, we propose a solution to effectively reduce the labeling cost of computer security annotation projects. We design and implement an end-to-end active learning system, ILAB, tailored to security experts' needs. Our user experiments on a real-world annotation project demonstrate that experts can annotate a dataset with a low workload thanks to ILAB. Finally, we consider automatic feature generation as a means to ease, and thus foster, the use of machine learning in detection systems. We define the constraints that such methods should meet to be effective in building detection models. We compare three state-of-the-art methods based on these criteria, and we point out avenues of research to better tailor automatic feature generation to computer security experts' needs.
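The active learning loop at the heart of systems like ILAB can be illustrated with plain uncertainty sampling: query the unlabeled instances whose predicted probability of being malicious is closest to the decision boundary, and hand them to the expert for annotation. ILAB's actual query strategy is more elaborate (it also searches for rare malicious families); the function below is a generic sketch, and its names are ours.

```python
import numpy as np

def uncertainty_query(predict_proba, X_unlabeled, n_queries=10):
    """Pick the unlabeled samples the current detector is least sure about.

    predict_proba: callable mapping the unlabeled pool to P(malicious)
                   for each sample (e.g. a fitted classifier's method).
    Returns the indices of the n_queries samples whose probability is
    closest to the 0.5 decision boundary.
    """
    p = np.asarray(predict_proba(X_unlabeled))
    return np.argsort(np.abs(p - 0.5))[:n_queries]
```

Each round, the expert labels the queried samples, the detector is retrained on the enlarged labeled set, and the loop repeats — concentrating the expert's limited time on the most informative instances.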
5

Extracting group relationships within changing software using text analysis

Green, Pamela Dilys January 2013
This research looks at identifying and classifying changes in evolving software by making simple textual comparisons between groups of source code files. The two areas investigated are software origin analysis and collusion detection. Textual comparison is attractive because it can be used in the same way for many different programming languages. The research includes the first major study using machine learning techniques in the domain of software origin analysis, which looks at the movement of code in an evolving system. The training set for this study, which focuses on restructured files, is created by analysing 89 software systems. Novel features, which capture abstract patterns in the comparisons between source code files, are used to build models which classify restructured files from unseen systems with a mean accuracy of over 90%. The unseen code is not only in C, the language of the training set, but also in Java and Python, which helps to demonstrate the language independence of the approach. As well as generating features for the machine learning system, textual comparisons between groups of files are used in other ways throughout the system: in filtering to find potentially restructured files, in ranking the possible destinations of the code moved from the restructured files, and as the basis for a new file comparison tool. This tool helps in the demanding task of manually labelling the training data, is valuable to the end user of the system, and is applicable to other file comparison tasks. These same techniques are used to create a new text-based visualisation for use in collusion detection, and to generate a measure which focuses on the unusual similarity between submissions. This measure helps to overcome problems in detecting collusion in data where files are of uneven size, where there is high incidental similarity or where more than one programming language is used.
The visualisation highlights interesting similarities between files, making the task of inspecting the texts easier for the user.
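The kind of simple, language-independent textual comparison this thesis relies on can be illustrated with character n-gram Jaccard similarity, which treats C, Java, and Python sources identically because it never parses the code. The function name and the choice of n are illustrative, not the thesis's exact measure.

```python
def ngram_similarity(text_a, text_b, n=5):
    """Jaccard similarity between the character n-gram sets of two files.

    Returns a value in [0, 1]: 1.0 for identical texts, 0.0 when the
    texts share no n-grams at all.  No parsing is involved, so the same
    comparison works for any programming language.
    """
    def grams(t):
        # Slide a window of width n over the text; short texts yield
        # a single (possibly truncated) gram.
        return {t[i:i + n] for i in range(max(len(t) - n + 1, 1))}
    a, b = grams(text_a), grams(text_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0
```

Scores like this, computed pairwise across a set of submissions, are the raw material for both the similarity ranking and the visualisation described above; flagging *unusually* high pairwise scores relative to the group's incidental similarity is what targets collusion rather than mere resemblance.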
