About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

A New Algorithm for Finding the Minimum Distance between Two Convex Hulls

Kaown, Dougsoo 05 1900 (has links)
The problem of computing the minimum distance between two convex hulls has applications in many areas, including robotics, computer graphics, and path planning. Moreover, determining the minimum distance between two convex hulls plays a significant role in support vector machines (SVM). In this study, a new algorithm for finding the minimum distance between two convex hulls is proposed and investigated. Convergence of the algorithm is proved, and its applicability to support vector machines is demonstrated. The performance of the new algorithm is compared with that of one of the most popular algorithms, the sequential minimal optimization (SMO) method. The new algorithm is simple to understand, easy to implement, and can be more efficient than the SMO method for many SVM problems.
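The thesis's own algorithm is not reproduced here, but the problem it targets has a classic baseline: Gilbert's algorithm, i.e., Frank-Wolfe applied to the Minkowski difference of the two hulls. A minimal NumPy sketch, with the point sets, iteration cap, and tolerance as illustrative assumptions:

```python
import numpy as np

def min_hull_distance(X, Y, iters=1000, tol=1e-10):
    """Distance between conv(X) and conv(Y) for point sets X (n,d), Y (m,d),
    via Gilbert's algorithm: minimize ||z|| over the Minkowski difference."""
    z = X[0] - Y[0]  # any point of conv(X) - conv(Y)
    for _ in range(iters):
        # Support point of the difference body in direction -z.
        v = X[np.argmin(X @ z)] - Y[np.argmax(Y @ z)]
        d = z - v
        # Duality gap; it vanishes exactly at the minimum-norm point.
        if z @ d < tol:
            break
        # Exact line search on the segment [z, v].
        t = np.clip((z @ d) / (d @ d), 0.0, 1.0)
        z = z - t * d
    return np.linalg.norm(z)

# Two unit squares whose facing edges are 2 apart along x:
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
Y = X + [3, 0]
print(min_hull_distance(X, Y))  # 2.0
```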
82

Towards an Accurate ECG Biometric Authentication System with Low Acquisition Time

Arteaga Falconi, Juan Sebastian 31 January 2020 (has links)
Biometrics is the study of physical or behavioral traits that establish the identity of a person. Forensics, physical security, and cyber security are some of the main fields that use biometrics. Unlike traditional authentication factors such as passwords, biometric traits cannot be lost, forgotten, or shared, because they establish identity based on a physiological or behavioral characteristic rather than on what the person possesses or remembers. Biometrics has two modes of operation: identification and authentication. Identification finds the identity of a person among a group of persons; authentication determines whether the claimed identity of a person is truthful. Biometric person authentication is an alternative to passwords or graphical patterns and prevents shoulder-surfing attacks, i.e., people watching from a short distance. Nevertheless, the biometric traits used by conventional authentication techniques, such as fingerprints, face, and to some extent iris, are easy to capture and duplicate. This poses a security risk for modern and future applications such as digital twins, where an attacker can copy and duplicate a biometric trait in order to spoof a biometric system. Researchers have proposed ECG as a biometric authentication modality to address this problem: ECG authentication conceals the biometric traits and reduces the risk of an attack by duplication. However, current ECG authentication solutions require 10 or more seconds of ECG signal to produce accurate results, since accuracy is directly proportional to the length of the ECG signal used for authentication. This makes ECG authentication inconvenient in an end-user product, because a user cannot wait 10 or more seconds to gain secure access to their device. This thesis addresses the problem of spoofing by proposing an accurate and secure ECG biometric authentication system that uses a relatively short ECG signal for authentication. The system consists of ECG acquisition from lead I (two electrodes), signal-processing stages for filtering and R-peak detection, a feature extractor, and an authentication process. To evaluate this system, we developed a method to calculate the Equal Error Rate (EER) with non-normally distributed data. For the authentication process, we first propose an approach based on Support Vector Machines (SVM) and achieve 4.5% EER with 4 seconds of ECG signal. Building on this, we enhance the approach with a hybrid of Convolutional Neural Networks (CNN) combined with SVM: the CNN automatically detects and extracts features, and its output feeds a one-class SVM classifier for authentication, a combination that proved to outperform other approaches for one-class ECG classification. This hybrid approach reduces the EER to 2.84% with 4 seconds of ECG signal. Furthermore, we investigated the combination of two different biometric techniques, fusing fingerprint with ECG at the decision level, and improved the accuracy to 0.46% EER while maintaining the 4-second ECG signal length. Decision-level fusion requires only information that is available from any biometric technique, whereas fusion at other levels, such as the feature level, requires information about features that may be incompatible or hidden. Fingerprint minutiae carry information that differs from ECG peaks and valleys, so fusion at the feature level is not possible unless the fusion algorithm provides a compatible conversion scheme. Moreover, proprietary biometric hardware does not expose its features or algorithms, so the features are inaccessible for feature-level fusion; the final decision, by contrast, is always available for decision-level fusion.
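For context, the EER the abstract keeps citing is the operating point where false-acceptance and false-rejection rates coincide. The thesis develops its own estimation method for non-normally distributed scores; the sketch below is only the common non-parametric threshold sweep, with the score arrays as assumed inputs:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER from raw match scores (higher score = more likely genuine).
    Sweeps every observed score as a threshold; no normality assumption."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0
```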
83

Classification of ADHD and non-ADHD Using AR Models and Machine Learning Algorithms

Lopez Marcano, Juan L. 12 December 2016 (has links)
As of 2016, diagnosis of ADHD in the US is controversial. Diagnosis of ADHD is based on subjective observations, and treatment is usually done through stimulants, which can have negative side effects in the long term. Evidence shows that the probability of diagnosing a child with ADHD depends not only on the observations of parents, teachers, and behavioral scientists, but also on state-level special education policies. In light of these facts, unbiased, quantitative methods are needed for the diagnosis of ADHD. This problem has been tackled since the 1990s, and has resulted in methods that have not made it past the research stage and methods whose claimed performance could not be reproduced. This work proposes a combination of machine learning algorithms and signal processing techniques applied to EEG data in order to classify subjects with and without ADHD with high accuracy and confidence. More specifically, the K-nearest Neighbor (KNN) algorithm and Gaussian-Mixture-Model-based Universal Background Models (GMM-UBM), along with autoregressive (AR) model features, are investigated and evaluated for the classification problem at hand. In this effort, classical KNN and GMM-UBM were also modified in order to account for uncertainty in diagnoses. The classification performance reported here is as high as, if not higher than, that of the best-performing algorithms found in the literature. Another major finding is that activities requiring attention help discriminate ADHD from non-ADHD subjects: mixing in EEG data from periods of rest or with eyes closed degrades classification performance, to the point of approximating guessing when only resting EEG data is used. / Master of Science
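As an illustration of the feature pipeline the abstract names (AR-model coefficients feeding a KNN classifier), here is a minimal sketch; the model order, neighbor count, and epoch layout are illustrative assumptions, not the thesis's settings:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def ar_features(x, p=6):
    """Least-squares fit of an order-p autoregressive model
    x[t] ~ sum_k a[k] * x[t-k]; the coefficients a are the features."""
    lags = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(lags, x[p:], rcond=None)
    return a

# epochs: list of 1-D EEG segments; labels: 0 = non-ADHD, 1 = ADHD
# feats = np.array([ar_features(e) for e in epochs])
# clf = KNeighborsClassifier(n_neighbors=5).fit(feats, labels)
```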
84

Motivation and Quantification of Physical Activity for Hospitalised Cancer Patients

Thorsteinsdottir, Arnrun January 2015 (has links)
Previous studies have shown the positive effect of increased physical activity for cancer patients during chemotherapy and stem cell transplantation. Moderate exercise has been shown to cause significantly less loss of muscle mass, fewer symptoms of cancer-related fatigue, less need for platelet transfusions during treatment, and shorter hospitalisation. Inactivity at hospital clinics is nevertheless still a major concern, and lack of motivation appears to play a big role. It has been shown that an overview of activity level, personal goal setting, and education on the importance of physical activity can work as motivation towards increased physical activity. This project aimed to build a prototype that can quantify the physical activity of hospitalised cancer patients and present it in a motivational and informative way. An accelerometer was used to collect activity data; the data was processed and used to train a support vector machine for classification of activities. The activities recognised by the prototype are the postures lying down, sitting, and standing, as well as when the user is active. Over 90% accuracy was obtained in activity recognition for specific training sets. The prototype was tested on patients at the haematology clinic at the Karolinska hospital in Huddinge. Test subjects rated the classification accuracy and the motivational value of the prototype on a scale of 1-5: the accuracy was rated 4.2 out of 5 and the motivational value 3.25 out of 5. A pilot study to further test the feasibility of the product will be performed in the summer of 2015.
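A minimal sketch of the posture-classification idea described above, assuming windowed 3-axis accelerometer data; the features (per-axis mean for gravity direction, per-axis deviation for movement intensity) and the RBF kernel are illustrative choices, not the thesis's exact design:

```python
import numpy as np
from sklearn.svm import SVC

def window_features(acc):
    """acc: (n_samples, 3) window of accelerometer readings. Per-axis mean
    captures the gravity vector (posture); per-axis std captures movement."""
    return np.concatenate([acc.mean(axis=0), acc.std(axis=0)])

# windows: list of (n, 3) arrays; labels: lying / sitting / standing / active
# X = np.array([window_features(w) for w in windows])
# clf = SVC(kernel="rbf").fit(X, labels)
```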
85

Automatic Visual Monitoring of Road Traffic (Monitorización visual automática de tráfico rodado)

Kachach, Redouane 23 September 2016 (has links)
Traffic management is a very complex task. The information generated by traditional monitoring systems (e.g., inductive loops) is very limited and insufficient for more ambitious and complex traffic studies. Today this is a problem, in a world where techniques such as Big Data have reached every domain. This thesis addresses the problem of automatic vehicle monitoring using more modern sensors, namely cameras. These sensors have been installed on roads for several decades, but their mission has been limited to passive monitoring. The goal of the thesis is to exploit these sensors with algorithms capable of automatically extracting useful information from the images. To this end, we tackle two classic problems in this field: tracking and automatic classification of vehicles into several categories. Within the framework of intelligent transportation systems (ITS), the work presented in this thesis addresses the typical problems related to vehicle tracking, such as shadow removal and occlusion handling. For this, we developed an algorithm that combines spatial- and temporal-proximity criteria with a KLT-based tracking algorithm, trying to exploit the advantages of each. For classification, we developed a hybrid algorithm that combines 3D templates representing the different vehicle categories with an SVM classifier trained on visual features of trucks and buses to refine the classification. All the algorithms use a single camera as the main sensor. The systems developed have been tested and validated experimentally on a large set of videos, both our own and independent ones. We have collected and labeled a large collection of traffic videos representative of a wide range of situations, which we make available to the scientific community as a benchmark.
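The KLT component of the tracking pipeline is a standard building block; a minimal OpenCV sketch of pyramidal Lucas-Kanade point tracking follows. The video path is hypothetical, and the shadow removal, occlusion handling, and proximity-based grouping described in the thesis are not reproduced:

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input video
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Corners worth tracking (e.g., on vehicle contours).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                              qualityLevel=0.01, minDistance=7)
while pts is not None and len(pts) > 0:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade: propagate each point into the next frame.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = new_pts[status.ravel() == 1].reshape(-1, 1, 2)
    prev_gray = gray
```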
86

Application of Machine Learning and Statistical Learning Methods for Prediction in a Large-Scale Vegetation Map

Brookey, Carla M. 01 December 2017 (has links)
Original analyses of a large vegetation cover dataset from Roosevelt National Forest in northern Colorado were carried out by Blackard (1998) and Blackard and Dean (1998; 2000). They compared the classification accuracies of linear and quadratic discriminant analysis (LDA and QDA) with artificial neural networks (ANN) and obtained an overall classification accuracy of 70.58% for a tuned ANN, compared to 58.38% for LDA and 52.76% for QDA. Because there has been tremendous development of machine learning classification methods over the last 35 years in both computer science and statistics, as well as substantial improvements in the speed of computer hardware, I applied five modern machine learning algorithms to the data to determine whether significant improvements in classification accuracy were possible using one or more of these methods. I found that only a tuned gradient boosting machine had a higher accuracy (71.62%) than the ANN of Blackard and Dean (1998), and the difference in accuracies was only about 1%. Of the other four methods, Random Forests (RF), Support Vector Machines (SVM), Classification Trees (CT), and adaboosted trees (ADA), a tuned SVM and RF had accuracies of 67.17% and 67.57%, respectively. The partition of the data by Blackard and Dean (1998) was unusual in that the training and validation datasets had equal representation of the seven vegetation classes, even though 85% of the data fell into classes 1 and 2. For the second part of my analyses, I randomly selected 60% of the data for training and 20% for each of the validation and test sets. On this partition of the data, a single classification tree achieved an accuracy of 92.63% on the test data, and the accuracy of RF was 83.98%. Unsurprisingly, most of the gains in accuracy were in classes 1 and 2, the largest classes, which also had the highest misclassification rates under the original partition. By decreasing the size of the training data while maintaining the same relative occurrences of the vegetation classes as in the full dataset, I found that even for a training dataset of the same size as that of Blackard and Dean (1998), a single classification tree was more accurate (73.80%) than their ANN (70.58%). The final part of my thesis explored the possibility that combining the predictions of several machine learning classifiers could result in higher predictive accuracy. In the analyses I carried out, the answer seems to be that increased accuracy does not occur with a simple voting of five machine learning classifiers.
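Blackard's data is widely mirrored as the UCI "Covertype" dataset, which scikit-learn can fetch directly; below is a minimal sketch of the kind of comparison the thesis performs, assuming the sklearn copy matches the original data, with illustrative hyperparameters and a simplified 60/40 split rather than the thesis's 60/20/20 partition:

```python
from sklearn.datasets import fetch_covtype
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = fetch_covtype()  # Covertype: 54 features, 7 vegetation classes
X_tr, X_te, y_tr, y_te = train_test_split(
    data.data, data.target, train_size=0.6, random_state=0,
    stratify=data.target)  # keep class proportions, unlike Blackard's split
rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(X_tr, y_tr)
print("RF test accuracy:", rf.score(X_te, y_te))
```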
87

A New Centroid-Based Algorithm for High-Speed Binary Classification

Johnson, Kurt Eugene 03 December 2004 (has links)
No description available.
88

Analyzing TCGA Genomic and Expression Data Using SVM with Embedded Parameter Tuning

Zhao, Haitao January 2014 (has links)
No description available.
89

Identifying Offensive Videos on YouTube

Kandakatla, Rajeshwari January 2016 (has links)
No description available.
90

Machine-Learning Analysis of High-Throughput Data: Classification of Caenorhabditis elegans Flow Cytometer Fluorescence Profiles as a Case Study.

Alnaim, Khlifa 06 1900 (has links)
As technology improves, scientists are able to generate high-throughput data faster and more cheaply. Consequently, the field of biological sciences is becoming progressively more reliant on data science tools, such as machine learning methods, for the analysis and sorting of big data. The Complex Object Parametric Analyzer and Sorter (COPAS) is a large-particle flow cytometer that can perform high-throughput fluorescence screens on small animals such as Caenorhabditis elegans. The outputs of the COPAS are extinction coefficient (EXT), time of flight (TOF, an arbitrary length unit), and fluorescence. However, the COPAS outputs include unwanted objects such as bubbles or bacteria, and some animals pass through the flow cell in a non-straight manner, producing abnormal profiles that lead to inaccurate developmental staging. In this thesis, I created an R package, named COPASProfiler, that generates experiment-specific supervised machine learning (ML) classification models which can detect and remove abnormal profiles, enabling standardized fluorescence quantification and analysis. I used COPASProfiler to develop a pipeline that automates fluorescence analysis of high-throughput COPAS data sets. Using R Shiny, I created a web program with a graphical user interface that allows users to view, annotate, quantify fluorescence in, and classify COPAS-generated datasets. COPASProfiler is available on GitHub and can be installed with a single R command. Lastly, COPASProfiler comes with multiple tutorials and examples, and was designed to accommodate users with minimal programming experience. COPASProfiler should enable robust high-throughput fluorescence studies of regulatory elements (e.g., enhancers, promoters, and 3’UTRs) and of long-term epigenetic silencing in C. elegans.
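COPASProfiler itself is an R package whose API is not shown here; as a language-neutral sketch of the underlying idea (a supervised filter that flags abnormal profiles from per-object summary features), here is a hypothetical Python version with assumed features and labels:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-object features summarizing a COPAS profile:
# extinction coefficient (EXT), time of flight (TOF), integrated fluorescence.
def train_profile_filter(X, y):
    """X: (n_objects, n_features); y: 1 = normal worm profile,
    0 = bubble/debris/bent animal. Returns a model used to discard
    abnormal profiles before fluorescence quantification."""
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# keep = train_profile_filter(X_train, y_train).predict(X_new).astype(bool)
```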
