81

A New Algorithm for Finding the Minimum Distance between Two Convex Hulls

Kaown, Dougsoo 05 1900 (has links)
The problem of computing the minimum distance between two convex hulls has applications in many areas, including robotics, computer graphics and path planning. Moreover, determining the minimum distance between two convex hulls plays a significant role in support vector machines (SVM). In this study, a new algorithm for finding the minimum distance between two convex hulls is proposed and investigated. Convergence of the algorithm is proved, and its applicability to support vector machines is demonstrated. The performance of the new algorithm is compared with that of one of the most popular algorithms, the sequential minimal optimization (SMO) method. The new algorithm is simple to understand, easy to implement, and can be more efficient than the SMO method for many SVM problems.
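The abstract does not spell out the proposed algorithm, but the underlying problem can be stated as a small quadratic program over convex-combination weights. The sketch below, which hands that formulation to a general-purpose SciPy solver, is only a baseline illustration, not the thesis' algorithm (nor SMO); the point sets `P` and `Q` are made-up examples.

```python
import numpy as np
from scipy.optimize import minimize

def min_hull_distance(P, Q):
    """Minimum distance between conv(P) and conv(Q).

    P, Q are (n, d) and (m, d) arrays of points.  The distance is the
    optimum of min ||P^T a - Q^T b|| over convex-combination weights a, b.
    This is a plain QP solved with a general-purpose solver; it is only a
    baseline illustration, not the algorithm proposed in the thesis.
    """
    n, m = len(P), len(Q)

    def objective(w):
        a, b = w[:n], w[n:]
        diff = P.T @ a - Q.T @ b
        return diff @ diff                      # squared distance

    # Weights must be non-negative and sum to one on each hull.
    cons = [{"type": "eq", "fun": lambda w: w[:n].sum() - 1.0},
            {"type": "eq", "fun": lambda w: w[n:].sum() - 1.0}]
    bounds = [(0.0, 1.0)] * (n + m)
    w0 = np.concatenate([np.full(n, 1.0 / n), np.full(m, 1.0 / m)])

    res = minimize(objective, w0, bounds=bounds, constraints=cons)
    return np.sqrt(res.fun)

# Two separated triangles; the closest points are (1, 0) and (2, 0),
# so the minimum distance should come out close to 1.0.
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
Q = np.array([[2.0, 0.0], [3.0, 0.0], [2.0, 1.0]])
print(round(min_hull_distance(P, Q), 3))
```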
82

Towards an Accurate ECG Biometric Authentication System with Low Acquisition Time

Arteaga Falconi, Juan Sebastian 31 January 2020 (has links)
Biometrics is the study of physical or behavioural traits that establish the identity of a person. Forensics, physical security and cyber security are some of the main fields that use biometrics. Unlike traditional authentication systems, such as password-based ones, biometric traits cannot be lost, forgotten or shared. This is possible because biometrics establishes the identity of a person based on a physiological or behavioural characteristic rather than on what the person possesses or remembers. Biometrics has two modes of operation: identification and authentication. Identification finds the identity of a person among a group of persons. Authentication determines whether the claimed identity of a person is truthful. Biometric person authentication is an alternative to passwords or graphical patterns. It prevents shoulder-surfing attacks, i.e., people watching from a short distance. Nevertheless, the biometric traits used by conventional authentication techniques, such as fingerprints, face and, to some extent, iris, are easy to capture and duplicate. This poses a security risk for modern and future applications such as digital twins, where an attacker can copy and duplicate a biometric trait in order to spoof a biometric system. Researchers have proposed ECG as a biometric authentication modality to solve this problem. ECG authentication conceals the biometric traits and reduces the risk of an attack by duplication of the biometric trait. However, current ECG authentication solutions require 10 or more seconds of ECG signal to produce accurate results, and the accuracy is directly proportional to the length of the ECG signal used for authentication. This makes ECG authentication inconvenient to implement in an end-user product, because a user cannot wait 10 or more seconds to gain secure access to their device. This thesis addresses the problem of spoofing by proposing an accurate and secure ECG biometric authentication system with a relatively short ECG signal length for authentication. The system consists of ECG acquisition from lead I (two electrodes), signal-processing stages for filtering and R-peak detection, a feature extractor and an authentication process. To evaluate this system, we developed a method to calculate the Equal Error Rate (EER) with non-normally distributed data. In the authentication process, we propose an approach based on Support Vector Machines (SVM) and achieve 4.5% EER with 4 seconds of ECG signal length for authentication. This approach opens the door to a deeper understanding of the signal, and we therefore enhanced it with a hybrid approach that combines Convolutional Neural Networks (CNN) with SVM. The purpose of this hybrid approach is to improve accuracy by automatically detecting and extracting features with deep learning, in this case a CNN, and then feeding the output into a one-class SVM classifier for authentication, which proved to improve accuracy for one-class ECG classification. This hybrid approach reduces the EER to 2.84% with 4 seconds of ECG signal length for authentication. Furthermore, we investigated the combination of two different biometric techniques and improved the accuracy to 0.46% EER while maintaining a short ECG signal length for authentication of 4 seconds. We fuse fingerprint with ECG at the decision level. Decision-level fusion requires only information that is available from any biometric technique, whereas fusion at other levels, such as feature-level fusion, requires information about features that may be incompatible or hidden.
Fingerprint minutiae are composed of information that differs from ECG peaks and valleys, so fusion at the feature level is not possible unless the fusion algorithm provides a compatible conversion scheme. Proprietary biometric hardware does not provide information about the features or the algorithms; the features are therefore hidden and not accessible for feature-level fusion, whereas the result is always available for decision-level fusion.
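As a point of reference for the reported error rates, the sketch below shows the standard ROC-based way to estimate an Equal Error Rate from genuine and impostor match scores. It is a generic illustration with synthetic scores, not the thesis' method for non-normally distributed data.

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate the EER from match scores of genuine and impostor attempts.

    Standard ROC-based estimate: the point where the false-accept and
    false-reject rates cross.  Not the non-normal-distribution method
    developed in the thesis.
    """
    y_true = np.concatenate([np.ones_like(genuine_scores),
                             np.zeros_like(impostor_scores)])
    y_score = np.concatenate([genuine_scores, impostor_scores])
    fpr, tpr, _ = roc_curve(y_true, y_score)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))       # crossing point
    return (fpr[idx] + fnr[idx]) / 2.0

# Synthetic scores: genuine attempts score higher than impostor attempts.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 500)
impostor = rng.normal(0.4, 0.15, 500)
print(f"EER ~ {equal_error_rate(genuine, impostor):.2%}")
```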
83

Motivation and Quantification of Physical Activity for Hospitalised Cancer Patients

Thorsteinsdottir, Arnrun January 2015 (has links)
Previous studies have shown the positive effect of increased physical activity for cancer patients during chemotherapy and stem-cell transplantation. Moderate exercise has been shown to cause significantly less loss of muscle mass, fewer symptoms of cancer-related fatigue, less need for platelet transfusions during treatment and shorter hospitalisation. Inactivity at hospital clinics is nevertheless still a major concern, and lack of motivation seems to play a big role. It has been shown that an overview of activity level, personal goal setting and education on the importance of physical activity can act as motivation towards increased physical activity. This project aimed to build a prototype that can quantify the physical activity of hospitalised cancer patients and present it in a motivational and informative way. An accelerometer was used to collect activity data; the data was processed and used to train a support vector machine for classification of activities. The activities recognised by the prototype are the postures lying down, sitting and standing, as well as when the user is active. Over 90% accuracy was obtained in activity recognition for specific training sets. The prototype was tested on patients at the haematology clinic at the Karolinska hospital in Huddinge. Test subjects rated the classification accuracy and the motivational value of the prototype on a scale of 1-5. The accuracy was rated 4.2 out of 5 and the motivational value 3.25 out of 5. A pilot study to further test the feasibility of the product will be performed in the summer of 2015.
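As an illustration of the accelerometer-to-SVM pipeline described above, here is a minimal sketch that windows a three-axis signal, computes simple per-axis statistics and trains an SVM classifier. The feature set, window length and synthetic data are assumptions for demonstration only; they are not the prototype's actual design.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def window_features(acc, window=50):
    """Split a (n_samples, 3) accelerometer signal into fixed-length windows
    and compute simple per-axis statistics (mean and standard deviation)."""
    n = len(acc) // window
    wins = acc[:n * window].reshape(n, window, 3)
    return np.hstack([wins.mean(axis=1), wins.std(axis=1)])

# Synthetic stand-in data: the real prototype used recordings labelled as
# lying, sitting, standing or active.  Random data gives near-chance accuracy;
# the point is only the shape of the pipeline.
rng = np.random.default_rng(1)
signals = rng.normal(size=(4000, 3))
labels_per_window = rng.integers(0, 4, size=4000 // 50)

X = window_features(signals)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels_per_window,
                                          test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```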
84

Monitorización visual automática de tráfico rodado / Automatic Visual Monitoring of Road Traffic

Kachach, Redouane 23 September 2016 (has links)
Traffic management is a very complex task. The information generated by traditional monitoring systems (for example, inductive loops) is very limited and insufficient for more ambitious and complex studies of traffic. Today this is a problem in a world where techniques such as Big Data have entered every domain. This thesis focuses on the problem of automatic vehicle monitoring using more modern sensors such as cameras. These sensors have been installed on roads for several decades, but their role has been limited to passive monitoring. The goal of the thesis is to exploit these sensors with algorithms capable of automatically extracting useful information from the images. To that end, we tackle two classic problems in this field: vehicle tracking and the automatic classification of vehicles into several categories. Within the framework of intelligent transportation systems (ITS), the work presented in this thesis addresses the typical problems associated with vehicle tracking, such as shadow removal and occlusion handling. For this, an algorithm has been developed that combines spatial and temporal proximity criteria with a KLT-based tracking algorithm, seeking to exploit the advantages of each. In the context of classification, a hybrid algorithm has been developed that combines 3D templates representing the different vehicle categories with an SVM classifier trained on visual features of trucks and buses to refine the classification. All the algorithms use a single camera as the main sensor. The developed systems have been tested and experimentally validated on a large set of videos, both our own and independent ones. We have compiled and labelled a large collection of traffic videos representative of a wide range of situations, which we make available to the scientific community as a benchmark.
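The tracking component builds on KLT; as a rough illustration of that building block, the sketch below tracks corner features between two consecutive frames with OpenCV's pyramidal Lucas-Kanade implementation. The video path is a placeholder, and the spatial/temporal proximity criteria, shadow removal and occlusion handling of the thesis are not reproduced.

```python
import cv2
import numpy as np

def klt_track(prev_gray, next_gray, max_corners=200):
    """Track corner features between two consecutive greyscale frames with
    the pyramidal Lucas-Kanade (KLT) tracker.  This is only the raw tracking
    step, not the full algorithm developed in the thesis."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    good = status.ravel() == 1
    return pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)

# Usage with a traffic video (the path is a placeholder):
cap = cv2.VideoCapture("traffic.mp4")
ok1, f1 = cap.read()
ok2, f2 = cap.read()
if ok1 and ok2:
    g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)
    before, after = klt_track(g1, g2)
    print(f"tracked {len(before)} features between frames")
```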
85

Application of Machine Learning and Statistical Learning Methods for Prediction in a Large-Scale Vegetation Map

Brookey, Carla M. 01 December 2017 (has links)
Original analyses of a large vegetation cover dataset from Roosevelt National Forest in northern Colorado were carried out by Blackard (1998) and Blackard and Dean (1998; 2000). They compared the classification accuracies of linear and quadratic discriminant analysis (LDA and QDA) with artificial neural networks (ANN) and obtained an overall classification accuracy of 70.58% for a tuned ANN compared to 58.38% for LDA and 52.76% for QDA. Because there has been tremendous development of machine learning classification methods over the last 35 years in both computer science and statistics, as well as substantial improvements in the speed of computer hardware, I applied five modern machine learning algorithms to the data to determine whether significant improvements in classification accuracy were possible using one or more of these methods. I found that only a tuned gradient boosting machine had a higher accuracy (71.62%) than the ANN of Blackard and Dean (1998), and the difference in accuracies was only about 1%. Of the other four methods, Random Forests (RF), Support Vector Machines (SVM), Classification Trees (CT), and AdaBoosted trees (ADA), a tuned SVM and RF had accuracies of 67.17% and 67.57%, respectively. The partition of the data by Blackard and Dean (1998) was unusual in that the training and validation datasets had equal representation of the seven vegetation classes, even though 85% of the data fell into classes 1 and 2. For the second part of my analyses I randomly selected 60% of the data for the training data and 20% for each of the validation and test data. On this partition of the data a single classification tree achieved an accuracy of 92.63% on the test data, and the accuracy of RF was 83.98%. Unsurprisingly, most of the gains in accuracy were in classes 1 and 2, the largest classes, which also had the highest misclassification rates under the original partition of the data. By decreasing the size of the training data but maintaining the same relative occurrences of the vegetation classes as in the full dataset, I found that even for a training dataset of the same size as that of Blackard and Dean (1998) a single classification tree was more accurate (73.80%) than the ANN of Blackard and Dean (1998) (70.58%). The final part of my thesis was to explore the possibility that combining the predictions of several of the machine learning classifiers could result in higher predictive accuracies. In the analyses I carried out, the answer seems to be that increased accuracies do not occur with a simple vote of five machine learning classifiers.
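Assuming the data in question is the publicly available UCI Covertype dataset derived from this study, a minimal scikit-learn sketch of the kind of experiment described (a stratified 60% training split and an untuned random forest) looks like the following; it is illustrative only and does not reproduce the thesis' exact partitions or tuning.

```python
from sklearn.datasets import fetch_covtype
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Covertype data (the UCI version of the Roosevelt National Forest dataset).
data = fetch_covtype()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, train_size=0.6, random_state=0,
    stratify=data.target)          # keep the original class proportions

rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)
print(f"random forest test accuracy: {rf.score(X_test, y_test):.4f}")
```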
86

Product categorisation using machine learning / Produktkategorisering med hjälp av maskininlärning

Vasic, Stefan, Lindgren, Nicklas January 2017 (has links)
Machine learning is a method in data science for analysing large data sets and extracting hidden patterns and common characteristics in the data. Corporations often have access to databases containing great amounts of data that could contain valuable information. Navetti AB wants to investigate the possibility of automating their product categorisation by evaluating different types of machine learning algorithms. This could increase both time and cost efficiency. This work resulted in three different prototypes, each using a different machine learning algorithm with the ability to categorise products automatically. The prototypes were tested and evaluated based on their ability to categorise products and their performance in terms of speed. Different techniques used for preprocessing the data are also evaluated and tested. An analysis of the tests shows that, when a suitable algorithm is provided with enough data, it is possible to automate the manual categorisation. / Maskininlärning är en metod inom datavetenskap vars uppgift är att analysera stora mängder data och hitta dolda mönster och gemensamma karaktärsdrag. Företag har idag ofta tillgång till stora mängder data som i sin tur kan innehålla värdefull information. Navetti AB vill undersöka möjligheten att automatisera sin produktkategorisering genom att utvärdera olika typer av maskininlärningsalgoritmer. Detta skulle dramatiskt öka effektiviteten både tidsmässigt och ekonomiskt. Resultatet blev tre prototyper som implementerar tre olika maskininlärningsalgoritmer som automatiserat kategoriserar produkter. Prototyperna testades och utvärderades utifrån dess förmåga att kategorisera och dess prestanda i form av hastighet. Olika tekniker som används för att förbereda data analyseras och utvärderas. En analys av testerna visar att med tillräckligt mycket data och en passande algoritm så är det möjligt att automatisera den manuella kategoriseringen.
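A minimal sketch of the general approach (text descriptions vectorised and fed to a linear SVM) is shown below. The product descriptions, categories and the choice of TF-IDF plus LinearSVC are illustrative assumptions, not Navetti AB's data or the prototypes actually built.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Tiny made-up product descriptions and categories, purely illustrative.
descriptions = ["hex bolt m8 stainless steel", "ball bearing 6204 sealed",
                "m8 washer zinc plated", "roller bearing taper 30205"]
categories = ["fasteners", "bearings", "fasteners", "bearings"]

# TF-IDF features (unigrams and bigrams) feeding a linear SVM classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(descriptions, categories)

print(model.predict(["stainless hex nut m8", "deep groove ball bearing"]))
```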
87

Burns Depth Assessment Using Deep Learning Features

Abubakar, Aliyu, Ugail, Hassan, Smith, K.M., Bukar, Ali M., Elmahmudi, Ali 20 March 2022 (has links)
Burn depth evaluation is a lifesaving and very challenging task that requires objective techniques to accomplish. While visual assessment is the method most commonly used by surgeons, its reliability ranges between 60 and 80%, and it is subjective and lacks any standard guideline. Currently, the only standard adjunct to clinical evaluation of burn depth is Laser Doppler Imaging (LDI), which measures microcirculation within the dermal tissue, providing the burn's potential healing time, which corresponds to the depth of the injury, and achieving up to 100% accuracy. However, the use of LDI is limited by many factors, including poor affordability and high diagnostic costs, accuracy that is affected by movement (which makes it difficult to assess paediatric patients), the high level of human expertise required to operate the device, and the fact that 100% accuracy is only possible after 72 h. These shortfalls necessitate an objective and affordable technique. Method: In this study, we leverage deep transfer learning using two pretrained models, ResNet50 and VGG16, for the extraction of image patterns (ResFeat50 and VggFeat16) from a burn dataset of 2080 RGB images composed of healthy skin, first-degree, second-degree and third-degree burns, evenly distributed. We then use One-versus-One Support Vector Machines (SVM) for multi-class prediction, trained using 10-fold cross-validation to achieve an optimum trade-off between bias and variance. Results: The proposed approach yields a maximum prediction accuracy of 95.43% using ResFeat50 and 85.67% using VggFeat16. The average recall, precision and F1-score are 95.50%, 95.50%, 95.50% and 85.75%, 86.25%, 85.75% for ResFeat50 and VggFeat16, respectively. Conclusion: The proposed pipeline achieved a state-of-the-art prediction accuracy and, interestingly, indicates that a decision can be made in less than a minute as to whether the injury requires surgical intervention such as skin grafting.
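A rough sketch of the feature-extraction stage is shown below: a pretrained ResNet50 with its classification head removed produces 2048-dimensional descriptors that can then be fed to a one-versus-one SVM. The preprocessing choices, layer selection and placeholder variables are assumptions; the exact ResFeat50/VggFeat16 pipeline of the paper may differ.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Pretrained ResNet50 with its final classification layer removed, used as a
# fixed feature extractor (the "ResFeat50" idea; the paper's exact setup may
# differ).
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
extractor = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def features(pil_images):
    """Return (n, 2048) descriptors for a list of PIL images."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    with torch.no_grad():
        return extractor(batch).flatten(1).numpy()

# X_imgs and y would be the labelled burn images; placeholders only.
# X = features(X_imgs)
# clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)
```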
88

Blink detection in eye tracking

Howat, Sean January 2023 (has links)
This report discusses the accuracy of blink detection in eye tracking using machine learning algorithms. Blink detection is used in a wide variety of medical and psychological applications, such as a controller for motor-impaired individuals. Image classification has recently been used in eye tracking and blink detection applications. Here, blink detection is applied to data captured from the Pupil Invisible head-mounted eye tracker. The aim is that, given an image, the classifier can accurately determine the state of the eye: blink or open. The tests are conducted on two SVM (support vector machine) models using different training data: one trained on data from controlled environments, the other also trained on uncontrolled environments. For this project, data was captured in infrared-disturbed environments to see how this affects the models' performance. The models are evaluated according to their accuracy using multiple different metrics. This report discusses the results of both classifiers in both tests, in addition to describing the training methodology, with the aim of finding out whether blink detection is viable in infrared-disturbed environments.
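As an illustration of image-based blink classification with an SVM, the sketch below computes HOG descriptors for eye patches and cross-validates an RBF SVM. The patch size, feature choice and synthetic data are assumptions for demonstration; the report's models and training data differ.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def eye_features(eye_patches):
    """HOG descriptors for a batch of greyscale eye patches (h, w arrays)."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in eye_patches])

# Synthetic stand-ins; the report's data comes from the Pupil Invisible
# eye cameras, labelled as open (0) or blink (1).  Random data yields
# near-chance accuracy; the point is the pipeline shape.
rng = np.random.default_rng(0)
patches = rng.random((200, 64, 64))
labels = rng.integers(0, 2, 200)

X = eye_features(patches)
clf = SVC(kernel="rbf")
print("5-fold CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```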
89

A New Centroid Based Algorithm for High Speed Binary Classification

Johnson, Kurt Eugene 03 December 2004 (has links)
No description available.
90

Analyzing TCGA Genomic and Expression Data Using SVM with Embedded Parameter Tuning

Zhao, Haitao January 2014 (has links)
No description available.
