941

Contact-free Cognitive Load Classification based on Psycho-Physiological Parameters

Gestlöf, Rikard, Sörman, Johannes January 2019 (has links)
Cognitive load (CL) describes the relationship between the cognitive demands of a task and the environment in which the task takes place, both of which draw on the user's cognitive resources. High cognitive load increases the chance of a mistake while a user performs a task. CL has a great impact on driving performance, although its effect is task dependent: it has been shown that CL selectively impairs non-automatized aspects of driving performance while automatized driving tasks are unaffected. The most common way of measuring CL is electroencephalography (EEG), which can be a problem in some situations since it is contact-based and must be attached to the head of the test subject. Contact-based measurements of physiological parameters can affect research results, for example when wiring comes loose or the test subject moves. The biggest concern with contact-based systems, however, is that they are hard to deploy in practice: a user cannot fully relax, and the attached sensors may keep test subjects from behaving normally. The goal of this research is to compare the performance of data gathered with a contact-free camera-based system against a contact-based Shimmer GSR+ system in detecting cognitive load. Both approaches extract the heart rate (HR) and interbeat interval (IBI) while test subjects perform different tasks in a controlled experiment. From the gathered IBI, 13 different heart rate variability (HRV) features are extracted to determine levels of cognitive load. To determine which system better measures different levels of CL, three stress-level phases were used in the controlled experiment.
These three phases were: a reference point for low CL, in which test subjects sat normally (S0); normal CL, in which they solved easy puzzles and drove normally in a video game (S1); and high CL, in which they completed hard puzzles and drove the hardest course of a video game while answering math questions (S2). To classify the extracted HRV features into the three levels of CL, two machine learning (ML) algorithms, support vector machine (SVM) and k-nearest neighbor (KNN), were implemented. Both binary and multiclass feature matrices were created from all combinations of the stress levels in the collected data. To obtain the best classification accuracy, different optimizations, such as kernel functions, were chosen for the different feature matrices. The results show that the ML algorithms achieved higher classification accuracy on the data collected with the contact-free system than on the Shimmer data. The highest mean classification accuracy was 81% for binary classification of S0-S2 collected by the camera while driving, using fine KNN. The highest F1 score was 88%, achieved with medium Gaussian SVM on the S0-(S1/S2) feature matrix recorded with the camera system. It was concluded that the data gathered with the contact-free camera system achieved higher accuracy than the contact-based system, and that KNN achieved higher overall accuracy than SVM on the data. This research shows that a contact-free camera-based system can detect cognitive load better than a contact-based Shimmer GSR+ system, with high classification accuracy.
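As a rough illustration of the pipeline this abstract describes, the sketch below computes a handful of common time-domain HRV features from an IBI series and checks that a higher-load trace shows a higher heart rate and lower variability. The feature set and the toy IBI values are illustrative assumptions, not the thesis's actual 13 features or data.

```python
import numpy as np

def hrv_features(ibi_ms):
    """Compute a few common time-domain HRV features from an
    interbeat-interval (IBI) series given in milliseconds."""
    ibi = np.asarray(ibi_ms, dtype=float)
    diffs = np.diff(ibi)
    return {
        "mean_ibi": ibi.mean(),                      # average interbeat interval
        "sdnn": ibi.std(ddof=1),                     # overall variability
        "rmssd": np.sqrt(np.mean(diffs ** 2)),       # short-term variability
        "pnn50": np.mean(np.abs(diffs) > 50) * 100,  # % successive diffs > 50 ms
        "mean_hr": 60000.0 / ibi.mean(),             # heart rate in bpm
    }

# Toy IBI traces: a "relaxed" series (long, variable intervals) and a
# "loaded" series (short, flat intervals) -- invented illustrative values.
relaxed = [850, 900, 870, 920, 880, 910, 860]
loaded = [650, 655, 648, 652, 651, 649, 653]

f_relaxed = hrv_features(relaxed)
f_loaded = hrv_features(loaded)
# Higher cognitive load typically shows higher HR and lower variability.
assert f_loaded["mean_hr"] > f_relaxed["mean_hr"]
assert f_loaded["sdnn"] < f_relaxed["sdnn"]
```

Features like these would then be fed to the SVM/KNN classifiers the abstract mentions.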
942

Data analytics, interpretation and machine learning for environmental forensics using peak mapping methods

Ghasemi Damavandi, Hamidreza 01 August 2016 (has links)
In this work our driving motivation is to develop mathematically robust and computationally efficient algorithms that will help chemists towards their goal of pattern matching. Environmental chemistry today broadly faces difficult computational and interpretational challenges for vast and ever-increasing data repositories. A driving factor behind these challenges is the little-known, intricate relationships between constituent analytes that make up complex mixtures spanning a range of target and non-target compounds. While the end goals of different environmental applications are diverse, computationally speaking, many data interpretation bottlenecks arise from a lack of efficient algorithms and robust mathematical frameworks to identify, cluster and interpret compound peaks. There is a compelling need for compound-cognizant quantitative interpretation that accounts for the full informational range of gas chromatographic (and mass spectrometric) datasets. Traditional target-oriented analyses focus only on the dominant compounds of the chemical mixture, and thus are agnostic of the contribution of unknown non-target analytes. On the other extreme, statistical methods prevalent in chemometric interpretation ignore compound identity altogether and consider only the multivariate data statistics, and thus are agnostic of intrinsic relationships between the well-known target and unknown non-target analytes. Thus, both schools of thought (target-based or statistical) in current-day chemical data analysis and interpretation fall short of quantifying the complex interaction between major and minor compound peaks in molecular mixtures commonly encountered in environmental toxin studies. Such interesting insights would not be revealed by these standard techniques unless a deeper analysis of these patterns were taken into account in a quantitative mathematical framework that is at once compound-cognizant and comprehensive in its coverage of all peaks, major and minor.
This thesis aims to meet this grand challenge using a combination of signal processing, pattern recognition and data engineering techniques. We focus on petroleum biomarker analysis and polychlorinated biphenyl (PCB) congener studies in human breastmilk as our target applications. We propose a novel approach to chemical data analytics and interpretation that bridges the gap between target-cognizant traditional analysis from environmental chemistry and compound-agnostic computational methods in chemometric data engineering. Specifically, we propose computational methods for target-cognizant data analytics that also account for local unknown analytes allied to the established target peaks. The key intuition behind our methods is based on the underlying topography of the gas chromatographic landscape, and we extend recent peak mapping methods as well as propose novel peak clustering and peak neighborhood allocation methods to achieve our data analytic aims. Data-driven results based on a multitude of environmental applications are presented.
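A minimal sketch of the peak-mapping idea: the function below marks local maxima in a synthetic chromatogram trace. The Gaussian test signal and the threshold are invented for illustration; the thesis's actual peak mapping, clustering and neighborhood allocation methods are far richer.

```python
import numpy as np

def find_peaks(signal, threshold=0.0):
    """Return indices of simple local maxima above a threshold --
    a minimal stand-in for the peak-mapping step."""
    s = np.asarray(signal, dtype=float)
    idx = []
    for i in range(1, len(s) - 1):
        if s[i] > s[i - 1] and s[i] > s[i + 1] and s[i] > threshold:
            idx.append(i)
    return idx

# Synthetic chromatogram: two Gaussian peaks on a flat baseline
# (retention times 3 and 7 are arbitrary illustrative values).
t = np.linspace(0, 10, 200)
trace = np.exp(-((t - 3) ** 2) / 0.1) + 0.5 * np.exp(-((t - 7) ** 2) / 0.1)
peaks = find_peaks(trace, threshold=0.2)
# The detected retention times should sit near the true peak centres.
assert len(peaks) == 2
assert abs(t[peaks[0]] - 3) < 0.1 and abs(t[peaks[1]] - 7) < 0.1
```

Once peaks are located, clustering and neighborhood allocation can relate minor non-target peaks to the established target peaks.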
943

A combined machine-learning and graph-based framework for the 3-D automated segmentation of retinal structures in SD-OCT images

Antony, Bhavna Josephine 01 December 2013 (has links)
Spectral-domain optical coherence tomography (SD-OCT) is a non-invasive imaging modality that allows for the quantitative study of retinal structures. SD-OCT has begun to find widespread use in the diagnosis and management of various ocular diseases. While commercial scanners provide limited analysis of a small number of retinal layers, the automated segmentation of retinal layers and other structures within these volumetric images is quite a challenging problem, especially in the presence of disease-induced changes. The incorporation of a priori information, ranging from qualitative assessments of the data to automatically learned features, can significantly improve the performance of automated methods. Here, a combined machine-learning and graph-theoretic approach is presented for the automated segmentation of retinal structures in SD-OCT images. Machine-learning approaches are used to learn textural features from a training set, which are then incorporated into the graph-theoretic approach. The impact of the learned features on the final segmentation accuracy of the graph-theoretic approach is carefully evaluated so as to avoid incorporating learned components that do not improve the method. The adaptability of this versatile combination of machine-learning and graph-theoretic approaches is demonstrated through the segmentation of retinal surfaces in images obtained from humans, mice and canines. In addition to this framework, a novel formulation of the graph-theoretic approach is described whereby surfaces with a disruption can be segmented. By incorporating the boundary of the "hole" into the feasibility definition of the set of surfaces, the final result consists of not only the surfaces but the boundary of the hole as well. Such a formulation can be used to model the neural canal opening (NCO) in SD-OCT images, which appears as a 3-D planar hole disrupting the surfaces in its vicinity.
A machine-learning based approach was also used here to learn descriptive features of the NCO. Thus, the major contributions of this work include 1) a method for the automated correction of axial artifacts in SD-OCT images, 2) a combined machine-learning and graph-theoretic framework for the segmentation of retinal surfaces in SD-OCT images (applied to humans, mice and canines), 3) a novel formulation of the graph-theoretic approach for the segmentation of multiple surfaces and their shared hole (applied to the segmentation of the neural canal opening), and 4) the investigation of textural markers that could precede structural and functional change in degenerative retinal diseases.
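The graph-theoretic surface search can be illustrated, in a much-reduced 2-D form, as a dynamic program that picks one boundary row per image column under a smoothness constraint. The cost image and jump limit below are invented toy values; the thesis works with 3-D graphs and learned cost functions.

```python
import numpy as np

def segment_surface(cost, max_jump=1):
    """For each column of a 2-D cost image, find the row of a single
    surface that minimises total cost, subject to a smoothness constraint
    (|row change| <= max_jump between neighbouring columns) -- a toy
    dynamic-programming analogue of graph-based surface segmentation."""
    rows, cols = cost.shape
    dp = cost.astype(float).copy()    # dp[r, c]: best cost ending at (r, c)
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = int(np.argmin(dp[lo:hi, c - 1])) + lo
            dp[r, c] += dp[prev, c - 1]
            back[r, c] = prev
    # Trace back the minimum-cost surface from the last column.
    surface = [int(np.argmin(dp[:, -1]))]
    for c in range(cols - 1, 0, -1):
        surface.append(int(back[surface[-1], c]))
    return surface[::-1]

# Toy "image": low cost (a dark boundary) along row 2, drifting to row 3.
cost = np.full((5, 6), 10.0)
for c, r in enumerate([2, 2, 2, 3, 3, 3]):
    cost[r, c] = 1.0
assert segment_surface(cost) == [2, 2, 2, 3, 3, 3]
```

In the thesis's framework, the learned textural features would shape the cost values that this kind of search minimises.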
944

GPU implementation of a deep learning network for image recognition tasks

Parker, Sean Patrick 01 December 2012 (has links)
Image recognition and classification is one of the primary challenges of the machine learning community. Recent advances in learning systems, coupled with hardware developments have enabled general object recognition systems to be learned on home computers with graphics processing units. Presented is a Deep Belief Network engineered using NVIDIA's CUDA programming language for general object recognition tasks.
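A hedged numpy sketch of the restricted Boltzmann machine (RBM) building block of a Deep Belief Network: one contrastive-divergence (CD-1) weight update, the kind of dense linear algebra a CUDA implementation would batch on the GPU. The layer sizes, the omitted bias terms, and the random data are simplifications, not the thesis's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, lr=0.1):
    """One contrastive-divergence (CD-1) update for a bias-free RBM layer."""
    h0 = sigmoid(v0 @ W)                        # hidden activations
    h_sample = (rng.random(h0.shape) < h0) * 1.0  # stochastic hidden states
    v1 = sigmoid(h_sample @ W.T)                # reconstruction
    h1 = sigmoid(v1 @ W)
    return W + lr * (v0.T @ h0 - v1.T @ h1)     # positive minus negative phase

# Toy layer: 6 visible units, 4 hidden units, one batch of 8 binary "images".
W = rng.normal(scale=0.1, size=(6, 4))
batch = (rng.random((8, 6)) > 0.5) * 1.0
W = cd1_step(batch, W)
assert W.shape == (6, 4)
```

A DBN stacks such layers, training each greedily on the previous layer's hidden activations; on a GPU the matrix products above map naturally onto CUDA kernels.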
945

Characterization of active sonar targets

Schupp-Omid, Daniel 01 May 2016 (has links)
The problem of characterizing active sonar target responses has important applications in many fields, including the currently cost-prohibitive recovery of unexploded ordnance on the ocean floor. We present a method for recognizing these objects using a multidisciplinary approach that fuses machine learning, signal processing, and feature engineering. In short, by taking inspiration from other fields, we solve the problem of object recognition in shallow water in an inexpensive way. These techniques add to the body of knowledge in the field of active sonar processing and address real-world problems in the process.
946

Standard Machine Learning Techniques in Audio Beehive Monitoring: Classification of Audio Samples with Logistic Regression, K-Nearest Neighbor, Random Forest and Support Vector Machine

Amlathe, Prakhar 01 May 2018 (has links)
Honeybees are among the most important pollinating species in agriculture; three out of every four crops have the honeybee as their sole pollinator. Since 2006 there has been a drastic decrease in the bee population, attributed to Colony Collapse Disorder (CCD). Affected colonies fail or die without showing the traditional health symptoms that could otherwise alert beekeepers to their situation in advance. An electronic beehive monitoring system has various sensors embedded in it to extract video, audio and temperature data that can provide critical information on colony behavior and health without invasive beehive inspections. Previously, significant patterns and information have been extracted by processing the video/image data, but no work had been done using audio data. This research takes the first step towards the use of audio data in the Electronic Beehive Monitoring System (BeePi) by enabling the automatic classification of audio samples into different classes and categories. The experimental results give initial support to the claim that monitoring bee buzzing signals from the hive is feasible, that it can be a good indicator of hive health, and that it can help differentiate normal honeybee behavior from deviations.
947

Scavenger: A Junk Mail Classification Program

Malkhare, Rohan V 20 January 2003 (has links)
The problem of junk mail, also called spam, has reached epic proportions, and various efforts are underway to fight it. Junk mail classification using machine learning techniques is a key method in this fight. We have devised a machine learning algorithm in which features are created from individual sentences in the subject and body of a message by forming all possible word pairings from each sentence. Weights are assigned to the features based on the strength of their predictive capability for the spam/legitimate determination. The predictive capabilities are estimated from the frequency of occurrence of each feature in spam/legitimate collections as well as by applying heuristic rules. During classification, the total spam and legitimate evidence in the message is obtained by summing the weights of the extracted features for each class, and the message is classified into whichever class accumulates the greater sum. We compared the algorithm against the popular naïve Bayes algorithm (in [8]) and found that its performance exceeded that of naïve Bayes both in catching spam and in reducing false positives.
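The word-pairing feature scheme described above can be sketched as follows. The weights and messages are invented for illustration; the real system estimates weights from spam/legitimate collections and heuristic rules.

```python
from itertools import combinations

def word_pair_features(text):
    """Form all unordered word pairings from each sentence -- a
    simplified sketch of the feature scheme in the abstract."""
    features = set()
    for sentence in text.lower().split('.'):
        words = sorted(set(sentence.split()))
        features.update(combinations(words, 2))
    return features

def classify(text, spam_weights, ham_weights):
    """Sum the evidence of each extracted feature for both classes and
    pick the class with the larger total."""
    feats = word_pair_features(text)
    spam = sum(spam_weights.get(f, 0) for f in feats)
    ham = sum(ham_weights.get(f, 0) for f in feats)
    return "spam" if spam > ham else "legitimate"

# Hypothetical weights, standing in for ones learned from mail corpora.
spam_w = {("free", "money"): 3.0, ("now", "win"): 2.5}
ham_w = {("meeting", "tomorrow"): 2.0}

assert classify("Win free money now.", spam_w, ham_w) == "spam"
assert classify("Agenda for the meeting tomorrow.", spam_w, ham_w) == "legitimate"
```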
948

Apprentissage de préférences en espace combinatoire et application à la recommandation en configuration interactive / Preference learning in combinatorial spaces and application to recommendation in interactive configuration

Gimenez, Pierre-François 10 October 2018 (has links)
The analysis and the exploitation of preferences occur in multiple domains, such as economics, the humanities and psychology. E-commerce took an interest in the subject a few years ago with the surge of product personalisation. Our study deals with the representation and the learning of preferences over objects described by a set of attributes. These combinatorial spaces are huge, which makes representing an ordering in extenso intractable. That is why preference representation languages have been built: they can compactly represent preferences over these huge spaces.
In this dissertation, we study preference representation languages and preference learning. Our work focuses on two approaches. The first led us to propose the DRC algorithm for inference in Bayesian networks. While other inference algorithms use the Bayesian network as their sole source of information, DRC exploits the fact that Bayesian networks are often learnt from a set of examples, either chosen or observed. Such examples are a valuable source of information that can be used during inference. Based on this observation, DRC uses not only the Bayesian network structure, which captures the conditional independences between attributes, but also the set of examples, estimating the probabilities directly from it. DRC is particularly suited to problems with a dynamic probability distribution but static conditional independences. Our second approach focuses on the learning of k-LP-trees from examples of sold items. We formally define the problem and introduce a score and a distance adapted to it. Our theoretical results include a learning algorithm for k-LP-trees with a convergence property, a linear LP-tree learning algorithm that minimises the score we defined, and a sample complexity result: a number of examples logarithmic in the number of attributes is enough to learn a "good" linear LP-tree. We finally present an experimental contribution that evaluates different languages whose models are learnt from a car sales history. The learnt models are used to recommend values in the interactive configuration of Renault cars. Interactive configuration is a process in which the user chooses a value for one attribute at a time. The recommendation precision (the proportion of recommendations that would have been accepted by the user) and the recommendation time are measured; in addition, the parameters that influence recommendation quality are investigated.
Our results are promising: these methods, described either in the literature or in our contributions, are fast enough for an on-line use and their success rate is high, even close to the theoretical maximum.
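The core idea behind DRC, estimating conditional probabilities directly from the example set rather than from pre-compiled network tables, can be sketched as below. The attribute names and the sales history are invented for illustration; the real algorithm also exploits the network's conditional-independence structure.

```python
def conditional_prob(data, target, value, evidence):
    """Estimate P(target = value | evidence) directly from a sample of
    sold configurations -- a toy sketch of DRC's estimation step."""
    matching = [row for row in data
                if all(row[a] == v for a, v in evidence.items())]
    if not matching:
        return 0.0
    return sum(row[target] == value for row in matching) / len(matching)

# Hypothetical sales history: colour and engine choices of past cars.
history = [
    {"colour": "red", "engine": "petrol"},
    {"colour": "red", "engine": "petrol"},
    {"colour": "red", "engine": "diesel"},
    {"colour": "blue", "engine": "diesel"},
]

# Recommend the most probable colour given the user already chose petrol.
p = conditional_prob(history, "colour", "red", {"engine": "petrol"})
assert p == 1.0
p2 = conditional_prob(history, "engine", "petrol", {"colour": "red"})
assert abs(p2 - 2 / 3) < 1e-9
```

Because the probabilities are re-estimated from data at query time, the approach adapts when sales distributions drift while the independence structure stays fixed, as the abstract notes.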
949

Discretization for Naive-Bayes learning

Yang, Ying January 2003 (has links)
Abstract not available
950

Machine Learning for Adaptive Computer Game Opponents

Miles, Jonathan David January 2009 (has links)
This thesis investigates the use of machine learning techniques in computer games to create a computer player that adapts to its opponent's game-play. This includes first confirming that machine learning algorithms can be integrated into a modern computer game without having a detrimental effect on game performance, then experimenting with different machine learning techniques to maximize the computer player's performance. The experiments use three machine learning techniques: static prediction models, continuous learning, and reinforcement learning. Static models show the highest initial performance but are not able to beat a simple opponent. Continuous learning improves on the performance achieved with static models, but the rate of improvement drops over time and the computer player is still unable to beat the opponent. Reinforcement learning methods have the highest rate of improvement but the lowest initial performance, which limits their effectiveness because a large number of episodes are required before performance becomes sufficient to match the opponent.
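A minimal tabular Q-learning update, the kind of reinforcement-learning step compared in this line of work; the states, actions and rewards below are invented for illustration and are not the thesis's actual game model.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step: move Q(s, a) towards the observed
    reward plus the discounted value of the best next action."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Tiny example: the adaptive opponent learns that "block" pays off in
# the state "enemy_attacking" (hypothetical states and rewards).
Q = {
    "enemy_attacking": {"block": 0.0, "charge": 0.0},
    "enemy_idle": {"block": 0.0, "charge": 0.0},
}
for _ in range(20):
    q_update(Q, "enemy_attacking", "block", reward=1.0, next_state="enemy_idle")
    q_update(Q, "enemy_attacking", "charge", reward=-1.0, next_state="enemy_idle")

assert Q["enemy_attacking"]["block"] > Q["enemy_attacking"]["charge"]
```

The slow start the abstract reports is visible even here: the useful ordering of actions only emerges after repeated episodes of experience.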
