  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Predicting consultation durations in a digital primary care setting

Åman, Agnes January 2018 (has links)
The aim of this thesis is to develop a method for predicting consultation durations in a digital primary care setting and thereby create a tool for designing a more efficient scheduling system in primary care. The ultimate purpose of the work is to contribute to a reduction in waiting times in primary care. Although no actual scheduling system was implemented, four machine learning models were implemented and compared to see whether any of them performed better than the others. The input data used in this study was a combination of patient and doctor features. The patient features consisted of information extracted from digital symptom forms filled out by patients before a video consultation with a doctor. These features were combined with the doctor's speed, defined as the doctor's average consultation duration over his/her previous meetings. The output was defined as the length of the video consultation, including the administrative work done by the doctor before and after the meeting. One objective of this thesis was to investigate whether the relationship between input and output was linear or non-linear. The problem was also formulated both as a regression and as a classification problem, and the two formulations were compared in terms of achieved accuracy. The models chosen for this study were linear regression, linear discriminant analysis and the multi-layer perceptron, the latter implemented for both regression and classification. After performing a statistical t-test and a two-way ANOVA test, it was concluded that no significant difference could be detected between the models' performances. However, since linear regression is the least computationally heavy, it is suggested for future use until another model is shown to achieve better performance. Limitations such as the small number of models tested and flaws in the data set were identified, and further research is encouraged.
Studies implementing an actual scheduling system using the methodology presented in the thesis are recommended as a topic for future research.
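The regression-versus-classification comparison described above can be sketched in a few lines. The features, coefficients, and duration bins below are invented for illustration and are not taken from the thesis; only the overall setup (symptom-form features plus doctor speed, linear regression versus a binned classification view) follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features: two symptom-form indicators plus the doctor's average
# historical consultation duration ("doctor speed"), as described in the thesis.
n = 200
X = np.column_stack([
    rng.integers(0, 2, n),      # e.g. "fever reported" flag (illustrative)
    rng.integers(0, 2, n),      # e.g. "follow-up needed" flag (illustrative)
    rng.normal(10.0, 2.0, n),   # doctor's mean duration in minutes (illustrative)
])
# Synthetic ground truth: duration depends linearly on the features plus noise.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0.0, 1.0, n)

# Regression formulation: ordinary least squares on [1, X].
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))

# Classification formulation: bin durations into short/medium/long tertiles
# and score the same linear predictions by the bin they fall into.
bins = np.quantile(y, [1 / 3, 2 / 3])
y_cls = np.digitize(y, bins)
pred_cls = np.digitize(pred, bins)
accuracy = np.mean(pred_cls == y_cls)
print(f"RMSE: {rmse:.2f} min, 3-class accuracy: {accuracy:.2f}")
```

Both formulations use the same linear predictor; the classification view simply coarsens the target, which is one way to make the two problem formulations directly comparable.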
22

A BAYESIAN EVIDENCE DEFINING SEARCH

Kim, Seongsu 25 June 2015 (has links)
No description available.
23

Extracting key features for analysis and recognition in computer vision

Gao, Hui 13 March 2006 (has links)
No description available.
24

Predicting basketball performance based on draft pick : A classification analysis

Harmén, Fredrik January 2022 (has links)
In this thesis, we will look to predict the performance of a basketball player coming into the NBA depending on where the player was picked in the NBA draft. This will be done by testing different machine learning models on data from the previous 35 NBA drafts and then comparing the models in order to see which model had the highest accuracy of classification. The machine learning methods used are Linear Discriminant Analysis, K-Nearest Neighbors, Support Vector Machines and Random Forests. The results show that the method with the highest accuracy of classification was Random Forests, with an accuracy of 42%.
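As an illustration of the classification setup, here is a minimal sketch of one of the four methods compared (k-nearest neighbours) on synthetic draft data; the pick-to-performance mapping and class labels below are invented, not the thesis's data set:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: one feature (draft pick number, 1-60) and a coarse
# career-performance class (0 = bench, 1 = rotation, 2 = star).
picks = rng.integers(1, 61, 300)
noise = rng.choice([-1, 0, 1], 300, p=[0.15, 0.7, 0.15])
labels = np.clip(2 - (picks - 1) // 20 + noise, 0, 2)  # earlier picks tend to do better

# Simple k-nearest-neighbours classifier, one of the four methods compared.
def knn_predict(train_x, train_y, query_x, k=5):
    preds = []
    for q in query_x:
        idx = np.argsort(np.abs(train_x - q))[:k]        # k closest picks
        votes = np.bincount(train_y[idx], minlength=3)   # majority vote
        preds.append(int(np.argmax(votes)))
    return np.array(preds)

train_x, test_x = picks[:250], picks[250:]
train_y, test_y = labels[:250], labels[250:]
acc = np.mean(knn_predict(train_x, train_y, test_x) == test_y)
print(f"kNN accuracy: {acc:.2f}")
```

The same train/test split and accuracy metric would apply unchanged to the other three methods, which is how a comparison like the thesis's can be set up.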
25

Infrared face recognition

Lee, Colin K. 06 1900 (has links)
Approved for public release, distribution is unlimited / This study continues a previous face recognition investigation using uncooled infrared technology. The database developed in an earlier study is further expanded to include 50 volunteers with 30 facial images from each subject. The automatic image reduction method reduces the pixel size of each image from 160 × 120 to 60 × 45. The study reexamines two linear classification methods: Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (LDA). Both PCA and LDA apply eigenvector and eigenvalue concepts. In addition, the Singular Value Decomposition based snapshot method is applied to decrease the computational load. K-fold cross validation is applied to estimate classification performance. Results indicate that the best PCA-based method (using all eigenvectors) produces an average classification performance of 79.22%. Incorporating PCA for dimension reduction, the LDA-based method achieves 94.58% average classification performance. Additional testing on unfocused images produces no significant impact on the overall classification performance. Overall, the results again confirm that uncooled IR imaging can be used to identify individual subjects in a constrained indoor environment. / Lieutenant, United States Navy
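The PCA pipeline described above can be sketched as follows. The tiny synthetic "images" stand in for the 60 × 45 IR images (real data is not reproduced here), and a nearest-class-mean step is a simplification of the classifiers actually studied; with far fewer samples than pixels, the SVD of the data matrix is the "snapshot" trick the abstract mentions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the IR face data: 3 "subjects", 10 tiny images each,
# flattened to 30-dimensional vectors (the real images were 60 x 45 = 2700-d).
subjects = 3
templates = rng.normal(0, 1, (subjects, 30))
X = np.vstack([t + 0.3 * rng.normal(0, 1, (10, 30)) for t in templates])
y = np.repeat(np.arange(subjects), 10)

# PCA via SVD of the centred data matrix: with n samples << pixel count,
# this works with an n x n problem instead of a pixels x pixels one.
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
Z = (X - mean) @ Vt[:5].T        # project onto the top 5 eigenvectors

# Nearest-class-mean classification in the reduced space.
means = np.array([Z[y == c].mean(axis=0) for c in range(subjects)])
pred = np.argmin(((Z[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
acc = np.mean(pred == y)
print(f"training accuracy in PCA space: {acc:.2f}")
```

In the thesis's LDA variant, a discriminant projection would be fitted on top of this PCA subspace rather than classifying in it directly.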
26

Mobilitätsverhalten potentieller Radfahrer in Dresden

Manteufel, Rico 01 October 2015 (has links) (PDF)
Before German reunification, Dresden was a city of motorized traffic and cyclists were rare. But in the 1990s a change in transport policy began, and cycling became more important. This Master's thesis aims to show the current standing of cycling in Dresden. To that end, the results of the "SrV" study are analysed with regard to potential cyclists and their journeys. The methods used were descriptive analysis and linear discriminant analysis, both applied at the personal and the journey-specific level of the data. The results show that Dresden still has much to do to become a good cycling city: the share of cycling was not very high in 2013. Instead, the car is still the most used mode of transport, and its share of the modal split is only slowly sinking. However, this study identifies typical characteristics of cyclists and cycling journeys in Dresden, providing a basis for getting more people to cycle and making Dresden a more eco-friendly city.
27

Learning algorithms for sparse classification / Algorithmes d'estimation pour la classification parcimonieuse

Sanchez Merchante, Luis Francisco 07 June 2013 (has links)
This thesis deals with the development of estimation algorithms with embedded feature selection in the context of high-dimensional data, in the supervised and unsupervised frameworks. The contributions of this work are materialized by two algorithms: GLOSS for the supervised domain and Mix-GLOSS for its unsupervised counterpart. Both algorithms are based on the resolution of an optimal scoring regression regularized with a quadratic formulation of the group-Lasso penalty, which encourages the removal of uninformative features.
The theoretical foundations proving that a group-Lasso penalized optimal scoring regression can be used to solve a linear discriminant analysis problem are first developed in this work. The theory that adapts this technique to the unsupervised domain by means of the EM algorithm is not new, but it has never been clearly exposed for a sparsity-inducing penalty. This thesis solidly demonstrates that the use of group-Lasso penalized optimal scoring regression inside an EM algorithm is possible. Our algorithms have been tested on real and artificial high-dimensional databases, with impressive results in terms of parsimony and without compromising prediction performance.
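A rough sketch of how a sparsity-inducing penalty removes uninformative features: the plain Lasso coordinate descent below stands in for the group-Lasso penalized optimal scoring used by GLOSS (the actual algorithm couples the penalty with optimal scoring and, for Mix-GLOSS, an EM loop, none of which is reproduced here); the data, dimensions, and penalty weight are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic high-noise-feature design: 10 features, only the first two informative.
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=n)

def lasso_cd(X, y, lam, iters=200):
    """Lasso via cyclic coordinate descent; the soft-thresholding step is
    what drives uninformative coefficients exactly to zero."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual without j
            rho = X[:, j] @ r / len(y)
            z = (X[:, j] ** 2).sum() / len(y)
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

beta = lasso_cd(X, y, lam=0.2)
print("estimated coefficients:", np.round(beta, 2))
print("features kept:", int(np.sum(np.abs(beta) > 1e-8)))
```

The group-Lasso version applies the same shrink-or-kill operation to whole blocks of coefficients at once, so an entire descriptor is either kept or eliminated.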
28

Avaliação da gravidade da malária utilizando técnicas de extração de características e redes neurais artificiais

Almeida, Larissa Medeiros de 17 April 2015 (has links)
About half the world's population lives in malaria risk areas. Moreover, given the globalization of travel, diseases that were once considered exotic and mostly tropical are increasingly found in hospital emergency rooms around the world. When it comes to experience with tropical diseases, expert opinion is often unavailable or not accessible in a timely manner. The task of making an accurate and efficient diagnosis of malaria, essential in medical practice, can become complex, and the complexity of the process increases as patients present non-specific symptoms with a large amount of data and imprecise information involved. In this context, Uzoka and colleagues (2011a), using clinical information from 30 Nigerian patients with confirmed malaria, applied the Analytic Hierarchy Process (AHP) and fuzzy methodology to evaluate the severity of malaria, and compared the results with the diagnoses of medical experts.
This work develops a new methodology for evaluating the severity of malaria and compares it with the techniques used by Uzoka and colleagues (2011a), using the same data set as that study. The technique used is Artificial Neural Networks (ANN). Three architectures with different numbers of neurons in the hidden layer are evaluated, along with two training methodologies (leave-one-out and 10-fold cross-validation) and three stopping criteria: root mean square error, early stopping, and regularization. In the first phase the full database is used; subsequently, feature extraction methods are applied: Principal Component Analysis (PCA) in the second phase and Linear Discriminant Analysis (LDA) in the third. The best result across the three phases, 83.3% accuracy, was obtained with the full database using the regularization criterion combined with the leave-one-out method. The best result reported by Uzoka, Osuji and Obot (2011) was 80% accuracy, obtained with the fuzzy network.
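The leave-one-out protocol behind the best result can be sketched as below; a nearest-class-mean classifier stands in for the thesis's MLP (training an ANN per fold is omitted for brevity), and the 30-sample two-class data set is synthetic, echoing only the small sample size of the real study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the 30-patient data set: 30 samples, 5 clinical
# features, severity class 0 or 1 (illustrative values only).
X = np.vstack([rng.normal(0, 1, (15, 5)), rng.normal(1.5, 1, (15, 5))])
y = np.repeat([0, 1], 15)

def fit_predict(Xtr, ytr, xte):
    """Tiny stand-in classifier; the thesis used an MLP at this step."""
    means = np.array([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    return int(np.argmin(((means - xte) ** 2).sum(axis=1)))

# Leave-one-out: with so few patients, every sample gets a turn as the
# single held-out test case.
hits = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    hits += fit_predict(X[mask], y[mask], X[i]) == y[i]
loo_acc = hits / len(y)
print(f"LOO accuracy: {loo_acc:.2f}")
```

Leave-one-out is the natural choice here: with 30 samples, a 10-fold split would leave very little training data per fold, while LOO uses all but one sample each time.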
29

Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition

Wang, Xuechuan, n/a January 2003 (has links)
Conventional pattern recognition systems have two components: feature analysis and pattern classification. Feature analysis is achieved in two steps: a parameter extraction step and a feature extraction step. In the parameter extraction step, information relevant to pattern classification is extracted from the input data in the form of a parameter vector. In the feature extraction step, the parameter vector is transformed into a feature vector. Feature extraction can be conducted independently or jointly with either parameter extraction or classification. Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are the two most popular independent feature extraction algorithms. Both extract features by projecting the parameter vectors into a new feature space through a linear transformation matrix, but they optimize the transformation matrix with different intentions. PCA optimizes the transformation matrix by finding the largest variations in the original feature space. LDA pursues the largest ratio of between-class variation to within-class variation when projecting the original feature space onto a subspace. The drawback of independent feature extraction algorithms is that their optimization criteria differ from the classifier's minimum classification error criterion, which may cause inconsistency between the feature extraction and classification stages of a pattern recognizer and, consequently, degrade classifier performance. A direct way to overcome this problem is to conduct feature extraction and classification jointly with a consistent criterion. The Minimum Classification Error (MCE) training algorithm provides such an integrated framework. The MCE algorithm was first proposed for optimizing classifiers. It is a type of discriminative learning algorithm, but it achieves minimum classification error directly. The flexibility of the MCE framework makes it convenient to conduct feature extraction and classification jointly.
Conventional feature extraction and pattern classification algorithms (LDA, PCA, the MCE training algorithm, the minimum distance classifier, the likelihood classifier and the Bayesian classifier) are linear algorithms. The advantage of linear algorithms is their simplicity and their ability to reduce feature dimensionality. However, they are limited in that the decision boundaries they generate are linear and have little computational flexibility. SVM is a more recently developed integrated pattern classification algorithm with a non-linear formulation. It is based on the idea that classification problems that afford dot-products can be computed efficiently in higher-dimensional feature spaces: classes that are not linearly separable in the original parametric space can be linearly separated in the higher-dimensional feature space. Because of this, SVM can handle classes with complex non-linear decision boundaries. However, SVM is a highly integrated and closed pattern classification system, and it is very difficult to adopt feature extraction into SVM's framework; thus SVM is unable to conduct feature extraction tasks. This thesis investigates LDA and PCA for feature extraction and dimensionality reduction and proposes the application of MCE training algorithms for joint feature extraction and classification tasks. A generalized MCE (GMCE) training algorithm is proposed to mend the shortcomings of the MCE training algorithm in joint feature extraction and classification tasks. SVM, as a non-linear pattern classification system, is also investigated in this thesis. A reduced-dimensional SVM (RDSVM) is proposed to enable SVM to conduct feature extraction and classification jointly. All of the investigated and proposed algorithms are first tested and compared on a number of small databases, such as the Deterding Vowels database, Fisher's IRIS database and German's GLASS database. They are then tested in a large-scale speech recognition experiment based on the TIMIT database.
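The contrast between PCA's and LDA's optimization criteria can be made concrete with a small numpy example: two classes elongated along one axis but separated along the other, so the max-variance criterion and the class-separation criterion pick different projection directions. The data is synthetic and two-dimensional purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two classes with large variance along x but class separation along y:
# PCA's criterion favours x, LDA's favours y.
n = 500
c0 = np.column_stack([rng.normal(0, 5, n), rng.normal(-1, 0.5, n)])
c1 = np.column_stack([rng.normal(0, 5, n), rng.normal(+1, 0.5, n)])
X = np.vstack([c0, c1])

# PCA direction: leading right singular vector of the centred data
# (equivalently, the leading eigenvector of the total covariance).
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
pca_dir = Vt[0]

# Two-class LDA direction: w = Sw^-1 (m1 - m0), with Sw the pooled
# within-class scatter matrix.
m0, m1 = c0.mean(axis=0), c1.mean(axis=0)
Sw = np.cov(c0.T) * (n - 1) + np.cov(c1.T) * (n - 1)
lda_dir = np.linalg.solve(Sw, m1 - m0)
lda_dir /= np.linalg.norm(lda_dir)

print("PCA direction:", np.round(np.abs(pca_dir), 2))
print("LDA direction:", np.round(np.abs(lda_dir), 2))
```

Here PCA's projection keeps almost no class information while LDA's keeps nearly all of it, which is exactly the inconsistency with the classifier's error criterion that motivates joint MCE-style training.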
30

Atrial Fibrillation Signal Analysis

Vaizurs, Raja Sarath Chandra Prasad 01 January 2011 (has links)
Atrial fibrillation (AF) is the most common type of cardiac arrhythmia encountered in clinical practice and is associated with increased mortality and morbidity. Identification of the sources of AF has been a goal of researchers for over 20 years. Current treatment procedures such as cardioversion, radiofrequency ablation, and multiple drugs have reduced the incidence of AF. Nevertheless, these treatments succeed in only 35-40% of AF patients, as they have limited effect in maintaining the patient in normal sinus rhythm. The problem stems from the fact that no methods have been developed to analyze the electrical activity generated by the cardiac cells during AF and to detect the aberrant atrial tissue that triggers it. In clinical practice, the sources triggering AF are generally expected to be at one of the four pulmonary veins in the left atrium. Classifying the signals originating from the four pulmonary veins in the left atrium has been the mainstay of the signal analysis in this thesis, which ultimately leads to correctly locating the source triggering AF. Unlike much current research that uses ECG signals for AF analysis, we collect intra-cardiac signals along with ECG signals. AF signals collected from catheters placed inside the heart give a better understanding of AF characteristics than the ECG. In recent years, mechanisms leading to AF induction have begun to be explored, but the current state of research and diagnosis of AF mainly involves inspection of the 12-lead ECG, QRS subtraction methods, and spectral analysis to find the fibrillation rate, and is limited to establishing the presence or absence of AF. The main goal of this thesis research is to develop a methodology and algorithm for finding the source of AF. Pattern recognition techniques were used to classify the AF signals originating from the four pulmonary veins.
The classification of AF signals recorded by a stationary intra-cardiac catheter was based on dominant frequency, frequency distribution and normalized power. Principal Component Analysis was used to reduce the dimensionality, and Linear Discriminant Analysis was then used as the classification technique. An algorithm has been developed and tested during recorded periods of AF with promising results.
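The dominant-frequency feature mentioned above can be illustrated with synthetic signals; the sampling rate, the two fibrillation rates, and the noise level below are invented for the sketch, not taken from the thesis's recordings:

```python
import numpy as np

rng = np.random.default_rng(5)

fs = 200.0                      # sampling rate, Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)   # 2-second recordings

# Synthetic stand-in for intra-cardiac recordings: two "sources" with
# different dominant fibrillation rates (6 Hz vs 9 Hz), plus noise.
def make_signal(freq):
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.normal(size=t.size)

signals = [make_signal(6.0) for _ in range(10)] + [make_signal(9.0) for _ in range(10)]
labels = np.array([0] * 10 + [1] * 10)

# Dominant-frequency feature: location of the peak of the magnitude spectrum.
def dominant_freq(x):
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    spec[freqs < 1.0] = 0.0      # ignore the DC / very-low band
    return freqs[np.argmax(spec)]

feats = np.array([dominant_freq(s) for s in signals])
pred = (np.abs(feats - 9.0) < np.abs(feats - 6.0)).astype(int)
acc = np.mean(pred == labels)
print(f"classification accuracy from dominant frequency alone: {acc:.2f}")
```

In the thesis's pipeline this scalar would be combined with frequency-distribution and normalized-power features, reduced with PCA, and classified with LDA rather than by the simple nearest-frequency rule used here.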
