161

Time series discrimination, signal comparison testing, and model selection in the state-space framework

Bengtsson, Thomas January 2000 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2000. / Typescript. Vita. Includes bibliographical references (leaf 104). Also available on the Internet.
162

Neutral zone classifiers within a decision-theoretic framework

Yu, Hua. January 2009 (has links)
Thesis (Ph. D.)--University of California, Riverside, 2009. / Includes abstract. Also issued in print. Includes bibliographical references (leaves 81-84). Available via ProQuest Digital Dissertations.
163

Methods for improving the reliability of semiconductor fault detection and diagnosis with principal component analysis

Cherry, Gregory Allan 28 August 2008 (has links)
Not available / text
164

Klasifikavimo su mokytoju metodų lyginamoji analizė / A comparative analysis of supervised classification methods

Šimkevičius, Simonas 05 June 2006 (has links)
Supervised classification methods are applied in many fields, and the main problem in applying them is how to select the most appropriate method for a particular case. A literature review was carried out to establish the advantages and disadvantages of the criteria most commonly used for comparing supervised classification methods, and a comparison methodology was then proposed. An analysis of SAS system procedures and macro commands showed that no convenient software exists for comparing the results of supervised classification methods; doing so by hand demands considerable effort, good knowledge of the SAS programming language, and a high level of programming skill. The main purpose of this work is therefore to extend the SAS statistical data analysis system with facilities for comparing supervised classification methods and for classifying various data. The SAS system is extended with a tool that compares the quality of linear, quadratic, kernel, and nearest-neighbour discriminant analysis and of logistic regression. Classification error estimates were obtained by resubstitution, leave-one-out cross-validation, bootstrap, and Monte Carlo cross-validation, together with classification error confidence intervals obtained by the non-parametric bootstrap. The created tool was tested on various data (different sample sizes, varying class separability, violations of assumptions... [to full text]
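As a rough Python illustration of the kind of comparison described above (the thesis tool is implemented in SAS; scikit-learn, the synthetic data, and all parameter choices below are assumptions of this sketch, not the thesis's), cross-validated error rates for several supervised classifiers can be estimated as follows:

```python
# Sketch: compare supervised classifiers by cross-validated error rate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, ShuffleSplit, cross_val_score

# Synthetic two-class data stands in for the thesis's test data sets.
X, y = make_classification(n_samples=200, n_features=5, n_informative=3,
                           n_classes=2, random_state=0)

classifiers = {
    "linear DA": LinearDiscriminantAnalysis(),
    "quadratic DA": QuadraticDiscriminantAnalysis(),
    "nearest neighbour": KNeighborsClassifier(n_neighbors=5),
    "logistic regression": LogisticRegression(max_iter=1000),
}

for name, clf in classifiers.items():
    # Leave-one-out cross-validation error estimate.
    loo_err = 1 - cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    # Monte Carlo cross-validation (repeated random splits) error estimate.
    mc_err = 1 - cross_val_score(clf, X, y,
                                 cv=ShuffleSplit(n_splits=50, test_size=0.3,
                                                 random_state=0)).mean()
    print(f"{name:20s}  LOO error={loo_err:.3f}  MC-CV error={mc_err:.3f}")
```

The kernel discriminant analysis variant mentioned in the abstract has no direct scikit-learn equivalent and is omitted from this sketch.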
165

DEVELOPMENT OF AN EEG BRAIN-MACHINE INTERFACE TO AID IN RECOVERY OF MOTOR FUNCTION AFTER NEUROLOGICAL INJURY

Salmon, Elizabeth 01 January 2013 (has links)
Impaired motor function following neurological injury may be overcome through therapies that induce neuroplastic changes in the brain. Therapeutic methods include repetitive exercises that promote use-dependent plasticity (UDP), the benefit of which may be increased by first administering peripheral nerve stimulation (PNS) to activate afferent fibers, resulting in increased cortical excitability. We speculate that PNS delivered only in response to attempted movement would induce timing-dependent plasticity (TDP), a mechanism essential to normal motor learning. Here we develop a brain-machine interface (BMI) to detect movement intent and effort in healthy volunteers (n=5) from their electroencephalogram (EEG). This could be used in the future to promote TDP by triggering PNS in response to a patient’s level of effort in a motor task. Linear classifiers were used to predict state (rest, sham, right, left) from EEG variables in a handgrip task and to discriminate between three levels of applied force. Mean classification accuracy with out-of-sample data was 54% (23-73%) for tasks and 44% (21-65%) for force. There was a slight but significant correlation (p<0.001) between sample entropy and force exerted. The results indicate the feasibility of applying PNS in response to motor intent detected from the brain.
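Sample entropy, the regularity measure correlated with force above, can be computed for a one-dimensional EEG segment along these lines (a minimal reference sketch assuming the standard Richman-Moorman definition; the embedding dimension m and tolerance r are common defaults, not necessarily the thesis's settings):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D signal: -ln(A/B), where B counts pairs of
    length-m templates within tolerance r (Chebyshev distance, no self-matches)
    and A counts the same for templates of length m+1."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)  # common default tolerance

    def match_count(mm):
        # Use the same n - m template start points for lengths m and m+1.
        templates = np.array([x[i:i + mm] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            dists = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dists <= r))
        return count

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(500)))                  # irregular: higher
print(sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 500))))   # regular: lower
```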
166

Application de techniques parcimonieuses et hiérarchiques en reconnaissance de la parole / Application of sparse and hierarchical techniques in speech recognition

Brodeur, Simon January 2013 (has links)
Speech recognition systems are fundamentally derived from the fields of signal processing and statistical signal modelling. In recent years, however, important innovations from related fields such as image processing and computational neuroscience have been slow to improve the performance of current speech recognition systems. The literature review suggested that a speech recognition system integrating hierarchy, sparsity, and high dimensionality would combine the advantages of each. The overall objective is to understand how integrating all these aspects would improve the robustness of a speech recognition system to additive noise. The TI46 database (isolated words, small vocabulary) is used for unsupervised learning and classification tests. The various additive noises come from the NOISEX-92 database and allow robustness to be evaluated under realistic noise conditions. Feature extraction in the proposed system is performed by successive linear projections onto bases, covering progressively more temporal and spectral context. Various thresholding methods produce a multi-scale, binary, and sparse representation of speech. At the level of the basis dictionary, unsupervised learning can, under certain conditions, yield bases that reflect phonetic and syllabic characteristics of speech, thus aiming at an object-based representation of the signal. The independent component analysis (ICA) algorithm proved better suited to extracting such bases, mainly because of its redundancy reduction criterion. Theoretical and experimental analyses showed how sparsity can circumvent the problems of distance discrimination and probability density estimation in high-dimensional spaces. It is observed that a sparse, high-dimensional feature space can define a parameter space (e.g. a statistical model) with the same properties. This reduces the disparity between the representations of the feature extraction stage and those of the classification stage. Moreover, the feature extraction stage can help reduce the complexity of the classification stage. A simple linear classifier can complement a hidden Markov model (HMM), combining increased discrimination capability with the versatility of a state-based segmentation of the signal. The results show that the developed architecture offers better recognition rates under clean and noisy conditions than a conventional architecture using cepstral coefficients (MFCC) and a support vector machine (SVM) as a discriminative classifier. Unlike speech coding techniques, where the transformation must be invertible, reconstruction is not important in speech recognition. This justified the possibility of considerably reducing the complexity of the feature and parameter spaces without diminishing discrimination power and robustness.
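A toy Python sketch of the dictionary-learning step described above (FastICA from scikit-learn stands in for the thesis's ICA implementation, synthetic mixed sources replace the TI46/NOISEX-92 spectrogram frames, and the component count and threshold are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic stand-in for log-spectrogram frames: sparse (Laplacian) sources
# mixed linearly, so ICA has non-Gaussian structure to recover.
rng = np.random.default_rng(0)
sources = rng.laplace(size=(2000, 32))
mixing = rng.standard_normal((32, 64))
frames = sources @ mixing                     # one 64-bin "frame" per row

ica = FastICA(n_components=32, random_state=0, max_iter=1000)
codes = ica.fit_transform(frames)             # projections onto the learned bases

# Thresholding the projections yields a sparse, binary code per frame,
# analogous to the binary sparse representation described above.
threshold = np.percentile(np.abs(codes), 90)
binary_codes = (np.abs(codes) > threshold).astype(np.uint8)
print(binary_codes.shape, binary_codes.mean())   # roughly 10% of entries active
```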
167

Classification parcimonieuse et discriminante de données complexes. Une application à la cytologie / Sparse and discriminative classification of complex data, with an application to cytology

Brunet, Camille 01 December 2011 (has links) (PDF)
The main themes of this thesis are sparsity and discrimination for modelling complex data. In the first part of the thesis, we work within a Gaussian mixture model framework: we introduce a new family of probabilistic models that simultaneously cluster the data and find a discriminative subspace that best separates the groups. A family of 12 discriminative latent mixture (DLM) models is introduced, based on three ideas: first, real data live in a latent subspace whose intrinsic dimension is smaller than that of the observed space; second, a subspace of K-1 dimensions is sufficient to discriminate K groups; and third, the observed and latent spaces are linked by a linear transformation. An estimation procedure, called Fisher-EM, is proposed and in most cases improves clustering performance thanks to the use of the discriminative subspace. In a second piece of work, we address the determination of the number of groups within the framework of seriation. We propose to introduce sparsity into the data through a family of binary matrices, built from a dissimilarity measure based on the number of common neighbours between pairs of observations. In particular, the larger the required number of common neighbours, the sparser the matrix, i.e. the more zeros it contains; as the sparsity threshold increases, this removes outliers and noisy data. This collection of sparse matrices is ordered by a forward seriation algorithm, called PB-Clus, in order to obtain block representations of the seriated matrices. Both methods were validated on a biological application based on the detection of cervical cancer.
168

The Road to Bankruptcy: A study on Predicting Financial Distress in Sweden

Quarcoo, Nii Lartey, Smedberg, Patrik January 2014 (has links)
This thesis studies whether cash flow ratios can predict corporate financial distress in Sweden by employing multiple discriminant analysis. It was inspired by the Altman Z-score, which was adjusted for this aim. The study adopted a positivist epistemology and an objectivist ontology, and took a deductive research approach, employing quantitative methods to test the hypotheses developed. The hypotheses were tested by means of classification accuracy and the Independent Samples Test. To identify financial distress, a proxy ratio was adopted: the operating cash flow ratio. The sample consisted of 227 firms in total within the retail and service industries, over the period 2000-2013. It was found that the proxy was unable to separate firms into distressed and non-distressed groups, but rather classified all firms as distressed. Furthermore, the other ratios also failed to produce any classification. The question this study set out to answer therefore leads to the conclusion that cash flow ratios cannot predict corporate financial distress for retail and service companies in Sweden.
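For reference, a minimal sketch of the original (1968) five-ratio Altman Z-score that inspired the model above; the thesis's adjusted, cash-flow-based coefficients are not reproduced here, the input figures are made up, and the cut-off zones shown are Altman's originals:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    """Original Altman (1968) Z-score for publicly traded manufacturing firms."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Hypothetical example figures, in any consistent currency unit.
z = altman_z(working_capital=50, retained_earnings=120, ebit=40,
             market_value_equity=300, sales=600,
             total_assets=500, total_liabilities=200)
# Altman's original zones: Z > 2.99 "safe", 1.81-2.99 "grey", Z < 1.81 "distress".
print(z, "distressed" if z < 1.81 else "grey zone" if z < 2.99 else "safe")
```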
169

A study of the determinants of transfer pricing: the evaluation of the relationship between a number of company variables and transfer pricing methods used by UK companies in domestic and international markets

Mostafa, Azza Mostafa Mohamed January 1981 (has links)
The transfer pricing literature indicates that an investigation of some aspects of this subject could usefully be undertaken in order to contribute to the understanding of transfer pricing in both domestic and international markets. This study aims at exploring the current state of transfer pricing practice and establishing the importance attached to the ranking of transfer pricing determinants (i.e. objectives and environmental variables) and the extent to which the ranking varies across markets, industry, and the transfer pricing method used. It also seeks to discover interrelationships among the transfer pricing determinants in order to produce a reduced set of basic factors. Lastly, it aims at evaluating the relationship between transfer pricing determinants and transfer pricing methods and at discovering a means of predicting the latter from the company's perception of the relative importance of these determinants. To achieve the above objectives, an empirical study covering both domestic and international markets was undertaken in UK companies. The conclusions are concerned with transfer pricing policy, methods currently used, and problems apparent in practice. The overall ranking by survey respondents of the transfer pricing determinants is given, as well as the results of tests of certain hypotheses relating to this ranking. The transfer pricing determinants used in the survey for domestic and international markets (twelve and twenty respectively) have been reduced by Factor Analysis to four and six factors, and the study used these results to obtain measures of the ranking of the discovered factors. Finally, the relationship between the transfer pricing determinants and transfer pricing methods was quantitatively evaluated in the form of a set of classification functions by using Multiple Discriminant Analysis. The classification functions are able to predict the transfer pricing method actually used in companies with an acceptable degree of success. The study's results have been reviewed with a small number of senior managers who are involved in establishing transfer pricing policy within their companies.
170

Increasing The Accuracy Of Vegetation Classification Using Geology And DEM

Domac, Aysegul 01 December 2004 (has links) (PDF)
The difficulty of gathering information in the field and the coarse resolution of Landsat images force the use of ancillary data in vegetation mapping. The aim of this study is to increase the accuracy of species-level vegetation classification by incorporating environmental variables in the Amanos region. In the first part of the study, a coarse vegetation classification is obtained using the maximum likelihood method with the help of forest management maps. Canonical Correspondence Analysis is used to explore the relationships among the environmental variables and vegetation classes. Discriminant Analysis is used in the second part of the study in two stages. First, Fisher's linear equations are calculated for each of the nine previously defined groups, and each pixel is assigned to one of these groups according to the probability of that pixel belonging to the group. In the second stage, the distance raster of the maximum likelihood classification is used: distance raster pixels with a value less than one are treated as misclassified and replaced with the first-stage result for that pixel. As a result of this study, a 19.6% increase in overall accuracy is obtained by using the relationships between environmental variables and vegetation distribution.
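A minimal numpy sketch of the second-stage replacement rule described above (array names, shapes, and the random stand-in rasters are assumptions; only the "distance below one means misclassified" rule comes from the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)
ml_class = rng.integers(1, 10, size=(100, 100))   # maximum likelihood class per pixel (groups 1-9)
da_class = rng.integers(1, 10, size=(100, 100))   # discriminant-analysis class per pixel
distance = rng.random((100, 100)) * 3             # ML distance raster

# Pixels whose ML distance-raster value falls below 1 are treated as
# misclassified and replaced by the discriminant-analysis label.
combined = np.where(distance < 1, da_class, ml_class)
print(np.mean(combined != ml_class))              # fraction of pixels replaced
```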
