181

INFORMATIONAL INDEX AND ITS APPLICATIONS IN HIGH DIMENSIONAL DATA

Yuan, Qingcong 01 January 2017 (has links)
We introduce a new class of measures for testing independence between two random vectors, based on the expected difference between conditional and marginal characteristic functions. By choosing a particular weight function in the class, we propose a new index for measuring independence and study its properties. Two empirical versions are developed; their properties, asymptotics, connections with existing measures, and applications are discussed. Implementation and Monte Carlo results are also presented. We propose a two-stage sufficient variable selection method based on the new index to deal with large-p, small-n data. The method does not require model specification and is especially suited to categorical responses. Our approach improves on typical screening approaches that use only marginal relations. Numerical studies are provided to demonstrate the advantages of the method. We introduce a novel approach to sufficient dimension reduction problems using the new measure. The proposed method requires very mild conditions on the predictors, estimates the central subspace effectively, and is especially useful when the response is categorical. It retains the model-free advantage without estimating a link function. Under regularity conditions, root-n consistency and asymptotic normality are established. Simulation results show that the proposed method is competitive with, and robust relative to, existing dimension reduction methods.
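The abstract does not specify the exact weight function, but the core construction, comparing conditional and marginal empirical characteristic functions for a categorical response, can be sketched as follows. The standard normal frequency weight, the Monte Carlo approximation, and all function names below are illustrative assumptions rather than the thesis's definition.

```python
import numpy as np

def ecf(X, T):
    # Empirical characteristic function of the rows of X at each frequency in T:
    # phi_hat(t) = (1/n) * sum_j exp(i <t, X_j>).
    return np.exp(1j * X @ T.T).mean(axis=0)

def independence_index(X, y, n_freq=500, seed=0):
    # Monte Carlo approximation of
    #   sum_k p_k * E_t | phi_{X|Y=k}(t) - phi_X(t) |^2,
    # with frequencies t drawn from a standard normal weight (an assumed choice).
    rng = np.random.default_rng(seed)
    T = rng.standard_normal((n_freq, X.shape[1]))
    phi_marginal = ecf(X, T)
    index = 0.0
    for k in np.unique(y):
        Xk = X[y == k]
        index += (len(Xk) / len(X)) * np.mean(np.abs(ecf(Xk, T) - phi_marginal) ** 2)
    return index

# Toy check: the index is larger when the distribution of X shifts with y.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=400)
X_dep = rng.standard_normal((400, 3)) + y[:, None]   # mean depends on y
X_ind = rng.standard_normal((400, 3))                # independent of y
print(independence_index(X_dep, y), independence_index(X_ind, y))
```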
182

Stability of variable selection in regression and classification issues for correlated data in high dimension

Perthame, Emeline 16 October 2015 (has links)
The analysis of high-throughput data has renewed the statistical methodology for feature selection. Such data are characterized both by their high dimension and by their heterogeneity, as the true signal and several confounding factors are often observed at the same time. In such a framework, the usual statistical approaches are questionable and can lead to misleading decisions, as they are initially designed under an independence assumption among variables.
The goal of this thesis is to contribute to the improvement of variable selection methods in regression and supervised classification, by accounting for the dependence between selection statistics. All the methods proposed in this thesis rely on a factor model of the covariates, which assumes that variables are conditionally independent given a vector of latent factors. A part of this thesis focuses on the analysis of event-related potential (ERP) data. ERPs are widely collected in psychological research to describe, via electroencephalography, the time course of brain activity. Over the short time intervals during which ERP variations can be linked to experimental conditions, the psychological signal is both rare, since it occurs only on short intervals, and weak relative to the large between-subject variability of ERP curves. Indeed, these data exhibit a temporal dependence structure that is both strong and complex. Moreover, testing the effect of the experimental condition on brain activity at each time point is a multiple testing issue. We propose to decorrelate the test statistics by jointly modeling the signal and the temporal dependence among test statistics, using prior knowledge of time points at which the signal is null. Second, the decorrelation approach is extended to variable selection in linear supervised classification. The contribution of the factor model assumption within the general framework of Linear Discriminant Analysis is studied: it is shown that the optimal linear classification rule conditional on the latent factors outperforms the unconditional rule. An Expectation-Maximization algorithm for estimating the model parameters is proposed, and the resulting data decorrelation method is compatible with a prediction purpose. Finally, the issues of detecting and identifying a signal when features are dependent are addressed more formally. We focus on the Higher Criticism (HC) procedure, defined under the assumptions of a sparse signal of low amplitude and independence among tests; the literature shows that this method reaches theoretical detection bounds. The properties of HC under dependence are studied, and the detectability and estimability bounds are extended to arbitrarily complex dependence situations. In the context of signal identification, an extension of Higher Criticism Thresholding based on decorrelation by innovations is proposed.
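As background, the Higher Criticism statistic referenced above has a standard closed form under independence (Donoho and Jin); a minimal sketch is given below. The dependence-adjusted and innovation-based extensions developed in the thesis are not reproduced here, and the toy data and names are assumptions.

```python
import numpy as np
from scipy.stats import norm

def higher_criticism(pvals, alpha0=0.5):
    # Higher Criticism statistic (Donoho & Jin): the largest standardized gap between
    # the empirical distribution of the sorted p-values and the uniform null,
    # scanned over the smallest alpha0 fraction of p-values.
    p = np.sort(np.asarray(pvals))
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
    return hc[: int(alpha0 * n)].max()

# Toy example: a sparse, weak mean shift raises HC above its level under the null.
rng = np.random.default_rng(0)
z_null = rng.standard_normal(10_000)
z_alt = z_null.copy()
z_alt[:100] += 2.5                       # 1% of tests carry a weak signal
print(higher_criticism(2 * norm.sf(np.abs(z_null))),
      higher_criticism(2 * norm.sf(np.abs(z_alt))))
```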
183

Forecasting the Business Cycle using Partial Least Squares

Lannsjö, Fredrik January 2014 (has links)
Partial Least Squares (PLS) is both a regression method and a tool for variable selection that is especially appropriate for models based on numerous (possibly correlated) variables. While PLS is a well-established modeling tool in chemometrics, this thesis adapts it to financial data to predict the movements of the business cycle, represented by the OECD Composite Leading Indicators. High-dimensional data are used, and a model with automated variable selection via a genetic algorithm is developed to forecast different economic regions, with good results in out-of-sample tests.
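As an illustration of the basic modeling step, the sketch below fits a PLS regression with scikit-learn on synthetic correlated predictors. The OECD data and the genetic-algorithm variable selection of the thesis are not reproduced, and the data-generating process and settings are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: many correlated predictors driven by a few latent factors.
rng = np.random.default_rng(0)
n, p = 300, 60
latent = rng.standard_normal((n, 3))
X = latent @ rng.standard_normal((3, p)) + 0.1 * rng.standard_normal((n, p))
y = latent @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=3)   # a few latent components summarize all 60 predictors
pls.fit(X_tr, y_tr)
print("out-of-sample R^2:", r2_score(y_te, pls.predict(X_te).ravel()))
```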
184

Joint Models for the Association of Longitudinal Binary and Continuous Processes With Application to a Smoking Cessation Trial

Liu, Xuefeng, Daniels, Michael J., Marcus, Bess 01 June 2009 (has links)
Joint models for the association of a longitudinal binary and a longitudinal continuous process are proposed for situations in which their association is of direct interest. The models are parameterized such that the dependence between the two processes is characterized by unconstrained regression coefficients. Bayesian variable selection techniques are used to parsimoniously model these coefficients. A Markov chain Monte Carlo (MCMC) sampling algorithm is developed for sampling from the posterior distribution, using data augmentation steps to handle missing data. Several technical issues are addressed to implement the MCMC algorithm efficiently. The models are motivated by, and are used for, the analysis of a smoking cessation clinical trial in which an important question of interest was the effect of the (exercise) treatment on the relationship between smoking cessation and weight gain.
185

Bayesian Model Selections for Log-binomial Regression

Zhou, Wei January 2018 (has links)
No description available.
186

Regression and time estimation in the manufacturing industry

Bjernulf, Walter January 2023 (has links)
In this thesis, an analysis is performed on operation times for different-sized products at a manufacturing company. The thesis introduces and summarises most of the theory needed to perform regression and covers a worked example in which three different regression models are learned, evaluated, and analysed. Conformal prediction, currently a topic of considerable interest in machine learning, is also introduced and used in the worked example.
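For readers unfamiliar with conformal prediction, the sketch below shows the split-conformal recipe on synthetic stand-in data. The thesis's actual regression models and operation-time data are not used, and the linear model, split sizes, and names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

def split_conformal(model, X_fit, y_fit, X_cal, y_cal, X_new, alpha=0.1):
    # Split conformal prediction: calibrate the width of a symmetric prediction
    # interval using the absolute residuals on a held-out calibration set.
    model.fit(X_fit, y_fit)
    resid = np.abs(y_cal - model.predict(X_cal))
    m = len(resid)
    q = np.quantile(resid, min(1.0, np.ceil((1 - alpha) * (m + 1)) / m))
    pred = model.predict(X_new)
    return pred - q, pred + q

# Hypothetical stand-in for operation-time data: one size feature, noisy times.
rng = np.random.default_rng(0)
size = rng.uniform(1, 10, 600)[:, None]
time = 3.0 * size.ravel() + rng.normal(0, 2, 600)
X_rest, X_test, y_rest, y_test = train_test_split(size, time, test_size=0.2, random_state=0)
X_fit, X_cal, y_fit, y_cal = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

lo, hi = split_conformal(LinearRegression(), X_fit, y_fit, X_cal, y_cal, X_test, alpha=0.1)
print("empirical coverage:", np.mean((y_test >= lo) & (y_test <= hi)))
```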
187

A Benchmark of Prevalent Feature Selection Algorithms on a Diverse Set of Classification Problems

Kniberg, Anette, Nokto, David January 2018 (has links)
Feature selection is the process of automatically selecting important features from data. It is an essential part of machine learning, artificial intelligence, data mining, and modelling in general. There are many feature selection algorithms available, and the appropriate choice can be difficult. The aim of this thesis was to compare feature selection algorithms in order to provide an experimental basis for choosing among them. The first phase involved assessing which algorithms are most common in the scientific community, through a systematic literature study in the two largest reference databases: Scopus and Web of Science. The second phase involved constructing and implementing a benchmark pipeline to compare the performance of 31 algorithms on 50 data sets. The selected features were used to construct classification models, and their predictive performance was compared, as was the runtime of the selection process. The results show a small overall superiority of embedded-type algorithms, especially those based on decision trees. However, no algorithm is significantly superior in every case. The pipeline and data from the experiments can be used by practitioners to determine which algorithms to apply to their respective problems.
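A miniature version of such a benchmark, one filter, one wrapper, and one embedded selector compared on a single synthetic data set, might look like the sketch below. The 31 algorithms and 50 data sets of the thesis are not reproduced, and the specific algorithm and parameter choices here are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectFromModel, SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# One synthetic data set stands in for the 50 real ones used in the thesis.
X, y = make_classification(n_samples=500, n_features=40, n_informative=8, random_state=0)

selectors = {
    "filter (mutual information)": SelectKBest(mutual_info_classif, k=10),
    "wrapper (RFE)": RFE(LogisticRegression(max_iter=1000), n_features_to_select=10),
    "embedded (random forest)": SelectFromModel(
        RandomForestClassifier(random_state=0), threshold=-np.inf, max_features=10
    ),
}

for name, selector in selectors.items():
    pipe = make_pipeline(selector, LogisticRegression(max_iter=1000))
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {score:.3f}")
```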
188

Essays on using machine learning for causal inference

Jacob, Daniel 01 March 2022 (has links)
To use data most effectively, modern econometrics needs to expand and rethink its toolbox of models. The field in which this transformation can best be observed is causal inference. This thesis aims to examine problems, present solutions, and develop new methods for using machine learning to estimate causal parameters. The parameter of interest is the conditional average treatment effect (CATE). I categorize novel methods for estimating heterogeneous treatment effects and, building on them, provide a practitioner's guide for their implementation. It can be shown that an ensemble of methods is preferable to relying on a single one. Special focus is placed on the comparison of such methods and on the role of sample splitting and cross-fitting in restoring efficiency: large differences in the accuracy of the estimated parameters can occur if the sampling uncertainty is not correctly accounted for. The CATE can also be represented more coarsely through quantile-based groups; estimating such group-level effects yields estimates that are more robust with respect to sampling uncertainty and the resulting high variance. This thesis develops and examines methods not only for estimating treatment effect heterogeneity but also for identifying true confounders and the observations that should receive treatment. For these two tasks, it proposes the outcome-adaptive random forest for automatic variable selection and supervised randomization for a cost-efficient selection of the target group. Insights into important variables, and into those that are not true confounders, are particularly valuable for policy evaluation and in the medical sector, especially when randomized controlled trials are not possible.
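One of the simplest meta-learners for the CATE in this literature is the two-model (T-learner) approach; the sketch below illustrates it on synthetic data. It is not the thesis's preferred ensemble, and the data-generating process and names are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Minimal T-learner sketch: fit separate outcome models for treated and control
# units and estimate the CATE as the difference of their predictions.
rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(-1, 1, (n, 5))
treat = rng.integers(0, 2, n)                 # randomized treatment assignment
tau = 1.0 + 2.0 * X[:, 0]                     # true heterogeneous effect
y = X[:, 1] + tau * treat + rng.normal(0, 1, n)

model_treated = RandomForestRegressor(random_state=0).fit(X[treat == 1], y[treat == 1])
model_control = RandomForestRegressor(random_state=0).fit(X[treat == 0], y[treat == 0])
cate_hat = model_treated.predict(X) - model_control.predict(X)
print("corr(true CATE, estimate):", np.corrcoef(tau, cate_hat)[0, 1])
```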
189

OPTIMIZING DECISION TREE ENSEMBLES FOR GENE-GENE INTERACTION DETECTION

Assareh, Amin 27 November 2012 (has links)
No description available.
190

Statistical Methods for Functional Genomics Studies Using Observational Data

Lu, Rong 15 December 2016 (has links)
No description available.
