  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Bayesian variable selection for multilevel item response theory models with applications in genomics

Tiago de Miranda Fragoso, 12 September 2014
Recent investigations of the genetic architecture of complex diseases use different sources of information. Different symptoms are measured to obtain a diagnosis, individuals may not be independent due to kinship or a common environment, and their genetic makeup may be measured through a large number of genetic markers. In the present work, a multilevel item response theory (IRT) model is proposed that unifies all these sources of information through a latent variable. Furthermore, the large number of molecular markers induces a variable selection problem, for which procedures based on stochastic search variable selection and the Bayesian LASSO are considered. Parameter estimation and variable selection are conducted under a Bayesian framework in which a Markov chain Monte Carlo algorithm is derived and implemented to obtain samples from the posterior distribution. The estimation procedure is validated through a series of simulation studies in which parameter recovery, variable selection, and estimation error are evaluated in scenarios similar to the real dataset. The procedure showed adequate recovery of the structural parameters and the ability to correctly find a large number of the relevant covariates even in high-dimensional settings, although it also produced biased estimates of the incidental latent variables associated with the latent trait and the random effect. The proposed methods were then applied to data collected in the 'Corações de Baependi' family-based association study, where the multilevel model was able to appropriately characterize the metabolic syndrome, a cluster of symptoms associated with elevated cardiovascular risk. The model and the variable selection procedures recovered known characteristics of the syndrome and identified an associated molecular marker.
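The Bayesian LASSO mentioned above places a Laplace prior on the regression coefficients, and its posterior mode coincides with the classical LASSO estimate. As an illustrative aside (a generic coordinate-descent LASSO on synthetic data, not the authors' multilevel IRT implementation), the following sketch shows how such a penalty zeroes out an irrelevant covariate while retaining the informative ones:

```python
import random
import math

def soft_threshold(z, t):
    # Soft-thresholding operator: the proximal map of the L1 penalty.
    return math.copysign(max(abs(z) - t, 0.0), z)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO: the MAP estimate under a Laplace prior."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j.
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            num = sum(X[i][j] * r[i] for i in range(n))
            den = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(num, lam) / den
    return beta

random.seed(1)
n = 200
# Two informative predictors, one pure-noise predictor.
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(n)]
y = [2.0 * x[0] - 1.5 * x[1] + random.gauss(0, 0.3) for x in X]
beta = lasso_cd(X, y, lam=20.0)
```

The noise coefficient is shrunk exactly to zero, while the informative coefficients are only mildly biased toward zero by the penalty.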

Dataset partitioning and variable selection in multivariate calibration problems

Alves, André Luiz, 22 September 2017
This work compares a proposed algorithm based on the RANdom SAmple Consensus (RANSAC) method for sample selection, variable selection, and simultaneous sample-and-variable selection against the Successive Projections Algorithm (SPA), using chemical data sets in a multivariate calibration context. The proposed method combines RANSAC with Multiple Linear Regression (MLR). The predictive ability of the models is measured by the Root Mean Square Error of Prediction (RMSEP). The results show that the Successive Projections Algorithm improves the predictive ability of RANSAC: SPA has a positive influence on the RANSAC algorithm for sample selection, for variable selection, and for simultaneous selection of samples and variables.
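The RANSAC idea this abstract builds on can be sketched generically: fit candidate models on random minimal samples, keep the largest consensus set of inliers, refit on that set, and score the result with an RMSEP-style error. The robust line fit below is an illustrative stand-in, not the authors' chemometric pipeline:

```python
import random
import math

def fit_line(pts):
    # Ordinary least-squares fit of y = a + b*x.
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    b = sum((x - mx) * (y - my) for x, y in pts) / sum((x - mx) ** 2 for x, _ in pts)
    return my - b * mx, b

def ransac_line(pts, n_trials=100, tol=0.5):
    """RANSAC sketch: fit on random minimal samples, keep the largest consensus set."""
    best_inliers = []
    for _ in range(n_trials):
        sample = random.sample(pts, 2)
        if sample[0][0] == sample[1][0]:
            continue  # degenerate minimal sample
        a, b = fit_line(sample)
        inliers = [(x, y) for x, y in pts if abs(y - (a + b * x)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return fit_line(best_inliers)  # refit on the consensus set

random.seed(2)
# Clean samples on y = 1 + 0.5x, plus gross outliers to be rejected.
pts = [(x / 10, 1.0 + 0.5 * (x / 10) + random.gauss(0, 0.1)) for x in range(100)]
pts += [(random.uniform(0, 10), random.uniform(8, 12)) for _ in range(20)]
a, b = ransac_line(pts)
rmsep = math.sqrt(sum((y - (a + b * x)) ** 2 for x, y in pts[:100]) / 100)
```

Despite 20% gross outliers, the consensus fit recovers the clean-data line, and the RMSEP on the clean subset stays near the noise level.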

Selection of variables for the development, classification, and prediction of products

Rossini, Karina, January 2011
This dissertation presents methods for variable selection in data from descriptive sensory evaluations and near-infrared (NIR) spectrum analyses, based on multivariate analysis, with applications in the food and chemical industries. There are five objectives: (i) review the main multivariate analysis techniques, their relationships, and their potential use in variable selection procedures; (ii) propose a variable selection method based on the techniques in (i) that supports product prediction, classification, and description; (iii) reduce the list of variables/attributes analyzed in sensory panels by identifying those that are relevant and non-redundant, so that the time to collect panel data and the fatigue imposed on panelists are minimized; (iv) validate the methodological propositions using real-life data; and (v) compare different sensory analysis approaches used in new product development. The proposed methods were evaluated through case studies and vary according to the characteristics of the datasets analyzed (different degrees of multicollinearity, with or without a dependent variable). All methods performed well, leading to a significant reduction in the number of variables while yielding satisfactory goodness of fit or accuracy compared with models retaining all variables or with other methods in the literature. We conclude that the methods are suitable for selecting variables in descriptive sensory evaluations and NIR analyses.
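Objective (iii), selecting relevant and non-redundant attributes, can be illustrated with a simple correlation-based greedy filter: rank attributes by their correlation with the response, then skip any attribute too correlated with one already kept. This is a hypothetical stand-in for illustration, not the multivariate methods proposed in the dissertation:

```python
import random

def corr(u, v):
    # Pearson correlation between two equal-length sequences.
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    du = sum((a - mu) ** 2 for a in u) ** 0.5
    dv = sum((b - mv) ** 2 for b in v) ** 0.5
    return num / (du * dv)

def select_attributes(X_cols, y, max_redundancy=0.8):
    """Greedy filter: rank attributes by |corr with response|, then
    drop any attribute too correlated with one already selected."""
    ranked = sorted(range(len(X_cols)), key=lambda j: -abs(corr(X_cols[j], y)))
    selected = []
    for j in ranked:
        if all(abs(corr(X_cols[j], X_cols[k])) < max_redundancy for k in selected):
            selected.append(j)
    return selected

random.seed(3)
n = 300
a = [random.gauss(0, 1) for _ in range(n)]
b = [ai + random.gauss(0, 0.05) for ai in a]      # near-duplicate of a (redundant)
c = [random.gauss(0, 1) for _ in range(n)]
y = [ai + ci + random.gauss(0, 0.2) for ai, ci in zip(a, c)]
chosen = select_attributes([a, b, c], y)
```

The near-duplicate attribute is dropped as redundant, so only two of the three candidates survive, mirroring the panel-fatigue motivation above.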

Canonical Variable Selection for Ecological Modeling of Fecal Indicators

Gilfillan, Dennis; Hall, Kimberlee; Joyner, Timothy Andrew; Scheuerman, Phillip, 20 September 2018
More than 270,000 km of rivers and streams are impaired due to fecal pathogens, creating an economic and public health burden. Fecal indicator organisms such as Escherichia coli are used to determine whether surface waters are pathogen-impaired, but they fail to identify human health risks or provide source information, and their fate and transport processes differ from those of the pathogens themselves. Statistical and machine learning models can be used to overcome some of these weaknesses, including identifying ecological mechanisms influencing fecal pollution. In this study, canonical correlation analysis (CCorA) was performed to select parameters for the machine learning model Maxent, to identify how chemical and microbial parameters can predict E. coli impairment and F+-somatic bacteriophage detections. Models were validated using a bootstrapping cross-validation procedure. Three suites of models were developed: initial models using all parameters, models using parameters identified by CCorA, and optimized models after further sensitivity analysis. Canonical correlation analysis reduced the number of parameters needed to achieve the same accuracy as the initial E. coli model (84.7%), and sensitivity analysis improved accuracy to 86.1%. Bacteriophage model accuracies were 79.2, 70.8, and 69.4% for the initial, CCorA, and optimized models, respectively, suggesting that the complex ecological interactions of bacteriophages are not captured by CCorA. Results indicate distinct ecological drivers of impairment depending on the fecal indicator organism used: Escherichia coli impairment is driven by increased hardness and microbial activity, whereas bacteriophage detection is inhibited by high levels of coliforms in sediment. Both indicators were influenced by organic pollution and phosphorus limitation.
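The bootstrapping cross-validation used here for model validation can be sketched generically: train on a bootstrap resample, evaluate on the out-of-bag observations, and average the accuracies. The toy threshold classifier below is an assumption for illustration, not the Maxent model from the study:

```python
import random

def bootstrap_accuracy(data, labels, fit, predict, n_boot=200):
    """Bootstrap cross-validation sketch: train on a resample,
    test on the out-of-bag observations, average the accuracy."""
    n = len(data)
    accs = []
    for _ in range(n_boot):
        idx = [random.randrange(n) for _ in range(n)]
        taken = set(idx)
        oob = [i for i in range(n) if i not in taken]
        if not oob:
            continue
        model = fit([data[i] for i in idx], [labels[i] for i in idx])
        correct = sum(predict(model, data[i]) == labels[i] for i in oob)
        accs.append(correct / len(oob))
    return sum(accs) / len(accs)

# Toy classifier: "impaired" when the measurement exceeds a fitted threshold.
def fit(xs, ys):
    # Threshold at the midpoint between the two class means.
    m1 = sum(x for x, yl in zip(xs, ys) if yl) / max(sum(ys), 1)
    m0 = sum(x for x, yl in zip(xs, ys) if not yl) / max(len(ys) - sum(ys), 1)
    return (m0 + m1) / 2

def predict(thr, x):
    return x > thr

random.seed(4)
data = [random.gauss(0, 1) for _ in range(50)] + [random.gauss(3, 1) for _ in range(50)]
labels = [0] * 50 + [1] * 50
acc = bootstrap_accuracy(data, labels, fit, predict)
```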

The family of conditional penalized methods with their application in sufficient variable selection

Xie, Jin, 01 January 2018
When scientists know in advance that certain features (variables) are important in modeling data, those features should be kept in the model. How can we use this prior information to effectively find other important features? This dissertation provides a solution that exploits such prior knowledge. We propose the Conditional Adaptive Lasso (CAL) estimator: by choosing a meaningful conditioning set, namely the features known a priori to matter, CAL achieves better performance in both variable selection and model estimation. We also propose the Sufficient Conditional Adaptive Lasso Variable Screening (SCAL-VS) and Conditioning Set Sufficient Conditional Adaptive Lasso Variable Screening (CS-SCAL-VS) algorithms based on CAL. Asymptotic and oracle properties are proved. Simulations, especially for large-p, small-n problems, compare the methods with existing alternatives. We further extend the linear-model setup to generalized linear models (GLMs): instead of least squares, we consider the likelihood function with an L1 penalty, that is, penalized likelihood methods, and propose the Generalized Conditional Adaptive Lasso (GCAL). The method is then extended to any penalty term satisfying certain regularity conditions, yielding the Conditionally Penalized Estimate (CPE). Asymptotic and oracle properties are shown, and four corresponding sufficient variable screening algorithms are proposed. Simulation examples compare our methods with existing ones, and GCAL is also evaluated on a real leukemia data set.
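The conditioning idea, leaving a priori important features unpenalized while adaptively penalizing the rest, can be sketched in a deliberately simplified setting: a near-orthogonal design with one-step adaptive soft-thresholding, where the adaptive weight is the reciprocal of an initial estimate. The actual CAL estimator and its screening algorithms are more general; this is only an illustration of the conditioning-set mechanism:

```python
import random
import math

def soft(z, t):
    return math.copysign(max(abs(z) - t, 0.0), z)

def conditional_adaptive_lasso(X, y, cond_set, lam, eps=1e-8):
    """Sketch of the conditioning idea with an (assumed) near-orthogonal design:
    features in `cond_set` are known a priori to matter and are left
    unpenalized; the rest get adaptive weights 1/|initial estimate|."""
    n, p = len(X), len(X[0])
    beta = []
    for j in range(p):
        # Univariate OLS estimate (valid when columns are near-orthogonal).
        bj = sum(X[i][j] * y[i] for i in range(n)) / sum(X[i][j] ** 2 for i in range(n))
        if j in cond_set:
            beta.append(bj)                                 # no penalty: prior knowledge
        else:
            beta.append(soft(bj, lam / (abs(bj) + eps)))    # adaptive penalty
    return beta

random.seed(5)
n = 400
X = [[random.gauss(0, 1) for _ in range(4)] for _ in range(n)]
y = [1.0 * x[0] + 0.8 * x[1] + random.gauss(0, 1) for x in X]
# Feature 0 is known to matter (the conditioning set); 2 and 3 are noise.
beta = conditional_adaptive_lasso(X, y, cond_set={0}, lam=0.09)
```

The adaptive weight makes the threshold large for small initial estimates, so the noise features are set exactly to zero, while the conditioned feature escapes shrinkage entirely.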

High-dimensional inference of ordinal data with medical applications

Jiao, Feiran, 01 May 2016
Ordinal response variables abound in scientific and quantitative analyses; their outcomes comprise a few categorical values that admit a natural ordering, so they are often represented by non-negative integers, for instance pain score (0-10) or disease severity (0-4) in medical research. Ordinal variables differ from ratio variables in that their values delineate qualitative rather than quantitative differences. In this thesis, we develop new statistical methods for variable selection in a high-dimensional cumulative link regression model with an ordinal response. Our study is partly motivated by the need to explore the association structure between disease phenotypes and high-dimensional medical covariates. The cumulative link regression model specifies that the ordinal response of interest results from an order-preserving quantization of some latent continuous variable that bears a linear regression relationship with a set of covariates. Commonly used error distributions in the latent regression include the normal, logistic, Cauchy, and standard (minimum) Gumbel distributions; the cumulative link model with normal (logistic, Gumbel) errors is also known as the ordered probit (logit, complementary log-log) model. While the likelihood function has a closed form for these error distributions, its strong nonlinearity can cause direct optimization of the likelihood to fail. To mitigate this problem, and to facilitate the extension to penalized likelihood estimation, we propose specific minorization-maximization (MM) algorithms for maximum likelihood estimation of the cumulative link model under each of the four error distributions. Penalized ordinal regression models come into play when variable selection needs to be performed.
In some applications, covariates can be grouped in a meaningful way, but some groups may be mixed in that they contain both relevant and irrelevant variables, i.e., variables whose coefficients are non-zero and zero, respectively. It is therefore pertinent to develop a consistent method for simultaneously selecting the relevant groups and the relevant variables within each selected group, the so-called bi-level selection problem. We propose a penalized maximum likelihood approach with a composite bridge penalty to solve the bi-level selection problem in a cumulative link model, with an MM algorithm developed for each of the four error distributions. The approach is shown to enjoy a number of desirable theoretical properties, including bi-level selection consistency and oracle properties, under suitable regularity conditions. Simulations demonstrate good empirical performance, and we illustrate the proposed methods with several real medical applications.
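The cumulative link model's likelihood is built from differences of the link CDF evaluated at the cutpoints: P(Y <= k) = F(c_k - eta), where eta is the linear predictor. A minimal ordered-logit category-probability computation (the logistic link, one of the four error distributions discussed) can be sketched as:

```python
import math

def ordered_logit_probs(eta, cutpoints):
    """Cumulative link (proportional-odds) model with a logistic link:
    P(Y <= k) = F(c_k - eta), so each category's probability is a
    difference of adjacent cumulative probabilities."""
    F = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [F(c - eta) for c in cutpoints] + [1.0]
    probs = [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]
    return probs

# Three increasing cutpoints -> four ordered categories
# (e.g., a hypothetical disease-severity score 0-3).
probs = ordered_logit_probs(eta=0.7, cutpoints=[-1.0, 0.5, 2.0])
```

Because the cutpoints are increasing, the cumulative probabilities are monotone and the category probabilities are non-negative and sum to one; the likelihood of a dataset is the product of such terms, which is what the MM algorithms above maximize.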

Investigation of multivariate prediction methods for the analysis of biomarker data

Hennerdal, Aron, January 2006
This paper describes predictive modelling of biomarker data from patients suffering from multiple sclerosis. Improvements to multivariate analyses of the data are investigated, with the goal of increasing the ability to assign samples to the correct subgroups from the data alone.
The effects of different data scalings are investigated, and combinations of multivariate modelling methods and variable selection methods are evaluated. Attempts are made to merge the predictive capabilities of the method combinations through voting procedures. A technique for improving the results of PLS modelling, called bagging, is evaluated.
Of the methods tried, the best multivariate analysis methods are found to be partial least squares (PLS) and support vector machines (SVM). Scaling is found to have little effect on prediction performance for most methods. The method combinations have interesting properties: the default variable selections of the multivariate methods are not always the best. Bagging improves performance, but at a high computational cost. No reasons for drastically changing the workflow of biomarker data analysis are found, but slight improvements are possible. Further research is needed.
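Bagging, evaluated above as a way to improve PLS models, works with any base learner: fit on bootstrap resamples and average the predictions (or vote, for classifiers). The simple straight-line base learner below is an illustrative assumption, not the PLS models of the paper:

```python
import random

def fit_line(xs, ys):
    # Ordinary least-squares fit of y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def bagged_predict(xs, ys, x_new, n_bags=50):
    """Bagging sketch: fit the base learner on bootstrap resamples
    and average the predictions."""
    preds = []
    n = len(xs)
    for _ in range(n_bags):
        idx = [random.randrange(n) for _ in range(n)]
        a, b = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a + b * x_new)
    return sum(preds) / len(preds)

random.seed(7)
xs = [x / 10 for x in range(100)]
ys = [2.0 + 0.3 * x + random.gauss(0, 0.5) for x in xs]
pred = bagged_predict(xs, ys, x_new=5.0)   # true mean at x=5 is 3.5
```

The averaging reduces the variance of the base learner at the cost of fitting it many times, which is the "high computational cost" trade-off noted in the abstract.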

New results in detection, estimation, and model selection

Ni, Xuelei, 08 December 2005
This thesis contains two parts: the detectability of convex sets, and studies on regression models. In the first part, we investigate the detectability of an inhomogeneous convex region in a Gaussian random field. The first proposed detection method relies on checking a constructed statistic on each convex set within an n × n image, which is proven to be inapplicable in practice. We then consider using h(v)-parallelograms as surrogates, which leads to a multiscale strategy. We prove that 2/9 is the minimum proportion of a convex set occupied by its maximally embedded h(v)-parallelogram; this constant indicates the effectiveness of the multiscale detection method. In the second part, we study robustness, optimality, and computation for regression models. First, for robustness, M-estimators are analyzed in a regression model whose residuals have an unknown but stochastically bounded distribution, and an asymptotically minimax M-estimator (RSBN) is derived; simulations demonstrate its robustness and advantages. Second, for optimality, the analysis of least angle regression inspired us to consider the conditions under which a vector is the solution of two optimization problems at once: one solvable by certain stepwise algorithms, the other the objective function in many existing subset selection criteria (including Cp, AIC, BIC, MDL, and RIC), which is proven to be NP-hard. Several conditions are derived that tell us when a vector is the common optimizer. Finally, extending this idea to exhaustive subset selection in regression, we improve the widely used leaps-and-bounds algorithm (Furnival and Wilson): the proposed method further reduces the number of subsets that must be considered in the exhaustive search by exploiting not only the residuals but also the model matrix and the current coefficients.
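The subset-selection objective that leaps-and-bounds optimizes, residual sum of squares plus a complexity penalty, can be shown with a brute-force search over a small example. The real algorithm prunes this search via branch-and-bound rather than enumerating every subset, and the penalty value here is an assumption chosen for illustration:

```python
import random
from itertools import combinations

def ols_rss(X, y, cols):
    """RSS of the least-squares fit using only the given columns
    (normal equations solved by Gaussian elimination)."""
    n, k = len(X), len(cols)
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in cols] for a in cols]
    c = [sum(X[i][a] * y[i] for i in range(n)) for a in cols]
    for j in range(k):                      # forward elimination
        piv = A[j][j]
        for r in range(j + 1, k):
            f = A[r][j] / piv
            A[r] = [ar - f * aj for ar, aj in zip(A[r], A[j])]
            c[r] -= f * c[j]
    beta = [0.0] * k
    for j in reversed(range(k)):            # back substitution
        beta[j] = (c[j] - sum(A[j][m] * beta[m] for m in range(j + 1, k))) / A[j][j]
    fit = [sum(X[i][cols[m]] * beta[m] for m in range(k)) for i in range(n)]
    return sum((yi - fi) ** 2 for yi, fi in zip(y, fit))

def best_subset(X, y, penalty):
    """Exhaustive search over a Cp/BIC-style objective RSS + penalty*|S|;
    leaps-and-bounds prunes this search, but the objective is the same."""
    p = len(X[0])
    best = (float("inf"), ())
    for k in range(1, p + 1):
        for cols in combinations(range(p), k):
            best = min(best, (ols_rss(X, y, list(cols)) + penalty * k, cols))
    return best[1]

random.seed(8)
n = 200
X = [[random.gauss(0, 1) for _ in range(4)] for _ in range(n)]
y = [1.5 * x[0] - 1.0 * x[2] + random.gauss(0, 0.5) for x in X]
chosen = best_subset(X, y, penalty=10.0)
```

With four candidate predictors there are only 15 non-empty subsets; the penalized objective picks out exactly the two truly informative columns.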

Wavelet methods and statistical applications: network security and bioinformatics

Kwon, Deukwoo, 01 November 2005
Wavelet methods possess versatile properties for statistical applications. We explore the advantages of using wavelets in analyses in two different research areas. First, we develop an integrated tool for online detection of network anomalies. We consider statistical change-point detection algorithms, both for local changes in the variance and for jump detection, and propose modified versions of these algorithms based on moving-window techniques. We investigate performance on simulated data and on network traffic data with several superimposed attacks. All detection methods are based on wavelet packet transformations. We also propose a Bayesian model for the analysis of high-throughput data where the outcome of interest has a natural ordering. The method provides a unified approach for identifying relevant markers and predicting class memberships. This is accomplished by building a stochastic search variable selection method into an ordinal model. We apply the methodology to the analysis of proteomic studies in prostate cancer, and we explore wavelet-based techniques to remove noise from the protein mass spectra. The goal is to identify protein markers associated with prostate-specific antigen (PSA) level, an ordinal diagnostic measure currently used to stratify patients into different risk groups.
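The moving-window idea for detecting local variance changes can be sketched as follows; this is a plain sliding-variance detector on raw data, not the wavelet-packet-based algorithms proposed in the thesis:

```python
import random

def variance_change_points(xs, window=50, ratio=3.0):
    """Moving-window detector sketch: compare the sample variance in
    adjacent windows and flag indices where the ratio jumps."""
    def var(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg) / len(seg)
    flags = []
    for t in range(window, len(xs) - window):
        left = var(xs[t - window:t])
        right = var(xs[t:t + window])
        if right / left > ratio or left / right > ratio:
            flags.append(t)
    return flags

random.seed(9)
# Simulated traffic: quiet period, then an "attack" that inflates the variance.
xs = [random.gauss(0, 1) for _ in range(300)] + [random.gauss(0, 3) for _ in range(300)]
alarms = variance_change_points(xs)
```

Windows straddling the true change at index 300 show a variance ratio near 9, well above the threshold, so alarms cluster around the change point.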

Clustering, Classification, and Factor Analysis in High Dimensional Data Analysis

Wang, Yanhong, 17 December 2013
Clustering, classification, and factor analysis are three popular data mining techniques. In this dissertation, we investigate these methods in high-dimensional data analysis. Since there are far more features than samples, and most of the features are non-informative in high-dimensional data, dimension reduction is necessary before clustering or classification can be performed. In the first part of this dissertation, we revisit an existing clustering procedure, optimal discriminant clustering (ODC; Zhang and Dai, 2009), and propose using cross-validation to select its tuning parameter. We then develop a variant of ODC, sparse optimal discriminant clustering (SODC), for high-dimensional data by adding a group-lasso-type penalty to ODC, and we demonstrate that both ODC and SODC can serve as dimension reduction tools for data visualization in cluster analysis. In the second part, three existing sparse principal component analysis (SPCA) methods, Lasso-PCA (L-PCA), Alternative Lasso PCA (AL-PCA), and sparse principal component analysis by choice of norm (SPCABP), are applied to a real data set from the International HapMap Project for ancestry-informative marker (AIM) selection from genome-wide SNP data; their classification accuracies are compared, and SPCABP is shown to outperform the other two SPCA methods. Third, we propose a novel method called sparse factor analysis by projection (SFABP), based on SPCABP, and propose using cross-validation to select the tuning parameter and the number of factors. Our simulation studies show that SFABP performs better than unpenalized factor analysis when applied to classification problems.
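Sparse principal component loadings of the kind compared here can be obtained by a truncated power method: power iteration on the covariance matrix, keeping only the k largest-magnitude loadings at each step. This generic sketch is not any of the three SPCA methods named above, just an illustration of how hard sparsity interacts with the leading eigenvector:

```python
import random
import math

def sparse_pc(X, k_nonzero, n_iter=100):
    """Truncated power method sketch: power iteration on the covariance
    matrix, keeping only the k largest-magnitude loadings each step."""
    n, p = len(X), len(X[0])
    # Covariance matrix of the centered columns.
    means = [sum(X[i][j] for i in range(n)) / n for j in range(p)]
    Xc = [[X[i][j] - means[j] for j in range(p)] for i in range(n)]
    S = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / n for b in range(p)]
         for a in range(p)]
    v = [1.0 / math.sqrt(p)] * p
    for _ in range(n_iter):
        w = [sum(S[a][b] * v[b] for b in range(p)) for a in range(p)]
        # Truncate: zero out all but the k largest-magnitude entries.
        keep = sorted(range(p), key=lambda j: -abs(w[j]))[:k_nonzero]
        w = [w[j] if j in keep else 0.0 for j in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

random.seed(10)
n = 200
# A latent factor drives the first two of five observed variables.
f = [random.gauss(0, 2) for _ in range(n)]
X = [[f[i] + random.gauss(0, 0.3), f[i] + random.gauss(0, 0.3)]
     + [random.gauss(0, 1) for _ in range(3)] for i in range(n)]
v = sparse_pc(X, k_nonzero=2)
```

The recovered loading vector is unit-norm, exactly sparse, and concentrates on the two factor-driven variables, which is the behavior SPCA methods exploit for marker selection.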
