91. Sequential Change-point Detection in Linear Regression and Linear Quantile Regression Models Under High Dimensionality
Ratnasingam, Suthakaran, 06 August 2020
No description available.

92. Sparse Latent-Space Learning for High-Dimensional Data: Extensions and Applications
White, Alexander James, 05 1900
Indiana University-Purdue University Indianapolis (IUPUI)

The successful treatment and potential eradication of many complex diseases, such as cancer, begins with elucidating the convoluted mapping from molecular profiles to phenotypic manifestation. Observed molecular profiles (e.g., genomic, transcriptomic, epigenomic) are often high-dimensional and are collected from patient samples falling into heterogeneous disease subtypes. Interpretable learning from such data calls for sparsity-driven models. This dissertation addresses the high-dimensionality, sparsity, and heterogeneity issues that arise when analyzing multi-omics data; each method is implemented in an accompanying R package. First, we examine challenges in submatrix identification, which aims to find subgroups of samples that behave similarly across a subset of features. We resolve issues such as two-way sparsity, non-orthogonality, and parameter tuning with an adaptive thresholding procedure applied to the singular vectors computed via orthogonal iteration. We validate the method in simulation and apply it to an Alzheimer's disease dataset.

The second project focuses on modeling relationships between large, matched datasets. Exploring regression structures between such datasets can provide insights such as the effect of long-range epigenetic influences on gene expression. We present a high-dimensional version of mixture multivariate regression to detect patient clusters, each with a different correlation structure between the matched omics datasets. The method is validated via simulation and applied to matched multi-omics datasets.

In the third project, we introduce a novel approach to modeling spatial transcriptomics (ST) data with a spatially penalized multinomial model of the expression counts. This method recovers the low-rank structure of zero-inflated ST data under spatial smoothness constraints. We validate the model using manual cell-structure annotations of human brain samples and then apply the technique to additional ST datasets.
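
Below is a minimal sketch of the first project's core idea: compute leading singular vectors by orthogonal iteration, then sparsify them by thresholding so that the surviving rows and columns mark a candidate submatrix. The median-based cutoff rule, the function names, and all parameter values here are illustrative stand-ins, not the dissertation's actual adaptive procedure.

```python
import numpy as np

def orthogonal_iteration(X, k, n_iter=100, seed=0):
    """Leading k left/right singular vectors of X via orthogonal (subspace) iteration."""
    rng = np.random.default_rng(seed)
    V = np.linalg.qr(rng.standard_normal((X.shape[1], k)))[0]
    for _ in range(n_iter):
        U, _ = np.linalg.qr(X @ V)    # refine left singular subspace
        V, _ = np.linalg.qr(X.T @ U)  # refine right singular subspace
    return U, V

def threshold_vector(v, c=2.0):
    """Sparsify a singular vector by zeroing small entries.
    Hypothetical rule (c times the median absolute entry), standing in
    for the dissertation's adaptive thresholding procedure."""
    v_sparse = np.where(np.abs(v) > c * np.median(np.abs(v)), v, 0.0)
    norm = np.linalg.norm(v_sparse)
    return v_sparse / norm if norm > 0 else v_sparse

# Toy example: a 20-sample by 15-feature signal block hidden in noise.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 80))
X[:20, :15] += 2.0

U, V = orthogonal_iteration(X, k=1)
samples = np.nonzero(threshold_vector(U[:, 0]))[0]   # candidate sample subgroup
features = np.nonzero(threshold_vector(V[:, 0]))[0]  # candidate feature subset
print(samples, features)
```

The nonzero support of the thresholded left and right singular vectors jointly identifies the submatrix, which is how two-way sparsity is obtained without requiring orthogonality of the sparsified vectors.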

93. Nonlocal Priors in Generalized Linear Models and Gaussian Graphical Models
Yang, Fang, 23 August 2022
No description available.

94. Feature Screening for High-Dimensional Variable Selection in Generalized Linear Models
Jiang, Jinzhu, 02 September 2021
No description available.

95. Energy Distance Correlation with Extended Bayesian Information Criteria for Feature Selection in High Dimensional Models
Ocloo, Isaac Xoese, 22 September 2021
No description available.

96. Methodology for Estimation and Model Selection in High-Dimensional Regression with Endogeneity
Du, Fan, 05 May 2023
No description available.

97. High Dimensional Data Methods in Industrial Organization Type Discrete Choice Models
Lopez Gomez, Daniel Felipe, 11 August 2022
No description available.

98. Two Essays on High-Dimensional Inference and an Application to Distress Risk Prediction
Zhu, Xiaorui, 22 August 2022
No description available.

99. Human Decidual CD8+ T Cells have Phenotypic and Functional Heterogeneity
Alexander, Aria, January 2021
No description available.

100. Sparse Ridge Fusion for Linear Regression
Mahmood, Nozad, 01 January 2013
For linear regression, the traditional technique handles the case where the number of observations n exceeds the number of predictor variables p (n > p). When n < p, the classical method fails to estimate the coefficients. This thesis provides a solution for the case of correlated predictors: a new regularization and variable-selection method called the Sparse Ridge Fusion (SRF). In simulated examples and a real dataset with highly correlated predictors, the SRF consistently outperforms the lasso, the elastic net, and the S-Lasso, and it can select more predictor variables than the sample size n, whereas the lasso selects at most n variables.
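
The SRF estimator itself is not available in standard libraries, so the sketch below illustrates the selection limit described above using the elastic net, a related method that also mixes an L1 penalty with a ridge-type penalty and, unlike the lasso, can retain more than n correlated predictors. All data and parameter values are synthetic and illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

# Toy setting with n < p and strongly correlated predictors.
rng = np.random.default_rng(0)
n, p = 50, 200
Z = rng.standard_normal((n, 5))                                    # 5 latent factors
X = np.repeat(Z, 40, axis=1) + 0.1 * rng.standard_normal((n, p))   # 40 near-copies each
beta = np.zeros(p)
beta[:80] = 1.0                                  # 80 true predictors, more than n = 50
y = X @ beta + rng.standard_normal(n)

lasso = Lasso(alpha=0.1, max_iter=50_000).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=50_000).fit(X, y)

# The lasso keeps at most n nonzero coefficients when p > n; a penalty with
# a ridge component spreads weight across correlated predictors and can
# retain more than n of them.
print("lasso selected:", np.sum(lasso.coef_ != 0))
print("elastic net selected:", np.sum(enet.coef_ != 0))
```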