21

Modeling social factors of HIV risk in Mexico

Valencia, Celina I. January 2017 (has links)
Background: Human Immunodeficiency Virus (HIV) and Acquired Immunodeficiency Syndrome (AIDS) constitute an urgent public health issue in Mexico. Mexico has witnessed a 122% increase in reported HIV prevalence since 2001 (Holtz et al., 2014). Country estimates suggest there are between 140,000 and 230,000 individuals living with HIV in Mexico (CENSIDA, 2014), and approximately 50% of them are unaware that they are living with the virus (CENSIDA, 2014). Despite a federal universal HIV program implemented in 2011, HIV in Mexico has not reached the status of a manageable chronic infectious disease, as it has in other regions of the globe (Deeks, 2013). The mortality rate among individuals with HIV/AIDS in Mexico is 4.2 per 100,000 (CENSIDA, 2014). There is a paucity of social and epidemiological findings on populations outside the traditional at-risk populations for HIV in Mexico (Martin-Onraët et al., 2016). Analyzing aggregate country-level data for Mexico provides necessary insights for better understanding previously unconsidered social factors that shape sexual and reproductive health trends and, in turn, HIV health patterns. Methods: Secondary analyses were performed on Mexico's Encuesta Nacional de Salud y Nutrición 2012 (ENSANUT). ENSANUT is a probabilistic aggregate national dataset with a multistage stratified cluster sampling design (Janssen et al., 2013) and is Mexico's equivalent of the National Health and Nutrition Examination Survey (NHANES) in the United States. Data are collected via self-report interviews conducted at the participant's home. A structured questionnaire collecting sexual and reproductive health data was administered to individuals 20 years of age and older. The ENSANUT adult study sub-sample (n=46,227) comprises 42.75% men and 57.25% women.
A general linear model (GLM), principal component analysis (PCA), chi-square tests (χ²), and logistic regressions were applied to the adult study subsample to disentangle social factors associated with sexually transmitted infections (STIs) in the population. Quantitative analyses were conducted in SAS 9.4. Findings: Men were more likely to have an STI diagnosis (OR=3.60; 95% CI 3.00, 4.32; p<0.001). Previous HIV testing was found to be protective against STI diagnosis across both genders (OR=0.82; 95% CI 0.72, 0.94; p<0.001). HIV/gonorrhea and HIV/syphilis (n=20) were the most frequent co-infections in the study population. The latent variable model indicates that mental health and access to health care resources are critical for positive sexual and reproductive health outcomes in Mexico. Mental health was found to be non-protective for STI risk among the study population (OR=1.59; 95% CI 1.41, 1.81; p<0.0001). Policy recommendations: 1. Increased access to and utilization of HIV resources and mental health services would benefit the study population; further qualitative research is needed to better understand the barriers to health care access and utilization in these two domains. 2. An increase in preventive programs and health initiatives that encourage established strategies for positive sexual and reproductive health outcomes, including universal human papillomavirus (HPV) vaccination, wide availability of Pre-Exposure Prophylaxis (PrEP), and routine HIV/STI screening. 3. Alternative data collection strategies for ENSANUT that are culturally appropriate for sexual and reproductive health constructs.
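The odds ratios reported above come from logistic regression; for a single binary exposure, the underlying arithmetic can be sketched from a 2×2 table. A minimal illustration with made-up counts (NOT the ENSANUT data):

```python
import math

# Hypothetical 2x2 counts (illustrative only, not from ENSANUT):
# rows = men/women, columns = STI diagnosis yes/no.
a, b = 180, 1820   # men: STI yes / STI no
c, d = 95, 3405    # women: STI yes / STI no

or_ = (a * d) / (b * c)                    # odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # standard error of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se)   # 95% Wald confidence interval
hi = math.exp(math.log(or_) + 1.96 * se)
print(f"OR={or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The full analysis adjusts for covariates inside the regression, but the Wald interval on the log odds ratio follows this same pattern.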
22

Demystifying substance use treatment implementation and service utilization in safety net settings

Crable, Erika Lynn 19 January 2021 (has links)
Multiyear trends showing high rates of alcohol- and opioid-related misuse, as well as opioid-related deaths, have renewed attention on both access to and the quality of substance use treatment. In response, diverse healthcare systems that care for the Medicaid population have begun implementing large-scale transformations, including new services and provider training requirements. The Centers for Medicare & Medicaid Services has urged state Medicaid programs to use Section 1115 waiver demonstrations as vehicles for substance use treatment delivery system transformation. For many states, undertaking a Section 1115 waiver demonstration means moving from very limited benefits to a full continuum of new services. States’ ability to achieve such transformations is unknown, since demonstration processes are under-reported and considered implementation “black boxes”. Substance use treatment delivery changes are also occurring at the community level, where several hospital systems have implemented new services to meet the needs of their patient populations. However, the influence of these new care models on patient service utilization is unknown. In this dissertation, I use a comparative case study design and qualitative content analysis to examine the pre-implementation decision-making processes that Medicaid policymakers in California, Virginia, and West Virginia experienced when deciding to enhance their substance use treatment service delivery systems using Section 1115 waivers. I qualitatively describe how broad sociocultural and local organizational factors influenced Medicaid agencies’ ability to expand access to treatment. I also present a taxonomy of implementation strategies used to translate Medicaid policy into clinical services available in the community. Finally, I present a latent transition analysis to reveal how the nature of substance use treatment services available to patients may influence their service utilization over time.
This final quantitative analysis is set within the context of a safety net hospital that provides a comprehensive, low barrier access model for substance use treatment, and primarily serves Medicaid beneficiaries. Results of this dissertation illuminate processes and outcomes associated with pre-, mid-, and post-implementation activities targeting improvements in the delivery of substance use treatment services. / 2023-01-19T00:00:00Z
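Latent transition analysis models movement between latent statuses across time points. As a heavily simplified sketch: if the statuses were directly observed rather than latent, the transition matrix would reduce to row-normalized bigram counts. The service-use states below are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical service-use sequences over three time points (illustrative):
sequences = [
    ["none", "outpatient", "outpatient"],
    ["none", "none", "inpatient"],
    ["outpatient", "outpatient", "inpatient"],
    ["none", "outpatient", "inpatient"],
]

# Count observed transitions, then row-normalize into probabilities.
pairs = Counter((s[t], s[t + 1]) for s in sequences for t in range(len(s) - 1))
states = sorted({x for s in sequences for x in s})
row_totals = {a: sum(pairs[(a, b)] for b in states) for a in states}
P = {a: {b: pairs[(a, b)] / row_totals[a] for b in states}
     for a in states if row_totals[a] > 0}
```

The real method additionally estimates the latent class memberships themselves via maximum likelihood; this sketch only conveys what a transition probability is.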
23

Sur la méthode des moments pour l'estimation des modèles à variables latentes / On the method of moments for estimation in latent linear models

Podosinnikova, Anastasia 01 December 2016 (has links)
Les modèles linéaires latents sont des modèles statistiques puissants pour extraire la structure latente utile à partir de données non structurées par ailleurs. Ces modèles sont utiles dans de nombreuses applications telles que le traitement automatique du langage naturel et la vision artificielle. Pourtant, l'estimation et l'inférence sont souvent impossibles en temps polynomial pour de nombreux modèles linéaires latents et on doit utiliser des méthodes approximatives pour lesquelles il est difficile de récupérer les paramètres. Plusieurs approches, introduites récemment, utilisent la méthode des moments. Elles permettent de retrouver les paramètres dans le cadre idéalisé d'un échantillon de données infini tiré selon certains modèles, mais elles viennent souvent avec des garanties théoriques dans les cas où ce n'est pas exactement satisfait. Dans cette thèse, nous nous concentrons sur les méthodes d'estimation fondées sur l'appariement de moments pour différents modèles linéaires latents. En utilisant un lien étroit avec l'analyse en composantes indépendantes, qui est un outil bien étudié par la communauté du traitement du signal, nous présentons plusieurs modèles semiparamétriques pour la modélisation thématique et dans un contexte multi-vues. Nous présentons des méthodes à base de moments ainsi que des algorithmes pour l'estimation dans ces modèles, et nous prouvons pour ces méthodes des résultats de complexité améliorée par rapport aux méthodes existantes. Nous donnons également des garanties d'identifiabilité, contrairement à d'autres modèles actuels. C'est une propriété importante pour assurer leur interprétabilité. / Latent linear models are powerful probabilistic tools for extracting useful latent structure from otherwise unstructured data and have proved useful in numerous applications such as natural language processing and computer vision.
However, estimation and inference are often intractable for many latent linear models, and one has to make use of approximate methods, often with no recovery guarantees. An alternative approach, which has become popular lately, is the method of moments. Moment-based methods often have guarantees of exact recovery in the idealized setting of an infinite data sample and a well-specified model, but they also often come with theoretical guarantees in cases where this is not exactly satisfied. In this thesis, we focus on moment matching-based estimation methods for different latent linear models. Using a close connection with independent component analysis, which is a well-studied tool from the signal processing literature, we introduce several semiparametric models in the topic modeling context and for multi-view models, and develop moment matching-based methods for estimation in these models. These methods come with improved sample complexity results compared to previously proposed methods. The models are supplemented with identifiability guarantees, a necessary property to ensure their interpretability. This is opposed to some other widely used models, which are unidentifiable.
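Moment matching-based estimation can be illustrated on a toy model far simpler than the semiparametric models above: recovering the shape and scale of a gamma distribution by matching the first two sample moments. This is a stand-in for the idea only, not the ICA-based methods of the thesis:

```python
import random
import statistics

# Draw a large sample from a known gamma distribution, then recover the
# parameters from the moment equations:
#   mean = k * theta,  variance = k * theta**2
#   =>  theta = var / mean,  k = mean**2 / var
random.seed(0)
k_true, theta_true = 2.0, 3.0
data = [random.gammavariate(k_true, theta_true) for _ in range(100_000)]

m = statistics.fmean(data)
v = statistics.pvariance(data, mu=m)
theta_hat = v / m      # moment estimate of the scale
k_hat = m * m / v      # moment estimate of the shape
```

With an infinite sample the equations recover the parameters exactly; the finite-sample error here illustrates why sample complexity results matter.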
24

Latent analysis of unsupervised latent variable models in fault diagnostics of rotating machinery under stationary and time-varying operating conditions

Balshaw, Ryan January 2020 (has links)
Vibration-based condition monitoring is a crucial element for asset longevity and for avoiding unexpected financial compromise. Currently, data-driven methodologies often require significant investment in data acquisition and a large amount of operational data for both healthy and unhealthy cases. The acquisition of unhealthy fault data is often financially infeasible, and the result is that most methods detailed in the literature are not suitable for critical industrial applications. In this work, unsupervised latent variable models negate the requirement for asset fault data. These models operate by learning a representation of healthy data and utilise health indicators to track deviation from this representation. A variety of latent variable models are compared, namely Principal Component Analysis, Variational Auto-Encoders, and Generative Adversarial Network-based methods. This research investigates the relationship between time-series data and latent variable model design under a sensible notion of data interpretation, examines the influence of model complexity on performance across different datasets, and shows that the latent manifold, when untangled and traversed in a sensible manner, is indicative of damage. Three latent health indicators are proposed in this work and utilised in conjunction with a proposed temporal preservation approach, and performance is compared across the different models. It was found that these latent health indicators can augment standard health indicators and benefit model performance. This allows one to compare the performance of different latent variable models, an approach that has not been realised in previous work, as the interpretation of the latent manifold and the manifold's response to anomalous instances had not been explored. If all aspects of a latent variable model are systematically investigated and compared, different models can be analysed on a consistent platform.
In the model analysis step, a latent variable model is used to evaluate the available data such that the health indicators used to infer the health state of an asset are available for analysis and comparison. The datasets investigated in this work consist of stationary and time-varying operating conditions. The objective was to determine whether deep learning is comparable to, or on par with, state-of-the-art signal processing techniques. The results showed that damage is detectable in both the input space and the latent space and can be trended to identify clear condition-deviance points. This highlights that both spaces are indicative of damage when analysed in a sensible manner. A key takeaway from this work is that for data containing impulsive components that manifest naturally and not due to the presence of a fault, the anomaly detection procedure may be limited by inherent Gaussianity assumptions made in the model formulations. This work illustrates how the latent manifold is useful for the detection of anomalous instances, why one must consider a variety of latent variable model types, and how subtle changes to data processing can substantially benefit model performance analysis. For vibration-based condition monitoring, latent variable models offer significant improvements in fault diagnostics and reduce the requirement for expert knowledge. This can ultimately improve asset longevity and reduce the investment required from businesses in asset maintenance. / Dissertation (MEng (Mechanical Engineering))--University of Pretoria, 2020. / Eskom Power Plant Engineering Institute (EPPEI) / UP Postgraduate Bursary / Mechanical and Aeronautical Engineering / MEng (Mechanical Engineering) / Unrestricted
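A PCA-based health indicator of the kind described above is commonly built from reconstruction error: learn a latent subspace from healthy data, then score new samples by their squared prediction error (SPE) from that subspace. A minimal sketch with synthetic data; the three-component choice and the fault simulation are illustrative assumptions, not the thesis's datasets or indicators:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "healthy" condition-monitoring features: correlated 10-D data.
A = rng.normal(size=(10, 10))
healthy = rng.normal(size=(500, 10)) @ A
mu = healthy.mean(axis=0)

# Learn the healthy subspace: top-3 principal directions (illustrative choice).
_, _, Vt = np.linalg.svd(healthy - mu, full_matrices=False)
W = Vt[:3]

def spe(x):
    """Squared prediction error: distance from the healthy PCA subspace."""
    z = (x - mu) @ W.T       # project into the latent space
    x_hat = z @ W + mu       # reconstruct from the latent coordinates
    return float(np.sum((x - x_hat) ** 2))

x_ok = healthy[0]                            # a healthy sample
x_fault = x_ok + 10 * rng.normal(size=10)    # simulated anomalous sample
```

Trending this indicator over time is what reveals the condition-deviance points the abstract mentions; a sample far from the healthy subspace scores high.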
25

Placebo response characteristic in sequential parallel comparison design studies

Rybin, Denis V. 13 February 2016 (has links)
The placebo response can affect inference in the analysis of clinical trial data. It can bias the estimate of the treatment effect, jeopardize the effort of all involved in a clinical trial, and ultimately deprive patients of potentially efficacious treatment. The Sequential Parallel Comparison Design (SPCD) is one of the novel approaches addressing placebo response in clinical trials. The analysis of SPCD clinical trial data typically involves classification of subjects as ‘placebo responders’ or ‘placebo non-responders’. This classification is done using a specific criterion, and placebo response is treated as a measurable characteristic. However, the use of a criterion may lead to subject misclassification due to measurement error or incorrect criterion selection. Misclassification, in turn, can directly affect the SPCD treatment effect estimate. We propose to view placebo response as an unknown random characteristic that can be estimated from information collected during the trial. Two strategies are presented here. The first strategy is to model placebo response using the criterion classification as a starting point, or the observed data, and to include the placebo response estimate in the treatment effect estimation. The second strategy is to jointly model the latent placebo response and the observed data, and to estimate the treatment effect from the joint model. We evaluate both strategies on a wide range of simulated data scenarios in terms of type I error control, mean squared error, and power. We then evaluate the strategies in the presence of missing data and propose a method for missing data imputation under the non-informative missingness assumption. Data from a recent SPCD clinical trial are used to compare results of the proposed methods with the reported results of the trial. / 2018-01-01T00:00:00Z
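The SPCD treatment effect is typically estimated as a weighted combination of the phase 1 effect (all randomized subjects) and the phase 2 effect (placebo non-responders re-randomized in phase 2). A minimal sketch; the weight and effect values are illustrative assumptions, not figures from the trial described above:

```python
def spcd_estimate(delta1, delta2, w=0.6):
    """Pooled SPCD treatment-effect estimate: a weighted average of the
    phase 1 effect delta1 (all randomized subjects) and the phase 2 effect
    delta2 (placebo non-responders re-randomized in phase 2). The weight w
    is a design choice; 0.6 here is purely illustrative."""
    return w * delta1 + (1 - w) * delta2

est = spcd_estimate(2.0, 1.0)   # hypothetical phase effects
```

Misclassifying placebo responders changes which subjects contribute to delta2, which is how the classification error described above propagates directly into the pooled estimate.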
26

Applying Bayesian Ordinal Regression to ICAP Maladaptive Behavior Subscales

Johnson, Edward P. 25 October 2007 (has links) (PDF)
This paper develops a Bayesian ordinal regression model for the maladaptive subscales of the Inventory for Client and Agency Planning (ICAP). Because the maladaptive behavior section of the ICAP contains ordinal data, current analysis strategies combine all the subscales into three indices, making the data more interval in nature. Regular MANOVA tools are subsequently used to create a regression model for these indices. This paper uses ordinal regression to analyze each original scale separately. The sample consists of applicants for aid from Utah's Division of Services for Persons with Disabilities. Each applicant fills out the Scales of Independent Behavior–Revised (SIB-R) portion of the ICAP that measures eight different maladaptive behaviors. This project models the frequency and severity of each of these eight problem behaviors with separate ordinal regression models. Gender, ethnicity, primary disability, and mental retardation are used as explanatory variables to calculate the odds ratios for a higher maladaptive behavior score in each model. This type of analysis provides a useful tool to any researcher using the ICAP to measure maladaptive behavior.
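Under the cumulative (proportional-odds) logit model that underlies this kind of ordinal regression, category probabilities are differences of adjacent cumulative probabilities. A small sketch with hypothetical cutpoints and linear predictor; the Bayesian machinery of the paper is omitted:

```python
import math

def category_probs(eta, cutpoints):
    """Category probabilities under a cumulative (proportional-odds) logit
    model: P(Y <= k | eta) = logistic(c_k - eta), with increasing cutpoints
    c_k. eta is the linear predictor built from covariates such as gender
    or primary disability."""
    logistic = lambda x: 1.0 / (1.0 + math.exp(-x))
    cum = [logistic(c - eta) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical cutpoints and linear predictor (illustrative values only).
probs = category_probs(0.5, [-1.0, 0.5, 2.0])
```

The "proportional odds" property is visible here: eta shifts every cumulative logit by the same amount, which is what makes a single odds ratio per covariate meaningful across all score levels.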
27

Exploring Model Fit and Methods for Measurement Invariance Concerning One Continuous or More Different Violators under Latent Variable Modeling

Liu, Yuanfang January 2022 (has links)
No description available.
28

Bayesian Probit Regression Models for Spatially-Dependent Categorical Data

Berrett, Candace 02 November 2010 (has links)
No description available.
29

Inferential Latent Variable Models for Combustion Processes

Cardin, Marlene 01 1900 (has links)
This thesis investigates the application of latent variable methods to three combustion processes. Multivariate analysis of flame images and process data is performed to predict important quality parameters and monitor flame stability. The motivation behind this work is to decrease operational costs and greenhouse gas emissions in these energy-intensive processes. The three combustion processes studied are a lime kiln, a basic oxygen furnace, and a coal-fired boiler. In lime kiln operation, the main goal is to stabilize final product temperature in order to reduce fouling and energy costs. Due to long process dynamics, prediction of product temperature is required at least one hour in advance for potential use in a control scheme. Several methods for extracting features from flame images were investigated for predicting this temperature. The best method is then combined with process data in a PLS model that also incorporates dynamic information. The analysis revealed that prediction one hour into the future is successful using latent variable methods. In the basic oxygen furnace analysis, the main goal is to predict the end-point carbon of the batch process. Terminating the batch as soon as the desired carbon content is attained reduces oxygen consumption and thus operational cost. Traditional image analysis is used to identify a constant field of view in the flame images. Multivariate image feature extraction methods were then used in combination with process data to successfully predict the final carbon content of the heat. The coal-fired boiler analysis focuses on monitoring flame stability at different production and air-to-fuel levels of the boiler. Prediction of energy efficiency and off-gas chemistry from flame images is also investigated. An unexpected result was the ability to use the installed cameras for localized fouling monitoring. This thesis showed that the use of multivariate analysis of flame images and process data in combustion processes is very promising.
/ Thesis / Master of Applied Science (MASc)
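A single PLS component of the kind used to relate image features to a quality variable can be sketched with one NIPALS-style step: the weight vector maximizes covariance between X and y, and y is regressed on the resulting latent score. Synthetic data stands in for flame-image features; this is a one-component sketch, not the thesis's dynamic multi-component model:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic stand-in for flame-image features X and a quality variable y.
X = rng.normal(size=(200, 6))
beta = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0])
y = X @ beta + 0.1 * rng.normal(size=200)

# One NIPALS-style PLS component: the weight vector maximizes the
# covariance between X and y; y is then regressed on the latent score.
Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                        # latent score
b = (t @ yc) / (t @ t)            # inner regression coefficient
r2 = 1.0 - np.sum((yc - b * t) ** 2) / np.sum(yc ** 2)
```

Subsequent components repeat this step on deflated X and y; incorporating lagged process variables into X is one common way to capture the dynamics the abstract mentions.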
30

Computational Dissection of Composite Molecular Signatures and Transcriptional Modules

Gong, Ting 22 January 2010 (has links)
This dissertation aims to develop a latent variable modeling framework with which to analyze gene expression profiling data for computational dissection of molecular signatures and transcriptional modules. The first part of the dissertation is focused on extracting pure gene expression signals from tissue or cell mixtures. The main goal of gene expression profiling is to identify the pure signatures of different cell types (such as cancer cells, stromal cells, and inflammatory cells) and estimate the concentration of each cell type. To accomplish this, a new blind source separation method is developed, namely nonnegative partially independent component analysis (nPICA), for tissue heterogeneity correction (THC). The THC problem is formulated as a constrained optimization problem and solved with a learning algorithm based on geometrical and statistical principles. The second part of the dissertation seeks to identify gene modules from gene expression data to uncover important biological processes in different types of cells. A new gene clustering approach, nonnegative independent component analysis (nICA), is developed for gene module identification. The nICA approach is complemented with an information-theoretic procedure for input sample selection and a novel stability analysis approach for proper dimension estimation. Experimental results showed that the gene modules identified by the nICA approach appear to be significantly enriched in functional annotations in terms of gene ontology (GO) categories. The third part of the dissertation moves from the gene module level down to the DNA sequence level to identify gene regulatory programs by integrating gene expression data and protein-DNA binding data. A sparse hidden component model is first developed for this problem, taking into account a well-known biological principle, i.e., that a gene is most likely regulated by only a few regulators.
This is followed by the development of a novel computational approach, motif-guided sparse decomposition (mSD), in order to integrate the binding information and gene expression data. These computational approaches are primarily developed for analyzing high-throughput gene expression profiling data. Nevertheless, the proposed methods should be able to be extended to analyze other types of high-throughput data for biomedical research. / Ph. D.
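Nonnegative latent decompositions of an expression matrix can be illustrated with the classic Lee-Seung multiplicative updates for NMF, a related but much simpler nonnegative factorization; this is an assumption-laden stand-in, not the nPICA/nICA algorithms developed in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((20, 30))         # toy nonnegative "expression" matrix
r = 4                            # number of latent components
W = rng.random((20, r)) + 0.1    # component signatures
H = rng.random((r, 30)) + 0.1    # per-sample component weights
err0 = np.linalg.norm(X - W @ H)

# Lee-Seung multiplicative updates: nonnegativity is preserved throughout
# because every factor in each update is nonnegative.
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(X - W @ H)
```

The nonnegativity constraint is what makes the recovered components interpretable as cell-type signatures and concentrations; the ICA-based methods above add independence or sparsity structure on top of it.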
