41

Examination of the Causal Effects Between the Dimensions of Service Quality and Spectator Satisfaction in Minor League Baseball

Koo, Gi Y., Hardin, Rob, McClung, Steven, Jung, Taejin, Cronin, Joseph, Vorhees, Clay, Bourdeau, Brian 01 January 2009 (has links)
Sports organisations must continuously assess how to better meet or exceed consumer expectations and perceptions of the event experience in order to maintain and increase the number of spectators and loyal fans attending their sporting events. This study aims to enhance our understanding of which characteristics of a service attribute best define its quality and shape spectator behaviour by examining the causal relationship between perceived service quality (PSQ) and satisfaction.
42

The effect of sugar-sweetened beverage consumption on childhood obesity - causal evidence

Yang, Yan 18 May 2016 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Communities and states are increasingly targeting the consumption of sugar-sweetened beverages (SSBs), especially soda, in their efforts to curb childhood obesity. However, the empirical evidence on which policy makers base the relevant policies is not causally interpretable. In the present study, we propose a modeling framework that can be used for causal estimation and inference in the context of childhood obesity. This framework is built on the two-stage residual inclusion (2SRI) instrumental variables method and has two levels: level one models children’s lifestyle choices, and level two models children’s energy balance, which is assumed to depend on their lifestyle behaviors. We start with a simplified version of the model that includes only one policy, one lifestyle, one energy balance, and one observable control variable. We then extend this simple version into a general one that accommodates multiple policy and lifestyle variables. Both versions of the model are estimated first via the nonlinear least squares (NLS) method (henceforth NLS-based 2SRI) and then via the maximum likelihood estimation (MLE) method (henceforth MLE-based 2SRI). Using simulated data, we show that 1) our proposed 2SRI method outperforms conventional methods that ignore either the inherent nonlinearity [the linear instrumental variables (LIV) method] or the potential endogeneity [the nonlinear regression (NR) method] in obtaining the relevant estimators; and 2) the MLE-based 2SRI provides more efficient (and also consistent) estimators than the NLS-based one. A real data analysis is conducted to illustrate the implementation of the 2SRI method in practice using both the NLS and MLE approaches. However, due to data limitations, we are not able to draw any inference regarding the impacts of lifestyle, specifically SSB consumption, on childhood obesity. We are in the process of obtaining better data and, after doing so, we will replicate and extend the analyses conducted here. These analyses, we believe, will produce causally interpretable evidence of the effects of SSB consumption and other lifestyle choices on childhood obesity. The empirical analyses presented in this dissertation should, therefore, be viewed as an illustration of our newly proposed framework for causal estimation and inference.
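To make the 2SRI idea concrete, the sketch below shows the core two-stage logic on simulated data, assuming a single policy instrument, one lifestyle choice, one observable control, and a binary obesity outcome. It uses a linear first stage and a logit second stage for brevity; it is not the dissertation's exact NLS- or MLE-based estimator, and all variable names are illustrative.

```python
# Minimal sketch of two-stage residual inclusion (2SRI) on simulated data.
# Variable names (policy, ssb, x, obese) are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
policy = rng.binomial(1, 0.5, n)          # instrument: an SSB-related policy
x = rng.normal(size=n)                    # observable control
u = rng.normal(size=n)                    # unobserved confounder
ssb = 1.0 - 0.8 * policy + 0.5 * x + u + rng.normal(size=n)   # endogenous lifestyle
p = 1 / (1 + np.exp(-(-1.0 + 0.6 * ssb + 0.3 * x + u)))       # true outcome process
obese = rng.binomial(1, p)

# Stage 1: model the endogenous lifestyle variable on the instrument and control,
# then keep the residuals as a proxy for the unobserved confounder.
stage1 = sm.OLS(ssb, sm.add_constant(np.column_stack([policy, x]))).fit()
resid = ssb - stage1.fittedvalues

# Stage 2: nonlinear (logit) outcome model that includes the stage-1 residuals.
X2 = sm.add_constant(np.column_stack([ssb, x, resid]))
stage2 = sm.Logit(obese, X2).fit(disp=0)
print(stage2.params)   # coefficient on ssb is the 2SRI estimate of its effect
```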
43

Empirical studies of online markets: the impact of product page cues on consumer decisions

Banerjee, Shrabastee 14 May 2021 (has links)
The widespread expansion of online markets in the past decade poses several questions for platforms, firms, and customers alike. An important dimension to explore in this domain is the provision of information on e-commerce platforms: given the increasing ease with which product pages can be customized to include a vast variety of content, how do these pieces of information interact? Further, what are the specific channels through which this information eventually influences consumer decision-making? My dissertation is situated in this space and examines how consumers respond to various “cues” introduced by e-commerce platforms that offer products or services for purchase online, and how these cues might eventually influence decision-making. In my first dissertation project, the cue I focus on is user-generated content. More specifically, I study how the introduction of the Q&A technology (which enables customers to ask product-specific questions before purchase and receive answers either from other customers or from the platform itself) affects the more widely established reviews and ratings feature on e-commerce platforms. I find that the addition of Q&As leads to better matches between customers and products, higher customer satisfaction, and, as a result, higher ratings. My second project examines another cue that is common in online markets: the advertised reference price. My goal in this project is to examine how users react to a specific variant of such prices, namely the “Starting from...” price, using data from a large-scale field experiment conducted on Holidu.com. My results indicate that raising “From” prices gives users a more accurate price estimate, but negatively impacts outbound clicks and other engagement metrics. Taken together, the two projects aim to shed light on factors that influence consumer decision-making in an e-commerce setting and the possible mechanisms underlying this influence.
44

Precision improvement for Mendelian Randomization

Zhu, Yineng 23 January 2023 (has links)
Mendelian Randomization (MR) methods use genetic variants as instrumental variables (IVs) to infer causal relationships between an exposure and an outcome, overcoming the inability to infer such relationships in observational studies due to unobserved confounders. Several MR methods exist, including the inverse variance weighted (IVW) method, which has been extended to handle correlated IVs, and the median method, which provides consistent causal estimates in the presence of pleiotropy when fewer than half of the genetic variants are invalid IVs but assumes independent IVs. In this dissertation, we propose two new methods to improve precision in MR analysis. In the first chapter, we extend the median method to correlated IVs with the quasi-boots median method, which accounts for IV correlation in the standard error estimation using a quasi-bootstrap method. Simulation studies show that this method outperforms existing median methods under the correlated-IVs setting with and without pleiotropic effects. In the second chapter, to overcome the lack of an effective solution for sample overlap in current IVW methods, we propose a new overall causal effect estimator, which we name the IVW-GH method, by exploring the distribution of the estimator for individual IVs under the independent-IVs setting. In the final chapter, we extend the IVW-GH method to correlated IVs. In simulation studies, the IVW-GH method outperforms existing IVW methods under the one-sample setting for independent IVs and shows reasonable results in other settings. We apply the proposed methods to genome-wide association results from the Framingham Heart Study Offspring Study and the Million Veteran Program to identify potential causal relationships between a number of proteins and lipids. All the proposed methods are able to identify some proteins known to be related to lipids. In addition, the quasi-boots median method is robust to pleiotropic effects in the real data application. Consequently, the newly proposed quasi-boots median method and IVW-GH method may provide additional insights for identifying causal relationships.
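For context, the sketch below implements the baseline fixed-effect IVW estimator for independent IVs from summary statistics; it does not reproduce the dissertation's quasi-boots median or IVW-GH extensions, and the summary statistics in the usage example are made up.

```python
# A minimal sketch of the standard fixed-effect IVW Mendelian Randomization
# estimator for independent instruments, using GWAS summary statistics.
import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect IVW estimate of the causal effect and its standard error.

    beta_exposure: SNP-exposure association estimates (one per variant)
    beta_outcome:  SNP-outcome association estimates
    se_outcome:    standard errors of the SNP-outcome estimates
    """
    w = beta_exposure**2 / se_outcome**2            # inverse-variance weights
    ratio = beta_outcome / beta_exposure            # per-variant Wald ratios
    est = np.sum(w * ratio) / np.sum(w)             # weighted average of ratios
    se = np.sqrt(1.0 / np.sum(w))
    return est, se

# Toy usage with made-up summary statistics for five independent variants.
bx = np.array([0.12, 0.08, 0.15, 0.05, 0.10])
by = np.array([0.024, 0.018, 0.033, 0.009, 0.021])
sy = np.array([0.010, 0.012, 0.011, 0.013, 0.010])
print(ivw_estimate(bx, by, sy))
```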
45

Causal uncertainty and persuasion: how the motivation to understand causality affects the processing and acceptance of causal arguments

Tobin, Stephanie J. 21 June 2004 (has links)
No description available.
46

Causal Network ANOVA and Tree Model Explainability

Zhongli Jiang (18848698) 24 June 2024 (has links)
In this dissertation, we present research results on two independent projects, one on analysis of variance of multiple causal networks and the other on feature-specific coefficients of determination in tree ensembles.
47

Dynamic Causal Modeling Across Network Topologies

Zaghlool, Shaza B. 03 April 2014 (has links)
Dynamic Causal Modeling (DCM) uses dynamical systems to represent the high-level neural processing strategy for a given cognitive task. The logical network topology of the model is specified by a combination of prior knowledge and statistical analysis of the neuro-imaging signals. Parameters of this a priori model are then estimated, and competing models are compared to determine the most likely model given the experimental data. Inter-subject analysis using DCM is complicated by differences in model topology, which can vary across subjects due to errors in the first-level statistical analysis of fMRI data or variations in cognitive processing. This requires considerable judgment on the part of the experimenter to decide on the validity of assumptions used in the modeling and statistical analysis; in particular, dropping subjects with insufficient activity in a region of the model and ignoring activation not included in the model. This manual data filtering is required so that the fMRI model's network size is consistent across subjects. This thesis proposes a solution to this problem by treating missing regions in the first-level analysis as missing data and estimating the time course associated with any missing region using one of four candidate methods: zero-filling, average-filling, noise-filling using a fixed stochastic process, or noise-filling using a stochastic process estimated via expectation-maximization. The effect of this estimation scheme was analyzed by treating it as a preprocessing step to DCM and observing the resulting effects on model evidence. Simulation studies show that estimation using expectation-maximization yields the highest classification accuracy under a simple loss function and the highest model evidence relative to the other methods. This result held for various data set sizes and varying numbers of candidate models. In real data, application to Go/No-Go and Simon tasks allowed computation of signals from the missing nodes and the consequent computation of model evidence in all subjects, compared to 62 and 48 percent, respectively, if no preprocessing was performed. These results demonstrate the face validity of the preprocessing scheme and open the possibility of using single-subject DCM as an individual cognitive phenotyping tool. / Ph. D.
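To illustrate three of the candidate filling strategies named above, the sketch below imputes a missing region's time course from the observed regions' signals; the expectation-maximization variant used in the thesis is not reproduced, and the function and argument names are illustrative assumptions.

```python
# A minimal sketch of simple filling strategies for a region missing from the
# first-level analysis, given the observed regions' signals in a (time x regions)
# array; the thesis's expectation-maximization estimate is not shown here.
import numpy as np

def fill_missing_region(Y, method="average", rng=None):
    """Return an imputed time course for a region absent from Y (time x regions)."""
    rng = np.random.default_rng() if rng is None else rng
    T = Y.shape[0]
    if method == "zero":
        return np.zeros(T)                          # zero-filling
    if method == "average":
        return Y.mean(axis=1)                       # average of observed regions
    if method == "noise":
        return rng.normal(0.0, Y.std(), size=T)     # fixed stochastic process
    raise ValueError(f"unknown method: {method}")

# Toy usage: 100 time points observed in 3 regions, one region missing.
Y = np.random.default_rng(1).normal(size=(100, 3))
imputed = fill_missing_region(Y, method="average")
```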
48

Stochastic Neural Network and Causal Inference

Yaxin Fang (17069563) 10 January 2025 (has links)
Estimating causal effects from observational data has been challenging due to high-dimensional complex datasets and confounding biases. In this thesis, we try to tackle these issues by leveraging deep learning techniques, including sparse deep learning and stochastic neural networks, that have been developed in the recent literature.

With the advancement of data science, the collection of increasingly complex datasets has become commonplace. In such datasets, the data dimension can be extremely high, and the underlying data generation process can be unknown and highly nonlinear. As a result, the task of making causal inference with high-dimensional complex data has become a fundamental problem in many disciplines, such as medicine, econometrics, and social science. However, existing methods for causal inference are frequently developed under the assumption that the data dimension is low or that the underlying data generation process is linear or approximately linear. To address these challenges, chapter 3 proposes a novel causal inference approach for dealing with high-dimensional complex data. By using sparse deep learning techniques, the proposed approach can address both the high dimensionality and the unknown data generation process in a coherent way. Furthermore, the proposed approach can also be used when missing values are present in the datasets. Extensive numerical studies indicate that the proposed approach outperforms existing ones.

One of the major challenges in causal inference with observational data is handling missing confounders. Latent variable modeling is a valid framework to address this challenge, but current approaches within the framework often suffer from consistency issues in causal effect estimation and are hard to extend to more complex application scenarios. To bridge this gap, in chapter 4, we propose a new latent variable modeling approach. It utilizes a stochastic neural network, where the latent variables are imputed as the outputs of hidden neurons using an adaptive stochastic gradient HMC algorithm. Causal inference is then conducted based on the imputed latent variables. Under mild conditions, the new approach provides a theoretical guarantee for the consistency of causal effect estimation. The new approach also serves as a versatile tool for modeling various causal relationships, leveraging the flexibility of the stochastic neural network in natural process modeling. We show that the new approach matches state-of-the-art performance on benchmarks for causal effect estimation and demonstrate its adaptability to proxy variable and multiple-cause scenarios.
49

Modelos de transição de Markov: um enfoque em experimentos planejados com dados binários correlacionados / Markov transition models: a focus on planned experiments with correlated binary data

Lordelo, Mauricio Santana 30 May 2014 (has links)
Markov transition models are an important tool across many fields of knowledge when studies involve repeated measures. They model the response variable over time conditional on one or more previous responses, known as the history of the process, and allow the inclusion of other covariates. For binary responses, a matrix of transition probabilities from one state to another can be constructed. In this work, four different transition-model approaches were compared to assess which best estimates the causal effect of treatments in an experimental study in which the response variable is a binary vector measured over time. Simulation studies were carried out for balanced experiments with three categorical treatments, and the estimates were evaluated using standard error, bias, and the coverage percentage of confidence intervals. The results showed that marginalized transition models are the most suitable when an experiment has a small number of repeated measures. As a complement, an alternative way of performing multiple comparisons is presented, since assumptions such as normality, independence, and homoscedasticity are violated, precluding the use of traditional methods. An experiment with real data, recording the presence of fungi (considered a success) in citrus and strawberry crops, was analyzed with the appropriate transition model. For the multiple comparisons, simultaneous confidence intervals were constructed for the linear predictor, and the results were extended to the mean response, which in this case is the probability of success.
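To make the first-order transition structure concrete, the sketch below estimates a 2x2 transition probability matrix from binary repeated measures; it is a minimal building-block illustration under made-up toy data, not the marginalized transition models compared in the thesis.

```python
# A minimal sketch of estimating a first-order Markov transition probability
# matrix from correlated binary repeated measures (subjects x time points).
import numpy as np

def transition_matrix(y):
    """Estimate P[next state | current state] from a (subjects x times) 0/1 array."""
    counts = np.zeros((2, 2))
    for row in y:
        for prev, curr in zip(row[:-1], row[1:]):
            counts[prev, curr] += 1                 # count observed transitions
    return counts / counts.sum(axis=1, keepdims=True)

# Toy usage: 4 subjects observed at 5 time points.
y = np.array([[0, 0, 1, 1, 1],
              [0, 1, 1, 0, 1],
              [1, 1, 1, 1, 0],
              [0, 0, 0, 1, 1]])
print(transition_matrix(y))   # rows: current state 0/1; columns: next state 0/1
```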
50

The Left Hemisphere Interpreter and Confabulation : a Comparison

Åström, Frida January 2011 (has links)
The left hemisphere interpreter refers to a function in the left hemisphere of the brain that searches for and produces causal explanations for events, behaviours and feelings, even when no apparent pattern exists between them. Confabulation is said to occur when a person presents or acts on obviously false information, despite being aware that it is false. People who confabulate also tend to defend their confabulations even when they are presented with counterevidence. Research in these two areas seems to deal with the same phenomenon, namely the human tendency to infer explanations for events, even if those explanations have no actual grounding in reality. Despite this, research on the left hemisphere interpreter has progressed relatively independently of research related to the concept of confabulation. This thesis has therefore aimed at reviewing each area and comparing them in a search for common relations. A common theme is the emphasis both place on the potentially underlying function of the interpreter and of confabulation. Many researchers across the two fields stress the adaptive and vital function of keeping the brain free from both contradiction and unpredictability, and propose that this function is served by confabulations and the left hemisphere interpreter. This finding may provide a possible opening for collaboration across the fields, and for the continued understanding of these exciting and perplexing phenomena.
