  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Causal Inference : controlling for bias in observational studies using propensity score methods

Msibi, Mxolisi January 2020 (has links)
Adjusting for baseline pre-intervention characteristics between treatment groups, through the use of propensity score matching methods, is an important step that enables researchers to do causal inference with confidence. This is critical largely because practical treatment-allocation scenarios are non-randomized in nature, with inevitable inherent biases, and therefore require such adjustments. Propensity score matching methods are the available tools for controlling these intrinsic biases in causal studies that lack the benefits of randomization (Lane, To, Kyna, & Robin, 2012). Certain assumptions need to be verifiable or met before one can estimate causal effects with propensity score matching under the Rubin causal model (Holland, 1986), the main ones being conditional independence (unconfoundedness) and common support (positivity). In particular, this dissertation elaborates the application of these matching methods in the ‘strong ignorability’ case (Rosenbaum & Rubin, 1983), i.e. when both the overlap and unconfoundedness properties hold. We move from explaining different experimental designs and how the treatment effect is estimated to a practical example based on two cohorts of introductory statistics students at a public South African university, enrolled before and after a clicker intervention, and the causal conclusions drawn from it. Keywords: treatment, conditional independence, propensity score, counterfactual, confounder, common support / Dissertation (MSc)--University of Pretoria, 2020. / Statistics / MSc / Unrestricted
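A minimal sketch of the matching step, using simulated data rather than the dissertation's student cohorts (the variable names and effect sizes below are made up): estimate propensity scores with a logistic model, pair treated with control units via the MatchIt package, and compare outcome means on the matched sample.

```r
# Illustrative only: simulated data, not the clicker-intervention cohorts.
library(MatchIt)

set.seed(2020)
n     <- 500
x1    <- rnorm(n)                                           # baseline covariates
x2    <- rbinom(n, 1, 0.4)
treat <- rbinom(n, 1, plogis(-0.5 + 0.8 * x1 + 0.6 * x2))   # non-random assignment
y     <- 1 + 0.5 * treat + 0.7 * x1 + 0.3 * x2 + rnorm(n)   # outcome
df    <- data.frame(y, treat, x1, x2)

# Propensity scores via logistic regression, then nearest-neighbour matching.
m.out <- matchit(treat ~ x1 + x2, data = df,
                 method = "nearest", distance = "glm")
summary(m.out)                        # check covariate balance after matching

md <- match.data(m.out)
with(md, mean(y[treat == 1]) - mean(y[treat == 0]))   # crude matched ATT estimate
```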
2

Minimum Distance Estimation in Categorical Conditional Independence Models

January 2012 (has links)
One of the oldest and most fundamental problems in statistics is the analysis of cross-classified data called contingency tables. Analyzing contingency tables is typically a question of association - do the variables represented in the table exhibit special dependencies or lack thereof? The statistical models which best capture these experimental notions of dependence are the categorical conditional independence models; however, until recent discoveries concerning the strongly algebraic nature of the conditional independence models surfaced, the models were widely overlooked due to their unwieldy implicit description. Apart from the inferential question above, this thesis asks the more basic question - suppose such an experimental model of association is known, how can one incorporate this information into the estimation of the joint distribution of the table? In the traditional parametric setting several estimation paradigms have been developed over the past century; however, traditional results are not applicable to arbitrary categorical conditional independence models due to their implicit nature. After laying out the framework for conditional independence and algebraic statistical models, we consider three aspects of estimation in the models using the minimum Euclidean (L2E), minimum Pearson chi-squared, and minimum Neyman modified chi-squared distance paradigms as well as the more ubiquitous maximum likelihood approach (MLE). First, we consider the theoretical properties of the estimators and demonstrate that under general conditions the estimators exist and are asymptotically normal. For small samples, we present the results of large scale simulations to address the estimators' bias and mean squared error (in the Euclidean and Frobenius norms, respectively). Second, we identify the computation of such estimators as an optimization problem and, for the case of the L2E, propose two different methods by which the problem can be solved, one algebraic and one numerical. Finally, we present an R implementation via two novel packages, mpoly for symbolic computing with multivariate polynomials and catcim for fitting categorical conditional independence models. It is found that in general minimum distance estimators in categorical conditional independence models behave as they do in the more traditional parametric setting and can be computed in many practical situations with the implementation provided.
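As a rough illustration of the estimation problem (not the thesis's mpoly/catcim implementation), the sketch below fits a 2 x 2 x 2 conditional independence model, X independent of Y given Z, by minimum Euclidean distance using base R's optim(); the cell counts are invented.

```r
# Minimum Euclidean distance fit of X _||_ Y | Z in a 2 x 2 x 2 table
# (made-up counts; not the thesis's software).
counts <- array(c(20, 5, 8, 17, 9, 16, 14, 11), dim = c(2, 2, 2))
phat   <- counts / sum(counts)                  # empirical cell probabilities

model_probs <- function(theta) {
  # theta: logits of q = P(Z=2), a[z] = P(X=2|Z=z), b[z] = P(Y=2|Z=z)
  q <- plogis(theta[1]); a <- plogis(theta[2:3]); b <- plogis(theta[4:5])
  p <- array(0, dim = c(2, 2, 2))
  for (z in 1:2) for (x in 1:2) for (y in 1:2) {
    pz <- if (z == 2) q    else 1 - q
    px <- if (x == 2) a[z] else 1 - a[z]
    py <- if (y == 2) b[z] else 1 - b[z]
    p[x, y, z] <- pz * px * py                  # CI factorization given Z
  }
  p
}

obj <- function(theta) sum((model_probs(theta) - phat)^2)   # squared Euclidean distance
fit <- optim(rep(0, 5), obj, method = "BFGS")
round(model_probs(fit$par), 3)        # fitted table satisfying the CI constraint
```

Swapping the objective for a Pearson or Neyman modified chi-squared distance only requires changing obj.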
3

Analyzing the Combination of Polymorphisms Associating with Antidepressant Response by Exact Conditional Test

Ma, Baofu 08 August 2005 (has links)
Genetic factors have been shown to be involved in the etiology of a poor response to antidepressant treatment of sufficient dosage and duration. Our goal was to identify the role of polymorphisms in the poor response to treatment. To this end, 5 functional polymorphisms in 109 patients diagnosed with unipolar major depressive disorder are analyzed. Due to the small sample size, exact conditional tests are utilized to analyze the contingency table. The data analysis involves: (1) an exact test for conditional independence in a high-dimensional contingency table; (2) a marginal independence test; (3) an exact test for three-way interactions. Program efficiency always limits the application of exact tests, and appropriate methods for enumerating the exact tables are the key to improving it. An algorithm for enumerating the exact tables is also introduced.
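For flavour, a small base-R example of an exact conditional test on a sparse stratified table (the counts and labels are hypothetical, not the study's genotype data): mantelhaen.test with exact = TRUE performs an exact conditional test of independence between response and genotype given the stratum.

```r
# Hypothetical 2 x 2 x 3 table: treatment response by genotype, stratified
# by a third factor (counts invented for illustration).
tab <- array(c(3, 1, 2, 6,
               4, 2, 1, 5,
               2, 5, 6, 1),
             dim = c(2, 2, 3),
             dimnames = list(response = c("poor", "good"),
                             genotype = c("AA", "Aa/aa"),
                             stratum  = c("s1", "s2", "s3")))

# Exact conditional test of response x genotype independence given stratum,
# appropriate when cell counts are too small for asymptotic approximations.
mantelhaen.test(tab, exact = TRUE)
```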
4

ARMA Identification of Graphical Models

Avventi, Enrico, Lindquist, Anders, Wahlberg, Bo January 2013 (has links)
Consider a Gaussian stationary stochastic vector process with the property that designated pairs of components are conditionally independent given the rest of the components. Such processes can be represented on a graph where the components are nodes and the lack of a connecting link between two nodes signifies conditional independence. This leads to a sparsity pattern in the inverse of the matrix-valued spectral density. Such graphical models find applications in speech, bioinformatics, image processing, econometrics and many other fields, where the problem of fitting an autoregressive (AR) model to such a process has been considered. In this paper we take this problem one step further, namely to fit an autoregressive moving-average (ARMA) model to the same data. We develop a theoretical framework and an optimization procedure which also sheds further light on previous approaches and results. This procedure is then applied to the identification problem of estimating the ARMA parameters as well as the topology of the graph from statistical data. / Updated from "Preprint" to "Article" QC 20130627
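As a much simplified, static analogue (an assumption of this sketch, not the paper's setting): for an i.i.d. Gaussian vector, a zero entry in the inverse covariance matrix encodes a missing edge, playing the same role that a zero in the inverse spectral density plays for the stationary processes considered here.

```r
# Static Gaussian analogue of the spectral-density statement: in the chain
# x1 -> x2 -> x3 we have x1 independent of x3 given x2, so the (1,3) entry
# of the precision (inverse covariance) matrix is approximately zero.
set.seed(3)
n  <- 5000
x1 <- rnorm(n)
x2 <- 0.8 * x1 + rnorm(n)
x3 <- 0.8 * x2 + rnorm(n)

K <- solve(cov(cbind(x1, x2, x3)))   # estimated precision matrix
round(K, 2)                          # the missing edge appears as a near-zero (1,3) entry
```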
5

Grafické modely ve statistice a ekonometrii / Graphical models in statistics and econometrics

Hubálek, Ondřej January 2012 (has links)
Graphical models in statistics and econometrics provide the capability to describe causal relations, using a causal graph, within classical regression analysis and other econometric tools. The goal of this thesis is to describe causal modelling of time series with the help of structural vector autoregression models. It describes the procedure for building a structural VAR model, the principles of graphical models, and the construction of a model for causal dependence analysis. For comparison, data from both the USA and the Czech Republic are used, and similar models for the two countries are compared. The best models are then selected to show causal relations between macroeconomic variables. For the analysis, impulse-response functions are used to show the impact of a demand shock on GDP and other macro indicators.
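An illustrative sketch with simulated series (not the US or Czech data analysed in the thesis), assuming the vars package: fit a small reduced-form VAR and trace an orthogonalized impulse-response function, the tool used here to read off the effect of a shock in one variable on the others.

```r
# Simulated two-variable system standing in for macro series.
library(vars)

set.seed(7)
n    <- 200
gdp  <- arima.sim(list(ar = 0.5), n)
infl <- 0.3 * gdp + arima.sim(list(ar = 0.4), n)
dat  <- ts(cbind(gdp = gdp, infl = infl))

fit <- VAR(dat, p = 2, type = "const")        # reduced-form VAR(2)
ir  <- irf(fit, impulse = "gdp", response = "infl",
           n.ahead = 10, boot = TRUE)
plot(ir)   # response of infl to a one-standard-deviation shock in gdp
```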
6

The Strucplot Framework: Visualizing Multi-way Contingency Tables with vcd

Hornik, Kurt, Zeileis, Achim, Meyer, David 10 1900 (has links) (PDF)
This paper describes the "strucplot" framework for the visualization of multi-way contingency tables. Strucplot displays include hierarchical conditional plots such as mosaic, association, and sieve plots, and can be combined into more complex, specialized plots for visualizing conditional independence, GLMs, and the results of independence tests. The framework's modular design allows flexible customization of the plots' graphical appearance, including shading, labeling, spacing, and legend, by means of "graphical appearance control" functions. The framework is provided by the R package vcd.
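A brief example of the framework in use, assuming the vcd package and R's built-in HairEyeColor table (not data from the paper): a shaded mosaic display of a three-way table, followed by a conditional plot that facets the same display by the third variable.

```r
library(vcd)

# Shaded mosaic of the full three-way table; residual shading highlights
# departures from independence.
mosaic(~ Hair + Eye + Sex, data = HairEyeColor, shade = TRUE)

# Conditional mosaic plots of Hair x Eye within each level of Sex,
# the kind of display used to inspect conditional independence.
cotabplot(~ Hair + Eye | Sex, data = HairEyeColor, panel = cotab_mosaic)
```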
7

Computation of context as a cognitive tool

Sanscartier, Manon Johanne 09 November 2006
In the field of cognitive science, as well as the area of Artificial Intelligence (AI), the role of context has been investigated in many forms, and for many purposes. It is clear in both areas that consideration of contextual information is important. However, the significance of context has not been emphasized in the Bayesian networks literature. We suggest that consideration of context is necessary for acquiring knowledge about a situation and for refining current representational models that are potentially erroneous due to hidden independencies in the data. In this thesis, we make several contributions towards the automation of contextual consideration by discovering useful contexts from probability distributions. We show how context-specific independencies in Bayesian networks and discovery algorithms, traditionally used for efficient probabilistic inference, can contribute to the identification of contexts, and in turn can provide insight on otherwise puzzling situations. Also, consideration of context can help clarify otherwise counterintuitive puzzles, such as those that result in instances of Simpson's paradox. In the social sciences, the branch of attribution theory is context-sensitive. We suggest a method to distinguish between dispositional causes and situational factors by means of contextual models. Finally, we address the work of Cheng and Novick dealing with causal attribution by human adults. Their probabilistic contrast model makes use of contextual information, called focal sets, that must be determined by a human expert. We suggest a method for discovering complete focal sets from probabilistic distributions, without the human expert.
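A small base-R illustration of the core idea, context-specific independence, with simulated data (the variables and probabilities are invented): X and Y are constructed to be independent in the context Z = "a" but dependent in the context Z = "b", and a chi-squared test within each context picks this up.

```r
# Simulated example of context-specific independence (CSI).
set.seed(1)
m <- 2000
z <- sample(c("a", "b"), m, replace = TRUE)
x <- rbinom(m, 1, 0.5)
y <- ifelse(z == "b",
            rbinom(m, 1, 0.2 + 0.6 * x),   # depends on x in context z == "b"
            rbinom(m, 1, 0.5))             # independent of x in context z == "a"

for (ctx in c("a", "b")) {
  tab <- table(x[z == ctx], y[z == ctx])
  cat("context z =", ctx, " chi-squared p-value:",
      format.pval(chisq.test(tab)$p.value), "\n")
}
```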
8

The Strucplot Framework: Visualizing Multi-way Contingency Tables with vcd

Meyer, David, Zeileis, Achim, Hornik, Kurt January 2005 (has links) (PDF)
This paper describes the "strucplot" framework for the visualization of multi-way contingency tables. Strucplot displays include hierarchical conditional plots such as mosaic, association, and sieve plots, and can be combined into more complex, specialized plots for visualizing conditional independence, GLMs, and the results of independence tests. The framework's modular design allows flexible customization of the plots' graphical appearance, including shading, labeling, spacing, and legend, by means of graphical appearance control ("grapcon") functions. The framework is provided by the R package vcd. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
9

Collateral choice option valuation

Mollaret, Sébastian January 2015 (has links)
A bank borrowing money has to give some securities to the lender, which is called collateral. Different kinds of collateral can be posted, such as cash in different currencies or a stock portfolio, depending on the terms of the contract, which is called a Credit Support Annex (CSA). Those contracts specify eligible collateral, interest rates, the frequency of collateral posting, minimum transfer amounts, etc. This guarantee reduces the counterparty risk associated with this type of transaction. If a CSA allows for posting cash in different currencies as collateral, then the party posting collateral can, now and at each future point in time, choose which currency to post. This choice leads to optionality that needs to be accounted for when valuing even the most basic derivatives such as forwards or swaps. In this thesis, we deal with the valuation of embedded optionality in collateral contracts. We consider the case when collateral can be posted in two different currencies, which seems sufficient since collateral contracts are soon going to be simplified. This study is based on the conditional independence approach proposed by Piterbarg [8]. This method is compared to both Monte Carlo simulation and the finite-difference method. A practical application is finally presented with the example of a contract between Natixis and Barclays.
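A toy numerical sketch of the cheapest-to-deliver idea, under assumptions that are not the thesis's model: if the two collateral rate spreads at a given date are taken to be bivariate normal, the posting party effectively earns max(q1, q2), whose expectation has a simple closed form that can be checked against Monte Carlo, echoing the thesis's comparison of valuation methods.

```r
# Made-up parameters for the two collateral spreads at a single date.
mu  <- c(0.0010, 0.0015)   # means of q1, q2
sig <- c(0.0020, 0.0025)   # standard deviations
rho <- 0.6                 # correlation

# Closed form for E[max(q1, q2)] of a bivariate normal pair.
s <- sqrt(sig[1]^2 + sig[2]^2 - 2 * rho * sig[1] * sig[2])
d <- (mu[1] - mu[2]) / s
closed_form <- mu[1] * pnorm(d) + mu[2] * pnorm(-d) + s * dnorm(d)

# Monte Carlo check of the same expectation.
set.seed(42)
nsim <- 1e6
z1 <- rnorm(nsim)
z2 <- rho * z1 + sqrt(1 - rho^2) * rnorm(nsim)
mc <- mean(pmax(mu[1] + sig[1] * z1, mu[2] + sig[2] * z2))

c(closed_form = closed_form, monte_carlo = mc)
```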
