
Causal Reasoning in Equivalence Classes

<p>Causality is central to scientific inquiry across many disciplines, including epidemiology, medicine, and economics. Researchers are usually interested not only in how two events are correlated, but also in whether one causes the other and, if so, how. In general, scientific practice seeks not just a surface description of the observed data, but deeper explanations, such as predicting the effects of interventions. The answers to such questions do not lie in the data alone; they require a qualitative understanding of the underlying data-generating process, knowledge that is articulated in a causal diagram.</p>
<p>And yet, delineating the true, underlying causal diagram requires knowledge and assumptions that are usually unavailable in many non-trivial, large-scale settings. Hence, this dissertation develops the necessary theory and algorithms toward realizing a data-driven framework for causal inference. More specifically, this work provides fundamental treatments of the following research questions:</p>
<p><strong>Effect Identification under Markov Equivalence.</strong> One common task in many data science applications is to answer questions about the effect of new interventions, e.g., "what would happen to <em>Y</em>, while observing <em>Z=z</em>, if we force <em>X</em> to take the value <em>x</em>?" Formally, this is known as <em>causal effect identification</em>, where the goal is to determine whether a post-interventional distribution is computable from the combination of an observational distribution and assumptions about the underlying domain represented by a causal diagram. In this dissertation, we take as the input of the task a less informative structure known as a partial ancestral graph (PAG), which represents a Markov equivalence class of causal diagrams and is learnable from observational data. We develop tools and algorithms for this relaxed setting and characterize the identifiable effects via necessary and sufficient conditions.</p>
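As a minimal illustration of the identification task, consider a hypothetical fully known diagram Z → X, Z → Y, X → Y (not the PAG setting treated in the dissertation, where only an equivalence class is available). Here the post-interventional quantity P(Y | do(X=x)) is identifiable from observational data via the classical backdoor adjustment formula, P(y | do(x)) = Σ_z P(y | x, z) P(z). The structural model and all parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical binary structural model over the diagram Z -> X, Z -> Y, X -> Y.
z = rng.binomial(1, 0.5, n)                        # observed confounder
x = rng.binomial(1, np.where(z == 1, 0.8, 0.2), n) # X depends on Z
y = rng.binomial(1, 0.2 + 0.4 * x + 0.2 * z, n)    # Y depends on X and Z

# Naive conditional estimate P(Y=1 | X=1) is biased by the confounder Z.
naive = y[x == 1].mean()

# Backdoor adjustment: sum_z P(Y=1 | X=1, Z=z) * P(Z=z).
adjusted = sum(
    y[(x == 1) & (z == v)].mean() * (z == v).mean() for v in (0, 1)
)

# By construction, the true P(Y=1 | do(X=1)) is 0.2 + 0.4 + 0.2*E[Z] = 0.7,
# while the naive conditional probability overshoots it.
print(naive, adjusted)
```

When only a PAG is available, the diagram above is not fully known, and deciding whether such an adjustment (or any other identification strategy) is licensed is precisely the problem the dissertation addresses.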
<p><strong>Causal Discovery from Interventions.</strong> A causal diagram imposes constraints on the data it generates; conditional independences are one such example. Given a mixture of observational and experimental data, the goal is to leverage the constraints imprinted in the data to infer the set of causal diagrams compatible with them. In this work, we consider soft interventions, in which the mechanism of an intervened variable is modified without fully eliminating the effect of its direct causes, and investigate two settings, where the targets of the interventions are either known or unknown to the data scientist. Accordingly, we introduce the first general graphical characterizations for testing whether two causal diagrams are indistinguishable given the constraints in the available data. We also develop algorithms that, given a mixture of observational and interventional data, learn a representation of the corresponding equivalence class.</p>
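A toy example of how interventional data tightens an equivalence class: the two-node diagrams X → Y and Y → X are Markov equivalent observationally, but a soft intervention on X (here, an illustrative mean shift that modifies X's mechanism without cutting any edges) separates them, since under X → Y the mechanism P(y | x) stays invariant across regimes while the marginal of Y shifts. The linear-Gaussian model and parameters below are assumptions for the sketch, not the dissertation's general characterization:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical linear-Gaussian model with true diagram X -> Y.
def sample(shift):
    x = rng.normal(shift, 1.0, n)        # soft intervention shifts X's mean
    y = 2.0 * x + rng.normal(0, 1.0, n)  # mechanism of Y is left untouched
    return x, y

x_obs, y_obs = sample(0.0)   # observational regime
x_int, y_int = sample(3.0)   # interventional regime (target X is known)

# The regression of Y on X recovers the same mechanism in both regimes...
slope_obs = np.polyfit(x_obs, y_obs, 1)[0]
slope_int = np.polyfit(x_int, y_int, 1)[0]

# ...while the marginal of Y is clearly not invariant, which rules out
# Y -> X: intervening on X would then leave P(Y) unchanged.
print(slope_obs, slope_int, y_obs.mean(), y_int.mean())
```

The invariances that do or do not hold across regimes are exactly the kind of constraints the graphical characterizations in this work exploit to decide which diagrams remain indistinguishable.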

DOI: 10.25394/pgs.21684488.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/21684488
Date: 07 December 2022
Creators: Amin Jaber (14227610)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/Causal_Reasoning_in_Equivalence_Classes/21684488
