CaltechAUTHORS: Book Chapter
https://feeds.library.caltech.edu/people/Eberhardt-Frederick/book_section.rss
A Caltech Library Repository Feed
Last updated: Fri, 06 Sep 2024 18:56:37 -0700

Almost Optimal Intervention Sets for Causal Discovery
https://resolver.caltech.edu/CaltechAUTHORS:20190327-085859738
Year: 2008
DOI: 10.48550/arXiv.1206.3250
We conjecture that the worst case number of experiments necessary and sufficient to discover a causal graph uniquely given its observational Markov equivalence class can be specified as a function of the largest clique in the Markov equivalence class. We provide an algorithm that computes intervention sets that we believe are optimal for the above task. The algorithm builds on insights gained from the worst case analysis in Eberhardt et al. (2005) for sequences of experiments when all possible directed acyclic graphs over N variables are considered. A simulation suggests that our conjecture is correct. We also show that a generalization of our conjecture to other classes of possible graph hypotheses cannot be given easily, and in what sense the algorithm is then no longer optimal.

Noisy-OR Models with Latent Confounding
https://resolver.caltech.edu/CaltechAUTHORS:20190327-085849059
Year: 2011
DOI: 10.48550/arXiv.1202.3735
Given a set of experiments in which varying subsets of observed variables are subject to intervention, we consider the problem of identifiability of causal models exhibiting latent confounding. While identifiability is trivial when each experiment intervenes on a large number of variables, the situation is more complicated when only one or a few variables are subject to intervention per experiment. For linear causal models with latent variables Hyttinen et al. (2010) gave precise conditions for when such data are sufficient to identify the full model. While their result cannot be extended to discrete-valued variables with arbitrary cause-effect relationships, we show that a similar result can be obtained for the class of causal models whose conditional probability distributions are restricted to a 'noisy-OR' parameterization. We further show that identification is preserved under an extension of the model that allows for negative influences, and present learning algorithms that we test for accuracy, scalability and robustness.

On the number of experiments sufficient and in the worst case necessary to identify all causal relations among N variables
https://resolver.caltech.edu/CaltechAUTHORS:20190327-085852486
Year: 2012
DOI: 10.48550/arXiv.1207.1389
We show that if any number of variables are allowed to be simultaneously and independently randomized in any one experiment, log_2(N) + 1 experiments are sufficient and in the worst case necessary to determine the causal relations among N ≥ 2 variables when no latent variables, no sample selection bias and no feedback cycles are present. For all K, 0 < K < N/2, we provide an upper bound on the number of experiments required to determine causal structure when each experiment simultaneously randomizes K variables. For large N, these bounds are significantly lower than the N - 1 bound required when each experiment randomizes at most one variable. For k_(max) < N/2, we show that (N/k_(max) - 1) + N/(2k_(max)) log_2(k_(max)) experiments are sufficient and in the worst case necessary. We offer a conjecture as to the minimal number of experiments that are in the worst case sufficient to identify all causal relations among N observed variables that are a subset of the vertices of a DAG.

Causal discovery of linear cyclic models from multiple experimental data sets with overlapping variables
https://resolver.caltech.edu/CaltechAUTHORS:20190327-085855919
Year: 2012
DOI: 10.48550/arXiv.1210.4879
Much of scientific data is collected as randomized experiments intervening on some and observing other variables of interest. Quite often, a given phenomenon is investigated in several studies, and different sets of variables are involved in each study. In this article we consider the problem of integrating such knowledge, inferring as much as possible concerning the underlying causal structure with respect to the union of observed variables from such experimental or passive observational overlapping data sets. We do not assume acyclicity or joint causal sufficiency of the underlying data generating model, but we do restrict the causal relationships to be linear and use only second order statistics of the data. We derive conditions for full model identifiability in the most generic case, and provide novel techniques for incorporating an assumption of faithfulness to aid in inference. In each case we seek to establish what is and what is not determined by the data at hand.

Discovering cyclic causal models with latent variables: a general SAT-based procedure
https://resolver.caltech.edu/CaltechAUTHORS:20190327-085903347
Year: 2013
DOI: 10.48550/arXiv.1309.6836
We present a very general approach to learning the structure of causal models based on d-separation constraints, obtained from any given set of overlapping passive observational or experimental data sets. The procedure allows for both directed cycles (feedback loops) and the presence of latent variables. Our approach is based on a logical representation of causal pathways, which permits the integration of quite general background knowledge, and inference is performed using a Boolean satisfiability (SAT) solver. The procedure is complete in that it exhausts the available information on whether any given edge can be determined to be present or absent, and returns "unknown" otherwise. Many existing constraint-based causal discovery algorithms can be seen as special cases, tailored to circumstances in which one or more restricting assumptions apply. Simulations illustrate the effect of these assumptions on discovery and how the present algorithm scales.

Visual Causal Feature Learning
https://resolver.caltech.edu/CaltechAUTHORS:20190327-085913684
Year: 2015
DOI: 10.48550/arXiv.1412.2309
We provide a rigorous definition of the visual cause of a behavior that is broadly applicable to the visually driven behavior in humans, animals, neurons, robots and other perceiving systems. Our framework generalizes standard accounts of causal learning to settings in which the causal variables need to be constructed from micro-variables. We prove the Causal Coarsening Theorem, which allows us to gain causal knowledge from observational data with minimal experimental effort. The theorem provides a connection to standard inference techniques in machine learning that identify features of an image that correlate with, but may not cause, the target behavior. Finally, we propose an active learning scheme to learn a manipulator function that performs optimal manipulations on the image to automatically identify the visual cause of a target behavior. We illustrate our inference and learning algorithms in experiments based on both synthetic and real data.

Unsupervised Discovery of El Niño Using Causal Feature Learning on Microlevel Climate Data
https://resolver.caltech.edu/CaltechAUTHORS:20170530-090152140
Year: 2016
DOI: 10.48550/arXiv.1605.09370
We show that the climate phenomena of El Niño and La Niña arise naturally as states of macro-variables when our recent causal feature learning framework (Chalupka et al., 2015, 2016) is applied to micro-level measures of zonal wind (ZW) and sea surface temperatures (SST) taken over the equatorial band of the Pacific Ocean. The method identifies these unusual climate states on the basis of the relation between ZW and SST patterns without any input about past occurrences of El Niño or La Niña. The simpler alternatives of (i) clustering the SST fields while disregarding their relationship with ZW patterns, or (ii) clustering the joint ZW-SST patterns, do not discover El Niño. We discuss the degree to which our method supports a causal interpretation and use a low-dimensional toy example to explain its success over other clustering approaches. Finally, we propose a new robust and scalable alternative to our original algorithm (Chalupka et al., 2016), which circumvents the need for high-dimensional density learning.
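To make the baseline in alternative (i) above concrete, here is a minimal sketch, not the authors' algorithm: a tiny pure-Python 1-D k-means that clusters one field of synthetic "temperature anomaly" values on its own, ignoring any second variable. All data values and function names here are hypothetical illustrations.

```python
# Toy sketch of "alternative (i)": cluster one micro-level field by itself,
# disregarding its relation to any other variable. Hypothetical data.
import random

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Cluster scalar values into k groups with plain k-means."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assignment step: each value joins its nearest center.
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)
        # Update step: move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

# Two synthetic regimes: a "neutral" state near 0 and a "warm" state near 3.
data = [0.1, -0.2, 0.0, 0.3, 2.9, 3.1, 3.0, 2.8]
centers = kmeans_1d(data, k=2)
print(centers)  # one center near 0, one near 3
```

The point of the contrast in the abstract is that such single-field clustering can recover regimes in the marginal distribution, but it has no access to the ZW-SST relationship that the causal feature learning framework exploits.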