Causal Learning for Decision Making (CLDM)

International Conference on Learning Representations (ICLR)

April 26, 2020

@CausalIclr · #CLDM2020


Deep learning has enabled significant improvements in areas as diverse as computer vision, text understanding, and reinforcement learning. A major remaining challenge, however, is generalization outside of the i.i.d. setting, when we care about generalization or fast adaptation to distributions that differ from the training distribution. It has been argued that this requires learning not just the statistical correlations within the data, but the causal model underlying the data. Causal models exploit the fact that the conditional distribution of a target variable given its direct causal predictors remains identical under interventions on variables other than the target. This invariance idea is closely linked to causality and has been discussed, for example, under the term ‘modularity’ (Pearl, 2009; Schölkopf et al., 2012). Causal knowledge thus supports decision making in two ways: it allows us to predict the consequences of different actions under the given circumstances, and it helps us make diagnoses that suggest which interventions will be effective.

If the data were really generated by a composition of independent causal mechanisms (Peters et al., 2017), then there exists a good factorization of knowledge that mimics that structure. If, in addition, agents in the real world tend to change only one or very few high-level variables (or the mechanisms producing them) at each time step, then the assumption of small change (in the right representation) should generally be valid. We should therefore be able to obtain fast transfer by recovering a good approximation of the true causal decomposition into independent mechanisms, to the extent that observations and interventions can reveal those mechanisms.
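As a toy illustration of this invariance (a minimal hypothetical sketch, not taken from any of the referenced papers): in the linear structural causal model X := N_X, Y := 2X + N_Y, a regression of Y on X (the causal direction) recovers the same mechanism before and after an intervention do(X), whereas a regression of X on Y (the anticausal direction) shifts with the intervention.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    def fit_line(a, b):
        # Least-squares fit b ~ slope * a + intercept.
        slope, intercept = np.polyfit(a, b, deg=1)
        return round(float(slope), 2), round(float(intercept), 2)

    # Observational regime: X ~ N(0, 1), Y := 2*X + noise.
    x_obs = rng.normal(0.0, 1.0, n)
    y_obs = 2.0 * x_obs + rng.normal(0.0, 1.0, n)

    # Interventional regime: do(X) replaces X's mechanism; Y's mechanism is untouched.
    x_int = rng.normal(3.0, 2.0, n)
    y_int = 2.0 * x_int + rng.normal(0.0, 1.0, n)

    # The causal conditional is invariant: slope ~2.0, intercept ~0.0 in both regimes.
    print("causal     p(Y|X):", fit_line(x_obs, y_obs), fit_line(x_int, y_int))
    # The anticausal conditional changes with the intervention.
    print("anticausal p(X|Y):", fit_line(y_obs, x_obs), fit_line(y_int, x_int))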

The goal of this workshop is to investigate how much progress is possible by framing the learning problem beyond learning correlations, that is, by uncovering and leveraging causal relations.

Key questions to be addressed and discussed include:

  • What is the role of an underlying causal model in decision making?
  • What is the difference between a prediction made with a causal model and one made with a non-causal model?
  • The way current RL agents explore environments appears less intelligent than the way human learners explore. One reason for this disparity may be that humans, when faced with a novel environment, do not merely observe: they also interact with the world and affect it through their actions. Maintaining a causal model of the world allows a learner to entertain plausible hypotheses and design experiments to test them.
  • Maintaining a distributional belief about the agent's model of the world as a tool for exploration (minimize the entropy of that belief, maximize knowledge acquisition); a minimal sketch of this idea follows this list.
  • The importance of causality for advantageous decision making could also prove problematic: research on causal explanations has shown that people often have only rough, skeletal knowledge of causal mechanisms, which permits only coarse and sometimes incorrect predictions of consequences. Given that our causal knowledge is incomplete or sometimes wrong, basing decisions on causal considerations might even be harmful.
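As a toy illustration of the exploration idea above (a minimal hypothetical sketch; the hypotheses, likelihoods, and variable names below are invented for illustration): an agent maintains a belief over two causal hypotheses about a binary system, repeatedly performs the intervention do(X=1), and updates that belief by Bayes' rule; the entropy of the belief quantifies how much remains to be learned about the causal model.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical setup: under H1, X causes Y, so do(X=1) makes Y=1 likely;
    # under H0 there is no edge and Y remains an unbiased coin under do(X=1).
    P_Y1_GIVEN_DO_X1 = {"H1": 0.9, "H0": 0.5}
    TRUE_HYPOTHESIS = "H1"  # the environment's actual causal structure

    def belief_entropy(p):
        # Entropy (in bits) of the Bernoulli belief P(H1) = p.
        if p in (0.0, 1.0):
            return 0.0
        return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

    p_h1 = 0.5  # uniform prior over the two hypotheses
    for step in range(8):
        # Experiment: intervene with do(X=1) and observe Y from the true mechanism.
        y = rng.random() < P_Y1_GIVEN_DO_X1[TRUE_HYPOTHESIS]
        # Bayes update of P(H1 | y) using each hypothesis's interventional likelihood.
        lik_h1 = P_Y1_GIVEN_DO_X1["H1"] if y else 1.0 - P_Y1_GIVEN_DO_X1["H1"]
        lik_h0 = P_Y1_GIVEN_DO_X1["H0"] if y else 1.0 - P_Y1_GIVEN_DO_X1["H0"]
        p_h1 = lik_h1 * p_h1 / (lik_h1 * p_h1 + lik_h0 * (1.0 - p_h1))
        print(f"step {step}: P(H1) = {p_h1:.3f}, belief entropy = {belief_entropy(p_h1):.3f} bits")

A fuller exploration scheme would compare several candidate interventions and run the one with the largest expected reduction in belief entropy, i.e., the experiment with maximal expected information gain.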

Sponsors

Organizers

References

  • Pearl, J. (2009). Causality: Models, Reasoning, and Inference (2nd ed.). Cambridge University Press.
  • Peters, J., Janzing, D., and Schölkopf, B. (2017). Elements of Causal Inference: Foundations and Learning Algorithms. MIT Press.
  • Pearl, J. (2018). The Seven Pillars of Causal Reasoning with Reflections on Machine Learning.
  • Woodward, J. (2005). Making Things Happen: A Theory of Causal Explanation. Oxford University Press.
  • Bottou, L., Peters, J., Quiñonero-Candela, J., Charles, D. X., Chickering, D. M., Portugaly, E., Ray, D., Simard, P., and Snelson, E. (2013). Counterfactual Reasoning and Learning Systems: The Example of Computational Advertising. Journal of Machine Learning Research, 14.
  • Peters, J., Bühlmann, P., and Meinshausen, N. (2016). Causal Inference by Using Invariant Prediction: Identification and Confidence Intervals. Journal of the Royal Statistical Society, Series B.
  • Schölkopf, B., Janzing, D., Peters, J., Sgouritsa, E., Zhang, K., and Mooij, J. (2012). On Causal and Anticausal Learning. arXiv:1206.6471.
  • Dasgupta, I., Wang, J., Chiappa, S., Mitrovic, J., Ortega, P., Raposo, D., Hughes, E., Battaglia, P., Botvinick, M., and Kurth-Nelson, Z. (2019). Causal Reasoning from Meta-Reinforcement Learning. arXiv:1901.08162.
  • Bengio, Y., Deleu, T., Rahaman, N., Ke, R., Lachapelle, S., Bilaniuk, O., Goyal, A., and Pal, C. (2019). A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms. arXiv:1901.10912.
  • Ke, N. R., Bilaniuk, O., Goyal, A., Bauer, S., Larochelle, H., Pal, C., and Bengio, Y. (2019). Learning Neural Causal Models from Unknown Interventions. arXiv:1910.01075.
  • Parascandolo, G., et al. (2018). Learning Independent Causal Mechanisms. arXiv:1712.00961.
  • Buesing, L., Weber, T., Zwols, Y., Racanière, S., Guez, A., Lespiau, J.-B., and Heess, N. (2018). Woulda, Coulda, Shoulda: Counterfactually-Guided Policy Search. arXiv:1811.06272.
  • Suter, R., et al. (2019). Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness. ICML 2019.
  • Goyal, A., et al. (2019). Recurrent Independent Mechanisms. arXiv:1909.10893.
  • Rezende, D. J., et al. (2020). Causally Correct Partial Models for Reinforcement Learning. arXiv:2002.02836.
A big thanks to Jessica B. Hamrick for allowing us to borrow the template :-)