Causal inference is a critical task in fields such as healthcare, economics, marketing, and education. Recently, there have been significant advances through the application of machine learning techniques, especially deep neural networks. Unfortunately, to date many of the proposed methods have been evaluated on different (data, software/hardware, hyperparameter) setups, and consequently it is nearly impossible to compare the efficacy of the available methods or to reproduce the results presented in the original research manuscripts. In this paper, we propose a causal inference toolbox (CauseBox) that addresses the aforementioned problems. At the time of publication, the toolbox includes seven state-of-the-art causal inference methods and two benchmark datasets. By providing convenient command-line and GUI-based interfaces, the CauseBox toolbox helps researchers fairly compare state-of-the-art methods in their chosen application context against benchmark datasets. The code is made public at github.com/paras2612/CauseBox.

Original language: English (US)
Title of host publication: CIKM 2021 - Proceedings of the 30th ACM International Conference on Information and Knowledge Management
Publisher: Association for Computing Machinery
Number of pages: 5
ISBN (Electronic): 9781450384469
State: Published - Oct 26 2021
Event: 30th ACM International Conference on Information and Knowledge Management, CIKM 2021 - Virtual, Online, Australia
Duration: Nov 1 2021 - Nov 5 2021

Publication series

Name: International Conference on Information and Knowledge Management, Proceedings


Conference: 30th ACM International Conference on Information and Knowledge Management, CIKM 2021
City: Virtual, Online


Keywords

  • causal inference
  • deep learning
  • treatment effect estimation

ASJC Scopus subject areas

  • General Business, Management and Accounting
  • General Decision Sciences


Dive into the research topics of 'CauseBox: A Causal Inference Toolbox for Benchmarking Treatment Effect Estimators with Machine Learning Methods'.
