{"author":[{"first_name":"Garvesh","full_name":"Raskutti, Garvesh","last_name":"Raskutti"},{"first_name":"Caroline","full_name":"Uhler, Caroline","last_name":"Uhler","orcid":"0000-0002-7008-0216","id":"49ADD78E-F248-11E8-B48F-1D18A9856A87"}],"article_number":"e183","_id":"2015","external_id":{"arxiv":["1307.0366"]},"quality_controlled":"1","extern":"1","intvolume":" 7","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","month":"04","title":"Learning directed acyclic graphs based on sparsest permutations","article_processing_charge":"No","doi":"10.1002/sta4.183","main_file_link":[{"url":"http://arxiv.org/abs/1307.0366","open_access":"1"}],"publication_status":"published","oa":1,"issue":"1","date_created":"2018-12-11T11:55:13Z","publist_id":"5061","citation":{"ista":"Raskutti G, Uhler C. 2018. Learning directed acyclic graphs based on sparsest permutations. STAT. 7(1), e183.","ieee":"G. Raskutti and C. Uhler, “Learning directed acyclic graphs based on sparsest permutations,” STAT, vol. 7, no. 1. Wiley, 2018.","ama":"Raskutti G, Uhler C. Learning directed acyclic graphs based on sparsest permutations. STAT. 2018;7(1). doi:10.1002/sta4.183","short":"G. Raskutti, C. Uhler, STAT 7 (2018).","mla":"Raskutti, Garvesh, and Caroline Uhler. “Learning Directed Acyclic Graphs Based on Sparsest Permutations.” STAT, vol. 7, no. 1, e183, Wiley, 2018, doi:10.1002/sta4.183.","apa":"Raskutti, G., & Uhler, C. (2018). Learning directed acyclic graphs based on sparsest permutations. STAT. Wiley. https://doi.org/10.1002/sta4.183","chicago":"Raskutti, Garvesh, and Caroline Uhler. “Learning Directed Acyclic Graphs Based on Sparsest Permutations.” STAT. Wiley, 2018. https://doi.org/10.1002/sta4.183."},"abstract":[{"text":"We consider the problem of learning a Bayesian network or directed acyclic graph model from observational data. A number of constraint‐based, score‐based and hybrid algorithms have been developed for this purpose. Statistical consistency guarantees of these algorithms rely on the faithfulness assumption, which has been shown to be restrictive especially for graphs with cycles in the skeleton. We here propose the sparsest permutation (SP) algorithm, showing that learning Bayesian networks is possible under strictly weaker assumptions than faithfulness. This comes at a computational price, thereby indicating a statistical‐computational trade‐off for causal inference algorithms. In the Gaussian noiseless setting, we prove that the SP algorithm boils down to finding the permutation of the variables with the sparsest Cholesky decomposition of the inverse covariance matrix, which is equivalent to ℓ0‐penalized maximum likelihood estimation. We end with a simulation study showing that in line with the proven stronger consistency guarantees, and the SP algorithm compares favourably to standard causal inference algorithms in terms of accuracy for a given sample size.","lang":"eng"}],"type":"journal_article","language":[{"iso":"eng"}],"year":"2018","volume":7,"date_published":"2018-04-17T00:00:00Z","oa_version":"Preprint","publication":"STAT","article_type":"original","date_updated":"2021-01-12T06:54:44Z","publisher":"Wiley","status":"public","day":"17"}