{"author":[{"last_name":"Raskutti","first_name":"Garvesh","full_name":"Raskutti, Garvesh"},{"last_name":"Uhler","id":"49ADD78E-F248-11E8-B48F-1D18A9856A87","first_name":"Caroline","orcid":"0000-0002-7008-0216","full_name":"Uhler, Caroline"}],"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","oa":1,"doi":"10.1002/sta4.183","quality_controlled":"1","volume":7,"oa_version":"Preprint","publication":"STAT","citation":{"apa":"Raskutti, G., & Uhler, C. (2018). Learning directed acyclic graphs based on sparsest permutations. STAT. Wiley. https://doi.org/10.1002/sta4.183","ama":"Raskutti G, Uhler C. Learning directed acyclic graphs based on sparsest permutations. STAT. 2018;7(1). doi:10.1002/sta4.183","chicago":"Raskutti, Garvesh, and Caroline Uhler. “Learning Directed Acyclic Graphs Based on Sparsest Permutations.” STAT. Wiley, 2018. https://doi.org/10.1002/sta4.183.","ieee":"G. Raskutti and C. Uhler, “Learning directed acyclic graphs based on sparsest permutations,” STAT, vol. 7, no. 1. Wiley, 2018.","ista":"Raskutti G, Uhler C. 2018. Learning directed acyclic graphs based on sparsest permutations. STAT. 7(1), e183.","mla":"Raskutti, Garvesh, and Caroline Uhler. “Learning Directed Acyclic Graphs Based on Sparsest Permutations.” STAT, vol. 7, no. 1, e183, Wiley, 2018, doi:10.1002/sta4.183.","short":"G. Raskutti, C. Uhler, STAT 7 (2018)."},"year":"2018","intvolume":" 7","language":[{"iso":"eng"}],"date_created":"2018-12-11T11:55:13Z","publisher":"Wiley","date_updated":"2021-01-12T06:54:44Z","_id":"2015","article_type":"original","issue":"1","article_processing_charge":"No","day":"17","type":"journal_article","publist_id":"5061","month":"04","external_id":{"arxiv":["1307.0366"]},"title":"Learning directed acyclic graphs based on sparsest permutations","date_published":"2018-04-17T00:00:00Z","abstract":[{"text":"We consider the problem of learning a Bayesian network or directed acyclic graph model from observational data. 
A number of constraint‐based, score‐based and hybrid algorithms have been developed for this purpose. Statistical consistency guarantees of these algorithms rely on the faithfulness assumption, which has been shown to be restrictive especially for graphs with cycles in the skeleton. We here propose the sparsest permutation (SP) algorithm, showing that learning Bayesian networks is possible under strictly weaker assumptions than faithfulness. This comes at a computational price, thereby indicating a statistical‐computational trade‐off for causal inference algorithms. In the Gaussian noiseless setting, we prove that the SP algorithm boils down to finding the permutation of the variables with the sparsest Cholesky decomposition of the inverse covariance matrix, which is equivalent to ℓ0‐penalized maximum likelihood estimation. We end with a simulation study showing that, in line with the proven stronger consistency guarantees, the SP algorithm compares favourably to standard causal inference algorithms in terms of accuracy for a given sample size.","lang":"eng"}],"main_file_link":[{"open_access":"1","url":"http://arxiv.org/abs/1307.0366"}],"extern":"1","status":"public","publication_status":"published","article_number":"e183"}