Informatica


Counterfactual Explanation of Machine Learning Survival Models
Volume 32, Issue 4 (2021), pp. 817–847
Maxim Kovalev, Lev Utkin, Frank Coolen, Andrei Konstantinov
https://doi.org/10.15388/21-INFOR468
Pub. online: 9 December 2021 · Type: Research Article · Open Access

Received: 1 December 2020
Accepted: 1 December 2021
Published: 9 December 2021

Abstract

A method for counterfactual explanation of machine learning survival models is proposed. One of the difficulties of solving the counterfactual explanation problem is that the classes of examples are implicitly defined through outcomes of a machine learning survival model in the form of survival functions. A condition that establishes the difference between survival functions of the original example and the counterfactual is introduced. This condition is based on using a distance between mean times to event. It is shown that the counterfactual explanation problem can be reduced to a standard convex optimization problem with linear constraints when the explained black-box model is the Cox model. For other black-box models, it is proposed to apply the well-known Particle Swarm Optimization algorithm. Numerical experiments with real and synthetic data demonstrate the proposed method.
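The search described in the abstract — find an example close to the original whose predicted mean time to event differs by at least a required margin — can be sketched with a plain Particle Swarm Optimization loop. The sketch below is a hypothetical toy, not the authors' implementation: `mean_time_to_event` stands in for the black-box survival model (here a Cox-type formula with made-up coefficients), and the margin constraint is handled with a simple penalty term.

```python
import math
import random

random.seed(0)

def mean_time_to_event(x):
    # Hypothetical black-box survival model: a Cox-type mean time to event,
    # T(x) = T0 * exp(-beta . x). In practice this would be an opaque model.
    beta = [0.8, -0.5, 0.3]
    return 10.0 * math.exp(-sum(b * xi for b, xi in zip(beta, x)))

def objective(x, x0, r, lam=100.0):
    # Distance to the original example, plus a penalty that is zero only when
    # the mean times to event of x and x0 differ by at least the margin r.
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x0)))
    gap = abs(mean_time_to_event(x) - mean_time_to_event(x0))
    return dist + lam * max(0.0, r - gap)

def pso_counterfactual(x0, r, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    d = len(x0)
    # Initialize particles around the original example.
    pos = [[x0[j] + random.uniform(-1, 1) for j in range(d)] for _ in range(n_particles)]
    vel = [[0.0] * d for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p, x0, r) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for j in range(d):
                r1, r2 = random.random(), random.random()
                # Standard velocity update: inertia + cognitive + social terms.
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                pos[i][j] += vel[i][j]
            f = objective(pos[i], x0, r)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest

x0 = [0.5, 0.2, -0.1]
cf = pso_counterfactual(x0, r=2.0)
```

The returned `cf` is a nearby example whose mean time to event differs from that of `x0` by roughly the margin `r`; because PSO only queries the model, the same loop applies to any black-box survival model, which is the role PSO plays in the paper for non-Cox models.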


Copyright
© 2021 Vilnius University
Open access article under the CC BY license.

Keywords
interpretable model, explainable AI, survival analysis, censored data, convex optimization, counterfactual explanation, Cox model, Particle Swarm Optimization

Funding
This work is supported by the Russian Science Foundation under grant 21-11-00116.


INFORMATICA

  • Online ISSN: 1822-8844
  • Print ISSN: 0868-4952