Document Type

Article

Publication Date

5-20-2022

Abstract

Missing values in tabular data restrict the use and performance of machine learning and therefore must be imputed. Arguably the most popular imputation algorithm is multiple imputation by chained equations (MICE), which estimates missing values through linear conditioning on observed values. This paper proposes methods to improve both the imputation accuracy of MICE and the classification accuracy of imputed data by replacing MICE’s linear regressors with ensemble learning and deep neural networks (DNN). Imputation accuracy is further improved by characterizing individual samples with cluster labels (CISCL) obtained from the training data. Our extensive analyses of six tabular data sets with up to 80% missing values and three missing types (missing completely at random, missing at random, missing not at random) reveal that ensemble or deep learning within MICE is superior to the baseline MICE (b-MICE), and both are consistently outperformed by their CISCL-augmented counterparts. Results show that CISCL + b-MICE outperforms b-MICE for all percentages and types of missing values. In most experimental cases, our proposed DNN-based MICE and gradient boosting MICE plus CISCL (GB-MICE-CISCL) outperform seven state-of-the-art imputation algorithms. The classification accuracy of GB-MICE-imputed data is further improved by our proposed GB-MICE-CISCL imputation method across all percentages of missing values. Results also reveal a shortcoming of the MICE framework at high percentages of missing values (50%) and when values are missing not at random. This paper provides a generalized approach to identifying the best imputation model for a tabular data set based on the percentage and type of missing values.
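
The sketch below illustrates the general idea in Python. It assumes scikit-learn's IterativeImputer as a stand-in for the MICE framework, HistGradientBoostingRegressor as the gradient boosting conditional model, and k-means cluster labels (computed here from a mean-imputed copy of the training data, with an illustrative n_clusters=5) appended as an extra conditioning feature. It is a minimal sketch under these assumptions, not the paper's implementation of GB-MICE-CISCL.

```python
# Minimal sketch: MICE-style iterative imputation with a gradient boosting
# regressor replacing the default linear model, plus k-means cluster labels
# appended as an additional conditioning feature (an approximation of CISCL).
# Values such as n_clusters=5 and the 30% missing rate are illustrative only.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, SimpleImputer
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
X[rng.random(X.shape) < 0.3] = np.nan  # ~30% values missing completely at random

# Cluster labels derived from a crude mean-imputed view of the training data.
X_mean = SimpleImputer(strategy="mean").fit_transform(X)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_mean)

# Append the cluster label as an extra feature that MICE can condition on.
X_aug = np.column_stack([X, labels.astype(float)])

# Chained-equations imputation with gradient boosting as the conditional model.
imputer = IterativeImputer(
    estimator=HistGradientBoostingRegressor(random_state=0),
    max_iter=10,
    random_state=0,
)
X_imputed = imputer.fit_transform(X_aug)[:, : X.shape[1]]  # drop the label column
```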
