Congratulations to 2018 cohort student Jimiama Mafeni Mase on their research paper being published in the journal Neurocomputing.
You can read more about Jimiama’s PhD research here.
There is an overall lack of consensus regarding how the importance of features to machine learning model predictions should be quantified, making explanations of model predictions unreliable.
In addition, explanations depend on the specific machine learning approach employed and on the subset of data used when calculating the importance of features. To improve the reliability and interpretability of machine learning explanations, we introduce a novel fuzzy information fusion methodology.
The results show that our fuzzy feature importance fusion approach outperforms mean and majority-vote feature importance fusion methods in capturing the increased variation of feature importance coefficients caused by greater data dimensionality, complexity and noise. This is because our approach explores the data space more thoroughly, using multiple samples of the data to make decisions about the importance of features. It also uses distributions of the data to provide better definitions of feature importance, with soft boundaries that capture the intermediate uncertainties of feature importance classification.
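To illustrate the general idea of fuzzy feature importance fusion, here is a minimal sketch, not the paper's actual method: importance coefficients are computed on multiple samples of the data, each coefficient is assigned soft membership in "low", "medium" and "high" fuzzy sets, and the memberships are aggregated and defuzzified into a single fused score per feature. The function names, the triangular membership functions and the centroid defuzzification are all illustrative assumptions.

```python
import numpy as np

def triangular(x, a, b, c):
    # Triangular membership function: 0 outside [a, c], peak of 1 at b.
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_fuse(importance_samples):
    """Fuse feature importance coefficients across data samples (a sketch).

    importance_samples: shape (n_samples, n_features); each row holds the
    importance coefficients computed on one sample (e.g. bootstrap) of the
    data. Returns one defuzzified importance score per feature.
    """
    x = np.asarray(importance_samples, dtype=float)
    # Define low / medium / high fuzzy sets over the observed range, so the
    # soft boundaries adapt to the spread of the coefficients.
    lo, hi = x.min(), x.max()
    mid = (lo + hi) / 2.0
    centres = {"low": lo, "medium": mid, "high": hi}
    fused = []
    for col in x.T:  # one feature at a time
        # Soft membership of each sample's coefficient in each fuzzy set.
        mu = {
            "low":    triangular(col, lo - (mid - lo), lo, mid),
            "medium": triangular(col, lo, mid, hi),
            "high":   triangular(col, mid, hi, hi + (hi - mid)),
        }
        # Aggregate memberships across samples, then defuzzify via the
        # membership-weighted centroid of the set centres.
        weights = {k: v.mean() for k, v in mu.items()}
        total = sum(weights.values()) + 1e-12
        fused.append(sum(weights[k] * centres[k] for k in centres) / total)
    return np.array(fused)
```

For example, `fuzzy_fuse([[0.1, 0.9], [0.2, 0.8], [0.15, 0.85]])` fuses three samples' coefficients for two features, ranking the second feature as clearly more important while the soft set boundaries absorb the sample-to-sample variation that a simple mean or majority vote would flatten.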