Machine learning methods are widely used to make sense of data and to decide which data should be considered, and interpretability is essential when developing a prediction model. The SHAP value, an index for assessing the contribution of input features to model learning, has been developed and has attracted considerable interest. Using decision-tree-based models, which are frequently applied to tabular data, this study demonstrates that SHAP values can estimate feature contributions relatively accurately. A game-theory-based importance measure is then used to identify significant test items in blood test data. The stepwise procedure commonly used to select test items does not assign consistent weights independent of the order in which items are entered, so the resulting weights are not always appropriate as measures of importance. In this study, a game-theory-based selection approach is proposed for weighting test items chosen with the stepwise method, and the approach is applied to extract test items deemed important from actual blood test data.
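As a minimal sketch of the game-theoretic weighting idea behind SHAP, the snippet below computes exact Shapley values for a small coalitional game. The characteristic function `v` here is a hypothetical stand-in (not from the paper) for the predictive worth of a subset of test items; unlike a stepwise procedure, the resulting weights do not depend on any selection order, because every coalition is averaged over.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley value for each player: the weighted average of
    its marginal contribution v(S ∪ {i}) - v(S) over all coalitions S."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                # Weight = |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Hypothetical characteristic function: a coalition's "worth" stands in
# for model quality when only those test items are available.
def v(S):
    base = {"A": 0.5, "B": 0.3, "C": 0.2}
    bonus = 0.1 if {"A", "B"} <= S else 0.0  # interaction between A and B
    return sum(base[p] for p in S) + bonus

phi = shapley_values(["A", "B", "C"], v)
print(phi)  # interaction bonus is split evenly between A and B
```

By the efficiency property, the Shapley values sum to the grand coalition's worth v({A, B, C}); the A–B interaction bonus is shared equally between A and B, illustrating how order-independent credit assignment differs from stepwise weighting.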