1. Data Traits
1.1. External Structure Traits
1.1.1. Multi-Source
1.1.2. Heterogeneity
1.2. Data Quality Traits
1.2.1. Missing Data
1.2.2. Noisy Data
1.3. Data Quantity Traits
1.3.1. Sample
1.3.1.1. Sampling With Replacement
1.3.1.2. Sampling Without Replacement
1.3.1.2.1. Random Sampling
1.3.1.2.2. Stratified Sampling
1.3.2. Attribute
1.3.2.1. Feature Selection
1.3.2.1.1. Information Gain
1.3.2.1.2. Genetic Algorithm
1.3.2.1.3. T-test
1.3.2.1.4. Stepwise Regression
1.3.2.1.5. Correlation Matrix
1.3.2.2. Feature Extraction
1.3.2.2.1. Principal Component Analysis
1.3.2.2.2. Encoders
1.4. Internal Information Traits
1.4.1. Class Imbalance
1.4.1.1. Data Level
1.4.1.1.1. Undersampling Technique
1.4.1.1.2. Oversampling Technique
1.4.1.1.3. Synthetic Minority Oversampling Technique
1.4.1.1.4. Clustering-based Undersampling
1.4.1.2. Algorithm Level
1.4.1.2.1. One-class Classifier
1.4.2. Sparsity
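
As a concrete illustration of the data-quantity and class-imbalance entries above, the following Python sketch shows stratified sampling (1.3.1.2.2) and simple random undersampling of the majority class (1.4.1.1.1). It assumes scikit-learn and NumPy are available; the synthetic dataset, class ratio, and variable names are illustrative, not taken from the survey.

```python
# Illustrative sketch: stratified sampling (1.3.1.2.2) and random
# undersampling of the majority class (1.4.1.1.1) on a synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# Stratified split preserves the 9:1 class ratio in the train and test folds.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Random undersampling: keep every minority sample and draw an equal-sized
# subset of the majority class without replacement.
minority = np.where(y_train == 1)[0]
majority = np.where(y_train == 0)[0]
keep = np.random.default_rng(0).choice(majority, size=minority.size, replace=False)
balanced_idx = np.concatenate([minority, keep])
X_bal, y_bal = X_train[balanced_idx], y_train[balanced_idx]
```
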
2. Learning Methods
2.1. Label Status
2.1.1. Active Learning
2.1.2. Reinforcement Learning
2.1.3. Unsupervised Learning
2.1.4. Semi-supervised Learning
2.1.5. Supervised Learning
2.2. Data Status
2.2.1. Multi-Modal Learning
2.2.2. Transfer Learning
2.2.3. Federated Learning
2.3. Structure Form
2.3.1. Ensemble Learning
2.3.2. Deep Learning
2.3.3. Representation Learning
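
One way to make the label-status distinction above concrete is a self-training loop, a common realization of semi-supervised learning (2.1.4). The base learner, confidence threshold, and iteration budget below are assumptions for the sketch rather than choices made in the reviewed work.

```python
# Minimal self-training sketch (semi-supervised learning, 2.1.4).
# Base classifier, confidence threshold, and iteration count are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:50] = True          # pretend only the first 50 labels are known

clf = LogisticRegression(max_iter=1000)
for _ in range(5):
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.95       # pseudo-label confident points
    if not confident.any():
        break
    idx = np.where(~labeled)[0][confident]
    y[idx] = clf.predict(X[idx])               # overwrite with pseudo-labels
    labeled[idx] = True
```
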
3. Classification Algorithms
3.1. Traditional Statistical Single Classifier
3.1.1. Logistic Regression
3.1.2. Linear Discriminant Analysis
3.1.3. Naive Bayes
3.2. Machine Learning Single Classifier
3.2.1. K-Nearest Neighbor
3.2.2. Support Vector Machine
3.2.2.1. Hyper-parameter Tuning
3.2.2.1.1. Grid Search
3.2.2.1.2. Random Search
3.2.2.1.3. Evolutionary Algorithms
3.2.2.1.4. Bayesian Optimization
3.2.3. Decision Trees
3.2.4. Artificial Neural Network
3.2.5. Graph Convolutional Network
3.3. Hybrid Multiple Classifier
3.3.1. Traditional + Intelligent
3.3.2. Intelligent + Intelligent
3.3.3. Clustering + Classification
3.4. Ensemble Classifier
3.4.1. Homogeneous
3.4.1.1. Bagging
3.4.1.1.1. Random Forest
3.4.1.2. Boosting
3.4.1.2.1. AdaBoost
3.4.1.2.2. Gradient Boosting
3.4.1.2.3. XGBoost
3.4.1.2.4. LightGBM
3.4.1.2.5. CatBoost
3.4.1.2.6. Stochastic Gradient Boosting
3.4.2. Heterogeneous
3.4.2.1. Stacking
3.4.2.2. Majority Voting
3.4.2.3. Weighted Average
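
The hyper-parameter tuning entries under 3.2.2.1 can be illustrated with a grid search (3.2.2.1.1) over a support vector machine; the parameter grid, 5-fold cross-validation, and AUC scoring below are illustrative assumptions, not settings reported in the survey.

```python
# Grid search over SVM hyper-parameters (3.2.2.1.1); grid values and CV
# settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1], "kernel": ["rbf"]}
search = GridSearchCV(SVC(), grid, cv=5, scoring="roc_auc")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```
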
4. Performance Evaluation
4.1. Prediction Metrics
4.1.1. Accuracy
4.1.2. Precision
4.1.3. Recall
4.1.4. F1-Score
4.1.5. Confusion Matrix
4.1.6. Area Under the ROC Curve
4.1.7. Kolmogorov-Smirnov Statistic
4.2. Stability Metrics
4.2.1. Sensitivity Analysis
4.2.2. Shapley Values
4.2.3. Local Interpretable Model-agnostic Explanations
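
The prediction metrics in 4.1 can be computed in a few lines; the sketch below uses a placeholder model and synthetic data, and estimates the Kolmogorov-Smirnov statistic (4.1.7) as the maximum gap between the TPR and FPR curves.

```python
# Prediction metrics from Section 4.1 on a held-out set; the model and data
# are placeholders. The KS statistic is computed as max(TPR - FPR).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix, roc_auc_score, roc_curve)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred, score = model.predict(X_te), model.predict_proba(X_te)[:, 1]

print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
print("F1       :", f1_score(y_te, pred))
print("confusion:\n", confusion_matrix(y_te, pred))
print("AUC      :", roc_auc_score(y_te, score))
fpr, tpr, _ = roc_curve(y_te, score)
print("KS       :", (tpr - fpr).max())
```
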