Machine Learning Algorithms Grouped by Similarity

1. Bayesian methods are those that explicitly apply Bayes’ Theorem to problems such as classification and regression. A short code sketch follows the list below.

1.1. Naive Bayes

1.2. Averaged One-Dependence Estimators (AODE)

1.3. Bayesian Belief Network (BBN)

1.4. Bayesian Network (BN)

1.5. Gaussian Naive Bayes

1.6. Multinomial Naive Bayes
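
As a concrete illustration of the Bayesian approach, here is a minimal sketch of Gaussian Naive Bayes (1.5); scikit-learn and the iris dataset are my own illustrative choices, not part of the original map.

```python
# Illustrative sketch (assumes scikit-learn): Gaussian Naive Bayes on the iris data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GaussianNB()                # assumes each feature is Gaussian within each class
model.fit(X_train, y_train)         # estimates per-class means, variances and priors
print(model.score(X_test, y_test))  # accuracy on held-out data
```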

2. Decision tree methods construct a model of decisions made based on actual values of attributes in the data. Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems. Decision trees are often fast and accurate, and a big favorite in machine learning. A minimal sketch follows the list below.

2.1. Classification and Regression Tree (CART)

2.2. Iterative Dichotomiser 3 (ID3)

2.3. C4.5

2.4. C5.0

2.5. Chi-squared Automatic Interaction Detection (CHAID)

2.6. Decision Stump

2.7. Conditional Decision Tree (CDT)

2.8. M5
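
A minimal decision tree sketch, assuming scikit-learn (whose tree learner is CART-style, 2.1); the dataset and the depth limit are illustrative choices, not from the source.

```python
# Illustrative sketch (assumes scikit-learn): a shallow CART-style tree.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)  # shallow tree for readability
tree.fit(X_train, y_train)         # learns axis-aligned splits on attribute values
print(tree.score(X_test, y_test))  # each prediction follows the forks down to a leaf
```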

3. Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction. Much effort is put into what types of weak learners to combine and the ways in which to combine them. This is a very powerful class of techniques and, as such, is very popular. A short sketch follows the list below.

3.1. Random Forest

3.2. Gradient Boosting Machines (GBM)

3.3. Boosting

3.4. Bagging (Bootstrapped Aggregation)

3.5. AdaBoost

3.6. Blending (Stacked Generalization)

3.7. Gradient Boosting Regression Trees (GBRT)
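
A hedged sketch contrasting a bagging-style ensemble (Random Forest, 3.1) with a boosting-style one (Gradient Boosting, 3.2), both built on decision trees; scikit-learn, the dataset and the estimator counts are my own assumptions.

```python
# Illustrative sketch (assumes scikit-learn): two tree ensembles on the same data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

bagged = RandomForestClassifier(n_estimators=200, random_state=0)  # bagging + random feature subsets
boosted = GradientBoostingClassifier(random_state=0)               # trees added sequentially to fix residual errors

print("random forest:", cross_val_score(bagged, X, y, cv=5).mean())
print("gradient boosting:", cross_val_score(boosted, X, y, cv=5).mean())
```

The contrast is the point: the forest averages many independently trained trees, while the boosted model grows trees one after another, each correcting the errors of the ensemble so far.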

4. Artificial Neural Networks are models that are inspired by the structure and/or function of biological neural networks. They are a class of pattern-matching methods commonly used for regression and classification problems, but are really an enormous subfield comprising hundreds of algorithms and variations for all manner of problem types. A small example follows the list below.

4.1. Radial Basis Function Network (RBFN)

4.2. Perceptron

4.3. Back Propagation

4.4. Hopfield Network
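
A minimal sketch of a small feed-forward network trained by back-propagation (4.3); scikit-learn's MLPClassifier, the digits dataset and the layer size are my own stand-ins, since the map names no library.

```python
# Illustrative sketch (assumes scikit-learn): a small multi-layer perceptron.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X = StandardScaler().fit_transform(X)  # neural networks train better on scaled inputs
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)          # weights adjusted via back-propagated gradients
print(net.score(X_test, y_test))
```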

5. Clustering, like regression, describes both the class of problem and the class of methods. Clustering methods are typically organized by modeling approach, such as centroid-based and hierarchical. All methods are concerned with using the inherent structures in the data to best organize the data into groups of maximum commonality. A k-means sketch follows the list below.

5.1. k-means

5.2. k-medians

5.3. Expectation Maximization
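
A minimal sketch of centroid-based clustering with k-means (5.1); scikit-learn and the synthetic blob data are illustrative choices, not from the source.

```python
# Illustrative sketch (assumes scikit-learn): k-means on synthetic data with 3 natural groups.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(X)  # alternates assigning points to centroids and moving centroids
print(km.cluster_centers_)  # one centroid per discovered group
```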

6. Regression is concerned with modeling the relationship between variables, iteratively refined using a measure of error in the predictions made by the model. Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This may be confusing because we can use regression to refer to the class of problem and the class of algorithm. Really, regression is a process. A short sketch follows the list below.

6.1. Linear Regression

6.2. Ordinary Least Squares Regression (OLSR)

6.3. Step-wise Regression

6.4. Logistic Regression

6.5. Multivariate Adaptive Regression Splines (MARS)

6.6. Locally Estimated Scatterplot Smoothing (LOESS)
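
A minimal linear regression sketch (6.1); the synthetic data (a noisy line y = 3x + 2) and scikit-learn are my own choices for illustration.

```python
# Illustrative sketch (assumes scikit-learn and NumPy): fit a line by minimizing prediction error.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(scale=1.0, size=100)  # noisy line: y = 3x + 2

reg = LinearRegression().fit(X, y)  # ordinary least squares under the hood
print(reg.coef_, reg.intercept_)    # should recover roughly [3.0] and 2.0
```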

7. Like clustering methods, dimensionality reduction methods seek and exploit the inherent structure in the data, but in this case in an unsupervised manner, in order to summarize or describe data using less information. This can be useful to visualize high-dimensional data or to simplify data which can then be used in a supervised learning method. Many of these methods can be adapted for use in classification and regression. A PCA sketch follows the list below.

7.1. Principal Component Analysis (PCA)

7.2. Partial Least Squares Regression (PLSR)

7.3. Sammon Mapping

7.4. Multidimensional Scaling (MDS)

7.5. Projection Pursuit

7.6. Principal Component Regression (PCR)

7.7. Discriminant Analysis

7.7.1. Linear

7.7.2. Regularized

7.7.3. Quadratic

7.7.4. Flexible

7.7.5. Mixture

7.7.6. Partial Least Squares
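
A minimal PCA sketch (7.1) compressing 64 pixel features into 2 components for visualization; scikit-learn and the bundled digits dataset are illustrative choices, not from the source.

```python
# Illustrative sketch (assumes scikit-learn): PCA projection to 2 dimensions.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, y = load_digits(return_X_y=True)   # 1797 samples x 64 features

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)           # projects onto the directions of maximum variance
print(X_2d.shape)                     # (1797, 2)
print(pca.explained_variance_ratio_)  # fraction of variance kept by each component
```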

8. Regularization methods are extensions made to another method (typically regression methods) that penalize models based on their complexity, favoring simpler models that are also better at generalizing. I have listed regularization algorithms separately here because they are popular, powerful and generally simple modifications made to other methods. A comparison sketch follows the list below.

8.1. Ridge Regression

8.2. Least Absolute Shrinkage and Selection Operator (LASSO)

8.3. Elastic Net

8.4. Least Angle Regression (LARS)
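
A hedged sketch comparing three of the penalized regressions above on one dataset; scikit-learn, the diabetes dataset and the alpha values are my own choices, not from the source.

```python
# Illustrative sketch (assumes scikit-learn): Ridge vs. LASSO vs. Elastic Net.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import ElasticNet, Lasso, Ridge

X, y = load_diabetes(return_X_y=True)

for model in (Ridge(alpha=1.0),        # L2 penalty: shrinks coefficients toward zero
              Lasso(alpha=0.1),        # L1 penalty: can drive coefficients exactly to zero
              ElasticNet(alpha=0.1)):  # mix of L1 and L2 penalties
    model.fit(X, y)
    n_used = sum(abs(c) > 1e-8 for c in model.coef_)
    print(type(model).__name__, "features kept:", n_used)
```

The "features kept" counts typically differ because Ridge only shrinks coefficients, while the L1-based penalties can remove features entirely.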

9. Association rule learning methods extract rules that best explain observed relationships between variables in data. These rules can discover important and commercially useful associations in large multidimensional datasets that can be exploited by an organization. A tiny One Rule (OneR) sketch follows the list below.

9.1. Cubist

9.2. One Rule (OneR)

9.3. Zero Rule (ZeroR)

9.4. Repeated Incremental Pruning to Produce Error Reduction (RIPPER)
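
One Rule (9.2) is simple enough to sketch directly: for each feature, map each of its values to that value's most common class, then keep the single feature whose rule makes the fewest errors. The pure-Python implementation and the toy weather data below are my own illustration, not code from the source.

```python
# Illustrative sketch: a minimal One Rule (OneR) learner for categorical features.
from collections import Counter

def one_r(rows, labels):
    """rows: list of feature tuples (categorical); labels: list of classes."""
    best_feature, best_rule, best_errors = None, None, len(labels) + 1
    for f in range(len(rows[0])):
        # For each value of feature f, predict its most common class.
        by_value = {}
        for row, label in zip(rows, labels):
            by_value.setdefault(row[f], Counter())[label] += 1
        rule = {value: counts.most_common(1)[0][0] for value, counts in by_value.items()}
        errors = sum(rule[row[f]] != label for row, label in zip(rows, labels))
        if errors < best_errors:
            best_feature, best_rule, best_errors = f, rule, errors
    return best_feature, best_rule

# Tiny toy dataset: (outlook, windy) -> play?
rows = [("sunny", "no"), ("sunny", "yes"), ("rainy", "yes"), ("overcast", "no")]
labels = ["yes", "yes", "no", "yes"]
print(one_r(rows, labels))  # picks the outlook feature, which classifies this toy data perfectly
```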

10. Instance-based learning models a decision problem with instances or examples of training data that are deemed important or required by the model. Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods and memory-based learning. Focus is put on the representation of the stored instances and on the similarity measures used between instances. A kNN sketch follows the list below.

10.1. k-Nearest Neighbors (kNN)

10.2. Learning Vector Quantization (LVQ)

10.3. Self-Organizing Map (SOM)

10.4. Locally Weighted Learning (LWL)
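
A minimal kNN sketch (10.1); scikit-learn, the iris dataset and k=5 are illustrative choices, not from the source.

```python
# Illustrative sketch (assumes scikit-learn): k-Nearest Neighbors.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)  # similarity measure: Euclidean distance by default
knn.fit(X_train, y_train)                  # "training" is essentially memorizing the instances
print(knn.score(X_test, y_test))           # each prediction is a vote among the 5 closest stored examples
```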

11. Hierarchical Clustering. In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types, illustrated by the sketch after this list:

11.1. Agglomerative: a "bottom up" approach in which each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.

11.2. Divisive: a "top down" approach in which all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.
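
A minimal sketch of the agglomerative ("bottom up") strategy; scikit-learn, the synthetic data and Ward linkage are my own illustrative choices.

```python
# Illustrative sketch (assumes scikit-learn): agglomerative hierarchical clustering.
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

# Each point starts as its own cluster; the closest pairs of clusters are merged
# repeatedly until only n_clusters remain.
agg = AgglomerativeClustering(n_clusters=3, linkage="ward")
labels = agg.fit_predict(X)
print(labels[:20])
```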

12. Deep Learning methods are a modern update to Artificial Neural Networks that exploit abundant cheap computation. They are concerned with building much larger and more complex neural networks, and many methods are concerned with semi-supervised learning problems where large datasets contain very little labeled data. A small network definition follows the list below.

12.1. Deep Boltzmann Machine (DBM)

12.2. Deep Belief Network (DBN)

12.3. Convolutional Neural Networks (CNN)

12.4. Stacked Auto Encoders
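
A minimal definition of a tiny convolutional network (12.3); PyTorch, the layer sizes and the 28x28 input shape are my own assumptions, not part of the map.

```python
# Illustrative sketch (assumes PyTorch): a tiny CNN for 28x28 grayscale images.
import torch
from torch import nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # learn 8 local filters over a 1-channel image
    nn.ReLU(),
    nn.MaxPool2d(2),                            # downsample 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                 # map pooled features to 10 class scores
)

x = torch.randn(4, 1, 28, 28)  # a fake batch of four 28x28 grayscale images
print(cnn(x).shape)            # torch.Size([4, 10])
```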

13. Learning Styles

13.1. Supervised Learning. Input data is called training data and has a known label or result, such as spam/not-spam or a stock price at a time. A model is prepared through a training process in which it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data. Example problems are classification and regression. Example algorithms include Logistic Regression and the Back Propagation Neural Network.

13.2. Unsupervised Learning. Input data is not labeled and does not have a known result. A model is prepared by deducing structures present in the input data. This may be to extract general rules, it may be through a mathematical process to systematically reduce redundancy, or it may be to organize data by similarity. Example problems are clustering, dimensionality reduction and association rule learning. Example algorithms include the Apriori algorithm and k-Means.

13.3. Semi-Supervised Learning. Input data is a mixture of labeled and unlabeled examples. There is a desired prediction problem, but the model must learn the structures to organize the data as well as make predictions. Example problems are classification and regression. Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabeled data.

13.4. Reinforcement learning (RL) is learning by interacting with an environment. An RL agent learns from the consequences of its actions, rather than from being explicitly taught and it selects its actions on basis of its past experiences (exploitation) and also by new choices (exploration), which is essentially trial and error learning. The reinforcement signal that the RL-agent receives is a numerical reward, which encodes the success of an action's outcome, and the agent seeks to learn to select actions that maximize the accumulated reward over time.