Machine learning (mind map)

1. Problem

1.1. Classification

1.2. Clustering

1.3. Regression

1.4. Anomaly detection

1.5. Association rules

1.6. Reinforcement learning

1.7. Structure prediction

1.8. Feature learning

1.9. Online learning

1.10. Semi-supervised learning

1.11. Grammar induction

2. Approach

2.1. Decision tree learning

2.1.1. Uses: classification tree analysis, regression tree analysis

2.1.2. Advantages: simple; requires little data preparation; uses a white-box model; handles both numerical and categorical data; can be validated by statistical tests; robust; performs well on large datasets

2.1.3. Limitations: learning an optimal tree is NP-hard, so greedy algorithms settle for local optima; can create over-complex trees that overfit; some concepts are hard to learn; biased in favor of attributes with more levels

2.1.4. Algorithms: ID3 (Iterative Dichotomiser 3); C4.5 (successor of ID3); CART (Classification And Regression Trees); CHAID (CHi-squared Automatic Interaction Detector); MARS (handles numerical data better); Conditional Inference Trees (statistics-based approach)
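The algorithms in the decision-tree node above (ID3, C4.5, CART) all grow trees by greedily choosing the split with the best score. A minimal sketch of the ID3 criterion, information gain, on a toy dataset (the data and function names are illustrative, not from the source):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(rows, labels):
    """Return the feature index with the largest information gain (ID3's greedy choice)."""
    base = entropy(labels)
    best_feat, best_gain = None, 0.0
    for f in range(len(rows[0])):
        remainder = 0.0
        for v in set(r[f] for r in rows):
            subset = [l for r, l in zip(rows, labels) if r[f] == v]
            remainder += len(subset) / len(labels) * entropy(subset)
        gain = base - remainder
        if gain > best_gain:
            best_feat, best_gain = f, gain
    return best_feat, best_gain

# Toy data: each row is (outlook, windy); splitting on "windy" separates the classes.
rows = [("sunny", True), ("sunny", False), ("rain", True), ("rain", False)]
labels = ["no", "yes", "no", "yes"]
feat, gain = best_split(rows, labels)  # feature 1 gives gain 1.0, feature 0 gives 0.0
```

The greedy choice here is exactly why learning is only locally optimal, as the limitations node notes: each split is picked in isolation, never revisited.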

2.2. Association rule learning

2.2.1. Discovers interesting relations between variables in large databases

2.2.2. Uses: web usage mining, intrusion detection, continuous production, bioinformatics
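The core step behind association rule learning is finding itemsets whose support (fraction of transactions containing them) clears a threshold. A stripped-down sketch of that step for item pairs, using toy transactions (data and names are illustrative; full Apriori also prunes and grows larger itemsets):

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Return item pairs whose support >= min_support (one Apriori-style counting pass)."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(t), 2):  # sorted so each pair has one canonical form
            counts[pair] += 1
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"bread", "milk", "butter"}, {"milk"}]
freq = frequent_pairs(transactions, min_support=0.5)
# ("bread", "milk") appears in 2 of 4 baskets -> support 0.5, so it is kept
```

From such frequent itemsets, rules like "bread implies milk" are then scored by confidence (support of the pair divided by support of the antecedent).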

2.3. Artificial neural network

2.3.1. Statistical learning models

2.3.2. Used to estimate or approximate functions that depend on a large number of inputs and are generally unknown

2.3.3. Neurons: adaptive weights, capable of approximating non-linear functions

2.3.4. An ANN is defined by three types of parameters: the interconnection pattern; the learning process for updating weights; the activation function that converts a neuron's weighted input into its output activation

2.3.5. Perspectives: mathematical (composition of functions), probabilistic (graphical model)

2.3.6. Learning paradigms: supervised learning (cost function: mean squared error, minimized by gradient descent; uses: pattern recognition, sequence data); unsupervised learning (cost function depends on prior knowledge; uses: estimation, filtering, clustering); reinforcement learning (uses: sequential decision-making, control problems)
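The supervised paradigm above (mean squared error plus gradient descent, adaptive weights, non-linear activation) can be sketched at its smallest scale: one sigmoid neuron learning the OR function. The data, learning rate, and epoch count are toy choices for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# OR truth table: inputs -> target
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]
b = 0.0
lr = 1.0
for _ in range(2000):
    for (x1, x2), y in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Gradient of 0.5 * (out - y)^2 w.r.t. the pre-activation (chain rule):
        grad = (out - y) * out * (1 - out)
        w[0] -= lr * grad * x1   # adaptive weights: each update follows the error signal
        w[1] -= lr * grad * x2
        b -= lr * grad

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
```

After training, rounding the neuron's output reproduces OR; a deeper network is the same loop with the chain rule applied layer by layer (backpropagation).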

2.4. Inductive logic programming

2.5. Support vector machines

2.6. Clustering

2.7. Bayesian networks

2.7.1. A probabilistic graphical model representing random variables and their conditional dependencies

2.7.2. Inference and learning: inferring unobserved variables, e.g. via variable elimination or clique tree propagation
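The inference node above names variable elimination and clique tree propagation; on a two-node network both reduce to plain enumeration, which is what this sketch uses. The network (Rain causes WetGrass) and its probabilities are toy numbers assumed for illustration:

```python
# Two-node Bayesian network: Rain -> WetGrass.
p_rain = {True: 0.2, False: 0.8}          # prior P(rain)
p_wet_given_rain = {True: 0.9, False: 0.1}  # P(wet=True | rain)

def posterior_rain_given_wet():
    """P(rain | wet=True): multiply along the chain, then normalize over the query variable."""
    joint = {r: p_rain[r] * p_wet_given_rain[r] for r in (True, False)}
    z = sum(joint.values())  # P(wet=True), the evidence probability
    return {r: joint[r] / z for r in joint}

post = posterior_rain_given_wet()
# post[True] = 0.18 / 0.26, i.e. observing wet grass raises P(rain) from 0.2 to about 0.69
```

On larger networks, variable elimination performs the same multiply-and-sum, but orders the sums over unobserved variables to keep intermediate factors small.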

2.8. Reinforcement learning

2.9. Representation learning

2.10. Similarity and metric learning

2.11. Sparse dictionary learning

2.12. Genetic algorithms