ML Mind Map

1. Types of learning

1.1. Supervised Learning

1.1.1. theory

1.1.1.1. Supervised machine learning involves human input to manage the learning process.

1.1.1.1.1. Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs.

1.1.1.2. The easiest way to understand supervised machine learning is to think of it as involving an input variable (x) and an output variable (y). You use an algorithm to learn a mapping function that connects the input to the output. In this scenario, humans provide the input, the desired output, and the algorithm.

1.1.2. Models

1.1.2.1. Regression

1.1.2.1.1. learning outcome

1.1.2.1.2. lessons

1.1.2.1.3. all slide

1.1.2.1.4. ML flowchart

1.1.2.1.5. concept

1.1.2.1.6. non-parametric regression

1.1.2.1.7. parametric regression

1.1.2.1.8. parametric vs nonparametric regression

1.1.2.2. Classification

1.1.2.2.1. learning outcome

1.1.2.2.2. lesson

1.1.2.2.3. ML class representation, Prof. Andrew Ng

1.1.2.2.4. University of Washington: all slides

1.1.2.2.5. content

1.1.2.2.6. Stanford ML course notes, Prof. Andrew Ng

1.1.2.2.7. model and algorithms

1.1.2.2.8. Metrics

1.1.2.2.9. Evaluation, Handling Huge Dataset, Machine Learning System Design

1.1.2.2.10. Feed data

1.1.3. Draft

1.1.3.1. Predictive Model

1.1.3.2. we have labeled data

1.1.3.3. The main types of supervised learning problems include regression and classification problems

1.1.4. List of Common Algorithms

1.1.4.1. Linear Regression

1.1.4.2. Logistic Regression

1.1.4.3. Random Forest

1.1.4.4. Gradient Boosted Trees

1.1.4.5. Support Vector Machines (SVM)

1.1.4.6. Decision Trees

1.1.4.7. Naive Bayes

1.1.4.8. k-Nearest Neighbors

1.1.4.9. Linear Discriminant Analysis

1.1.4.10. Neural Networks (Multilayer Perceptron)

1.1.4.11. Similarity learning

1.1.5. Application

1.1.5.1. price prediction and trend forecasting in sales, retail commerce, and stock trading

1.1.5.2. Speech recognition

1.1.5.3. Pattern recognition

1.1.5.4. Spam detection

1.1.5.5. Information retrieval

1.1.5.6. Learning to rank

1.1.5.7. Bioinformatics

1.1.5.8. Cheminformatics

1.1.5.9. Quantitative structure–activity relationship

1.1.5.10. Database marketing

1.1.5.11. Handwriting recognition

1.1.5.12. Information extraction

1.1.5.13. Object recognition in computer vision

1.1.5.14. Optical character recognition

1.2. Unsupervised Learning

1.2.1. Unsupervised learning algorithms take a set of data that contains only inputs, and find structure in the data, like grouping or clustering of data points.

1.2.2. learning outcome

1.2.2.1. Practice

1.2.2.1.1. Implement the Nearest Neighbor Search algorithm from scratch in Python

1.2.2.1.2. Implement the K-Nearest Neighbor Search algorithm from scratch in Python

1.2.2.1.3. Implement the distance metrics using scikit-learn

1.2.2.1.4. Implement the K-means algorithm using scikit-learn

1.2.2.1.5. Implement the K-means algorithm with MapReduce

1.2.2.1.6. Practice the EM algorithm on real data with scikit-learn

1.2.2.1.7. Practice the LDA algorithm through examples

1.2.2.1.8. Practice the HMM algorithm with real data

1.2.2.1.9. Practice clustering and retrieval on real problems

1.2.2.2. Theory

1.2.2.2.1. Know what clustering and retrieval are in machine learning

1.2.2.2.2. Have general knowledge about unsupervised learning

1.2.2.2.3. Understand how to solve real problems by applying clustering and retrieval

1.2.2.2.4. List common clustering and retrieval algorithms

1.2.2.2.5. Understand how the Nearest Neighbor Search algorithm works

1.2.2.2.6. Understand the K-Nearest Neighbor Search algorithm and its applications

1.2.2.2.7. Grasp the importance of data representations and their applications

1.2.2.2.8. Know the ideas behind distance metrics

1.2.2.2.9. Understand how to scale K-NN search using KD-trees

1.2.2.2.10. Know about locality-sensitive hashing for approximate NN search

1.2.2.2.11. Understand the K-means algorithm

1.2.2.2.12. Know how to apply the K-means algorithm to real clustering problems

1.2.2.2.13. Learn techniques to scale the K-means algorithm with MapReduce

1.2.2.2.14. Know the motivation and foundations of mixture models

1.2.2.2.15. Understand the mixture-of-Gaussians mechanism and how to apply it to clustering problems

1.2.2.2.16. Know the building blocks of Expectation Maximization

1.2.2.2.17. Understand how to estimate cluster parameters in Expectation Maximization

1.2.2.2.18. Understand the mechanics of the EM algorithm

1.2.2.2.19. Understand how the EM algorithm relates to K-means

1.2.2.2.20. Understand Latent Dirichlet Allocation

1.2.2.2.21. Understand the goal of LDA inference

1.2.2.2.22. Know about Bayesian inference via Gibbs sampling

1.2.2.2.23. Understand the idea of collapsed Gibbs sampling for LDA

1.2.2.2.24. Have general knowledge about hierarchical clustering

1.2.2.2.25. Understand hidden Markov models

1.2.3. Document representation

1.2.3.1. Bag of words model

1.2.3.1.1. Count the number of occurrences of each dictionary word in the document (word counts)

1.2.3.2. TF-IDF (term frequency–inverse document frequency)

1.2.3.2.1. Emphasizes important words by down-weighting terms that appear across many documents
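
A minimal sketch of both representations using scikit-learn (the library used throughout this course); the three-document corpus is invented for illustration:

```python
# Bag-of-words and TF-IDF document representations with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are popular pets",
]

# Bag of words: raw word counts per document.
bow = CountVectorizer()
counts = bow.fit_transform(docs)
print(bow.get_feature_names_out())
print(counts.toarray())

# TF-IDF: down-weights words common to many documents,
# emphasizing the rarer, more informative ones.
tfidf = TfidfVectorizer()
weights = tfidf.fit_transform(docs)
print(weights.toarray().round(2))
```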

1.2.4. distance metrics

1.2.4.1. In 1D, just Euclidean distance

1.2.4.1.1. distance(xi, xq) = |xi − xq|. In multiple dimensions, this generalizes to distance(xi, xq) = sqrt(Σ_d (xi[d] − xq[d])²)

1.2.4.2. Scaled Euclidean distance

1.2.4.2.1. Formally, this is achieved by weighting each dimension: distance(xi, xq) = sqrt(Σ_d a_d (xi[d] − xq[d])²), where a_d is the weight on feature d

1.2.4.2.2. Scaled Euclidean distance can also be defined in terms of an inner product

1.2.4.3. (Non-scaled) Euclidean distance

1.2.4.4. Similarity

1.2.4.4.1. Cosine similarity: the normalized inner product

1.2.4.5. Other distance metrics

1.2.4.6. Combining distance metrics
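
A small NumPy sketch of the metrics above; the per-feature weights used for the scaled variant are made-up values:

```python
# Euclidean, scaled Euclidean, and cosine similarity in NumPy.
import numpy as np

xi = np.array([1.0, 2.0, 3.0])
xq = np.array([2.0, 0.0, 4.0])

# (Non-scaled) Euclidean distance: sqrt of the sum of squared differences.
euclidean = np.sqrt(np.sum((xi - xq) ** 2))

# Scaled Euclidean distance: weight each dimension by a_d before summing.
a = np.array([0.5, 0.3, 0.2])          # hypothetical per-feature weights
scaled = np.sqrt(np.sum(a * (xi - xq) ** 2))

# Cosine similarity: inner product of the normalized vectors.
cosine_sim = xi @ xq / (np.linalg.norm(xi) * np.linalg.norm(xq))

print(euclidean, scaled, cosine_sim)
```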

1.2.5. Algorithms

1.2.5.1. Clustering

1.2.5.1.1. Discover groups of similar inputs

1.2.5.1.2. content

1.2.5.1.3. algorithm

1.2.5.2. Retrieval

1.2.5.2.1. The task of searching for related items

1.2.5.2.2. algorithm

1.2.6. List of Common Algorithms

1.2.6.1. k-means clustering, Association Rules

1.3. Semi-supervised Learning

1.3.1. Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data).

1.3.2. Semi-supervised machine learning can be used with regression and classification models, and you can also use it to make predictions.

1.3.3. Draft

1.3.4. List of Common Algorithms

1.3.5. Application

1.4. Reinforcement learning

1.4.1. Reinforcement learning is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward

1.4.2. Draft

1.4.3. List of Common Algorithms

1.4.4. Use Cases

1.4.4.1. autonomous vehicles

1.4.4.2. learning to play a game against a human opponent.

1.5. Algorithm types

1.5.1. Regression Algorithms

1.5.1.1. Linear Regression

1.5.1.2. Logistic Regression

1.5.1.3. Stepwise Regression

1.5.2. Classification Algorithms

1.5.2.1. Linear Classifier

1.5.2.2. Support Vector Machine (SVM)

1.5.2.3. Kernel SVM

1.5.2.4. Sparse Representation-based classification (SRC)

1.5.3. Instance-based Algorithms

1.5.3.1. k-Nearest Neighbor (kNN)

1.5.3.2. Learning Vector Quantization (LVQ)

1.5.4. Regularization Algorithms

1.5.4.1. Ridge Regression

1.5.4.2. Least Absolute Shrinkage and Selection Operator (LASSO)

1.5.4.3. Least-Angle Regression (LARS)

1.5.5. Bayesian Algorithms

1.5.5.1. Naive Bayes

1.5.5.2. Gaussian Naive Bayes

1.5.6. Clustering Algorithms

1.5.6.1. k-Means clustering

1.5.6.2. k-Medians

1.5.6.3. Expectation Maximization (EM)

1.5.7. Artificial Neural Network Algorithms

1.5.7.1. Perceptron

1.5.7.2. Softmax Regression

1.5.7.3. Multi-layer Perceptron

1.5.7.4. Back-Propagation

1.5.8. Dimensionality Reduction Algorithms

1.5.8.1. Principal Component Analysis (PCA)

1.5.8.2. Linear Discriminant Analysis (LDA)

1.5.9. Ensemble Algorithms

1.5.9.1. Boosting

1.5.9.2. AdaBoost

1.5.9.3. Random Forest

1.6. What Marketers can Accomplish with Machine Learning

1.6.1. Marketing Automation Tools

1.6.1.1. Open Source Marketing Automation: Mautic

1.6.1.2. Best-in-Class Marketing Automation Software - Marketo

1.6.1.3. HubSpot | Inbound Marketing & Sales Software

1.6.2. Sending Frequency Optimization

1.6.2.1. Machine learning allows marketers to carve lists into precise segments and to neatly personalize sending frequencies for individual recipients.

1.6.3. Content Marketing

1.6.3.1. Machine learning can give you the intelligence needed to quickly determine what's working, as well as recommend what's needed to amplify the strategies that best connect with your audience.

1.6.3.2. The learning part of machine learning means that, over time, the machine becomes smarter.

1.6.4. Ad Platforms

1.6.5. Programmatic Display

1.6.5.1. Programmatic advertising allows marketers to take matters one step further by measuring and pivoting on advertising strategies in near real time.

1.6.6. AdWords Scripting

1.6.6.1. These scripts often work as a form of supervised machine learning.

1.6.6.2. AdWords advertisers specify the inputs and desired outputs, and an algorithm learns the function connecting them.

1.6.7. Predictive Analytics

1.6.7.1. Predictive analytics are used in many verticals and professions. Essentially, these analytics are models that help find and exploit patterns in data.

1.6.7.2. These patterns can then be used to analyze risks and opportunities.

1.6.8. Customer Churn

1.6.8.1. Keeping customers is as pivotal to growth as getting new customers.

1.6.8.2. Analytics can help understand behaviors that lead to customer churn and help marketers craft strategies to reverse churn.

1.6.8.3. Predicting customer churn is a valuable piece of this puzzle.

1.6.9. Computer Vision

1.6.9.1. Computer vision is exactly what the term sounds like — it’s how machines “see.”

1.6.10. Segment Comparison

1.6.10.1. Audience segmentation has always been an important part of advertising.

1.6.10.2. Knowing the members of your audience and where they’re coming from offers marketers incredibly valuable information.

1.6.10.3. Now marketers can create micro-segmentations as well as measure and compare how each segment reacts to different messages.

1.6.10.4. Google Analytics offers behavioral-based demographic data such as affinity groups for a user.

2. Central Machine Learning Problems

2.1. 8 When Models Meet Data

2.1.1. 8.1 Empirical Risk Minimization

2.1.2. 8.2 Parameter Estimation

2.1.3. 8.3 Probabilistic Modeling and Inference

2.1.4. 8.4 Directed Graphical Models

2.1.5. 8.5 Model Selection

2.2. 9 Linear Regression

2.2.1. 9.1 Problem Formulation

2.2.2. 9.2 Parameter Estimation

2.2.3. 9.3 Bayesian Linear Regression

2.2.4. 9.4 Maximum Likelihood as Orthogonal Projection

2.2.5. 9.5 Further Reading

2.3. 10 Dimensionality Reduction with Principal Component Analysis

2.3.1. 10.1 Problem Setting

2.3.2. 10.2 Maximum Variance Perspective

2.3.3. 10.3 Projection Perspective

2.3.4. 10.4 Eigenvector Computation and Low-Rank Approximations

2.3.5. 10.5 PCA in High Dimensions

2.3.6. 10.6 Key Steps of PCA in Practice

2.3.7. 10.7 Latent Variable Perspective

2.3.8. 10.8 Further Reading

2.4. 11 Density Estimation with Gaussian Mixture Models

2.4.1. 11.1 Gaussian Mixture Model

2.4.2. 11.2 Parameter Learning via Maximum Likelihood

2.4.3. 11.3 EM Algorithm

2.4.4. 11.4 Latent Variable Perspective

2.4.5. 11.5 Further Reading

2.5. 12 Classification with Support Vector Machines

2.5.1. 12.1 Separating Hyperplanes

2.5.2. 12.2 Primal Support Vector Machine

2.5.3. 12.3 Dual Support Vector Machine

2.5.4. 12.4 Kernels

2.5.5. 12.5 Numerical Solution

2.5.6. 12.6 Further Reading

2.6. References

2.7. Index

3. Introduction

3.1. Learning outcome

3.1.1. 1 Students will be able to describe Machine Learning and its applications

3.1.2. 2 Comprehend Linear Algebra in Machine Learning

3.1.3. 3 Comprehend Statistics in Machine Learning

3.1.4. 4 Explain Correlation and Regression in Machine Learning

3.1.5. 5 Summarise the role of Probability in Statistics

3.1.6. 6 Comprehend SciKit Learn and use it in Machine Learning

3.1.7. 7 Distinguish Types, Expressions, Variables and String Operations in Python

3.1.8. 8 Practice Types, Expressions, Variables and String Operations in Python

3.1.9. 9 Explain Lists and Tuples in Python

3.1.10. 10 Practice and Apply Lists and Tuples in Python Programming

3.1.11. 11 Comprehend Python data structures and apply Dictionaries and Sets in Python Programming

3.1.12. 12 Apply Dictionaries and Sets in Python Programming

3.1.13. 13 Comprehend Conditions and Branching, Loops.

3.1.14. 14 Practice with Conditions and Branching, Loops in Python Programming

3.1.15. 15 Comprehend Functions, Objects and Classes in Python Programming

3.1.16. 16 Practice Functions, Objects and Classes in Python Programming

3.1.17. 17 Explain Reading and Writing Files in Python

3.1.18. 18 Practice Reading and Writing Files in Python Programming

3.1.19. 19 Comprehend Pandas in Python

3.1.20. 20 Practice Pandas in Python Programming

3.1.21. 21 Comprehend Numpy in Python

3.1.22. 22 Practice Numpy in Python Programming

3.1.23. 23 Understand a tool for data science: Jupyter Notebooks

3.1.24. 24 Practice creating and sharing a Jupyter Notebook

3.1.25. 25 Comprehend SciKit-Learn and use it in Machine Learning

3.1.26. 26 Understand Linear regression modeling

3.1.27. 27 Comprehend about Predicting house prices in IPython Notebook

3.1.28. 28 Practice Predicting house prices in Jupyter Notebook

3.1.29. 29 Understand Classification modeling

3.1.30. 30 Comprehend Analyzing sentiment in IPython Notebook

3.1.31. 31 Practice Analyzing sentiment in Jupyter Notebook

3.1.32. 32 Understand Clustering models and algorithms

3.1.33. 33 Comprehend about Clustering and similarity in IPython Notebook

3.1.34. 34 Practice Clustering and similarity in Jupyter Notebook

3.1.35. 35 Understand Recommender systems

3.1.36. 36 Comprehend the song recommender application

3.1.37. 37 Practice Recommending songs in Jupyter Notebook

3.1.38. 38 Understand Deep Learning with searching for Images application

3.1.39. 39 Comprehend Deep features for image classification & image retrieval

3.1.40. 40 Practice Deep features for image classification & image retrieval in Jupyter Notebook

3.2. Welcome

3.2.1. Definitions

3.2.1.1. According to Arthur Samuel (1959): Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed.

3.2.1.2. According to Professor Tom Mitchell (Carnegie Mellon University): A computer program is said to learn from experience E with respect to some tasks T and performance measure P if its performance on tasks in T, as measured by P, improves with experience E.

3.2.2. Applications

3.2.2.1. Computer networks, computer vision, handwriting recognition, information retrieval, machine perception, machine translation, natural language processing, natural language understanding, recommender systems, search engines, sentiment analysis, speech recognition, structural health monitoring, syntactic pattern recognition

3.2.3. Distinguishing AI, ML, and DL

3.2.3.1. Artificial intelligence (AI): a machine that can imitate human behavior and thinking.

3.2.3.2. Machine learning (ML): a program or system that builds (trains) a predictive model from input data. The system uses the learned model to make useful predictions on new (previously unseen) data drawn from the same distribution as the data used to train the model. Machine learning also refers to the field of study concerned with these programs or systems.

3.2.3.3. Deep learning (DL): a subset of machine learning; a type of neural network comprising many hidden layers.

3.2.3.3.1. A neural network is a model inspired by the brain, composed of layers (at least one of which is hidden) of simple connected units or neurons, each followed by a nonlinearity.
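
As a toy illustration of that definition, the sketch below runs a forward pass through a one-hidden-layer network in NumPy; the weights are random and nothing is trained:

```python
# A toy forward pass: a linear step followed by a nonlinearity, per layer.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)            # input vector (3 features)

W1 = rng.normal(size=(4, 3))      # hidden layer: 4 simple units
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))      # output layer: 1 unit
b2 = np.zeros(1)

h = np.maximum(0, W1 @ x + b1)    # linear step, then ReLU nonlinearity
y = W2 @ h + b2                   # output score
print(h, y)
```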

3.2.4. case study approach

3.2.4.1. House price prediction

3.2.4.1.1. Model

3.2.4.1.2. Algorithms

3.2.4.1.3. Concepts

3.2.4.2. sentiment analysis

3.2.4.2.1. Model

3.2.4.2.2. Algorithms

3.2.4.2.3. Concepts

3.2.4.3. Document retrieval

3.2.4.3.1. Model

3.2.4.3.2. Algorithms

3.2.4.3.3. Concepts

3.2.4.3.4. method: clustering & Retrieval

3.2.4.4. Product recommendation

3.2.4.4.1. Model

3.2.4.4.2. Algorithms

3.2.4.4.3. Concepts

3.2.4.4.4. method: matrix factorization

3.2.4.5. Case 5 Visual product recommender

3.2.4.5.1. Model

3.2.4.5.2. Algorithms

3.2.4.5.3. Concepts

3.3. Mathematics

3.3.1. 2. Introduction to Linear Algebra

3.3.1.1. Definitions

3.3.1.1.1. Transpose

3.3.1.1.2. Main diagonal

3.3.1.1.3. Vector

3.3.1.1.4. Matrix

3.3.1.1.5. A tensor is an array of numbers arranged on a regular grid with a variable number of axes.

3.3.1.1.6. A scalar is a matrix with one element.
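
These definitions map directly onto NumPy objects; a brief illustration:

```python
# Concrete NumPy illustrations of the linear algebra definitions above.
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A.T)                                   # transpose: rows become columns
print(np.diag(np.array([[1, 2], [3, 4]])))   # main diagonal: [1 4]

v = np.array([1.0, 2.0, 3.0])     # vector: a 1-D array
T = np.zeros((2, 3, 4))           # tensor: an array with 3 axes
s = np.array([[7]])               # scalar viewed as a 1x1 matrix
print(v.shape, T.shape, s.shape)
```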

3.3.2. 3. Descriptive statistics

3.3.2.1. Basic concepts

3.3.2.1.1. Variables are characteristics of something or someone

3.3.2.1.2. Cases are the something or someone being measured

3.3.2.2. levels of measurement data

3.3.2.2.1. Categorical

3.3.2.2.2. Quantitative: discrete (a set of separate numbers) or continuous (an infinite range of values)

3.3.2.3. frequency table

3.3.2.3.1. is a list, table or graph that displays the frequency of various outcomes in a sample

3.3.2.4. contingency table (cross tabulation or crosstab)

3.3.2.4.1. Displays two variables, whereas a frequency table displays only one

3.3.2.4.2. For nominal or ordinal variables

3.3.2.4.3. example

3.3.2.5. Summarizing a distribution

3.3.2.5.1. pie chart

3.3.2.5.2. bar chart

3.3.2.5.3. histogram

3.3.2.5.4. box plot

3.3.2.5.5. measure of central tendency

3.3.2.5.6. Measure of variability

3.3.2.5.7. type of graph shape

3.3.2.6. all video

3.3.2.6.1. Video: Welcome to Basic Statistics

3.3.2.6.2. Video: Cases, variables and levels of measurement

3.3.2.6.3. Video: Data matrix and frequency table

3.3.2.6.4. Video: Graphs and shapes of distributions

3.3.2.6.5. Video: Mode, median and mean (Center)

3.3.2.6.6. Video: Range, interquartile range and box plot

3.3.2.6.7. Video: Variance and standard deviation

3.3.2.6.8. Video: Z-scores

3.3.2.6.9. Reading: Data and visualisation

3.3.2.6.10. Reading: Z-scores and example

3.3.3. 4. Correlation and Regression

3.3.3.1. scatter plot

3.3.3.1.1. for quantitative variable

3.3.3.1.2. A scatterplot helps us to broadly assess whether a correlation is strong or weak. But it does not tell us exactly how strong the relationship is

3.3.3.1.3. example

3.3.3.2. Pearson's r

3.3.3.2.1. Pearson's r tells us the direction and exact strength of the linear relationship between two quantitative variables

3.3.3.2.2. It shows exactly how strong or weak a correlation is, which a scatterplot alone cannot

3.3.3.2.3. direction

3.3.3.2.4. Strength

3.3.3.2.5. how to compute Pearson's r

3.3.3.2.6. One important note
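
A small sketch of computing Pearson's r, both from its definition (covariance divided by the product of the standard deviations) and with SciPy; the data points are invented:

```python
# Pearson's r: r = cov(x, y) / (std(x) * std(y)), checked against SciPy.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

r_manual = np.cov(x, y, ddof=1)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))
r_scipy, p_value = stats.pearsonr(x, y)

print(r_manual, r_scipy)   # both close to 1: a strong positive correlation
```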

3.3.3.3. Regression line

3.3.3.3.1. how to find the line

3.3.3.3.2. how to describe a regression line

3.3.3.3.3. how good is the line
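
A minimal sketch covering all three questions above with NumPy: np.polyfit finds the least-squares slope and intercept, the line describes the predictions, and R² measures how good the line is. The data is made up:

```python
# Find, describe, and judge a least-squares regression line.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.2, 3.8, 6.1, 8.3, 9.9])

slope, intercept = np.polyfit(x, y, deg=1)   # find the line y = a*x + b
y_hat = slope * x + intercept                # describe: predicted values

# How good is the line? R^2 = 1 - residual variance / total variance.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(slope, intercept, r_squared)
```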

3.3.3.4. Correlation is not causation

3.3.3.4.1. careful interpretation

3.3.3.5. videos

3.3.3.5.1. Video: Crosstabs and scatterplots

3.3.3.5.2. Video: Pearson's r

3.3.3.5.3. Video: Regression - Finding the line

3.3.3.5.4. Video: Regression - Describing the line

3.3.3.5.5. Video: Regression - How good is the line?

3.3.3.5.6. Video: Correlation is not causation

3.3.3.5.7. Reading: Correlation

3.3.3.5.8. Reading: Regression

3.3.4. 5. Probability

3.3.4.1. Role of probability in statistics

3.3.4.1.1. Gather data by a probabilistic (random) mechanism

3.3.4.1.2. Use probability to predict the results of an experiment under assumptions

3.3.4.1.3. Compute the probability of an error larger than a given amount

3.3.4.1.4. Compute the probability of a given departure between prediction and results under the assumptions

3.3.4.1.5. Decide whether or not the assumptions are likely realistic

3.3.4.2. theory

3.3.4.2.1. Randomness is a term used in mathematics (and, less formally, elsewhere) meaning there is no way to reliably predict an outcome (to know what will happen before it happens) or to discern a pattern. Randomness is not an intrinsic property of a phenomenon; it also depends on prior knowledge, the observation method, and the scale at which the phenomenon is considered.

3.3.4.2.2. Probability is a way to measure randomness

3.3.4.2.3. A sample space is the collection of all possible outcomes of a random phenomenon

3.3.4.2.4. An event is a subset of the sample space

3.3.4.2.5. To quantify the probabilities for each event in a tree diagram, you can conduct experiments

3.3.4.2.6. In any case, the general probability rules also apply to tree diagrams: the probability of any event lies between 0 and 1, and the total probability of all possible outcomes at a node in a tree diagram equals 1

3.3.4.2.7. A tree diagram is easiest to use for smaller problems

3.3.4.2.8. Actually applying a tree diagram to quantify probabilities requires specifying the probabilities at each node

3.3.4.2.9. A set is a collection of items

3.3.4.2.10. A probability distribution (more precisely, a probability distribution function) is the rule that assigns a probability to each interval of real values such that the probability axioms are satisfied

3.3.4.3. Videos

3.3.4.3.1. Video: Randomness

3.3.4.3.2. Video: Probability

3.3.4.3.3. Video: Sample space, event, probability of event and tree diagram

3.3.4.3.4. Video: Quantifying probabilities with tree diagram

3.3.4.3.5. Video: Basic set-theoretic concepts

3.3.4.3.6. Video: Practice with sets

3.3.4.3.7. Video: Union

3.3.4.3.8. Reading: Probability & randomness

3.3.4.3.9. YouTube course

3.4. Python

3.4.1. docs3.7

3.4.2. data type

3.4.2.1. basic type

3.4.2.1.1. int

3.4.2.1.2. float

3.4.2.1.3. str

3.4.2.1.4. bool

3.4.2.1.5. bytes

3.4.2.2. Python collection (container) types: set, dict, tuple, and list all support x in c, len(c), and for x in c

3.4.2.2.1. data structure

3.4.2.2.2. dict: key container, no a priori order, fast key access, each key unique

3.4.2.2.3. list and tuple: ordered sequences, fast index access, repeatable values

3.4.2.2.4. immutable

3.4.2.2.5. sequence type

3.4.2.2.6. some comparisons
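
A quick sketch of the shared container operations noted above; membership, len(), and iteration work across list, tuple, set, and dict (for a dict they apply to its keys):

```python
# The shared operations across Python's built-in containers.
items_list = [1, 2, 2, 3]      # ordered, mutable, values may repeat
items_tuple = (1, 2, 2, 3)     # ordered, immutable
items_set = {1, 2, 3}          # no order, each value unique
items_dict = {"a": 1, "b": 2}  # unique keys, fast key access

for c in (items_list, items_tuple, items_set, items_dict):
    print(type(c).__name__, len(c), list(iter(c)))

print(2 in items_list, "b" in items_dict)   # membership tests
items_list.append(4)                         # lists are mutable...
# items_tuple.append(4) would raise AttributeError: tuples are immutable
```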

3.4.3. Conditions, Branching and Loop

3.4.3.1. all

3.4.3.2. branching

3.4.3.2.1. image

3.4.3.3. while

3.4.3.3.1. while

3.4.3.4. for

3.4.3.4.1. for in

3.4.3.4.2. for in range

3.4.4. Reading Files, Writing Files and Pandas in Python

3.4.4.1. built-in read/write

3.4.4.1.1. permissions

3.4.4.1.2. output = open('/tmp/spam', 'w')  # create an output file ('w' means write)

3.4.4.1.3. input = open('data', 'r')  # open an input file ('r' means read)

3.4.4.1.4. S = input.read()  # read the entire file into a single string

3.4.4.1.5. S = input.read(N)  # read N bytes (1 or more)

3.4.4.1.6. S = input.readline()  # read the next line (through the end-of-line marker)

3.4.4.1.7. L = input.readlines()  # read the entire file into a list of line strings

3.4.4.1.8. output.write(S)  # write string S into the file

3.4.4.1.9. output.writelines(L)  # write all line strings in list L into the file

3.4.4.1.10. output.close()  # manual close (done for you when the file object is garbage-collected)

3.4.4.1.11. with open('file', mode) as f:  # the with statement closes the file automatically
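
A short sketch of the same operations using the with context manager, which closes the file automatically; the path is an example:

```python
# Writing then reading a file with context managers.
with open('/tmp/spam.txt', 'w') as output:
    output.write('first line\n')
    output.writelines(['second line\n', 'third line\n'])

with open('/tmp/spam.txt', 'r') as input_file:
    for line in input_file:          # iterate line by line
        print(line.rstrip())
```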

3.4.4.2. pandas read write

3.4.4.2.1. image

3.4.4.3. read/write

3.4.5. Numpy

3.4.5.1. official docs

3.4.6. IPython Notebook

3.4.7. SciKit Learn

3.4.7.1. Introduction

3.4.7.1.1. ML models can learn by example

3.4.7.1.2. ML brings together: statistics, computer science,...

3.4.7.1.3. Case

3.4.7.2. Key Concepts in Machine Learning

3.4.7.2.1. Type

3.4.7.2.2. A basic ML workflow

3.4.7.3. Python Tools for Machine Learning

3.4.7.3.1. scikit-learn

3.4.7.3.2. SciPy Lib

3.4.7.3.3. NumPy

3.4.7.3.4. pandas

3.4.7.3.5. matplotlib

3.4.7.4. An Example Machine Learning Problem

3.4.7.5. Examining the Data

3.4.7.6. K-Nearest Neighbors Classification

3.4.7.6.1. Needs 4 things specified

3.4.7.6.2. sequence of operations using scikit-learn to apply the k-nearest neighbors classification method
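
A minimal sketch of that sequence of operations, assuming the usual scikit-learn workflow (here on the built-in iris dataset): split the data, choose k and a distance metric, fit, then score:

```python
# k-nearest neighbors classification with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Specify the distance metric and k; neighbor weighting and the
# label-aggregation rule default to uniform weights and majority vote.
knn = KNeighborsClassifier(n_neighbors=5, metric='euclidean')
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))   # mean accuracy on held-out data
```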

3.4.7.7. Reading: Course Syllabus

3.4.7.8. Reading: Help us learn more about you!

3.4.7.9. Reading: Notice for Auditing Learners: Assignment Submission

3.4.7.10. Reading: Zachary Lipton: The Foundations of Algorithmic Bias

3.4.8. Matplotlib

3.4.8.1. official docs

3.5. Machine Learning Foundations: A Case Study

3.5.1. Lesson 14: Linear regression modeling

3.5.1.1. video

3.5.1.1.1. Video: Predicting house prices: A case study in regression

3.5.1.1.2. Video: What is the goal and how might you naively address it?

3.5.1.1.3. Video: Linear Regression: A Model-Based Approach

3.5.1.1.4. Video: Adding higher order effects

3.5.1.1.5. Video: Evaluating overfitting via training/test split

3.5.1.1.6. Video: Training/test curves

3.5.1.1.7. Video: Adding other features

3.5.1.1.8. Video: Other regression examples

3.5.1.1.9. Video: Regression ML block diagram

3.5.1.1.10. Reading: Slides presented in this module

3.5.1.2. summary

3.5.1.2.1. At the end of this lesson, you have learned about:

3.5.1.2.2. Regression, one of the most widely used statistical methods. Predicting real values from a set of known features is a common problem to solve.

3.5.1.2.3. How to gain insight into the problem by visualizing the data on coordinate axes.

3.5.1.2.4. Applying linear regression to our prediction problem, the intuition behind the method, and specific examples.

3.5.1.2.5. How to think about choosing the right model order or model complexity, and how to check error.

3.5.1.2.6. What happens to test error as we increase or decrease the model order.

3.5.1.2.7. How we may achieve better performance by expanding the set of features.

3.5.1.2.8. Other regression examples, and what data we should actually use to fit and evaluate our model: training data and evaluation data, respectively.
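
A hedged sketch of the lesson's workflow: fit a linear model on synthetic house data and use a train/test split to spot overfitting; all numbers are invented:

```python
# Linear regression on synthetic house prices with a train/test split.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
sqft = rng.uniform(500, 4000, size=200)
price = 50_000 + 150 * sqft + rng.normal(0, 30_000, size=200)  # made-up data

X = sqft.reshape(-1, 1)
X_train, X_test, y_train, y_test = train_test_split(X, price, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("train R^2:", model.score(X_train, y_train))
print("test  R^2:", model.score(X_test, y_test))  # a large gap hints at overfitting
```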

3.5.2. Lesson 15: Predicting house prices: IPython Notebook

3.5.2.1. Lesson Objective

3.5.2.1.1. 1. Linear regression modeling

3.5.2.1.2. 2. Evaluating regression models

3.5.2.1.3. 3. Summary of regression

3.5.2.1.4. 4. Predicting house prices: IPython Notebook

3.5.2.1.5. 5. Programming assignment

3.5.3. Lesson 16: Classification modeling

3.5.3.1. Reading: Slides presented in this module

3.5.3.2. Videos

3.5.3.2.1. Video: Analyzing the sentiment of reviews: A case study in classification

3.5.3.2.2. Video: What is an intelligent restaurant review system?

3.5.3.2.3. Video: Examples of classification tasks

3.5.3.2.4. Video: Linear classifiers

3.5.3.2.5. Video: Decision boundaries

3.5.3.2.6. Video: Training and evaluating a classifier

3.5.3.2.7. Video: What's a good accuracy?

3.5.3.2.8. Video: False positives, false negatives, and confusion matrices

3.5.3.2.9. Video: Learning curves

3.5.3.2.10. Video: Class probabilities

3.5.3.2.11. Video: Classification ML block diagram

3.5.3.2.12. Reading: Slides presented in this module

3.5.4. Lesson 17: Analyzing sentiment: IPython Notebook

3.5.4.1. Overview

3.5.4.1.1. 1. Loading & exploring product review data

3.5.4.1.2. 2. Creating the word count vector

3.5.4.1.3. 3. Exploring the most popular product

3.5.4.1.4. 4. Defining which reviews have positive or negative sentiment

3.5.4.1.5. 5. Training a sentiment classifier

3.5.4.1.6. 6. Evaluating a classifier & the ROC curve

3.5.4.1.7. 7. Applying model to find most positive & negative reviews for a product

3.5.4.1.8. 8. Exploring the most positive & negative aspects of a product

3.5.4.2. Video: Loading & exploring product review data

3.5.4.3. Video: Creating the word count vector

3.5.4.4. Video: Exploring the most popular product

3.5.4.5. Video: Defining which reviews have positive or negative sentiment

3.5.4.6. Video: Training a sentiment classifier

3.5.4.7. Video: Evaluating a classifier & the ROC curve

3.5.4.8. Video: Applying model to find most positive & negative reviews for a product

3.5.4.9. Video: Exploring the most positive & negative aspects of a product

3.5.4.10. At the end of this lesson, you have:

3.5.4.10.1. Practiced building a sentiment analyzer for products in Jupyter Notebook

3.5.4.10.2. Created a word count vector for each review as part of preparing this dataset

3.5.4.10.3. Explored the most popular product via data manipulation and an example

3.5.4.10.4. Defined which reviews have positive or negative sentiment based on the number of stars rated

3.5.4.10.5. Trained a sentiment classifier by following the instructions

3.5.4.10.6. Evaluated a classifier with the ROC curve, covering classification error, false positives, and false negatives

3.5.4.10.7. Applied the model to find the most positive and negative reviews for a product, using the built model to score reviews

3.5.4.10.8. Explored the positive and negative aspects of a product: the sentiment analyzer can sort and show the most positive and most negative reviews of a particular product
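
A minimal sketch of that pipeline: word-count vectors from review text, labels derived from star ratings, and a linear classifier; the tiny corpus is invented:

```python
# Sentiment classification from word counts and star ratings.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = ["great product, loved it", "terrible, broke in two days",
           "works perfectly", "awful quality, very disappointed"]
stars = [5, 1, 4, 2]
labels = [1 if s >= 4 else 0 for s in stars]   # positive if rated 4+ stars

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)           # word count vector per review

clf = LogisticRegression().fit(X, labels)
probs = clf.predict_proba(X)[:, 1]              # class probabilities
print(list(zip(reviews, probs.round(2))))
```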

3.5.5. Lesson 18: Clustering models and algorithms

3.5.5.1. lesson objective

3.5.5.1.1. 1. What is the document retrieval task?

3.5.5.1.2. 2. Word count representation for measuring similarity

3.5.5.1.3. 3. Prioritizing important words with tf-idf

3.5.5.1.4. 4. Calculating tf-idf vectors

3.5.5.1.5. 5. Retrieving similar documents using nearest neighbor search

3.5.5.1.6. 6. Clustering documents task overview

3.5.5.1.7. 7. Clustering documents: An unsupervised learning task

3.5.5.2. Videos

3.5.5.2.1. Video: Document retrieval: A case study in clustering and measuring similarity

3.5.5.2.2. Video: What is the document retrieval task?

3.5.5.2.3. Video: Word count representation for measuring similarity

3.5.5.2.4. Video: Prioritizing important words with tf-idf

3.5.5.2.5. Video: Calculating tf-idf vectors

3.5.5.2.6. Video: Retrieving similar documents using nearest neighbor search

3.5.5.2.7. Video: Clustering documents task overview

3.5.5.2.8. Video: Clustering documents: An unsupervised learning task

3.5.5.2.9. Videos

3.5.5.3. Lesson Summaries:

3.5.5.3.1. At the end of this lesson, you have learned about:

3.5.5.3.2. Document retrieval: a case study in clustering and measuring similarity. Documents can be represented as vectors of features or external interactions (user views, etc.), which lets us compare them using basic vector comparisons.

3.5.5.3.3. The implementation of the document retrieval task

3.5.5.3.4. The word count representation for measuring similarity, through an example

3.5.5.3.5. Prioritizing important words with TF-IDF, which emphasizes the important words in a document via the notion that the rarer a word, the more impactful its meaning.

3.5.5.3.6. Combining word counts and TF-IDF to highlight articles sharing the same uncommon words.

3.5.5.3.7. Retrieving similar documents using nearest neighbor search, through an example

3.5.5.3.8. An overview of clustering documents and why it is feasible.

3.5.5.3.9. How to cluster documents, its nature as an unsupervised learning task, and a detailed example.
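
A small sketch of the retrieval idea: represent documents as TF-IDF vectors and find a query document's nearest neighbors by cosine distance; the corpus is made up:

```python
# Document retrieval with TF-IDF vectors and nearest neighbor search.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

docs = ["machine learning with python",
        "deep learning for computer vision",
        "python web development with flask",
        "statistical learning theory notes"]

X = TfidfVectorizer().fit_transform(docs)
nn = NearestNeighbors(n_neighbors=2, metric='cosine').fit(X)

distances, indices = nn.kneighbors(X[0])   # query with the first document
for d, i in zip(distances[0], indices[0]):
    print(round(d, 2), docs[i])
```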

3.5.6. Lesson 19: Summary of clustering and similarity

3.5.6.1. Lesson Overview:

3.5.6.1.1. In this lesson, you will expand on the task of clustering the documents to automatically group articles by similarity (e.g., document topic). In particular, you will actually build an intelligent document retrieval system for Wikipedia entries in an iPython notebook.

3.5.6.2. Lesson Objective

3.5.6.2.1. 1. k-means: A clustering algorithm

3.5.6.2.2. 2. Other examples of clustering

3.5.6.2.3. 3. Clustering and similarity ML block diagram

3.5.6.2.4. 4. Loading & exploring Wikipedia data

3.5.6.2.5. 5. Exploring word counts

3.5.6.2.6. 6. Computing & exploring TF-IDFs

3.5.6.2.7. 7. Computing distances between Wikipedia articles

3.5.6.2.8. 8. Building & exploring a nearest neighbors model for Wikipedia articles

3.5.6.2.9. 9. Examples of document retrieval in action

3.5.6.3. Learning Outcomes:

3.5.6.3.1. MLP301x_o31: Understand Clustering models and algorithms

3.5.6.3.2. MLP301x_o32: Comprehend about Clustering and similarity in IPython Notebook

3.5.6.3.3. MLP301x_o33: Practices clustering and similarity in Jupyter Notebook

3.5.6.4. Video: k-means: A clustering algorithm

3.5.6.5. Video: Other examples of clustering

3.5.6.6. Video: Clustering and similarity ML block diagram

3.5.6.7. Video: Loading & exploring Wikipedia data

3.5.6.8. Video: Exploring word counts

3.5.6.9. Video: Computing & exploring TF-IDFs

3.5.6.10. Video: Building & exploring a nearest neighbors model for Wikipedia articles

3.5.6.11. Video: Examples of document retrieval in action

3.5.6.12. Reading: Retrieving Wikipedia articles assignment

3.5.6.13. Lesson Summaries

3.5.6.13.1. At the end of this lesson, you have been taught about:
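
The lesson's central algorithm is k-means; a hedged sketch of clustering TF-IDF document vectors with scikit-learn follows (the corpus and the choice of k are illustrative):

```python
# k-means clustering of TF-IDF document vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["stock markets fell sharply today",
        "the central bank raised interest rates",
        "the team won the championship game",
        "star striker scores twice in final"]

X = TfidfVectorizer().fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)   # cluster assignment for each document
```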

3.5.7. Lesson 20: Recommender systems

3.5.7.1. Lesson Overview

3.5.7.1.1. In this lesson, you will consider: Ever wonder how Amazon forms its personalized product recommendations? How Netflix suggests movies to watch? How Pandora selects the next song to stream? How Facebook or LinkedIn finds people you might connect with? Underlying all of these technologies for personalized content is something called collaborative filtering. You will learn how to build such a recommender system using a variety of techniques, and explore their tradeoffs.

3.5.7.2. Lesson Objective

3.5.7.2.1. 1. Recommender systems overview

3.5.7.2.2. 2. Where we see recommender systems in action

3.5.7.2.3. 3. Building a recommender system via classification

3.5.7.2.4. 4. Collaborative filtering: People who bought this also bought…

3.5.7.2.5. 5. Effect of popular items

3.5.7.2.6. 6. Normalizing co-occurrence matrices and leveraging purchase histories

3.5.7.2.7. 7. The matrix completion task

3.5.7.2.8. 8. Recommendations from known user/item features

3.5.7.2.9. 9. Predictions in matrix form

3.5.7.2.10. 10. Discovering hidden structure by matrix factorization

3.5.7.2.11. 11. Bringing it all together: Featurized matrix factorization

3.5.7.2.12. 12. A performance metric for recommender systems

3.5.7.2.13. 13. Optimal recommenders

3.5.7.2.14. 14. Precision-recall curves

3.5.7.2.15. 15. Recommender systems ML block diagram

3.5.7.3. Learning Outcomes

3.5.7.3.1. MLP301x_o34: Understand Recommender systems

3.5.7.3.2. MLP301x_o35: Comprehend the song recommender application

3.5.7.4. At the end of this lesson, we have seen:

3.5.7.4.1. A recommender systems overview: what they are and how they are used. One typical example where recommender systems are really useful is recommending products.

3.5.7.4.2. Recommender systems in action, and the implication that, depending on the specific application, different aspects of the objective we are trying to optimize will matter.

3.5.7.4.3. A recommender system based on classification: there are many approaches for performing these types of recommendations.

3.5.7.4.4. The notion of collaborative filtering: we want to leverage what other people have purchased (or, more generically, other links between users and items) to make recommendations for other users.

3.5.7.4.5. The effect of popular items: the co-occurrence matrices used in our models have to be normalized when they contain a very popular item, since the vectors will be skewed by the amount of attention on that item.

3.5.7.4.6. Normalizing co-occurrence matrices and leveraging purchase histories: to handle very popular items, we can normalize the co-occurrence matrix, for example with Jaccard similarity.

3.5.7.4.7. The matrix completion task: we would like to learn these features from the data, which helps when features are not available.

3.5.7.4.8. Recommendations from known user/item features: how do we make these recommendations, and how do we guess what rating a person would give to a movie they have never watched?

3.5.7.4.9. Predictions in matrix form: instead of treating ratings one user-movie pair at a time, we represent our predictions over the entire set of users and movies, which requires a little linear algebra.

3.5.7.4.10. Discovering hidden structure by matrix factorization, through specific examples.

3.5.7.4.11. Bringing it all together: featurized matrix factorization, an integrated approach that gets the benefits of both worlds; the features of the classification-based approach can capture things like context, time of day, user information, and past purchases.

3.5.7.4.12. A performance metric for recommender systems, answering the question: how do we assess the difference in performance between the systems we might use?

3.5.7.4.13. How to maximize recall, via optimal recommenders.

3.5.7.4.14. Precision-recall curves: how we can use the metrics of precision and recall to compare the different algorithms we might use.

3.5.7.4.15. Exploring the recommender systems ML block diagram through examples.
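
A minimal sketch of the Jaccard normalization mentioned above: similarity = |buyers of A and B| / |buyers of A or B|; the purchase sets are invented:

```python
# Jaccard similarity damps the influence of very popular items.
def jaccard(buyers_a, buyers_b):
    both = len(buyers_a & buyers_b)      # bought item A and item B
    either = len(buyers_a | buyers_b)    # bought item A or item B
    return both / either if either else 0.0

phone_buyers = {"u1", "u2", "u3", "u4"}
case_buyers = {"u2", "u3", "u5"}
popular_item_buyers = set(f"u{i}" for i in range(100))  # e.g. a bestseller

print(jaccard(phone_buyers, case_buyers))          # meaningful overlap
print(jaccard(phone_buyers, popular_item_buyers))  # popularity is damped
```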

3.5.7.5. Video: Recommender systems overview

3.5.7.6. Reading: Slides presented in this module

3.5.7.6.1. Where we see recommender systems

3.5.7.6.2. Building a recommender system

3.5.8. Lesson 21: Song recommender: IPython Notebook

3.5.8.1. Lesson Objective

3.5.8.1.1. Lesson Overview: In this lesson, you will learn how to build such a recommender system using a variety of techniques and explore their tradeoffs. One method we examine is matrix factorization, which learns features of users and products to form recommendations. In an IPython notebook, you will use these techniques to build a real song recommender system. Lesson Objectives: 1. Loading and exploring song data; 2. Creating & evaluating a popularity-based song recommender; 3. Creating & evaluating a personalized song recommender; 4. Using precision-recall to compare recommender models. Learning Outcomes: MLP301x_o36: Practice recommending songs in Jupyter Notebook.

3.5.8.2. Video: Loading and exploring song data

3.5.8.3. Video: Creating & evaluating a popularity-based song recommender

3.5.8.4. Video: Creating & evaluating a personalized song recommender

3.5.8.5. Video: Using precision-recall to compare recommender models

3.5.8.6. At the end of this lesson, you learned:

3.5.8.6.1. Loading and exploring song data: we complete our Jupyter Notebook to recommend songs we might want to listen to.

3.5.8.6.2. Creating & evaluating a popularity-based song recommender: create a recommender system from the dataset we already have, based on how popular the songs are.

3.5.8.6.3. Creating & evaluating a personalized song recommender: build a song recommender with personalization, allowing us to tailor the recommendations to the specific user.

3.5.8.6.4. Comparison: a more quantitative comparison between the personalized model and the popularity model, to see which is superior in which cases.
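
A short sketch of such a comparison using scikit-learn's precision_recall_curve; the labels and scores are toy values standing in for two models:

```python
# Comparing two rankers by the area under their precision-recall curves.
from sklearn.metrics import precision_recall_curve, auc

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
scores_model_a = [0.9, 0.1, 0.8, 0.7, 0.3, 0.2, 0.6, 0.4]
scores_model_b = [0.6, 0.5, 0.4, 0.7, 0.6, 0.3, 0.2, 0.5]

for name, scores in [("A", scores_model_a), ("B", scores_model_b)]:
    precision, recall, _ = precision_recall_curve(y_true, scores)
    print(name, "PR-AUC:", round(auc(recall, precision), 3))
```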

3.5.9. Lesson 22: Deep Learning: Searching for Images

3.5.9.1. Lesson Overview

3.5.9.1.1. In this lesson, you will learn about deep learning, which is gaining great renown across the world as one of the most promising techniques for novel machine learning problems, many of which had seen little success with traditional techniques. Many companies are dedicating resources to unlock the potential of deep learning, including for tasks such as image tagging, object recognition, speech recognition, and text analysis. In our final case study, searching for images, you will learn how layers of neural networks provide very descriptive (non-linear) features that deliver impressive performance in image classification and retrieval tasks.

3.5.9.2. Lesson Objective

3.5.9.2.1. 1. Searching for images: A case study in deep learning

3.5.9.2.2. 2. What is a visual product recommender?

3.5.9.2.3. 3. Learning very non-linear features with neural networks

3.5.9.2.4. 4. Application of deep learning to computer vision

3.5.9.2.5. 5. Deep learning performance

3.5.9.2.6. 6. Demo of deep learning model on ImageNet data

3.5.9.2.7. 7. Other examples of deep learning in computer vision

3.5.9.2.8. 8. Challenges of deep learning

3.5.9.2.9. 9. Deep Features

3.5.9.2.10. 10. Deep learning ML block diagram

3.5.9.3. Learning outcomes

3.5.9.3.1. MLP301x_o37: Understand Deep Learning with searching for Images application

3.5.9.3.2. MLP301x_o38: Comprehend Deep features for image classification & image retrieval

3.5.9.4. Lesson Summaries

3.5.9.5. Reading: Slides presented in this module

3.5.9.6. Videos

3.5.9.6.1. Video: Searching for images: A case study in deep learning

3.5.9.6.2. Video: What is a visual product recommender?

3.5.9.6.3. Video: Learning very non-linear features with neural networks

3.5.9.6.4. Video: Deep learning performance

3.5.9.6.5. Video: Demo of deep learning model on ImageNet data

3.5.9.6.6. Video: Other examples of deep learning in computer vision

3.5.9.6.7. Video: Challenges of deep learning

3.5.9.6.8. Video: Deep Features

3.5.9.6.9. Video: Deep learning ML block diagram

3.5.10. Lesson 23: Deep features for image classification & image retrieval

3.5.10.1. Lesson Overview

3.5.10.1.1. In this lesson, you will construct deep features, a transfer learning technique that allows you to use deep learning even when you have little data to train the model. Using iPython notebooks, you will build an image classifier and an intelligent image retrieval system using Deep Features.

3.5.10.2. Lesson Objective

3.5.10.2.1. 1. Loading image classification data

3.5.10.2.2. 2. Training & evaluating a classifier using raw image pixels

3.5.10.2.3. 3. Training & evaluating a classifier using deep features

3.5.10.2.4. 4. Loading image retrieval data

3.5.10.2.5. 5. Creating a nearest neighbors model for image retrieval

3.5.10.2.6. 6. Querying the nearest neighbors model to retrieve images

3.5.10.2.7. 7. Querying for the most similar images for car image

3.5.10.2.8. 8. Displaying other example image retrievals with a Python lambda

3.5.10.2.9. 9. Open challenges in ML

3.5.10.2.10. 10. Where is ML going?

3.5.10.2.11. 11. What's ahead in the specialization

3.5.10.2.12. 12. Using precision-recall to compare recommender models

3.5.10.3. Learning outcomes

3.5.10.3.1. MLP301x_o39: Practices Deep features for image classification & image retrieval in Jupyter Notebook

3.5.10.4. Video: Loading image classification data

3.5.10.5. Video: Training & evaluating a classifier using raw image pixels

3.5.10.6. Video: Training & evaluating a classifier using deep features

3.5.10.7. Video: Loading image data

3.5.10.8. Video: Querying the nearest neighbors model to retrieve images

3.5.10.9. Video: Querying for the most similar images for car image

3.5.10.10. Video: Displaying other example image retrievals with a Python lambda

3.5.10.11. Video: Open challenges in ML

3.5.10.12. Video: Where is ML going?

3.5.10.13. Video: What's ahead in the specialization

3.5.10.14. Video: Using precision-recall to compare recommender models

3.5.10.15. Lesson Summaries

3.5.10.15.1. At the end of this lesson, you learned:

3.5.10.15.2. Loading image classification data, by example

3.5.10.15.3. Training & evaluating a classifier using raw image pixels: train a basic model on this dataset.

3.5.10.15.4. Training & evaluating a classifier using deep features.

3.5.10.15.5. Loading image retrieval data: deep features are great for image classification. They allowed us to get quite good accuracy, even with a small amount of training data for a particular classification task, by first taking features from the deep learning model that won the 2012 ImageNet competition.

3.5.10.15.6. Creating a nearest neighbors model for image retrieval: a useful approach is to train a nearest-neighbors model that retrieves images using deep features.

3.5.10.15.7. Querying the nearest neighbors model to retrieve images, by example

3.5.10.15.8. Querying for the images most similar to a car image: using these helper functions, we can check whether the nearest neighbors of a car image are other cars.

3.5.10.15.9. Displaying other example image retrievals with a Python lambda: create a lambda to find and show nearest neighbor images.

3.5.10.15.10. Open challenges in ML: many open challenges still remain in machine learning, and we discuss some of them.

3.5.10.15.11. A discussion of where ML is going.

3.5.10.15.12. What's ahead in the specialization: what the rest of the specialization will cover.

3.5.10.16. Videos

3.5.11. Conclusion

3.5.11.1. At the end of this lesson, you have learned about:

3.5.11.2. Classification, one of the most common areas of machine learning alongside regression; both are widely used and extremely useful applications of machine learning.

3.5.11.3. An intelligent restaurant review system. Reviews are broken down into sentences, analyzed, and transformed into data to train our model. Using the data, we can build a model that predicts the sentiment behind a user's review, even if they didn't leave a rating.

3.5.11.4. Real-world examples of classification tasks.

3.5.11.5. Linear classifiers, one of the most common types of classifiers and among the simplest to illustrate, via examples.

3.5.11.6. Decision boundaries and how they affect classifiers.

3.5.11.7. Training and evaluating a classifier. The errors are a little different compared to regression, because the outputs are class probabilities rather than real values.

3.5.11.8. Understanding the accuracies and errors you actually get from your classifier.

3.5.11.9. False positives, false negatives, and confusion matrices, via examples.

3.5.11.10. Learning curves: how performance relates to the amount of training data.

3.5.11.11. Class probabilities applied to sentiment classification.

3.5.11.12. Understanding the classification ML block diagram.

4. google docs