A vectorized Python 🐍 implementation using only NumPy, SciPy, and Matplotlib, resembling as closely as possible both the provided and the personally completed Octave/MATLAB code from Stanford University's excellent Machine Learning course on Coursera. The course is taught by Andrew Ng, a genius and an excellent popularizer, which is a rare combination.
This course helped me write a blog post answering the question: What is Machine Learning?
Given a set of labeled observations, find a function f that can be used to assign a class or value to unseen observations. Predictions should be close to the true labels.
In a regression problem, we are trying to predict results within a continuous output, meaning that we are trying to map input variables to some continuous function.
- 🐍 Demo | Linear Regression with multiple variables Notebook
▶️ Demo | Linear Regression with multiple variables Matlab
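A minimal sketch of the idea (not the course exercise code): vectorized batch gradient descent for linear regression on synthetic data. It assumes `X` already carries a leading column of ones for the bias term; the data and hyperparameters are illustrative.

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """Vectorized batch gradient descent for linear regression.

    X is assumed to already include a leading column of ones (bias term).
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iters):
        error = X @ theta - y                  # predictions minus targets, shape (m,)
        theta -= (alpha / m) * (X.T @ error)   # simultaneous update of all parameters
    return theta

# Synthetic example: y = 2 + 3*x1 + noise (values chosen only for illustration)
rng = np.random.default_rng(0)
x1 = rng.uniform(0, 10, size=100)
y = 2 + 3 * x1 + rng.normal(0, 0.5, size=100)
X = np.column_stack([np.ones_like(x1), x1])
print(gradient_descent(X, y, alpha=0.02, num_iters=5000))  # converges near [2, 3]
```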
In a classification problem, we instead are trying to predict results in a discrete output. In other words, we are trying to map input variables into discrete categories.
- 🐍 Demo | Neural Networks Notebook Part I, Demo | Neural Networks Notebook Part II
▶️ Demo | Neural Networks Matlab Part I, Demo | Neural Networks Matlab Part II
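As a rough illustration of how such a classifier maps inputs to discrete categories, here is a hedged sketch of vectorized forward propagation through a two-layer network with sigmoid activations. The weights below are random placeholders, not the trained parameters from the exercises.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(Theta1, Theta2, X):
    """Vectorized forward propagation for a two-layer network.

    Theta1: (hidden_units, n_features + 1), Theta2: (num_labels, hidden_units + 1).
    Returns the index of the most probable class for each row of X.
    """
    m = X.shape[0]
    a1 = np.column_stack([np.ones(m), X])   # input layer with bias unit
    a2 = sigmoid(a1 @ Theta1.T)             # hidden layer activations
    a2 = np.column_stack([np.ones(m), a2])  # add bias unit to hidden layer
    a3 = sigmoid(a2 @ Theta2.T)             # output layer (class scores)
    return np.argmax(a3, axis=1)

# Toy check with random weights (illustrative only, not trained parameters)
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 4))                 # 5 examples, 4 features
Theta1 = rng.normal(size=(3, 5))            # 3 hidden units
Theta2 = rng.normal(size=(2, 4))            # 2 output classes
print(predict(Theta1, Theta2, X))
```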
Tackling Overfitting and Underfitting problems.
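A common remedy covered in the course is regularization; the sketch below adds an L2 penalty to the linear regression cost. The helper name and signature are illustrative, not taken from the exercise files.

```python
import numpy as np

def regularized_cost(theta, X, y, lam):
    """Regularized linear regression cost: squared error plus an L2 penalty.

    The penalty shrinks the weights (the bias term theta[0] is excluded),
    which helps against overfitting; lam = 0 recovers the unregularized cost.
    """
    m = y.size
    error = X @ theta - y
    penalty = (lam / (2 * m)) * np.sum(theta[1:] ** 2)
    return (error @ error) / (2 * m) + penalty
```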
Labeling can be tedious (too slow, too costly), is often done by humans, and sometimes there are no true labels to compare against. Unsupervised learning allows us to approach problems with little or no idea what our results should look like. We can derive structure from data where we don't necessarily know the effect of the variables, for example by clustering the data based on relationships among the variables. With unsupervised learning there is no feedback based on the prediction results.
Group objects into clusters so that objects are similar within a cluster and dissimilar between clusters.
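One standard way to do this is K-means; below is a minimal vectorized sketch (random initialization, no handling of empty clusters), not the exercise implementation.

```python
import numpy as np

def kmeans(X, K, num_iters=10, seed=0):
    """Minimal vectorized K-means: alternate cluster assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(X.shape[0], K, replace=False)]  # random initial centroids
    for _ in range(num_iters):
        # Distance of every point to every centroid, shape (m, K)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)                     # closest-centroid assignment
        # Recompute each centroid as the mean of its assigned points
        # (empty clusters are not handled in this sketch)
        centroids = np.array([X[labels == k].mean(axis=0) for k in range(K)])
    return labels, centroids
```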
Reduce the dimensionality of a data set. Used for data compression or for visualizing high-dimensional data.
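A typical approach is PCA; here is a small illustrative sketch using NumPy's SVD on mean-centered data (function name and interface are assumptions, not the exercise code).

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal components (via SVD of the centered data)."""
    X_centered = X - X.mean(axis=0)                  # center each feature
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:k]                              # top-k principal directions
    Z = X_centered @ components.T                    # reduced representation, shape (m, k)
    return Z, components
```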
Identifies rare items (outliers) that raise suspicion by differing significantly from the majority of the data.
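A simple model for this, as used in the course, fits an independent Gaussian to each feature and flags examples with low density; the sketch below is illustrative, and the threshold epsilon is left as an assumption.

```python
import numpy as np

def fit_gaussian(X):
    """Fit an independent Gaussian to each feature (mean and variance per column)."""
    return X.mean(axis=0), X.var(axis=0)

def anomaly_scores(X, mu, var):
    """Density under the independent-Gaussian model; low density suggests an outlier."""
    p = np.exp(-((X - mu) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)
    return p.prod(axis=1)

# Examples whose density falls below a chosen threshold epsilon are flagged:
# is_anomaly = anomaly_scores(X, mu, var) < epsilon
```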
Predicts the rating or preference a user would give to an item.
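One approach is collaborative filtering, which learns item features and user preferences jointly; the cost-function sketch below (variable names assumed, loosely following the course's X, Theta, Y, R convention) counts squared error only on observed ratings.

```python
import numpy as np

def cofi_cost(X, Theta, Y, R, lam):
    """Collaborative filtering cost: squared error on rated entries plus L2 penalty.

    X: (num_items, k) item features, Theta: (num_users, k) user preferences,
    Y: (num_items, num_users) ratings, R: same shape, 1 where a rating exists.
    """
    error = (X @ Theta.T - Y) * R                    # only count observed ratings
    J = 0.5 * np.sum(error ** 2)
    J += (lam / 2) * (np.sum(X ** 2) + np.sum(Theta ** 2))
    return J
```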