12/19/2017
Here, we’ll look at the K-Nearest Neighbors approach as a way of understanding one of the core ideas of machine learning: regression.
In the context of a general regression problem, you have a bunch of information called features or covariates along with a numerical value associated with those features, and you want to use this information to estimate the numerical value for a new input set of features. For example, we might have data about the GPAs of college seniors, the number of STEM classes each student took, and their starting salaries after graduating. In this case, the features are the quantities:

- GPA
- number of STEM classes taken

and the starting salary is the numerical value associated with a given student’s GPA and number of classes. We might “perform a regression” on this data and use it to predict a new student’s salary given his/her GPA and number of STEM classes taken. You might imagine that students with a lower GPA tend to have lower earning potential, whereas taking more STEM classes might get you that snazzy engineer salary. The regression should capture this intuition and be able to predict salaries for new students.
In math, we can express a unique student’s features as a vector. We’ll denote the feature vector for a single unit by $\mathbf{x} \in \mathbb{R}^m$. In the example, $m$, the dimension of the feature vector, is 2: GPA and number of STEM classes. We’ll also use $y \in \mathbb{R}$ to denote the output, the salary in the example. The goal is to define a function $h$ where $y = h(\mathbf{x})$. In other words, we will try to create a function which, given a student’s school information, correctly guesses his/her salary.
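As a concrete illustration of this notation, here’s one way the student data might be laid out in code. This is only a sketch using NumPy; the variable names and numbers are invented for the example and aren’t from any real dataset:

```python
import numpy as np

# Each row of X is one student's feature vector x in R^2: [GPA, STEM classes taken].
# The numbers are made up purely for illustration.
X = np.array([[3.9, 18.0],
              [2.5,  4.0],
              [3.2, 12.0]])

# y holds the corresponding outputs: starting salaries in dollars.
y = np.array([78000.0, 45000.0, 61000.0])

# The regression goal: a function h with h(X[i]) close to y[i] that also
# generalizes to feature vectors we haven't seen before.
```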
In this post, we’ll be talking about a relatively easy approach to regression, K-Nearest Neighbors. KNN is a non-parametric model, meaning that nothing is “learned” in the construction of the model. The idea behind KNN is pretty simple. You have a bunch of data, some datapoints are more similar than others, and so for any new queried datapoint, we simply look for the $k$ most similar datapoints we’ve already seen and take the average of their outputs. In the salary example, suppose we have a bunch of students with 4.0’s taking 20 STEM courses, and their salaries are all in the 80k range plus or minus 1k. If I give you a new student, also with a 4.0, who took 20 STEM courses, we can reasonably guess his/her salary will be about $80,000 by taking the average of the other similar datapoints.
We can express the regression function $h$, which makes an estimate $\hat{y}$ for an input feature vector, as:

\[\hat{y} = h(\mathbf{x}; k) = \frac{1}{k}\sum_{i=1}^k y^{(i)}\]

where $y^{(i)}$ is the output associated with the $i$-th nearest training point to $\mathbf{x}$.

We can see KNN in action with the following D3 visualization. Think of the data as being plotted in $\mathbb{R}^3$, where $x_1$ and $x_2$ (GPA/STEM classes) are the planar axes, and $y$ (the student’s salary) is the opacity (all the points are the same color here). As you hover over the visualization, the $k$ nearest points are highlighted in red and their convex hull is shaded in. For any point along the plane, an estimate can be made as the average of the $y$ values of the $k$ red points within the hull.
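To make the estimator above concrete, here’s a minimal sketch in Python with NumPy, using plain Euclidean distance as the similarity measure. The function name and the toy `X` and `y` arrays (from the earlier sketch) are my own, not part of the original post:

```python
import numpy as np

def knn_regress(X_train, y_train, x_query, k):
    """Estimate y at x_query as the mean output of the k nearest training points."""
    # Euclidean distance from the query to every training feature vector.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k smallest distances, i.e. the k nearest neighbors.
    nearest = np.argsort(dists)[:k]
    # The KNN estimate: the average of the neighbors' y values.
    return y_train[nearest].mean()

# Hypothetical query: a student with a 3.7 GPA and 15 STEM classes, using k = 2.
# y_hat = knn_regress(X, y, np.array([3.7, 15.0]), k=2)
```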
A quick note: you might argue that $k$ is a parameter, but it’s a quantity we choose by hand rather than something learned from the data, so we still say this is a non-parametric model.
The KNN approach is an intuitive and easy way to regress on data since no parameters need to be “learned,” but it does have its shortcomings.
KNN requires that you keep all your training data. For any query input vector, you have to calculate the distance to every stored point and find the $k$ nearest. At best, this costs about $O((n-k)\log k)$ time per query with a bounded max-heap of size $k$. Unfortunately, this is still at least linear in $n$, and with lots of data it becomes intractable to compute without smarter approaches.
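For instance, a bounded heap keeps the per-point cost at $O(\log k)$ while still scanning every point once. A rough sketch of that idea (the function name is mine; it relies on Python’s standard heapq module):

```python
import heapq
import numpy as np

def k_nearest_indices(X_train, x_query, k):
    """Indices of the k nearest training points, using a bounded heap."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # heapq.nsmallest keeps only k candidates at a time, so each of the n
    # points costs O(log k) -- but we still have to touch every point once.
    return [i for _, i in heapq.nsmallest(k, zip(dists, range(len(dists))))]
```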
KNN also suffers from the curse of dimensionality, which is an even bigger problem. I’m not going to prove this here, but as the dimension $m$ grows, Euclidean distance becomes an increasingly uninformative similarity measure. Worse yet, most points end up at nearly the same distance from any query, so the choice of the $k$ “nearest” neighbors carries less and less meaning.
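One quick, informal way to see this is to compare the nearest and farthest distances from a random query point as the dimension grows. The little experiment below (my own sketch, using uniform random data rather than anything from the post) shows the ratio creeping toward 1:

```python
import numpy as np

rng = np.random.default_rng(0)

for m in (2, 10, 100, 1000):
    X = rng.random((1000, m))   # 1000 uniform random points in [0, 1]^m
    q = rng.random(m)           # a random query point
    d = np.linalg.norm(X - q, axis=1)
    # As m grows this ratio approaches 1: the nearest point is barely
    # closer than the farthest, so "nearest" carries little information.
    print(f"m = {m:4d}   min/max distance = {d.min() / d.max():.3f}")
```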
All in all, KNN is an approach to performing a simple form of regression, but it has its drawbacks.