Final thoughts on K Nearest Neighbors

Welcome to the 19th part of our Machine Learning with Python tutorial series. We're going to cover a few final thoughts on the K Nearest Neighbors algorithm here, including the value for K, confidence, speed, and the pros and cons of the algorithm now that we understand more about how it works.

After performing a test of 100 samples, the average accuracy of the Scikit-Learn neighbors.KNeighborsClassifier and of our custom-made classifier was identical: 0.97, or 97%. Don't go patting yourself on the back, though, since this algorithm is very simple and basic. The real value of a K Nearest Neighbors classifier is not so much in its accuracy, which is a given, but in its speed. The main downfall of the K Nearest Neighbors classifier is indeed the speed with which it can perform the operations.
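
For reference, here is a minimal sketch of the Scikit-Learn side of that comparison. It assumes the same breast-cancer-wisconsin.data.txt file used by the ending code below, with a label column named 'class' as in the earlier parts of this series:

# A minimal sketch of the Scikit-Learn comparison, assuming the data file and
# column names ('id', 'class') from the earlier K Nearest Neighbors tutorials.
import numpy as np
import pandas as pd
from sklearn import model_selection, neighbors

df = pd.read_csv("breast-cancer-wisconsin.data.txt")
df.replace('?', -99999, inplace=True)
df.drop(['id'], axis=1, inplace=True)

X = np.array(df.drop(['class'], axis=1).astype(float))
y = np.array(df['class'])

X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.4)

clf = neighbors.KNeighborsClassifier()
clf.fit(X_train, y_train)
print('Scikit-Learn accuracy:', clf.score(X_test, y_test))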

The Scikit-Learn version of KNN ran at 0.044 seconds per round, versus 0.55 seconds per classification for ours. Thus, while we achieved identical results, we're significantly slower than Scikit-Learn. The good news is, if you are curious how they do it, you can view the source code! We also mentioned that we'd discuss one major way to speed things up overall.

K Nearest Neighbors doesn't actually have much in the way of training; the training step is simply loading points into memory. You could keep the training data in memory, but the real pain point for the K Nearest Neighbors classifier is comparing the prediction point against every known point to find the closest ones. Then, what happens when you have, say, 1,000 samples that you're attempting to classify? Yikes!

One option for us is to use threading. There's no benefit to running all of these comparisons linearly, i.e. one after another. Our method does exactly that, using only a tiny fraction of our processing power overall. Instead, we could probably calculate at least 100-200 of these at the same time, even on a cheap processor. If you want to learn how to thread, check out this threading tutorial.

With Scikit-Learn, the KNN classifier comes with a parallel processing parameter called n_jobs. You can set this to the number of simultaneous operations you want to run. If you want to run 100 operations at a time, set n_jobs=100. If you just want to run as many as you can, set n_jobs=-1. Learn more about the options at your disposal by checking out the Nearest Neighbors documentation. There are also ways of only comparing your data to data within a certain radius. If you are interested in speeding up KNN to be more similar to the Scikit-Learn version, you may want to look into that first.
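
As a hedged sketch of those options: n_jobs is an actual KNeighborsClassifier parameter, and the radius-based approach is available in Scikit-Learn as RadiusNeighborsClassifier (the parameter values below are illustrative, not tuned):

# Parallel neighbor search and radius-based voting in Scikit-Learn.
from sklearn.neighbors import KNeighborsClassifier, RadiusNeighborsClassifier

clf = KNeighborsClassifier(n_neighbors=5, n_jobs=-1)  # n_jobs=-1: use every available core
radius_clf = RadiusNeighborsClassifier(radius=50.0)   # vote among all points within a fixed radius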

Finally, one last point I will make is on actual prediction confidence. There are two ways to measure confidence. One way is by comparing how many examples you got correct versus incorrect in the testing stage; another way is by checking the vote percentage. For example, your overall algorithm may be 97% accurate, but on some of the classifications the votes may have been 3 to 2. While 3 is the majority, it's only a 60% vote rather than the ideal 100%. In terms of telling someone whether or not they have breast cancer, much like a self-driving car differentiating between a blob of tar and a child in a blanket, you probably prefer 100%! Thus, in the case of a 60% vote on a 97% accurate classifier, you can be 97% sure that you are only 60% certain about your classification. It's entirely possible that these 60% votes are responsible for part of the 3% that was classified incorrectly.
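
To make that concrete, here is a tiny worked example of vote confidence, using the same Counter tally as the k_nearest_neighbors function below (2 and 4 are the benign/malignant class labels from our dataset):

# A 3-to-2 vote with k=5: the majority class wins, but confidence is only 60%.
from collections import Counter

votes = [2, 2, 4, 2, 4]
vote_result = Counter(votes).most_common(1)[0][0]              # winning class: 2
confidence = Counter(votes).most_common(1)[0][1] / len(votes)  # 3 of 5 votes = 0.6
print(vote_result, confidence)  # 2 0.6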

Alright, so we just wrote our own classifier that achieves 97% accuracy, but it doesn't perform well on everything. K Nearest Neighbors is very useful since it performs well on both linear and non-linear data. Its main downfalls are scale, outliers, and bad data (recall our useless inclusion of the ID column).

Sticking with supervised machine learning, specifically classification, we're going to cover the Support Vector Machine next. Here is the ending code:

import numpy as np
import warnings
from collections import Counter
import pandas as pd
import random

def k_nearest_neighbors(data, predict, k=3):
    # warn if k is not larger than the number of voting groups (ties become possible)
    if len(data) >= k:
        warnings.warn('K is set to a value less than total voting groups!')

    # compute the Euclidean distance from the prediction point to every known point
    distances = []
    for group in data:
        for features in data[group]:
            euclidean_distance = np.linalg.norm(np.array(features) - np.array(predict))
            distances.append([euclidean_distance, group])

    # tally the classes of the k closest points
    votes = [i[1] for i in sorted(distances)[:k]]
    vote_result = Counter(votes).most_common(1)[0][0]
    # confidence: fraction of the k votes that went to the winning class
    confidence = Counter(votes).most_common(1)[0][1] / k

    return vote_result, confidence


df = pd.read_csv("breast-cancer-wisconsin.data.txt")
df.replace('?', -99999, inplace=True)   # treat missing values as extreme outliers
df.drop(['id'], axis=1, inplace=True)   # the id column carries no predictive value
full_data = df.astype(float).values.tolist()
random.shuffle(full_data)

# manual train/test split: the last 40% of the shuffled data is held out for testing
test_size = 0.4
train_set = {2:[], 4:[]}
test_set = {2:[], 4:[]}
train_data = full_data[:-int(test_size*len(full_data))]
test_data = full_data[-int(test_size*len(full_data)):]

# the last column is the class label (2 = benign, 4 = malignant)
for i in train_data:
    train_set[i[-1]].append(i[:-1])

for i in test_data:
    test_set[i[-1]].append(i[:-1])

# classify every test point with the custom classifier and tally the accuracy
correct = 0
total = 0

for group in test_set:
    for data in test_set[group]:
        vote, confidence = k_nearest_neighbors(train_set, data, k=5)
        if group == vote:
            correct += 1
        total += 1

print('Accuracy:', correct/total)

The next tutorial: Support Vector Machine introduction