Balancing neural network training data- Python Plays GTA V
Python Plays GTA V Part 10 - Balancing training data for self driving car neural network

Welcome to Part 10 of the Python Plays: Grand Theft Auto V tutorial series, where we're working on creating a self-driving car in the game.

Before we get into the neural network model and training it, one other thing to consider is that, chances are, the vast majority of our moves are going to be forward. If we throw data at a neural network that is, for example, 80% biased toward one class, the network will learn to almost always predict that class, except in the cases it has memorized otherwise. The problem here is that the network will almost certainly overfit. So, in training and validation, you might see that your accuracy is 99%, so surely it's not just predicting that 80% class, but then you throw some out-of-sample data at the network, or even attempt to actually use it, and you're baffled by the results! What happened is that the network overfit: it learned to predict the majority class, plus a bunch of special-case rules for the edge cases it saw in training.

For this reason, it's fairly useful, if possible, to balance the data beforehand. There are other ways to handle this, and many ways to balance data. For me, I am going to elect to find the least common classification, take its length, and truncate every other classification to that same length, effectively throwing out any data over that threshold.

Let's begin by creating a new Python file, balance_data.py:

# balance_data.py

import numpy as np
import pandas as pd
from collections import Counter
from random import shuffle

# allow_pickle=True is required on newer NumPy versions (1.16.3+),
# since the array holds Python objects (image arrays paired with
# one-hot lists)
train_data = np.load('training_data.npy', allow_pickle=True)

df = pd.DataFrame(train_data)
print(df.head())
print(Counter(df[1].apply(str)))
                                                   0          1
0  [[50, 62, 173, 200, 188, 135, 131, 131, 125, 1...  [1, 0, 0]
1  [[38, 38, 163, 183, 175, 160, 129, 129, 128, 1...  [1, 0, 0]
2  [[82, 41, 93, 147, 191, 160, 138, 134, 112, 12...  [1, 0, 0]
3  [[80, 32, 28, 85, 61, 138, 191, 191, 158, 140,...  [0, 1, 0]
4  [[49, 31, 70, 89, 111, 61, 29, 52, 56, 83, 194...  [0, 1, 0]
Counter({'[0, 1, 0]': 70365, '[0, 0, 1]': 6708, '[1, 0, 0]': 6427})

Here, we can see that I couldn't make it to 100K samples, along with how unbalanced this data really is towards just going forward all of the time. If we trained right on this, we'd wind up with a model that only wanted to go forward.
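To see just how deceptive accuracy can be here, a quick back-of-the-envelope calculation with the counts printed above shows what a model that only ever predicts "forward" would score:

```python
# Counts from the Counter output above:
forwards, rights, lefts = 70365, 6708, 6427
total = forwards + rights + lefts

print(total)                       # 83500
print(round(forwards / total, 3))  # 0.843
```

So a do-nothing majority-class model already hits about 84% "accuracy," which is why a high validation score on unbalanced data tells you very little.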

Now, just to note, the way a neural network generally works is that it produces an output layer, and then you, the programmer, apply an argmax to that output layer to get the predicted class, but the actual output values are almost never a perfect 1 or 0; they're numbers like 0.85521, or 0.1241, and so on. Thus, if you really wanted to keep all of this data, you could build in rules like: if the forward prediction isn't greater than 0.95, go with the largest predicted turn instead. This way, even though argmax() says go forward, you can still detect turns. This is totally viable. People often don't consider that they may need to apply a further algorithm to their neural network's output to get what they actually want.
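To make that concrete, here's a minimal sketch of such a post-processing rule. The function name and the 0.95 threshold are hypothetical, and the output ordering [left, forward, right] is assumed to match the one-hot encoding used in this series:

```python
def choose_action(output, forward_threshold=0.95):
    """Pick an action from raw network output.

    Assumes output is ordered [left, forward, right]. The 0.95
    threshold is an illustrative tuning knob, not a fixed rule.
    """
    left, forward, right = output
    if forward >= forward_threshold:
        return 'forward'
    # forward wasn't confident enough: go with the stronger turn
    return 'left' if left > right else 'right'

print(choose_action([0.02, 0.97, 0.01]))  # forward
print(choose_action([0.10, 0.85, 0.05]))  # left
```

Note that in the second call, argmax alone would have said forward, but the threshold rule turns left instead.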

I am going to balance this data, however!

lefts = []
rights = []
forwards = []

# use NumPy's shuffle here; random.shuffle can corrupt a NumPy array,
# since row indexing returns views rather than copies
np.random.shuffle(train_data)

for data in train_data:
    img = data[0]
    choice = data[1]

    # one-hot encoding: [1,0,0] = left, [0,1,0] = forward, [0,0,1] = right
    if choice == [1,0,0]:
        lefts.append([img,choice])
    elif choice == [0,1,0]:
        forwards.append([img,choice])
    elif choice == [0,0,1]:
        rights.append([img,choice])
    else:
        print('no matches')

# truncate forwards to the shorter of lefts/rights, then truncate
# lefts and rights to match
forwards = forwards[:len(lefts)][:len(rights)]
lefts = lefts[:len(forwards)]
rights = rights[:len(forwards)]

We're also shuffling the data here. I've had some people correct me when I do a classification tutorial, saying that we should keep temporal data in the order it came in, such as with finance data and so on. It depends on the algorithm you are running. In the case of our self-driving car, when the network runs, it's only going to be looking frame by frame. We could consider using something like a recurrent neural network over consecutive frames, but that's not the model we're using here, so instead we're going to shuffle the data so we don't wind up with any strange biases. If you're someone who is against shuffling, feel free to skip it.

Once it's all been shuffled, we set the forwards (certainly the largest class) to be sliced to the length of the left turns, then sliced again to the length of the right turns. Whichever is shortest is now the length of the forwards. Then we just set the lefts and rights to the length of the forwards, and boom, they're all the same length. Now we just need to put them all together and save!
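Before doing that, here's a quick toy check of the truncation logic, using the class counts printed earlier with placeholder strings instead of real samples (this isn't part of the actual script):

```python
# Stand-in lists with the same lengths as our real classes
lefts = ['L'] * 6427
forwards = ['F'] * 70365
rights = ['R'] * 6708

# The same slicing as in balance_data.py
forwards = forwards[:len(lefts)][:len(rights)]
lefts = lefts[:len(forwards)]
rights = rights[:len(forwards)]

print(len(forwards), len(lefts), len(rights))  # 6427 6427 6427
```

All three classes end up at the length of the rarest one (lefts, at 6427), as intended.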

final_data = forwards + lefts + rights
shuffle(final_data)

# wrap the list in an object array explicitly; newer NumPy versions
# refuse to build a ragged array implicitly
np.save('training_data_v2.npy', np.array(final_data, dtype=object))
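If you want to be sure the balancing worked, a hypothetical sanity check (not part of the tutorial's script) is to save, reload, and re-count the classes. Here it is demonstrated on a tiny synthetic dataset written to a temp directory:

```python
import os
import tempfile
from collections import Counter

import numpy as np

# Tiny synthetic stand-in for final_data: one sample per class
final_data = [[np.zeros((2, 2)), [1, 0, 0]],
              [np.zeros((2, 2)), [0, 1, 0]],
              [np.zeros((2, 2)), [0, 0, 1]]]

path = os.path.join(tempfile.mkdtemp(), 'training_data_v2.npy')
np.save(path, np.array(final_data, dtype=object))

# Reload and count labels, just like we did for the raw data
balanced = np.load(path, allow_pickle=True)
counts = Counter(str(row[1]) for row in balanced)
print(counts)
```

Against the real training_data_v2.npy, the same Counter call should report identical counts for all three classes.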

The next tutorial: Training Self-Driving Car neural network - Python Plays GTA V