Welcome to part three of Deep Learning with Neural Networks and TensorFlow, and part 45 of the Machine Learning tutorial series. In this tutorial, we're going to be heading (falling) down the rabbit hole by creating our own Deep Neural Network with TensorFlow.
We're going to be working first with the MNIST dataset, which is a dataset that contains 60,000 training samples and 10,000 testing samples of hand-written and labeled digits, 0 through 9, so ten total "classes." I will note that this is a very small dataset in terms of what you would be working with in any realistic setting, but it should also be small enough to work on everyone's computers.
The MNIST dataset contains the images, which we'll be working with as purely black and white, thresholded images of size 28 x 28, or 784 pixels total. Our features will be the pixel values for each pixel, thresholded: either the pixel is "blank" (nothing there, a 0), or there is something there (a 1). Those are our features. We're going to attempt to use just this extremely rudimentary data to predict the number we're looking at (a 0, 1, 2, 3, 4, 5, 6, 7, 8, or 9). We're hoping that our neural network will somehow create an internal model of the relationships between pixels, and be able to look at new examples of digits and predict them with a high degree of accuracy.
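To make "thresholded pixel values as features" concrete, here is a minimal sketch with a hypothetical image array (the 0.5 cutoff is just an assumption for illustration):

```python
import numpy as np

# A hypothetical 28x28 "image" with values between 0 and 1, standing in for a digit:
image = np.random.rand(28, 28)

features = image.reshape(784)              # flatten to 784 pixel values
features = (features > 0.5).astype(int)    # threshold: 0 = blank, 1 = something there
print(features.shape)                      # (784,)
```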
While the code here will not be all that long, it can be quite confusing if you're not fully understanding what is supposed to be happening, so let's try to condense what we've learned so far, and what we're going to be doing here.
First, we take our input data, and we need to send it to hidden layer 1. Thus, we weight the input data and send it to layer 1, where it will undergo the activation function, so the neuron can decide whether or not to fire and output some data to either the output layer or another hidden layer. We will have three hidden layers in this example, making this a Deep Neural Network. From the output we get, we compare that output to the intended output. We use a cost function (alternatively called a loss function) to determine how wrong we are. Finally, we use an optimizer function, the Adam Optimizer in this case, to minimize the cost (how wrong we are). The way cost is minimized is by tinkering with the weights, with the goal of lowering the cost. How quickly we want to lower the cost is determined by the learning rate. The lower the learning rate, the slower we learn, and the more likely we are to get better results. The higher the learning rate, the quicker we learn, giving us faster training times, but the results may suffer. There are diminishing returns here; you cannot just keep lowering the learning rate and always do better, of course.
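As a toy sketch of what "tinkering with the weights" looks like for a single made-up weight (this is plain gradient-descent-style arithmetic with invented numbers, not the actual Adam Optimizer):

```python
# A toy sketch of a single weight update: the learning rate scales how big
# each adjustment is. All values here are hypothetical.
weight = 0.8
learning_rate = 0.01
gradient = 2.5    # hypothetical slope of the cost with respect to this weight

weight = weight - learning_rate * gradient   # nudge the weight to lower the cost
print(weight)                                # 0.775
```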
The act of sending the data straight through our network means we're operating a feed forward neural network. The adjustment of the weights, working backward from the output, is our back propagation.
We do this feeding forward and back propagation however many times we want; each full cycle is called an epoch. We can pick any number we like for the number of epochs, but you would probably want to avoid too many, which causes overfitting.
After each epoch, we've hopefully further fine-tuned our weights, lowering our cost and improving accuracy. When we've finished all of the epochs, we can test using the testing set.
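Conceptually, the whole cycle looks something like the toy sketch below: a made-up one-weight "network" that should learn to multiply its input by 3, adjusted a little every epoch. This is only an illustration; the real TensorFlow version comes in the next part.

```python
# A toy, runnable sketch of epochs: repeatedly feed forward and adjust a single
# made-up weight. All data and values here are hypothetical.
weight = 0.0
learning_rate = 0.1
data = [(1.0, 3.0), (2.0, 6.0), (4.0, 12.0)]     # hypothetical (input, intended output) pairs

for epoch in range(10):                          # each full pass over the data is one epoch
    for x_val, target in data:
        output = weight * x_val                  # feed forward
        error = output - target                  # how wrong we are (a crude "cost")
        weight -= learning_rate * error * x_val  # back-propagation-style adjustment
    print(epoch, round(weight, 4))               # the weight creeps toward 3.0
```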
Got it? Great. Prepare for launch!
```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
```
We import TensorFlow and the sample data we are going to use. Note the one_hot parameter there. The term comes from electronics, where just one element, out of all the others, is literally "hot," or on. This is useful for the multi-class classification task we have here (0, 1, 2, 3, 4, 5, 6, 7, 8, or 9). Thus, rather than a 0's output being just a 0 and a 1's being just a 1, we have something more like:
```
0 = [1,0,0,0,0,0,0,0,0,0]
1 = [0,1,0,0,0,0,0,0,0,0]
2 = [0,0,1,0,0,0,0,0,0,0]
3 = [0,0,0,1,0,0,0,0,0,0]
...
```
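If you want to see the same idea outside of TensorFlow, here is a minimal numpy sketch of one-hot encoding (the labels shown are just hypothetical examples):

```python
import numpy as np

# A minimal sketch of one-hot encoding with numpy:
labels = np.array([0, 3, 9])
print(np.eye(10)[labels])   # each row is a length-10 vector with a single 1 in it
```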
Alright, so we have our data. I chose to use the MNIST dataset because it's a decent dataset to start with, and actually collecting raw data and converting it to something to work with can take more time than creating the machine learning model itself, and I think most people here want to learn neural networks, not web scraping and regular expressions.
Now we're going to begin building the model:
```python
n_nodes_hl1 = 500
n_nodes_hl2 = 500
n_nodes_hl3 = 500

n_classes = 10
batch_size = 100
```
We begin by specifying how many nodes each hidden layer will have, how many classes our dataset has, and what our batch size will be. While you *can* in theory train the entire network all at once, it's impractical. Many of you probably have computers that can handle the MNIST dataset in full, but most of you do not have computers, or access to computers, that can do realistically sized datasets all at once. Thus, we do the optimization in batches. In this case, we will do batches of 100.
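As a quick sketch of what batching looks like with this dataset, assuming the `mnist` object loaded above, the helper hands us one batch of images and labels at a time:

```python
# Pull a single batch of 100 samples from the training set:
epoch_x, epoch_y = mnist.train.next_batch(batch_size)
print(epoch_x.shape)   # (100, 784) -- 100 flattened images
print(epoch_y.shape)   # (100, 10)  -- 100 one-hot labels
```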
```python
x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float')
```
These are our placeholders for some values in our graph. Recall that you simply build the model in your TensorFlow graph; from there, TensorFlow manipulates everything, not you. This will be even more obvious once we finish and you try to look for where we modify the weights! Notice that I have used [None, 784] as a second parameter in the first placeholder. This is an optional parameter. It can be useful, however, to be explicit like this. If you are not explicit, TensorFlow will stuff anything in there. If you are explicit about the shape, TensorFlow will throw an error if something out of shape attempts to hop into that variable's place.
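Here is a small sketch of that shape check in action, assuming the placeholders defined above (the arrays are just made-up zeros for illustration):

```python
import numpy as np

with tf.Session() as sess:
    good = np.zeros((5, 784), dtype=np.float32)
    print(sess.run(x, feed_dict={x: good}).shape)   # (5, 784) -- accepted

    # bad = np.zeros((5, 100), dtype=np.float32)
    # sess.run(x, feed_dict={x: bad})   # raises ValueError: can't feed a (5, 100)
    #                                   # array into a placeholder shaped (?, 784)
```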
We're now done with our constants and starting values. Now we can actually build the Neural Network Model:
```python
def neural_network_model(data):
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([784, n_nodes_hl1])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}

    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}

    hidden_3_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl3]))}

    output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
                    'biases': tf.Variable(tf.random_normal([n_classes]))}
```
Here, we begin defining our weights and our... HOLD on, wait a sec, what are these biases!? The bias is a value that is added to our sums before being passed through the activation function, not to be confused with a bias node, which is just a node that is always on. The purpose of the bias here is mainly to handle scenarios where all neurons fired a 0 into the layer. A bias makes it possible for a neuron to still fire out of that layer. A bias is as unique as the weights, and will need to be optimized too.
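A toy numeric sketch (with made-up values) shows why this matters:

```python
# Every neuron in the previous layer output a 0, yet the bias keeps this neuron alive:
inputs = [0.0, 0.0, 0.0]
weights = [0.5, -1.2, 0.8]
bias = 0.7

weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias   # 0 + 0.7
activation = max(0.0, weighted_sum)       # ReLU: the neuron can still fire
print(activation)                         # 0.7
```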
All we've done so far is create a starting definition for our weights and biases. These definitions are just random values in the shape that the layer's matrix should be (this is what tf.random_normal does for us: it outputs random values in the shape we want). Nothing has actually happened yet, and no flow (feed forward) has occurred yet. Let's start the flow:
```python
    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'])
    l2 = tf.nn.relu(l2)

    l3 = tf.add(tf.matmul(l2, hidden_3_layer['weights']), hidden_3_layer['biases'])
    l3 = tf.nn.relu(l3)

    output = tf.matmul(l3, output_layer['weights']) + output_layer['biases']

    return output
```
Here, we take values into layer one. What are the values? They are the raw input data multiplied by their unique weights (starting as random, but to be optimized): tf.matmul(data, hidden_1_layer['weights']). We then add the bias with tf.add, and pass the result through the tf.nn.relu activation function. We repeat this process for each of the hidden layers, all the way down to our output, where the final values are still the multiplication of the previous layer's output and the weights, plus the output layer's bias values.
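If the shapes are hard to picture, here is a hand-rolled numpy sketch of the same flow (only one hidden layer shown, with hypothetical random values; the real work lives in the TensorFlow graph above):

```python
import numpy as np

batch = np.random.rand(100, 784)                            # 100 flattened images
w1, b1 = np.random.randn(784, 500), np.random.randn(500)
l1 = np.maximum(0, batch.dot(w1) + b1)                      # relu(data * weights + biases) -> (100, 500)

w_out, b_out = np.random.randn(500, 10), np.random.randn(10)
output = l1.dot(w_out) + b_out                              # (100, 10): one score per class
print(l1.shape, output.shape)
```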
When done, we simply return that output layer. So now, we've modeled the network, and have almost completed the entire computation graph. In the next tutorial, we're going to build a function that actually runs and trains the network with TensorFlow.
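As a preview, a minimal sketch (not the full training code) of how that returned output could be wired into the cost function and Adam Optimizer described earlier might look like this; the next part builds and runs this properly:

```python
# Hook the model's output into a cost and an optimizer:
prediction = neural_network_model(x)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
```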