Welcome to part eleven of the Deep Learning with Neural Networks and TensorFlow tutorials. In this tutorial, we're going to cover how to code a Recurrent Neural Network model with an LSTM in TensorFlow.
To begin, we're going to start with the exact same code as we used with the basic multilayer-perceptron model:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

n_nodes_hl1 = 500
n_nodes_hl2 = 500
n_nodes_hl3 = 500

n_classes = 10
batch_size = 100

x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float')

def neural_network_model(data):
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([784, n_nodes_hl1])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}

    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}

    hidden_3_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl3]))}

    output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
                    'biases': tf.Variable(tf.random_normal([n_classes]))}

    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'])
    l2 = tf.nn.relu(l2)

    l3 = tf.add(tf.matmul(l2, hidden_3_layer['weights']), hidden_3_layer['biases'])
    l3 = tf.nn.relu(l3)

    output = tf.matmul(l3, output_layer['weights']) + output_layer['biases']

    return output

def train_neural_network(x):
    prediction = neural_network_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)

    hm_epochs = 10
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            for _ in range(int(mnist.train.num_examples / batch_size)):
                epoch_x, epoch_y = mnist.train.next_batch(batch_size)
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c

            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

train_neural_network(x)
From here, we're going to simply modify the model function, along with a couple of variables.
To begin:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib import rnn

mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

hm_epochs = 3
n_classes = 10
batch_size = 128
chunk_size = 28
n_chunks = 28
rnn_size = 128

x = tf.placeholder('float', [None, n_chunks, chunk_size])
y = tf.placeholder('float')
Here, we're importing TensorFlow, the MNIST data, and TensorFlow's rnn code (the LSTM cell and static_rnn). We're also defining the chunk size, number of chunks, and rnn size as new variables, and the shape of the x placeholder changes to include the chunks. In the basic neural network, we sent in the entire image of pixel data all at once. With the Recurrent Neural Network, we instead treat each image as a sequence of chunks, one row of pixels per timestep.
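To make that concrete, here's a tiny NumPy sketch (not part of the tutorial code) of how one flattened 784-pixel image becomes a sequence of 28 chunks of 28 pixels each:

import numpy as np

# One flattened MNIST image: 784 pixel values.
flat = np.arange(784)

# Reshape into 28 rows of 28 pixels; the RNN reads one row (chunk) per timestep.
seq = flat.reshape(28, 28)

print(seq.shape)   # (28, 28) -> n_chunks rows of chunk_size pixels
print(seq[0])      # the first chunk: pixels 0..27, the top row of the image

Each row of that array is one chunk the LSTM will see as a single timestep.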
Now our new model function:
def recurrent_neural_network(x):
    layer = {'weights': tf.Variable(tf.random_normal([rnn_size, n_classes])),
             'biases': tf.Variable(tf.random_normal([n_classes]))}

    # Convert [batch_size, n_chunks, chunk_size] into a list of n_chunks
    # tensors, each [batch_size, chunk_size], which is what static_rnn expects.
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, chunk_size])
    x = tf.split(x, n_chunks, 0)

    lstm_cell = rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)
    outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)

    output = tf.matmul(outputs[-1], layer['weights']) + layer['biases']

    return output
We have a weights/biases dictionary like before, but then we make some modifications to our input data, x. We're doing this purely to satisfy the structure TensorFlow wants for feeding a recurrent cell into static_rnn: a list of n_chunks tensors, each of shape [batch_size, chunk_size]. The one possibly confusing thing here is the transpose operation. Any time there's an operation like this in TensorFlow, you can either play with the values in an interactive session, or you can just use NumPy for a quick example. For example, we can use the following NumPy code:
import numpy as np

x = np.ones((1, 2, 3))
print(x)
print(np.transpose(x, (1, 0, 2)))
The output:
[[[ 1.  1.  1.]
  [ 1.  1.  1.]]]
[[[ 1.  1.  1.]]

 [[ 1.  1.  1.]]]
There, you can see the change from the first dimension to the second; the reshape and split then turn the data into a list of per-chunk arrays. A quick shape check of that whole pipeline is sketched below, followed by the training function, which only needs a few minor changes.
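Here's a rough NumPy sketch of the full transpose, reshape, and split pipeline (the batch of 4 ones is just an arbitrary stand-in, not part of the tutorial code):

import numpy as np

batch = np.ones((4, 28, 28))           # (batch_size, n_chunks, chunk_size)

t = np.transpose(batch, (1, 0, 2))     # (n_chunks, batch_size, chunk_size)
r = t.reshape(-1, 28)                  # (n_chunks * batch_size, chunk_size)
chunks = np.split(r, 28, axis=0)       # list of n_chunks arrays

print(len(chunks), chunks[0].shape)    # 28 (4, 28)

Each element of the resulting list has shape (batch_size, chunk_size), one array per timestep, which is the form static_rnn consumes. Now, the updated training function: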
def train_neural_network(x):
    prediction = recurrent_neural_network(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            for _ in range(int(mnist.train.num_examples / batch_size)):
                epoch_x, epoch_y = mnist.train.next_batch(batch_size)
                # Reshape each flat 784-pixel batch into sequences of chunks.
                epoch_x = epoch_x.reshape((batch_size, n_chunks, chunk_size))

                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c

            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: mnist.test.images.reshape((-1, n_chunks, chunk_size)),
                                          y: mnist.test.labels}))

train_neural_network(x)
The changes here are in epoch_x, which we reshape so it's no longer a flat input of 784 values but is instead organized as n_chunks chunks of chunk_size pixels each. The other change is when we calculate accuracy: each test image is reshaped the same way, to n_chunks by chunk_size, except the first dimension is -1 rather than batch_size. The -1 lets the reshape infer the number of examples on its own, since here we're evaluating the entire test set rather than a training batch of batch_size images.
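As a quick illustration of what that -1 does (a sketch with a placeholder array; the 10,000 matches the size of the MNIST test set):

import numpy as np

images = np.zeros((10000, 784))        # stand-in for the flat test images
reshaped = images.reshape(-1, 28, 28)  # -1 lets NumPy infer the 10000

print(reshaped.shape)                  # (10000, 28, 28)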
After this, with a mere 3 epochs:
Epoch 0 completed out of 3 loss: 192.525049236
Epoch 1 completed out of 3 loss: 54.7218228597
Epoch 2 completed out of 3 loss: 36.7738111783
Accuracy: 0.9748
With 10 epochs:
Epoch 0 completed out of 10 loss: 183.533833001
Epoch 1 completed out of 10 loss: 53.2128913924
Epoch 2 completed out of 10 loss: 36.641087316
Epoch 3 completed out of 10 loss: 28.2334972355
Epoch 4 completed out of 10 loss: 23.5787885857
Epoch 5 completed out of 10 loss: 20.3254865455
Epoch 6 completed out of 10 loss: 17.0910299073
Epoch 7 completed out of 10 loss: 15.3585778594
Epoch 8 completed out of 10 loss: 12.5780420878
Epoch 9 completed out of 10 loss: 12.060161829
Accuracy: 0.9827
As we can see, even on image data, a Recurrent Neural Network with an LSTM cell has a lot of potential. In the next tutorial, we're going to jump into the basics of the Convolutional Neural Network.