Welcome to part fourteen of the Deep Learning with Neural Networks and TensorFlow tutorials. Today, we're going to be covering TFLearn, which is a high-level abstraction layer for TensorFlow.
In many cases, I am opposed to abstraction; I am certainly not a fan of abstraction for the sake of abstraction, and I like to keep as much room for customization as possible. That said, if it's easy to make mistakes, or the code you are writing is overly verbose, chances are abstraction is a good idea. This tutorial couldn't be timed better, as I actually made a couple of mistakes in the previous convolutional neural network video tutorial. Those mistakes went under the radar, costing us a few precious percentage points of accuracy. When you're already in the 90%s, the difference between 95% and 97% is actually very significant, but I didn't notice any issue in the code.
The issue was here:
conv1 = conv2d(x, weights['W_conv1'])
conv1 = maxpool2d(conv1)

conv2 = conv2d(conv1, weights['W_conv2'])
conv2 = maxpool2d(conv2)
If you want to see the full code, check out the previous tutorial using the sidebar (or bottom bar on small screens). The biases don't matter much, but they do matter, and I forgot them initially. More importantly, though, we forgot the activation functions on each layer! We only had an activation function on the fully connected layer. Our network still worked, but it only gave us about 95% accuracy, which is relatively poor compared to industry standards. The lines should have been:
conv1 = tf.nn.relu(conv2d(x, weights['W_conv1']) + biases['b_conv1'])
conv1 = maxpool2d(conv1)

conv2 = tf.nn.relu(conv2d(conv1, weights['W_conv2']) + biases['b_conv2'])
conv2 = maxpool2d(conv2)
With this fix, our accuracy became ~97.5%.
Now, I don't claim to be the best or brightest programmer, but I think mistakes like these are fairly easy to make, and very easy to leave unnoticed in production. With a higher-level library, it's far less likely that we'll make mistakes like these.
Okay, so you're sold on higher-level libraries, but which one? TensorFlow is still in beta, and yet we already have Keras, SKFlow, TFLearn, and TFSlim. Aside from Keras, the other three are TensorFlow-specific.
Why we need this many is beyond me, but we really just need to pick one. I was originally going to go with Keras (since it supports both TensorFlow and Theano), but, after looking deeper, it appears to me that TFLearn is the best one to use if you are using TensorFlow, since it works more closely with TensorFlow's other built-in features, so we're going to use that.
To begin, I think it makes sense to do a model that we've already done. Let's repeat the Convolutional Neural Network (ConvNet/CNN). In straight TensorFlow code, that looks like:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

n_classes = 10
batch_size = 128

x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float')

keep_rate = 0.8
keep_prob = tf.placeholder(tf.float32)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1,1,1,1], padding='SAME')

def maxpool2d(x):
    # size of window / movement of window
    return tf.nn.max_pool(x, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')

def convolutional_neural_network(x):
    weights = {'W_conv1':tf.Variable(tf.random_normal([5,5,1,32])),
               'W_conv2':tf.Variable(tf.random_normal([5,5,32,64])),
               'W_fc':tf.Variable(tf.random_normal([7*7*64,1024])),
               'out':tf.Variable(tf.random_normal([1024, n_classes]))}

    biases = {'b_conv1':tf.Variable(tf.random_normal([32])),
              'b_conv2':tf.Variable(tf.random_normal([64])),
              'b_fc':tf.Variable(tf.random_normal([1024])),
              'out':tf.Variable(tf.random_normal([n_classes]))}

    x = tf.reshape(x, shape=[-1, 28, 28, 1])

    conv1 = tf.nn.relu(conv2d(x, weights['W_conv1']) + biases['b_conv1'])
    conv1 = maxpool2d(conv1)

    conv2 = tf.nn.relu(conv2d(conv1, weights['W_conv2']) + biases['b_conv2'])
    conv2 = maxpool2d(conv2)

    fc = tf.reshape(conv2, [-1, 7*7*64])
    fc = tf.nn.relu(tf.matmul(fc, weights['W_fc']) + biases['b_fc'])
    fc = tf.nn.dropout(fc, keep_rate)

    output = tf.matmul(fc, weights['out']) + biases['out']

    return output

def train_neural_network(x):
    prediction = convolutional_neural_network(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction, y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)

    hm_epochs = 10
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            for _ in range(int(mnist.train.num_examples/batch_size)):
                epoch_x, epoch_y = mnist.train.next_batch(batch_size)
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c

            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

train_neural_network(x)

This gets us ~97.5% accuracy.
Now, let's build the exact same thing in TFLearn! To begin, let's get our imports and data out of the way:
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression

import tflearn.datasets.mnist as mnist

X, Y, test_x, test_y = mnist.load_data(one_hot=True)

X = X.reshape([-1, 28, 28, 1])
test_x = test_x.reshape([-1, 28, 28, 1])
Immediately, you can see we're just importing functions for the convolution and pooling layers. You can also see we're importing fully_connected and regression. From here, we load in the data and reshape it.
Now we are going to begin building the convolutional neural network, starting with the input layer:
convnet = input_data(shape=[None, 28, 28, 1], name='input')
Next, we have 2 layers of convolution and pooling:
convnet = conv_2d(convnet, 32, 2, activation='relu')
convnet = max_pool_2d(convnet, 2)

convnet = conv_2d(convnet, 64, 2, activation='relu')
convnet = max_pool_2d(convnet, 2)
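A quick note on the arguments: in TFLearn's conv_2d, the second argument is the number of filters and the third is the filter size, so these layers use 32 and 64 filters of size 2. (In the plain-TensorFlow version above we used 5x5 filters; feel free to experiment with the filter size.)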
Then we add a fully connected layer:
convnet = fully_connected(convnet, 1024, activation='relu')
convnet = dropout(convnet, 0.8)
Now let's do the output layer:
convnet = fully_connected(convnet, 10, activation='softmax')
convnet = regression(convnet, optimizer='adam', learning_rate=0.01,
                     loss='categorical_crossentropy', name='targets')
Now to create the model:
model = tflearn.DNN(convnet)
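If you want to poke around in TensorBoard later, tflearn.DNN also takes a couple of optional logging arguments. A minimal sketch (the 'log' directory name here is just an example):

# Same DNN wrapper, with optional TensorBoard logging.
# tensorboard_dir is where TFLearn writes its summary logs, and
# tensorboard_verbose=0 keeps logging to just loss and accuracy.
model = tflearn.DNN(convnet, tensorboard_dir='log', tensorboard_verbose=0)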
Finally, train it:
model.fit({'input': X}, {'targets': Y}, n_epoch=10,
          validation_set=({'input': test_x}, {'targets': test_y}),
          snapshot_step=500, show_metric=True, run_id='mnist')
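Note that the dictionary keys here, 'input' and 'targets', match the name parameters we gave to input_data and regression above; that's how TFLearn knows which data feeds which part of the graph.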
That's all there is to it. Full code:
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression

import tflearn.datasets.mnist as mnist

X, Y, test_x, test_y = mnist.load_data(one_hot=True)

X = X.reshape([-1, 28, 28, 1])
test_x = test_x.reshape([-1, 28, 28, 1])

convnet = input_data(shape=[None, 28, 28, 1], name='input')

convnet = conv_2d(convnet, 32, 2, activation='relu')
convnet = max_pool_2d(convnet, 2)

convnet = conv_2d(convnet, 64, 2, activation='relu')
convnet = max_pool_2d(convnet, 2)

convnet = fully_connected(convnet, 1024, activation='relu')
convnet = dropout(convnet, 0.8)

convnet = fully_connected(convnet, 10, activation='softmax')
convnet = regression(convnet, optimizer='adam', learning_rate=0.01,
                     loss='categorical_crossentropy', name='targets')

model = tflearn.DNN(convnet)

model.fit({'input': X}, {'targets': Y}, n_epoch=10,
          validation_set=({'input': test_x}, {'targets': test_y}),
          snapshot_step=500, show_metric=True, run_id='mnist')
With this, we have ~30 lines of much simpler code, compared to ~80 before. Moving forward, to make predictions with the model, you can just do model.predict(x). Want to save your model? model.save(filename). Need to load it later? model.load(filename).
For example, you can do:
model = tflearn.DNN(convnet)

model.fit({'input': X}, {'targets': Y}, n_epoch=5,
          validation_set=({'input': test_x}, {'targets': test_y}),
          snapshot_step=500, show_metric=True, run_id='mnist')

model.save('quicktest.model')
Once you've saved, you can load the model with:
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression

import tflearn.datasets.mnist as mnist

X, Y, test_x, test_y = mnist.load_data(one_hot=True)

X = X.reshape([-1, 28, 28, 1])
test_x = test_x.reshape([-1, 28, 28, 1])

# Building the convolutional network
convnet = input_data(shape=[None, 28, 28, 1], name='input')

# http://tflearn.org/layers/conv/
# http://tflearn.org/activations/
convnet = conv_2d(convnet, 32, 2, activation='relu')
convnet = max_pool_2d(convnet, 2)

convnet = conv_2d(convnet, 64, 2, activation='relu')
convnet = max_pool_2d(convnet, 2)

convnet = fully_connected(convnet, 1024, activation='relu')
convnet = dropout(convnet, 0.8)

convnet = fully_connected(convnet, 10, activation='softmax')
convnet = regression(convnet, optimizer='adam', learning_rate=0.01,
                     loss='categorical_crossentropy', name='targets')

model = tflearn.DNN(convnet)

model.load('quicktest.model')
You still need to set up the structure of the model. The .load method simply restores the weights, so the model obviously needs to have the same layers and neurons as when it was saved.
After this load, you could do something like this to test predictions:
import numpy as np

print(np.round(model.predict([test_x[1]])[0]))
print(test_y[1])
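If you'd rather get an overall score than eyeball individual predictions, the DNN object also has an evaluate method. A minimal sketch, assuming the loaded model from above:

# Mean accuracy over the whole test set (returns a list with one score
# per metric defined on the network).
print(model.evaluate(test_x, test_y))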
Want to continue training? Just run .fit on your new data, save it, and continue, as in the sketch below.
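For example, a minimal sketch of picking training back up, assuming you've rebuilt the network and called model.load('quicktest.model') as above:

# Continue training the restored weights for a couple more epochs, then re-save.
model.fit({'input': X}, {'targets': Y}, n_epoch=2,
          validation_set=({'input': test_x}, {'targets': test_y}),
          snapshot_step=500, show_metric=True, run_id='mnist')

model.save('quicktest.model')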