Neural Networks and TensorFlow - Deep Learning Series [Part 18]

in #deep-learning · 6 years ago

As I said in the previous post, in this lesson we're going to create a session and execute the computational graph that we've built.

The first thing we define is the number of steps, or iterations, that the graph is going to go through. The training process is a loop: at the end of every iteration the network's parameters are updated, so its progress is effectively 'saved', and the next iteration continues training from that point.
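To make the loop idea concrete, here is a minimal NumPy sketch (not the author's TensorFlow code; the data, learning rate, and model are hypothetical) showing how each step updates the weights in place, so step n+1 picks up exactly where step n left off:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 1000 samples, 5 features, linear target.
X = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

w = np.zeros(5)     # model parameters, carried across iterations
lr = 0.1            # learning rate (assumed value)
batch_size = 50     # minibatch size, as in the post

losses = []
for step in range(180):                        # 180 steps, as in the post
    idx = rng.integers(0, len(X), batch_size)  # draw a random minibatch
    xb, yb = X[idx], y[idx]
    pred = xb @ w
    grad = xb.T @ (pred - yb) / batch_size     # gradient of the MSE loss
    w -= lr * grad                             # progress is 'saved' in w
    losses.append(np.mean((pred - yb) ** 2))
```

After the loop, `losses` trends downward because each update builds on the weights left by the previous iteration, which is the same carry-over that a TensorFlow session gives you across `sess.run` calls.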

One thing to notice is that we are training in batches. Many academics, researchers, and practitioners have observed that it can be computationally more efficient to train on smaller batches than to train on the entire dataset at once. Here we are using a minibatch of 50 samples.
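A sketch of how such minibatches can be produced (an assumed helper, similar in spirit to TensorFlow's `mnist.train.next_batch(50)`): shuffle the sample order once, then hand out consecutive slices of 50:

```python
import numpy as np

def minibatches(X, y, batch_size=50, seed=0):
    """Yield (inputs, labels) slices of batch_size samples in shuffled order."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))      # shuffle sample indices once
    for start in range(0, len(X), batch_size):
        sel = order[start:start + batch_size]
        yield X[sel], y[sel]

# Hypothetical dataset: 220 flattened 28x28 images (784 pixels each).
X = np.zeros((220, 784))
y = np.zeros(220, dtype=int)

shapes = [xb.shape for xb, _ in minibatches(X, y)]
# four full batches of 50, then one final batch of the remaining 20
```

The last batch is simply smaller when the dataset size is not a multiple of 50; some pipelines drop it instead, which is a design choice rather than a requirement.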

Ok, so after 180 steps, the accuracy of the model on the training set is 92.9%, while on the test set it is about 90%, which is quite decent for a model trained on the CPU in such a short time. However, this dataset contains grayscale images. We'll see in later lessons that things get more complicated with images that are larger and colored.
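For reference, accuracy figures like these are typically computed as the fraction of samples whose highest-scoring class matches the true label. Here is a NumPy mirror of the usual TensorFlow pattern (`tf.argmax` on the logits, compared against the labels, then averaged); the logits and labels below are made up for illustration:

```python
import numpy as np

logits = np.array([[2.0, 0.1, 0.3],    # predicted class 0
                   [0.2, 1.5, 0.1],    # predicted class 1
                   [0.1, 0.2, 0.9],    # predicted class 2
                   [1.2, 0.1, 0.4]])   # predicted class 0
labels = np.array([0, 1, 2, 1])        # true classes; the last one is wrong

# Fraction of rows where the argmax prediction equals the true label.
accuracy = np.mean(np.argmax(logits, axis=1) == labels)
# 3 of 4 correct -> 0.75
```

Running the same computation over the training set and the test set gives the two numbers reported above.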

Please see the video for the full walkthrough.


To stay in touch with me, follow @cristi


Cristi Vlad Self-Experimenter and Author
