RBM-based Autoencoders with TensorFlow

Recently I tried to implement an RBM-based autoencoder in TensorFlow, similar to the RBMs described in the Semantic Hashing paper by Ruslan Salakhutdinov and Geoffrey Hinton. It seems that autoencoders whose weights were pre-trained with RBMs should converge faster, so I decided to check this.

This post describes some roadblocks in implementing RBMs/autoencoders in TensorFlow and compares the results of different approaches. I assume the reader has prior knowledge of TensorFlow and the machine learning field. All code can be found in this repo.

RBMs differ from usual neural networks in a few ways:

Neural networks usually perform weight updates by gradient descent, but RBMs use Contrastive Divergence (which is basically a funky term for "approximate gradient descent", link to read). At a glance, contrastive divergence computes the difference between a positive phase (energy of the first encoding) and a negative phase (energy of the last encoding).

import tensorflow as tf

# Positive phase: correlations of the initial visible input with the first hidden probabilities.
positive_phase = tf.matmul(
    tf.transpose(visib_inputs_initial), first_encoded_probability)
# Negative phase: the same correlations, taken from the last Gibbs sampling step.
negative_phase = tf.matmul(
    tf.transpose(last_reconstructed_probability), last_encoded_probability)
contrastive_divergence = positive_phase - negative_phase
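
With those two terms, the weight update is just a scaled version of the contrastive divergence. A minimal sketch, continuing from the snippet above; the learning rate, batch size, and `weights` variable are my assumptions, not necessarily the repo's exact code:

# Hypothetical update step: move the weights along the CD direction,
# averaged over the batch and scaled by an assumed learning rate.
learning_rate = 0.01
batch_size = 100
update_weights = tf.assign_add(
    weights, learning_rate * contrastive_divergence / batch_size)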

Also, a key feature of RBMs is that they encode the output in binary form, not as probabilities. You can read more about RBMs here or here.
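
As an illustration of that binary encoding (a minimal sketch; the tensor names `visible_input`, `weights`, and `hidden_bias` are assumptions), the hidden probabilities are compared against uniform noise and thresholded to 0/1 states:

import tensorflow as tf

# Hidden-unit activation probabilities given the visible input.
hidden_probability = tf.nn.sigmoid(
    tf.matmul(visible_input, weights) + hidden_bias)
# Sample a binary state for each hidden unit instead of passing the probability on.
hidden_binary_state = tf.cast(
    hidden_probability > tf.random_uniform(tf.shape(hidden_probability)),
    tf.float32)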

As a prototype, a one-layer TensorFlow RBM implementation was used. For testing, I've taken the well-known MNIST dataset (a dataset of handwritten digits).
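
The data-loading code isn't shown here, but as a sketch, the stock TensorFlow tutorial helper is one way to get MNIST (the "MNIST_data/" path and batch size below are assumptions):

from tensorflow.examples.tutorials.mnist import input_data

# Download and load MNIST; each image arrives as a flat 784-dimensional
# vector with pixel values scaled to [0, 1].
mnist = input_data.read_data_sets("MNIST_data/")
batch_images, _ = mnist.train.next_batch(100)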

Read more…

Welcome Post

Hi to all! A few days ago I set up this blog, and I've decided that there should be at least one post. The main purpose of this blog is to collect and organize some coding-related info. On the Pages, I will keep lists of useful commands, mostly without long explanations. On the Blogs, I will post some more verbose things. We'll see how it goes.