
Artificial Neural Networks: The first contact!

In the beginning, there was motivation

This was my first contact ever! Armed with a will to learn and inspiration gained from Tambet Matiisen's presentation at Helmes Hack Day, I was ready to take on the black magic of artificial neural networks!

Before you proceed, take a look at DeepMind learning to play Atari Breakout.

The first contact

Since I had no idea whatsoever how an Artificial Neural Network (ANN) operates or what I would even need to build one, I had to do some background studying. I also looked at "ready to use" networks such as Caffe and Project Oxford, but stumbled on something called Fast Artificial Neural Network (FANN). It had a long history, and the latest commits on GitHub were only a few days old. I decided to use FANN and started to read more about it on the FANN homepage.


ANNs are basically systems that take some input data together with the desired results and, using this knowledge, can reason about a new piece of data, for example whether it is probable or not. Read more about ANNs on Wikipedia.

Installing the Skynet

After the initial research I was ready to install my very first, very own ANN! The process consisted of the following procedures:

# apt-get install libfann-dev

By this point the core system was ready to go! Since my potential implementing systems were written in PHP, I needed a language binding. This process was as heavy as the previous one:

# pecl install fann

The PHP binding's GitHub page contained an XOR example I could use to test whether my system was set up successfully. It was!
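For reference, here is a minimal sketch of such an XOR test using the PHP FANN extension. The file name xor.data and the exact layer sizes and training parameters are my own choices for illustration, not necessarily the ones from the bundled example:

<?php
// xor.data (FANN format): first line is "num_pairs num_inputs num_outputs",
// followed by alternating input and output lines, e.g.:
// 4 2 1
// -1 -1
// -1
// -1 1
// 1
// 1 -1
// 1
// 1 1
// -1

// 3 layers: 2 inputs, 3 hidden neurons, 1 output (assumed sizes)
$ann = fann_create_standard(3, 2, 3, 1);

fann_set_activation_function_hidden($ann, FANN_SIGMOID_SYMMETRIC);
fann_set_activation_function_output($ann, FANN_SIGMOID_SYMMETRIC);

// Train for at most 500000 epochs or until the error drops below 0.001
fann_train_on_file($ann, 'xor.data', 500000, 1000, 0.001);

// Ask the trained network about one input pair
$output = fann_run($ann, array(-1, 1));
printf("xor(-1, 1) ~ %f\n", $output[0]);

fann_destroy($ann);

If the printed value is close to 1, the extension is installed and training works.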

Training the AI to suggest, not to kill

All I needed now was to train the ANN according to my requirements. For this I had the perfect guinea pig: DrinkPlanet. It is a site somewhat similar to RateBeer, but it has taste patterns for each drink. I used those to generate a training file for each user. This data is then used to suggest drinks the user has not tried yet! In order to do so, I had to divide all my values by 100 so that the inputs fall between -1 and 1. When my ANN reasons about drinks, I keep everything with an output greater than 0.9 as a suggestion!
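To make that concrete, here is a rough sketch of how the per-user training and the 0.9 cut-off could look with the PHP FANN extension. The file names, the number of taste-pattern inputs, the hidden-layer size and the sample drinks are my own assumptions, not DrinkPlanet's actual data:

<?php
// Assumption: each drink is described by 5 taste-pattern values (0..100),
// scaled by 1/100, and the single output says how much the user likes it.
$numInputs  = 5;   // hypothetical number of taste patterns per drink
$numHidden  = 8;   // hypothetical hidden-layer size
$numOutputs = 1;

// Train a network per user from the generated file, e.g. "user_42.data"
$ann = fann_create_standard(3, $numInputs, $numHidden, $numOutputs);
fann_set_activation_function_hidden($ann, FANN_SIGMOID_SYMMETRIC);
fann_set_activation_function_output($ann, FANN_SIGMOID_SYMMETRIC);
fann_train_on_file($ann, 'user_42.data', 100000, 1000, 0.01);

// Suggest untried drinks: scale raw 0..100 taste values down by 100
// and keep the ones the network scores above 0.9.
$untriedDrinks = array(
    'Imaginary IPA'   => array(80, 10, 55, 30, 70),
    'Fictional Stout' => array(20, 90, 40, 60, 15),
);

$suggestions = array();
foreach ($untriedDrinks as $name => $tastes) {
    $input  = array_map(function ($v) { return $v / 100; }, $tastes);
    $output = fann_run($ann, $input);
    if ($output[0] > 0.9) {
        $suggestions[] = $name;
    }
}

print_r($suggestions);
fann_destroy($ann);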

Conclusion

In conclusion, I'd like to emphasize that it is all quite easy to get started with; the next steps are harder. I still have to figure out the optimal number of training cycles, the optimal number of layers, hidden neurons, etc. But until then I have 10+ beers to taste, so I can be sure whether my current parameters are good or not!
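One possible way to approach that tuning, sketched below under the assumption that a user's data can be split into separate training and test files, is to sweep over a few hidden-layer sizes and compare the mean squared error on the held-out data:

<?php
// Hypothetical split of one user's data into training and test files.
$trainData = fann_read_train_from_file('user_42_train.data');
$testData  = fann_read_train_from_file('user_42_test.data');

// Try a few hidden-layer sizes and see which one generalises best.
foreach (array(4, 8, 16, 32) as $hidden) {
    $ann = fann_create_standard(3, 5, $hidden, 1);
    fann_set_activation_function_hidden($ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output($ann, FANN_SIGMOID_SYMMETRIC);

    fann_train_on_data($ann, $trainData, 50000, 0, 0.001);

    // fann_test_data() returns the MSE of the network on the given data set.
    $mse = fann_test_data($ann, $testData);
    printf("hidden neurons: %2d  test MSE: %f\n", $hidden, $mse);

    fann_destroy($ann);
}

The same loop could be repeated for different epoch counts or layer counts; the configuration with the lowest test error is the one worth keeping.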

More reading
