November 16, 2015

Smoother Node Map Learning

As I've discussed in several of my previous posts, node map learning is a highly flexible approach to machine learning. Because of the way it works, it acts more like a manager overseeing the whole learning operation rather than specializing in a specific task, which makes it better suited to performing well in more complex environments. However, the main problem with node map learning has been how rough the learning process was.

After some careful thinking over the past few days, I stumbled upon an idea for smoothing out the learning process. Consider a neural network. A neural network is essentially a function that can be tuned smoothly and focused on a task, which makes it ideal for machine learning, since small changes produce small, gradual improvements. By borrowing the mechanics of a neural network, a node map can easily be adapted so that it learns just as smoothly.

So let's dive in. This concept is largely built on top of a neural network, so for now, imagine a basic one.
(I have one pictured here.) As you can see, it's a basic network: inputs, outputs, multiple layers of hidden nodes, and a bias node for each layer. I've color coded these for easier viewing. I'll now build onto this one piece at a time. First off, the functions. In a normal neural network, each node has an activation function (usually something like sigmoid). The slight change I'm making is to use a basic line function instead (y = mx + b). The values for the slope and y-intercept are determined by the genetic algorithm, much like the weights are, and are different for each node. Make sense so far? You can also easily use a polynomial in place of the line function (which may offer better results, but that's up to you). An example of that would be a + bx + cx^2 + dx^3... and so on for however many terms you want. There's a small sketch of this idea just below.
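Here's a minimal Python sketch of that idea, purely to illustrate it; the function name and the example coefficients are my own, and in practice the coefficients would come from the genetic algorithm.

```python
def node_function(x, coefficients):
    """Evaluate a + b*x + c*x^2 + ... using GA-evolved coefficients."""
    return sum(c * x ** power for power, c in enumerate(coefficients))

# A simple line function, y = m*x + b, stored as [b, m].
line_coeffs = [0.5, 2.0]                  # b = 0.5, m = 2.0 (evolved per node)
print(node_function(1.5, line_coeffs))    # 0.5 + 2.0 * 1.5 = 3.5

# A cubic version: a + b*x + c*x^2 + d*x^3.
cubic_coeffs = [0.1, -0.4, 0.7, 0.05]
print(node_function(1.5, cubic_coeffs))
```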

Now, for the network itself. This will be set up as a fully self-generating design, so all of the hidden nodes are placed by the genetic algorithm itself. That preserves the manager-like quality we had in the original node map learning algorithm. Connections remain fully connected from layer to layer, so that part can be taken out of the hands of the network without removing any real control. This gives a slightly smoother effect for the learning as well. A rough sketch of what that structure could look like follows.
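To make that concrete, here's a rough sketch of how the genome might carry the hidden layer layout while the connections stay fully connected between consecutive layers. The function names and the random initialization are assumptions for illustration, not part of the actual design.

```python
import random

def build_layers(hidden_layer_sizes, n_inputs, n_outputs):
    """Full layer layout: inputs, GA-chosen hidden layers, outputs."""
    return [n_inputs] + list(hidden_layer_sizes) + [n_outputs]

def fully_connected_weights(layer_sizes):
    """One weight matrix per gap between consecutive layers."""
    return [
        [[random.uniform(-1, 1) for _ in range(layer_sizes[i + 1])]
         for _ in range(layer_sizes[i])]
        for i in range(len(layer_sizes) - 1)
    ]

# Say the genetic algorithm decides on two hidden layers of 4 and 3 nodes.
layers = build_layers([4, 3], n_inputs=2, n_outputs=1)
weights = fully_connected_weights(layers)
print(layers)         # [2, 4, 3, 1]
print(len(weights))   # 3 weight matrices, one per layer gap
```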

Next, there are "function" nodes, a new concept for a node. They act similarly to bias nodes, but reversed: full input, no outputs. They can be placed on any row except the input layer, and each one has an input function. What makes these nodes special is that they perform an action when that input function gives a result greater than 0. This makes things very interesting, as the network can now not only give results through its outputs, but also trigger higher-level actions, things like jump or speak. The actions are performed after the neural network has completed its step, in the order they are called (i.e. higher rows are called first). This is an optional node type, though; these sorts of actions can simply be exposed as output nodes instead, giving the developer more control over which functions are available, their order of operation, etc. A small sketch of a function node follows.
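Here's a hedged sketch of how a function node could work: it takes the full weighted input from its row, has no outgoing connections, and runs its action whenever its input function evaluates above zero. The class name, attributes, and the lambda action are illustrative assumptions.

```python
class FunctionNode:
    def __init__(self, coefficients, action):
        self.coefficients = coefficients   # evolved like any other node's function
        self.action = action               # e.g. jump, speak, ...

    def evaluate(self, weighted_input):
        # Same line/polynomial style function as the regular nodes.
        return sum(c * weighted_input ** p for p, c in enumerate(self.coefficients))

    def maybe_fire(self, weighted_input):
        # The action only runs when the node's function gives a result > 0.
        if self.evaluate(weighted_input) > 0:
            self.action()

jump_node = FunctionNode([-0.2, 1.0], action=lambda: print("jump"))

# Called after the network's forward step, higher rows first.
jump_node.maybe_fire(0.5)   # -0.2 + 1.0 * 0.5 = 0.3 > 0, so "jump" prints
jump_node.maybe_fire(0.1)   # -0.2 + 1.0 * 0.1 = -0.1, nothing happens
```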

Finally, the last node type is the network node. (Name subject to change.) These nodes act like mini neural networks living inside the bigger neural network, with the inputs to the node being the inputs to the inner network, and the outputs of the inner network being the outputs of the node. This acts simply as a way to condense and hone specific parts of the network towards specific tasks. It's up to you whether you want to treat these smaller networks as their own brain or as part of the larger one. Here's a rough illustration.
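This is a small illustration of a network node as described above: the node's inputs feed a miniature network, and that network's outputs become the node's outputs. The class name and the tiny linear forward pass are assumptions made just to show the shape of the idea.

```python
class NetworkNode:
    def __init__(self, inner_weights):
        # inner_weights: one weight matrix per layer gap in the mini network
        self.inner_weights = inner_weights

    def forward(self, inputs):
        activations = inputs
        for matrix in self.inner_weights:
            # Plain weighted sums; real nodes would apply their line functions too.
            activations = [
                sum(a * w for a, w in zip(activations, column))
                for column in zip(*matrix)
            ]
        return activations

# A mini network (2 inputs -> 2 hidden -> 1 output) condensed into one node.
node = NetworkNode([
    [[0.5, -0.3], [0.8, 0.1]],   # 2x2 weights into the hidden layer
    [[1.0], [-0.5]],             # 2x1 weights into the output
])
print(node.forward([1.0, 2.0]))  # [2.15]
```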


More information on this topic is coming soon, so stay tuned!
