After some careful thinking over the past few days, I stumbled upon an idea for smoothing out the learning process. Imagine a neural network. At its core, it's a function that can be tuned smoothly, which makes it ideal for machine learning: the error slope is very well rounded, so learning can follow it. By borrowing the mechanics of a neural network, a node map can be adapted fairly easily into something learnable.
So let's dive in. This concept is largely built on top of a neural network, so for now, just imagine a basic one.
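To make "a basic neural network" concrete, here's a minimal sketch of a fully connected forward pass. The tanh activation and the hand-picked weights are my own placeholders for illustration, not part of the design described here:

```python
import math

def forward(inputs, layers):
    """Feed inputs through a fully connected network.
    Each layer is a list of (weights, bias) pairs, one per node."""
    activations = inputs
    for layer in layers:
        activations = [
            math.tanh(sum(w * a for w, a in zip(weights, activations)) + bias)
            for weights, bias in layer
        ]
    return activations

# A tiny 2-input, 2-hidden, 1-output network with placeholder weights.
hidden = [([0.5, -0.5], 0.1), ([0.3, 0.8], -0.2)]
output = [([1.0, -1.0], 0.0)]
print(forward([1.0, 0.0], [hidden, output]))  # one value in (-1, 1)
```

Every node type discussed below slots into a structure like this one.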

Now, for the network itself: this will be set up as a fully self-generating design, so all of the hidden nodes are placed by the genetic algorithm itself. This improves on the whole "manager"-type role we had in the original node map learning algorithm. Layers are still fully connected to one another, so that part can be taken out of the network's hands without removing any control. This also smooths out the learning slightly.
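One way to read "hidden nodes are placed by the genetic algorithm" is that mutation only ever edits layer sizes, while the fully connected wiring is implied. Here's a rough sketch of such a mutation; the 50/50 grow-vs-insert split and the function name `add_hidden_node` are my own assumptions:

```python
import random

def add_hidden_node(layer_sizes, rng):
    """Mutation sketch: the genetic algorithm either grows an existing
    hidden layer by one node or inserts a brand-new hidden layer.
    layer_sizes = [inputs, hidden..., outputs]; adjacent layers stay
    fully connected, so only the sizes need to be evolved."""
    sizes = list(layer_sizes)
    if len(sizes) > 2 and rng.random() < 0.5:
        # Grow a randomly chosen existing hidden layer.
        i = rng.randrange(1, len(sizes) - 1)
        sizes[i] += 1
    else:
        # Insert a new single-node hidden layer at a random depth.
        i = rng.randrange(1, len(sizes))
        sizes.insert(i, 1)
    return sizes

rng = random.Random(0)
print(add_hidden_node([2, 1], rng))  # → [2, 1, 1]
```

Because connectivity is always "fully connected, layer to layer," a mutated genome never produces a broken wiring diagram, which is exactly the control-without-micromanagement the paragraph above describes.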
Next, there are "function" nodes, a new concept for a node. They act similarly to a bias node, but reversed: full input, no outputs. These can be placed on any row except the input layer, and each one has an input function. What makes these nodes special is that they perform an action when their input function gives a result greater than 0. This makes things very interesting, as the network can now not only give results, but also trigger functions at a higher level, stuff like jump, or speak. The functions are performed after the neural network has completed its step, in the order they are called in (i.e. higher rows are called first). This is an optional node type, though: these sorts of actions can simply be placed as output nodes instead, giving the developer more control over which functions are available, their order of operation, etc.
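As a sketch of how function nodes might fire after a step, here's one possible reading: each function node reads the activations of its row, and its action runs only when the weighted sum exceeds 0. The row ordering, the weighted-sum input function, and the `jump`/`speak` callables are all my own illustrative assumptions:

```python
def run_function_nodes(activations_by_row, function_nodes):
    """After the network finishes its step, fire each function node
    whose input function gives a result greater than 0, lower row
    indices first (one reading of "higher rows are called first").
    function_nodes: (row, weights, action) triples; `action` is a
    zero-argument callable -- hypothetical examples below."""
    fired = []
    for row, weights, action in sorted(function_nodes, key=lambda n: n[0]):
        total = sum(w * a for w, a in zip(weights, activations_by_row[row]))
        if total > 0:
            action()
            fired.append(action.__name__)
    return fired

def jump():  print("jump!")   # hypothetical game-agent actions
def speak(): print("hello")

rows = [[1.0, 0.0], [0.2, -0.4]]   # activations after the network step
nodes = [(1, [1.0, 1.0], speak), (0, [1.0, -1.0], jump)]
print(run_function_nodes(rows, nodes))  # → ['jump']
```

Note how `speak` stays silent here: its input sum is negative, so the node consumed its inputs but produced no side effect, which is the "full input, no outputs" behavior in action.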
Finally, the last node type is the network node. (Name subject to change.) These nodes act like mini neural networks existing inside the bigger one: the inputs to the node become the inputs to the inner network, and the inner network's outputs become the node's outputs. This is simply a way to condense and hone specific parts of the network toward specific tasks. It's up to you whether to treat these smaller networks as their own brains, or as part of the larger one.
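A network node can be sketched as a node whose "activation function" is itself a small forward pass. The class name, the tanh activation, and the single-node inner network are my own placeholders:

```python
import math

def dense_forward(inputs, layers):
    """Plain fully connected forward pass with tanh activations."""
    acts = inputs
    for layer in layers:
        acts = [math.tanh(sum(w * a for w, a in zip(ws, acts)) + b)
                for ws, b in layer]
    return acts

class NetworkNode:
    """A node whose behavior is itself a small neural network:
    the node's inputs feed the inner network, and the inner network's
    outputs become the node's outputs."""
    def __init__(self, layers):
        self.layers = layers

    def __call__(self, inputs):
        return dense_forward(inputs, self.layers)

# A 2-input, 1-output inner network used as one node in a larger design.
inner = NetworkNode([[([0.7, -0.3], 0.0)]])
out = inner([0.5, 0.5])
print(out)  # a single value in (-1, 1)
```

Since the node is just a callable, the outer network can evaluate it wherever an ordinary node would sit; whether you evolve or train the inner weights separately is the "own brain vs. part of the larger one" choice mentioned above.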
More information on this topic is coming soon, so keep updated!