November 25, 2015

Project Sarica

So I've finally gotten around to making the artificial intelligence programs I've been talking about for however long it's been. I spent a good two hours last night working on a simple neural network setup, and an evolution algorithm to allow it to learn. I was able to finish them both in only 2 classes, each with less than 100 lines. Knowing that a lot of the processes here are going to be quite taxing, I challenged myself to make them run as memory-friendly and CPU-friendly as possible. To achieve this, I made the entire network run solely in arrays. Only 4 (really long) arrays, to be specific.
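
For illustration, here's a minimal sketch of what an array-based layout like this might look like. This is not the actual code; the post doesn't say which four arrays were used, so the names and layout here are my assumptions:

    // Sketch of a feed-forward network stored entirely in flat arrays.
    // Four arrays total: activations, weights, and two small layout arrays.
    public class ArrayNetwork {
        final double[] values;   // one activation value per neuron
        final double[] weights;  // all connection weights, laid out layer by layer
        final int[] layerStart;  // index of each layer's first neuron in values
        final int[] layerSize;   // neuron count per layer

        ArrayNetwork(int layers, int neuronsPerLayer) {
            layerStart = new int[layers];
            layerSize = new int[layers];
            int total = 0;
            for (int l = 0; l < layers; l++) {
                layerStart[l] = total;
                layerSize[l] = neuronsPerLayer;
                total += neuronsPerLayer;
            }
            values = new double[total];
            weights = new double[(layers - 1) * neuronsPerLayer * neuronsPerLayer];
        }

        // One full forward pass; nothing but multiplies and adds.
        void step() {
            int w = 0;
            for (int l = 1; l < layerStart.length; l++) {
                int in = layerStart[l - 1];
                int out = layerStart[l];
                for (int j = 0; j < layerSize[l]; j++) {
                    double sum = 0;
                    for (int i = 0; i < layerSize[l - 1]; i++) {
                        sum += values[in + i] * weights[w++]; // one multiply-add per connection
                    }
                    values[out + j] = sum;
                }
            }
        }
    }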

Wanting to stress test it, I made a 100 x 100 neuron brain (plus 99 bias neurons) and ran it 1000 times to find the average time for a single step. I was quite surprised that the entire network, consisting of 10,099 neurons and 999,900 connections, completed a single step in a mere 2.7 ms. That's insane. I'm quite happy.
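
If you want to reproduce a measurement like that, a rough timing harness is simple enough (using the hypothetical ArrayNetwork sketched above):

    // Average the cost of a single step over 1000 runs.
    ArrayNetwork net = new ArrayNetwork(100, 100);
    long start = System.nanoTime();
    for (int i = 0; i < 1000; i++) {
        net.step();
    }
    double msPerStep = (System.nanoTime() - start) / 1000.0 / 1_000_000.0;
    System.out.println("avg step: " + msPerStep + " ms");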

In addition to only using arrays to store all of the neural network data, I didn't use any taxing functions such as sigmoid. I used a degree-4 polynomial, which is shaped by the evolutionary algorithm into whatever function is wanted. This means that all of the math used in the network is simply addition and multiplication. Thanks to the MAD operation in computing (MAD basically means that a multiply followed by an add can be performed in a single CPU cycle), this was almost no problem at all for the computer. And this is well more than enough neurons to complete most tasks that a neural network would need.
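
As a concrete (hypothetical) example, a degree-4 polynomial activation can be evaluated with Horner's rule so that each term costs exactly one multiply and one add; the five coefficients are what the evolutionary algorithm would tune:

    // a + b*x + c*x^2 + d*x^3 + e*x^4, evaluated as a pure multiply-add chain.
    static double poly(double x, double a, double b, double c, double d, double e) {
        return (((e * x + d) * x + c) * x + b) * x + a;
    }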

In fact, the network could be expanded even more, if needed. The largest I was able to do before running into memory issues was 300 x 300. At that point, I was already using all of the heap memory Java had given me by default. I could've easily allocated more, but it was unnecessary. At 300 x 300 neurons, the network completed a single step in about 195 ms.
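
That ceiling checks out on paper, too: counting connections the same way as above, a 300 x 300 network has 299 x (300 + 1) x 300, or roughly 27 million connections. Assuming 8-byte double-precision weights, that's around 200 MB for the weights alone, which lands right around (or beyond) the default maximum heap Java hands out on many machines.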

If you want to see the code for this class, see this link. (Note: By the time you read this, the code may have been expanded on, so it may have a lot more features, be faster, or whatever else I do to it.)



Now, to start out with this project, I need to think of an environment. I was originally going to make a hunt-and-gather type world, but that wouldn't really put much focus on the types of learning I've been wanting to dive into. As of now, I'm stumped as to which type of world to make for it.

November 23, 2015

Subconscious Learning System

I had a thought today. It was for a type of learning which seemed quite natural, but which I had never imagined being able to implement before: a system for self-scoring. As you might know, in general evolutionary algorithms, a computer program is given a score based on how well it functions, and its children carry on that score and build onto it. Eventually, after many generations, it gets smarter and smarter. However, this system forces the entire species to learn as a whole, rather than individually. That works perfectly fine if the algorithm is designed to master only a single task, even a complex one. But what about more complex situations, with more than one task?

How could an artificial intelligence agent be designed so that it can make its own choices, and score its own actions and abilities? This is an important trait for agents that want any hope of thriving in a complex environment, where more than a single action must be scored. It is a critical base for higher-level thinking machines: machines that learn strategy and can plan out actions ahead of time, taking into consideration events which only they specifically have encountered, not their previous generations.

Here is my proposal. For this system, the original generation-based learning algorithm is still in place. It judges actions solely on things such as survival of the individual and growth of the population. But there is also a second learning system in play. This one runs per agent, and affects only that agent's brain. It is constantly updating, as well.

In this system, imagine we have an entire AI brain network, neurons and all. Now, take a chunk of it, say 25% or so, and mark it as the subconscious. This part cannot be edited by the agent. In addition, this chunk sits close to the end of the network, all linked together closely. There are many connections going in, and also several going out. In the middle of these, though, are "emotion" neurons. These release emotional responses based on their input. Positive input results in a positive output. Negative input results in a negative output. Highly positive input results in a highly positive output. And so on. These emotional responses are the "score" values for the agent. It wants them to be as high as possible, so it is constantly re-weighing neuron values in the rest of the brain to try to influence the emotions. Meanwhile, the main evolutionary algorithm still wants the brain to function a certain way, to accomplish its overall task: survive and thrive.
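
To make that concrete, here's a hypothetical sketch of the self-scoring part (all names are made up; the emotion neurons are just designated indices inside the protected subconscious chunk):

    // After each network step, read off the designated "emotion" neurons and
    // sum their activations into the agent's internal reward signal. The
    // per-agent learner then nudges the editable weights to raise this score,
    // while the subconscious chunk that produces it stays frozen.
    static double selfScore(double[] values, int[] emotionIndices) {
        double score = 0;
        for (int idx : emotionIndices) {
            score += values[idx]; // positive activation -> positive emotion, etc.
        }
        return score;
    }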

When a new agent is born, its default brain state is given to it by the evolutionary algorithm, which also decides which parts of the brain are editable and which are not. (This network also assumes the same setup as the one I described in my article "Smoother Node Map Learning".)

By doing it this way, the agent is allowed to learn and adapt to its environment at its own pace, while the main algorithm gives a general path for everyone to follow. The two algorithms fight each other, and also work together, to achieve a goal.

As each agent has its own brain and learning path, each one will have its own personality, goals, and so on. Each will behave in its own way, but still follow the same overall instincts given to it at birth.

I am quite excited to see how this plays out, so I will be playing with the code for it. (Though most likely I'll just get lazy and never finish it, as is the case for most of these ideas.)

November 21, 2015

Can't Seem to Finish a Game?

If you're an indie game developer, especially a new one, chances are you're having trouble finishing a game. You have lots of ideas and plenty of content, yet you just can't seem to stay focused long enough to finish. Or the game grows too big, too quickly, and becomes overwhelming. Well, if this is the case, here's a neat little trick that can help.

All you have to do is "charge" yourself for work. Force yourself to give up a dollar or so for each feature you add to the game. Put it all into a jar which you cannot collect from until the game is finished. No matter how big or small a feature is, you must pay for it. Features that take longer to implement cost more, while quick changes cost less. If you have a friend, have them hold onto the jar to stop you from losing willpower and giving up too soon.

How does that help? Like this:


  1. It slows down the rate of small, useless features, and forces you to focus more on the important ones.
  2. Having invested real money, especially in larger games, you'll be more inclined to put it to good use, and you'll feel more rewarded when a feature is finished and works well.
  3. When the game is done, you get everything in the pot, which can be quite a bit if the game took a long time to finish. A nice little reward to look forward to when you call the game done.
  4. When you get into making larger and larger games, paying others for features will become a common thing. This gets you used to it early on.
  5. You get a better feel for how important time is, and how wasting it on useless features can waste a lot of money.
  6. You'll start to plan out features ahead of time, to judge their importance and how difficult they are to implement. This is critical for most games to be successful.
So, all in all, this may or may not help you. But it's worth a shot.

Also, don't cheat. Charge features fairly, and don't sneak money from your jar. You'll only be cheating yourself, and not helping anything.

If you can get a friend to help, do so. Have them hold onto the cash and, if possible, even set the prices. This will help keep you in check.

November 16, 2015

Smoother Node Map Learning

As I've discussed in several of my previous posts, node map learning is a highly flexible approach to machine learning. Because of the way it works, it acts more as a manager overseeing the whole learning operation, rather than specializing in a specific task. This makes it better suited to performing well in more complex environments. However, the main problem with node map learning was how rough the learning process was.

After some careful thinking over the past few days, I stumbled upon an idea for smoothing out the learning process. Consider a neural network: it is essentially a function that can be learned smoothly and gradually, which is what makes it ideal for machine learning, since the error surface is nicely rounded. By borrowing the mechanics of a neural network, a node map can easily be adapted to become learnable.

So let's dive into this. This concept is largely built on top of a neural network, so for now, just imagine a basic one.
(I have one pictured here.) As you can see, it's a basic network: inputs, outputs, multiple layers of hidden nodes, and bias nodes for each layer. I've color coded these for easier viewing. I will now build onto this one piece at a time. First off, the functions. In a normal neural network, each node has an activation function (usually something like sigmoid). A slight change I'm making is using a basic line function instead (y = mx + b). The values for the slope and y-intercept are determined by the genetic algorithm, much like the weights are, and are different for each node. Make sense so far? You can also easily use a polynomial to create this function (which may offer better results, but that's up to you). An example of that would be ax^0 + bx^1 + cx^2 + dx^3... and so on for however many terms you want.
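
For however many terms you want, a general version of that polynomial activation might look like this (illustrative only; the coefficients would be evolved per node, just like the weights):

    // a0 + a1*x + a2*x^2 + ..., evaluated with Horner's rule.
    static double polyActivation(double x, double[] coeffs) {
        double y = 0;
        for (int i = coeffs.length - 1; i >= 0; i--) {
            y = y * x + coeffs[i];
        }
        return y;
    }

The line function is just the two-coefficient case: polyActivation(x, new double[]{b, m}).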

Now, for the network itself: this is set up as a fully self-generating design, so all of the hidden nodes are placed by the genetic algorithm itself. This preserves the manager-style role we had in the original node map learning algorithm. Connections are still fully connected from layer to layer, so that part can be taken out of the hands of the network without removing any control. This gives a slightly smoother effect for the learning, as well.

Next, there are "function" nodes, a new concept for a node. They act similarly to bias nodes, but reversed: full input, no outputs. These can be placed on any row except the input layer, and they all have an input function. What makes these nodes special is that they perform an action when the input function gives a result greater than 0. This makes things very interesting, as the network can now not only give results, but also trigger functions on a higher level. Things like jump, or speak. The functions are performed after the neural network has completed its step, in the order that they are called in (i.e. higher rows are called first). This is an optional node type, though. These sorts of actions can simply be placed as output nodes instead, giving the developer more control over which functions are available, the order of operations, etc.
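
A sketch of how function nodes might be wired up (hypothetical names; the action list is assumed to be pre-sorted by row, so higher rows fire first):

    // A function node consumes inputs like any other node but has no outgoing
    // connections; instead, it fires its action whenever its activation ends
    // up greater than 0. Actions run after the network's step completes.
    interface Action {
        void run(); // e.g. jump, speak
    }

    static void fireFunctionNodes(double[] values, int[] functionIndices, Action[] actions) {
        for (int i = 0; i < functionIndices.length; i++) {
            if (values[functionIndices[i]] > 0) {
                actions[i].run();
            }
        }
    }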

Finally, the last node type is the network node. (Name subject to change.) These nodes act like mini neural networks existing inside of the bigger one, with the inputs to the node being the inputs to the inner network, and the output of the inner network being the output of the node. This acts simply as a way to condense and hone specific parts of the network towards specific tasks. It's up to you whether to handle these smaller networks as their own brains, or as part of the larger one.
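
A possible shape for that, reusing the hypothetical ArrayNetwork sketched in the November 25 entry above (single-output case kept simple on purpose):

    // A network node wraps an inner sub-network: the node's inputs feed the
    // sub-network's input layer, and the sub-network's output becomes the
    // node's activation.
    class NetworkNode {
        final ArrayNetwork inner;

        NetworkNode(ArrayNetwork inner) {
            this.inner = inner;
        }

        double activate(double[] inputs) {
            // Copy this node's inputs into the inner network's input layer.
            System.arraycopy(inputs, 0, inner.values, 0, inputs.length);
            inner.step();
            int outStart = inner.layerStart[inner.layerStart.length - 1];
            return inner.values[outStart];
        }
    }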


More information on this topic is coming soon, so keep updated!