December 24, 2015

Wraithaven Books

Maxus, Backstory. (Name subject to change.):

This book is about the fall and return of the once-great king, Maxus. A half-elf who lost everything with one mistake struggles to undo the damage he caused and start a new life.

December 22, 2015

Maxus, Backstory

Chapter 1:

He shivered, subtly. Sweat dripped from his brow as he removed the crown.

He was filled with too much shame to even open his eyes and face his audience. A thousand bloodied faces, a million glaring eyes, pierced into his very being. He struggled to keep from tearing up.

Maxus hesitantly placed his crown on the seat of his throne, staring blankly at it for a moment. He breathed deeply. It was over now. All of it. Everything he had ever known was gone. He thought back to when he first donned the crown, all those years ago. He was younger then. So arrogant, so naive, so ambitious. He believed nothing could get in his way. He had been training for the role since he could walk. Maxus had vowed to himself to be the greatest king Encarta had ever known. No, the greatest king the world had ever known. Under his rule, the kingdom had prospered more than anyone had ever thought possible. But look at him now. A failure. A disgrace. A monster. One mistake that no number of good deeds could ever make up for.

Maxus slowly turned, facing the crowd. He glanced toward the castle windows, still unable to look them in the eye. It was so nice out. The sun shone so brightly, a calm and reassuring promise. A light breeze pattered leaves softly against the window. The sky glowed a brilliant shade of eye-watering blue. Maxus smiled a little at the view. At least there was some light in this darkness.

The moment faded with the sounds of booing. Cries of sadness and rage filled his ears. The aching in their hearts could be heard vividly. Maxus could hold out no longer. A tear rolled down his cheek. His chest pounded. He felt his throat tighten in sorrow. It hurt, so much. He wished from every part of his soul to be anywhere else.

Maxus took a slow step forward. He wiped his eyes quickly, and attempted to regain his composure. The half-elf cleared his throat and began to speak. "These past few days shall forever be known as the darkest days in Encarta's long history. We wished this day would never come... But here I stand before you, afraid. No eye in the kingdom remains dry. No heart is left unbroken."

Maxus stood a little straighter. "Today," he continued, "marks a new day. As another takes my place-" many people cheered at this comment, "a new hope shall rise, and shine before you. My only wish-" An object flew by Maxus's face, barely missing him. He cut off, looking back at the object. A smashed tomato slid down the wall behind him, only a few feet away from the throne. Maxus looked back at the crowd, preparing himself.

There were several bursts of shouting, intensifying more and more every second. A man stood up, crying out "Why would we care about your wishes?" Another man hurled an old boot at the previous ruler. It hit Maxus's right shoulder, hard.

Maxus grabbed his shoulder and cried out in pain. More shouts of anger rose from the crowd. They echoed loudly over the stone walls. He had to escape. Maxus started to panic, eyeing the escape passage behind the throne.

Only royalty knew of the passage. It was made when the castle was built, centuries ago. It was designed so the royal family could escape if the necessity arose. It only opened from the outside. Once closed, it could be sealed permanently so no one else could enter, to keep enemies from following them. The door was hidden directly behind the throne, so it could not be spotted by wandering eyes.

Smash! A boot hit the side of Maxus's head. His ear rang loudly. He suddenly became dizzy and struggled to keep his balance, but failed. He fell to one knee. Still clutching his shoulder, he peered at the crowd. His vision was doubled. Several men in the crowd began pushing through the thick gathering, charging for the platform that Maxus was kneeling on.

Maxus shook his head violently, trying to clear his distorted vision. He had to get out of there before they killed him. The guards that would once have given their lives for him without a second thought were now among the faces demanding his head. He lunged for the throne, grasping at the brass arms for balance.

More people from the crowd began moving towards Maxus. It was apparent that he would be dead soon if he didn't get away. Using all the force he could muster, he pushed the throne to the side. Several people began to understand what he was doing. The people in the front started towards the stairs, quickly gaining on him.

Maxus whipped open the small door and crawled into the narrow passage. A man leaped for the doorway to stop it from closing. His fingers closed around the side before Maxus could shut it fully. He kicked at the man's hand, fiercely. Maxus heard a cry of pain, and the man's grip loosened. He slammed the door closed and flipped the latch. There was loud banging on the door from the outside. The door was thick, and should hold easily. But he decided it was best not to trust three-hundred-year-old locksmithing.

Maxus crawled forward through the narrow pitch-black passage.

It seemed like ages. Inch after inch. Meter after meter. How much time had passed? How much distance lay ahead? Maxus's muscles ached everywhere. It felt like hours had passed. The passageway, though slightly larger than at the entrance, was quite small. Maxus was forced to crouch the entire way. It felt like his spine could split at any moment now.

The passage was designed to be navigated with a torch. Though in the rush of the crowd, he hadn't exactly had time to prepare. Maxus collapsed with his back against the wall. His hands were cut raw from feeling his way along the rough walls. He held them on his lap, palms up.

Maxus closed his eyes, and breathed deeply. It was so quiet here. Nothing seemed to exist anymore, other than him. Him and this tunnel. He needed to get some rest. It would likely be a while before he reached the end of the passage. The tunnel led deep underground, and went on for miles, toward the outskirts of the city. Maxus couldn't remember where it ended. He didn't even know if it had ever been used before. He breathed deeply again. The sound echoed slightly down the tunnel.

Maxus gave himself a few minutes. He took off his robe, and started to fold it into a makeshift pillow. Was it always this brisk in here? Maxus shivered. He folded his arms carefully, avoiding his raw palms. As he started to lay his head onto the soft fabric, something tickled his nose. He brushed it off, thinking nothing of it.

Maxus's eyes drifted closed, slowly. So... tired... Something tickled his arm, and raced up to his shoulder. His eyes burst open as he leaped up, frantically brushing at his sleeve. He screamed, loudly.

The sound traveled far in the quiet passageway. Maxus was frightened and enraged. After he was sure the insect was gone, he fell to his knees and covered his face with his hands. Tears flooded down his cheeks. He couldn't hold it in anymore. He wept, loudly.

He hurt, deeply. His muscles felt like mush. His bones felt like twigs, ready to snap at any time. And his heart, oh, his heart held the greatest weight. It felt like it had shattered, puncturing all of his other organs. His ribs throbbed from taking the abuse of his pounding heart.

He cried harshly until his eyes dried out and could produce no more tears. He felt numb. His arms fell to his sides, lifelessly.

Maxus stared into the darkness, unable to even tell if his eyes were open or closed. He stared for a long time, the wires in his mind flickering in every direction possible. His face, emotionless.

Maxus let himself fall backwards, his head hitting the pillow. He watched the ceiling for hours, until time blurred, and reality faded away. Maxus drifted into a rough, painful, anxiety-filled sleep. Caring not if he even awoke the next morning.

November 25, 2015

Project Sarica

So I've finally gotten around to making the artificial intelligence programs I've been talking about for however long it's been. I spent a good two hours last night working on a simple neural network setup, plus an evolution algorithm to allow it to learn. I was able to finish them both in only two classes, each with fewer than 100 lines. Knowing that a lot of the processes here are going to be quite taxing, I challenged myself to make them as memory-friendly and CPU-friendly as possible. To achieve this, I made the entire network run solely in arrays. Using only four (really long) arrays, more specifically.

Wanting to stress test it, I made a 100 x 100 neuron brain (plus 99 bias neurons) and ran it 1000 times to find the average time for a single step. I was quite surprised that the entire network, consisting of 10,099 neurons and 999,900 connections, completed a single step in a mere 2.7 ms. That's insane. I'm quite happy.

In addition to only using arrays to store all of the neural network data, I didn't use any taxing functions such as sigmoid. I used a degree-4 polynomial, whose coefficients are set by the evolutionary algorithm to whatever function is wanted. This means that all of the math used in the network is simply addition and multiplication. Because of the multiply-add (MAD) operation in computing (multiplying then adding can be performed in a single CPU cycle), this was almost no problem at all for the computer. This is well more than enough neurons to complete most tasks that a neural network would need.
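To make the idea concrete, here's a minimal sketch of what an array-only step like this might look like. This is not the actual project code; the class name, the flat array layout, and the five-coefficients-per-neuron convention are all just illustrative assumptions:

```java
// Sketch of an array-only feed-forward step with per-neuron degree-4
// polynomial activations. Layout: one activation slot per neuron, one flat
// weight array per layer pair, and 5 coefficients per neuron for
// a + b*x + c*x^2 + d*x^3 + e*x^4. Names and layout are illustrative.
class ArrayNet {
    final int layers, size;   // layer count and neurons per layer
    final double[] values;    // one activation slot per neuron
    final double[] weights;   // flattened layer-to-layer weight matrices
    final double[] poly;      // 5 polynomial coefficients per neuron

    ArrayNet(int layers, int size) {
        this.layers = layers;
        this.size = size;
        this.values = new double[layers * size];
        this.weights = new double[(layers - 1) * size * size];
        this.poly = new double[layers * size * 5];
    }

    // One forward pass: nothing but multiplies and adds, no transcendentals.
    void step(double[] input) {
        System.arraycopy(input, 0, values, 0, size);
        for (int l = 1; l < layers; l++) {
            for (int j = 0; j < size; j++) {
                double sum = 0;
                int wBase = (l - 1) * size * size + j * size;
                for (int i = 0; i < size; i++)
                    sum += values[(l - 1) * size + i] * weights[wBase + i];
                // Horner evaluation of the evolved polynomial activation
                int p = (l * size + j) * 5;
                double y = poly[p + 4];
                for (int k = 3; k >= 0; k--) y = y * sum + poly[p + k];
                values[l * size + j] = y;
            }
        }
    }
}
```

The point of the flat arrays is that the inner loop touches memory sequentially, which keeps the cache happy and lets the JIT turn the body into tight multiply-add sequences.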

In fact, the network could be expanded even more, if needed. The largest I was able to run before hitting memory issues was 300 x 300. At that point, I was already using all of the heap memory Java had given me by default. I could've easily allocated more, but it was unnecessary. At 300 x 300 neurons, the network completed a single step in about 195 ms.

If you want to see the code for this class, see this link. (Note: By the time you read this, the code may have been expanded on, so it may have a lot more features, be faster, or whatever else I do to it.)



Now, to start out with this project, I need to think of an environment. I was originally going to make a hunt-and-gather type world, but that wouldn't really put much focus on the types of learning I've really been wanting to dive into. As of now, I'm stumped as to which type of world to make for it.

November 23, 2015

Sub Conscious Learning System

I had a thought today. It was for a type of learning which seems quite natural, but which I had never imagined being able to implement before: a system for self-scoring. As you might know, in general evolutionary algorithms, a computer program is given a score based on how well it functions, and its children carry on that score and build onto it. Eventually, after many generations, it gets smarter and smarter. However, this system forces the entire species to learn as a whole, rather than individually. That works perfectly fine if the algorithm is designed to master only a single task, even a complex one. But what about more complex sets of tasks?

How could an artificial intelligence agent be designed so that it can make its own choices, and score its own actions and abilities? This is an important trait for agents that want any hope of thriving in a complex environment, where more than a single action must be scored. This is a critical base for higher-level thinking machines: machines that learn strategy and can plan out actions ahead of time, taking into consideration events which only they specifically have encountered, not their previous generations.

Here is my proposal. For this system, the original generation-based learning algorithm is still in place. It judges actions solely around things such as survival of the individual and expanding the population. But there is also another learning system in play here. The second learning system is per agent, and affects only that agent's brain. It is constantly updating, as well.

In this system, imagine we have an entire AI brain network, neurons and all. Now, take a chunk of it, say 25% or so, and mark it as the subconscious. This part cannot be edited by the agent. In addition, this chunk sits close to the end of the network, and is all linked together closely. Yes, there are many connections going in, and also several going out. In the middle of these, though, there are "emotion" neurons. These release emotional responses based on their input. Positive input results in a positive output. Negative input results in a negative output. Highly positive input results in a highly positive output. And so on. These emotional responses are the "score" values for the agent. It wants them to be as high as possible, so it is constantly weighing different neuron values in the rest of the brain to try to influence the emotions. Meanwhile, the main evolutionary algorithm still wants the brain to function a certain way, to accomplish its overall tasks: survive, and thrive.
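A rough sketch of the two key mechanics, the frozen subconscious region and the emotion-based self-score. Again, none of this is real project code; the editable mask, the method names, and treating the self-score as a plain sum of emotion outputs are all illustrative assumptions:

```java
import java.util.Random;

// Hypothetical sketch: per-agent learning may only nudge weights outside the
// frozen "subconscious" region. The editable mask would come from the
// evolutionary algorithm at birth. All names here are illustrative.
class AgentLearning {
    // Mutate only editable weights; the subconscious chunk stays fixed.
    static void mutate(double[] weights, boolean[] editable, double rate, Random rng) {
        for (int i = 0; i < weights.length; i++)
            if (editable[i]) weights[i] += rng.nextGaussian() * rate;
    }

    // The agent's self-score: the combined output of its emotion neurons,
    // which the agent constantly tries to drive as high as possible.
    static double selfScore(double[] emotionOutputs) {
        double total = 0;
        for (double e : emotionOutputs) total += e;
        return total;
    }
}
```

The per-agent loop would then be: run the brain, read `selfScore`, and keep mutations that raise it, while the outer generational algorithm judges whole lifetimes.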

When a new agent is born, its default brain state is given to it by the evolutionary algorithm, which also decides which parts of the brain are editable. (This network also assumes the same setup as the one I described in my article "Smoother Node Map Learning".)

By doing it this way, the agent is allowed to learn and adapt to its environment at its own pace, while the main algorithm gives a general path for everyone to follow. The two algorithms fight each other, and also work together, to achieve a goal.

As each agent has its own brain and learning path, each one will have its own personality, goals, and so on. Each will behave in its own way, but still follow the same overall instincts given to it at birth.

I am quite excited to see how this plays out, so I will be playing with the code for it. (Though most likely I'll just get lazy and never finish it, as is the case for most of these ideas.)

November 21, 2015

Can't Seem to Finish a Game?

If you're an indie game developer, especially a new one, chances are you're having trouble finishing a game. You have lots of ideas and plenty of content, yet you just can't seem to stay focused long enough to finish. Or the game grows too big, too quickly, and becomes overwhelming. Well, if this is the case, then here's a neat little trick that can help, a little.

All you have to do is "charge" yourself for work. Force yourself to give up a dollar or so for each feature you add to the game. Put it all into a jar, which you cannot collect from until the game is finished. No matter how big or small a feature is, you must pay for it. Some features that take longer to implement, cost more, while quick changes cost less. If you have a friend, have them hold onto it to stop you from losing will power and giving up, too soon.

How does that help? Like this.


  1. This helps slow down the rate of small useless features, and forces you to focus more on the important ones.
  2. By investing so much money, especially in larger games, you cause yourself to be more inclined to put your money to use, and feel more rewarded when a feature is finished, and works well.
  3. When the game is done, you get everything in the pot. Which can be quite a bit if the game took a long time to finish. A nice little reward to look forward to when you call the game done.
  4. When you get into making larger and larger games, paying others for features will start to become a common thing. This gets you used to it, early on.
  5. You get a better feel for how time is important, and how wasting time on useless features can waste a lot of money.
  6. You'll start to plan out features ahead of time, to judge their importance and difficulty to implement. This is critical for most games to be successful.
So, all in all, this may or may not help you. But it's worth a shot.

Also, don't cheat. Charge for features fairly, and don't sneak money from your jar. You'll only be cheating yourself, and not helping anything.

If you can get a friend to help, do so. Make them hold onto the cash, and if possible, even set the prices. This will help keep you in check.

November 16, 2015

Smoother Node Map Learning

As I've discussed in several of my previous posts, node map learning is a highly flexible way to learn (in machine learning). Because of the way it works, it acts more as a manager overseeing the whole learning operation, rather than specializing in a specific task. This makes it more effective in more complex environments. However, the main problem with node map learning has been how rough the learning rate was.

After some careful thinking over the past few days, I stumbled upon an idea for smoothing out the learning operation. Imagine a neural network. These are basically programmable functions which can be learned smoothly and focused on. This makes them ideal for machine learning, as the error slope is very well rounded. By borrowing the mechanics of a neural network, a node map can easily be adapted to make it learnable.

So let's dive into this. Alright, this concept is largely built onto a neural network. So let's, just for now, imagine a basic neural network.
(I have one pictured here.) As you can see, it's a basic network: inputs, outputs, multiple layers of hidden nodes, and bias nodes for each layer. I've color-coded these for easier viewing. I will now begin to build onto this one piece at a time. First off, the functions. In a normal neural network, each node has a function (usually something like sigmoid). A slight change I'm making is turning the function into a basic line function (y = mx + b). The values for the slope and y-intercept are determined by the genetic algorithm, much like the weights are, and are different for each node. Make sense so far? You can also easily use a polynomial in place of this line function (which may offer better results, but that's up to you). An example of that would be a*x^0 + b*x^1 + c*x^2 + d*x^3... and so on for however many terms you want.
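The evolved line/polynomial activation is tiny in code. A sketch (the class and coefficient ordering are my own illustration, not part of any existing library): coefficients are stored lowest-degree first, so a plain line y = m*x + b is just the two-element array {b, m}:

```java
// Sketch of the evolved activation: evaluate a + b*x + c*x^2 + ... via
// Horner's rule, so the whole thing is only multiplies and adds.
class PolyActivation {
    static double activate(double x, double[] coeffs) {
        double y = 0;
        for (int i = coeffs.length - 1; i >= 0; i--) y = y * x + coeffs[i];
        return y;
    }
}
```

The genetic algorithm would simply treat each node's coefficient array as more genes to mutate, alongside the weights.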

Now, for the network itself: this will be set up as a fully self-generating design, so all of the hidden nodes are placed by the genetic algorithm itself. This is a better fit for the whole manager-type role we had in the original node map learning algorithm. Layers are still fully connected from one to the next, so that part can be taken out of the hands of the network without removing any control. This gives a slightly smoother effect for the learning, as well.

Next, there are "function" nodes, a new concept for a node. They act similarly to bias nodes, but reversed: full input, no outputs. These can be placed on any row except the input layer. They each have an input function, though. What makes these nodes special is that they perform an action when the input function gives a result greater than 0. This makes things very interesting, as the network can now not only give results, but can also trigger higher-level functions. Stuff like jump, or speak. The functions are performed after the neural network has completed its step, in the order that they are called (i.e., higher rows are called first). This is an optional node type, though. These sorts of actions can simply be exposed as output nodes instead, giving the developer more control over which functions are available, the order of operations, etc.
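A function node might look something like this. The `Runnable`-based API and the line-function threshold are assumptions of mine, just to show the fire-when-positive behavior:

```java
// Sketch of a "function" node: full input, no output, firing its action when
// its evolved input function (y = m*x + b) produces a result greater than 0.
class FunctionNode {
    final double slope, intercept; // evolved input function parameters
    final Runnable action;         // e.g. jump, speak

    FunctionNode(double m, double b, Runnable action) {
        this.slope = m; this.intercept = b; this.action = action;
    }

    // Called after the network step completes, in row order.
    boolean update(double input) {
        if (slope * input + intercept > 0) { action.run(); return true; }
        return false;
    }
}
```

After each network step, you'd walk the function nodes in row order and call `update` with each node's summed input.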

Finally, the last node type is network nodes. (Name subject to change.) These nodes act like mini neural networks existing inside of the bigger neural network, with the inputs to the node being the inputs of the inner network, and the outputs of the inner network being the outputs of the node. This acts simply as a way to condense and hone specific parts of the network towards specific tasks. It's up to you whether you want to treat these smaller networks as their own brains, or as part of the larger one.


More information on this topic, coming soon. So keep updated!

October 14, 2015

Why can't machines feel emotion?

This is a heavy topic, in my opinion. It's something that has been thrown around as an artificial intelligence stereotype ever since the idea of artificial intelligence via computer was first thought up.

Maybe I should start with why this bias against machines first appeared. In the early days of computers, it was assumed that no matter how much math a computer could do, a human would always be more intelligent. Eventually, a computer was designed specifically to play chess. After it beat a professional chess player with little effort, computer scientists began to see that there really was potential for computer programs to surpass human-level intelligence. The idea of AI has been thrown around for centuries. Even Leonardo da Vinci once built a robot (a suit of armor controlled by a series of ropes and pulleys) to simulate AI. But now, this seemed like it was actually within reach.

Years later, the first "chat bot" was created: a program designed to respond to human sentences in a way that was comprehensible. Many people even thought that they were just talking to a regular person over a computer. However, though the program formed correct speech and was able, to an extent, to respond correctly to human input, the responses all seemed emotionless and heartless. This is where the famous stereotype that all machines are "forever emotionless" comes from.


I disagree with this completely. Though, to explain my logic, I should explain what "emotion" is, or at least what it is in my opinion. I believe that what we think of as emotion is nothing more than a complex scoring system. Whether you believe in evolution or not is irrelevant for this point. Simple adaptation, a quick and observable event, is all the proof that is needed to convey this reasoning. People who are loving are more likely to have families, kids, relationships, etc. Thus, they can pass down this mental trait to their children, grandchildren, and further down the line. So imagine if a person was unable to feel. They would have nothing to motivate them to live, to encourage them to have a family, or to take care of that family. Their chances of having children are significantly lower. Even if they did, their children would share this trait, and the pattern would continue until the trait was finally woven out of the family tree by the mental traits of the other parents.

So emotion is definitely important for keeping a family tree going. It can also be observed that more emotional people strengthen this point, as they are even more encouraged to have families, and so on. So emotion is a key factor in creating families. Okay, simple enough.

Let's move on to another point: emotions such as fear, happiness, and anger. Fear keeps people alive. People with little fear are more likely to do stupid things and get themselves killed. Thus, fear keeps you alive longer, and gives you more time to start a family and carry on that trait of fear to your children. Anger gives you a boost of adrenaline to fight off threats and protect yourself, or loved ones, from what your brain sees as danger (mental or physical). Happiness, pleasure, and the like give you an objective to work towards; a purpose in life. A feeling of pride when you hold your child, or a gleam across your face when you see someone you care about smile. That motivation also keeps us alive as a society: the drive to build better tools, stronger shelter, faster means of travel, each of these helping the survival of humans immensely. Sadness tells us which things to avoid. Pain causes sadness, and pain hurts the body. So stay away from what causes you pain; a better, and longer, life. I could keep going for hours, but I feel I've made my point.

Emotion has given us the ability to learn as well. Imagine if a baby was born without the ability to feel emotion in any way. It wouldn't be able to learn even simple tasks, as it would be unable to answer the simple question, "why?"

"Why should I walk? Why should I read and write? Why should I do this instead of that?"

Without being able to understand why it needs to do something, it has no reason to do it. No push. No encouragement. No remorse, no regrets, nothing to aim for or avoid. Pain would mean nothing. Pleasure would mean nothing. It would uncaringly do dangerous things without caring if it was hurt in the process, simply because it could. There's also a good chance it wouldn't even move. Why bother wasting the energy?

Emotion is critical in learning. And in social situations, more emotion is much stronger for learning, and for survival as a whole. So is it not sort of like a goal? People always seek to please their emotions: avoid pain, seek pleasure, try to be happy, protect yourself with anger or fear as necessary, etc. In simple adaptation, traits that are better for a being's survival are more likely to be passed to their kids, grandkids, and inevitably everyone generations down the road. Almost like a score on how helpful that trait was.


AI is no different. It has a goal, and gets a score based on that goal. If it gets a low score, it's sad. A higher score makes it happy. It is constantly striving towards whatever the program says makes it happy (i.e., gives it a higher score). Using multiple scoring situations, the AI can effectively feel multiple emotions, the current scores being its current emotions. Tracking scores over the long term could even translate into long-term goals, hopes, or dreams. As each AI learns in its own way, each has its own personality and its own ideas of what makes it happy and what doesn't. Depending on how the scores are set up, an AI can have exactly the same emotions as a human, or more, or fewer, or a set that's completely different from ours in every way.
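The multiple-score idea above can be sketched in a few lines. This is purely illustrative (the channel names and the "dominant emotion" rule are my own assumptions), but it shows how named score channels behave like an emotional state:

```java
import java.util.*;

// Sketch of emotions as named score channels: the current scores are the
// agent's current emotional state, and the highest channel is its dominant
// emotion. Channel names are placeholders.
class EmotionState {
    final Map<String, Double> scores = new HashMap<>();

    // Reward (or punish, with a negative delta) one emotion channel.
    void reward(String channel, double delta) {
        scores.merge(channel, delta, Double::sum);
    }

    // The emotion the agent is "feeling" most strongly right now.
    String dominant() {
        return Collections.max(scores.entrySet(), Map.Entry.comparingByValue()).getKey();
    }
}
```

Tracking these channels over time, rather than just reading the instantaneous values, is what would correspond to long-term goals, hopes, and dreams.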

So why can't an AI feel emotion? It very well, can.



So, did I screw up somewhere? Disagree with something, or simply want to add your own input? Go ahead and leave a comment. :)

October 7, 2015

Increasing Performance in Path Finding AI

As I stated in my previous post, performance is a huge issue when it comes to simulating AI. Well, here's a swing at a way to reduce the amount of time it takes to come up with a solution. Instead of releasing all of the actions at once, release them in chunks, with similar actions combined. Then, after coming up with a possible path, test each chunk to see if it can be performed. If it can't, disable that node chunk, and check again.

For example: Imagine that you had a group of actions:

  • Move Forward
  • Turn Left
  • Turn Right
  • Sprint Forward
  • Heal
  • Fire Arrow
  • Swing Sword
Now, all of the move commands are basically under the same category. So you can simplify the whole process by changing the nodes to:
  • Move to Player
  • Move to Nearest Hiding Location
  • Run away
  • Heal
  • Fire Arrow
  • Swing Sword
Doing it this way significantly lowers the number of nodes that must be traveled in the first attempt. Though the overall number of possible nodes is increased, the depth is much, much lower by comparison. As always, there are pros and cons to this.
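The disable-and-replan loop described above might be sketched like this. The planner itself and the feasibility test are placeholders here (in practice they'd be your pathfinder and a per-chunk simulation), so treat this as a shape, not an implementation:

```java
import java.util.*;
import java.util.function.*;

// Sketch: ask the planner for a path over the currently enabled chunks,
// test each chunk, disable the first unperformable one, and re-plan.
class ChunkPlanner {
    static List<String> plan(Function<Set<String>, List<String>> planner,
                             Set<String> chunks, Predicate<String> canPerform) {
        Set<String> enabled = new HashSet<>(chunks);
        while (!enabled.isEmpty()) {
            List<String> path = planner.apply(enabled);
            String bad = null;
            for (String c : path) if (!canPerform.test(c)) { bad = c; break; }
            if (bad == null) return path; // every chunk checks out
            enabled.remove(bad);          // disable and try again
        }
        return null;                      // no feasible plan at all
    }
}
```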

The pros are obviously much higher performance and faster calculations. The cons: well, there is not as much flexibility as there would be otherwise, as the AI might have found a potentially better solution to the problem that was not listed as one of your pre-made nodes. In the example above, the pathfinding might have found a spot on the bridge where it can shoot the player, dodge behind a pillar, and heal in less than a second. That solution would not have been available in the chunk-based system.

All in all, this solution is only reasonable if you are willing to give up some accuracy in exchange for performance. It may or may not be useful in some situations. Either way, hope this helps. xD (Sorry for the short, quick post. Just a simple idea I thought up earlier today.)

October 5, 2015

Pathfinding as Better AI

When most people think of pathfinding algorithms, such as A*, they think of simply finding the shortest distance between two locations on a map. Although this was its original purpose, it can be so much more than that. A lot of implementations of pathfinding take advantage of the movement cost to make the paths more realistic, such as avoiding wall-crawling, or preferring to walk on the sidewalk instead of the grass unless necessary. These are very simple additions, but they have already turned simple find-the-shortest-path into a more natural, actual walking pattern.

But why do we have to stop there?
Let's build onto this a little. Instead of adjusting solely the cost to move to a specific node, let's add more nodes. In addition to nodes like "move north," "move south," etc., let's add "sprint forward" and "jump forward." Then adjust the cost to include time taken and energy consumed, and demand to get to the end as quickly as possible. By doing this, more complex paths become available. Enemies will walk around normally, but when a player starts to walk by, they will rush towards you, possibly breaking into a full sprint to reach the player if they need to. Adding a maximum energy could even persuade the enemy to give up if it becomes too difficult to reach the player. Imagine how realistic monster movement would be now!
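A tiny sketch of what an action-as-edge cost might look like. The actions, numbers, and weights below are all illustrative assumptions, just to show how the same search flips between walking and sprinting depending on what you penalize:

```java
// Sketch of an action edge for an A*-style search where actions, not just
// directions, are the edges. Cost blends time taken and energy consumed.
class ActionEdge {
    final String name;
    final double time, energy;

    ActionEdge(String name, double time, double energy) {
        this.name = name; this.time = time; this.energy = energy;
    }

    // Weighted cost: the search minimizes the sum of these along a path.
    double cost(double timeWeight, double energyWeight) {
        return timeWeight * time + energyWeight * energy;
    }
}
```

With a high time weight the search prefers sprinting; raise the energy weight and the same enemy conserves itself and walks, or gives up entirely once a maximum energy budget is added.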

We can go even deeper.
What if we add even more complex functions, such as "wait," "swing sword," "shoot bow," and "drink healing potion"? At this point, imagine how much more complex the creature's intelligence becomes. It could actually hide behind a pillar out of the player's sight and shoot a bow when they're not looking. Or charge up, hit you with a sword, then run. The possibilities quickly become immense. Simple enemies become something to fear, and computer-controlled pets become the most important companions.


Pros and Cons?
Well, there are a few of each. For pros, it gives very highly intelligent simulations, and it's highly flexible.
For cons, well, balancing the point values can be a little tricky. I'll explain how to work with scoring below. But the biggest issues are calculating heuristics, if applicable, and performance.

Performance is something that becomes a huge resource sink here, as so many more dimensions are being explored. Working even on a simple grid instantly becomes a massive chore for the CPU. In addition, if the heuristics, which are already very difficult to calculate, are done incorrectly, the computer could be wandering around for quite a while before finding an acceptable path. The heuristic is much harder to calculate, too, as multiple options are now being explored at each step, such as "should I sprint or not?" This can badly break the outcome for algorithms that depend on it. A* can get away with it by always setting the heuristic to 0, though it will take longer to find a path. (It will, however, always be the most accurate path to meet your needs.)

Scoring is a little weird, but easy if you use a square mean. Basically, make sure all of the values you're scoring rise in about the same way (such that they all have roughly the same starting and ending values, and none grows faster than the others). Now, for each node, gather all of its scores, raise each of them to the power of 2, and take the average. That is the cost for this cell. :)
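In code, the square-mean scoring described above is just a few lines (the class name here is my own):

```java
// Square-mean cost: square each sub-score, then average. Squaring punishes
// any one score being extreme more than the plain mean would.
class Scoring {
    static double squareMean(double[] scores) {
        double sum = 0;
        for (double s : scores) sum += s * s;
        return sum / scores.length;
    }
}
```

So a node scored {3, 4} on time and energy costs (9 + 16) / 2 = 12.5.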

Now, make sure each cell stores as little memory as possible. In really difficult situations, it may take many, many nodes to find a result.



In summary, this is my proposal. It sounds great in theory, but I'll have to test it some to see how practical it would actually be in real AI. Thanks for reading, and I hope this gives you some ideas for your game/program/whatever. xD

Hello, everyone. Sorry I've been gone for so long. But I'm back now. :)
I should be able to post again, over-killing simple concepts and ideas with unnecessary, irrelevant, over-explained information. I hope you enjoy reading my mind again. xD

February 11, 2015

Learning to Learn

One really interesting thing about node map learning is its flexibility. Given the right functions and enough time, the node map is theoretically capable of learning to perform any task. Though this can be a very slow and very daunting process. So why not have it teach itself?

A quick Google search will show how powerful learning algorithms are. Amazing tasks can be taught to an AI, some performing calculations far more powerful than previously thought possible. Other things can be taught too, such as creating the fastest car, or the best fighting stick figure. Well, that's awesome. But I'm sure it took a while to get to that state using only self-learning methods. And it does. And when thrown into a new environment, it can take a very, very long time to make the correct adaptations. Our algorithms are good, but not that good.

So let's let the AI figure it out. AI algorithms are already able to define near-perfect algorithms for specific tasks, to a point that far surpasses that of any human. Why not let the AI find a perfect learning algorithm?

The concept behind this is to have a "master" node map which is constantly learning and expanding its knowledge of the functions it has and how to use them, and using that knowledge to generate possible learning-algorithm maps. Then send these maps out into a large series of varying tests. (There must be a large number of them, and they must be very diverse, in order to minimize pattern finding and exploiting.) Each map wouldn't run every test, but a random selection of them. This helps prevent a node map from excelling at one specific test just to boost its score. Then return the results over time for each test, take all of the results from every test, and find the average learning curve. The goal for the master node map is to find and create an algorithm that generates the highest learning curve. This process is very time consuming, and takes many, many, many more tests and passes than usual, because all of the learning is done ahead of time. More tests will have to be added constantly, and due to randomness and test weaknesses, learning should never cease. This new learning algorithm would be constantly getting better over time, until it can finally surpass all of our current learning algorithms by far.
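The evaluation step above could be sketched roughly like this. Note that `candidate`, `tests`, and `run_test` are all hypothetical stand-ins for the real objects, and the "score" here is just the average slope of the learning curve:

```python
import random

def evaluate_candidate(candidate, tests, sample_size, run_test):
    """Score one generated learning algorithm by running it on a random
    subset of the test suite and averaging its learning curves.
    run_test(candidate, test) returns a learning curve: a list of
    scores recorded over time for that one test."""
    chosen = random.sample(tests, sample_size)
    curves = [run_test(candidate, t) for t in chosen]
    # Average the curves point-by-point across tests...
    avg = [sum(points) / len(points) for points in zip(*curves)]
    # ...then report the mean slope: how fast the candidate improves.
    return (avg[-1] - avg[0]) / (len(avg) - 1)
```

Sampling a different random subset of tests for each candidate is what keeps a map from specializing on one test to inflate its score.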

February 9, 2015

The Opposite of Zero

This is a theory I have, and is not factual. But I have not yet found anything that disproves it, which is why I'm posting it here. My theory is on the opposite of zero. It's a number in the same sense that infinity is a number: not something you can count to, but you know it's there.

Currently, when you think of numbers, you see them as lying on a line. This way of thinking implies that infinity and negative infinity are infinitely separated from each other. The number I am proposing (Not Zero?) is between these, at the point where infinity and negative infinity "touch."


Now, they don't actually touch, but they touch in the same way you can say -10^(-infinity) and 10^(-infinity) touch. They are infinitely close to each other, each racing closer and closer to the other, meeting 0 in the middle. This number (I'll be referring to it from now on as Not Zero, as I don't have an official name) is where infinity and negative infinity "meet," or rather, the number between them just before they meet. The best way to think of it is to add another dimension to the number line. (1D becomes 2D, 2D becomes 3D, etc.) Now, imagine the number line as a circle instead of a line. This circle contains every non-imaginary number, and thus has an infinite circumference. Pick a point on the edge of this circle and mark it as zero. Now, moving clockwise around the circle, the numbers move towards negative infinity. Moving counterclockwise, the numbers move towards positive infinity. They both start at the same position and move the same distance (infinity), but in opposite directions. So when they do meet, it is on the exact opposite side of the circle: the farthest you could possibly get from zero. Not greater than infinity, and not less than negative infinity, but the number farther from zero than either negative infinity or positive infinity. Some infinities are bigger than others.

Looking at this in 2D may help explain it a little better. Let's look at a basic y=1/x equation. As the two sides approach 0, they race upwards and downwards towards infinity, but they never actually touch it. In fact, at the point x=0, there is no data. A glitch. A divide by zero, resulting in a hole. Though, what if we apply our circle logic from above? Well, let's start at the left side and move right. As we approach 0, we start moving downwards, into negative infinity, faster and faster as we get closer to 0. For a single instant there is no data, then we are suddenly at positive infinity. Still moving downwards, but slower, slower, slower, until we are back down below 5, and keep moving to the right, away from zero. That instant where we have no data: could that have been Not Zero? It's like we took a single trip all the way around the number line. Wait, we never reached zero, though! Ah, but you're forgetting the horizontal line. As I said above, add another dimension, so a 2D graph turns into 3D. What's the 3D version of a circle? A sphere. The horizontal line makes the exact same trip around the sphere, starting from the top and moving to the bottom. It races right, passes through Not Zero, then comes back from the left side and keeps moving downward.
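The behavior of y=1/x around zero that's described here can be written out as the two one-sided limits:

```latex
\lim_{x \to 0^-} \frac{1}{x} = -\infty,
\qquad
\lim_{x \to 0^+} \frac{1}{x} = +\infty
```

The single undefined point at x=0, where the graph jumps from one infinity to the other, is exactly the spot this theory labels Not Zero.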

You can even do this with non-dividing equations, such as y=x. It's a simple equation that draws a diagonal line. But looking at it on a sphere, the line travels diagonally from zero, passes through Not Zero, and comes back around, making a perfect loop around the sphere.

This would also explain the famous divide-by-zero error. Anything divided by zero is Not Zero: the farthest point from zero you can possibly get.

February 8, 2015

Cleaning Up The Map

With the node map, running a function may or may not be very taxing. But even if it's not, excessive use of these small functions does build up. The node map has no actual idea how the functions operate. It only knows that the functions do operate, somehow, and that some return data while others don't. Certain ones will actually help the node map achieve its goal, while others will just process data. It is important to try to reduce the size of the map by removing unnecessary uses of functions. (For example, having 13 functions processing a line of data that goes nowhere is completely useless.) To remedy this, the node map is designed with a "cost". The idea is to find the shortest path, so to speak. When giving a function to the node map, a cost would also be supplied. This cost would tell the node map how expensive the function is on the CPU. A function which just adds two numbers is very simple, and can be called many more times than, say, a distance formula. So the add function would have a much lower cost than the distance formula. Here's how these costs are counted:

The sum of the costs of all functions on the map is collected. Next, the "fitness" of each trial is collected. The sum is subtracted from this number, and the fitness continues down its path. By doing it this way, the node map will take function cost into account, and only use functions which help it reach its goal by more than the amount of power they require at its current state. If a function is just sitting there taking up space, it is just draining the fitness. The node map will catch on to this and remove the useless function instance.
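The cost-adjusted fitness above is just a subtraction. A tiny sketch, with hypothetical function names and cost values:

```python
def adjusted_fitness(raw_fitness, functions, costs):
    """Subtract the total CPU cost of every function instance on the
    map from the trial's raw fitness, as described above. `costs`
    maps a function name to the cost supplied alongside it."""
    total_cost = sum(costs[f] for f in functions)
    return raw_fitness - total_cost

# A cheap add costs little; an expensive distance formula costs more.
costs = {"add": 1, "distance": 10}
print(adjusted_fitness(100, ["add", "add", "distance"], costs))  # 88
```

A function that contributes nothing to the raw fitness still drags the adjusted score down by its cost, which is what pressures the map into pruning it.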

February 7, 2015

Node Map Learning, and Other Learning Functions

Now, node map learning is functional, and is capable of learning on its own. Though, with the trial-and-error-like method of learning, any function that has non-static data could screw up the whole learning process, right? Surely something as constantly changing as another learning function would slow all learning to a crawl, forcing the node map to take many more passes to travel the same distance, right? Well, yes and no. Though this logic is correct to an extent, it actually would really help the AI.

The best way to think of this is like a business. The AI represents the entire company. The node map is the CEO of that company. Each function you hand the node map is another employee type working under the CEO. The types in this case are not actual employees, but "job titles" which can be filled by multiple people, or simply not used at all. The CEO's job is to get the business operating as well as possible with only these job titles and no more. The only thing the node map (CEO) is able to do is choose where in the company to put each job title. So when the process first starts, he'll look at things and make logical guesses, passing data from person to person. The thing about learning algorithms in this analogy: they are adaptable. Employees like this have no special talent in a single field, which makes it difficult for the company to find them a position, but they can adapt to any position, given enough time. And just like a business has managers, and people working below them, and other managers, and even more people working below them, node maps can be nested. All forms of learning algorithms can be placed inside of a node map and nested, though how they are nested is up to the node map.

As stated above, functions that operate in a non-static manner are likely to throw off the node map's calculations. This is true, and will always slow down the node map's learning curve. In the example of a business, the CEO is putting different people in positions everywhere, where they function poorly. So it moves them. This repeats, a lot. And because a learning algorithm can only learn if it is not moved from its current position, the curve only has a chance to grow when the node map is focusing on another area instead of the learning algorithm. When the node map places a learning function higher up in the chain, it has less chance of being changed, and is thus given a much larger chance to learn and grow.

At this point, all of the functions and learning algorithms, including the main node map, are attempting to work together in order to master their environment. They will often trip over each other, or misread data from each other. But they are helping each other. A single algorithm's success significantly improves the success of the others. A single algorithm's failures can be patched by the others. This is why it is important to use learning algorithms inside a node map. It may slow things down at first (by a lot), but it really speeds things up in the long run, and gives the node map much more flexibility.

February 4, 2015

Node Map Reading

The first step to creating this Node Map Learning algorithm is to design a node map object which can be read and executed correctly. This doesn't include any learning functions yet, or even any of the functions which would be placed inside of it. Creating this type of node map has its difficulties, because it does not flow completely straightforwardly. It branches inward, outward, and skips layers. There can even be multiple inputs, if desired. (Using many sources of input can be good, because the AI can choose which ones it wants to use, and which ones are useless and just take up space. Some input functions may be things like random number generators, which also help the AI in some way or another.) Though I use the term "input" a lot, there are technically no inputs, just parent-less functions.

To make this node map function correctly, all that's really needed is to store a list of function instances. Then, each step, run through each function on the list. If that function has already been run, skip it and move on to the next one. If it hasn't, check to see if all of its parents have been run. If they have, then run the function with the data that was returned by its parent functions. (An "input" function has no parents, so this check always passes.) If only a single function step is wanted, then return after running a single function. Otherwise, simply continue down the list, calling functions when they can be called. When the end of the list is reached, jump back to the beginning of the list and start over. Do this until a full pass has been made over the list, beginning to end, without calling a single function. When that is the case, all functions have been run, thus ending the node map step.
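The execution loop described above could be sketched like this. The `Node` class and its attribute names are my own invention for the sake of the sketch, not a finished design:

```python
class Node:
    """One function instance on the map. `fn` takes a list of parent
    outputs; a parent-less node acts as an "input" as described above."""
    def __init__(self, fn, parents=()):
        self.fn = fn
        self.parents = list(parents)
        self.output = None

def run_step(nodes):
    """Run every node exactly once, in dependency order, by making
    repeated passes over the list until a full pass calls nothing."""
    done = set()
    while True:
        called = False
        for node in nodes:
            if node in done:
                continue  # already ran during this step; skip it
            # A node is ready once every parent has produced output.
            if all(p in done for p in node.parents):
                node.output = node.fn([p.output for p in node.parents])
                done.add(node)
                called = True
        if not called:
            return  # a full pass called nothing: the step is complete

# Two parent-less "input" nodes feeding a sum node. The order of the
# list doesn't matter; the loop sorts out the dependencies itself.
a = Node(lambda _: 3)
b = Node(lambda _: 4)
total = Node(lambda xs: sum(xs), parents=[a, b])
run_step([total, a, b])
print(total.output)  # 7
```

Note that if the map ever contains a cycle, the loop simply exits with those nodes unrun, which matches the "full pass with no calls" stopping rule in the text.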

After this algorithm is complete, we can move on to the next step: node map generation. I'll explain this step in more detail as I complete more and more sections of working code.

February 2, 2015

Node Map Learning


Artificial Intelligence is an amazing topic. The whole concept of a computer program learning about its environment and adapting to it is amazing. I've always loved AI, and have made many attempts to reproduce it, though I have run into numerous problems along the way, mostly due to my lack of knowledge on the subject, and because of the ridiculous amount of work involved in making such a program. Though this is to be expected. AI is not a simple subject. There's so much involved in it that it's almost necessary to use large teams to design anything of a decent scale. Though it can be done. So why don't we commonly have AI around us, if it's possible? Well, in a small sense we do. Neural networks, for example, are used at banks to help read handwriting into computer text, and can do so with over 99% accuracy. But this isn't the type of machine learning I am referring to. What about emotions? Self-set goals? These things are possible, and have been created in several instances. But the design behind them almost seems primitive compared to even small animals. This is because even though we have many different algorithms to aid in machine learning, they all have one thing in common: they are slow. It will often take many, many, many passes before the computer finally gets anywhere logical. Yes, in a sense the human mind is the same way, taking the first several years of our lives to learn how to interact with the world around us. But we can't wait that long. For a computer, an environment on a scale such as ours would take almost centuries to understand, plus lots of storage space, not to mention a ridiculous CPU to calculate decisions in real time. Resources we don't have to spare. That's why AI is such a largely studied field. It's constantly coming up with new ways to speed things up and make them more efficient. And that's why I'm here. I'm proposing a new type of learning method which I hope will make learning on a mass scale much easier.
Though I do not fully understand every part of the algorithm yet, I hope to keep fleshing it out more and more over time. This post is a basic summary of the algorithm, rather than any detailed math behind it.


The Node Map Learning method is a machine learning concept which is hopefully better at understanding much larger and more complex environments than regular algorithms such as Neural Networks and Genetic Algorithms. It works by basically creating its very own node map (which functions similarly to a program's code), then filling this node map with many, many functions. Anything that can be given to it. This includes things like the distance between two points, sorting a list, adding numbers, and of course, actual functions for interacting with the environment. The AI takes all this data and tries to organize it into a complex tree of data processing. The tree is constantly changing and evolving. Some of the functions given to the AI might be things such as Neural Networks or Genetic Algorithms, in which case it would also use these functions, and as the AI operates, it would learn to make better use of them. One last thing to note is that each limb of the tree (the data that comes out of functions, and gets thrown into other ones) must have a "type" assigned to it, such as "number". This means that a function which accepts one input in the form of a number can only accept data that is assigned the type of number. Data types can extend other data types; for example, "integer" would extend "number". Thus, all integers can be used as numbers, but not all numbers can be used as integers. The tree grows constantly as well, becoming more and more complex as needed. This process occurs in a method similar to genetic algorithms. The AI will semi-randomly (see the paragraph below) start testing out different tree designs by adding, removing, or replacing branches on the tree. If the new design functions better, it uses that tree instead. If not, it reverts to the previous tree. This process continues indefinitely, until there is absolutely no better way of designing the tree without completely remaking it.
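The mutate-keep-or-revert loop at the end of that paragraph is basically hill climbing. A sketch, where `fitness` and `mutate` are hypothetical stand-ins for the real tree-evaluation and branch-editing functions:

```python
import copy

def evolve(tree, fitness, mutate, passes=1000):
    """Hill-climb the tree design described above: mutate a copy of
    the tree (add, remove, or replace a branch), keep the new tree
    if it scores better, otherwise discard it (revert)."""
    best_score = fitness(tree)
    for _ in range(passes):
        candidate = mutate(copy.deepcopy(tree))
        score = fitness(candidate)
        if score > best_score:  # better design: adopt it
            tree, best_score = candidate, score
        # otherwise the candidate is thrown away and the old tree stays
    return tree
```

Mutating a deep copy is what makes the "revert" free: the original tree is never touched unless the candidate actually wins.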

It chooses a base design by initially creating X completely random tree designs. (X is determined by the complexity of the environment, though it's usually around 100 or so. It can be much higher for intricate environments.) Then it tests all X concepts, looks for patterns in each one, and compares those patterns to how well each design did. For example, the AI may test 250 designs and find that trees that used the pattern "Function 1 > Function 2 > Function 3 and 4" tended to have better results, while trees that used the pattern "Function 1 > Function 4" seemed to function very poorly. Taking all the patterns into account, it designs a new tree and sees how it functions. Constantly testing it, changing it, and making it better. Each and every new tree that is tested is broken down into a list of patterns and given a numerical value based on how well it functioned. This builds up after a while, and eventually gives the AI a very clear idea of what kinds of designs work better than others.
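The pattern bookkeeping above boils down to averaging the fitness of every design that contained each pattern. A sketch, assuming the pattern extraction itself happens elsewhere, and with made-up pattern labels:

```python
from collections import defaultdict

def pattern_scores(trials):
    """Average the fitness of every tree design that used each pattern.
    `trials` is a list of (patterns, fitness) pairs, where patterns is
    the set of patterns found in one tested design."""
    seen = defaultdict(list)
    for patterns, fitness in trials:
        for p in patterns:
            seen[p].append(fitness)
    return {p: sum(f) / len(f) for p, f in seen.items()}

# Three tested designs: patterns involving "F1>F2" did well here.
trials = [({"F1>F2", "F2>F3"}, 0.9),
          ({"F1>F4"}, 0.2),
          ({"F1>F2"}, 0.7)]
print(pattern_scores(trials)["F1>F2"])  # (0.9 + 0.7) / 2 = 0.8
```

As more trials accumulate, these averages become the "very clear idea" the text mentions of which design patterns tend to work.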

This is my concept. I hope to have a working proof of concept out later. Thanks for taking the time to read this, and follow to keep updated. I will be posting more on the subject soon.
Hello. This is my first time using this site, so I'm just testing out the controls. This post is mainly just a test to get a feel for everything. Have an awesome day. ^^