February 11, 2015
One really interesting thing about node map learning is its flexibility. Given the right functions and enough time, the node map is theoretically capable of learning to perform any task. That can be a very slow, very daunting process, though. So why not have it teach itself?
A quick Google search will show how powerful learning algorithms already are. Amazing tasks can be taught to an AI, some performing calculations far more powerful than previously thought possible. Other things can be taught too, such as designing the fastest car or the best fighting stick figure. That's awesome, but I'm sure it took a while to get to that state using only self-learning methods. And it does. When thrown into a new environment, it can take a very, very long time to make the correct adaptations. Our algorithms are good, but not that good.
So let's let the AI figure it out. AI algorithms are already able to produce algorithms for specific tasks that far surpass anything a human could design. Why not let the AI find a perfect learning algorithm?
The concept behind this is to have a "master" node map which is constantly learning and expanding its knowledge of the functions it has and how to use them, and to use that knowledge to generate candidate learning-algorithm maps. These maps are then sent out into a large series of varying tests. (There must be a large number of them, and they must be very diverse, in order to minimize pattern finding and exploiting.) A candidate wouldn't run every test, just a random selection of them; this helps prevent a node map from excelling at one specific test to boost its score. The results of each test are returned over time, and all of the results are combined to find the average learning curve. The master node map's goal is to find and create the algorithm that produces the highest learning curve. This process is very time consuming, and takes many, many, many more tests and passes than usual, because all of the learning is done ahead of time. New tests will have to be added constantly, and due to randomness and test weaknesses, learning should never cease. The resulting learning algorithm would keep getting better over time, until it finally surpasses all of our current learning algorithms by far.
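To make that loop concrete, here is a minimal sketch in Python. Every name in it (average_learning_curve, search_for_learner, the toy "tests" at the bottom) is hypothetical, not part of any existing library; it only illustrates scoring candidates on a random subset of tests and comparing their averaged learning curves.

```python
import random

def average_learning_curve(candidate, tests, sample_size, passes=20):
    """Score a candidate on a random subset of tests, averaging the per-pass scores."""
    chosen = random.sample(tests, sample_size)  # random subset discourages test-specific exploits
    curves = [[test(candidate, step) for step in range(passes)] for test in chosen]
    return [sum(col) / len(col) for col in zip(*curves)]  # element-wise average across tests

def search_for_learner(make_candidate, tests, rounds=100, sample_size=4):
    """Keep the candidate whose averaged learning curve covers the most ground."""
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        candidate = make_candidate()                      # the "master" map would propose this
        curve = average_learning_curve(candidate, tests, sample_size)
        score = sum(curve)                                # crude stand-in for "highest learning curve"
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Toy usage: a "candidate" is just a learning rate, and each "test" rewards
# approaching its own target value quickly over the passes.
if __name__ == "__main__":
    targets = [0.2, 0.3, 0.5, 0.7, 0.9, 1.0]
    tests = [lambda c, step, t=t: 1.0 - abs(t - min(1.0, c * step)) for t in targets]
    best, score = search_for_learner(lambda: random.uniform(0.01, 0.5), tests)
    print("best candidate:", round(best, 3), "score:", round(score, 2))
```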
February 9, 2015
The Opposite of Zero
This is a theory I have, and is not factual, but I have not yet found anything that disproves it. That is why I'm posting it here. My theory is about the opposite of zero. It's a number in the same sense that infinity is a number: not something you can count to, but you know it's there.
Currently, when you think of numbers, you see them arranged on a line. This way of thinking implies that infinity and negative infinity are infinitely separated from each other. The number I am proposing (Not Zero?) sits between these, at the point where infinity and negative infinity "touch."
Now, they don't actually touch, but they touch in the same way you can say -10^(-infinity) and 10^(-infinity) touch: they are infinitely close to each other, each racing closer and closer to the other, meeting 0 in the middle. This number (I'll be referring to it from now on as Not Zero, as I don't have an official name) is where infinity and negative infinity "meet," or rather, the number between them just before they meet. The best way to think of it is to add another dimension to the number line (1D becomes 2D, 2D becomes 3D, etc.). Now imagine the number line as a circle instead of a line. This circle contains every non-imaginary number, so it has an infinite circumference. Pick a point on the edge of this circle and mark it as zero. Moving clockwise around the circle, the numbers move toward negative infinity; moving counterclockwise, they move toward positive infinity. They both start at the same position and move the same distance (infinity), but in opposite directions. So when they do meet, it is on the exact opposite side of the circle: the farthest you could possibly get from zero. Not greater than infinity, and not less than negative infinity, but the number farther from zero than either negative infinity or positive infinity. Some infinities are bigger than others.
Looking at this in 2D may help explain it a little better. Let's look at a basic y=1/x equation. As the two sides approach 0, they race upward and downward toward infinity, but they never actually touch it. In fact, at the point x=0 there is no data. A glitch. A divide by zero, resulting in a hole. But what if we apply our circle logic from above? Let's start at the left side and move right. As we approach 0, we start moving downward, toward negative infinity, faster and faster the closer we get to 0. For a single instant there is no data, then we are suddenly at positive infinity. Still moving downward, but slower, slower, slower, until we are back down below 5 and keep moving to the right, away from zero. That instant where we have no data: could that have been Not Zero? It's like we took a single trip all the way around the number line. Wait, we never reached zero, though! Ah, but you're forgetting the horizontal line. As I said above, add another dimension, so 2D would turn into 3D. What's the 3D version of a circle? A sphere. The horizontal line makes the exact same trip around the sphere: starting from the top and moving to the bottom, it races right, passes through Not Zero, then comes back from the left side and keeps moving downward.
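For reference, the jump the paragraph walks through is just the pair of one-sided limits of 1/x at zero, written in standard notation:

```latex
% The two one-sided limits of y = 1/x at x = 0, and the hole between them:
\lim_{x \to 0^{-}} \frac{1}{x} = -\infty,
\qquad
\lim_{x \to 0^{+}} \frac{1}{x} = +\infty,
\qquad
\frac{1}{x}\ \text{is undefined at } x = 0.
```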
You can even do this with non-dividing equations, such as y=x. It's a simple equation that draws a diagonal line, but looking at it on a sphere, the line travels diagonally from zero, passes through Not Zero, and comes back around, making a perfect loop around the sphere.
This would also explain the famous divide-by-zero error. Anything divided by zero is Not Zero: the farthest point from zero you can possibly get.
February 8, 2015
Cleaning Up The Map
With the node map, running a function may or may not be very taxing, but even if it's not, excessive use of these small functions builds up. The node map has no actual idea how the functions operate; it only knows that the functions do operate, somehow, and that some return data while others don't. Certain ones will actually help the node map achieve its goal, while others will just process data. It is important to reduce the size of the map by removing unnecessary uses of functions. (For example, having 13 functions processing a line of data that goes nowhere is completely useless.) To remedy this, the node map is designed with a "cost"; the idea is to find the shortest path, so to speak. When giving a function to the node map, a cost is also supplied. This cost tells the node map how expensive the function is on the CPU. A function which just adds two numbers is very simple and can be called many more times than, say, a distance formula, so the add function would have a much lower cost than the distance formula. Here's how these costs are counted:
The costs of all function instances on the map are summed. Next, the "fitness" of each trial is collected, the sum is subtracted from it, and the fitness continues down its path. By doing it this way, the node map takes function cost into account, and only uses functions which help it reach its goal by more than the amount of power they require at its current state. If a function is just sitting there taking up space, it is just draining the fitness. The node map will catch on to this and remove the useless function instance.
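As a minimal sketch of that bookkeeping (the function name and dictionary shape here are mine; only the fitness-minus-total-cost arithmetic comes from the idea above):

```python
# Hypothetical sketch: the map's score is its raw fitness minus the summed cost
# of every function instance it currently carries.

def adjusted_fitness(raw_fitness, functions_on_map):
    """Subtract the summed cost of every function instance on the map,
    so functions that just take up space drag the score down."""
    total_cost = sum(f["cost"] for f in functions_on_map)
    return raw_fitness - total_cost

# Toy usage: adding numbers is cheap, a distance formula is expensive.
functions_on_map = [
    {"name": "add", "cost": 1},
    {"name": "add", "cost": 1},        # a second instance of the same cheap function
    {"name": "distance", "cost": 25},  # costly, so it has to earn its keep
]
print(adjusted_fitness(raw_fitness=100, functions_on_map=functions_on_map))  # prints 73
```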
February 7, 2015
Node Map Learning, and Other Learning Functions
Now, node map learning is functional and is capable of learning on its own. But with its trial-and-error-like method of learning, any function that has non-static data could screw up the whole learning process, right? Surely something as constantly changing as another learning function would slow all learning to a crawl, forcing the node map to take many more passes to travel the same distance, right? Well, yes and no. Though this logic is correct to an extent, it actually really helps the AI.
The best way to think of this is like a business. The AI represents the entire company. The node map is the CEO of that company. Each function you hand the node map is another employee type working under the CEO. The types in this case are not actual employees, but "job titles" which can be filled by multiple people, or simply not used at all. The CEO's job is to get the business operating as well as possible with only these job titles and no more. The only thing the node map (CEO) is able to do is choose where in the company to put each job title. So when the process first starts, he'll look at things and make logical guesses, passing data from person to person. The thing about learning algorithms in this analogy is that they are adaptable. Employees like this have no special talent in a single field, which makes it difficult for the company to find them a position, but they can adapt to any position, given enough time. And just as a business has managers, people working below them, other managers, and even more people working below those, node maps can be nested. All forms of learning algorithms can be placed inside of a node map and nested; how they are nested, though, is up to the node map.
As stated above, functions that operate in a non-static manner are likely to throw off the node map's calculations. This is true, and will always slow down the node map's learning curve. In the business example, the CEO is putting different people in positions everywhere, where they function poorly, so it moves them. This repeats, a lot. And because a learning algorithm can only learn if it is not moved from its current position, its curve only has a chance to grow when the node map is focusing on another area instead of on the learning algorithm. When the node map places a learning function higher up in the chain, it has less chance of being changed, and thus a much larger chance to learn and grow.
At this point, all of the functions and learning algorithms, including the main node map, are attempting to work together in order to master their environment. They will often trip over each other, or misread each other's data, but they also help each other. A single algorithm's success significantly improves the success of the others; a single algorithm's failures can be patched by the others. Because of this, it is important to use learning algorithms inside a node map. It may slow things down at first (by a lot), but it really speeds things up in the long run and gives the node map much more flexibility.
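Here is a tiny, hypothetical illustration of the nesting idea: a node map that is itself callable can be dropped into another node map as just one more function, and a learning function keeps its state between passes. None of these class names come from an existing library.

```python
# Hypothetical sketch: a learning function and a nested node map are both
# just callables, so the outer map can wire them in like any other function.

class RunningAverageLearner:
    """A trivially simple 'learning' function: its output shifts between passes
    because it keeps adapting to the values it has seen."""
    def __init__(self):
        self.count, self.total = 0, 0.0

    def __call__(self, value):
        self.count += 1
        self.total += value
        return self.total / self.count

class NodeMap:
    """A node map that is itself callable, so one map can sit inside another."""
    def __init__(self, steps):
        self.steps = steps              # ordered callables, enough for this toy

    def __call__(self, value):
        for step in self.steps:
            value = step(value)
        return value

inner = NodeMap([RunningAverageLearner()])   # a nested map holding a learner
outer = NodeMap([lambda x: x * 2, inner])    # the outer map treats it as one more function
print(outer(10), outer(4))                   # the learner's state carries across passes
```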
February 4, 2015
Node Map Reading
The first step to creating this Node Map Learning algorithm is to design a node map object which can be read and executed correctly. This doesn't include any learning functions yet, or even any of the functions which would be placed inside of it. Creating this type of node map has its difficulties because it does not flow completely straightforwardly: it branches inward, outward, and skips layers. There can even be multiple inputs, if desired. (Using many sources of input can be good because the AI can choose which ones it wants to use and which ones are useless and just take up space. Some input functions may be things like random number generators, which also help the AI in some way or another.) Though I use the term "input" a lot, there are technically no inputs, just parent-less functions.
To make this node map function correctly, all that's really needed is to store a list of function instances. Then, each step, run through each function on the list. If that function has already been run, skip it and move on to the next one. If it hasn't, check whether all of its parents have been run. If they have, run the function with the data that was returned by its parent functions. "Input" functions have no parents, so this check always passes for them. If only a single function step is wanted, return after running a single function. Otherwise, simply continue down the list, calling functions whenever they can be called. When the end of the list is reached, jump back to the beginning and start over. Do this until a full pass has been made over the list, beginning to end, without calling a single function. When that is the case, every function has been run, which ends the node map step.
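A rough sketch of that loop, under my own assumptions about what a function instance looks like (the FunctionInstance class is hypothetical; the run-until-a-clean-pass logic follows the description above):

```python
# Hypothetical sketch of the execution loop: keep sweeping the list, running any
# function whose parents have all run, until a full pass runs nothing new.

class FunctionInstance:
    def __init__(self, name, func, parents=()):
        self.name = name
        self.func = func
        self.parents = list(parents)   # empty for "input" (parent-less) functions
        self.has_run = False
        self.output = None

def run_node_map_step(instances):
    """Run every function exactly once, only after all of its parents have run."""
    for inst in instances:
        inst.has_run = False
    while True:
        ran_something = False
        for inst in instances:
            if inst.has_run:
                continue                                   # already run this step: skip it
            if all(p.has_run for p in inst.parents):       # parent-less "inputs" pass trivially
                inst.output = inst.func(*[p.output for p in inst.parents])
                inst.has_run = True
                ran_something = True
        if not ran_something:                              # a full pass with no calls: step is done
            return

# Toy usage: two inputs feed an adder, the adder feeds a doubler.
a = FunctionInstance("a", lambda: 3)
b = FunctionInstance("b", lambda: 4)
add = FunctionInstance("add", lambda x, y: x + y, parents=[a, b])
double = FunctionInstance("double", lambda x: x * 2, parents=[add])
run_node_map_step([double, add, a, b])   # order in the list doesn't matter
print(double.output)                     # 14
```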
After this algorithm is complete, we can move on to the next step: node map generation. I'll explain that step in more detail as I complete more working bits of code.
February 2, 2015
Node Map Learning
Artificial Intelligence is an amazing topic. The whole concept of a computer program learning about its environment and adapting to it is amazing. I've always loved AI and have made many attempts to reproduce it, though I have run into numerous problems along the way, mostly due to my lack of knowledge on the subject and the ridiculous amount of work involved in making such a program. This is to be expected; AI is not a simple subject. There's so much involved that it's almost necessary to use large teams to design anything of a decent scale. Still, it can be done. So if it is possible, why don't we see AI around us commonly? Well, in a small sense we do. Neural networks, for example, are used at banks to help read handwriting into computer text, and can do so with over 99% accuracy. But this isn't the type of machine learning I am referring to. What about emotions? Self-set goals? These things are possible, and have been created in several instances, but the designs behind them seem almost primitive compared to even small animals. This is because even though we have many different algorithms to aid in machine learning, they all have one thing in common: they are slow. It will often take many, many, many passes before the computer finally gets anywhere logical. Yes, in a sense the human mind is the same way, taking the first several years of our lives to learn how to interact with the world around us. But we can't wait that long. For a computer, an environment on a scale such as ours would take almost centuries to understand, along with enormous storage space, not to mention a ridiculous CPU to calculate decisions in real time. Those are resources we don't have to spare. That's why AI is such a largely studied field: it's constantly coming up with new ways to speed things up and make learning more efficient. And that's why I'm here. I'm proposing a new type of learning method which I hope will make learning on a mass scale much easier. Though I do not fully understand every part of the algorithm yet, I hope to keep fleshing it out more and more over time. What follows is a basic summary of the algorithm, rather than any detailed math behind it.

The Node Map Learning method is a machine learning concept which is hopefully better at understanding much larger and more complex environments than regular algorithms such as neural networks and genetic algorithms. It works by creating its very own node map (which functions similarly to a program's code), then filling this node map with many, many functions: anything that can be given to it. This includes things like the distance between two points, sorting a list, adding numbers, and of course, actual functions for interacting with the environment. The AI takes all this data and tries to organize it into a complex tree of data processing. The tree is constantly changing and evolving. Some of the functions given to the AI might themselves be things such as neural networks or genetic algorithms, in which case it would use these functions too, and as the AI operated it would learn to make better use of them. One last thing to note is that each limb of the tree (the data that comes out of functions and gets fed into other ones) must have a "type" assigned to it, such as "number". This means that a function which accepts one input in the form of a number can only accept data assigned the type number. Data types can extend other data types: "integer" would extend "number", so all integers can be used as numbers, but not all numbers can be used as integers. The tree grows constantly as well, becoming more and more complex as needed. This process occurs in a similar way to genetic algorithms. The AI will semi-randomly (see the paragraph below) start testing out different tree designs by adding, removing, or replacing branches on the tree. If the new design functions better, it uses that tree instead; if not, it reverts to the previous tree. This process continues indefinitely, until there is absolutely no better way of designing the tree without completely remaking it.
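As a small illustration of those typed connections, here is a sketch that uses Python's own class hierarchy as a stand-in type system. The class and function names are mine, not from the post.

```python
# Hypothetical sketch: a function node only accepts parents whose output type is
# the same as, or a subtype of, its declared input types.

class Number: pass
class Integer(Number): pass      # "integer" extends "number"

class TypedFunction:
    """A function node with declared input and output types."""
    def __init__(self, name, input_types, output_type, func):
        self.name = name
        self.input_types = input_types
        self.output_type = output_type
        self.func = func

    def can_accept(self, parent_output_types):
        """True if every parent output is compatible with the matching input slot."""
        return len(parent_output_types) == len(self.input_types) and all(
            issubclass(actual, expected)
            for actual, expected in zip(parent_output_types, self.input_types)
        )

add = TypedFunction("add", [Number, Number], Number, lambda a, b: a + b)
print(add.can_accept([Integer, Integer]))   # True: integers can be used as numbers
print(add.can_accept([Number, Integer]))    # True
floor = TypedFunction("floor", [Integer], Integer, int)
print(floor.can_accept([Number]))           # False: not all numbers are integers
```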
It chooses a base design by initially creating X completely random tree designs. (X is determined by the complexity of the environment, though it is usually around 100 or so; it can be much higher for intricate environments.) It then tests all X concepts, looks for patterns in each one, and compares these patterns to how well each design did. For example, the AI may test 250 designs and find that trees that used the pattern "Function 1 > Function 2 > Function 3 and 4" tended to have better results, while trees that used the pattern "Function 1 > Function 4" functioned very poorly. Taking all the patterns into account, it designs a new tree and sees how it performs, constantly testing it, changing it, and making it better. Each new tree that is tested is broken down into a list of patterns and given a numerical value based on how well it functioned. This builds up after a while, and eventually gives the AI a very clear idea of what kinds of designs work better than others.
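A toy sketch of that pattern bookkeeping might look like this. Here a "design" is just an ordered list of function names and a "pattern" is an adjacent pair; a real implementation would use richer tree structures, but the scoring idea is the same. All names are hypothetical.

```python
from collections import defaultdict

def patterns_in(design):
    """Break a design into adjacent-pair patterns, e.g. ("f1", "f2")."""
    return [(a, b) for a, b in zip(design, design[1:])]

def score_patterns(designs_with_fitness):
    """Average the fitness of every design each pattern appeared in."""
    totals, counts = defaultdict(float), defaultdict(int)
    for design, fitness in designs_with_fitness:
        for pattern in set(patterns_in(design)):
            totals[pattern] += fitness
            counts[pattern] += 1
    return {p: totals[p] / counts[p] for p in totals}

# Toy usage: pretend we've already tested a handful of random designs.
tested = [
    (["f1", "f2", "f3"], 0.9),
    (["f1", "f2", "f4"], 0.8),
    (["f1", "f4"],       0.2),
]
scores = score_patterns(tested)
print(scores[("f1", "f2")])   # high: designs using f1 -> f2 did well
print(scores[("f1", "f4")])   # low: f1 -> f4 did poorly
```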
This is my concept. I hope to have a working proof of concept out later. Thanks for taking the time to read this, and follow to keep updated. I will be posting more on the subject soon.
Labels:
advanced,
ai,
amazing topic,
artificial,
artificial intelligence,
branches,
code,
computer,
intelligence,
learning,
machine,
machine learning,
map,
node,
node map,
node map learning,
nodes,
patterns,
programming,
tree