Chapter 7
Software programs inside a predicted model
The most difficult objects on Earth to predict are human beings. In fact, predicting the future actions of any living organism is very difficult. In order to predict the future events of a football game, all future actions of the players, coaches, referees, and fans in the stadium must be predicted accurately and consistently.
In this chapter, my initial goal was to provide proof that it is possible to predict the 5 senses, thoughts, and actions of a human being. A person's 5 senses, thoughts, and actions are hidden from an observer. For example, a camera can't capture the thoughts of a person simply by seeing them. In order to understand someone's 5 senses and thoughts, artificial intelligence is needed to logically assume what that person is sensing and thinking.
There are two methods to really understand how a person senses and thinks. These methods are: 1. building simulated software based on a person's past. 2. building simulated software based on a person's physical body (the brain is the most important body part). For the first method, virtual characters have to collect lots of electronic information about a person, such as email, web activities, chat conversations, surveillance cameras, buying behavior, decision-making behavior, desires and dislikes, and so forth, to create a composite of how that person senses and thinks. This method will not ultimately determine exactly how a person thinks. However, it can capture a person's behaviors and patterns so that the AI software can give a probability of what that person will do in the future.
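The first method can be sketched as a simple frequency model. This is only a minimal illustration of the idea of giving "a probability of what that person will do in the future" from observed behavior; the class name, contexts, and actions are all hypothetical, not part of the book's design.

```python
from collections import Counter

class BehaviorComposite:
    """Illustrative sketch: estimate the probability of a person's next
    action from a log of past observed actions in a given context."""

    def __init__(self):
        # context -> Counter of actions observed in that context
        self.history = {}

    def observe(self, context, action):
        self.history.setdefault(context, Counter())[action] += 1

    def probability(self, context, action):
        counts = self.history.get(context)
        if not counts:
            return 0.0  # never observed this context at all
        return counts[action] / sum(counts.values())

# Toy example: one person's buying behavior observed over time
model = BehaviorComposite()
model.observe("lunchtime", "buys coffee")
model.observe("lunchtime", "buys coffee")
model.observe("lunchtime", "buys sandwich")
print(model.probability("lunchtime", "buys coffee"))  # 2 of 3 observations
```

A real composite would draw on many data sources at once (email, web activity, purchases), but the principle is the same: repeated past behavior yields a probability, not a certainty.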
The second method is painstakingly difficult to accomplish because it requires mapping out every atom in a person's brain (and the rest of his body). The teams of virtual characters have to use the signalless technology to find all the atoms in the person's head, hierarchically. The identification of atoms in the person's brain should go from general to specific. For example, universal pathways are identified first. These universal pathways are very general and don't include any detailed instructions. Next, specific pathways in the person's brain are identified, whereby the location of every neuron and dendrite is mapped out perfectly.
If a perfect map of the brain is created and the organs of the brain are delineated, then the virtual characters can convert that information into simulated software. All knowledge of the person is contained in his brain. This means that, within his brain, only one of the pathways will be selected to take action.
If we analyze a football player, his brain contains only a very limited number of actions in terms of playing football. The rules of football limit the actions he can take. Also, the human brain contains about 50 billion neurons to store data. In a football player's brain, the knowledge of decision making for a football game is just a small fraction of those 50 billion neurons. In other words, the virtual characters need to predict the football player's brain hierarchically, by predicting the pathways that matter first. Knowledge about football will be predicted before knowledge of solving a math equation.
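The "pathways that matter first" idea amounts to ordering work by relevance. A minimal sketch, with made-up pathway groups and relevance scores chosen purely for illustration:

```python
# Hypothetical relevance scores for groups of brain pathways, given that
# the activity being predicted is a football game. Higher = predict first.
pathway_groups = {
    "football decision making": 0.90,
    "walking and running": 0.70,
    "solving math equations": 0.05,
}

# Order the prediction effort from most to least relevant.
prediction_order = sorted(pathway_groups, key=pathway_groups.get, reverse=True)
print(prediction_order)  # football knowledge comes before math knowledge
```

How the relevance scores themselves would be obtained (from past games, from the brain map) is exactly what the two methods above are for.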
The human brain doesn't just include pathways, but also organs that allow data to be extracted and processed. Certain organs create chemical electricity that travels through neurons. Other brain organs trap certain chemical electricity and send it to the body's nervous system to move certain body parts. The virtual characters must identify and predict brain organs as well as stored pathways.
The two methods above to predict how a person senses, thinks and acts must work together. The virtual characters must use both methods to predict the future actions of a human being. Information from the signalless technology will be sent to the prediction internet. The virtual characters will find specific data they are looking for and process them further.
How the signalless technology will map out every neuron and dendrite in a human's brain depends on sensed data from the human. For example, the brain is encased in a skull and skin, so the 5 sense camera system can't see the brain or its inner elements. However, when a person thinks, electrical charges are given off. The 5 sense camera can use these electrical charges to assume what caused them, and to use AI to map out the atom-by-atom structure of the brain. No X-rays or sonar devices are ever used to scan an object. Refer to my signalless technology book to find out how this is done.
The signalless technology will send information about an object (a human being) to the prediction internet hierarchically, from general to specific. While this is happening, the virtual characters have to build software programs that can represent the human brain in a manageable way. Sensed data from the human being are limited, and the brain pathways selected are limited. The virtual characters handcraft the most important elements of how the human brain works and convert that information into a software program. Through the software program, a user can see the most important aspects of the human brain in terms of sensed input, intelligence processing, and pathway selection.
Human beings are really stupid
Human beings are very stupid, and they can only focus on a limited amount of data at any given moment. We might look around and see a world that contains hundreds of objects per second. The reality is that a human being can only focus on 2-3 objects in the environment. If the human being is in a busy city during lunchtime, he is mainly focused on 2-3 objects at any given moment. The rest of the objects are fuzzy and ignored.
Even the thoughts of a human being are very limited. It takes about 1 second for 1 thought to activate in the brain. Sometimes, thoughts take 2-3 seconds to activate. Things like searching for answers to complex questions require the brain to search for that information and this process takes time.
Because human beings sense and act slowly, it is quite possible to predict their future actions, even if this information is hidden.
Another reason that it is possible to predict the future actions of a human being comes from a famous statement: "a person's goals become reality". That statement sums up why it is possible to predict the future. If the virtual characters predicted that the goal of a quarterback is to hand the ball to the runningback, then that is exactly what's going to happen in the future. The quarterback has to decide what he's going to do (logically or randomly) before every gameplay. If his goal before a gameplay is fixed, he will carry out that goal in the future. Also, when a person makes a decision, it is very unlikely that he will change his mind in the next few seconds, because human beings are rational and not fickle-minded.
In terms of a football game, events are happening so fast that it is very difficult to change your mind. In fact, quarterbacks that change their minds at the last second usually fail. Also, some team players coordinate their actions without any prior notice. They use common knowledge from practices to know what each other is thinking.
In other cases, the quarterback doesn't have a fixed goal. He will make a decision to throw the ball, and he will use intelligent pathways in memory to search for "open" players. His thinking might be to throw to a far receiver. If no receiver is open, his instruction is to throw the ball to any close player. This behavior comes from a universal pathway in the QB's brain to decide what he will do during runtime in the game.
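The universal pathway just described can be written down as a small decision function. This is a sketch under my own assumptions (the 20-yard threshold for "far", the tuple format, and the "scramble" fallback are all invented for illustration), not the book's actual representation of a pathway:

```python
def qb_decision(receivers):
    """Universal pathway sketch: prefer an open far receiver; otherwise
    throw to any open close player. `receivers` is a list of
    (name, distance_in_yards, is_open) tuples."""
    open_far = [r for r in receivers if r[2] and r[1] >= 20]
    if open_far:
        # throw to the farthest open receiver
        return max(open_far, key=lambda r: r[1])[0]
    open_close = [r for r in receivers if r[2]]
    if open_close:
        # no far man open: throw to the closest open player
        return min(open_close, key=lambda r: r[1])[0]
    return "scramble"  # nobody open; decided at runtime

plays = [("receiver A", 35, False), ("receiver B", 8, True)]
print(qb_decision(plays))  # far man covered, so throw close: receiver B
```

Note that the pathway is general: it contains the decision rule, while the specific player chosen is filled in at runtime from the QB's 5 senses, which matches the chapter's distinction between universal and detailed pathways.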
I think the most important aspects of a human being (a quarterback) are the human being's brain and his physical body. These two parts have to be predicted separately at first, and a team of virtual characters at the top level has to predict their interactions.
The virtual characters' job is to observe past behaviors of the quarterback and to devise a software program that will store possible decision-making pathways. Any linear methods that the quarterback uses should be stored as possible pathways.
In order to do this, the AI time machine (aka universal computer program) has to analyze past football games played by this quarterback. The AI time machine, in this case, is used to identify linear methods of play and thought by the quarterback. The virtual characters will compile this information about the quarterback and handcraft another software program that creates a simulated brain of the quarterback.
A very sophisticated type of brain simulation (a yet-to-be-discovered software program) is needed to really simulate the exact brain behavior of a given human being. My guess is that the AI time machine is used to encapsulate a very sophisticated type of simulation software that caters specifically to human brains and predicts what they will sense and think in the future.
Software programs inside a predicted model
Any given predicted model in the prediction tree has a software program that the teams of virtual characters are responsible for (FIG. 27). This software program is the interface that allows other users (parent nodes, child nodes, or interested nodes) to gain access to the limited information in this predicted model. Functions in the software program will help the user navigate so that information can be found quickly and accurately. This software program has to be interactive as well, so that the user can input variables and a desired output will be presented.
This software program will be based on the focused objects and the peripheral objects. Let's say the focused objects are: overall fans, QB, and receiver. The software program for that predicted model is only interested in presenting data on these three objects. All other objects are minor and will be ignored or mildly considered.
FIG. 28 is a diagram depicting two predicted models (M1 and M2). M1 is responsible for creating a software program that will take a pathway selected by the football player's brain and insert that input into the football player's physical body. The output is the interaction between the two parts. The software program will take a selected pathway from the brain, extract the instructions from the pathway, generate electrical signals to the physical body, and display a 360-degree animation of the football player.
The software program can take in any selected pathway from memory, and the physical body will behave according to the instructions written in the pathway. If pathwayB is selected, the football player will move in this manner. If pathwayT is selected, the football player will move in that manner. The software program for M1 should be interactive: the user can control which variables to input, and the desired output should be accurate.
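This pathway-to-body mapping can be sketched as a simple dispatch table. The pathway names pathwayB and pathwayT come from the text above; the instruction lists inside them are invented for illustration:

```python
# Hypothetical contents of two stored pathways. In the book's scheme each
# pathway carries the instructions that drive the physical body.
PATHWAYS = {
    "pathwayB": ["plant left foot", "pivot", "hand off ball"],
    "pathwayT": ["drop back", "scan field", "throw ball"],
}

def run_body(selected_pathway):
    """M1 sketch: extract the instructions from a selected pathway and
    'run' the physical body, one animation step per instruction."""
    instructions = PATHWAYS.get(selected_pathway)
    if instructions is None:
        raise ValueError("no such pathway stored in memory")
    return [f"animate: {step}" for step in instructions]

print(run_body("pathwayT"))  # the body behaves per pathwayT's instructions
```

Swapping the input pathway changes the body's behavior without changing the program, which is the interactivity the paragraph describes.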
If you look at the lower levels of M1, the brain predicted model is only interested in the brain. The software program in the brain predicted model caters only to the brain. The same can be said about the physical body predicted model. The virtual characters that are responsible for mapping the physical body have considered what would happen if a different brain signal were sent to the arm or the leg or the neck. Each body part is simulated in terms of how it works.
There is one software program for the brain predicted model and one software program for the physical body predicted model. The responsibility of M1 is to merge the two software programs together and to tweak their functions so that the user can access hybrid information from the two individual software programs.
M1 is also responsible for adding new functions and interfaces, and for merging the two software programs. Variables or functions that can be applied to the software program can be limited or simplified by M1. The user doesn't have to insert the intelligent pathway from the brain into the software program. M1 can provide a list of ranked possible pathways that could be selected by the brain. It's up to the user to use human intelligence to determine if these ranked possibilities are correct in terms of their predicted model.
M1 also has to output ranked future possibilities. These future possibilities can be in any media type M1 thinks is appropriate for its predicted model. The future possibilities can be a 3-d animation, a short document, a book, a comic book, a 2-d movie, a website, etc. The software program should give the user options to view possibilities, analyze possibilities, see properties of possibilities, manipulate possibilities, and so forth.
As stated before, the ranked future possibilities and the software program are based on the focused objects and the peripheral objects for that predicted model.
Automated function changes for a software program
All software programs from multiple neighbor predicted models can form unified functions that change variables. Referring to FIG. 28, the brain predicted model might output a new ranking of possible selected pathways. This new ranking should be automatically transmitted to the software program in M1. This in turn should result in M1 automatically (or manually) changing its future possibilities.
In another example, the physical body predicted model might change its software program. The modified program includes a more detailed depiction of the football player's body. This change should not affect M1 in a major way. The functions of the physical body software program are exactly the same. The input is still a selected pathway from the brain. The only difference is that in M1, when a selected pathway is inputted into the physical body, the 3-d animation of the football player will be more accurate and detailed.
Thus, dependable functions must be created so that lower and higher level predicted models can change variables and functions in their software programs without human intervention.
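In modern software terms, these "dependable functions" resemble a publish-subscribe (observer) arrangement: when one model's output changes, subscribed neighbor models recompute automatically. A minimal sketch, with class and pathway names invented for illustration:

```python
class PredictedModel:
    """Sketch of a predicted model whose output, when it changes,
    is pushed to subscribed neighbor models without human intervention."""

    def __init__(self, name, compute):
        self.name = name
        self.compute = compute      # function: inputs -> this model's output
        self.subscribers = []
        self.output = None

    def subscribe(self, other):
        self.subscribers.append(other)

    def update(self, data):
        self.output = self.compute(data)
        for model in self.subscribers:
            model.update(self.output)   # automatic propagation

# The brain model outputs a ranking of possible pathways; M1 turns the
# top-ranked pathway into a future possibility.
brain = PredictedModel("brain", lambda ranking: ranking)
m1 = PredictedModel("M1", lambda ranking: f"future: {ranking[0]}")
brain.subscribe(m1)

brain.update(["pass to runningback", "throw to receiver"])
print(m1.output)  # future: pass to runningback
```

If the brain model later emits a new ranking, M1's future possibilities change with no manual step, which is the behavior FIG. 28 describes; manual override would simply mean a virtual character setting `output` by hand.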
It's up to the virtual characters working for a predicted model to determine if they want dependable functions in their software program or they want to manually change data (or both).
Very complex software programs that cater to something like a human brain will have many hierarchically structured dependable software programs. Each software program in the hierarchy has to be handcrafted and tested for reliability. A human brain is very complex, but if we use this type of method to simulate it, it might be possible to know how it will work in the future.
Some linear thoughts of a human being don't depend on the 5 sense data from the current environment. Thoughts in the brain are based on a cascading effect, whereby chemical electricity propagates outward in certain areas of the brain. For example, a person might look at a bird, and a bird image pops up in his mind. Next, a memory of the person's pet bird pops up in his mind. Then, a memory of the birdcage the pet was living in pops up. These are linear thoughts of the person based on the sensed image of a bird.
Although thoughts don't activate in exactly the same linear order every time he sees a bird, the same encounter with a bird might activate similar linear thoughts. In another case, a person might be sad, and the sadness will activate the instruction: light up a cigarette. Next, the thought activates: "go outside and light up". So, the next time an event triggers a sad moment, the person will probably do the same linear things: light up a cigarette and go outside to light up. There is no guarantee that this linear behavior will happen every time the person gets sad, but it is one proven behavior because this person has done it repeatedly in the past.
The software program to simulate a brain has to consider these linear thoughts. Most of these linear thoughts are learned in school, and others are self-taught. The human brain has to send out a series of chemical electricity throughout the brain (based on the 5 senses) in order to produce linear thoughts. The virtual characters have to consider how the activity in the brain functions as a whole. Everything from the internal organs to the stored pathways to the inputted 5 senses has to be analyzed to determine the factors that make up linear thoughts.
The virtual characters might create a simulated brain that contains the general locations of pathways (universal as well as detailed pathways). They also have the current 5 sense data ready to be inserted into the simulated brain. Upon inserting the current 5 sense data, a function will generate chemical electricity to travel along the pathways. This simulation will reveal which areas in the brain's memory will be accessed and what information was extracted.
If the physical brain structure is mapped out correctly in terms of where pathways are stored and how the organs work, the linear thoughts of the person should be revealed.
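The cascade of linear thoughts described above can be sketched as activation following links in a stored pathway graph. The graph below encodes the bird example from earlier; the link structure and the "follow the strongest link" rule are simplifying assumptions, not the book's actual mechanism:

```python
# Hypothetical pathway graph: each entry links a thought to the thoughts
# it can trigger, ordered strongest link first.
pathways = {
    "see bird": ["bird image"],
    "bird image": ["memory: pet bird"],
    "memory: pet bird": ["memory: birdcage"],
}

def linear_thoughts(sense_input):
    """Insert one piece of 5 sense data and let activation cascade along
    the stored links, collecting the linear thoughts it produces."""
    thoughts, current = [], sense_input
    while current in pathways:
        current = pathways[current][0]   # follow the strongest link
        thoughts.append(current)
    return thoughts

print(linear_thoughts("see bird"))
# ['bird image', 'memory: pet bird', 'memory: birdcage']
```

A correct brain map would make such cascades predictable: given the sensed input and the stored links, the chain of thoughts, and therefore the person's next tasks, falls out of the simulation.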
The reason why this is important is that linear thoughts contain the future tasks a given human being will do. If a person is determined to do something, he will make it a reality. If the football player plans to pass the ball to the runningback, that is the direction of the future gameplay. The virtual characters are responsible for predicting what other players will do as a result of the QB passing the ball to the runningback. Thus, the main factor is actually predicting the decision-making process of the QB before a gameplay. Decision making can be a behavior. For example, in the past, the QB usually likes to give the ball to the runningback when he is near the end zone.
A person's behavior is based on universal pathways in memory to make decisions. Thus, the virtual characters can use observed behaviors as clues to determine the universal pathways. What about the detailed pathways? How will the virtual characters predict the detailed pathways stored in the QB's brain? The answer is the signalless technology. A 3-d map of the QB's brain (general or specific) must be given to the virtual characters, and they have to use logic to fill in all the missing data. For example, the signalless technology can only map out the QB's brain molecule by molecule. The virtual characters will use logic to map out the atom-by-atom structure of the QB's brain. Using this atom-by-atom structure, they can translate this data and determine which are universal pathways and which are detailed pathways, and what the instructions in each pathway are.
Example of an interconnected software program for tree branches
Objects in predicted models have to have some way of interfacing with other objects. For intelligent objects like animals and human beings there are two factors that determine object dependability: 1. 5 sense data. 2. thoughts.
The QB sees other players; therefore, the relationship is the visual image of the other players the QB is seeing. Also, the QB can think of other players that he can't sense. For example, in the next gameplay, the QB has coordinated with the runningback, with a nod, that he will pass the ball to him. The QB, during the gameplay, is thinking about the runningback even though he doesn't see him. This common knowledge of where the runningback should be, from the QB's perspective, is the relationship between the two players.
The software in neighbor predicted models has to have a means of establishing relationships among objects (most notably, human beings). FIG. 29 is a diagram depicting a prediction tree for one gameplay. J1 is the QB and a close player, J2 is the QB and the runningback, and J3 is the QB and the receiver. Each predicted model is only concerned with its focused objects. S2's focused objects are elements in J2 and J3. Predicted model D1's focused objects are all its lower levels (S2, J1, J2, and J3). Finally, J1-J3 all point to the QB predicted model.
Each predicted model in this tree branch has its own software program. These software programs have to be interconnected so that if one software program from a predicted model changes, the other software programs from neighbor predicted models will also change.
FIG. 30 is a diagram depicting relationship functions between different software programs. The output of J2's future possibilities is a 3-d animation of the quarterback and the runningback. The 3-d animation shows the possible physical interactions of the QB and the runningback. This 3-d animation is ranked in terms of what will probably happen in the future. In D1, the QB's 5 senses will have relational links with the 3-d animations. An image processor is needed to convert the 3-d animations into 2-d animations based on the QB's perspective. Let's say that the 3-d animations outputted by J2 are modified. This means that the data in D1 will automatically be modified as well. The modified 3-d animations will be converted to new 2-d animations. The old 2-d animations from D1 will be deleted and replaced with the new 2-d animations. This means the 2-d animation of the runningback will be changed in the QB's 5 senses in predicted model D1.
Predicted model D1 has automated software that basically takes in the modified 5 sense data of the QB, and functions in the software will output an accurate pathway selection from the QB's brain. These selected pathways are one output from D1.
If a lower level predicted model like J2 is changed, the QB's 5 senses in D1 also change. The software program in D1 will also automatically change its output.
D1 can have the QB's 5 sense data automatically changed based on all its lower levels. J1-J3 can be changed, and the QB's 5 senses in D1 will also be changed. For example, if J3 changes, the receiver animation in the QB's 5 senses from D1 will be changed. If J2 changes, the runningback animation in the QB's 5 senses from D1 will be changed. If J1 changes, the close player's animation in the QB's 5 senses from D1 will also be changed.
FIG. 31 is a diagram depicting the software program in D1 that will take in the QB's 5 senses, and the simulated brain (pointer 2) will output a selected pathway. As the QB's 5 senses change, the simulated brain (pointer 2) will output a different selected pathway. This selected pathway will be fed into a simulated body (pointer 4) of the QB, and the result is the 3-d animation of the QB.
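The FIG. 31 pipeline, 5 senses in, selected pathway through the simulated brain, 3-d animation out of the simulated body, can be sketched as three chained functions. Both stand-in functions below are hypothetical placeholders for the far more sophisticated simulations the chapter calls for:

```python
def simulated_brain(five_senses):
    """Hypothetical stand-in for pointer 2: select a pathway from what
    the QB currently senses."""
    if "runningback open" in five_senses:
        return "pathway: hand off to runningback"
    return "pathway: throw to receiver"

def simulated_body(pathway):
    """Hypothetical stand-in for pointer 4: turn the selected pathway
    into the QB's 3-d animation."""
    return f"3-d animation of QB [{pathway}]"

def d1_program(five_senses):
    # D1's software chains the two simulations automatically:
    # changed 5 senses -> different pathway -> different animation.
    return simulated_body(simulated_brain(five_senses))

print(d1_program(["runningback open", "crowd noise"]))
```

Changing the 5 sense input (say, the runningback animation from J2 changes) changes the selected pathway and thus the output animation, with no manual step in between.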
This just shows that there is an automated system, whereby neighbor predicted models can change their outputs or software program and other predicted models will adapt their outputs and software program. This automated system should be considered in conjunction with manual manipulation of outputs in software programs. For example, the automated system might produce wrong results. The virtual characters recognize this and manually change their outputs and modify their software so that it never happens in the future.
The reason that the lower level predicted models change their prediction is because each team did further investigation and found better predictions. For example, in J3, the virtual characters found out that the receiver will run to the right and not the left like they previously predicted.
Copyright 2007 (All rights reserved)