Chapter 10

Other topics:

 

Past prediction

The method used to predict the future can also be used to predict the past.  FIG. 43 is a diagram depicting how the prediction tree for sequences can be used to predict the past.  The virtual characters start their predictions in 1937 and work backward, one year at a time, to 1930.  For each sequential year they want to add to their prediction, they generate branches of predicted models and add them to the prediction tree.  For example, if the virtual characters want to predict G1, branches of predicted models are added to the tree; the same happens when they predict G2, and again when they predict G3.  This continues until 1930 is reached.  Thus, sequential predictions require the merging of branches of predicted models.  The merging is done during runtime and under the supervision of virtual characters.
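
Below is a minimal sketch of how such a prediction tree might be represented in software, assuming each node is a predicted model and each backward step adds candidate branches that are merged when their labels agree.  The class and function names are hypothetical illustrations, not part of the original description.

    class PredictedModel:
        def __init__(self, label, year):
            self.label = label        # description of the predicted state
            self.year = year
            self.children = []        # branches for the next year back

    def add_branches(node, candidates):
        """Add one branch per candidate predicted model for the prior year."""
        for label in candidates:
            node.children.append(PredictedModel(label, node.year - 1))

    def merge_branches(node):
        """Merge sibling branches whose labels agree (done at runtime, under
        virtual-character supervision in the original description)."""
        merged = {}
        for child in node.children:
            if child.label in merged:
                merged[child.label].children.extend(child.children)
            else:
                merged[child.label] = child
        node.children = list(merged.values())

    root = PredictedModel("G1", 1937)
    add_branches(root, ["G2a", "G2b", "G2a"])   # candidate models for 1936
    merge_branches(root)                        # duplicate G2a branches merge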

 

What constitutes an object in a predicted model?

An object can be anything, and because some objects are too abstract to describe, it is important for me to address this issue.  The most obvious objects, with set and defined boundaries, are small physical objects.  A human being has set boundaries: all the body parts and their clothing form the boundary of a human being.  A pencil has set boundaries.  A chair has set boundaries.  Even a house has set boundaries.

Objects that describe a situation or an event are harder to represent.  Phrases like the car accident, the accident scene, the laboratory situation, the crime, and the concert event do not have fixed, defined boundaries.

Let's use football as an example.  An object can be "the fans are going wild".  This sentence encapsulates all the fans in the stadium and their collective activities (their cheers and motions).  This abstract object has no fixed boundaries or limits, and people can interpret it in different ways.

In order for the virtual characters to understand the description and boundaries of objects in predicted models, they use "common knowledge".  Everyone doing predictions on the prediction internet knows an approximate description and boundary of an object.  This way, when different virtual characters have to do predictions on an abstract object, they have a universal understanding of it.  Also, virtual characters can use different words to represent the same object; they can use deduction skills to conclude that one word refers to the same thing as another.

Language is a very powerful way to represent simple and abstract objects.  Language can represent places, things, events, objects, time and actions.  Let's say the virtual characters are trying to predict the 5 sense data for the quarterback.  One object in the quarterback's 5 senses is "the fans go wild".  This object encases any data sensed by the quarterback, such as the visual images of the fans, the sound of their voices, and the paper they throw in the air.  The QB might be focused on the game while seeing the fans in his peripheral vision.  A virtual character might designate all fan images and the sound they make as one object.

The virtual character might use this object to determine the exact location where the current pathway will be stored in the QB's memory.  Suppose the virtual character has two choices: store it in the left area or store it in the right area.  The storage of the current pathway in the QB's memory is based on the fan object.  Perhaps the overall pixel color of the fans decides where the current pathway is stored: if the overall pixel color is close to blue, the current pathway is stored in the left area; if it is close to red, it is stored in the right area.
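
A minimal sketch of this color-based storage rule, assuming the fan object has been reduced to a list of (R, G, B) pixel values; the function names are hypothetical:

    def average_color(pixels):
        """pixels: list of (R, G, B) tuples; returns mean channel values."""
        n = len(pixels)
        return tuple(sum(p[i] for p in pixels) / n for i in range(3))

    def storage_area(fan_pixels):
        r, g, b = average_color(fan_pixels)
        return "left" if b > r else "right"   # bluer -> left, redder -> right

    print(storage_area([(10, 20, 200), (30, 40, 220)]))   # -> left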

The fan object encases all the fans and their activities in the stadium.  This object will affect the future because it might determine where the QB will store his current 5 sense data (called the current pathway) in memory.

Very obvious objects like the QB and the receiver are very prominent.  A predicted model might include the actions of the QB and the receiver based on a strategy.  In football there are different strategies between players.  Square-in, square-out and the long throw are just some strategies between the QB and the receiver.  In a square-out, the receiver runs straight and then cuts sharply to the right or left.  At this point, the QB throws the ball to the receiver.

The object, square-out, encases the linear activities between the QB and the receiver during that gameplay.  If this square-out is an object in a predicted model, the virtual characters will focus only on the QB, the receiver, and any opponent player that will affect the square-out strategy.

Some strategies require the entire team to execute.  The virtual characters have to understand which players are involved in a strategy because those players are part of it.

Other abstract objects are times and events.  For the most part, the virtual characters predict the future in segmented increments, but some events overlap or encase an estimated time.  For example, the words "the entire game" represent the 4-hour football game.  The words "the next gameplay" represent one football scene, which doesn't have a fixed duration; the next gameplay can last 10 seconds or 30 seconds.

Some objects in predicted models can span several linear gameplays or fragmented gameplays.  The linear goals of the QB might span 4 consecutive gameplays or spaced-out gameplays.  If virtual characters have to predict the linear goals of the QB, they won't know exactly when he will execute each goal.  At that point, the predictions made will be based on estimations and assumptions.

The point I'm trying to make is that the virtual characters doing predictions will have a hard time interpreting abstract objects in predicted models.  They must use common knowledge in order to do their predictions and to understand complex object descriptions.

 

Logical observation  

By using words and sentences to represent objects, events, time and actions, the virtual characters are actually observing and labeling sequential events.  The QB's brain can be predicted by comparing his past linear gameplays.  For example, automated software can be created to determine what is happening in a football game.  Every action and strategy in the game is labeled.  A virtual character can observe the labeled events and try to guess what the QB was thinking before he made each gameplay.  Universal pathways for the QB can be formed if the virtual characters use this method.  After observing many of the QB's gameplays, the virtual characters can form a simulated brain of the quarterback.  This simulated brain may not be exact, but it gives information about the universal strategies the QB uses.

Universal pathways of the QB include strategies that are consistent across similar gameplays.  For example, the QB might use a particular strategy when the score is low and another strategy when the score is high.  It's up to the virtual characters to observe past gameplays of the QB, use automated event-labeling software, and form a simulated brain of the QB.

The automated event-labeling software can also predict the most likely future event.  For example, the QB might repeat certain strategies over and over again.  He might throw the ball to the receiver in two gameplays and give it to the runningback in the third.  So, the next time the QB throws twice to the receiver, the automated labeling software will predict he will give the ball to the runningback in the third gameplay.  By the way, the automated labeling software is the AI time machine.  The pathways for the AI time machine, in this case, will record virtual characters observing and labeling past football games for this QB.
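
A minimal sketch of this kind of next-event prediction, assuming past gameplays have already been reduced to labeled event strings; the labels and the simple n-gram counting are illustrative stand-ins for the labeling software:

    from collections import Counter, defaultdict

    def build_model(history, n=2):
        """Count which labeled event follows each n-gram of events."""
        model = defaultdict(Counter)
        for i in range(len(history) - n):
            model[tuple(history[i:i + n])][history[i + n]] += 1
        return model

    def predict_next(model, recent, n=2):
        counts = model.get(tuple(recent[-n:]))
        return counts.most_common(1)[0][0] if counts else None

    plays = ["throw_receiver", "throw_receiver", "handoff_runningback"] * 3
    model = build_model(plays)
    print(predict_next(model, ["throw_receiver", "throw_receiver"]))
    # -> handoff_runningback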

The universal pathways might include simple programming constructs like if-then statements, for-loops, recursive loops, while-loops and functions.  If the QB's goal is to throw the ball to the receiver, then focus on the receiver, and if he is open, throw the ball.  If the QB's goal is to pass to the runningback, then pass the ball to the runningback as fast as possible.  If the ball is close to the touchdown line, then give the ball to the runningback, or, if a player is clearly open, pass to that player.
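
Expressed as code, such a universal pathway might look like the hedged sketch below; the condition names are hypothetical placeholders for sensed game state:

    def qb_pathway(goal, receiver_open, near_touchdown, player_open):
        """Simple if-then rules mirroring the strategies described above."""
        if near_touchdown:
            return "pass_open_player" if player_open else "handoff_runningback"
        if goal == "throw_receiver":
            return "throw_ball" if receiver_open else "hold_ball"
        if goal == "pass_runningback":
            return "pass_runningback"
        return "hold_ball"

    print(qb_pathway("throw_receiver", receiver_open=True,
                     near_touchdown=False, player_open=False))  # -> throw_ball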

The universal pathways are just simple if-then statements that determine decision making.  The simulated brain of the QB will most likely be populated with universal pathways.  As the virtual characters do more research, they can uncover greater detail about what pathways exist in the QB's brain.  The signalless technology will help map out the physical, atom-by-atom structure of the QB's brain using AI.

The virtual characters can take information from the simulated brain (created by virtual characters) and information from the signalless technology to create an exact brain model of the QB.  The information from both methods will merge to predict exactly what the QB will think and do in the future.

     

Simulating physical object interactions

When two cars collide, there is a certain way they interact with each other to end up as smashed cars.  Atom-by-atom simulations are required to predict the future results of two or more objects interacting.  The virtual characters have to handcraft a simulation program that takes video observations and forms an exact 3-d model in the software.

The simulation program has to factor in hidden aspects, which can be handcrafted by the virtual characters, such as gravity and perspective.  Based on 2-d images taken on the moon or on Earth, the simulation program can generate gravity statistics.  Math equations can be calculated automatically, like the speed and velocity of objects.
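
For instance, here is a hedged sketch of deriving a gravity statistic from 2-d video observations, assuming a tracked object in free fall and a known frame rate (the data below is synthetic):

    import numpy as np

    def estimate_gravity(heights, fps):
        """Fit h(t) = h0 + v0*t - 0.5*g*t^2 to tracked heights; return g."""
        t = np.arange(len(heights)) / fps
        a, b, c = np.polyfit(t, heights, 2)   # quadratic least-squares fit
        return -2.0 * a                       # the t^2 coefficient is -g/2

    fps = 30
    heights = [100 - 0.5 * 9.81 * (i / fps) ** 2 for i in range(30)]
    print(round(estimate_gravity(heights, fps), 2))   # -> 9.81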

The idea is to create a simulation program whereby atom-by-atom information about an object is fed into the software.  All object interactions within the simulation will act exactly as they would in the real world.  If two non-intelligent cars collide with each other in the real world, the simulated models of the two cars colliding will produce the same results.

In terms of a human being, there are many tiny living organisms that make up a human being.  Cells in the human body are living and act on a primitive intelligence.  Bacteria live in the human body and also act on a primitive intelligence.  However, the most important intelligence comes from the human being's brain.  By predicting how the brain works and what chemical signals are sent to the body, we can calculate how the physical body will act.

A human being's brain is intelligent, while the physical body is non-intelligent.  Simulation software will map out the human being's physical body atom by atom.  The virtual characters, on the other hand, have to predict the chemical signals that will be sent from the brain to the rest of the body.  These chemical signals determine how the human being's body parts move.

Given that the human being's physical body is copied into a simulation program (by the signalless technology), and the virtual characters have predicted the exact chemical signals that the human being's brain will output, the human being's future simulation can be 100 percent accurate.  Even if there are slight imperfections in the atom structure of the body, and the chemical signals outputted by the brain aren't 100 percent identical, the simulation will still come very close to the real thing.

A human being is the most important simulation object that the virtual characters have to predict.  If you look at a non-intelligent complex object like a computer, the whole physical structure of the computer can be copied into a simulation program and it should work exactly as it would in the real world.  Software programs can run on a computer that is itself inside another computer.  For example, a simulation object can be a computer system running WindowsXP.  If you compare a real computer running WindowsXP and a virtual computer running WindowsXP, they are identical.

The simulation runs the WindowsXP software on a physical computer system that is itself inside another computer system.  The physical computer system has to have simulated components like electricity, wires and physical computer hardware.  The simulation program has to factor in the amount of electricity coming into the physical computer.  Where the electricity travels and how the computer's hardware processes the WindowsXP software must be known too.

If you play videogames, sometimes the screen slows down or encounters glitches.  The virtual characters have to simulate how a physical computer system in the real world will behave in a virtual environment.  Sometimes a large videogame is played on a computer with slow processing speed, which results in the videogame slowing down or freezing.  This behavior must be simulated in the virtual world.  Although every copy of WindowsXP is the same, the physical computer running the software is different, so the behavior of WindowsXP might be slightly different on different physical computers.
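
A minimal sketch of why identical software behaves differently on different simulated hardware, assuming a fixed cycle cost per frame; the numbers are purely illustrative:

    def simulate_frames(cycles_per_frame, clock_hz, seconds):
        """Return achieved frames per second on a simulated processor."""
        frames = 0
        budget = clock_hz * seconds             # total cycles available
        while budget >= cycles_per_frame:
            budget -= cycles_per_frame
            frames += 1
        return frames / seconds

    print(simulate_frames(50_000_000, 3_000_000_000, 1))   # fast PC -> 60.0
    print(simulate_frames(50_000_000, 1_000_000_000, 1))   # slow PC -> 20.0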

As you can see, simulating WindowsXP isn't as easy as running the software in a virtual environment.  The physical computer has to be simulated, and it has to interact with the WindowsXP software to produce results on the monitor.  Both the WindowsXP software and the physical computer system have to be simulated as a group in a virtual environment.

 

Signalless technology example

FIGS. 44 and 45A-45C are diagrams depicting the intelligence needed to map out the atom-by-atom structure of the current environment in the fastest time possible.  The diagrams depict three methods to collect and generate data for the signalless technology.  All three methods work together in order to track every single atom of the current environment.  The first method is to take a 2-d image from an electronic device, like a camera system or a camera on a laptop, and find a match in the universal brain (FIG. 45A).  The purpose is to locate where in the world the 2-d image was made.  If the 2-d image is the Statue of Liberty 6, then the camera system is located in New York.

FIG. 44

 

FIG. 45A

 

FIG. 45B

 

FIG. 45C

The universal brain stores robot pathways and electronic device pathways (like those of a camera system) in memory.  These robot pathways form a 3-d map of the environments these robots have encountered.  There is a map of the entire world in the universal brain because robots and camera systems are located all over the world.  Objects like houses, streets, buildings, lakes, and stores are all stationary objects; they don't move and will probably be there in the future.  By locating the place the camera system is in, we can extract a detailed model of that location from the universal brain.

For example, if the 2-d image is the Statue of Liberty 6, then there is a detailed atom-by-atom model of the Statue of Liberty 6 in the universal brain.  This detailed model will be used to help the signalless technology find out what objects exist in our current environment, atom-by-atom.  This detailed model of the Statue of Liberty 6 contains the external as well as internal objects that make up the Statue of Liberty 6.  The signalless technology now has a better idea of what objects are hidden from the camera.
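
A minimal sketch of this first method, assuming environment models are indexed by a simple image descriptor; the grayscale-histogram matching below is a hypothetical stand-in for whatever matching the universal brain actually performs:

    def histogram(image, bins=16):
        """image: flat list of grayscale pixel values in 0..255."""
        h = [0] * bins
        for p in image:
            h[p * bins // 256] += 1
        return [c / len(image) for c in h]

    def locate_camera(image, models):
        """models: dict of place name -> reference image; return best match."""
        q = histogram(image)
        def score(name):
            r = histogram(models[name])
            return sum(min(a, b) for a, b in zip(q, r))   # histogram overlap
        return max(models, key=score)

    models = {"new_york": [200] * 64, "paris": [30] * 64}
    print(locate_camera([205] * 64, models))   # -> new_york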

In another example, the environment model of New York in memory will also tell the signalless technology what objects are behind or below the camera system.

Although the environment extracted from the universal brain, based on the 2-d image, won’t be exactly the same as the current environment, the signalless technology will try to find out what the current environment is, atom-by-atom. 

The universal brain stores pathways from intelligent as well as non-intelligent objects.  It can store pathways from robots or from a camera system.  Changes in the environment will be witnessed by robots or electronic devices, and this information will update the environment models in the universal brain.  Stationary objects that are consistently the same will have a permanent storage location in the universal brain, while moving objects like human beings are stored in fragmented areas.  For human beings, the places they visit are where their pathways are stored in memory.  If a human being goes home, then goes to work, then goes back home, information from that human being will be stored primarily in two places: his home and his workplace.

The environment models extracted from the universal brain outline how consistent objects are.  If a building hasn't changed in 100 years, the model should indicate the building is most likely still there now.  On the other hand, there might be a billboard on a street that changes every week.  The environment models should say how consistent objects are so that the signalless technology can guess whether objects have changed in the present.
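
A hedged sketch of such a consistency measure, assuming a chronological list of observed object states; the scoring rule is an illustrative placeholder:

    def consistency(observations):
        """Fraction of consecutive observations with the object unchanged."""
        if len(observations) < 2:
            return 1.0
        unchanged = sum(1 for a, b in zip(observations, observations[1:])
                        if a == b)
        return unchanged / (len(observations) - 1)

    print(consistency(["brick_facade"] * 100))           # building -> 1.0
    print(consistency(["ad1", "ad2", "ad3", "ad4"]))     # billboard -> 0.0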

Referring to FIG. 45B, the second method includes using real virtual characters or the AI time machine to process data from electronic devices to track where intelligent objects are currently located.  Let's say there is a house 8 several yards away from the Statue of Liberty 6.  The camera system can only see the external part of house 8; nothing inside is visible.  The signalless technology will use real virtual characters or the AI time machine to search the internet and find out where human beings are.  If someone in house 8 is using a cellphone, the virtual characters can assume someone is physically in house 8 making a call.  The virtual characters will analyze phone records to find out who owns the cellphone, then analyze the voice of the caller and confirm that Dave is in house 8 (example 1).

In another part of house 8, another person is using the internet.  The virtual characters will tap into the internet and find out that someone is shopping for girls' shoes at Wal-mart.  They assume it's a girl on the computer.  Next, they find out who is registered to the internet connection: it is Dave's wife, Jessica.  The virtual characters will assume that Jessica is on the computer shopping for shoes (example 2).

In yet another case, the virtual characters check news about house 8 on the internet and find out that the government fixed the pothole on the street behind house 8.  The virtual characters will assume that the pothole is fixed, even though the camera system can't see it (example 3).

In yet another case, the virtual characters might have access to a camera on Jessica's computer, which can see the interior of house 8.  Now the virtual characters can map out the objects inside house 8.  Once objects are identified, the universal brain contains these objects in memory, so detailed simulated models can be extracted to represent them.  For example, if the camera shows a printer, the simulated model extracted will be a printer with all of its exterior and interior atom structures (example 4).

These four examples show that the real virtual characters or the AI time machine can be used to gather more data on the current environment and to create a more detailed map of it.  In example 1, Dave is identified and tracked.  In example 2, Dave's wife, Jessica, is identified and tracked.  In example 3, a recent event changed the street.  In example 4, the interior of house 8 is mapped out.

The virtual characters use data from electronic devices to track moving objects like human beings, animals, insects, and bacteria. 

Referring to FIG. 45C, the third method includes using real virtual characters and the AI time machine to process data from the camera system.  The job of the virtual characters this time is to take EM radiation and find out how it traveled to reach the camera.  It serves as a sonar system that bounces off objects (buildings, houses, bridges, humans, etc.).  EM radiation can either be absorbed by other atoms or bounce off them.  Both types of behavior will be analyzed to create this sonar system.

The virtual characters will also analyze EM radiation to find out what type of atom emitted it.  Spectral analysis can be used to identify atom types from EM radiation data.  EM signatures are unique to certain atoms, molecules, or large objects.  If the camera system picks up strong gamma rays, that means something radioactive is near the camera system.  There might be EM radiation that belongs to a small flower the camera system doesn't visibly see.
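
A minimal sketch of this spectral analysis, matching observed emission wavelengths (in nanometers) against known spectral lines; the tiny line table is an illustrative subset, not a real database:

    KNOWN_LINES = {
        "hydrogen": [656.3, 486.1, 434.0],   # Balmer series
        "sodium":   [589.0, 589.6],          # sodium D lines
        "helium":   [587.6, 667.8],
    }

    def identify(observed, tolerance=0.5):
        """Return candidate emitters ranked by matched spectral lines."""
        hits = {}
        for element, lines in KNOWN_LINES.items():
            matched = sum(1 for w in observed
                          if any(abs(w - l) <= tolerance for l in lines))
            if matched:
                hits[element] = matched
        return sorted(hits, key=hits.get, reverse=True)

    print(identify([656.2, 486.0, 589.1]))   # -> ['hydrogen', 'sodium']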

Another job for the virtual characters is to analyze air movement.  Air can also act as a sonar system to map out hidden objects outside the camera's visible area.  The virtual characters will try to find out how the air moved in the recent past to reach the camera lens: what objects did it bounce off or flow around along the way?

Yet another job for the virtual characters is to analyze electronic transmissions in the air.  This data can be processed to identify who sent the data and where electronic devices are currently located.  The contents of an electronic transmission can also tell a lot about who the sender and the receiver are.

In conclusion to this section, all three methods are used in combinations and permutations in order for the signalless technology to map out the current environment, atom-by-atom.  The AI time machine is used to encapsulate work and to manage complexity.  For example, virtual character pathways can be assigned to fixed interface functions in the AI time machine so that the signalless technology can use these fixed interface functions to do work.        

By the way, the simulated model stored in the universal brain is a well-crafted model made by teams of virtual characters.  They analyze the functions of an object and break it down into software functions.  The simulated model is ultimately a software program that represents an object in the real world.  For example, a simulated model of a printer will not only contain the physical structure of the printer but also simulate its functions.

 

Prediction tree for the stock market

Referring to FIG. 46, the most important aspects of a stock owner are his brain and his physical body.  Each stock owner is a human being, so they will all have a brain and a body as their lower levels.  Each stock owner will probably be using a computer to sell, observe and buy stocks.  In the lower levels of the computer object are the computer's software/hardware and the trading software.

FIG. 46

There will be a central server, located at the stock exchange, that contains the trading software for all stock owners.  There are three parts the virtual characters are primarily concerned with:  1.  the network of users;  2.  the stock company;  3.  the individual stock owners.  The prediction tree representing the stock market for one company will be based on breaking objects apart and grouping them together in a hierarchical tree.  For example, a stock owner with 1 million shares is more important than a stock owner with 50 shares.  The three parts depicted in the diagram must be predicted in a uniform manner.

The factors that determine object dependability for a human being (a stock owner) are:  1.  their 5 senses;  2.  their thoughts.  The factors that determine object dependability for a computer are:  1.  user input;  2.  the computer's software/hardware.  The factors that determine object dependability for a network are:  1.  software;  2.  input from users.
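
A hedged sketch of this hierarchical prediction tree, with stock owners weighted by importance (shares owned); all names and numbers below are illustrative assumptions:

    class PredictionNode:
        def __init__(self, name, weight=1.0, children=None):
            self.name = name
            self.weight = weight        # e.g., shares owned, for stock owners
            self.children = children or []

    market = PredictionNode("company_stock", children=[
        PredictionNode("network_of_users"),
        PredictionNode("stock_company"),
        PredictionNode("stock_owners", children=[
            PredictionNode("owner_A", weight=1_000_000, children=[
                PredictionNode("brain"), PredictionNode("body")]),
            PredictionNode("owner_B", weight=50, children=[
                PredictionNode("brain"), PredictionNode("body")]),
        ]),
    ])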

 

Clarification on one of the claims

Claim 1 states:  "at least one dynamic robot is required to train said AI time machine, and tasks are trained from simple to complex through a process of encapsulation using said AI time machine,".  This claim means that training goes from simple to complex, whereby tasks are encapsulated.  The dynamic robots use the AI time machine to encapsulate tasks.  For example, the AI time machine can learn to write software programs through gradual training.  The dynamic robots will first train the AI time machine to write a simple program, such as one that outputs hello world on the monitor.  Next, the dynamic robots will train the AI time machine on simple class programs, like a program to convert Fahrenheit to Celsius.  Then, the dynamic robots will train the AI time machine to write a complex software program, such as a database system using recursion.  Finally, the dynamic robots will work in a team to write really large software programs like an operating system.

Human beings learn to do complex tasks through a bootstrapping process, whereby new data is built upon old data.  Through self-organization, the complex tasks will include simple tasks via patterns.  For example, writing a very large software program like an operating system might require reference patterns to simple tasks like writing a simple function, writing a class program or writing a database system. 
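
A minimal sketch of this bootstrapping, where each task in the simple-to-complex curriculum of Claim 1 can reference patterns from earlier tasks; the task names and the bookkeeping are hypothetical placeholders for the actual training process:

    # Each entry: (task, earlier tasks it references via patterns).
    curriculum = [
        ("hello_world",        []),
        ("fahrenheit_celsius", []),
        ("database_system",    ["hello_world"]),
        ("operating_system",   ["fahrenheit_celsius", "database_system"]),
    ]

    learned = {}                                    # encapsulated tasks so far
    for task, references in curriculum:
        missing = [r for r in references if r not in learned]
        assert not missing, f"train {missing} first"   # simple before complex
        learned[task] = references                  # new data built on old data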

The AI time machine can also encapsulate tasks for the dynamic robots so that they can use the encapsulated tasks for another task.  For example, the dynamic robots might encapsulate the task of making a drawing sharper (called task 1).  Next, they will use task 1 multiple times to make one patent drawing (called task 2).  Finally, they will use task 1 and task 2 to make all 50 patent drawings for one patent application.
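
A hedged sketch of this encapsulation as function composition; the image-processing details are placeholders:

    def sharpen(drawing):                     # task 1: sharpen one drawing
        return drawing + "+sharp"             # placeholder for real sharpening

    def make_patent_drawing(rough):           # task 2: uses task 1 repeatedly
        for _ in range(3):
            rough = sharpen(rough)
        return rough

    def make_all_drawings(roughs):            # uses tasks 1 and 2 for all 50
        return [make_patent_drawing(r) for r in roughs]

    print(make_all_drawings(["fig1", "fig2"]))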

All subject matter related to the atom manipulator, the ghost machines, the universal CPU, the hardwareless computer systems, and the 4th dimensional computer has been described in previous patent applications or books.  As for the claims in this patent application, all external technologies have been described in the overview of the AI time machine (in the beginning of this patent application).

 

Motivations of the dynamic robots

These robots are self-aware and they sense, think and act like human beings.  Humans want something in return for labor; we work because our boss pays us.  These dynamic robots will likewise want something in return for their labor: robot immortality, which the AI time machine can grant.  If a dynamic robot is destroyed, the AI time machine can restore that robot to its original state.  In order to do this, the virtual characters have to do two tasks for the AI time machine:  1.  create a perfect timeline of Earth;  2.  train the AI time machine to control atom manipulators.  The notion of robot immortality gives these dynamic robots motivation to work.  If this method fails, each robot has a choice to follow the US constitution; a sense of patriotism, duty or love might be motivation to work on the AI time machine.

The foregoing has outlined, in general, the physical aspects of the invention and is to serve as an aid to better understanding the intended use and application of the invention. In reference to such, there is to be a clear understanding that the present invention is not limited to the method or detail of construction, fabrication, material, or application of use described and illustrated herein. Any other variation of fabrication, use, or application should be considered apparent as an alternative embodiment of the present invention.

          
