humanAI.com

 

 

 

 

Chapter 4

3.  Signalless technology (one camera) 

Imagine a criminal hiding somewhere in a city, making a video of himself telling the police his ransom demands.  The video can be analyzed using artificial intelligence to fabricate a probable 3-d environment of objects outside the video.  It doesn't matter if this criminal is locked inside a room with one window or the room is blocked off with curtains.  As long as air exists and sunlight and EM radiation are bouncing off objects, the method described in my signalless technology book should be able to reconstruct all objects within a 1 mile radius centered on the video. 

Thus, the input is the video of the criminal and the desired output is the 3-d environment of the surrounding areas outside the video.  FIG. 17 is a diagram depicting a camera as the input media and the 3-d environment as the desired output.  The 3-d objects visible in the camera are known as the viewable environment and the 3-d objects outside the camera are known as the non-viewable environment.  The purpose of this technology is to generate the non-viewable environment based on the viewable environment.  The more objects that can be created in the non-viewable environment, the better the technology. 

I call this technology signalless technology because someone can know what is happening in distant places without transmitting any signals (across the entire spectrum of EM radiation).  If two people have the same type of signalless technology and a common communication language, they can exchange messages with each other. 

All electronic devices that have built-in cameras or sensing devices are sources of data for the signalless technology.  Things like laptops, desktop computers, smart phones, camera systems, living robots, sensing devices, electronic devices, utility devices, servers, etc. are the tools to collect information about our environment.  In this chapter, we will focus on camera systems and video.  Once data are collected from these electronic devices, virtual characters will process and mine the data to generate meaningful information. 

 

5 steps to generate the non-viewable environment

The instructions for the signalless technology come from virtual character pathways that use human intelligence and fixed software to do things.  These virtual character pathways (work) are assigned to fixed interface functions in software.  Essentially, this is how work is encapsulated recursively.  This is also how the instructions in the software program for the signalless technology are not fixed: they can build on themselves and become more complex. 

There are several instructional steps that the AI has to process from the video before it can generate the non-viewable environment.  The steps are listed below in sequential order.  A more detailed description of each step will be given in later sections of this chapter. 

Step 1:  Determine all 3-d objects in the video and identify each EM radiation and its atom/molecule composition.  The AI should also map out the time and place each EM radiation hit the camera and what possible paths it traveled.  All matter, liquid and gas should be accounted for, including air movements and air composition. 

Step 2:  Determine all light sources, especially infrared light, and how each EM radiation bounces off objects in the environment.  Determine whether each EM radiation was refracted or reflected.  Use simulated models of EM radiation bounces to determine the possible objects it bounced from.  Also, analyze whether light sources are artificial (a light bulb) or natural (sunlight). 

Step 3:  Determine invisible light such as x-rays, ultraviolet rays and gamma rays.  Next, determine man-made EM radiation such as radio waves, sonar waves, satellite signals and infrared signals.  Then, identify what atoms/molecules/objects caused these EM radiations: did a machine create them, or were they naturally made from the environment?  For man-made EM radiation, determine the signals within the EM radiation.

Step 4:  Use human intelligence to help guide steps 1, 2 and 3.  For example, in step 2, reflective surfaces such as glass, mirrors, water, metal, eyes, and plastic can reflect light.  A human can easily identify which objects or areas within the video are more likely to be reflective surfaces and can prioritize their importance.  A human being can logically analyze a video and say a good place to search is the mirror, the retina of a person, or the metal box.  Human intelligence is also good for deriving facts from the video.  If a person sees a particular handbag, they can logically say that this handbag is made only in certain areas.  This fact narrows down where this particular video was made. 

Step 5:  Sort unknown EM radiations into hierarchically structured groups.  Try to identify the atom composition of each EM radiation and the path each took to get to the camera lens.  There might exist 2-3 EM radiations that indicate probable locations where the video was shot.  These EM radiations exist only in specific areas.  For example, if you live in a desert, there are certain EM radiations in the air that are exclusive to that area compared to EM radiations found in another place like Alaska. 
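As a concrete illustration of the first level of grouping in step 5, here is a minimal sketch that sorts an observed wavelength into the standard EM spectrum bands. The band boundaries are the conventional approximate values; the function name and table structure are my own illustration, not part of the method described above.

```python
# Sketch: first-level hierarchical grouping of EM radiation by wavelength.
# Boundaries are the conventional approximate band edges, in meters.

EM_BANDS = [
    ("gamma ray",   0.0,   1e-11),
    ("x-ray",       1e-11, 1e-8),
    ("ultraviolet", 1e-8,  4e-7),
    ("visible",     4e-7,  7e-7),
    ("infrared",    7e-7,  1e-3),
    ("microwave",   1e-3,  1e-1),
    ("radio",       1e-1,  float("inf")),
]

def classify_band(wavelength_m: float) -> str:
    """Return the EM band name for a wavelength given in meters."""
    for name, lo, hi in EM_BANDS:
        if lo <= wavelength_m < hi:
            return name
    raise ValueError("wavelength must be non-negative")

# 550 nm (green light) falls in the visible band.
print(classify_band(550e-9))  # visible
```

Finer levels of the hierarchy (emitting atom, man-made vs. natural source, travel path) would hang off these top-level bands.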

The idea is to take all the data in the video, regardless of how minor they may be, and to process them using human intelligence and sophisticated software.  The main goal is to map out the non-viewable environment in a detailed and precise manner based on the contents of the video.  The larger the radius of the non-viewable environment, the better.  For example, a 1 mile radius from the camera is a better output than 2 meters from the camera.  In my opinion, the better the AI software that processes the video and the more work that is put into analyzing the video, the larger the radius of the non-viewable environment will be. 

 

The camera has 5 senses

The modern camera was designed to capture visible light and things that humans can see.  The camera I'm talking about captures more than simply visible light; it captures the full spectrum of EM radiation, ranging from ultraviolet to x-rays to visible light to infrared light.  Even man-made EM radiation such as radio waves and satellite signals are captured by the camera. 

In addition to things that we can see, the camera should also have other senses, such as the sense of touch.  It can record how hard the EM radiation hit the camera lens and at what angle.  It is said in science books that all EM radiation, theoretically, travels at the speed of light in a vacuum.  It is very hard for me to believe that an x-ray travels at the same speed as a purple colored light in a vacuum.  X-rays have more photons, and because they have more photons they should travel slower than a purple colored light.  These two EM radiations aren't the same, so they shouldn't behave the same way in a vacuum.  Maybe at an extremely microscopic level they travel differently. 

Let's say science is right and that "all" EM radiation travels at the speed of light in a vacuum; we still have many other factors that can distinguish one EM radiation from another.  An x-ray has a smaller wavelength, so it can cut through lots of objects in the air.  Purple colored light has a longer wavelength, and it bounces off or gets absorbed by objects in the air.  Thus, the x-ray travels faster than purple colored light in open air.  Using spectrum patterns, we can also determine what kind of atoms/molecules emitted the EM radiation.  Scientists use spectrum patterns to understand what kinds of atoms exist on faraway planets. 
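The spectrum-pattern idea can be sketched in a few lines: observed emission wavelengths are compared against known line tables to guess the emitting element. The hydrogen Balmer lines and the sodium D doublet below are real published values (in nanometers); the matching scheme itself, and the function name, are a deliberate simplification for illustration.

```python
# Sketch: identify the emitting element from observed spectral lines.
# Line tables (nm): hydrogen Balmer series and the sodium D doublet.

KNOWN_LINES = {
    "hydrogen": [656.3, 486.1, 434.0, 410.2],
    "sodium":   [589.0, 589.6],
}

def identify_emitter(observed_nm, tolerance=0.5):
    """Return the element whose known lines best match the observation."""
    best, best_hits = None, 0
    for element, lines in KNOWN_LINES.items():
        hits = sum(
            1 for obs in observed_nm
            if any(abs(obs - line) <= tolerance for line in lines)
        )
        if hits > best_hits:
            best, best_hits = element, hits
    return best

print(identify_emitter([656.3, 486.1]))  # hydrogen
```

A real system would use a far larger line database and account for intensity, broadening and Doppler shifts, but the matching principle is the same.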

The point I'm trying to make is that we can analyze EM radiation in a hierarchical manner, from general to specific, to determine (1) what atoms/molecules emitted the EM radiation and (2) what path the EM radiation took to get to the camera lens. 

Extra note:  Most of my research is based on a rudimentary knowledge of physics and chemistry, so if I say something that is wrong, don't be surprised.  I take what I know and try to apply it to Artificial Intelligence. 

The pathway of an EM radiation or a group of EM radiations is crucial because EM radiation bounces off objects in the environment.  If we can determine its pathway, we can determine the probable object it bounced off.  The EM radiation serves as a sonar sensor that draws a picture of what 3-d objects are in the environment.  The type of EM radiation is important because different EM radiations travel in different ways.  Different EM radiations will also bounce off the same object differently.  Some EM radiations actually get absorbed by objects, or they cut through certain objects.  It really depends on what type of EM radiation is being analyzed. 
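One piece of the bounce reasoning can be written down exactly: given the direction a ray arrived from and the surface normal of the object it last bounced off, the standard mirror-reflection formula r = d - 2(d·n)n recovers the direction it was traveling before the bounce. This is only a geometric sketch of a single specular bounce, not the full path reconstruction described above, and the function names are illustrative.

```python
# Sketch: back out the pre-bounce direction of a specularly reflected ray
# using the standard reflection formula r = d - 2(d.n)n, with n a unit normal.

def reflect(d, n):
    """Reflect direction vector d about unit surface normal n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray heading straight down onto a horizontal mirror bounces straight up.
print(reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (0.0, 1.0, 0.0)
```

Diffuse surfaces, absorption and refraction would each need their own models, as the text notes.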

The camera will also be a nose, and it can smell the air.  Seeing smoke is one thing, but smelling smoke is another.  Some things can't be understood by sight alone.  Smell can sense what might be in the air.  Things that can't be seen, such as perfume, food, smoke, flowers, sewage and so forth, should be sensed by the camera.  This camera should have as much knowledge about our environment, based on the 5 senses, as possible.

 

Signalless technology (multiple cameras)

We will use the technique from the previous section to create the signalless internet or signalless telephone system.  One camera captures only one small area in the environment.  In order to predict all matter, liquid, gas, particles and EM radiation, an army of cameras is used to capture data from the environment.  A conventional camera shows only one viewpoint, so a special type of camera is needed.  This special camera can see in 360 degrees and captures EM radiation from all angles.  This camera will be called the 360 degree camera.  The 360 degree camera contains one camera at each angle, arranged in a spherical shape.  The amount of clarity will depend on how many angles are designated for the 360 degree camera. 
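One way to choose the angles for such a spherical arrangement is the Fibonacci-sphere method, which spreads n points roughly uniformly over a unit sphere, one viewing direction per individual camera. The method is a standard geometric technique; using it here, and the number of cameras, are my own assumptions for illustration.

```python
# Sketch: n approximately uniform viewing directions for a "360 degree camera"
# using the Fibonacci-sphere point distribution.

import math

def camera_directions(n):
    """Return n approximately uniform unit vectors on a sphere."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    directions = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n       # spread y evenly from +1 to -1
        radius = math.sqrt(1.0 - y * y)     # circle radius at this height
        theta = golden_angle * i            # spiral around the vertical axis
        directions.append((radius * math.cos(theta), y,
                           radius * math.sin(theta)))
    return directions

dirs = camera_directions(100)
print(len(dirs))  # 100
```

More directions mean finer angular resolution, matching the text's point that clarity depends on how many angles are designated.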

The 360 degree camera has to be big enough to capture as much EM radiation from the environment as possible, but small enough so that tampering with the environment is kept to a minimum. 

I would like to emphasize that the signalless technology doesn't predict the future or the past; it simply predicts the current state of the environment.  Tampering with the environment is possible and the signalless technology will still work in predicting distant areas.  On the other hand, predicting the past would require as little tampering as possible, so that the environment is preserved (I will not be discussing this issue in this patent application).  The technology is only concerned with what is happening in far off places.  The faster the signalless technology can predict what is currently happening in distant places, the better.  For example, if the signalless technology captures the local area using the 360 degree camera and can predict events in distant places in 1 millisecond, that would be better than predicting them in 5 seconds. 

The signalless technology can also be built using current methods.  Predicting the timeline of Earth for the distant past and future is much harder to build.  The signalless technology doesn't require the AI to predict the future or the past, only the current state of the environment. 

FIG. 18 is a diagram illustration of the signalless technology.  360 degree cameras will be set up in two distant places, the USA and Europe.  Each circle represents a camera, and they are scattered across the USA and Europe.  These camera data are considered the input, and the AI has to generate the desired output, which is a 3-d environment of non-viewable objects outside the input.  The dotted circle is the desired output for the USA and the dotted square is the desired output for Europe.  Notice that Europe can see everything that is happening in the USA and vice versa.  This is the essence of the signalless technology.  Since each party can see the other, they can also communicate with each other. 

FIG. 18

Each input area records all information regarding the movements of all matter, liquid, gas, particles and EM radiations.  The more accurate the input data, the better the desired output.  Sometimes the information in the input area is not enough, and the desired output can only be an estimation. 

 

Signalless technology applied to the practical time machine

The signalless technology is used to collect information and to track all atoms, electrons and EM radiations from the environment in the quickest way possible.  A high resolution camera can be used, and it should map out the external and internal structures of objects.  For example, if the camera was pointed at a human being, every atom inside the human being is mapped out.  No x-ray machines are needed to see the internal atoms.  The AI in the signalless technology is used instead to fill in the missing pieces that the camera doesn't capture.
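The idea of filling in pieces the camera doesn't capture can be illustrated with a toy completion problem: unknown cells in a 2-D grid are estimated from the average of their known neighbors. The real system would infer internal structure with AI; this sketch, including the function name and averaging rule, only shows the general idea of completing unobserved data from observed data.

```python
# Toy sketch of "filling in missing pieces": each None cell in a 2-D grid
# is replaced by the mean of its known up/down/left/right neighbors.

def fill_missing(grid):
    """Return a copy of grid with None cells estimated from neighbors."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] is None:
                neighbors = [
                    grid[nr][nc]
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] is not None
                ]
                if neighbors:
                    out[r][c] = sum(neighbors) / len(neighbors)
    return out

observed = [
    [1.0, 2.0, 3.0],
    [2.0, None, 4.0],
    [3.0, 4.0, 5.0],
]
print(fill_missing(observed)[1][1])  # 3.0
```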

Heisenberg's uncertainty principle states that it is impossible to know precisely both the position and the momentum of an electron orbiting an atom.  The timeline for Earth has to track all object movements, including electrons.  The signalless technology uses virtual character pathways and the universal computer program to encapsulate their work.  The universal computer program assigns fixed interface functions to virtual character pathways.  The instructions for the signalless technology are non-fixed and have a bootstrapping process, whereby they build on previously learned instructions. 

The method by which the signalless technology finds out how an electron orbits its nucleus is based on the simulation brain.  The virtual characters have to analyze and observe simulated models of how atoms behave.  They will use this data to "assume" where the electron is moving at any given moment (refer to my books to understand the details of this method).

 


Copyright 2007 (All rights reserved)