Rev 1 - The Physical Constants of Digital Space


Transcript of Rev 1 - The Physical Constants of Digital Space

Original release date: February 15, 2016

The Physical Constants of Digital Space

By: Nathan S. Richard

July 17, 2016

Opening

The information contained herein derives from theory originated by me. Other than the obviously required references to currently accepted ideas, this work attempts to explain scientific observations using a new model developed from my own hypotheses. Any reference to the previous work of others is acknowledged either by naming the person or their work (such as Einstein and General Relativity). Other prior work or discoveries are referenced by commonly accepted terminology (such as Quantum Mechanics, beta decay, expanding space, and electromagnetic theory).

Objective

The following text introduces a new mathematical model of the physical world. This model can be used to show the origins and interrelations of the basic physical constants. Once the interrelations are understood, it becomes much more obvious why these constants have the set values that we measure. Continuing with discussions of the constants and how they are related, the reader can begin to form their own understanding of the mechanisms of natural physical processes. Though many questions will arise, the simplicity of this new model provides the tools which future researchers will use to uncover the answers.

Revision 1, Page 1 of 75


Introduction

There are several different physical constants that govern the interactions of energy and matter. But what are their origins? What determines their values? Obviously some of the constants are linked. But might they all share the same basis? If so, why do their numerical values, magnitudes and scales vary so greatly from one to the other? The one thing that all physical interactions, and the constants that regulate them, have in common is the space in which they occur. Certainly some physical constants are inherent properties of space. Perhaps more closely examining the structure of space itself could tell us something about the others. But how do you examine something that is nothing? And if it is nothing, then how can it have structure? General Relativity describes how space is warped by the presence of energy and matter, so there must be some sort of underlying structure to the void.

As there are no absolute reference frames for space and time, there can only be a meaningful comparison between two independent space-time frames, referencing one to the other. The same concept applies to numbers and the values they represent. A comparison of two numbers is called a ratio. The analysis presented in this text focuses on two very special ratios and how each can be used to represent specific properties of the void of space. The attention then shifts to a comparison of these two ratios and the properties which will have been assigned to them; a ratio of ratios, if you will. An obvious starting point would seem to be one of the physical constants that we know depends on space-time, such as the speed of light. However, even that single basic number represents multiple concepts, making it too complex for a starting point. It has units assigned to it which give it meaning yet, at the same time, limit our understanding of it. In pursuing a mathematical analysis, a purely numerical value with no units is most desirable in order to preserve simplicity.

This is where ratios are applied to the problem to represent the two single most basic properties of space: firstly, the void exists, and secondly, the void is expanding. Notice that the property of time is not mentioned. Time itself is neither an inherent property, as are the other two, nor does it even exist as an independent quality of our Universe. Instead, time is an emergent property which stems from the very action of expansion combined with the presence of energy or matter. This makes the term “space-time” all the more appropriate as it applies to the void in which all the cosmos exists. However, the concept of time is more integral to the Universe and our perception of it than simple terminology can reflect. The Hubble constant is a linear measure that represents the expansion of the void of space. As it is a rate, it is a straightforward manifestation of the passage of time. Indirectly though, time is evidenced by the very presence of energy and matter, both of which could not even exist if space were not expanding. The full scope of that statement will become apparent as simple ratios are used to focus our present understanding.

A Combined Model

So how does one use a ratio to model any property of space? Considering the first property, the very existence of the void of space, the key is to visualize how best to fill a given volume of space. Any size volume on any scale will do; size is of no concern if units are not involved. This is very much like a common exercise in the field of Materials Science in which one endeavors to determine the crystalline arrangement of atoms (represented by spheres) within the structure of a solid. There are several different lattice structures that could be considered, all with varying degrees of sphere packing efficiency, known as packing factor.

However, the packing efficiency of a crystalline solid can be increased when more than one size of sphere is used. Instead of being likened to an elemental crystal, this is more analogous to the structure found in the crystal lattice of an ionic compound. These regularly occur as an interwoven arrangement of two face centered cubic lattices, as is the case with the ionic compound sodium chloride (common table salt). A greater degree of packing efficiency is attained because the sodium ions of one lattice take up the smaller vacancies in the lattice of chlorine ions; however, there is still a good deal of unoccupied space throughout the structure. Replacing the sodium ion with the even smaller lithium ion does away with most of this unused space by allowing all the atoms to squeeze more tightly together.

In abstraction, this structure, realized in its most ideal form, would be one in which the small spheres, “blues” for the sake of simplicity, fit perfectly into the spaces between the larger ones, “reds.” This allows for greater stability, as each blue sphere is touched on six different sides by six red spheres. Also, each of the reds is touched at twelve different points by twelve other reds. The most important parameter in this analogy is the ratio of the diameters of the red and blue spheres. If the blues are assigned a diameter of 1, then the reds must have a diameter of (√2 + 1). This is more commonly known as the Silver Ratio.
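The diameter ratio in this sphere-packing analogy can be checked with a short calculation. This is a sketch of the standard octahedral-hole geometry for a face centered cubic lattice; the variable names are mine:

```python
import math

# In a face centered cubic lattice of large "red" spheres (radius R), the
# spheres touch along the face diagonal of the unit cube, and a small "blue"
# sphere sits in the octahedral hole on the cube edge between two reds:
#   face diagonal: a * sqrt(2) = 4R  ->  a = 2*sqrt(2)*R
#   cube edge:     a = 2R + 2r       ->  r = R*(sqrt(2) - 1)
R = 1.0                             # radius of a red sphere
r = R * (math.sqrt(2) - 1)          # radius of the blue sphere that fits exactly

diameter_ratio = (2 * R) / (2 * r)  # red diameter / blue diameter
silver_ratio = math.sqrt(2) + 1

print(diameter_ratio)               # ~2.41421356...
assert math.isclose(diameter_ratio, silver_ratio)
```

So a blue sphere of diameter 1 does force the red diameter to be (√2 + 1), as stated.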

Although it was mentioned earlier that scale does not matter, if this were a real structure there would still be a set percentage of unused volume between the spheres throughout the double lattice structure. But if you were to decrease the scale of the spheres as compared to the entire structure built up by the compilation of spheres, then the importance of that volume percentage becomes vanishingly small. Yet again, though, there remains the problem of reference. What measure can be used? Who or what is to determine how small is small enough? At what scale does it become possible for matter to sustain the subatomic oscillations that provide nuclear stability or allow electrons to absorb and emit photons? For a resolution, some sort of universal basis is needed; a “Gold Standard,” if you will.

Unfortunately, as Relativity tells us, there is no absolute reference to use for any measurement. However, instead of a Gold Standard, there is another ratio, called the Golden Ratio, and yet another property of the void to which it can be applied. Space-time expands in the same way at all scales of measure, so scale is irrelevant when applying the Golden Ratio to the expansion, just as it is irrelevant when applying the Silver Ratio to any volume of space. Luckily, because of the previous analogies, the Golden Ratio can be applied in a somewhat more straightforward manner. The last element considered in the Silver Ratio analogy was the diameter ratio of the spheres. That is a linear parameter. Therefore, this comparison model of the Golden and Silver ratios will focus on their linear application to space-time.

The Golden Ratio has the value of (√5 + 1)/2, approximately 1.61803398875. More descriptively, its value can be represented by a comparison of the lengths of two line segments. Imagine a line segment A and another, longer line segment B such that A is proportional to B exactly as B is proportional to the sum of A and B. For a brief diversion into the third dimension, perhaps you’ve seen the pictogram of the Golden Ratio spiral overlaid on the cross-section of a spiral sea shell. You can easily visualize this in three dimensions.
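The segment proportion just described can be verified numerically. A minimal sketch, where A and B are hypothetical segment lengths chosen so that B/A equals the Golden Ratio:

```python
import math

phi = (math.sqrt(5) + 1) / 2   # the Golden Ratio, ~1.61803398875

A = 1.0                        # shorter segment
B = phi * A                    # longer segment, B/A = phi

# A is proportional to B exactly as B is proportional to (A + B):
print(A / B, B / (A + B))      # both ~0.6180339887...
assert math.isclose(A / B, B / (A + B))
```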

Imagine that the spiral of the sea shell is composed of progressively larger squares spiraled outward from a central point so that the spiral line stays just inside the edges of the squares. This squared spiral is another common pictogram representing the Golden Ratio. Now, visualize that instead of squares for a spiral in two dimensions, they are replaced by cubes for a spiral in three dimensions. The smallest cubes are at the center and the largest are on the outside. The spiral and cubes can continue either outward, getting infinitely larger, or inward, getting infinitely smaller, progressively encompassing all scales of measure within the volume of space enclosed by the spiral. Again, this shows how the use of this ratio is appropriate for any scale being considered. However, there is a certain order of magnitude of separation between the two scales at which these ratios are used for the comparison in the combined model. This will be pointed out later when it becomes relevant.

It must be made clear that in this comparison of the Golden Ratio to the Silver Ratio, digitization is the tool that links abstract ratios to physical reality. The Silver Ratio is used for a digital representation of volume in space-time, and the Golden Ratio is used to digitize the expansion of space-time. It is easy to see that the interwoven double face centered lattices used in the Silver Ratio analogy can be called a digitization. But the digital representation of the expansion is done with only an approximation of the Golden Ratio; albeit a very good approximation. The edge lengths of the squares or cubes used in the Golden Ratio spiral are actually the values of the Fibonacci number sequence. Starting with zero, this is a sequence in which each successive number is the sum of the two previous numbers. Any one of the numbers in the sequence can be divided by the previous number for an approximation of the Golden Ratio. The larger the two consecutive numbers chosen from the sequence, the more accurate the approximation will be. So how accurate an approximation is good enough for nature? Combining the volume and expansion digitization models into a single model will reveal the answer.
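The convergence of consecutive Fibonacci quotients toward the Golden Ratio is easy to demonstrate. A sketch; the step count of 40 is an arbitrary choice:

```python
import math

phi = (math.sqrt(5) + 1) / 2   # Golden Ratio

# Walk up the Fibonacci sequence: each number is the sum of the previous two.
a, b = 1, 1
for _ in range(40):
    a, b = b, a + b

# The quotient of two consecutive Fibonacci numbers approximates the Golden
# Ratio, and the larger the numbers, the better the approximation.
print(b / a)
assert abs(b / a - phi) < 1e-12
```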

For the sake of simplicity, throughout the remainder of this text the terms ‘Golden Ratio’ and ‘Silver Ratio’ will almost always be abbreviated to ‘GR’ and ‘SR’ respectively. To continue: in making a digital recreation of an electronic signal, small individual “step” voltage levels are used to approximate the original. The smaller in time each step is, the more accurate the signal recreation will be. This same principle applies in three dimensions to the SR digitization of any volume or shape of space-time. Conversely, as described earlier, selecting larger integer values from the Fibonacci sequence increases the accuracy of the quotient approximation of the GR.

The fact that larger integers used for the division lead to a more accurate GR approximation is quite fitting. If you stop to think about it, as a point in space-time expands, the only one of its properties that changes is its “size,” relatively speaking. If you were to try to describe the newly expanded space-time point in terms of its own properties prior to having expanded, the only difference you could identify is that more than one of the prior points would fit into the new point. So as space-time expands to infinitely larger scales, it can more precisely represent any energy pattern it contains, as referenced to its beginning volume digitization. This is true under the limiting condition that said energy pattern increases in space-time size at the same rate that space-time itself increases in size. That, however, would require the contained energy pattern to spontaneously increase its total energy content as well.

Since that obviously isn’t going to happen, as additional energy cannot be created, it must mean that energy patterns are most easily sustained at some absolute minimum value below which they could not exist. Furthermore, if any compounded energy pattern were to gain or lose a component of energy, there would have to be some absolute minimum value for that transferable component energy. That minimum size component, or packet, is the ‘quantum’ that is the central focus of Quantum Physics. Indeed, it is digitized space-time that determines the size of the quantum and is, by extension, the foundation of all physical reality.

Basis of the Physical Constants

Logically, the most common way to compare two like properties or objects is to find out how many of the smaller fit into one of the larger. The same is done here for the combined comparison model of space-time. The GR digitization of expanding space-time is divided by the SR digitization of space-time volume. Since the numerator is smaller than the denominator, the expansion-to-volume ratio quotient will be less than one. This fact becomes particularly relevant later in determining the order of magnitude for several of the physical constants. The primary benefit of this quotient is the representation of both time and space in one number. Instead of having to make separate considerations for space and time in the continuum of space-time, the character of both components is consolidated in one ratio. It is the basis of the continuum of Ratio Space and is beneficial in analyzing physical phenomena regardless of whether the attributes of said phenomena are mostly defined by spatial parameters or whether temporal qualities prevail. First, though, we must focus on the expansion quantum: define it and determine its value.

Since the Golden Ratio is used to represent a digitized space-time expansion, perhaps it would be beneficial to use its base value as a starting point in the search for the value of the expansion quantum. The square root of five, then, should be the first number evaluated. Furthermore, since the physical interactions studied in Quantum Mechanics take place at very small scales, precision is imperative for the operation. As such, the quotient from the division of GR by SR will be carried out to more than forty decimal places. This can be done with most any online precision calculator; I have used the one available on the Math is Fun website. This primary ratio, given by the combined comparison model, is the defining parameter for the Ratio Space continuum:

GR/SR = ((√5 + 1)/2) / (√2 + 1) = 0.67021162252084234219570429995557018842430432754532

The first thing that one should notice about the primary ratio quotient is the obvious rounding potential at the twenty-fifth decimal place, where 2.9995557 could be rounded to 3.0000000. However, you can use precision to your benefit by focusing on the difference between those two numbers instead of merely rounding. Here is the difference as it occurs at the 29th decimal place:

4.442981157569567245468

As it turns out, this number is very close to the multiple of √5 that we were looking for. Twice the square root of five is:

4.472135954999579392818
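The quotient and the rounding difference above can be reproduced with Python's arbitrary-precision decimal module instead of an online calculator. A sketch; the 60 digits of working precision is my arbitrary choice, comfortably past the 29th decimal place:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60               # work well past the 29th decimal place

GR = (Decimal(5).sqrt() + 1) / 2     # Golden Ratio
SR = Decimal(2).sqrt() + 1           # Silver Ratio

primary = GR / SR
print(primary)                       # 0.670211622520842342195704299955...

# Round at the 25th decimal place, then take the difference:
rounded = primary.quantize(Decimal("1E-25"))
difference = rounded - primary       # occurs at the 29th decimal place
print(difference)                    # 4.44298115756956724546...E-29

print(2 * Decimal(5).sqrt())         # 4.47213595499957939281...
```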

That difference from the ratio comparison is almost there, but not quite; however, that ‘almost’ is very important. It is the very root of the fuzziness that exists in the world of quantum mechanics. The values that we measure or derive for the physical constants are prime examples of the ‘almost’ of quantum fuzziness. Its effects are apparent in all of them, once you know what you’re looking for. The comparison model of Ratio Space shows this directly, and the first piece of that puzzle is the Fine Structure Constant (α), as it is the defining value of the quantum fuzziness caused by digitally modeling expanding space. The Ratio Space model for digitized space is defined as infinitely many equal point potentials. This cannot work in the Space-Time model because a differing treatment for those two separate components splits the potential, so that the percentage of that potential shared by each varies depending upon local conditions. The effects of that variation are described by General Relativity.

I must stop to reiterate a previously mentioned point in an effort to help the reader avoid confusion. Time itself does not exist. It is simply an emergent property of the Space-Time model. The property of time provides a necessary reference for any creature capable of thought to impose an order on the physical world which it observes. Any ‘clock’ that is used to gauge the passage of time requires a reference to some ‘before and after state’ process of a physical system. This is also true for any brain perceiving the passage of time, as the same before and after state requirement applies to the physical processes within the brain which enable the formation of said perception. At any point in this text where time is referred to as having grown, expanded, shifted or varied in any way, that will be in the context of the Space-Time model. In the Ratio Space model the concept of time is not required. The greatest advantage of this model is that it eliminates the split potential problem from our analyses by combining the properties of both space and time into one value, thus negating the effects of their shared variation present in the Space-Time model. That allows us to avoid using measured values of either parameter, since the very nature of measurement introduces units, thus complicating the issue.

A point of Ratio Space is a point of expansion / growth potential. From the perspective we live with every day in the Space-Time model, that potential is used either to expand space or to grow time. Even if a person were present where a shift occurred between the expansion potentials of space and time, they would not notice, because their physical body is subject to the effects of that shift. As Relativity makes clear, we only notice the difference when the observer and the observed event exist in different reference frames. For instance, if the expansion of space is displaced by the presence of mass, then time near that mass must grow (take longer) in order to maintain the balance between space and time expansions. Increasing the concentration of mass in a region of space adds further resistance to spatial expansion in that region; however, the same quanta of expansion potential must still be expressed. Therefore, a greater percentage of the potential is shifted from spatial expansion to temporal growth.

For example: either a minute-long event, occurring in a reference frame near a high concentration of mass, as timed on a clock in that same frame, takes longer than a minute to occur according to the clock of a distant observer; or, the distant observer and his clock are simply too fast as seen by another observer participating in the timed event. Both possibilities evidence the same fact. Time in the reference frame near the high concentration of mass has slowed (which is the same as saying each unit of time has increased in length). But why does this happen? That will be explored later. Before that, though, we must see why quantum fuzziness exists.

Numbering systems are at the root of it all. Since √5 is the number base for Ratio Space, it must be maintained in all of Ratio Space in order for the system to be continuous. “All of Ratio Space” necessarily means both space and time. Both are merely different manifestations of the same potential. The Ratio Space model coalesces them into a single, purely numerical value requiring no physical reference. As such, the terms ‘before’ and ‘after’ have no context when talking about the expansion of a Ratio Space point potential. You cannot impose a unit of spatial measure upon the expansion without invoking an associated unit of temporal measure, and vice versa. That is the case with the Hubble Expansion constant. Its value shows how many meters of space are added to a given length of space, also measured in meters. Unfortunately, any value you get for an answer is meaningless unless you know how long it took to attain that value. As such, the Hubble Expansion constant is measured in units of (meters per second) per meter. Units which reduce to:

(Meters/Second) / Meters = (Meters/Second) × (1/Meters) = 1/Seconds
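The unit reduction can be illustrated with rough numbers. A sketch; H0 ≈ 69 (km/s)/Mpc and 1 Mpc ≈ 3.086 × 10^22 m are the approximate values used later in the text:

```python
# Approximate values only, for illustrating the unit reduction:
H0_km_per_s_per_Mpc = 69.0        # Hubble constant, (km/s)/Mpc
Mpc_in_meters = 3.086e22          # one Mega-parsec in meters

# (meters/second) per meter reduces to 1/seconds:
H0_per_second = (H0_km_per_s_per_Mpc * 1000.0) / Mpc_in_meters
print(H0_per_second)              # ~2.24e-18 per second (a frequency)
```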

We are left with units of time, frequency to be precise. But Relativity tells us that time is dependent on space. How can measuring the expansion of space in reference to time have any validity when the passage of time is dependent on that very same space? There is no independent physical reference to use. Luckily, a physical reference is not needed in Ratio Space. A numerical one will suffice. Just as the real number line is referenced to the value of zero, the expansion in Ratio Space is also referenced to zero; although, due to the nature of Ratio Space, the mathematical rules are quite different.

Logically, in counting the values on any number line, moving from one value to the next implies either the passage of some amount of time or at least a progression in quantity of whatever is being counted, whether it be negative or positive. Either way, change itself is the essence of counting. Firstly, in Ratio Space there is only one mathematical operation: addition. It is repeated addition of ratio that represents expanding space. Since it is a ratio addition, it applies equally on all scales. The ratio step that is repeatedly added is the difference that was previously seen in the rounded value of the primary ratio:

4.442981157569567245468 = q (designated q for the quantum step value)

Although this is an additive step in the Ratio Space model, the origin of the ratio causes it to have a different manifestation in the Space-Time model. We currently view space and time as two separate, but interlinked, properties of physical reality. This is because our perceptive abilities are also dependent on the flow of time. Therefore, instead of realizing the recursive addition of the digital quantum ratio step, we perceive a time-dependent expansion of the void of space. As such, we are left with the one singular truth that Einstein gave us: everything is indeed relative. Space depends upon time; meters depend upon seconds, and vice versa. To the point: there is no measured physical parameter of any unit which does not depend on some other parameter with separate units.


But again, measured units introduce the before / after problem. Unit-represented values drawn from any number line which includes zero must conform to the zero identity, which states:

X + 0 = X

In the reality disclosed by the Ratio Space model, we see that there can be no zero identity in a changing space. The constituent ratios of the primary ratio are representative of two non-zero properties: the void of space and the expansion of that void. Hence, in the Ratio Space model, the role of the zero identity is taken by a minimum value identity. That minimum value is the ratio q. Though its application is similar to the zero identity, its implementation is one of logic as opposed to being numeric. The minimum value identity states:

X + q = X

The logic: if there exists a ratio X, representative of an interdependence between two parameters (space and time), then when you add q to X you must redefine the interdependence. If that is not done, the continuum in which that interdependence applies must change. And that is exactly what we see with our expanding Space-Time Universe. Take an example from what we know about the Hubble expansion. We know that a Mega-parsec (just over 3 × 10^22 meters) of space will expand out by about 69,000 meters every second. Then, for the expansion coming in the next second, do we redefine the meter to include that newly added space into the same numeric value that was used for length measure in the previous second? No, of course we don’t. That being the case, one might conclude that every addition of the quantum ratio step merely involves the addition of more space, be it defined by either of the two models covered here. However, that is not a complete picture of reality.

Recall from earlier that it was determined that Ratio Space has a base of √5. At the time it may have been a bit confusing that the difference found between the primary ratio and its rounded value at the 29th decimal place was just less than 2√5. So why is the value of 2√5 not considered to be the base for Ratio Space? This will become very clear soon enough, as keeping it the way it is correlates more effectively to the physical reality of a digital continuum. In Ratio Space, with its √5 base, you have:

X + 2√5 = X

This must remain true in order to preserve continuity of the number system. But we have already seen that q is actually slightly less than 2√5; both occur at the 29th decimal place:

(q = 4.442981157569567245468) ≈ (4.472135954999579392818 = 2√5)

And the minimum value identity for Ratio Space is:

X + q = X

Both cannot be true unless there is some sort of fudge factor in the mix. Well, there is: it is the ‘almost’ in Quantum Physics, the fuzziness embodied in the Fine Structure Constant (α). This is its representation in Ratio Space:

(X + q + 4α) = (X + 2√5) = X

Where:

α = 0.007288699357503

Focusing on numbers only, the equation can be rewritten without the X, since it bears the same meaning for all elements of the equation. Hence we are left with:

q + 4α = 2√5

And:

4α = 2√5 − q

Or:

2α = √5 − (q/2)
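These relations can be checked numerically. A sketch; q is taken verbatim from the value given earlier, and α is computed from it rather than looked up:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

# The quantum step value q as given in the text:
q = Decimal("4.442981157569567245468")

# alpha is the ratio additive needed so that q + 4*alpha = 2*sqrt(5):
alpha = (2 * Decimal(5).sqrt() - q) / 4
print(alpha)                      # 0.007288699357503...

# The equivalent form 2*alpha = sqrt(5) - q/2:
lhs = 2 * alpha
rhs = Decimal(5).sqrt() - q / 2
assert abs(lhs - rhs) < Decimal("1E-45")
```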

So how does this relate the Fine Structure Constant α to any structure? α is the extra ratio additive required for the Ratio Space number system, and by extension any system of unit measure employed in Space-Time, to maintain continuity. It is indicative of the structure of the electron itself, which is why α remains the same regardless of which atomic energy level the electron occupies or which level it transitions to within the atom. Think about this: in quantum mechanics the electron is envisioned as a cloud of the probability of its point location about the nucleus of the atom. The density of that probability cloud decreases as distance from the nucleus increases. It is simply a mathematical model used to demonstrate that the greatest possibility of finding a point particle electron exists closer to the nucleus rather than farther away. The reality that the Ratio Space model allows us to see is that the electron actually is a cloud, a cloud of charge. It is a smudge spread across space. That makes it impossible to pinpoint any particulate form. However, from our perspective, that smudge is also spread across time. Thus we have the effects described by the Heisenberg Uncertainty Principle. The tools available for use in Ratio Space allow us to see the mechanics of it all. In fact, it helps us to understand most everything about the electron.


For an orbital electron of the atom, regardless of where it is, we know it has motion. It is actually an oscillation of the electron / charge cloud caused by the very expansion of space-time. The electron occupies precisely half of the Ratio Space quantum, regardless of how that half is measured; recall the q/2 factor in the last equation. This simple fact gives rise to the Pauli Exclusion Principle; indeed, it is the very reason that only two electrons may occupy any one energy level of an atom. The Ratio Space value of q defines the properties of atomic electron shells and subshells. When one energy level is full, only electrons with higher energy content can fill higher energy levels. Electrons can also transition from one atomic energy level to another. With either transition we detect the slight aberration of α. Why then does the last equation associate 2α with the q/2 electron energy content?

We are biased due to our linear perception of time. We start with the ‘before’ and

analyze the ‘after.’ But remember, the electron is spread across time as well as space.

Its occupied q/2 of space-time expansion is at the center of the available √5. Each

oscillation of the electron has one α at the beginning and one at the end. Therefore, in

making an energy level jump, which takes only one oscillation / expansion step, there is

one α at the beginning of the jump, (past.) The other α is at the end of the jump,

(present.) The only one we notice is the one closest in time to us, what we perceive as

the present. Take note of the use of the term ‘occupy’ in describing the electron’s

relation to its Ratio Space half quantum. This will be very important in later sections. For

now ruminate on this. If the Ratio Space quantum of expansion is representative

specifically of the expansion of space and time; then what would happen if something


else were to occupy that half quantum of Ratio Space, displacing that expansion

potential?

Presently though, you might be wondering: if all of this is true, then how does

the previously discussed value for the Ratio Space quantum (q), relate to the value of

Planck’s constant (h)? It is a fact that the Planck constant governs the quantum

mechanical interactions of energy and matter. Surely there must be some relation

between the two. Actually, there is no need to derive any relation. The two are, in fact,

one and the same. Planck’s constant h represents the value of the quantum in Space-

Time and q represents the very same quantum in Ratio Space. Think about how we got

q. To get values in Ratio Space we performed this operation with ratios:

GR/SR = ((√5 + 1) / 2) / (√2 + 1)

Then, we found q at the 29th decimal place of the quotient. So you simply take q

out of Ratio Space in the same manner. Just perform the reverse operation as such:

q × 1 / (GR/SR)


4.442981157569567245468 × 1 / (((√5 + 1) / 2) / (√2 + 1)) = 6.629221291117491396275 ≈ h × 10^34

Obviously that is not exactly the mantissa for the currently accepted value of the

Planck constant. But remember, we have that ‘almost’ factor and the shifting of

expansion potential between space and time. Both of these lend to preventing any of

the physical constants from maintaining a truly constant value. They are all forced to

fluctuate within a certain range determined by the nature of the properties they

represent. Furthermore, the values in that last equation are displayed at the 29th decimal

place. But the Planck constant is at the 34th decimal place. These two aspects are not

mutually exclusive. It just depends on what you are using the values for.
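The conversion above is easy to check numerically; a minimal sketch in Python, with the long decimal for q copied from the text:

```python
from math import sqrt

# Primary ratio of Ratio Space: Golden Ratio divided by the Silver Ratio
gr_sr = ((sqrt(5) + 1) / 2) / (sqrt(2) + 1)

# Quantum mantissa q, as given in the text (found at the 29th decimal place)
q = 4.442981157569567245468

# Reverse the Ratio Space operation: divide q by the primary ratio
h_mantissa = q / gr_sr

print(gr_sr)       # ≈ 0.6702116225...
print(h_mantissa)  # ≈ 6.6292212911..., near Planck's mantissa 6.626...
```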

Remember, we have seen that the Planck constant is nothing more than a ratio;

as it is just a different representation of the Ratio Space quantum. That quantum is at

the 29th decimal place of the primary ratio of Ratio Space. But, since they are just ratios,

they hold true at any scale. It’s simply that the additive quantum will always be 29

decimal places below the first digits of the primary ratio. So the zero reference can be

set wherever you’d like. The actual decimal position only matters when the ratios are

used to describe real physical phenomena. This is how Max Planck came up with his

value at the 34th decimal place. It is the scale value required to resolve the graph of the

spectrum of black body radiation. No other value will suffice. The evidence of this

concept is visible in the wide scale separation between the basic physical constants,


such as the 19 orders of magnitude that exist between the speed of light (c) and the

gravitational constant (G); even though they both come directly from the same number.

Light and Dark

In further analyzing the concept of scale separation, consider this; space-time

expands but the things it contains do not. Space grows larger but stars don’t. Our Earth

stays the same size and atoms do not grow as their structure is governed by the

quantum rules that set the basis of all mass and matter. The latter is obviously true, so

by extension, the former must also be true. At larger scales, gravity’s effects take

precedence to hold everything together. So why does space itself grow but the things in

it do not? Let’s return to something we were just discussing; the speed of light or EM

radiation. It does not grow / change either. That is a simple point of deduction

overshadowed by the obvious quantum connection to EM radiation via the Planck

constant. But look at it from your everyday reference point, a scale of meters.

The speed of light (c) is 299792458 m/s. However, that number is a bit deceptive

about its origin because in actuality it is a composite of other values. The largest part of

it is a ratio:

GR/SR × 2√5 × 10^8 = (((√5 + 1) / 2) / (√2 + 1)) × 2√5 × 10^8 = 299727749.453406488

The remainder of the value is an additive that originates from the EM energy’s

travel across space-time and propagation from its origin at the smallest scale.

299792458 − 299727749.453406488 = 64708.546593512
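This decomposition can be reproduced in a few lines; a sketch, where 299792458 m/s is the defined value of c:

```python
from math import sqrt

gr_sr = ((sqrt(5) + 1) / 2) / (sqrt(2) + 1)  # primary ratio of Ratio Space

# Ratio part of the light speed, per the equation above
c_ratio = gr_sr * 2 * sqrt(5) * 1e8

# The additive remainder attributed to propagation
remainder = 299792458 - c_ratio

print(c_ratio)    # ≈ 299727749.45
print(remainder)  # ≈ 64708.55
```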


Take that remainder and divide it by four times the α gap as it occurs at the 29th

decimal of the primary ratio. Recall there is one gap at the front and one gap at the back

of every half quantum q/2.

64708.546593512 / (4 × 0.007288699357503 × 10^−29) = 2.219481948 × 10^35

What you get is a mantissa that, when removed from Ratio Space, is just less

than half the mantissa of the Planck constant. More importantly though is the associated

order of magnitude, 10^35. This indicates how many orders of magnitude the EM radiation

has to propagate through. The digits of the Planck constant, (6.626…) are found 34

orders of magnitude below 1. However, since the primary ratio of Ratio Space has a

value less than one, what we see as the one’s place in the Space-Time model is

actually the ten’s place of the Ratio Space model. That makes it 34 + 1 = 35 orders of

magnitude which light must propagate through.
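The division itself can be checked directly; a sketch with both values copied from the text above:

```python
# Remainder of the light-speed decomposition (from the earlier equation)
remainder = 64708.546593512

# One α gap as it occurs at the 29th decimal of the primary ratio
alpha_gap = 0.007288699357503e-29

# Four gaps accompany each full quantum of expansion
orders = remainder / (4 * alpha_gap)
print(orders)  # ≈ 2.2195e35 — the 35 orders of magnitude of propagation
```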

You may have noticed that the quotient of the last equation was only half of a

Ratio Space quantum q/2. But the divisor of 4α gaps is associated with a full quantum

of Ratio Space expansion. This is so because, if you’ll recall, even though light is

emitted and absorbed by orbital atomic electrons which each consist of q/2 of

expansion, the energy for the light wave comes from the energy jump made by the

electron. That jump from one orbital energy level to another must always be a multiple

of q.

With light then, what you have is an energy function generated on the quantum

level and pushed along through space-time by continuous quantum expansion;


essentially riding the expansion wave of the space through which it travels. With that in

mind, what push does expanding space-time exert on all the surrounding space-time?

Starting with the same quantum that defines EM velocity and in the same manner, think

about this; light originates at one point and travels to another point. So each quantum of

‘push’ used to perpetuate the propagation of the light wave comes from a point potential

of Ratio Space and is all directed along the line of travel of the light wave. If there is no

light wave to push, then what the point potential pushes on is all the other point

potentials around it. Along any chosen force line with the subject point at the line’s

midpoint, half of the available force is focused in one direction and the other half is

focused in the opposite direction. This is the Dark Energy that pushes space-time apart.

This push of Dark Energy is easily quantified into something more recognizable by

continuing the same example.

Moving outward along that force line, the half quantum of expansion force

impinges on the next point potential. That next point potential then feels only a half quantum

of force from any one direction. Now, instead of that force being transitioned into the

propagation of an energy wave as with light, it simply stays a quantum force. When measured at any point in space, coming from any direction, it will have the same value.

It is twice that force that drives light to its ratio velocity. Therefore, its ratio value is the

inverse of half of the ratio velocity of light. Obviously we can ignore the additive that

gives the full velocity of light since we’re just concerned with the push force that is felt

and nothing is actually propagating. Let’s revisit the light velocity equation with a slight

modification to get answers as referenced to Ratio Space.


GR/SR × 2√5 × 10^9 = (((√5 + 1) / 2) / (√2 + 1)) × 2√5 × 10^9 = 2997277494.53406488

You’ll notice that this time we multiplied by 10^9 instead of 10^8 since values

attained through operations in the Space-Time model are only one tenth of the actual

value as it originates in Ratio Space. Thus the answer we now get is 10 times the ratio

velocity of light. Now take that answer, divide it by 2 and take the reciprocal to get the

value of the push of Dark Energy.

2997277494.53406488 / 2 = 1498638747.26703244

1 / 1498638747.26703244 = 0.0000000006672722174 = 10G
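The two steps above can be verified numerically; a minimal sketch:

```python
from math import sqrt

gr_sr = ((sqrt(5) + 1) / 2) / (sqrt(2) + 1)  # primary ratio of Ratio Space

# Ratio velocity of light referenced to Ratio Space (the 10^9 scale)
c_ratio_rs = gr_sr * 2 * sqrt(5) * 1e9

# Halve it and take the reciprocal, as in the two steps above
dark_push = 1 / (c_ratio_rs / 2)
print(dark_push)  # ≈ 6.6727e-10, ten times the gravitational constant G
```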

As you can see, the previous operations yield 10 times the gravitational force

constant G. In short, Gravity and Dark Energy are one and the same. The next question

then should be, why does one repel and the other attract when they are the same

force? Gravitational attraction is nothing more than a side effect induced by the

presence of mass and its warping of space-time in accordance with General Relativity.

It is merely our flawed understanding of gravity that causes us to misinterpret the

effects. The root cause of gravity, the repulsion of an expanding space-time, and every effect that stems from them is something we are all very familiar with already:

electric charge. But what exactly is electric charge?


Charge, Mass and Matter

Electric charge is the most basic of the natural forces. It is the one force from

which all the others originate. In reality it is the only force of nature. One of the most

vital aspects that we understand about electric charge is the tendency of opposite

charges to seek each other out and join, thus neutralizing themselves. However, if

charge is so rudimentary that it makes up the very fabric of space, then its neutralization

does not also imply its absence. The homogenous mixture of positive and negative

charge in the void of space is the one factor that prevents us from detecting it.

Therefore, we see space as a neutral reference when measuring the charges that we

can detect such as those carried by electrons, positrons and quarks. As for the question

of what charge actually is; that we may never know with any certainty. Furthermore, in

addition to being a moot point, it might also be completely irrelevant as long as we

understand its effects.

That being said, how can charge drive the expansion if, overall, space is neutral?

As with everything else covered thus far, it comes down to scale. At the very smallest

scale, the mixture of positive and negative charge must have an order or structure.

Actually, you’ve seen that structure already. It is the digital representation for space that

is embodied within the Silver Ratio analogy; depicted by the associated spheres. In

order to fill space as efficiently as possible when using only two constituents, you must

utilize two interwoven lattice structures, one with spheres of the first kind having a

diameter of 1 and the other lattice consisting of spheres of the second kind having a

diameter of (√2 + 1). The two constituents are positive and negative charge. One type of

charge fills the large spherical voids in this interwoven double lattice matrix and the


other type of charge fills the smaller spherical voids. Whichever positions the positives

and negatives actually occupy determines that the matter in our Universe is normal

matter. Had their positions been reversed, our Universe would be one of antimatter.

It remains to be seen how closely such a model correlates to physical reality.

Although, the implications that arise from it bear striking similarities to real, measurable

properties. Focus, for a moment, on the geometry that exists in this charge matrix

model. Since the opposite charges are evenly distributed, there is no net charge.

Notwithstanding, this does not mean that there is no net force. Though evenly

distributed, they are not evenly sized. The small spheres are attracted to the large

spheres and vice versa. This fact in itself is why space doesn’t just rip apart. The like

charges also repel with equal strength. However, if you recall from the initial analogy,

the small spheres aren’t actually touching each other. But the larger spheres do touch.

Using an isosceles right triangle as a reference in application to the geometry of the charge matrix, one of

the small spheres would be centered at the vertex of the right angle and two of the larger spheres are

then centered at the other two vertices and all spheres are touching.


The center to center distance of like spheres, (repulsive force) is larger by a

factor of exactly √2 than the center to center distance of unlike spheres, (attractive

force). That is in line with something I had noticed a long time ago.
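The √2 factor follows from the sphere geometry alone. A sketch, assuming diameters of 1 and √2 + 1 with the large spheres touching one another and each small sphere touching its large neighbors:

```python
from math import sqrt

d_small = 1.0          # diameter of the small spheres
d_large = sqrt(2) + 1  # diameter of the large spheres (Silver Ratio)

# Touching large spheres: center-to-center = one large diameter
like_dist = d_large
# Touching small-large pair: center-to-center = sum of the two radii
unlike_dist = (d_small + d_large) / 2

print(like_dist / unlike_dist)  # ≈ 1.41421..., exactly √2
```

The ratio is exact: 2(√2 + 1) / (√2 + 2) simplifies to 2/√2 = √2.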

The following is a description of a mathematical analysis which I have performed

several times previously, but due to its length I will leave it to the reader to attempt if

they wish. Take the median accepted value for the Hubble expansion constant, about

69,000 (meters per second) per Mega-parsec of distance. You’ll also need to know how

many atoms of atomic helium, lined up in a row side by side, will fit into that distance of

a Mega-parsec. The number is astronomical to say the least. For that, I divided the

Mega-parsec by the diameter of a helium atom, (approximately 2π × 10^−11 meters).

Atomic helium was chosen as a reference because it has the smallest and most

complete atomic structure of all the elements. It has only one full electron energy level

and it is the lowest possible energy level. And since an electron occupies a volume of

Ratio Space of exactly q/2, then one electron of the two present should also take up

exactly half of the diameter of the helium atom. To continue, I divided the 69,000 meters

by the number of helium atoms in the Mega-parsec to find out how much spatial

expansion occurs over the diameter of a single helium atom. The number I got was very

close to √2 × 10^−28 (meters per second). If you then divide the diameter of the helium atom (2π × 10^−11) by that √2 × 10^−28 added to it from expansion, you get a number which is

almost exactly the ratio value of the expansion quantum multiplied by 10^46:

4.44288293816 × 10^17 ≈ (4.44298115757 × 10^−29) × 10^46

Remember this example because we will return to it before this is done.
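The analysis can be repeated numerically. A sketch, where the megaparsec length in meters (about 3.0857 × 10^22) is an assumed conversion not stated in the text:

```python
from math import sqrt, pi

H0 = 69000.0           # Hubble constant: (m/s) per megaparsec
MPC = 3.0857e22        # one megaparsec in meters (assumed conversion)
d_he = 2 * pi * 1e-11  # helium atom diameter used in the text, meters

atoms_per_mpc = MPC / d_he
expansion_per_atom = H0 / atoms_per_mpc
print(expansion_per_atom)  # ≈ 1.40e-28, close to √2 × 10^-28

# Using √2 × 10^-28 exactly, as the text does:
ratio_check = d_he / (sqrt(2) * 1e-28)
print(ratio_check)         # ≈ 4.44288e17, near q × 10^46
```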


The conclusion is that even though the homogeneity of charge distribution in

space causes it to appear charge neutral, the sphere packing structure of the charge

matrix of space will never allow the internal forces to equalize. Thus, space will never

stop expanding. This is exactly why it is impossible to determine a precise size for

atoms such as with helium. The electrons that make up their outer volume are in a state

of constant flux between the space-time expansions which they occupy and maintaining

continuity of the Ratio Space numbering system as that expansion progresses. They

expand with the Ratio Space to match the √5 base and then collapse back down to their

previous level consistent with their q/2 expansion displacement in the newly expanded

space. Space-Time then expands again and the cycle repeats endlessly. This causes

the radius of helium atoms to fluctuate between just greater than (π × 10^−11) and (√10 × 10^−11) meters, minus some slight overlap that occurs in the middle about the nucleus.

Using the Ratio Space model and its associated charge matrix, we can also

explain other properties of the electron such as its mass (me), and the elementary

charge (e). As mentioned before, the electron occupies a half quantum of Ratio Space

expansion potential. That being the case, when the potential present in that half

quantum of space tries to expand it can’t because the energy of the electron has

already taken up the space-time expansion allotted for it. Again, making use of the

charge matrix model, the electron has displaced half a quantum of positive Ratio Space

expansion. If indeed the large spheres of the charge matrix harbor the positive charge

then that makes positive charge the driving force of the expansion. It is the expansion of

those larger spheres which has been displaced by the electron’s energy field.


An electron cloud exists in a particular volume of Ratio Space. Inside that volume

the negative charge matrix expansion has been boosted by q/2 by the electron energy.

That forced q/2 of positive charge matrix expansion to be displaced from the volume.

The combined effect is that within the volume, negative charge is q/2 greater than the

neutral reference of normal space. And the positive charge is q/2 less than the neutral

normal space. So this volume is now missing q/2 of positive, making it q/2 negative. It also has an extra q/2 of negative, making a total of −q/2 + −q/2 = −q of charge matrix

expansion. So how does this associate to the numerical value that we measure for

elementary charge?

Let’s take the primary ratio of Ratio Space and add that whole quantum, (q) of

charge difference to it:

0.67021162252084234219570429995557018842430432754532 + (4.442981157569567245468 × 10^−29) = 0.6702116225208423421957043

Next, we need to find out how this modified primary ratio relates to the neutral

space around it. Divide out the Golden Ratio expansion:

0.6702116225208423421957043 / ((√5 + 1) / 2) = 0.4142135623730950488016887242371572122372

Then reciprocate the quotient and subtract it from the Silver Ratio representation

of space volume:

1 / 0.4142135623730950488016887242371572122372 = 2.4142135623730950488016887240496545190798



(√2 + 1) − 2.4142135623730950488016887240496545190798 = 1.600435595 × 10^−28

When this value is multiplied by the same 109 factor as was done with the other

Ratio Space values for light and gravity, we get a number that is just less than the

measured value for the elementary electric charge, e:

1.600435595 × 10^−28 × 10^9 = 1.600435595 × 10^−19 < e

If you then perform the same series of operations but this time include with the

additive expansion quantum the 2α factor associated with the half quantum of Ratio

Space occupied by the electron, you get a number that is slightly more than e:

0.67021162252084234219570429995557018842430432754532 + (4.442981157569567245468 × 10^−29) + (2 × 0.007288699357503 × 10^−29) = 0.67021162252084234219570430000014577398715


0.67021162252084234219570430000014577398715 / ((√5 + 1) / 2) = 0.4142135623730950488016887242372473055159


1 / 0.4142135623730950488016887242372473055159 = 2.4142135623730950488016887240491294169703



(√2 + 1) − 2.4142135623730950488016887240491294169703 = 1.605686616 × 10^−28


1.605686616 × 10^−28 × 10^9 = 1.605686616 × 10^−19 > e

Taking the average of the two values yields a value for the elementary charge

which matches the currently accepted value to within less than a tenth of a percent:

(1.605686616 × 10^−19 + 1.600435595 × 10^−19) / 2 = 1.6030611055 × 10^−19 ≈ e

Remember that this value is negative as it has displaced positive charge

expansion in the charge matrix.

−1.6030611055 × 10^−19 ≈ e

If it were an opposite energy field which had caused the displacement of

expansion, it would have displaced q/2 of negative charge expansion, thus leading to a

charge of positive q with approximately the same numerical value. This is indicative of a

positron. It is also exactly why electrons and positrons annihilate each other when they

meet. It is a re-neutralizing of the charge matrix.
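Because the subtractions above cancel roughly 28 decimal places, ordinary floating point cannot check them; a sketch using Python's decimal module reproduces the steps at full precision:

```python
from decimal import Decimal, getcontext

# Generous precision: the subtractions cancel ~28 decimal places
getcontext().prec = 60

golden = (Decimal(5).sqrt() + 1) / 2  # Golden Ratio
silver = Decimal(2).sqrt() + 1        # Silver Ratio
primary = golden / silver             # primary ratio of Ratio Space

q = Decimal("4.442981157569567245468E-29")
alpha = Decimal("0.007288699357503E-29")

def displaced(extra):
    """Silver Ratio minus the reciprocal of ((primary + extra) / golden)."""
    return silver - golden / (primary + extra)

low = displaced(q)               # ≈ 1.600435595e-28, just under e's scale
high = displaced(q + 2 * alpha)  # ≈ 1.605686616e-28, just over

e_est = (low + high) / 2 * Decimal(10) ** 9
print(e_est)  # ≈ 1.60306e-19, within ~0.1% of the measured e
```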


Determining the electron’s mass (me) is a more straightforward operation.

The electron occupies q/2 of Ratio Space. At that scale, 29 orders of magnitude below

our standard units of reference, its value is:

4.44298115757 / 2 = 2.22149057878

Simply reciprocate that value and then cube it:

(1 / 2.22149057878)^3 = 0.09121506516

This value has two more decimal places than the starting ratio value of q/2. That

makes it 31 orders of magnitude below the reference units in our systems of measure.

Nonetheless, it is still a ratio even though it represents the inverse of a volume. The

value attained matches the currently accepted value for the mass of the electron in

kilograms to within just over a tenth of a percent.

9.121506516 × 10^−31 ≈ me
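The reciprocal-cube arithmetic can be checked directly; a sketch, where restoring the 10^−29 scale follows the text's reasoning about decimal places:

```python
q = 4.44298115757  # quantum mantissa (at the 10^-29 scale)
half_q = q / 2     # the electron occupies q/2 of Ratio Space

m_ratio = (1 / half_q) ** 3  # cubed reciprocal — an inverse volume
m_e_est = m_ratio * 1e-29    # restore the 29-orders scale reference

print(m_ratio)  # ≈ 0.0912150..., two decimal places below the q/2 ratio
print(m_e_est)  # ≈ 9.12e-31 kg, within ~0.1% of the measured electron mass
```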

The important lesson to learn from this is something that I’ve implied with my

selection of wording several times previously. Many times I have used variations of the

words, “occupy” and “displace.” This is the primary origin of mass. Leptons exist

because their energy displaces the Ratio Space expansion that would normally occur.

From the perspective of the Space-Time model, some of the displacement is in space

and the remainder is in time. The denser the mass, or space expansion displacement, in a region becomes, the larger the percentage of the allotted expansion potential that must be shifted into expanding time. This is the time dilation effect


described by General Relativity. The displaced Ratio Space expansion is forced

outward, away from the center of the mass. It then in turn displaces other Ratio Space

expansion. This continued cycle of expansion displacement continues further and

further out away from the center of mass, all the while decreasing in magnitude in

accordance with an inverse radius squared relation. Again, as described by General

Relativity, this is the actual warping of space-time that we witness when starlight is bent

as it just grazes the surface of the Sun. This will be covered in greater detail in the next

section.

Returning to the first generation of leptons and their associated neutrinos, we

saw how the electron’s mass originates. Positron mass is the same, simply coming from

negative charge matrix expansion displacement. Both of those first generation leptons

occupy q/2 of Ratio Space. Both of them also have their associated 2α adjustment from

the Fine Structure Constant. As Ratio Space expands, within its √5 number base a

lepton’s energy is shifting with every ‘tick’ of quantum expansion. The lepton’s energy

ratio is zero referenced in the old space. With the additive tick of Ratio Space expansion

the lepton’s energy must shift its zero reference to the newly expanded space. This

shifting is the oscillation of the lepton’s space-time wave function. When the lepton’s q/2

energy shifts toward the newly expanded space, it reaches the midpoint. This leaves a

newly opened α gap at the beginning of the oscillation and another α gap, which is just

starting to close, at the ending of the oscillation.

Essentially then, one α is always old and fleeting and the other is new and very

real. If a lepton is absorbed or decays, this real α ratio, a ‘footprint’ of the lepton still


remains. A good analogy: it’s like when you remove a piece of furniture from your living

room but you can still see its imprints in the carpet. At least for the first generation

lepton neutrinos the math is very easy. The only difference is the property of inverse

volume. The mass for a real particle is the cubed reciprocal of the ratio for its charge matrix

expansion displacement. However, with a neutrino there is no actual displacement of

the charge matrix so you do not have to reciprocate the value. This is how the ratio

displacement for an electron neutrino compares to the 2.2 eV mass derived for it in the

Standard Model of particle physics:

(α)^3 × 10^−29 × (c^2 / e) ≈ νe

(0.00728873)^3 × 10^−29 × (299792458^2 / (1.6021766 × 10^−19)) = 2.1721
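The same computation in code; a sketch, with the constants taken from the equation above:

```python
alpha = 0.00728873  # Fine Structure Constant, as used in the text
c = 299792458       # speed of light, m/s
e = 1.6021766e-19   # elementary charge, C

nu_e_mass = alpha ** 3 * 1e-29 * (c ** 2 / e)
print(nu_e_mass)  # ≈ 2.1721, close to the ~2.2 eV electron-neutrino figure
```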

For second and third generation leptons the inverse volume property shows that

their greater mass means they actually have less ratio volume than an electron. This

can only be so because of two possibilities. Think back to the example of the Hubble

expansion for the helium atom. The ratio that came from that was linear length in the

numerator over additive length from expansion in the denominator. For that ratio

quotient to decrease either you get more expansion from the same length of space or

you get the same expansion from a shorter length of space. So although their ratio

volume is variable, all leptons contain q/2 worth of charge energy as evidenced by the

fact that all generations possess the same electric charge. However, having a greater

mass due to smaller ratio volume is not a stable state. The more massive second and


third generation leptons simply expand / decay to the most stable state where their

displacement ratio matches the expansion ratio of Ratio Space; an electron. The three

generations of quarks also adhere to a similar decay progression.

However, singular quarks possess less than a half quantum of charge energy. As

such, when a first generation quark decays it doesn’t cease to exist, it merely reverts to

its opposing counterpart quark; up becomes down and down becomes up. In addition,

singular quark mass seems to be a composite value; similar to how a baryon’s mass is

additive of the individual quarks plus their binding energy. Most likely the greater part of

it is just basic charge matrix expansion displacement; negative or positive, respectively,

depending on whether it is an up quark or a down quark.

As you step up to more and more massive particles this means you need a

smaller and smaller displacement ratio. My guess is that this fact can only indicate one

thing. The actual structure of the charge matrix is changing. The number in the

denominator of the combined model’s primary ratio is shrinking. It started with the Silver

Ratio but it is diminishing as the lithium chloride structure of the charge matrix moves

towards a cesium chloride structure. This change of ratio and the resultant shifting of

the internal forces of the charge matrix together make a full understanding of quark

masses a bit more elusive. The overall effect is that greater mass is indicative of greater

ratio space expansion displacement in the immediate vicinity of the mass. That being

said, there are yet several other good clues about how quarks fulfill their role as the

primary constituents of atomic matter.


There are two proportions to be aware of in the first generation of quarks. The

first is their charge relation. Considering the absolute values of their charge, that of the

up quark is exactly twice that of the down quark. Inversely, as is appropriate with the

property of mass, the Standard Model mass of the up quark is approximately half that of

the down quark. With that in mind, recall the interwoven double lattice matrix model

example we used in discussing the GR/SR Ratio Space model. It had lattices of small

blue spheres and large red spheres with comparative diameters in the Silver Ratio.

Envision this then; if you had a small section of that matrix model centered on one of the

large red spheres, then that central red sphere would be touching twelve other red

spheres and six of the small blue spheres. That also is a proportion of two to one, just

like the mass and charge properties of the first generation quarks. There is another

aspect about the charge relation between the up and down quarks that provides some

further insight about how they interact.

Actually, ‘interact’ may be too divisive a term. It implies a separation where one

might not even exist. The difference between the up and down quarks may be more

akin to the relation between Dr. Henry Jekyll and Mr. Edward Hyde. Both exist

simultaneously as a single unbalanced entity whose exhibited properties are simply

switched back and forth from one to the other; much like a teeter-totter. Instead of our

current perception of them as separate particles; it is more correct to think of them as

simply two different manifestations of the same energy function. That is even more

logical when you think about the process of beta decay in atomic nuclei. As opposed to

particle decay in leptons where a neutrino is emitted, the more massive lepton ceases

to exist and the less massive one takes its place, with a reversal of the process being


highly unlikely; reversal of beta decay (β) with quarks is easily achievable if you could

disregard the effects of baryonic balance in the subject nuclei. The ‘decay’ part of beta

decay is the changing of a neutron into a proton by way of converting one of the

neutron’s constituent down quarks into an up quark. Though the overall effect is baryon

decay, it is a simple quark conversion that is the heart of it.

That conversion is so basic that there are actually two ways to do it and two ways

to undo it. The down quark could convert to an up by either emitting an electron or

absorbing a positron. The resultant up quark could then be converted back to a down by

the opposite of the same two means; either emitting a positron or absorbing an electron.

Although the two quarks themselves each possess less than q/2 of charge, their back

and forth conversions involve a cyclic process with particles that do possess the q/2 of

charge, which, after emission, will grow in ratio volume to displace q/2 of charge matrix

expansion. This then shows that what we detect as separate up and down quarks are

actually just two embodiments of the same thing. One might even argue that when the

ups and downs are observed separately, they may not even be deserving of the title,

“particle.” Below are some Ratio Space charge matrix displacement diagrams to help

see the relation between the first generation of leptons and quarks.


Neutral space contains equal amounts of positive and negative charge matrix expansion; q/2 on either

side of the zero reference.

The presence of an electron displaces q/2 of positive charge matrix expansion.


The presence of a positron displaces q/2 of negative charge matrix expansion.

Down quark manifestation displaces q/6 of positive charge matrix expansion resulting in q/3 of negative

charge which cannot be neutralized. Emission of an electron would remove q/2 of positive displacement

thus shifting the energy function q/2 to the positive and changing the down quark to an up quark.

Up quark manifestation displaces 2q/6 of negative charge matrix expansion resulting in 4q/6 of positive

charge which cannot be neutralized. Emission of a positron would remove q/2 of negative displacement

thus shifting the energy function q/2 to the negative and changing the up quark to a down quark.


Up quark negative displacement combined with the down quark positive displacement shows the q/2

separation between them which initiates / absorbs first generation leptons.

After studying the previous diagrams, perhaps it is easier to see why a down

quark conversion to an up results in the emission of an electron plus a positron neutrino

but not an electron neutrino. The electron’s neutrino is a part of the electron itself. They

can never be separated unless the electron is reabsorbed by another up / down quark

system. The positron neutrino was emitted because the energy of a positron, (negative

charge matrix expansion displacement,) was absorbed from the neutral space

surrounding the quark system in order to accomplish the conversion. This caused a

boost in the negative charge matrix expansion. That boosted negative expansion is

what displaced the positive expansion outward from the center. The resultant

phenomena are both the charge and the mass of an electron. So you see, both types of

beta decay (β⁻ and β⁺) create both a positron and an electron. In β⁻ decay the positron is

absorbed, thus its neutrino and the electron with its neutrino go free. In β⁺ decay the

electron is absorbed, while its neutrino and the positron with its neutrino go free.

That just leaves the simpler electron capture type of β decay, where an inner orbital

electron of an atom is absorbed by a proton, converting the proton to a neutron and

emitting only an electron neutrino. The quark conversion reactions just covered lead to

the next topic, baryons, nuclei and atomic matter.

In normal space, there are three principles that regulate atomic structure. They

are mass / energy state, charge balance and quark pairing. The arrangement of

elements that we see in the periodic table is determined by these principles. The

foundation of the periodic table is simple mass / energy content; ranging from lowest,


hydrogen, to the more massive, unstable elements. The other properties of the

elements such as nuclear stability, isotope abundance and radioactivity come from

processes at work to balance those three basic principles within each atom.

Based solely on the idea of an all-pervasive charge matrix that makes up space

and everything in it, you should entertain the idea that all matter seeks to neutralize its

total charge above all else. That is true, but it cannot occur on the quark level for several

reasons. The first is that quarks seek to pair match with their counterparts so that their

combined expansion energy content matches the q/2 of the surrounding normal space.

Once a down and an up quark join to accomplish that, the next priority becomes charge

balance. That is done by introducing another down quark; this then constitutes a

neutron. Now this does bring about another imbalance to the quark pairing principle;

however, it is minimized because the two down quarks share energy pairing with the

single up quark much in the same way two atoms will share valence electrons in a

covalent bond. Undoubtedly though, this is not an ideally stable arrangement.

This innate instability from quark pair sharing, in addition to the high total mass

generated by the composite arrangement making up the neutron, leads to a very short

lifetime for free neutrons. Though charge balanced, the neutron must revert to a lower

mass / energy state. This is the β⁻ decay we saw previously. A free neutron decays to a

proton and either ensnares the electron it emitted or attracts another free electron to it

thus making a hydrogen atom. In this hydrogen atom system, quark pairing still has a

mild instability from the pair sharing between the down quark and two up quarks. But

overall the atom is stable due to the lower energy state and having total charge balance

neutralization. Ideally though, introducing another neutron into the nucleus would


eliminate the quark pair sharing. That gives us the heavier hydrogen isotope called

deuterium. The only remaining instability now is the half-full 1s electron energy shell.

That requires a second electron, which needs a second proton for charge

balance, which needs a second neutron for quark pair matching; all leading to a stable

helium atom when all is said and done. Beyond that, any larger elements require the

fusion power of larger stars and supernovae for their creation. Subsequently, the effects

of those same three principles work to trim down the aggregation of newly created

atoms to their most stable and thus most abundant isotopes. The diagram at the top of

the next page shows the pair matching stability in deuterium nuclei and how two of them

combine to make a single nucleus. This makes helium, the least complex and most

compact, yet most stable, singular atomic nuclear structure. The diagram shows two

options for quark pair sharing between the deuterium nuclei. Alternating between these

two scenarios is what locks the two together to form a single nucleus.


Alternating pair sharing in the helium nucleus provides a mechanism for interlocking of its constituents.

As you progress to higher and heavier elements on the periodic table, the

neutron to proton ratio changes from the ideally stable 1:1 closer to 1.5:1. This is simply

due to the nucleus itself growing larger and the associated increase of charge repulsion

of the protons. That requires more neutrons to fill the gaps and helps hold the nucleus

together. However, moving from the 1:1 ratio, which is indicative of ideal quark pair

matching, towards the 1.5:1 ratio means that you introduce more and more quark pair

sharing. That obviously introduces a growing nuclear instability. As proton up quarks are

forced to pair share with greater numbers of neutron down quarks, the

neutron down quarks have a dwindling number of proton up quarks for pair sharing.

What you end up with is essentially a game of musical chairs. Sooner or later, the

overlapping wave functions of the baryons match up in a localized pocket of stability

where you have only two neutrons and two protons that are all interlinked with the ideal

1:1 quark pair matching. At that instant, what you have is a helium nucleus trapped

inside the heavier nucleus. That helium nucleus is ejected with high energy as a

radioactive decay particle called an alpha particle.

Unfortunately, that leads to an even more unstable quark pair matching ratio

within the heavy nucleus as the two ejected protons have a greater effect on proton

count than the two ejected neutrons do on neutron count. The result is a continuing chain

of alpha and beta decays as the heavy nucleus works its way toward something more

stable. Eventually, all atoms heavier than lead end their decay chains with the most

stable heavy atomic isotope, lead-208. Other, less common decay methods would be

single neutron or proton emission. There is also the process of spontaneous fission


where the heavy nucleus simply splits into two smaller and more stable nuclei with a

release of several free neutrons. The key to maintaining this amazing collection of

atomic elements represented by the periodic table is the one simple thing that I

mentioned at the beginning of the discussion; normal space. It is this normal space with

a Silver Ratio expanding charge matrix that allows for the existence of normal matter.

But why does this normal space not also give rise to a periodic table of antimatter

elements? The reason is that antimatter is very difficult to produce in normal space. It

takes very high energy processes to produce even just a few antimatter particles. That

is another clue that positrons are not true antimatter particles. The first pointer to that

fact is that both positrons and electrons are simultaneously produced in the relatively

low energy beta decay process. As such, the two are simply mirror images of each

other and by necessity should have opposing charges. In truth, they are respectively

just positive and negative leptons. Positrons are not nearly as plentiful as electrons

because they are required to transform neutrons into protons. Electrons are just

remnants of that process. To have true antimatter elements requires anti-space.

So what is anti-space? Anti-space is merely space made from charge matrix with

the positive and negative charge positions transposed. Instead of the positive red

spheres being the larger ones and the negative blue spheres being the smaller ones as

in the original setup of the Ratio Space model back on page 4, their roles are reversed.

See below.


On the left is the charge matrix structure of normal space and on the right is that of anti-space. A mirrored

backdrop has been used to give an all-around view of the two models.

What you get with anti-space is particles that are identical to regular matter in

every way except for having an opposite charge. Anti-space would even have positive

and negative leptons as normal space does. However, the positive ones would be the

more plentiful and do the job of orbiting antimatter atomic nuclei because the negative

ones would be required to turn antineutrons into antiprotons. Furthermore, since

positive or negative charge is merely a sign convention, then it is that standard that

determines whether our Universe is made of matter or antimatter, and we can change it

with the stroke of a pen.


Now as you might imagine, it takes quite a powerful and violent interaction to

actually invert space in such a way. These levels of energy are exhibited in cosmic ray

interactions with Earth’s atmosphere. We can produce antimatter particles in our

supercolliders, smashing normal space particles together at such high energies that the

violence of the interactions literally turns space inside out. Undoubtedly there are other

natural sources of anti-particle production, perhaps in stars, star collisions, supernovae

and the accretion discs of black holes. Now, disregarding both the sign convention for

charge that makes up our normal space and the type of particles that constitute matter

in that normal space, all types of matter have mass.

Yet it is the mass / energy content of matter that threatens the normalcy of the

space in which it exists. Each added bit of mass concentrated in an atom’s nucleus

displaces more and more charge matrix expansion outward. That introduces an

anomaly to the homogeneity of the charge matrix thus causing it to push back ever

harder against the growing mass. We already saw that the strength of that push of the

charge matrix expansion, (Dark Energy,) is the value of the Gravitational constant, G. In

reality, gravity is always there, pushing from every direction. We just don’t see its effects

on a local scale until a concentration of mass warps the localized space thus boosting

the effects to measurable levels.

At moderate levels of expansion displacement we observe a familiar gravitational

force. However, when charge matrix expansion is displaced to ever increasing

magnitudes, matter is subject to more than just simple gravitational attraction. The

additive q of Ratio Space is diminished to a point that leptons literally have no room for

the excess expansion displacement of their makeup. Electrons are forced to retreat


inside the nearest proton in a process known as electron degeneracy. Any star so

massive that it attains that level of expansion displacement is doomed to become a

neutron star. Even further expansion displacement yields a black hole in space. This is

a place where the charge matrix has become so abnormal that no form of particulate

matter can be sustained; thus, any matter that enters is destroyed.

Most likely, at the core of a black hole the expansion displacement of all the

combined mass it contains has forced the charge matrix into a cesium chloride crystal

structure with a 1:1 size ratio of the charge spheres. That type of space may have its

own forms of particulate matter, which we will probably never be able to observe.

Nonetheless, it is the things that exist on this side of the event horizon that matter most

immediately; and we will always have questions to ask about them. For starters, why is

a black hole actually black? And how far away from it must you be so that you are not

pushed in?

Gravity Fields

Ever increasing concentrations of mass displace more and more charge matrix

expansion outward from the center of the mass. Obviously that displacement is evenly

distributed outward in all directions. This leads to concentric shells of equal charge

matrix expansion potential. Each subsequent shell increases in surface area in

accordance with a radius squared relation. Accordingly, the force of the expansion

potential along any vector normal to the shell surface decreases by an inverse radius

squared relation, creating a gradient field of distance dependent gravitational potential.
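The concentric-shell argument can be illustrated numerically. This sketch simply shows that spreading a fixed potential over shells whose area grows as r² yields the inverse-square falloff; the total of 1.0 is an arbitrary placeholder value:

```python
import math

# Concentric shells of equal expansion potential: surface area grows as
# r^2, so a fixed total potential spread over a shell thins out as 1/r^2.
total = 1.0                       # arbitrary total expansion potential
for r in (1.0, 2.0, 3.0):
    area = 4 * math.pi * r**2
    per_area = total / area
    print(f"r = {r}: shell area = {area:.2f}, potential per unit area = {per_area:.4f}")

# Doubling the radius quarters the per-unit-area potential:
assert math.isclose(total / (4 * math.pi * 2**2),
                    (total / (4 * math.pi * 1**2)) / 4)
```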


As mentioned before, this is the warping of space in the Space-Time model of General

Relativity. Time dilation is the key factor to explain the curvature of light with this model.

Light is forced into a geodesic path through this space due to the expansion potential

displaced by the mass at the center.

As we have already seen, it is the very expansion of space that drives the

velocity of light. As an electromagnetic wave travels through space, all parts of its wave-

front progress equally. However, if there are unequal expansion potentials encountered

across the wave-front, then the wave must adjust to equalize propagation velocity. The

easiest way to see this is as a time gradient across the wave-front. For example, the

energy for the portion of the wave-front closest to the mass can propagate one unit of

distance per one unit of time, (Tclose). But the portion of the wave-front farthest from the

mass, having the same available energy, can propagate two units of distance in that

same unit of time, (Tclose). Or, put the other way: while the closer part of the wave-front

travels one unit of distance in one of its units of time, (Tclose), the part of the wave-front

farther from the mass travels that same unit of distance in only half of one of its own units of

time, (Tfar/2). Either perspective represents the same effect. The wave-front must curve

toward the mass. The curving effect continues as long as the time gradient exists

across the surface of the wave-front.
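The two-edged wave-front example above can be sketched in a few lines. The edge speeds (1 and 2 units of distance per unit of time) come from the text; the wave-front width of 10 units is an arbitrary assumption for illustration:

```python
import math

# The text's example: the near edge of a wave-front advances 1 unit of
# distance per unit of time, the far edge advances 2. The front tilts,
# and since propagation is normal to the front, the path bends toward
# the mass. The wave-front width of 10 units is an arbitrary assumption.
width = 10.0
v_near, v_far = 1.0, 2.0          # speeds from the text's example

near = far = 0.0
for _ in range(5):                # advance the front five time units
    near += v_near
    far += v_far

tilt = math.degrees(math.atan2(far - near, width))
print(f"near edge at {near}, far edge at {far}, front tilted {tilt:.1f} degrees")
assert far - near == 5.0          # the lag grows every step, so the bending continues
```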

There are only two ways to eliminate the time gradient. The first is that the EM

wave exits the gravity field of the mass. The second is that the EM wave aligns to a

vector path normal to the surface of the concentric shells of expansion displacement.

The latter of those two is the reason that light ‘falls’ into a black hole. In addition, light

cannot be emitted from inside the event horizon of a black hole because time has


stopped. For an EM wave that means that the positive crests align with the negative

troughs and the wave ‘shorts’ out before it can even be initiated. That is why a black

hole is actually black.

The same mechanisms apply in the Ratio Space model to produce the blackness

of a black hole. However, the Ratio Space model shows the real causes. In the case of

the curving EM wave, it is not a time gradient that exists across the wave-front. It is

actually a gradient of the expansion potential q. The wave is pushed in toward the mass

because a greater percentage of the available q at the farther part of the wave-front can

be expended than what is allowed at the part of the wave-front closer to the mass. The

‘thicker’ space from displaced charge matrix expansion is the inhibiting factor that

determines what percentage of q can be applied. In the same manner, the wave keeps

curving until it either exits the gravity field or it aligns to a path normal to the concentric

shell surfaces of expansion displacement.

What prohibits light emission beyond the event horizon is that all of the

expansion potential has been displaced. No expansion potential means there is no

expansion of space and no expansion means the velocity for EM waves is zero. It’s not

that light can’t escape the event horizon; light cannot even exist within that boundary.

Even if it could, any matter that would emit a light wave will be shredded long before it

gets to the event horizon. That’s why we get to see the bright light emitted by this matter

as it is destroyed before falling into the black hole. It is still outside the event horizon. So

how far away from this point of no return is completely safe? What is the minimum

distance to maximize your safety? Obviously you’d have to be outside the black hole’s

sphere of influence.


That means you have to be far enough away so that you are outside any of the

charge matrix expansion displacement caused by the mass of the black hole. That

expansion displacement can also be seen as a gradient along a vector between the

center of the mass and any other point. Once you’re far enough away that the gradient

has fallen to zero then you’re safe; conditionally. This is why Newton’s formula for

gravitation doesn’t always work. It assumes gravity to be an attractive force with infinite

range. In actuality it is a repulsive force of equal magnitude at all points in space, which

mass modifies into a locally amplified force whose range depends on the amount of

mass present. In the case of the Earth, the local gravitation exerted on a small

mass is formulated as follows:

g = −GM/d²

At the surface of the Earth with a radial distance of approximately 6,370,000

meters, local gravitational acceleration (g) is:

g = −(6.672722174 x 10⁻¹¹ N·m²/kg²) x (5.97237 x 10²⁴ kg) / (6.37 x 10⁶ m)²

g = −9.82 m/s²
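That arithmetic can be reproduced directly; note that the G value is the one used in this text rather than the standard CODATA value:

```python
# Reproducing the surface-gravity arithmetic, using the text's own value
# of G rather than the standard CODATA value.
G = 6.672722174e-11     # N m^2 / kg^2, gravitational constant (per the text)
M = 5.97237e24          # kg, mass of the Earth
d = 6.37e6              # m, approximate radius of the Earth

g = G * M / d**2
print(f"g = {g:.2f} m/s^2")       # about 9.82, matching the text
assert abs(g - 9.82) < 0.01
```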

The magnitude of g obviously drops as the radial distance from the center of

Earth’s mass is increased. At a great enough distance, g falls to the value of G and

cannot become any smaller. This distance then is the full extent of the Earth’s


gravitational influence. Rearranging the equation and disregarding the vector sign of the

value we get:

g/G = 1 = M/d²

This equation shows us that the gravity field ends where the mass of the Earth

(M) and the square of the distance from Earth (dEarth) are equal.

1 = M/d²

Thus:

d² = M

And:

dEarth = √M

The Earth then has a gravity field extending out d meters from the center of its

mass.

dEarth = √(5.97237 x 10²⁴)

dEarth = 2,443,843,284,664.54657 meters

Certainly that looks like a pretty big number. It’s actually even bigger than you

think, considering that the Earth is 150,000,000,000 meters from the Sun, (a distance

known as 1 Astronomical Unit, or AU.)


2.44384328466454657 x 10¹² / 1.5 x 10¹¹ = 16.2923 AU
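Both the square-root field radius and the AU conversion can be checked in a few lines; the rounded 1.5 x 10¹¹ m value for the AU follows the text:

```python
import math

# The text's field radius: with g = G, M/d^2 = 1, so numerically
# d = sqrt(M) with M in kilograms and d in meters.
M_earth = 5.97237e24              # kg
AU = 1.5e11                       # m, the rounded value used in the text

d_earth = math.sqrt(M_earth)
print(f"d_earth = {d_earth:.6e} m = {d_earth / AU:.4f} AU")
assert abs(d_earth / AU - 16.2923) < 0.001
```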

That means the Earth’s gravity field has a radius greater than 16 times the

distance that separates it from the Sun. Essentially the Earth sits at the center of an

enormous gravity field ‘bubble,’ (more than 32 AU across,) floating in space. When the

‘bubble’ gravity fields of two masses begin to overlap, an attractive force is initiated. The

closer they get, the greater the attractive force. The overlap of these fields is the

gravitation described by General Relativity. Regardless of the effects or how they are

described, the root cause is the repelling force of Dark Energy.

Try to imagine then, our Milky Way galaxy constituted not by stars, planets and

clouds of gas and dust; but instead, giant overlapping bubble gravity fields interlinking

everything to everything else in a giant gravity web. This is the reality that holds

galaxies together and governs their motion. It is the mechanism that anchors everything

in the galaxy to one point; the black hole at the center. That brings us back to the other

of our two initial questions. How far away from a black hole is a safe distance? The

condition that I brought up earlier about the problem is that you must consider your own

mass as well since everything has its own gravity field and gravity field overlap is what

causes gravitational attraction. To simplify the example, let’s evaluate the gravity

between the Earth and our Milky Way’s galactic core black hole.

In order to have no gravitational attraction between the two they must be

separated by a distance greater than the added radii of their gravity fields. We must also


assume that there are no interfering gravity fields from adjacent masses. We already

know the radius of Earth’s gravity field. So what about the black hole, how massive is it?

It is estimated by many that this black hole, known as Sagittarius A*, has a mass of

approximately four million times the mass of our Sun. The Sun’s mass is:

MSun = 1.98855 x 10³⁰ kg

MSagittarius A* = 4,000,000 x 1.98855 x 10³⁰ kg

MSagittarius A* = 7.9542 x 10³⁶ kg

The gravity field of Sagittarius A* then has a radial distance of:

dSagittarius A* = √(7.9542 x 10³⁶)

dSagittarius A* = 2.82031913087863161415141 x 10¹⁸ meters

Adding the two for a combined distance:

dSagittarius A* + dEarth = 2.82031913087863161415141 x 10¹⁸ + 2.44384328466454657 x 10¹²

dSagittarius A* + dEarth = 2.82032157472191627869798 x 10¹⁸ meters

That incredible distance is almost 300 light years! Since our solar system is

about 26,000 light years from the galactic center we are safe from any direct effects of

the black hole. However, through the linked web of gravity fields between here and

there it still directs our path through space.
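The Sagittarius A* figures can be reproduced the same way; the light-year conversion factor (9.4607 x 10¹⁵ m) is a standard value not stated in the text:

```python
import math

# Reproducing the Sagittarius A* figures; the light-year conversion
# (9.4607e15 m) is a standard value not stated in the text.
M_sun = 1.98855e30                # kg
M_sgr = 4_000_000 * M_sun         # kg, about four million solar masses
M_earth = 5.97237e24              # kg
LIGHT_YEAR = 9.4607e15            # m

d_total = math.sqrt(M_sgr) + math.sqrt(M_earth)   # combined field radii
print(f"combined radii = {d_total:.6e} m = {d_total / LIGHT_YEAR:.1f} light years")
assert 290 < d_total / LIGHT_YEAR < 300           # "almost 300 light years"
```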

This black hole example demonstrates the one limiting condition to the direct

application of Newton’s law of universal gravitation. For two massive bodies, the center


to center distance between them must be less than the sum of the square roots of their

masses in order for Newton’s formulation to be applicable. For two masses (M1 and M2)

to attract gravitationally, the following must be true:

d < (√M1 + √M2)
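The applicability condition, d less than the sum of the square roots of the two masses, can be wrapped in a small helper; `newton_applies` is a hypothetical name for illustration, not anything defined in the text:

```python
import math

# A small helper wrapping the text's condition; the name newton_applies
# is a hypothetical label for illustration.
def newton_applies(m1_kg, m2_kg, d_m):
    """True when the center-to-center distance is inside the combined
    gravity-field radii, i.e. d < sqrt(M1) + sqrt(M2) (SI units)."""
    return d_m < math.sqrt(m1_kg) + math.sqrt(m2_kg)

M_earth = 5.97237e24               # kg
M_sgr = 7.9542e36                  # kg, Sagittarius A* from the text
d_galactic = 26_000 * 9.4607e15    # m, roughly 26,000 light years

print(newton_applies(M_earth, M_sgr, d_galactic))   # False: fields never overlap
assert not newton_applies(M_earth, M_sgr, d_galactic)
```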

As done with the previous example, the units are standard SI units; meters and

kilograms. Of course, gravitation involving three or more bodies is increasingly

complicated since you must consider multiple overlapping gravity fields. We can actually

see the effects of overlapping gravity fields by viewing the background light that has

passed through galaxy clusters. This is known as gravitational lensing because it greatly

distorts the path of the light depending on the amount and orientation of mass creating

the distorting gravity fields. This happens in exactly the same manner described earlier

in this section as the wave-front of a light beam is deflected by the q gradient it

encounters. However, that process only helps to explain the how and why of Electro-

Magnetic propagation. We still need to know what mechanism accomplishes the actual

transport of energy in the propagation of an EM wave.

Electro-Magnetism

It has already been shown that it is the expansion of space which determines the

velocity of electromagnetic radiation. Also previously revealed is that space is

composed of a charge matrix and it is that charge matrix which is expanding. Therefore

it is this expanding charge matrix that is the very medium for EM radiation and, more

generally, it is the canvas upon which all the physical world is painted. To stay on topic

though, how does an expanding charge matrix medium propagate an electromagnetic


field? It is not hard to imagine ways in which a medium made of charge could hold an

electric field, but what about a magnetic field? Science still doesn’t definitively describe

the makeup of magnetic fields. We can artificially produce magnetic fields by using

moving charged particles. However, that is more or less a backward manipulation of

nature.

With inductors, we use an artificial electric field to move charged particles along a

coiled wire in order to produce a linear magnetic field; the same type of natural field that

exists in a common bar magnet. Yet magnetic and electric fields propagate easily

through space with no charged particles present. Though not separately: one cannot

exist without the other. The reality is that they cannot be separated because they are

actually one and the same. Current electromagnetic theory describes that the

interdependent electric and magnetic fields of EM radiation are separated in time by 90

degrees of their sinusoidal wave. This makes sense because it is the growing magnetic

field flow that builds the electric field. Once all of the wave’s energy is stored in the

electric field, the magnetic flow stops. It then reverses and flows in the opposite

direction, linearly separated by half a wavelength from its previous position. This is the

point where you can begin to envision the true nature of magnetic fields.

Electric flow is purely electric only when it is composed of independent, like-charged

particles. That necessarily implies one of the two types of charge, otherwise

there would be neutrality and thus no electric flow. Forcibly moving around charged

particles is how we artificially produce electric or magnetic fields. Nature does things a

bit more simply. The charge matrix of space naturally tends toward neutrality by

maintaining a homogenous mixture of positive and negative charge. That means that


magnetic fields are the true basis of any and all energy flow. There is a major clue to

this fact in the operation of man-made electronic inductors. It also helps to review how

we think of electromagnetic waves. This is done most easily from the perspective of the

Space-Time model.

EM waves are measured as a time-changing function. And since no charged

particles are needed to propagate EM waves, they have no static

electric fields, regardless of how small you make the measured time interval. The reality

of a building or declining electric field in EM radiation is that it is actually magnetic flow.

An electro-magnetic wave is, in reality, a magnetic-magnetic wave. It is two

interdependent magnetic fields that propagate perpendicular to each other both in

space, (by one quarter rotation at the wave front,) and time, (one quarter of a wave

along its length.) The two field waves alternate their pole orientations as energy flows

from one field to the other and back. From whatever point, (in time or space,) you set as

the zero reference, after one wave, (in seconds or meters respectively,) all defining

parameters of both magnetic fields will be back to the values at which they started.

Analyzing the waveform as a whole is the best way to see it as two alternating

magnetic fields. However, if you could divide that wave up into infinitely many, infinitely

thin spatial cross-sections along its line of propagation, you would see it as a long series

of varied static electric fields. From the cross-sections of one wavelength there

would be a total of four that show only one electric field. Two come from when the first

magnetic wave is at its maxima and the second is at its minima. And the other two come

from when the first magnetic wave is at its minima and the second is at its maxima.

Since the cross-sections are infinitely thin, they represent no passage of time. Without


time there can be no change. Thus you end up with a true static electric field, just as

you would have with two separated, opposing point charges. This is a digital

representation of a wave as opposed to the previous analysis which models an analog

representation. As you will see later on, both are accurate depending on the energy

content of the wave.
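The count of four single-field cross-sections per wavelength can be checked under the assumption (mine, for this sketch) that the two field waves vary as quadrature sinusoids:

```python
import math

# Sample one full wavelength of two quadrature field waves and count the
# cross-sections where one field is at an extremum while the other is zero.
special = 0
for k in range(360):
    phase = 2 * math.pi * k / 360
    a = math.cos(phase)          # first field wave
    b = math.sin(phase)          # second wave, offset a quarter wavelength
    at_extreme = abs(abs(a) - 1) < 1e-9 or abs(abs(b) - 1) < 1e-9
    other_zero = abs(a) < 1e-9 or abs(b) < 1e-9
    if at_extreme and other_zero:
        special += 1

print(special)   # four such cross-sections per wavelength, as described
assert special == 4
```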

However, what exactly constitutes a magnetic field remains a mystery. In

addition, we now need to know how an electric field exists with no point charges

present. Just like with every other property and process previously covered, the

expansion of space is the key to resolve both problems. Think back to the very first

model we looked at for the structure of the charge matrix. It had large spheres of

positive charge that we called ‘reds’ and small spheres of negative charge that we

called ‘blues.’ Imagine how that structure would expand on the very smallest scale;

involving only one of each kind of the spheres. If those two adjacent spheres expand

and they maintain the same size ratio, then there is no net charge to measure. The next

step in the process comes from the characteristic homogeneity of the charge matrix.

The expansion of the two opposing spheres has created a concentrated

distortion in the charge matrix structure. The internal pressure, (Dark Energy,) of the

matrix naturally attempts to minimize any mechanical distortions such as this. That is

easily accomplished since there are two spheres that expanded. The internal pressure

of the matrix forces the two spheres apart. There is no change in the ratio of their size

difference as compared to the surrounding space so there is still no net charge to

measure. However, since one of the spheres is positive and the other is negative, an

electric field exists between the two when they are separated because they have both


expanded more than the surrounding space. The separating action of the two spheres is

the flow that constitutes a magnetic field. Thus, a magnetic field and an electric field

are the same thing. That fact holds a special meaning for certain elementary particles.

On a further note, most likely it is not that the individual spheres have moved,

only that excess charge energy present in one has been transferred to another as the

process of expansion progresses. Then it is expansion that is responsible for creating

both electric and magnetic fields. When a wave enters a region of space, it is amplifying

the natural processes at work there. As the wave passes, the energy of those

processes settles back to original levels, thus returning the excess energy provided by

the wave, back to the wave. This is how wave radiation propagates. It is this dual

electric and magnetic process of expansion present in all of space that I was referring to

earlier in this section when I mentioned man-made inductors.

We make electronic inductors by coiling wire around a core of iron. Then an

electric field is applied along that wire by a battery or other power source. That field

causes excess point charges (electrons) to move along the wire; in other words, it

causes an electric current. That current circling the core through the coiled wire builds a

magnetic flow in the core which flows outward only to dissipate in the space around it.

The opposing field lines match up with one another outside the core. Since they are

negative and positive; they recombine but appear to be continuously flowing back to the

core. This matching up process restores uniformity in the expansion of the surrounding

charge matrix. New expansion is continuously occurring in the core, (as it does

everywhere in space.) However, it is the increasing current in the wire that causes the

field to grow, and it is maintained once the current has leveled off. In reality then, the core of a


magnetic field is simply an artificially collimated bidirectional beam of spatial expansion.

And by the right-hand-rule convention of electromagnetism, the negative charge matrix

expansion flows from the north pole of the magnet and the positive charge matrix

expansion flows from the south. That seems natural enough, but I did say that

electromagnetic inductors were a backward manipulation of nature.

Appropriately, the major clue that I mentioned earlier is called a “backward” EMF.

As the field of an electromagnet builds up and the magnetic field lines expand outward

from the core, they cut across the coiled electric wire wrapped around the core. This

bisecting of the wire by the field lines induces another current in the wire. However, this

current induced by the building magnetic field is opposite or backwards compared to the

direction of the current initiated by the electric field. Therefore, it causes a resistance,

(the back EMF,) to the building of the electrically forced current flow. While this back

EMF serves to limit current flow of charged particles in man-made electronics, it is the

primary mechanism that nature uses to regulate and smooth the expansion of space.
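The back EMF described above is the standard inductor relation V = -L x (dI/dt). A minimal numeric sketch, with illustrative component values that are not from the text:

```python
# Back EMF of an inductor: V = -L * dI/dt (Faraday's law with Lenz's sign).
# Component values are illustrative only, not taken from the text.
L = 10e-3      # inductance of the coil, henries (assumed 10 mH)
dI = 1.0       # change in current through the wire, amperes
dt = 1e-3      # time over which the current ramps, seconds
back_emf = -L * dI / dt
print(back_emf)  # -10.0 V: the induced voltage opposes the building current
```

The negative sign is Lenz's law at work: the induced voltage opposes the change in current, which is the resistance to the building current described in the passage above.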

Continuing with the example from before, when a single red charge sphere and

companion blue charge sphere expand more than the space around them, the charge

matrix forces them apart. That action of separation is a magnetic field flow. When the

magnetic field lines start to expand outward from the center they induce that same back

EMF in the surrounding space. But with no artificial forward current flow to compare it

to, it isn’t really backwards at all. The induced EMF begins to separate other positive

and negative charge matrix expansion in accordance with the right-hand-rule

convention. That separation action is a new and independent magnetic field that

propagates outward radially, centered on its axis of charge separation, its magnetic


core. Propagating this energy outward helps to disperse it evenly around the

immediately adjacent charge matrix, thus evening out the expansion. As the right-hand-

rule shows us, the first magnetic field and the second one that it initiated are

perpendicular to each other. And since both fields propagate outward they bisect one

another and each serves to boost the other in a continuous positive feedback loop.

This feedback mechanism of expansion is what gets a boost from radiating

waves. The wave’s energy is put into the feedback mechanism upon entering a region

of space and transferred back to the wave as it passes. More accurately, the passing of

excess expansion energy from one region of space to the next is actually what

comprises a radiating wave. But as I already explained, it is actually two perpendicular

and offset magnetic waves. Now doesn’t that sound exactly like the positive feedback

mechanism that was just covered? These positive feedback loops constitute every point

of the medium of space so a radiating wave merely has to make use of these energy

highways that already exist.

Speaking of magnetic-magnetic waves, let’s look at a specific range of wave

frequencies; those emitted and absorbed by orbital atomic electrons, light waves. We

have already seen that an electron consists of q/2 of expansion energy. But energy

levels in an atom’s orbital shell structure are separated by multiples of q. That is why it

is appropriate to use the Planck constant to find the energy of light waves by multiplying

it by the frequency of the wave. Before and after the jump, an electron’s q/2 ratio must

be maintained. Recall that this ratio comes from length divided by (length per second of

additive expansion.) So if an electron’s energy (expansion) goes up, the total length

(diameter of the electron) must also go up to maintain the ratio. So a higher energy


orbital literally requires a bigger electron, volumetrically speaking. This changing of size

for an electron is how it emits and absorbs radiating wave energy.
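The Planck relation mentioned above, E = hf, can be evaluated for a typical visible-light frequency. The constants are standard CODATA values; the specific frequency is an illustrative choice, not taken from the text:

```python
# Planck-Einstein relation E = h * f for a visible-light wave.
h = 6.62607015e-34    # Planck constant, J*s (CODATA, exact by definition)
eV = 1.602176634e-19  # joules per electron-volt (exact by definition)
f = 5.45e14           # Hz, green light (illustrative frequency)
E = h * f             # energy carried by one quantum of this wave
print(E / eV)         # roughly 2.25 eV, a typical optical transition energy
```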

Electrons transfer wave energy via the poles of their magnetic field. The more

energy levels an electron drops the more energy it must give up. That means greater

field energy given to the corresponding magnetic-magnetic wave. Greater

magnetic field energy equals a higher frequency for the EM wave. Now there are

obviously more frequencies in the electromagnetic spectrum than can be attributed to

the limited energy states of orbital atomic electrons. Gamma rays are at the highest end

of the spectrum. The most energetic of these are emitted from exotic cosmological

sources which are not yet fully understood. Although, some lower energy gamma rays

can be emitted by subatomic processes and electrons have one particular role in that.

That role begins with why electrons have a magnetic field in the first place.

Regardless of whether electrons are freely moving or if they are in an orbital

energy shell of the atom, they have an oscillatory motion around some discernible

point. Since an electron has charge and a rotating oscillation, it will create a magnetic

field with poles on an axis centered on that point. As shown earlier, electrons consist of

excess charge matrix expansion. That distorts the charge matrix; recall from earlier in

this discussion that the internal pressure of the matrix tries to eliminate mechanical

distortions to even out space. That is magnetic flow, the separation of opposing charges. A

free electron jitters and oscillates because it has no opposing charge to be pushed

away from; but it used to. Think back to what was covered on the process of beta decay

of atomic nuclei on page 40.


Although it is either a positron or an electron that gets emitted, both are actually

created in the process. They come from the forced separation of normally expanding

neutral space by the excess energy of an atomic nucleus. This separation starts out as

the initiation of a 1.022 MeV gamma wave magnetic field. In β⁻ decay, the entire south

pole of that field gets absorbed by a neutron. The remaining north pole is emitted as a

511 keV particle: an electron. Hence leptons and their electric fields are actually

isolated, magnetic half fields. Electric and magnetic fields are the same thing; therefore,

electric and magnetic monopoles are also the same thing. So what I previously said

about there being no static electric fields in EM waves actually applies to all forms of

electromagnetism.
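The 511 keV and 1.022 MeV figures in this passage are the standard electron rest energy and the electron-positron pair threshold, and can be recovered from the electron mass and the speed of light:

```python
# Electron rest energy and the electron-positron pair threshold.
m_e = 9.1093837015e-31  # electron mass, kg (CODATA)
c = 299792458.0         # speed of light, m/s (exact)
eV = 1.602176634e-19    # joules per electron-volt
rest_keV = m_e * c**2 / eV / 1e3
print(rest_keV)            # ~511 keV: the energy of the emitted electron
print(2 * rest_keV / 1e3)  # ~1.022 MeV: the gamma energy that splits a pair
```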

Gamma waves have just helped form a better understanding of particle charge

and electromagnetism. Now, looking at the very highest frequencies of gamma waves

will reveal much more about our Universe. Just a few pages back, two perspectives of

wave radiation were covered; digital and analog. In the next several pages both will be

used analytically. However, keep in mind that both are much more than analytical tools.

They are very real methods in which nature accomplishes the same task. That task is

energy dissipation. Think about that a bit more. All forms of concentrated energy, most

commonly physical matter, seek to be at the lowest sustainable energy state.

For example, an orbital atomic electron drops to a lower available orbital energy

level and emits a light wave. Then another atom’s electron absorbs that light wave. The

light wave was not emitted by the first electron for the purpose of reabsorption by the

second. It was emitted in order to discard excess energy. The latter is a totally unrelated

event. This even distribution of energy is the ultimate goal of the entire Universe.


Complete and all-pervasive energy neutrality is the purpose that drives the entropy

inherent to every natural physical process. For now though, the focus must remain on

radiating waves as the energy dissipation method.

Between the two models for waves, digital or analog, the digital wave model is of

particular importance for several reasons. Succinctly, the primary reason is that if waves

can be realistically modeled digitally and we have already seen that space itself

conforms nicely to a digital model, then there must be a mathematical point to some

parameter where the two models equate. That point sets the upper limit of the EM

spectrum. Cosmologists have seen the evidence that this point exists.

Certain distant astronomical sources emit massive amounts of energy called

gamma ray bursts or GRBs. The gamma rays that come from these sources are the

most energetic ever detected. These gamma rays top out in energy at almost 10 TeV.

That’s about 10^13 electron-volts of energy; almost ten million times the 1.022 MeV level that

splits electrons and positrons out of neutral charge matrix. All naturally occurring

radiating waves are initiated as alternating magnetic fields. Wave fields alternate

direction when the field has attained its maximum strength. For waves with greater

energy, that obviously occurs more quickly than for waves of lesser energy. And all waves

are driven by the quantum tick of space-time expansion so they all move with the same

velocity through space. Those two facts mean that differing waves must have different

frequencies based on their energy content. The gamma waves from GRBs have the

highest frequencies allowable for the medium of space-time because their magnetic

fields reach maximum strength at the same rate the space-time expansion clock ticks.


In other words, the wave is completely digitized. It is at the digital limit expressed in the

Ratio Space model for the structure of space.

The Silver Ratio is that digital limit and it is therefore, also the frequency for

maximum energy radiating waves. The maximum frequency is immediately less than:

(√2 + 1) x 10^27 Hz

Insert that frequency into the Planck-Einstein equation with the Ratio Space

values that we have already covered:

E = hf = (10q / ((G_RSR) x 10^6)) x f = 1.6004356 x 10^-6 Joules

(1.6004356 x 10^-6 Joules) / (1.6021766 x 10^-19 Joules/eV) ≈ 10 TeV

You get the maximum energy of gamma waves detected from GRBs, about 10^13 eV.
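As a cross-check, evaluating E = hf at the proposed maximum frequency with the standard CODATA Planck constant (rather than the text’s own ratio expression, whose q value is not restated in this section) lands close to the quoted figures:

```python
import math

# E = h * f at the proposed maximum frequency (sqrt(2) + 1) x 10^27 Hz,
# using the standard Planck constant instead of the text's 10q/(G_RSR x 10^6)
# expression (the numeric value of q is not restated in this section).
h = 6.62607015e-34    # Planck constant, J*s (CODATA)
eV = 1.602176634e-19  # joules per electron-volt
f_max = (math.sqrt(2) + 1) * 1e27
E = h * f_max
print(E)              # ~1.60e-6 J, close to the text's 1.6004356e-6 J
print(E / eV / 1e12)  # ~10 TeV, the quoted GRB photon-energy ceiling
```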

The total energy emitted by this type of astronomical source is so great that it far

exceeds any frequency of the EM spectrum. In order to dissipate this energy it must

take the form of the first available frequency that will propagate it away. That is why

these blasts have such great intensity. Each wave can only take away 10 TeV of energy

so the number of total waves emitted is literally “off the chart.” This wave energy limit


does not apply to cosmic ray particles as a massive particle is defined by compacted

energy patterns.

But that is not all there is to be learned from wave radiation. Since these 10 TeV

waves from GRBs are fully digitized, they help us to understand certain physical limits to

the scale of the Universe. Obviously we can use their properties to deduce

characteristics of the quantum realm. However, when coupled with what we already

know about the Universe at the very largest of scales, we can open up a world of

possibilities that most might not ever imagine.

Age & Scale of the Universe

What can light possibly tell us about the age of the Universe; other than what we

glean through observation of distant galaxies? We can construct feasible timelines and

ages for the Universe simply by studying the evolutionary state of the galaxies we see,

referenced to how far away they are. However, all of this new information you’ve just

read gives rise to certain implications that might just make that the only way we have to

form any sort of conclusion about the possible age of the Universe. On the other hand,

you’ll recall that time has no place in the Ratio Space model. So even if we can

determine an age for the Universe, what is its real relevance? Is its age really 13.8

billion years? Perhaps it is less, or more, or maybe it’s ageless? Furthermore, why is the

unit of years so important to the question? Is it just that it is the most suitable unit for our

perception of time?

Once again, let’s focus on the real purpose of wave radiation. That is to dissipate

and disperse energy. But as is the case with visible light, if the wave gets reabsorbed


then the dispersion has failed and the energy has only been transferred from one

concentration of matter to another. The same is true for all wave radiation. So what

happens to a wave that never gets reabsorbed? Does it just travel off into infinity?

Counterintuitively, the answer is no. The wave stops. More correctly, it keeps traveling

but doesn’t go anywhere. This aspect of wave radiation is already incorporated into the

“Big Bang” theory of the Universe’s origin. Right now it is believed that we cannot see

light from the most distant astronomical sources because the space between here and

there is simply expanding faster than the speed of light and so that light can never go

fast enough to overtake the expansion and reach the Earth for us to see it. Although the

essence of the idea is correct, it is not quite that simple.

The reality is that space simply has a limited amount of expansion energy to

drive wave radiation and thus waves can only go so far. Adding more energy to a wave

adversely affects the maximum distance to which that wave can be propagated. Our

observations of the cosmos have shown us that lower frequency gamma waves from

distant sources will arrive at Earth before higher frequency gamma waves emitted

simultaneously from the same source. This is due to the real difference of digital and

analog waves. Space is digitized by its own expansion and thus everything in it is

digitized as well. Waves are also composed of the same digital bits of space. Those bits

expand equally according to the rules and equations covered previously. Obviously,

waves of greater length are composed of more digital bits. There are more bits and thus

greater resolution of a true sinusoidal waveform. Shorter wavelengths have fewer

digital bits and thus less resolution and a less accurate sinusoidal waveform. Then it

stands to reason that with each tick of quantum expansion, a wave of lower frequency


will stretch more and be propagated farther than a simultaneously emitted wave of

higher frequency.

That fact introduces several obstacles for observations of deep space. The first is

that the farther into space we look, the greater the discrepancy we observe in the arrival

times of simultaneously emitted waves. That means that the ability to focus light from

more distant objects is greatly restricted because the many different frequencies you

are trying to focus have come from differing times and thus differing positions as the

observed object moves.

The second problem is that it is true that all waves reach a point where the space

they are propagating through expands faster than they can be propagated. However,

the greater length a wave has the more it can be stretched and thus driven well past the

points at which the shortest length waves have stopped. All waves of the spectrum

emitted simultaneously from a single source will reach this point at which they

essentially become a standing wave in expanding space and make no more forward

progress as referenced to their point of origin. Though all of those waves will reach said

point simultaneously, the point for each frequency of wave will be different. The highest

frequencies have the shortest distance from source to stopping points and the lowest

frequencies have the greatest distance from source to their stopping points. Logically

then, there is a distance beyond which the further away an observed object is, the less

wave spectrum there will be available for you to observe. As waves from a source travel

towards us, the higher frequency waves disappear first, and so on down the spectrum

until the entire spectrum is gone. That unobservable region of space lies behind the veil

of the Cosmic Microwave Background (CMB) radiation.


The third obstacle to observing the cosmos is what we can’t see in that region

just in front of the CMB. There exists a dark region of space between the farthest visible

galaxies and the CMB. This region is not dark because it is empty. It simply lies at a

distance farther than visible light wavelengths can travel. There are still early galaxies

there; simply unobservable to us. This fact robs us of the earliest, most important

evidence in the timeline of large scale matter structure evolution and induces error into

our extrapolations for the age of the Universe. These same types of early galactic

structures probably exist beyond the CMB as well. Or more correctly ‘existed’ since by

now they have evolved to be similar to galaxies we see closer to home.

There is great probability that the size of the Universe is infinite and always has

been. Any amount of space has limited expansion energy; the ratio q. That does not

mean that the amount of space which can exist has any limit; although, that observation

is subject to the varied definitions of the term ‘space’ and what that space is relative to.

But remember, the time parameter of the Space-Time model arises from that limited

expansion energy of space. Equivalently, the Ratio Space model shows that time does

not exist. Time is merely the ratio measure of the expansion energy of space; again, the

ratio q. Those two facts mean that the ‘age’ of the Universe is also nothing more than a

ratio for the expansion energy in what is probably just our little patch of space; a patch

which we refer to as ‘the observable Universe.’ The best current extrapolation for the

age of the observable Universe is 13.8 billion years. If we continue to base future

analyses solely on how far we can ‘see’ via even the lowest frequencies of wave

radiation, those analyses will yield values ever closer to (√2 x 10^10) years for this age.

This is due to how we define the units of seconds and years.


We start with one second as the base unit that interrelates values between the

Space-Time and Ratio Space models. Therefore, it is not surprising that calculations of

the time parameter have the same numeric value as ratios of the Ratio Space model

when seconds are the unit of choice. What is a bit odd is that the Earth’s yearly orbit of

the Sun is of appropriate length so that when measured in seconds, it yields a number

(31556925) that falls in between two other important ratio values. They are:

(q x 10^46)/(√2 x 10^10), or just greater than (π x 10^7) ………… 31416621

and:

√10, also at the scale of 10^7 ………… 31622777
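The bracketing claim is easy to verify numerically; a short sketch:

```python
import math

# A year in seconds (31,556,925) should fall between pi x 10^7
# and sqrt(10) x 10^7, as claimed.
year_s = 31556925
lower = math.pi * 1e7          # ~3.14159e7
upper = math.sqrt(10) * 1e7    # ~3.16228e7
print(lower < year_s < upper)  # True
```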

All of those numbers should seem familiar. They come from the example of

expansion derivation on page 27 and corresponding helium atom size on page 28; the

example that I said you would see again. Since extrapolations for the age of the

observable Universe are done back to when it was the size of an atom and even

smaller, let’s scrutinize some dimensional parameters that correlate between a helium

atom and the observable Universe.

First, atomic helium’s diameter fluctuates because it is determined by the S1

electron shell that surrounds it and we’ve already seen that electron size itself fluctuates

because space expands. Back on page 28 we found that this means a helium atom’s

diameter is slightly greater than (2π x 10^-11) meters at its smallest and (2√10 x 10^-11)

meters at its largest. Notice also that the mantissa of these two numbers is exactly twice


the mantissa of the important ratio values from the previous page in the discussion

about the length of a year in seconds.

Either of those two values concerning time, multiplied by (√2 x 10^10), gives the

exact same numbers as when you take the two above values for the diameter of atomic

helium and divide them by (√2 x 10^-28). If you’re putting these pieces together properly,

you should be noticing the obvious connection by now. It means that the Universe is the

size we perceive it to be now, after having expanded from the size of a helium atom

because it took a specified amount of time to do that; the same amount of time

represented by the expansion energy present in the space occupied by a helium atom.

Study these equations to clear it up a bit. Starting with the larger diameter of atomic

helium divided by the expansion energy present in the space within the helium atom:

((2√10) x 10^-11 meters) / ((√2) x 10^-28 meters/second) = ((2√5) x 10^17) seconds = Expansion (Ratio/Energy) of Space

((2√5) x 10^17 seconds) / ((√10)^7 seconds/year) = (√2) x 10^10 ≈ 14.14 billion years

If you use the smaller number for atomic helium diameter and the actual seconds

per year ratio, you get:

((4.44298115755) x 10^17 seconds) / (31556925.216 seconds/year) ≈ 14.08 billion years


And, without taking into account the part of the observable Universe that we can’t

actually see with visible light, immediately in front of the CMB, a backward extrapolation

of expansion would yield a number even closer to the 13.82 billion years of our most

current analyses. So do we merely perceive the ratio structure of the Universe when we

peer through our telescopes and microscopes? Has the Universe really expanded or is

its size simply the way it is regardless of past, present or future? There is another

logical possibility that we will explore shortly. But first, let’s put all of these equations

together.

Other than my crude mathematical derivation of space expansion from the

Hubble constant, I never demonstrated where (√2 x 10^-28) meters per second comes

from. It comes right after the 999 in the primary ratio for the Ratio Space model.

G_RSR = ((√5 + 1)/2) / (√2 + 1) = 0.67021162252084234219570429995557018842430432754532
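This primary ratio is the golden ratio, (√5 + 1)/2, divided by the silver ratio, √2 + 1. Computing it to high precision confirms the digit string and locates the run of nines (a sketch using Python's decimal module):

```python
from decimal import Decimal, getcontext

# Primary ratio of the Ratio Space model: the golden ratio (sqrt(5)+1)/2
# divided by the silver ratio sqrt(2)+1, to 60 significant digits.
getcontext().prec = 60
golden = (Decimal(5).sqrt() + 1) / 2
silver = Decimal(2).sqrt() + 1
g_rsr = golden / silver
digits = str(g_rsr)[2:]    # the decimal digits after "0."
print(digits[:30])         # 670211622520842342195704299955
print(digits.find("999"))  # 25: the 999 run spans decimal places 26-28
```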

From our everyday Space-Time, meters and seconds perspective, placing the

decimal after the 999 is 10^-28 orders of magnitude away. And, recalling the picture of the

triangle in the Silver Ratio sphere model on page 26, its hypotenuse is exactly √2 times

as long as either of its sides in the charge matrix structure. Then taking a helium atom

at its greatest diameter due to space expansion of the electrons in the S1 shell and

dividing it by the amount of self-expansion which it experiences, you get the expansion

ratio representing the energy content of space:


((2√10) x 10^-11 meters) / ((√2) x 10^-28 meters/second) = ((2√5) x 10^17) seconds

As is the case in that numerator, we can also impose a distance representation

onto the primary ratio of Ratio Space. That primary ratio represents spatial expansion

as measured from one point to another. But the fluctuating diameter of atomic helium

represents expansion outward in two directions referenced to the central point. In order

to make the primary ratio represent this type of expansion, it must be multiplied by two.

We also need to multiply it by 10^27 in order to start with the appropriate decimal position.

(G_RSR) x 2 x 10^27 = 1.3404232450416846843914085999 x 10^27

Now divide that distance ratio by the expansion ratio:

(1.3404232450416846843914085999 x 10^27 (meters/10)) / ((2√5) x 10^17 seconds) = 2997277494.53406488 ((meters/10)/second)

What you get is a ratio explanation of how the expansion energy of space

determines the velocity of light and the strength of gravity. That number is the one we

got previously as a ratio base of light speed, though via a more indirect equation, back

on page 21. Also recall that the scale is ten times too large since it is a Ratio Space

value. If you divide it by two and reciprocate you then have the Ratio Space value for

the gravitational constant, G. All of this evidence points to the fact that the Universe and

all it contains are defined by ratios.
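The chain of ratios just described can be run end to end; the result matches the 2997277494.5 figure quoted above and, after dividing out the factor of ten, sits within about 0.02% of the SI speed of light:

```python
import math

# End-to-end chain: primary ratio -> distance ratio -> light-speed figure.
g_rsr = ((math.sqrt(5) + 1) / 2) / (math.sqrt(2) + 1)
distance_ratio = g_rsr * 2 * 1e27       # doubled and scaled as in the text
expansion = 2 * math.sqrt(5) * 1e17     # expansion ratio of space, in seconds
v = distance_ratio / expansion
print(v)       # ~2997277494.5, the figure quoted in the text
print(v / 10)  # ~2.9973e8, within about 0.02% of c = 299792458 m/s
```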


These ratios even relate the size of the observable Universe to the size of the

atoms within it. We just used a correlation between the sizes of the observable Universe

and the size of atomic helium. But the value of a ratio remains the same regardless of

the perspective of the observer. Imagine that perhaps the real Universe is infinite and

that it either has no need to expand or cannot expand. However, if everything inside the

Universe were contracting, including us, then from our perspective it would still seem

that space is expanding. Go back and review the discussion on the length of the meter

and the Mega-parsec on page 14. The possibility of Universal contraction does not

negate any of the ideas or mechanisms I’ve covered. It merely requires changes of the

wording to match the perspective. In fact, some of those mechanisms are better

explained if expansion is in actuality, contraction. One property of natural physical laws

even demands it.

Conservation: even though the elementary charge is just a ratio, it still originates

from the charge matrix. Accordingly, if the Universe were expanding, where does all of

the additional charge for new charge matrix come from? That is not a problem if the

spheres of the charge matrix are subdividing into smaller and smaller pieces. The same

total charge is maintained. If the digital bits that define energy and matter are getting

smaller, then matter itself must also get smaller with each quantum tick. That

exemplifies a much more logical scenario. Instead of new space being made

somewhere out there in the distant cosmos, it is being made everywhere, all the time.

Though in some places, that process is hindered.

Anywhere matter exists, its energy patterns displace some degree of potential for

the normal subdividing of the charge matrix. This displacement of potential is mass. A


greater degree of this displacement equates to a greater mass. Counterintuitively, that

means that a smaller, more concentrated energy pattern has more mass than one that

occupies a larger volume of space. That is precisely why the majority of an atom’s

mass, the nucleus, is such a small point at the center. Highly concentrated energy

patterns of matter already produce a lot of disturbance at their normal scales of size. It

then becomes fairly clear just why we get such violent reactions when we smash

massive particles together in our supercolliders. Antiparticles are produced in these

types of collisions.

Most naturally occurring antiparticles result from natural high energy collisions.

But just as I described earlier, antiparticles must have anti-space in which to exist.

Understanding that new space is continuously created at the very smallest scale makes

it easier to understand just how a very high energy particle collision could disrupt that

process. Instead of building new bits of normal space, a localized high energy

disturbance can cause the negatives and positives of the charge matrix to transpose

position as they reconstitute the charge matrix structure after the collision. If that

transposed / inverted space carries away any extra energy from the collision, the

resultant particle it initiates will be an antiparticle.

Conservation, mass and antimatter are just three indicators of the effectiveness

of the Ratio Space model. Further research into its applications will only serve to bolster

its validity. Undoubtedly, the simplicity of the model will help to bring about many more

advancements in all fields of human endeavor. Personally, I find comfort in the

realization that if the mechanics of space and time are so much simpler than once


thought, then just maybe some of the greater social dilemmas that plague humanity are

not nearly as complex either.
