## Monday, May 23, 2016

## Wednesday, December 23, 2015

### The Three Angels of Truth

**How does one know that he knows?**

- One of the most relevant questions in all Philosophy.

The default is to presume based upon intuitive probabilities. That is how people knew that the Earth was flat, floating in a bowl of water supported by an elephant riding on the back of a turtle... until Atlas came along.

An "angel" in scriptural lingo refers to an idea, thought, or strategy (similar to a con man's "angle"). And as it turns out there are three thoughts that provide proof of truth. There must be a unanimous vote of the three angels. The word "true" merely means "accurately aligned", usually referring to the alignment between a statement or claim and objective, physical reality.

__Given a specific ontology__ (and avoiding the common presumption that everyone is using the same ontology), one can know with certainty that he knows if he can confirm that his thoughts have the following attributes:

**A)** Consistency/Coherence

**B)** Comprehensiveness

**C)** Relevancy

But then, how do I know that with certainty?

Because nothing else is Relevant concerning a proposed truth (aka "*I don't care about anything proposed as truth if it doesn't meet that standard*"), and I maintain that concern consistently and comprehensively. It is my definition of "a truth" (thus "true by definition").

Years ago, I was surprised to see those angels appear in a small booklet given to a set of churches regarding the proper method for interpreting the Bible. The author expressed them as a means to check one's presumption of interpretation. I recognized them a little differently: not merely as a means to verify a proper interpretation, but as a means to know the truth within any given ontology (the Bible being merely one, and a different one than Science, so one cannot intermix their elements).

How can you know if what Physics says to be true, really is true? Faith in what you are told by a media service (a mediator)?

Look merely for the definitions of the elements they propose and verify a unanimous vote of the "Three Angels of Truth". You will find that they don't know those angels very well and espouse some truths that aren't. And you can know it with certainty even without being a physicist ("don't mess with a good metaphysicist").

## Friday, October 3, 2014

### Double-Slit Hypothesis

In the double-slit experiment concerning __singularly generated particles__, a double-slit screen is positioned between a particle source and a detection screen or device. The detection screen then displays what is seen as an "interference pattern", revealing where each particle struck after passing through the slits. For more than a century people have been confounded as to exactly why an interference pattern appears when it seems impossible that any interference could exist.
Using a new theory-producing method, dubbed "RM", and a new ontology, Affectance Ontology, I hypothesize that if the inner surface of the double-slit screen were altered to a specific surface shape, particles would no longer create a significant interference pattern, but waves still would. Since a photon seems to be a particlized wave, I suspect that photons would show little difference from their typical interference pattern, as their inherent wave properties would still have the predominant effect. But if they also stopped showing the interference pattern, it would indicate that photons really are strictly particles.

Note that the inner walls of the slits must be randomized in height such that the greatest height is equal to or greater than the largest expected interference-pattern wavelength. All inner walls should have a random surface. It would be good if the detection screen were also shaped similarly, but that should not be necessary.
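For reference, this is the standard two-slit interference pattern that the hypothesis predicts should be suppressed. A minimal sketch of the textbook intensity formula, I(θ) ∝ cos²(π·d·sin θ / λ), ignoring the single-slit envelope; the slit spacing and wavelength below are illustrative values, not taken from the post:

```python
import math

def two_slit_intensity(theta, d, wavelength):
    """Standard two-slit interference intensity (single-slit envelope
    ignored), normalized so the central maximum equals 1."""
    phase = math.pi * d * math.sin(theta) / wavelength
    return math.cos(phase) ** 2

d, lam = 1e-6, 500e-9   # illustrative: 1 um slit spacing, 500 nm light

print(two_slit_intensity(0.0, d, lam))              # central maximum -> 1.0
# First minimum where d*sin(theta) = lambda/2, i.e. sin(theta) = 0.25:
print(two_slit_intensity(math.asin(0.25), d, lam))  # -> ~0.0
```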

## Monday, July 14, 2014

### Why The Universe Exists

To exist means to have affect. Thus the substance of the universe can be aptly named "affectance". An affect is a change upon something *else*. Thus to exist, there must be distinction in the substance of the universe. If the universe were totally homogeneous, void of distinction, nothing could affect anything else to any greater degree than it was being affected by all else, and thus all would remain as it was: an infinitely vast nothingness, never actually changing at all. Nothingness and total homogeneity are the same thing.

To have infinite homogeneity, or infinite similarity, there must be infinite similarity between every point in the universe. Using a Cartesian system, there are 4/3 * Pi * infinity^6 points in the entire universe. To have absolutely zero affectance in the universe (zero existence) would require that all of those points be infinitely similar.

If we assign an affectance value of X to a point in space, every other point must be exactly equal to X. Each point can have any value from zero to infinity. So the possibility of another point having that same X is 1/infinity. "1/infinity" is one infinitesimal, "0+", not zero. So the possibility of merely two points being exactly similar still isn't zero. So at this point, we can't say that there is no possibility of the universe being infinitely homogeneous.

If we consider another point, the possibility of all 3 of them being exactly similar is 1/infinity times 1/infinity, or:

P = 0+^2, an infinitely smaller possibility of the 3 points being exactly similar... but still not exactly zero.

But then, the universe isn't made of merely a few points. The Cartesian model allows for 4/3 * Pi * infinity^6 points. So the possibility becomes:

P = 0+^(4/3 * Pi * infinity^6 - 1), an infinitely, unimaginably smaller possibility than before... but still not exactly zero.
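The shrinking-possibility argument can be mimicked with a finite analogue: if each of n points independently takes one of m equally likely values, the chance that all of them match the first is (1/m)^(n-1), which collapses toward zero as either m or n grows. The values of m and n below are arbitrary stand-ins for the infinities in the text:

```python
def p_all_match(m, n):
    """Probability that n points, each independently taking one of m
    equally likely values, are all equal: the first point is free and
    the remaining n-1 must match it."""
    return (1.0 / m) ** (n - 1)

print(p_all_match(10, 2))               # two points, ten values -> 0.1
print(round(p_all_match(10, 3), 12))    # three points -> 0.01
print(p_all_match(10**6, 50))           # astronomically small (~1e-294)
```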

So far, we used the standard Cartesian model of a universe to define our infinitesimal. But even within the space of one infinitesimal, there is yet another infinite number of points. So a dimensional line would actually have not infinity^2 points, as the standard would imply, but rather infinity^3 points, giving 4/3 * Pi * infinity^9 points throughout. That changes our possibility considerably:

P = 0+^(4/3 * Pi * infinity^9 - 1), an infinitely, unimaginably smaller possibility than before... but still not exactly zero.

But why stop at merely allowing a line to have infinity^3 points? Why not infinity^4 or infinity^78? There is no limit to how many points we can assign to a line, so let's just call it "n", yielding:

P = 0+^(4/3 * Pi * infinity^n - 1), where "n" can be anything.

But as long as n is any number, the possibility will still not be absolutely zero. And n can be anything but "absolute infinity". So let's limit n to "the largest possible number" and call it "Largest".

Now we have the equation:

P = 0+^(4/3 * Pi * infinity^Largest - 1), as the possibility of all points being exactly similar.

And since "0+" merely means "1/infinity", we can rewrite the equation as:

P = 1/infinity^(4/3 * Pi * infinity^Largest - 1)

But how can we have infinity raised to the Largest possible number without the result being larger than the Largest possible? It is an impossible number. So what we have deduced is that expressing the possibility of all points in the universe having exactly the same affect value requires a number larger than the Largest possible. And there isn't one.

Thus, the possibility of all points in the universe being exactly similar is;

P = 1/(an impossibly large number) = Absolute Zero
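A quick empirical companion to the conclusion: for a continuous quantity, the chance of even two samples coinciding exactly is so small that it simply does not occur in practice, let alone all samples coinciding. This sketch just draws many pseudo-random doubles and confirms they are all distinct:

```python
import random

random.seed(0)  # deterministic illustration
# Draw many values of a continuous quantity; an exact coincidence of
# even two samples is so improbable that it does not occur here.
xs = [random.random() for _ in range(10_000)]
print(len(set(xs)) == len(xs))  # True: all 10,000 samples are distinct
```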

And that is how you discover that the universe has absolutely zero possibility of not existing at any time. The universe could never have begun to exist because it could never have not existed in the first place. It is a mathematical impossibility. Nor can the universe suffer "entropy death" or "heat death"; the thought of such is merely a mild form of terrorism.


## Wednesday, October 9, 2013

### RM: Cubic Time Dilation - Corrected Lorentz Factor

The universe is made of spinning things, not bouncing things. Although that is not 100% true in general, when it comes to subatomic particles and atoms, it certainly is. The Lorentz time dilation factor assumes the opposite. Considering a spaceship traveling through space at 50,000 mph, how many particles and activities within the ship are spinning versus bouncing up and down?

If you are not familiar with the Lorentz factor for time dilation, Wikipedia has an accurate article on it (warning: not everything on Wikipedia is).

Time is merely the measure of relative change, but an accurate measure of it can be tricky. It has always been assumed that it doesn't matter in what manner something is changing, merely how fast. The problem is that change has direction as an inherent property. Thus when something travels in a direction, the relative changing within the object involves that direction of travel.

The Lorentz transformation concerning time dilation for traveling objects assumes a photon bouncing between two mirrors. It is assumed that the speed of the light must be measured as the same for both an observer watching the traveling mirrors pass by as well as any observer on or in the traveling object. The issue is simply that the total distance of travel for the light will be perceived as different. The observer on the traveling object merely sees light bouncing directly up and down whereas the observer watching the object pass by sees that same light zigzagging up and down.

Speed is merely distance over time. And since light must be observed as traveling at the same speed regardless of one's own travel, while the distance being traveled is seen as different, the measurement of time itself must change in order to compensate for the distance difference and yield the same speed. The Lorentz factor was derived merely by calculating a factor that provides such compensation. And a reflecting photon was used to produce that factor. If the universe were made of bouncing photons, it would have been a good model.
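For reference, the zigzag argument yields the familiar factor: if the photon bounces transversely over height L while the mirrors move at speed v, then each outside-observed tick of duration t' traces a hypotenuse ct', with vertical leg ct equal to the co-moving tick, giving:

```latex
(c t')^2 = (v t')^2 + (c t)^2
\quad\Longrightarrow\quad
t = t'\sqrt{1 - v^2/c^2}
```

That square root is the Lorentz dilation factor discussed below (0.8660 at v = 0.5c).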

But if one realizes that the universe is much more accurately modeled by spinning things rather than bouncing things, that compensating factor changes. In the following, I use a "square clock" to represent a spin rather than a round clock merely to simplify the mathematics. Whether round or square should not alter the compensation factor. What is important is that complete rotations are considered rather than linear reflections.

Fig 1. Square Clock versus Lorentz Clock

In figure 1, a comparison is made concerning the observed distance of travel for the photon in a linear light-clock and a square light-clock. The green dot represents a photon traveling. If the square clock is observed passing by and the speed of that photon is to be constant, one revolution of the traveling clock must take longer than a stationary clock would have taken. The same is true for the linear clock. But the dilation factors for the two types of clocks are different.

Linear, Lorentz Light-Clock Dilation Factor; sqrt(1 - v^2/c^2)

Square Light-Clock Dilation Factor;

But the story doesn't stop there. Note that the square clock is turning in one particular orientation with respect to its travel. All rotations aren't aligned to the direction of travel. And given any spaceship type of scenario, within the ship, particles and atoms will involve rotations in all three dimensions regardless of the direction of motion.

If a square-clock is facing the direction of travel, its dilation will be the same as the linear clock's, because there is no forward-and-back motion for the photon. And for any one direction of travel, two out of three orthogonal clocks will have square-clock dilation while one retains linear dilation. By averaging the dilation factors of the three clocks so as to account for any direction of travel, we get:

**Cubic Light-Clock Dilation Factor;**

So for example, the Lorentz time dilation factor for an object traveling at 0.5 the speed of light is 0.8660. The Cubic dilation factor yields 0.8236 as a more accurate figure for real applications.
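Since the equation images did not survive, here is one plausible numerical reconstruction of the scheme described above, under two assumptions of mine: that the square clock spends half its path transverse (each leg slowed by 1/sqrt(1 - b^2)) and half longitudinal (out-and-back pair slowed by 1/(1 - b^2)), and that the cubic factor is the plain average of one linear and two square clocks. It reproduces the quoted Lorentz value of 0.8660 at b = 0.5; the cubic value it gives (about 0.8246) is close to, but not exactly, the post's 0.8236, so the author's averaging may differ in detail:

```python
import math

def lorentz_factor(beta):
    """Standard transverse light-clock dilation: sqrt(1 - v^2/c^2)."""
    return math.sqrt(1.0 - beta**2)

def square_clock_factor(beta):
    """Assumed square clock: two transverse legs (each taking 1/s times
    longer, s = sqrt(1 - b^2)) plus a longitudinal out-and-back pair
    (taking 1/(1 - b^2) times longer); dilation = stationary/moving time."""
    s = math.sqrt(1.0 - beta**2)
    moving_time = 2.0 / s + 2.0 / (1.0 - beta**2)  # in units of L/c
    return 4.0 / moving_time                        # stationary time is 4 L/c

def cubic_factor(beta):
    """Assumed average over three orthogonal clocks: one behaves like
    the linear (transverse) clock, two like square clocks."""
    return (lorentz_factor(beta) + 2.0 * square_clock_factor(beta)) / 3.0

beta = 0.5
print(round(lorentz_factor(beta), 4))  # -> 0.866
print(round(cubic_factor(beta), 4))    # -> 0.8246 (the post quotes 0.8236)
```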

For extremely precise calculations, the exact method for measuring time must be considered.


## Wednesday, September 25, 2013

### Phantom Photons

The effect of "phantom photons" in the Mach-Zehnder interferometer.

Phantom photons are formed whenever a photon is either blocked or reflected by any material. They are the result of extremely low energy affectance waves that cannot stop proceeding in a linear direction and are normally undetectable by photo-effect detectors.

A "positive phantom" (shown as light green) is formed and passes through the material when the photon is reflected, such as by a mirror in the Mach-Zehnder setup.

"Negative phantoms" are formed whenever the photon passes through the material, such as glass or a beam-splitter (shown as white).

The existence of these phantoms explains the paradox involved in the single-photon Mach-Zehnder experiment.

And I think I figured out a means to prove the existence of phantom photons.

1) Align the Mach-Zehnder interferometer for normal single-photon use

2) Place a complete block, CB, in the northern route

3) Place an adjustable mass, AM, in the shape of an extremely slim-shim as shown (as sharp edged as possible);

4) Use as narrow a constant coherent photon beam as can be obtained (laser, not the single photon emitter shown in the diagram)

5) Adjust AM gradually toward the first point where no photons can be detected

6) Remove CB;

According to both QM and TEW, both A and B should receive 50% of the photon stream.

But according to JSSRM, B should receive slightly more than A and the amount is adjustable with AM.

In addition;

As AM is adjusted from no blockage to complete blockage of the southern route, the following detection pattern should become apparent at B;

The hump in the graph noted as "Phantom Effect" should come about due to the phasing effects of the phantoms going through AM. I can't provide any measurement predictions for a variety of reasons, but that general pattern should become apparent if instructions are followed carefully.
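For comparison with the percentages quoted above, here is a minimal sketch of the textbook quantum-mechanical amplitudes for a balanced Mach-Zehnder interferometer, using the symmetric 50/50 beam-splitter matrix (1/sqrt(2))[[1, i], [i, 1]]. With both arms open, interference sends every photon to one output port; with one arm blocked (taking the first amplitude as the blocked northern route, an assumption of this sketch), each detector receives 25% and the block absorbs the other 50%. The port labels are illustrative, not the A/B of the post's diagram:

```python
# Symmetric 50/50 beam splitter acting on (upper, lower) path amplitudes.
def beam_splitter(a, b):
    s = 1 / 2**0.5
    return (s * (a + 1j * b), s * (1j * a + b))

# Photon enters the upper port of the first beam splitter.
upper, lower = beam_splitter(1.0, 0.0)

# Both arms open: mirrors add a common phase, which cancels out.
o1, o2 = beam_splitter(upper, lower)
print(abs(o1)**2, abs(o2)**2)   # ~0.0 and ~1.0: all photons at one port

# One arm blocked: its amplitude is absorbed before the second splitter.
b1, b2 = beam_splitter(0.0, lower)
print(abs(b1)**2, abs(b2)**2)   # ~0.25 and ~0.25: the block absorbs 50%
```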


## Monday, May 27, 2013
