*— “how it all fits together” —*

*The contents of this website are a lot to digest, and one might wonder, “Where does all this come from?” So, for those who are interested in knowing the background, I put together a very, very edited version of how “one thought led to another”…*

In October of 2006, my partner and I set up a company to build mathematical models of how financial markets synchronize buyers and sellers to a fair-value equilibrium. These models are based on the idea that, in order to truly identify the *“fair price”* (for both buyer and seller), financial markets rely heavily on the *“Law of Large Numbers”* (LLN) to *“fine-tune”* their way to an unbiased equilibrium (i.e. lots of independent traders, with *no herding behavior*). We believe that this does not always happen and that markets, in reality, often demonstrate a form of **“coarse damping”** to equilibrium.

In January 2013, I personally began a search for the common ground that unites **Chaos Theory** and **Complexity Theory**, and set up this blog to record my findings.

By September, after much *“experimental mathematics”*, I had gradually come to realize that *Mathematical Chaos* was just another form of *coarse damping to equilibrium*.

Just as herding can reverse the LLN and weaken the synchronization of a market equilibrium, so too the *“incremental step-size”* of an iterative mathematical loop can weaken the synchronization of a mathematical equilibrium. With a few special synchronized exceptions (i.e. windows of order), *“chaos”* emerges when you iterate a recursive procedure **“coarsely”**…

In early 2014, I started to extend the concept of the *“Reverse Law of Large Numbers”* (RLLN) to *Complex Adaptive Systems* in general, and the *economy* in particular, and began to see that it is the *evolutionary interplay* between fine and coarse damping, at every level of scale, that brings about the emergence of a **“Complex Adaptive Economy”**.

In February 2015, I watched a Khan Academy video on the mathematical beauty of Euler’s Identity (e^{iπ} = −1), and it struck me that the reason all the pieces (e, i, and π) fit together so neatly was that the infinitely small step-size inherent in (e^{i}) allowed the mathematics to **“fine-tune”** itself to the circumference of the circle.

Later, I began to think about what would happen if I were to walk around a circle taking *“coarser steps”*; say, steps of 2 degrees per step. I would still rotate around the plane as before, only this time not in perfect circular form. Instead of taking an almost infinite number of tiny steps to walk around the circle, a coarse step-size of 2 degrees would require only 180 steps. If I made the step-size 5 degrees, then I would need only 72 steps; 10 degrees would require 36 steps; and so on.

Furthermore, only certain coarse step-sizes would fit (non-harmonic step-sizes such as 7, 11, and 13 degrees, for instance, brought about the type of symmetry-breaking that we see in both chaos and complexity), and the *coarse sizes that did fit* were identical in nature to the previously encountered synchronized exceptions to chaos.
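The step-size arithmetic above can be sketched in a few lines of code (the function name is mine, purely for illustration): a coarse step-size “fits” the circle exactly when it divides one full turn of 360 degrees evenly.

```python
# A coarse step-size "fits" the circle when it divides one full
# turn (360 degrees) evenly, so a whole number of identical steps
# closes the walk in a single lap.

def steps_around_circle(step_deg):
    """Return the number of steps for one lap, or None if the
    step-size does not divide 360 evenly (the symmetry-breaking case)."""
    if 360 % step_deg == 0:
        return 360 // step_deg
    return None

print(steps_around_circle(2))   # 180 steps
print(steps_around_circle(5))   # 72 steps
print(steps_around_circle(10))  # 36 steps
print(steps_around_circle(7))   # None: 7 does not divide 360
```

Step-sizes such as 7, 11, and 13 degrees return `None` here: no whole number of such steps lands back on the starting point within a single lap.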

Moreover, I thought, if this applies to rotations, it probably also applies to *“oscillations”*. Since a *step-size per step* is effectively like a *step-size per unit time*, this led me to the idea that **maybe** angular frequencies represent the NUMBER of some fundamental *“units of oscillation”* per unit time.

The idea was basically that, perhaps, how many *“units of oscillation”* a particle had determined how much energy it would have. And so, in the formula for an oscillation (Ae^{i(ωt)}), the amplitude (A) could actually represent the amount of these fundamental units, or in other words the maximum strength of (ħω), and the term (e^{i(ωt)}) could represent the *changing phase of the oscillation over time*. And that was the genesis of the idea that led me to **“Euler’s Secret Identity”**…

In June 2015, I started to think about the difference between an *“oscillating particle”* and a *“probability wave”*. An oscillating particle would carry with it its own phase information, but a wave equation has to split that same information up into 2 separate terms.

In the classical world, *“waves”* travel through a medium, because classical waves involve the transport of energy without the transport of matter. In a water wave, for instance, particles oscillate in situ while the wave moves from one particle to the next.

The equation for a wave can be written as Ae^{i(kx−ωt)}. Since wave equations were formulated to deal with multiple particles, (ωt) represents a particle’s phase over time and (kx) represents the phase difference between particles. I began to think about the implications of having these 2 separate terms within the overall phase term of a wave.
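The two-term phase can be seen in a minimal numeric sketch (the values and function names here are my own illustrative assumptions, not from the text): particles at different positions x share the same angular frequency ω but sit at different points in their cycle.

```python
import cmath
import math

# Classical traveling wave: psi(x, t) = A * exp(i*(k*x - w*t)).
# (w*t) is a particle's phase over time; (k*x) is the phase
# difference between particles at different positions.

A = 1.0
k = 2 * math.pi  # wave-number for a unit wavelength
w = 2 * math.pi  # angular frequency for a unit period

def wave(x, t):
    return A * cmath.exp(1j * (k * x - w * t))

# Snapshot at one instant: each position is at a different phase.
t = 0.25
for x in (0.0, 0.25, 0.5):
    z = wave(x, t)
    print(f"x={x}: phase = {cmath.phase(z) / (2 * math.pi):+.2f} cycles")
```

All three oscillators tick at the same rate, but the (kx) term shifts each one a fixed fraction of a cycle relative to its neighbors, which is exactly the book-keeping a medium-based wave equation needs.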

The Schrödinger Wave Equation is a wave equation for *“matter waves”* and the bedrock of Quantum Mechanics. It is well known that Schrödinger did not derive his famous equation; he effectively invented it. It was the concept of *“Wave-Particle Duality”* (which had by that stage been around for some 20 years) that led Schrödinger to think about how to formulate an equation that incorporated both wave and particle behavior.

Wave equations are designed to deal with waves that move through a medium. The equation has to account for multiple particles all oscillating at the same time, at the same oscillation speed, but each at a different phase in its cycle. This, however, becomes a bit of a problem if you don’t have a medium.

Schrödinger got around this little problem by slotting a wave function inside a conservation of energy equation. However, in doing so he created a completely different type of problem: he unfortunately gave birth to the concept of *“particle superposition”*, a concept which was to become the ultimate source of all the celebrated *“weirdness”* of quantum mechanics.

So, if Quantum Mechanics is really all about oscillating particles rather than probability waves, then how can Schrödinger’s Wave Equation (SWE) produce such phenomenally accurate results?

For three reasons. The SWE is an excellent fit with the concept of an oscillating particle because: firstly its wave function contains both the phase-over-time (angular frequency) and phase-over-distance (wave-number); secondly this function is incorporated inside a conservation of energy equation; and lastly, but most importantly, this formulation only truly works because of *“The Law of Large Numbers (LLN)”*.

The SWE, in reality and almost inadvertently, basically describes the probabilistic AGGREGATE behavior of multiple oscillating particles (or the probabilistic behavior of a single oscillating particle on AVERAGE); *but* it relies on the LLN to convert this *“probabilistic behaviour”* into *“accurate predictions”*. This means that the SWE does not work in the singular; it only works in the collective (thus leading to the need for the superposition principle). But with quantum mechanical systems (as with thermal systems), understanding the collective behaviour of billions of tiny particles (or the average behaviour of a single particle over time) is usually all that we need.

So **perhaps** the reason Schrödinger’s Wave Equation is so successful is simply because it bears all the hallmarks of the behaviour of an *average oscillating particle*. The equation for a *single moving & oscillating particle* also exhibits conservation of energy, but requires that we merge (kx) and (ωt) together into a single term (kvt). This equation could be written in the form Ae^{i(kvt)}, and would therefore be an equation for a moving & oscillating particle which carries with it *its own phase information*.

Furthermore, merging the two terms offers the added benefit of potentially explaining the wave-like behaviour of the famous double-slit experiment without the need for the concepts of *“particle superposition”* or a *“probability wave”* and all the other crazy stuff that goes with it. And that was the idea that led me to the concept of **“Oscillating Photons”**.
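The merged single-term form can be checked in a hedged numeric sketch (the dispersionless relation ω = kv and all the values below are my own illustrative assumptions): for a particle moving at speed v, the single-term phase (kvt) advances exactly like an ordinary oscillation phase (ωt), so the phase information rides along with the particle.

```python
import cmath

# Speculative single-particle form: psi(t) = A * exp(i * k * v * t).
# Assuming w = k * v (no dispersion), the merged term (k*v*t)
# reproduces the plain oscillation phase (w*t), so the particle
# carries its own phase information as it moves.

A = 1.0
k = 3.0    # illustrative wave-number
v = 2.0    # illustrative particle speed
w = k * v  # assumed dispersionless relation

def moving_particle(t):
    return A * cmath.exp(1j * k * v * t)

def stationary_oscillator(t):
    return A * cmath.exp(1j * w * t)

for t in (0.0, 0.1, 0.5):
    diff = abs(moving_particle(t) - stationary_oscillator(t))
    print(t, diff)  # difference is ~0.0 at every t
```

Under that assumption the two expressions are identical at every instant, which is the sense in which a single moving & oscillating particle needs only one phase term rather than two.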

Moreover, in this paradigm, the higher the frequency, the coarser the incremental step-size of oscillation. And the interesting thing about this is that most higher-frequency *“coarse oscillations”* cannot synchronize regular, tick-tock-like symmetrical behavior about equilibrium. On the contrary, most high-energy coarse oscillations appear to produce asymmetrical behavior about equilibrium, causing *“matter-like structures”* to emerge, as we can see from an examination of *“Coarse Harmonic Motion”…*

In October 2015, it became obvious to me (from feedback) that *“Coarse Damping”* was too contorted a term for most people to grasp, and that it would be better to highlight how *“Chaos and Complexity”* are simply dynamics that are **difficult to compress**…

So, I expanded the explanation of how *“Chaos and Complexity”* are different manifestations of systems that *“coarse damp to equilibrium”* by explaining how this *“coarse damping”* is always the result of **“Incompressible Dynamics”**.

Furthermore, in order to quantify the unpredictability of these incompressible dynamics, I realized that it was possible to borrow the concept of *“Information Entropy”* from Computer Science…
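That Computer Science concept can be made concrete in a few lines (the sample strings below are purely illustrative): Shannon’s information entropy scores a fully predictable, compressible sequence at 0 bits, and maximal unpredictability at log2 of the alphabet size.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy in bits: H = sum over symbols of p * log2(1/p).
    0 bits means fully predictable (compressible); the maximum,
    log2(alphabet size), means maximally unpredictable."""
    counts = Counter(sequence)
    n = len(sequence)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(shannon_entropy("AAAAAAAA"))  # 0.0 -> fully compressible
print(shannon_entropy("ABABABAB"))  # 1.0 -> one bit per symbol
print(shannon_entropy("ABCDABCD"))  # 2.0 -> two bits per symbol
```

The more incompressible a system’s dynamics, the closer its output sits to the maximum-entropy end of this scale, which is what makes entropy a natural yardstick for the unpredictability of chaos and complexity.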