The exponential function *(y = e^{x})* is a definable amount of incremental change – a very useful function for defining the **“unit step-size of incremental change”**. *“e”* itself is a specific value of the above exponential function; 2.718281828… is the value of y when x = 1.

^{[If you hate any form of maths, what follows is not for you. But if you can track some basic algebra and understand a little mathematical terminology, what follows is worth the read; because as it turns out, the concept of “incremental step-size” is quite an important idea if you want to understand how to build a universe…]}

**A Quick Intro**

It is often considered amazing that *“e^{x}”* appears to crop up virtually everywhere in science (from physics to biology, from economics to climate change), but in truth it is not amazing at all! The one constant in this universe is *“change”*; and the reason *“e^{x}”* appears everywhere is that *“e^{x}”* is all about change!

Mathematically, *“y = e^{x}”* is an exponential function. An exponential function is any function of the form

**y = f(x) = a ^{x}**

For practical purposes we can think of exponential functions as a mathematical tool that allows us to define a *multiplicative step-size of change* (i.e. it allows us *to define a macroscopic step-size of change using a base microscopic unit*). Thus y = 2^{x} is an amount of change defined using a base step-size of “2”, with “x” being the required number of steps.
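The idea of a multiplicative step-size can be sketched in a few lines of Python (the function name here is purely illustrative):

```python
def multiplicative_steps(base, steps):
    """Apply a multiplicative step-size `base`, `steps` times over."""
    y = 1
    for _ in range(steps):
        y *= base  # one macroscopic amount built from repeated micro-multiplications
    return y

# y = 2**x with base step-size 2 and x = 10 steps
print(multiplicative_steps(2, 10))  # 1024, the same as 2**10
```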

### Part 1 – A Few Histories

**A Quick History of “e”**

At the heart of virtually all of modern-day science is the ability to model the behaviour of real events using mathematics. For this we can thank both Isaac Newton and Gottfried Leibniz who, around about the same time in the late 1600’s, both independently developed the mathematics of “Calculus”.

Calculus is essentially about “Amounts of Change” and “Rates of Change”. Calculating rates of change is known as “Differential Calculus”. In the simplest possible terms, if we know the amount of change that occurs over time, then we can calculate the underlying rate of change. So for instance, if we know that we have travelled 150 miles in 3 hours, we can easily figure out that our average speed (i.e. average rate of change of location) has been 50mph.

“Differential Equations” are the objects of differential calculus. Differential equations are mathematical equations, which contain mathematical rates of change.

In the years following the work of Newton and Leibniz, many mathematicians applied calculus to compute the derivative (i.e. the rate of change) of many different types of mathematical function. They struggled, however, with the exponential function f(x) = a^{x}.

An exponential function is one that takes a constant (a) and raises it to the power of a variable (x). So imagine we are trying to figure out the derivative of y=2^{x}. Calculus tells us that the instantaneous rate of change can be expressed as the *“limit”* of an ever-decreasing amount of incremental change (Δx). Thus the derivative of y=2^{x} is the

**Limit _{(as Δx → 0)} of (2^{x+Δx} – 2^{x}) / Δx**

^{[Note: Going forward we will simply use “Limit” as shorthand for “Limit as Δx decreases towards zero”.]}

A little bit of algebra converts the above into

**Limit of ((2^{x})(2^{Δx}) – 2^{x}) / Δx**

A little bit more converts this into

**Limit of (2^{x})(2^{Δx} – 1) / Δx**

And finally, since (2^{x}) does not have a Δx term, we can take it outside the limit, which means the derivative of y=2^{x} becomes

**(2^{x})(Limit of (2^{Δx} – 1) / Δx)**

So the derivative (^{dy}/_{dx}) of the function y=2^{x} is in fact just the function itself multiplied by some number (which is given by the limit). So this seems nice and easy; all we really have to do to compute the derivative is calculate a limit to find the appropriate number. However, calculating these limits turned out to be a surprisingly difficult thing to do…

---

We can get an approximate result using a simple guess of Δx (say Δx = .00001)

**(2^{.00001} – 1) / (.00001) = 0.6931**

But approximate results were considered far from ideal; mathematicians of the day wanted to find a consistent methodology. It was Swiss mathematician Leonhard Euler (1707-1783) who eventually solved the problem sometime around 1730. Euler reasoned that if

**Limit of ((2^{Δx} – 1) / Δx) ≈ (2^{.00001} – 1) / (.00001) = 0.6931**

and

**Limit of ((10^{Δx} – 1) / Δx) ≈ (10^{.00001} – 1) / (.00001) = 2.3026**

then surely there must be a number (to which Euler assigned the symbol “e”) in between 2 and 10 such that

**Limit of ((e^{Δx} – 1) / Δx) ≈ (e^{.00001} – 1) / (.00001) = 1.0000**

It turns out that that number is approximately equal to 2.718281828459 – a number, curiously enough, previously stumbled upon by Jacob Bernoulli when studying compound interest some 50 years earlier…
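Euler’s reasoning can be mimicked numerically. The sketch below is a modern reconstruction (not Euler’s actual method): it approximates the limit with a small fixed Δx, then bisects between 2 and 10 for the base whose limit quotient equals 1:

```python
def limit_quotient(a, dx=1e-7):
    """Approximate the Limit of (a**dx - 1) / dx using a small fixed dx."""
    return (a ** dx - 1) / dx

# Bisect between 2 and 10 for the base where the quotient crosses 1.
lo, hi = 2.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if limit_quotient(mid) < 1.0:
        lo = mid
    else:
        hi = mid

print(round(mid, 5))  # ≈ 2.71828
```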

---

So the limit of ((2.71828…^{Δx} – 1) / Δx) as Δx goes to zero is 1; which means **the derivative of e^{x} is (1)(e^{x}) = e^{x}.**

Now this made computing the derivative of any function involving “e” remarkably easy. The derivative of e^{x} is (1)(e^{x}) and, by use of the so-called chain rule, the derivative of e^{ax} is (a)(e^{ax}). But what was really so great about this discovery was that it simplified computing the derivatives of ALL exponential functions!

So for example, to differentiate y=2^{x}, mathematicians would now first convert the number 2 into a function of *“e”*. To do so they firstly needed to answer the question: what power do we raise *“e”* to, in order to make it equal to *“2”*?

Fortunately this question, it turns out, was actually a simple logarithm question; and logarithms had been around for over 100 years by that time…

Using *“e”* as the base for the logarithm, the log of *“2”* to the base *“e”* — written as ln(2) — gives us the power to which to raise *“e”*, in order to get to *“2”*; and it turns out that that power is approximately 0.6931. So

**y = 2^{x} = (e^{0.6931})^{x} = e^{(0.6931)(x)}**

**⇒ ^{dy}/_{dx} = (0.6931)(e^{(0.6931)(x)}) = (ln(2))(2^{x})**

Similarly

**y = 10^{x} = (e^{2.3026})^{x} = e^{(2.3026)(x)}**

**⇒ ^{dy}/_{dx} = (2.3026)(e^{(2.3026)(x)}) = (ln(10))(10^{x})**

So the derivative of any exponential function y = a^{x} is the function itself (a^{x} ) multiplied by ln(a). In general form we write ^{dy}/_{dx} = ln(a)(a^{x}). Thus “ln” became known as the *“natural logarithm” *of all exponential functions, and “e” became known as the *“base of the natural logarithm”*.
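The general rule ^{dy}/_{dx} = ln(a)(a^{x}) can be sanity-checked numerically with a central-difference estimate (a standard numerical trick, not anything specific to this article):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    """Central-difference estimate of the derivative of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The derivative of 2**x at x = 3 should be ln(2) * 2**3 ≈ 5.545
estimate = numeric_derivative(lambda t: 2 ** t, 3.0)
exact = math.log(2) * 2 ** 3
print(estimate, exact)
```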

**A Quick History of Logarithms**

In the latter part of the 16^{th} century the “Scientific Revolution” was really just beginning to take off, but the nuts and bolts of this Scientific Renaissance involved a lot of data and an enormous amount of mathematical calculations. These calculations, more often than not, involved multiplication and division of some very big numbers, which, using only pen and paper, would have been very tedious, very time consuming, and very easy to get wrong.

Scottish Mathematician John Napier (1550-1617) set out to simplify these calculations by converting much of the laborious multiplication and division, into straightforward addition and subtraction.

Napier’s idea was really rather simple; he used a very basic feature of “Geometric Progressions”.

A geometric progression is a progression by a common ratio: a base number raised to successive powers. It had long been known that multiplying any two numbers in a geometric progression is the same as adding their powers. Napier’s idea was to relate (in table form) each number in a geometric progression with its associated power, the powers forming a simple arithmetic progression with a common difference of 1.

For example, below is a simple table which relates the geometric *“doubling”* progression with its successive powers.

**Power (x): 0, 1, 2, 3, 4, 5, 6, 7, 8, 9**

**Number (2^{x}): 1, 2, 4, 8, 16, 32, 64, 128, 256, 512**

So for example, if we want to multiply 8 by 64, this is the same as adding the powers 3 and 6 and then looking up the number in the table that corresponds to the power of 9.

**8 * 64 = 2 ^{3} * 2^{6} = 2^{9} = 512**
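Napier’s table-lookup trick is easy to sketch (a toy doubling table, for illustration only):

```python
# Power table for the doubling progression: number -> power, and back again.
power_of = {2 ** n: n for n in range(11)}
number_of = {n: 2 ** n for n in range(11)}

def table_multiply(a, b):
    """Multiply two table numbers by adding their powers and looking up the sum."""
    return number_of[power_of[a] + power_of[b]]

print(table_multiply(8, 64))  # powers 3 + 6 = 9, and 2**9 = 512
```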

Using this methodology is great if one is looking to multiply numbers that happen to come in steps of 2, but it is of no use for any numbers in-between (or, for that matter, numbers less than 1). To solve this “problem of gaps” Napier decided to make the ratio of the incremental progression much much smaller…

Napier chose a ratio of 1 over 10 million (or 0.0000001). He constructed a new table, which started off at 10,000,000 and worked its way backwards. *[Why Napier chose this backwards progression is a longer story, but long story short, it is likely he did so because he did not want to deal with the then-newish concept of decimal fractions.]* The construction of Napier’s table took almost two decades, but eventually, in 1614, John Napier published his work in a book titled *“Description of the Wonderful Rule of Logarithms”*.

_{[Napier coined the word logarithm from the fusion of the Greek words “logos” (meaning ratio) and “arithmos” (meaning number). So logarithms, as Napier conceived them, are basically the “ratio of numbers”.]}

A year after publication Henry Briggs, a younger contemporary of Napier, suggested some modifications to Napier’s work. With Napier’s agreement Briggs set to work changing the base ratio to the new “decimal” system (i.e. changing the base ratio from 0.0000001 to 10 such that log_{10}(10) = 1), and changing the starting point from 10,000,000 to 0 such that log_{10}(1) = 0; and with these changes in place he then proceeded to work his logarithms *forward*. Briggs published his work in 1624, and in doing so log to the base 10 was born!

This new mathematical concept proved incredibly useful to renaissance scientists and quickly grew in popularity, such that by the end of the 17^{th} century the use of the decimal system and logarithms was widespread…

**Napier very nearly found “e”**

It is a pity Napier concocted the term *“logarithm”*, as this convoluted word gives the impression of something quite complicated, when in reality the idea behind logarithms is something really quite simple.

^{[As with so many things in maths and physics, naming really has let the side down. Badly named things – like Chaos Theory, Imaginary Numbers, and Logarithms – have only ever served to confuse.]}

What Napier effectively did – although he might not have thought about it this way himself – was to build a simple relationship between *“incremental step-size”* and *“number of steps”*. A geometric progression progresses by the base ratio, and so this base ratio is effectively the size of each incremental step. And since the exponents/powers progress in a simple counting fashion, this means that the exponent is simply the number of incremental steps.

Simple though this idea was, the real juice came in Napier’s execution of the idea; Napier understood that to create a practical and usable table, the incremental step-size needed to be *very very small*.

So although Napier is famous for the introduction of logarithmic tables, history might be better served by recognizing that his major contribution to mathematics was in fact the introduction of *the infinitesimally small increment of change*.

In effect Napier had sought to reduce the coarseness of a geometric progression by reducing the size of incremental change. Had he been willing to deal with decimal fractions, and worked his way forwards rather than backwards, he could conceivably have produced a progression starting at 1 and growing by a ratio of (1 + ^{1}/_{10,000,000}) at each step.

This progression fills in the gaps, as Napier had wanted, by making the progression very **fine-grained**. But the interesting thing about starting at 1 and using this tiny incremental step is that if we take exactly 10 million of these steps (each of size ^{1}/_{10million}), the corresponding number in the underlying geometric progression turns out to be 2.718281828459045…

So we see that a very small incremental change applied an exact number of times generates the number “e”…
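That claim is easy to verify directly: compounding a step of ^{1}/_{10,000,000} exactly ten million times lands very close to e (floating-point arithmetic keeps this accurate to roughly seven decimal places):

```python
n = 10_000_000        # ten million steps
step = 1 + 1 / n      # each step of size 1/10,000,000
approx_e = step ** n  # take exactly n steps
print(approx_e)       # ≈ 2.7182816..., converging on e
```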

**Continuous Compound Interest**

*“e”* is often thought of as related to “*continuous”* compound interest but this is misleading! In reality *“e”* could more accurately be thought of as simply an infinitesimally small growth rate applied an infinite number of times; and this process can happen either continuously __or__ non-continuously. The only reason that it is generally perceived that the compounding needs to happen continuously is because of the way it was originally formulated.

In the late 1600s the Swiss Mathematician Jacob Bernoulli discovered “e” when he compared the difference between a simple interest payment after a single fixed term, versus the continuous payment of interest and the subsequent compounding effect over the entirety of the fixed term.

Bernoulli found that an interest rate of 100% per annum compounded annually grows $100 into $200 in one year, but if it is compounded every 6 months then the $100 grows into $225 after one year, and compounding every 3 months grows the $100 into $244.14. This process — of reducing the compounding period while simultaneously increasing the number of periods — can effectively be taken to infinity; but what interested Bernoulli the most was that in doing so the number doesn’t grow and grow; instead it ultimately converges on $271.83. Bernoulli found that compound interest follows the formula

**A = P(1 + ^{r}/_{n})^{n}**

^{Where “A” = the dollar amount after n periods, “P” is the original principal amount (in this case $100), “r” is the original rate of interest, and “n” is the number of compounding periods.}
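Bernoulli’s numbers drop straight out of the formula; a quick sketch (with r = 100% over a one-year term):

```python
def compound(P, r, n):
    """A = P * (1 + r/n)**n: principal P at rate r, compounded n times over the term."""
    return P * (1 + r / n) ** n

print(compound(100, 1.0, 1))      # annually:       200.0
print(compound(100, 1.0, 2))      # every 6 months: 225.0
print(compound(100, 1.0, 4))      # every 3 months: ≈ 244.14
print(compound(100, 1.0, 10**6))  # ≈ 271.83, i.e. 100 * e
```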

If n is taken to infinity the number 2.718281828459045… emerges; and thus this is where *“e”* became associated with *“continuous”* compound interest. But this is misleading, because it would be equally valid to say that…

This annual compounding (over 1024 years) yields the same dollar amount (to 2 decimal places) as continuous compounding within one year. So it doesn’t matter whether the interest payments are compounded continuously or non-continuously – in fact it doesn’t even matter if all the steps are clustered together or equally spaced out. All that matters is that **ALL** the incremental steps are taken for a given step-size; and when this happens we always get the same final result, the same total return!

Thus we see that *“e”* has nothing explicitly to do with continuous time! But we also see that once again an infinitesimally small growth rate applied the __exact__ inverse number of times generates the number *“e”*….

But why $271.83 rather than any other amount???…

**Part 2 – Transcendental *“e”***

__2.718281828459… Messy Macro (but very Neat Micro)__

Any large number of compounding steps applied over a set period of time will ultimately yield a final total return. This final total however is effectively the same as if we had taken one single large geometric step.

Conversely, any single large geometric step can be broken down into a larger number of smaller, equally sized incremental steps. For instance a single geometric step of size 2 is just a doubling of the principal amount (i.e. it is a 100% return on your investment), but this same return on investment can be generated by compounding a number of smaller incremental returns (for example, 693 steps of size ^{1}/_{1000} each).

Similarly, other macro step-sizes (such as *“e”* and 10) can be generated by equivalent compounding methods: each single *“macro-step”* can be written as a function of 1000 *“micro-steps”* of size ^{1}/_{1000}.

Of each of these macro steps, however, only *“e”* has a micro step-size which is the exact inverse of the number of steps (i.e. for *“e”* the step-size is ^{1}/_{1000} and the number of steps is 1000).

In other words, if we divide any whole unit (i.e. 1) into a large number (n) of smaller sub-units (i.e. ^{1}/_{n}) and then compound all n of the resulting growth steps together (i.e. (1 + ^{1}/_{n})^{n}), the answer we get will always be some approximation (dependent on the size of n) of *“e”*…

**Step-Size vs. No. of Steps**

To arrive at the macro-number *“e”* the number of steps (n) must be the exact inverse of the micro step-size (^{1}/_{n}); so if any other number of steps were to be applied to the step-size (^{1}/_{1000}) it would bring us to a different macro number. For instance 693 micro steps of size (^{1}/_{1000}) brings us to 2; and 2304 micro steps of size (^{1}/_{1000}) brings us to 10…

In truth these macro-numbers are only correct to two decimal places. If we want to increase the precision, we need to reduce the step-size (^{1}/_{n}) and increase the number of steps (n) by a proportional amount.

As we take (n) to infinity, we tend towards infinite precision of the macro-number 2.

The same of course holds true for the macro-number *“e”…*
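The step-counts quoted above are easy to check (693 and 2304 are the counts from the text; they are roughly 1000·ln 2 and 1000·ln 10 rounded to whole steps):

```python
step = 1 + 1 / 1000   # micro step-size of 1/1000

print(step ** 693)    # ≈ 2  (693 steps)
print(step ** 2304)   # ≈ 10 (2304 steps)
print(step ** 1000)   # ≈ e  (number of steps = exact inverse of step-size)
```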

**Manufacturing “e”**

In everyday speak we could say that *“e”* is the number we converge to when we take an infinitesimally small growth rate and apply it an infinite number of times.

Mathematically however *“e”* is defined as a limit:

*e* = Limit _{as n → ∞} (1 + ^{1}/_{n})^{n} = 2.718281828459045…

For practical purposes, *“e”* can be difficult to pin down because the question arises as to what size *“n”* do we want to use to calculate *“e”?*

If, for instance, we set n=1000 we find that the exact value of (1 + ^{1}/_{1000})^{1000} is **2.71**692393223552. So even with only 1000 steps of 1.001, *“e”* is correct to 2 decimal places; with 10,000 steps of 1.0001 it is correct to 3 decimal places; with 100,000 steps of 1.00001 it is correct to 4 decimal places; and so on to infinity…

Making the step-size smaller and smaller (and the number of steps correspondingly larger) will of course give more accurate trailing digits of *“e”*; but since there is no limit to how much a number can be divided, there is therefore no limit to the number of trailing digits we can add to our calculation of *“e”*. This is why *“e”* is described as *“transcendental”*…

That *“e”* is transcendental is simply a consequence of the fact that *“e”* is actually a manufactured quantity: defined by using limits to infinity, and infinity is an idea not a number!

However, although the concept of infinity might be vague, **the concept of “e”**, on the other hand, *is actually all about very finely-grained precision…*

**Part 3 – Non-Transcendental *“e”***

**Micro-e**

*“e”* is said to be *“transcendental”* but this is only because *“e”* is defined with a variable (n) that has no ceiling, and thus is said to go to infinity. Infinity is, to my mind, the bastard child of mathematics; useful in terms of defining mathematical proofs, but detrimental to intuitive understanding.

As we have seen, the concept and formula for *“e”* is neat, and very precise, because *the number of steps (n) is always the exact inverse of the tiny step-size (^{1}/_{n})*. So in order to discuss the __preciseness__ of *“e”*, it is better to limit the influence of infinity and define *“e”* with very precise numbers.

To do so, we will use both a very large number and a very small number which are exact inverses of each other. We will use the symbol Ħ for the very large number, and the symbol ħ for the very small number.

**Ħ = 10,000,000,000,000,000,000,000,000,000,000,000 = 10 ^{+34}**

**ħ = 0.000,000,000,000,000,000,000,000,000,000,000,1 = 10 ^{-34}**

Since ħ = ^{1}/_{Ħ} we can write a precision equation for *“e”* without *“limits”*

**e = ( 1 + ħ )^{Ħ}**

This is **a precise non-transcendental version of “e”** which describes how a growth rate of ( 1 + ħ ) is applied precisely Ħ number of times. This definition of *“e”* can be thought of as *“Macro-e”*; and moreover we can use this same definition to also define the concept of a *“Micro-e”*

**e^{1} = ( 1 + ħ )^{Ħ}**

**e^{1/Ħ} = ( 1 + ħ )**

**e^{ħ} = ( 1 + ħ )**
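With ħ = 10^{-34}, ordinary floating-point arithmetic is useless here: in 64-bit floats, 1 + 10^{-34} rounds to exactly 1. A sketch using Python's `decimal` module (the 50-digit precision is an arbitrary choice) keeps the micro-step honest:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50      # enough digits to see ħ at work
h_bar = Decimal(10) ** -34  # ħ = 10^-34, the very small number
H_bar = 10 ** 34            # Ħ = 10^+34, its exact inverse
micro_e = 1 + h_bar         # micro-e: e^ħ = (1 + ħ)
macro_e = micro_e ** H_bar  # macro-e: (1 + ħ)^Ħ

print(macro_e)              # agrees with e to roughly 34 digits
```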

__*“d”* — a *“Fundamental Unit”*__

So here, at last, we finally get to what is so important about *“e”*.

Micro-e is what’s important about the concept of *“e”*. Micro-e is what makes the exponential function (e^{x}) so useful to the *“calculus of infinitesimals”* (as calculus was historically known).

In early calculus the symbol *“d”* was used to represent an infinitesimally small unit of change. So in keeping with history let us assign *“d”* to be equal to micro-e

**d = e^{ħ} = ( 1 + ħ )**

In this incarnation *“d”* (or *“micro-e”*) becomes a unit of fine-grained precision, the smallest possible linear step-size; we could think of *“d”* as a *Fundamental Unit of Linear Change*.


__Defining the “Step-Size of Change”__

So **(1+ħ)** is a *“Unit Step-Size”* **of linear change**; a finely-grained unit of mathematical change. We will use this very fine-grained step-size to define other, more coarsely-grained mathematical steps!

**( 1 + ħ )^{xĦ} = d^{xĦ} = (e^{ħ})^{xĦ} = (e^{ħĦ})^{x} = e^{x}**

Thus (1+ħ) applied xĦ number of times yields the exponential function (e^{x}).
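Since 10^{-34} is beyond float precision, the sketch below uses a float-friendly stand-in (ħ = 10^{-8}) to illustrate the same identity: (1 + ħ) applied xĦ times reproduces e^{x}:

```python
import math

h_bar = 1e-8            # stand-in for ħ (10^-34 is too small for floats)
H_bar = int(1 / h_bar)  # Ħ, the exact inverse of ħ
x = 3

macro = (1 + h_bar) ** (x * H_bar)  # (1 + ħ)^(xĦ)
print(macro, math.exp(x))           # both ≈ 20.0855, i.e. e^3
```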

So *(1+ħ)* allows us to define, *with precision*, any step-size *(e^{x})* in terms of this smallest, almost infinitesimal, unit of change. Using *“e^{x}”* we can easily jump back and forth between various macro step-sizes; in effect we can define any step-size we like, from infinitesimally small to infinitely large, using an appropriate power (positive for growth or negative for decay) of *“e^{x}”*. Thus **“e^{x}” is a definable step-size – it is a defined amount of change that occurs in a defined amount of time.**

So now that we have 1) used the precision unit *“d”* to define the minimum step-size per unit time, and 2) used the precision step *“e^{x}”* to define a desired step-size in terms of the precision unit *“d”*, we are finally in a position to define a function to apply the desired step-size (e^{x}) over time (t).

**Part 4 – *“e”* over Time**

__Step-Size per Unit Time__

An exponential function is any function of the form

**y = f(x) = a ^{x}**

We can think of all exponential functions as step functions, with the base being the step-size, and the power being the number of steps. Exponential functions are particularly useful for dealing with growth and decay over time. Thus more often than not we find that exponential functions are in fact functions of time.

**y = f(t) = a ^{t}**

Let’s say a bacterium population doubles in size every 6 minutes. This is a simple doubling function and can be written as y=2^{t} _{[where y is the population, 2 is the multiplicative step-size, and t is the time period in units of 6 minutes]}. Using this function, we can say that after 1 hour _{(i.e. after 60 minutes or 10 time periods of 6 minutes each)} the population is y = 2^{10 } = 1024.

We can, of course, generalize this idea to any exponential growth (and decay) over time by expressing the step-size in terms of e^{x} and thus the doubling function becomes

**y = 2^{t} = (e^{0.693})^{t} = e^{0.693t}**
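The two forms of the doubling function agree, as a quick check shows (ln 2 ≈ 0.693 is the exponent used above):

```python
import math

def doubling(t):
    """Population after t periods, as a base-2 step function."""
    return 2 ** t

def doubling_via_e(t):
    """The same function re-expressed with base e: e^(ln(2) * t)."""
    return math.exp(math.log(2) * t)

print(doubling(10), doubling_via_e(10))  # both ≈ 1024 after 1 hour
```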

So by combining the step-size expressed in terms of *“e^{x}”* with the number of steps expressed in terms of *“t”*, we arrive at a *generalized expression for* **incremental change over time**…

**y = f(x,t) = e^{xt}**

**Part 5 – Conclusion**

When it comes to using mathematics in the real world, “*e^{x}*” stands head and shoulders above every other function. “*e to the power of x*” is the mathematical function that allows us to define the coarseness of incremental change — by defining with precision *the size of a unit step of incremental change*. Doing so allows us to use “*e to the power of xt*” to examine all types of *incremental behaviour over time*.

So forget about the misleading connection with *“continuous”* compound interest; continuous change is just one end of a spectrum of incremental change, which can range from very coarse to very smooth (so smooth it is virtually continuous).

“*e*” is a mathematical tool to define, with precision, an incremental step-size of change. And this ability to mathematically adjust the incremental step-size of change, allows us to easily jump back and forth between coarse-grained and fine-grained behaviour over time.

Furthermore (and this is the real prize in truly understanding e), just as “*e to the power of x*” is the mathematical tool that allows us to define the incremental behaviour of growth (or decline), “*e to the power of ix*” is the equivalent mathematical tool that gives us the ability to define the incremental behaviour of rotations and oscillations; and this little mathematical gem means that being able to define an infinitesimally small unit of *“angular change”* will result in what is generally perceived to be the most beautiful equation in all of mathematics…