This article deals with how computers calculate trigonometric ratios, logarithms, and exponents. We will explore the mathematics behind these functions and end with a proof of the famous identity e^(iπ) = -1. The article should be a pretty light read for anyone familiar with basic differentiation formulas such as those for cos(x), sin(x), and e^x. Even if you aren’t aware of these formulas, I’ve tried my best to make the article approachable for a general audience.

Let’s start by talking about polynomials. A polynomial is a function of a variable that involves only addition, subtraction, and multiplication, so the variable only ever appears raised to non-negative whole-number powers. Polynomials come in different degrees, and the degree of a polynomial is the highest power of the variable appearing in it. We denote the function by f(x), which represents the mathematical operations we carry out on our variable x. Our general n-degree polynomial is given by:
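f(x) = a_0 + a_1x + a_2x^2 + a_3x^3 + … + a_nx^n

where a_0, a_1, …, a_n are the constant coefficients.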

Imagine you are struck by a particularly brilliant thought: can any function f(x) be represented as one of these polynomials? For whatever reason, you decide to first try to express sin(x) and cos(x) this way. You enthusiastically write down your first equation:
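sin(x) = a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 + a_5x^5 + …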

You cleverly come up with the idea of plugging in x = 0, which eliminates every term containing x, since zero raised to any positive power is zero.
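Putting x = 0 into the equation leaves only sin(0) = a_0, and since sin(0) = 0, the constant term a_0 must be zero.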

Now that the constant is out of the way, you get down to the task of figuring out the remaining coefficients of this polynomial. You learned somewhere that the derivative of sin(x) is d(sin(x))/dx = cos(x), that the derivative of ax^n is d(ax^n)/dx = n·a·x^(n-1), and that the derivative of a constant c is zero. You write these results down, along with a few other things you learned that you think might be useful:
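d(sin(x))/dx = cos(x)
d(cos(x))/dx = -sin(x)
d(ax^n)/dx = n·a·x^(n-1)
d(c)/dx = 0 for any constant c
sin(0) = 0 and cos(0) = 1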

Since you know that cos(0) = 1, you go ahead and differentiate the equation for f(x), writing the derivative as f ‘(x), to get a new equation you can work with:
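f ‘(x) = cos(x) = a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 + …

Putting x = 0 gives cos(0) = a_1, so a_1 = 1.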

You go ahead and continue differentiating multiple times and get:
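f ‘‘(x) = -sin(x) = (2)(1)a_2 + (3)(2)a_3x + (4)(3)a_4x^2 + …   so at x = 0, a_2 = 0
f ‘‘‘(x) = -cos(x) = (3)(2)(1)a_3 + (4)(3)(2)a_4x + …           so at x = 0, a_3 = -1/3!
f ‘‘‘‘(x) = sin(x) = (4)(3)(2)(1)a_4 + (5)(4)(3)(2)a_5x + …     so at x = 0, a_4 = 0
f ‘‘‘‘‘(x) = cos(x) = (5)(4)(3)(2)(1)a_5 + …                    so at x = 0, a_5 = 1/5!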

You notice that this is an infinite process, but the coefficient of every even power of x is zero, so only the terms with odd powers of x remain. Their coefficients are of the form 1/(the power’s factorial) or -1/(the power’s factorial), with the signs alternating so that every second remaining term is negative. The factorial of a number is the product of all the natural numbers (in this case) from one up to the number itself, and is written as the number followed by an exclamation mark. For example, 1! = 1, 2! = 2(1), 3! = 3(2)(1), 4! = 4(3)(2)(1), and in general k! = k(k-1)(k-2)…(1). So you write down your observations:
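sin(x) = x - x^3/3! + x^5/5! - x^7/7! + x^9/9! - …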

With that, you have converted sin(x), a function seemingly related only to triangles and circles, into an infinite polynomial: substituting any x gets you closer and closer to the value of sin(x) the more terms you choose to add.

Following a similar process for cos(x), we can obtain its polynomial, and with some knowledge of limits of a function we can also obtain the polynomial for e^x:
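cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + …

e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + …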

Indeed, these exact formulas were the ones calculators used to compute sin(x), cos(x), tan(x), or any number raised to a power. For exponentiation:
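k^x = e^(x·log(k)) = 1 + x·log(k) + (x·log(k))^2/2! + (x·log(k))^3/3! + …

where log(k) is the natural logarithm of k.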

The calculator computes the value of log(k) and then substitutes x·log(k) into the e^x expansion.

The above approach should also help you better grasp the fact that exponentiation isn’t just repeated multiplication: raising a number to the power of a fraction like 1/2 might not make sense as repeated multiplication, but it does make sense when we think of the exponent as an input to our polynomial, which we know how to work with. The precision of your calculator naturally depends on how many terms of the expansion it adds up, but the infinite sum can be approximated pretty well with just a few terms, because the factorials in the denominators make the terms shrink very quickly and the partial sums converge.
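To make this concrete, here is a minimal Python sketch of the idea, simply summing a fixed number of terms of each series. The function names are my own, and real math libraries use more refined methods (argument reduction, carefully tuned approximations, and so on), so treat this only as an illustration of the series themselves.

```python
# Approximate exp, sin, and cos by summing the first few terms of their series.
# Each next term is obtained from the previous one, so no factorials are recomputed.

def exp_series(x, terms=20):
    """Approximate e^x with 1 + x + x^2/2! + x^3/3! + ..."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)                       # x^n/n!  ->  x^(n+1)/(n+1)!
    return total

def sin_series(x, terms=10):
    """Approximate sin(x) with x - x^3/3! + x^5/5! - ..."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))   # step to the next odd-power term
    return total

def cos_series(x, terms=10):
    """Approximate cos(x) with 1 - x^2/2! + x^4/4! - ..."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 1) * (2 * n + 2))   # step to the next even-power term
    return total

if __name__ == "__main__":
    import math
    print(sin_series(1.0), math.sin(1.0))   # both ~0.841471
    print(cos_series(1.0), math.cos(1.0))   # both ~0.540302
    print(exp_series(1.0), math.e)          # both ~2.718282
```

Even with only ten terms, the results agree with the built-in functions to many decimal places for moderate values of x.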

Now that we have managed to turn exponentiation into a polynomial, it seems less absurd to input a complex number as the exponent, especially since the solutions/zeroes of many polynomials are themselves complex numbers. Following this train of thought, let’s try raising e to the power of ix, where i is the square root of -1:
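e^(ix) = 1 + (ix) + (ix)^2/2! + (ix)^3/3! + (ix)^4/4! + (ix)^5/5! + …
       = 1 + ix - x^2/2! - i·x^3/3! + x^4/4! + i·x^5/5! - x^6/6! - …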

Now stare at our final expression for a while, try to notice some patterns, and try simplifying it into two of the other infinite polynomials we discussed earlier.

If you spotted it then great, but if not, here’s how it breaks down:
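Grouping the real terms and the imaginary terms separately:

e^(ix) = (1 - x^2/2! + x^4/4! - x^6/6! + …) + i(x - x^3/3! + x^5/5! - …)
       = cos(x) + i·sin(x)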

With that, we have just defined a way to raise a number to a complex power.

Now, let me prove what was promised in the title:
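Substituting x = π into e^(ix) = cos(x) + i·sin(x):

e^(iπ) = cos(π) + i·sin(π) = -1 + i(0) = -1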

We have just proved a result that many argue is the most beautiful result in all of mathematics, but we have more important things to think about.

Let us look at what this means for any complex number z and its representation in the Argand plane, where the usual y-axis is replaced by an imaginary axis that tells us the value of y if z = x + yi. Much the same way we plot any point (x, y), the complex number x + yi can be represented by a line from the origin to the point (x, y). We call the length of this line the modulus of the complex number, and the angle it makes with the x-axis its argument.

The modulus of z is represented as |z| and its argument is written as arg(z) = 𝞱.

This means that any complex number z can be written as |z|e^(i𝞱), which implies that z lies on a circle centered at the origin: the radius to z has length |z| and makes an angle 𝞱 with the x-axis. The x value of the complex number is |z|cos(𝞱) and the y value is |z|sin(𝞱). This leads us to z = |z|cos(𝞱) + |z|·i·sin(𝞱).

This greatly simplifies the multiplication of complex numbers as:
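If z_1 = |z_1|e^(i𝞱_1) and z_2 = |z_2|e^(i𝞱_2), then

z_1·z_2 = |z_1||z_2|·e^(i(𝞱_1 + 𝞱_2))

so the moduli multiply and the arguments add.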

This shows us that if we take any line which represents a complex number and multiply it by another complex number, it gets scaled (stretched or squished) by the modulus of the second number and then rotated by an angle equal to the argument of the second number. This can be used to scale and rotate objects or images: assign each point or pixel a complex number, then multiply it by a complex number whose argument is the angle you want to rotate by and whose modulus is the desired scaling factor. These results are also quite significant for 2-D rotational motion in Newtonian mechanics, and the development of vectors and vector analysis in fact grew out of complex numbers and their higher-dimensional extension, the quaternions.
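As a rough illustration of this scale-and-rotate idea, here is a small Python sketch using the built-in cmath module; the transform function and the sample points are just my own example, not a standard graphics routine.

```python
# Treat each 2-D point (x, y) as the complex number x + yi and multiply by
# scale * e^(i*angle): the modulus scales the point, the argument rotates it.
import cmath

def transform(points, scale, angle):
    """Scale each point by `scale` and rotate it by `angle` radians about the origin."""
    multiplier = scale * cmath.exp(1j * angle)     # modulus = scale, argument = angle
    return [(x + 1j * y) * multiplier for (x, y) in points]

square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
for z in transform(square, scale=2.0, angle=cmath.pi / 2):
    print(round(z.real, 6), round(z.imag, 6))      # the square, doubled in size and rotated 90 degrees
```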

I encourage the reader to try to code functions, recursive or otherwise, that compute sin, cos, or log values using the polynomials I have mentioned today. You might also gain some useful insights by thinking about the rotation properties mentioned above and how they can help you calculate the nth roots of real numbers, by treating a real number as a complex number whose argument is a multiple of π (arg = nπ for some integer n); a sketch of this idea follows below. The same line of reasoning will also help you understand why the non-real roots of a polynomial with real coefficients always come in conjugate pairs. I also encourage you to go through the links provided below for more depth on the results I have used today; they will certainly help you see the bigger picture when it comes to the importance of these formulas. I would like to cover more serious topics next, such as quantum computing, the Fourier series, Fermat’s little theorem, and other crucial mathematical results that play a big role in modern computers. Those articles will be pretty long and technical, so please let me know if these are topics you might be interested in. Finally, while these polynomials were what calculators used initially, modern implementations rely on further optimizations and matrix-based methods, topics I might cover in future articles.
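Here is a small Python sketch of the nth-root idea from the paragraph above; nth_roots is my own helper, not a standard library function, and it simply applies the polar form discussed in this article.

```python
# Write a real number r in polar form (modulus |r|, argument 0 or pi), then its
# n distinct n-th roots are |r|^(1/n) * e^(i*(theta + 2*pi*k)/n) for k = 0..n-1.
import cmath

def nth_roots(r, n):
    modulus = abs(r) ** (1.0 / n)
    theta = 0.0 if r >= 0 else cmath.pi            # argument of a real number
    return [modulus * cmath.exp(1j * (theta + 2 * cmath.pi * k) / n) for k in range(n)]

for root in nth_roots(-8, 3):                      # cube roots of -8; one of them is -2
    print(root, round((root ** 3).real, 6))        # each root cubed gives back ~-8
```

Notice in the output that the two non-real roots are conjugates of each other, exactly as the argument above predicts.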

~ Koka Sathwik

Links:

https://www.qc.cuny.edu/Academics/Degrees/DMNS/Faculty%20Documents/Sultan1.pdf

https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula

http://people.math.sc.edu/girardi/m142/handouts/10sTaylorPolySeries.pdf

https://arxiv.org/pdf/1509.00501.pdf

https://www.youtube.com/watch?v=d4EgbgTm0Bg

https://en.wikipedia.org/wiki/Newton%E2%80%93Euler_equations

https://math.stackexchange.com/questions/706282/how-are-the-taylor-series-derived
