Uncomputable decimals and measure theory: is it nonsense?

Modern Measure Theory has something of a glitch. It asserts, as a main result, something which is rather obviously logically problematic (I am feeling polite this New Year’s morning!) Let’s talk a little about this subject today.

Modern measure theory studies, for example, the interval [0,1] of so-called real numbers. There are quite a lot of different ways of trying to conjure these real numbers into existence, and I have discussed some of these at length in many of my YouTube videos and also here in this blog: Dedekind cuts, Cauchy sequences of rationals, continued fractions, infinite decimals, or just via some axiomatic wishful thinking. In this list, and in what follows, I will suppress my natural inclination to put all dubious concepts in quotes. So don’t believe for a second that I buy most of the notions I am now going to talk about.

Measure theory texts are remarkably casual about defining and constructing the real numbers. Let’s just assume that they are there, shall we? Once we have the real numbers, measure theory asserts that it is meaningful to consider various infinite subsets of them, and to assign numbers that measure the extent of these various subsets, or at least some of them. The numbers that are assigned are also typically real numbers. The starting point of all this is familiar and reasonable: that a rational interval [a,b], where a,b are rational numbers and a is less than or equal to b, ought to have measure (b-a).

So measure theory is an elaborate scheme that attempts to extend this simple primary school intuition to the rather more convoluted, and logically problematic, arena of real numbers and their subsets. And it wants to do this without addressing, or even acknowledging, any of the serious logical problems that people (like me) have been pointing out for quite a long time.

If you open a book on modern measure theory, you will find a long chain of definitions and theorems: so-called. But what you will not find, any more than a thorough discussion of the logical problems, is a wide range of illustrative examples. This is a theory that floats freely above the unpleasant constraint of exhibiting concrete examples.

Your typical student is of course not happy with this situation: how can she verify independently that the ideas actually have some tangible meaning? Young people are obliged to accept the theories they learn as undergraduates on the terms they are given, and as usual appeals to authority play a big role. And when they turn to the internet, as they do these days, they often find the same assumptions and lack of interest in specific examples and concrete computations.

Here, to illustrate, is the Example section of the Wikipedia entry on Measure, which is what you get when you search for Measure Theory (from Wikipedia at https://en.wikipedia.org/wiki/Measure_(mathematics) ):

Examples 

___________________________

Some important measures are listed here.

Other ‘named’ measures used in various theories include: Borel measure, Jordan measure, ergodic measure, Euler measure, Gaussian measure, Baire measure, Radon measure, Young measure, and strong measure zero.

In physics an example of a measure is spatial distribution of mass (see e.g., gravity potential), or another non-negative extensive property, conserved (see conservation law for a list of these) or not. Negative values lead to signed measures, see “generalizations” below.

Liouville measure, known also as the natural volume form on a symplectic manifold, is useful in classical statistical and Hamiltonian mechanics.

Gibbs measure is widely used in statistical mechanics, often under the name canonical ensemble.

_______________________________

(Back to the regular channel) Now one of the serious problems with theories which float independently of examples is that it becomes harder to tell whether we have overstepped logical bounds. This is a problem with many theories based on real numbers.

Here is a key illustration: modern measure theory asserts that the real numbers with which it is preoccupied actually fall into two types: the computable ones, and the uncomputable ones. Computable ones include the rational numbers, and all irrational numbers that (supposedly) arise as algebraic numbers (solutions of polynomial equations), definite integrals, infinite sums, infinite products, or values of transcendental functions; in fact any number whose decimal digits can be generated by some computer program.

These include sqrt(2), ln 10, pi, e, sqrt(3+sqrt(5)), Euler’s constant gamma, values of the zeta function, gamma function, etc. etc. Every number that you will ever meet concretely in a mathematics course is a computable number. Any kind of decimal that is conjured up by some pattern, say 0.1101001000100001000001…, or even by some rule such as 0.a_1 a_2 a_3 … where a_i is 1 unless i is an odd perfect number, in which case a_i=2, is a computable number.
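To make “computable” completely concrete, here is a minimal sketch in Python (my own illustration; you will not find it in any measure theory text) which generates as many digits of that first pattern number, 0.1101001000100001…, as you care to ask for:

# Digits of 0.1101001000100001... : a 1, then a 1 followed by one 0,
# then a 1 followed by two 0s, then three 0s, and so on.
def pattern_digits(n):
    digits = []
    gap = 0
    while len(digits) < n:
        digits.append(1)
        digits.extend([0] * gap)
        gap += 1
    return digits[:n]

print("0." + "".join(str(d) for d in pattern_digits(40)))

Exactly the same could be done for the odd-perfect-number rule, given a (very slow, but finite) test for odd perfect numbers: a single finite program pins down every digit, and that is all that “computable” means.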

And what is then an uncomputable real number?? Hmm.. let’s just say this rather quickly and then move on to something more interesting, okay? Right: an uncomputable real number is just a real number that is not computable.

Uhh.. such as…? Sorry, but there are no known examples. It is impossible to write down any such uncomputable number in a concrete fashion. And what do these uncomputable numbers do for us? Well, the short answer is: nothing. They are not used in practical applications, and even theoretically, they don’t gain us anything. But they are there, my friends—oh yes, they are there — because the measure theory texts tell us they are!

And the measure theory texts tell us even more: that the uncomputable real numbers in fact swamp the computable ones measure-theoretically. In the interval [0,1], the computable numbers have measure zero, while the uncomputable numbers have measure one.

Yes, you heard correctly, this is a bona-fide theorem of modern measure theory: the computable numbers in [0,1] have measure zero, while the uncomputable numbers in [0,1] have measure one!
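For reference, the textbook reasoning behind this runs as follows: a computer program is a finite string of symbols, so there are only countably many programs, and hence only countably many computable numbers, which we may list as x_1, x_2, x_3, … . Cover the n-th one by an interval of width epsilon/2^n, and appeal to the infinite sum

\sum_{n=1}^{\infty} \frac{\varepsilon}{2^{n}} = \varepsilon

so that the whole collection supposedly sits inside a set of total length epsilon, for every epsilon greater than zero; that is what “measure zero” means. Notice that this argument itself relies on listing infinitely many numbers, covering them with infinitely many intervals, and evaluating an infinite sum.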

Oh, sure. So according to modern probability theory, which is based on measure theory, the probability of picking a random real number in [0,1] and getting a computable one is zero. Yet no measure theorist can give us even one example of a single uncomputable real number.

This is modern pure mathematics going beyond parody. Future generations are going to shake their heads in disbelief that we happily swallowed this kind of thing without even a trace of resistance, or at least disbelief.

But this is 2016, and the start of a New Year! I hope you will join me in an exciting venture to expose some of the many logical blemishes of modern pure mathematics, and to propose some much better alternatives — theories that actually make sense. Tell your friends, spread the word, and let’s not be afraid of thinking differently. Happy New Year.


Let alpha be a real number

PM (Pure Mathematician): Let alpha be a real number.

NJ (Me): What does that mean?

PM: Surely you are joking. What do you mean by such a question? Everyone uses this phrase all the time, probably you also.

NJ: I used to, but now I am not so sure anymore what it means. In fact I suspect it is nonsense. So I am asking you to clarify its meaning for me.

PM: No problem, then. It means that we are considering a real number, whose name is alpha. For example alpha = 438.0457897416622849… .

NJ: Is that a real number, or just a few decimal digits followed by three dots?

PM: It is a real number.

NJ: So a real number is a bunch of decimal digits followed by three dots.

PM: I think you know full well what a real number is, Norman. You are playing devil’s advocate. Officially a real number is an equivalence class of Cauchy sequences of rational numbers. The above decimal representation was just a shorthand.

NJ: So the real number alpha you informally described above is actually the following: {{32/141,13/55234,-444123/9857,…},{-62666626/43,49985424243/2,7874/3347,…},{4234/555,7/3,-424/55,…},…}?

PM: Well obviously that equivalence class of Cauchy sequences you started writing here is just a random collection of lists of rational numbers you have dreamed up. It has nothing to do with the real number alpha I am considering.

But now that I think about it for a minute, I suppose you are exploiting the fact that Cauchy sequences of rationals can be arbitrarily altered in a finite number of places without changing their limits, so you could argue that yes, my real number does look like that, although naturally alpha has a lot more information.

NJ: An infinite amount of more information?

PM: If you like.

NJ: What if I don’t like?

PM: Look, there is no use you quibbling about definitions. Modern pure mathematicians need real numbers for all sorts of things, not just for analysis, but also modern geometry, algebra, topology, you name it. Real numbers are not going away, no matter what kind of spurious objections you come up with. So why don’t you spend your time more fruitfully, and write some papers?

NJ: Have you heard of Wittgenstein’s objections to the infinite shenanigans of modern pure mathematics?

PM: No, but I think I am about to.

NJ: Wittgenstein claimed that modern pure mathematicians were trying to have their cake and eat it too when it came to specifying infinite processes, bouncing back and forth between the belief that infinite sequences can be described by algorithms and the belief that they can be defined by arbitrary choice. Algorithms are the stuff of computers and programming, while choice is the stuff of oracles and slimy intergalactic super-octopi. Which camp are you in? Is your real number alpha given by some finite code or by the infinite musings of a god-like creature?

PM: I think you are trying to ensnare me. You want me to say that I am thinking about decimal digits given by a program, but then you are going to say that that repudiates the Axiom of Choice. I know your strategy, you know! Don’t think you are the first to try to weaken our resolve, or our faith in the Axioms. Mathematics has to start somewhere, after all.

NJ: And your answer is?

PM: Sorry, my laundry is done now, and then I have to finish my latest paper on Dohomological Q-theory over twisted holographic pseudo-morphoids. Cheers!

NJ: Cheers. Don’t forget to take alpha with you.


The Banach-Tarski paradox: is it nonsense?

How can you tell when your theory has overstepped the bounds of reasonableness? How about when you start telling people your “facts” and their faces register incredulity and disbelief? That is the response of most reasonable people when they hear about the “Banach-Tarski paradox”.

From Wikipedia:

The Banach–Tarski paradox states that a ball in the ordinary Euclidean space can be doubled using only the operations of partitioning into subsets, replacing a set with a congruent set, and reassembly.

The “theorem” is commonly phrased in terms of two solid balls, one twice the radius of the other, in which case it asserts that we can subdivide the smaller ball into a small number (usually five) of disjoint subsets, perform rigid motions (combinations of translations and rotations) on these sets, and obtain a partition of the larger ball. Or into two balls, each the same size as the original. It is to be emphasized that these are cut-and-paste congruences! This was first stated by S. Banach and A. Tarski in 1924, building on earlier work of Vitali and Hausdorff.

[Image: Doubling of a sphere, as per the Banach-Tarski theorem]

This “theorem” contradicts common sense. In real life we know that it is not easy to get something from nothing. We cannot take one dollar, subtly rearrange it in some clever fashion, and end up with two dollars. It doesn’t work.

That is why most ordinary people, when they hear about this kind of result, are at first disbelieving, and then, when told that the “proof” involves “free groups of rotations” and the “Axiom of Choice”, and that the resulting sets are in fact impossible to write down explicitly, just shake their heads. Those pure mathematicians: boy they are smart, but what arcane things they get up to!

This theorem is highly dubious. It really ought to be taken with a grain of salt, or at least generate some controversy. This kind of logical legerdemain probably should not go unchallenged for decades.

The logical flaws involved in the usual argument are actually quite numerous. First there are confusions about what “free groups” are and how we specify them. The definition of a finite group and the definition of an “infinite group” are vastly different kettles of fish. An underlying theory of infinite sets is assumed, but as usual a coherent theory of such infinite sets is missing.

Then there is a claim that free groups can be found inside the group of rotations of three dimensional space. This usually involves some discussion involving real numbers and irrational rotations. All the usual difficulties with real numbers that students of my YouTube series MathFoundations will be familiar with immediately bear down.
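To see how little of this can actually be exhibited, take the pair of rotations usually quoted in such proofs: rotations by the angle arccos(1/3) about two perpendicular axes, which are claimed to generate a free group. The most a computer can ever do is check finitely many words, numerically. Here is a sketch in Python (my own illustration, and of course no substitute for the claimed infinite statement):

import numpy as np
from functools import reduce
from itertools import product

c, s = 1/3, 8**0.5 / 3   # cosine and sine of the angle arccos(1/3)
A = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])   # rotation about the x-axis
B = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # rotation about the z-axis
gens = {"A": A, "a": A.T, "B": B, "b": B.T}        # lower case letters are the inverse rotations

def reduced_words(length):
    # All words in A, a, B, b of the given length with no adjacent cancelling pair.
    for w in product("AaBb", repeat=length):
        if all(x != y.swapcase() for x, y in zip(w, w[1:])):
            yield w

# How close does any nontrivial reduced word of length at most 6 come to the identity?
closest = min(
    np.abs(reduce(np.matmul, [gens[x] for x in w]) - np.eye(3)).max()
    for n in range(1, 7)
    for w in reduced_words(n)
)
print("smallest deviation from the identity:", closest)

Every such check is finite. The passage from “no word of length up to six is the identity” to “no word whatsoever is the identity” is exactly the kind of infinite leap that is at issue here.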

And then finally there is an appeal to the Axiom of Choice, from the ZFC axiomfest, which claims that one can make an infinite number of independent choices. But this contradicts the Law of (Logical) Honesty that I put forward several days ago. I remind you that this was the idea:

Don’t pretend that you can do something that you can’t.

You cannot make an infinite number of independent choices. Cannot. Impossible. Never could. Never will be able to. No amount of practice will help. Whistling while you do it won’t make it happen. You cannot make an infinite number of independent choices.

So we ought not to pretend that we can; that is what the Law of (Logical) Honesty asserts. We can’t just say: and now let’s suppose that we can make an infinite number of independent choices. That is just an empty phrase if we cannot support it in ways that people can observe and validate.

The actual “sets” involved in the case of transforming a ball of radius 1 into a ball of radius 2 are not sets that one can write down in any meaningful way. They exist only in a kind of no-man’s land of speculative thinking, entirely dependent on the set-theoretic assumptions that prop them up. Ask for a concrete example, with explicit specifications, and you only get smiles and shrugs.

And so the Banach-Tarski nonsense has no practical application. There is no corresponding finite version that helps us do anything useful, at least none that I know of. It is something like a modern mathematical fairy tale.

Shouldn’t we be discussing this kind of thing more vigorously, here in pure mathematics?


The Alexander Horned Sphere: is it nonsense?

Modern topology is full of contentious issues, but no-one seems to pay any notice. There are many weird, even absurd, “constructions” and “arguments” which really ought to generate vigorous debate. People should have differences of opinions. Alternatives ought to be floated. The logical structure of the entire enterprise ought to be called into question.

But not in these days of conformity and meekness, amongst pure mathematicians anyway. Students are indoctrinated, not by force of logic, clarity of examples and the compelling force of rigorous computations, but by being browbeaten into thinking that if they confess to “not understanding”, then they are tacitly admitting failure. Why don’t you understand? Don’t you have what it takes to be a professional pure mathematician?

Let’s have a historically interesting example: the so-called “Alexander Horned Sphere”. This is supposedly an example of a “topological space” which is “homeomorphic”… actually do you think I could get away with not putting everything in quotes here? Pretty well everything that I am now going to be talking about ought to be in quotes, okay?

Right, so as I was saying, the Alexander Horned sphere is supposedly a topological space which is homeomorphic to a two-dimensional sphere. It was first constructed (big quotation marks missing on this one!) by J. W. Alexander in 1924, who was interested in the question about whether it was possible for the complement of a simply-connected surface to not be simply connected.

Simply-connected means that any loop in the space can be continuously contracted to a point. The two-dimensional sphere is simply connected, but the one-dimensional sphere (a circle) is not. Alexander’s weird construction gives a surface which is topologically a two-sphere, but its complement is like the complement of a torus: if we take a loop around the main body of the sphere, then we cannot contract it to a point. And why not? Because there is a nested sequence, an infinitely nested sequence of entanglements that our contracting loop can’t get around.

[Image: The Alexander horned sphere]

This image was made by Ryan Dahl, Creative Commons license.

Here is a way of imagining what is (kind of) going on. Put your two arms out in front of you, so that your hands are close. Now with each hand make a near-circle with thumb and index finger, almost touching but not quite, and link these two almost-loops. Now imagine each of your fingers and thumbs as being like a little arm, with a new finger/thumb pair growing from the end of each, the new pairs also almost enclosing each other. And keep doing this, as the diagram suggests better than I can explain.

At any finite stage, none of the little almost-loops is quite closed, so we could still untangle a string that was looped around, say, one of your arms: just slide it off your arm, past the finger and thumb, around the other arm’s finger and thumb, and around all the little fingers and thumbs that you have grown, something like Swamp Thing.

Yes… but Alexander said “Let’s go to infinity!” And most of the topologists chorused: “Yes, let’s go to infinity!” And most of their students dutifully repeated: “Yes, let’s go to infinity, … I guess!” And lo… there was the Alexander Horned Sphere!

But of course, it doesn’t really make sense, does it? Because it blatantly contravenes a core Law of Logic, in fact the one we enunciated two days ago, called the Law of (Logical) Honesty:

Don’t pretend that you can do something that you can’t.

The construction doesn’t work because it requires us to grow, or create, or construct, an infinite number of pairs of littler and littler fingers, and you just can’t do that!! All that we can logically contemplate is a finite version, as shown actually in the above diagram. And for any finite version, the supposed property that Alexander thought he constructed disintegrates.

The Alexander Horned Sphere: but one example of the questionable constructs that abound in modern pure mathematics.


A new logical principle

We are supposed to have a very clear idea about the `laws of logic’. For example, if all men are mortal, and Socrates is a man, then Socrates is mortal.

Are there in fact such things as the “laws of logic”? While we can all agree that certain rules of inference, like the example above, are reasonably evident, there are a whole lot of more ambiguous situations where clear logical rules are hard to come by, and things amount more to clever arguments, weight of public opinion and the authority of people involved.

It is not dissimilar to the situation with moral codes, where we can all agree that certain rules are self-evident in abstract ideal situations, but when we look at real-life examples, we often are faced with moral dilemmas characterized by ambiguity rather than certainty. One should not kill. Okay, fair enough. But what about when someone threatens one’s loved ones? What moral law guides us as to when we ought to flip from passivity to aggression?

Similar kinds of logical ambiguities surface all the time in mathematics with the modern reliance on axioms, limits, infinite processes, real numbers etc.

Let’s consider here the situation with “infinity”. Most modern pure mathematicians believe, following Bolzano, Cantor and Dedekind, that this is a well-defined concept, and indeed that it rightfully plays a major role in advanced mathematics. I, on the other hand, claim that it is a highly dubious notion; in fact not properly defined; unsupported by explicit examples; the source of innumerable controversies, paradoxes and indeed outright errors; and that mathematics can happily do entirely without it. So we have a major difference of opinion. I can give plenty of reasons and evidence, and have done so, to support my position. By what rules of logic is someone going to convince me of the errors of my ways?

Appeals to authority? That won’t wash. A poll to decide things democratically? No, I will not accept public opinion over clear thinking.

Perhaps they could invoke the Axiom of Infinity from the ZFC axiomfest! According to Wikipedia this Axiom is:

\exists X \left[ \varnothing \in X \land \forall y \, (y \in X \Rightarrow S(y) \in X) \right]

In other words, more or less: an infinite set exists. But I am just going to laugh at that. This is supposed to be mathematics, not some adolescent attempt to create god-like structures by stringing words, or symbols, together.

As a counter to such nonsense, I would like to propose my own new logical principle. It is simple and sweet:

Don’t pretend that you can do something that you can’t.

This principle asks us essentially to be honest. To not get carried away with flights of fancy. To keep our feet firmly planted in reality.

According to this principle, the following questions are invalid logically:

If you could jump to the moon, then would it hurt when you landed?

If you could live forever, what would be your greatest hope?

If you could add up all the natural numbers 1+2+3+4+…, what would you get?

As a consequence of my new logical principle, we are no longer allowed to entertain the possibility of “doing an infinite number of things”. No “adding up an infinite number of numbers”. No creating data structures by “inserting an infinite number” of objects. No “letting time go to infinity and seeing what happens”.

Instead, we might add up 10^6 numbers, or insert a trillion objects into a data set, or let time equal t=883,244,536,000. In my logical universe, computations finish. Statements are supported by explicit, complete, examples. The results of arithmetical operations are concrete numbers that everyone can look at in their entirety. Mathematical statements and equations do not trail “off to infinity” or “converge somewhere beyond the horizon”, or invoke mystical aspects of the physical universe that may or may not exist.

In my view, mathematics ought to be supported by computations that can be made on our computers.

As a consequence of my way of thinking, the following is also a logically invalid question:

If you could add up all the rational numbers 1/1+1/2+1/3+1/4+…, what would you get?

It is nonsense because you cannot add up all those numbers. And why can you not do that? It is not because the sum grows without bound (admittedly not in such an obvious way as in the previous example), but rather because you cannot do an infinite number of things.
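What one can do is compute partial sums, each of which is a perfectly concrete rational number, and watch them creep upward. Here is a small sketch in Python using exact rational arithmetic (my own illustration):

from fractions import Fraction

total = Fraction(0)
for k in range(1, 1001):
    total += Fraction(1, k)               # exact rational arithmetic, no rounding
    if k in (10, 100, 1000):
        print(k, "terms:", float(total))  # shown as a decimal approximation for readability

The partial sums keep growing, ever more slowly, and no finite amount of computation ever produces “the sum of all of them”.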

As a consequence of my way of thinking, the following is also a logically invalid question:

If you could add up all the rational numbers 1/1^2+1/2^2+1/3^2+1/4^2+…, what would you get?

And the reason is exactly the same. It is because we cannot perform an infinite number of arithmetical operations.

Now in this case someone may argue: wait Norman – this case is different! Here the sum is “converging” to something (to “pi^2/6” according to Euler). But my response is: no, the sum does not make sense, because the actual act of adding up an infinite number of terms, even if the partial sums seems to be heading somewhere, is not something that we can do.

And this is not just a dogmatic or religious position on my part. It is an observation about the world in which we live. You can try it for yourself. To give you a head start, here is the sum of the first one hundred terms of the above series:

(1589508694133037873112297928517553859702383498543709859889432834803818131090369901)/(972186144434381030589657976672623144161975583995746241782720354705517986165248000)

Please have a go, by adding more and more terms of the series: the next one is 1/101^2. You will find that no matter how much determination, computing power and time you have, you will not be able to add up all those numbers. Try it, and see! And the idea that you can do this in a decimal system will very likely become increasingly dubious to you as you proceed. There is only one way to sum this series, and that is using rational number arithmetic, and that only up to a certain point. You can’t escape the framework of rational number arithmetic in which the question is given. Try it, and see if what I say is true!
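If you would like to check that hundred-term sum for yourself, exact rational arithmetic is all you need, and all you have. Here is a sketch in Python (my own illustration):

from fractions import Fraction

def partial_sum(n):
    # Exact value of 1/1^2 + 1/2^2 + ... + 1/n^2 as a single fraction.
    return sum(Fraction(1, k * k) for k in range(1, n + 1))

s = partial_sum(100)
print(s.numerator)
print(s.denominator)
print(float(s))   # roughly 1.635, still visibly short of Euler's claimed pi^2/6 = 1.6449...

Push n as high as your patience and your memory allow: every answer is just another fraction, and the question of “the infinite sum” never arises inside the computation itself.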

There are many further consequences of this principle, and we will be exploring some of them in future blog entries. Clearly this new logical law ought to have a name. Let’s call it the law of (logical) honesty. Here it is again:

Don’t pretend that you can do something that you can’t.

As Socrates might have said, it’s just simple logic.


Infinity: religion for pure mathematicians

Here is a quote from the online Encyclopedia Britannica:

The Bohemian mathematician Bernard Bolzano (1781–1848) formulated an argument for the infinitude of the class of all possible thoughts. If T is a thought, let T* stand for the notion “T is a thought.” T and T* are in turn distinct thoughts, so that, starting with any single thought T, one can obtain an endless sequence of possible thoughts: T, T*, T**, T***, and so on. Some view this as evidence that the Absolute is infinite.

Bolzano was one of the founders of modern analysis, and with Cantor and Dedekind, initiated the at-the-time controversial idea that the `infinite’ was not just a way of indirectly speaking about processes that are unbounded, or without end, but actually a concrete object or objects that mathematics could manipulate and build on, in parallel with finite, more traditional objects.

A multitude which is larger than any finite multitude, i.e., a multitude with the property that every finite set [of members of the kind in question] is only a part of it, I will call an infinite multitude. (B. Bolzano)

Accordingly I distinguish an eternal uncreated infinity or absolutum which is due to God and his attributes, and a created infinity or transfinitum, which has to be used wherever in the created nature an actual infinity has to be noticed, for example, with respect to, according to my firm conviction, the actually infinite number of created individuals, in the universe as well as on our earth and, most probably, even in every arbitrarily small extended piece of space. (G. Cantor)

One proof is based on the notion of God. First, from the highest perfection of God, we infer the possibility of the creation of the transfinite, then, from his all-grace and splendor, we infer the necessity that the creation of the transfinite in fact has happened. (G. Cantor)

The numbers are a free creation of human mind. (R. Dedekind)

I hope some of these quotes strike you as little more than religious doggerel. Is this what you, a critical thinking person, really want to buy into??

From the initial set-up by Bolzano, Cantor and Dedekind, the twentieth century has gone on to enshrine the existence of “infinity” as a fundamental aspect of the mathematical world. Mathematical objects, even simple ones such as lines and circles, are defined in terms of “infinite sets of points”. Fundamental concepts of calculus, such as continuity, the derivative and the integral, rest on the idea of “completing infinite processes” and/or “performing an infinite number of tasks”. Almost all higher and more sophisticated notions from algebraic geometry, differential geometry, algebraic topology, and of course analysis rest on a bedrock foundation of infinite this and infinite that.

This is all religion my friends. It is what we get when we abandon the true path of clarity and precise thinking in order to invoke into existence that which we would like to be true. We want our integrals, infinite sums, infinite products, evaluations of transcendental functions to converge to “real numbers”, and if belief in infinity is what it takes, then that’s what we have collectively agreed to, back somewhere in the 20th century.

What would mathematics be like if we accepted it as it really is? Without wishful thinking, imprecise definitions and reliance on belief systems?

What would pure mathematics be like if it actually lined up with what our computers can do, rather than with what we can talk about?

Let’s take a deep breath, shake away the cobwebs of collective thought, and engage with mathematics as it really is. Down with infinity!

Or somewhat less spectacularly: Up with proper definitions! 

The truth about polynomial factorization

In yesterday’s blog, called The Fundamental Dream of Mathematics, I started to explain why modern mathematics is occupying a Pollyanna land of wishful dreaming, with its over-reliance on the FTA — the cherished, but incorrect, idea that any non-constant polynomial p(x) has a complex zero, that is, a complex number z satisfying p(z)=0. As a consequence, we all happily believe that every degree n polynomial factors into n linear factors over the complex numbers.

What a sad piece of delusional nonsense this is: a twelve year old ought to be able to see that we are trying to pull the wool over our collective eyes here. All that is required is to open up our computers and see what really happens!

Let’s start by looking at a polynomial which actually is a product of linear factors. This is easy to cook up, just by expanding out a product of chosen linear factors:

p(x)=(x-3)(x+1)(x+5)(x+11)= x⁴+14x³+20x²-158x-165.

No-one can deny that this polynomial does have exactly four zeroes, and they are x=3,-1,-5, and -11. Corresponding to each of these zeroes, there is indeed, just as Descartes taught us, a linear factor. If I don’t tell my computer where the polynomial is coming from, and just ask it to factor p(x)=x⁴+14x³+20x²-158x-165, then it will immediately inform me that

x⁴+14x³+20x²-158x-165= (x+11)(x-3)(x+5)(x+1).

Now let’s modify things just a tad. Let’s change that last coefficient of -165 to -166. So now if I ask my computer to factor q(x)=x⁴+14x³+20x²-158x-166, then it will very quickly tell me that

x⁴+14x³+20x²-158x-166= -158x+20x²+14x³+x⁴-166.

This is the kind of thing that it does when it cannot factor something. Did I tell you that my computer is very, very good at factoring polynomial expressions? I have supreme confidence in its abilities here.
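You can repeat this experiment with any computer algebra system; here is a sketch using the Python library sympy (my choice of tool, since the text above does not say which program is being used):

from sympy import symbols, factor

x = symbols('x')
p = x**4 + 14*x**3 + 20*x**2 - 158*x - 165
q = x**4 + 14*x**3 + 20*x**2 - 158*x - 166

print(factor(p))   # splits into the four linear factors found above
print(factor(q))   # comes back unfactored: nothing doing over the rationals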

But wait a minute you say, clearly your computer is deluded Norman! We know this polynomial factors, because all polynomials do. That is what the Fundamental Theorem of Algebra asserts, and it must be right, because everyone says so. Why don’t you find the zeroes first?

Okay, let’s see what happens if we do that: if I press solve numeric after the equation

x⁴+14x³+20x²-158x-166=0

the computer tells me that: Solution is: {[x=-1.006254],[x=-4.994786],[x=-11.00119],[x=3.002230]}

But these are not true zeroes, as we saw in yesterday’s blog, they are only approximate zeroes. True zeroes for this polynomial in fact do not exist.

True zeroes for this polynomial in fact do not exist.

True zeroes for this polynomial in fact do not exist.

Probably I will have to repeat this kind of mantra another few hundred times before it registers in the collective consciousness of my fellow pure mathematicians!

Let us check if we do get factorization: we ask the computer to expand

(x+1.006254)(x+4.994786)(x+11.00119)(x-3.002230)

and it does so to get

x⁴+14.0x³+20.00000x²-158.0x-166.0.

Hurray! We have factored our polynomial successfully! [NOT]

Here is the snag: the coefficients are given as decimal numbers, not integers. That means there is the possibility of round-off. Let me up the default number of digits shown in calculations from 7 to 20 and redo that expansion. This time, I get

x⁴+14.0x³+19.999999656344x²-158.00000508013515776x-166.00001651911547081.

Sadly we see that the factorization was a mirage. Ladies and Gentlemen: a polynomial that does not factor into linear factors: q(x)=x⁴+14x³+20x²-158x-166.
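You can watch the mirage dissolve on your own machine. The following sketch (sympy again, my own illustration) finds the approximate zeroes, then multiplies the corresponding linear factors back together while carrying 25 digits, so that no display rounding can hide the discrepancy:

from sympy import symbols, nroots, Float, expand

x = symbols('x')
q = x**4 + 14*x**3 + 20*x**2 - 158*x - 166

print(nroots(q))   # four approximate zeroes, comparable to the decimals quoted above

# Take the quoted 7-digit approximations at face value and expand the product
# with enough working precision to see what the coefficients really are.
r = [Float('-1.006254', 25), Float('-4.994786', 25), Float('-11.00119', 25), Float('3.002230', 25)]
print(expand((x - r[0]) * (x - r[1]) * (x - r[2]) * (x - r[3])))

However many digits you carry, the rebuilt coefficients refuse to land exactly on 14, 20, -158 and -166.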

Here is the true story of rational polynomial factorization: polynomials which factor into linear factors are easy to generate, but if you write down a random polynomial with rational coefficients of higher degree, the chances of it being of this kind are minimal. There is a hierarchy of factorizability of polynomials of degree n, whose levels correspond to partitions of n. For example if n=4, then there are five partitions of 4, namely 4, 3+1, 2+2, 2+1+1 and 1+1+1+1. Each of these corresponds to a type of factorizability for a degree four polynomial.

Here are example polynomials that fit into each of these kinds:

x⁴+14x³+20x²-158x-166=(x⁴+14x³+20x²-158x-166)

x⁴+14x³+21x²-147x-165= (x³+3x²-12x-15)(x+11)

x⁴+14x³+18x²-172x-216= (x²-2x-4)(x²+16x+54)

x⁴+14x³+19x²-156x-162= (16x+x²+54)(x-3)(x+1)

x⁴+14x³+20x²-158x-165= (x+11)(x-3)(x+5)(x+1).
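Each of these claims is easy to check with exact rational arithmetic. The following sketch (sympy once more, my own illustration) factors each example over the rationals and reports the degrees of its irreducible factors, which should reproduce the five partitions listed above:

from sympy import symbols, factor_list, degree

x = symbols('x')
examples = [
    x**4 + 14*x**3 + 20*x**2 - 158*x - 166,   # expected type 4
    x**4 + 14*x**3 + 21*x**2 - 147*x - 165,   # expected type 3+1
    x**4 + 14*x**3 + 18*x**2 - 172*x - 216,   # expected type 2+2
    x**4 + 14*x**3 + 19*x**2 - 156*x - 162,   # expected type 2+1+1
    x**4 + 14*x**3 + 20*x**2 - 158*x - 165,   # expected type 1+1+1+1
]
for p in examples:
    content, factors = factor_list(p)   # irreducible factors over the rationals, with multiplicities
    print(sorted(degree(f, x) for f, mult in factors for _ in range(mult)))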

So the world of polynomial factorizability is much richer than we pretend. But that also means that the theory of diagonalization of matrices is much richer than we pretend. The theory of eigenvalues and eigenvectors of matrices is much richer than we pretend. And many other things besides.

In distant times, astronomers believed that all celestial bodies moved around on a fixed celestial sphere centered on the earth. What a naive, convenient picture this was. In a thousand years from now, that is the way people will be thinking about our view of algebra: a simple-minded story, useful in its way, that ultimately just didn’t correspond to the way things really are.