
The Green-Tao theorem on arithmetical sequences of primes: is it true?

In 2004 Ben Green and Terence Tao ostensibly proved a result which is now called the Green-Tao theorem. It asserts that there are arbitrarily long arithmetical sequences of prime numbers.

That is, given a natural number n, there is a sequence of n prime numbers of the form p+mk, k=0,1,…,n-1, where p and m are natural numbers. For example 5, 11, 17, 23, 29 is a sequence of 5 primes in arithmetical progression with difference m=6, while 199, 409, 619, 829, 1039, 1249, 1459, 1669, 1879, 2089 is a sequence of 10 primes in arithmetical progression, with difference m=210.

Up to now the longest sequence of primes in such an arithmetical progression that I know about was found in 2010 by Benoît Perichon: it is

43,142,746,595,714,191 + (23,681,770)(223,092,870)k, for k = 0 to 25.
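You can check these progressions for yourself with exact integer arithmetic. Here is one way to do it in Python (a sketch of my own, not the system I usually use), with a deterministic Miller-Rabin primality test that is valid for numbers of this size:

```python
def is_prime(n):
    """Deterministic Miller-Rabin test, valid for all n < 3.3 * 10**24."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

# The two small progressions above, and Perichon's record progression
# with common difference 23,681,770 x 223,092,870
ap5 = [5 + 6 * k for k in range(5)]
ap10 = [199 + 210 * k for k in range(10)]
ap26 = [43142746595714191 + 23681770 * 223092870 * k for k in range(26)]

assert all(is_prime(p) for p in ap5 + ap10 + ap26)
```

Note that each check here is a finite, completed computation: a concrete list of specific numbers, each one certified prime.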

The proof of Green and Tao is clearly a tour-de-force of modern analysis and number theory. It relies on a result called Szemerédi’s theorem along with other results and techniques from analytic number theory, combinatorics, harmonic analysis and ergodic theory. Measure theory naturally plays an important role.

Both Green and Tao are brilliant mathematicians, and Terence Tao is a Fields Medal winner. Terence is also originally Australian, and spent half a year at UNSW some time ago, where I had the pleasure of having some interesting chats over coffee with him.

Is the Green-Tao theorem true? This is actually quite an interesting question. The official proof was published in Annals of Math. 167 (2008), 481-547, and has been intensively studied by dozens of experts. No serious problems with the argument have been found, and it is now acknowledged that the result is firmly established. By the experts.

But is the Green-Tao theorem true? That depends not only on whether the arguments hang together logically when viewed from the top down, but also crucially on whether the underlying assumptions that underpin the theories in which those arguments take place are correct. It is here that one must accept that problems might arise.

So I am not suggesting that any particular argument of the Green-Tao paper is faulty. But there is the more unpleasant possibility that the whole edifice of modern analysis on which it depends is logically compromised, and that this theorem is but one of hundreds in the modern literature that actually don’t work logically if one descends right down to the fundamental level.

Let me state my position, which is rather a minority view. I don’t believe in real numbers. The current definitions of real numbers are logically invalid in my opinion. I am of the view that the arithmetic of such real numbers also has not been established properly.

I do not believe that the concept of a set has been established, and so consequently for me any discussion involving infinite sets is logically compromised. I do not accept that there is a completed set of natural numbers. I find fault with analysts’ invocation of limits, as often this brazenly assumes that one is able to perform an infinite number of operations, which I deny. I don’t believe that transcendental functions currently have a correct formulation, and I reject modern topology’s reliance on infinite sets of infinite sets to establish continuity. I believe that analysts are not being upfront in their approach to Wittgenstein’s distinction between choice and algorithms when they discuss infinite processes.

Consequently I find most of the theorems of Measure Theory meaningless. The usual arguments that fill the analysis journals are to me but possible precursors to a more rigorous analysis that may or may not be established some time in the future.

Clearly I have big problems.

But as a logical consequence of my position, I cannot accept the argument of the Green-Tao theorem, because I do not share the belief in the underlying assumptions that modern experts in analysis have.

But there is another reason why I do not accept the Green-Tao theorem, that does not depend on a critical analysis of their proof. I do not accept the Green-Tao theorem because I am sure that it is not true. I do not believe that there are arbitrarily long arithmetical progressions of prime numbers.

Let me be more specific. Consider the number z=10^10^10^10^10^10^10^10^10^10+23 that appeared in my debate last year with James Franklin called Infinity: does it exist?

My Claim: There is no arithmetical sequence of primes of length z.

This claim is to be distinguished from the argument that such a progression exists, but it would be just too hard for us to find it. My position is not based on what our computers can or cannot do. Rather, I assert that there is no such progression of prime numbers. Never was, never will be.

I do not have a proof of this claim, but I have a very good argument for it. I am more than 99.99% sure that this argument is correct. For me, the Green-Tao argument, powerful and impressive though it is, would be better rephrased in a more limited and precise way.

I do not doubt that with some considerable additional work, they, or others, might be able to reframe the statement and argument to be independent of all infinite considerations, real number musings, and dubious measure theoretic arguments. Then some true bounds on the extent and validity of the result might be established. That would be a lot of effort, but it might then be logically correct, from the ground up.

Uncomputable decimals and measure theory: is it nonsense?

Modern Measure Theory has something of a glitch. It asserts, as a main result, something which is rather obviously logically problematic (I am feeling polite this New Year’s morning!). Let’s talk a little about this subject today.

Modern measure theory studies, for example, the interval [0,1] of so-called real numbers. There are quite a lot of different ways of trying to conjure these real numbers into existence, and I have discussed some of these at length in many of my YouTube videos and also here in this blog: Dedekind cuts, Cauchy sequences of rationals, continued fractions, infinite decimals, or just via some axiomatic wishful thinking. In this list, and in what follows, I will suppress my natural inclination to put all dubious concepts in quotes. So don’t believe for a second that I buy most of the notions I am now going to talk about.

Measure theory texts are remarkably casual about defining and constructing the real numbers. Let’s just assume that they are there, shall we? Once we have the real numbers, measure theory asserts that it is meaningful to consider various infinite subsets of them, and to assign numbers that measure the extent of these various subsets, or at least some of them. The numbers that are assigned are also typically real numbers. The starting point of all this is familiar and reasonable: that a rational interval [a,b], where a,b are rational numbers and a is less than or equal to b, ought to have measure (b-a).

So measure theory is an elaborate scheme that attempts to extend this simple primary school intuition to the rather more convoluted, and logically problematic, arena of real numbers and their subsets. And it wants to do this without addressing, or even acknowledging, any of the serious logical problems that people (like me) have been pointing out for quite a long time.

If you open a book on modern measure theory, you will find a long chain of definitions and theorems: so-called. But what you will not find, along with a thorough discussion of the logical problems, is a wide range of illustrative examples. This is a theory that floats freely above the unpleasant constraint of exhibiting concrete examples.

Your typical student is of course not happy with this situation: how can she verify independently that the ideas actually have some tangible meaning? Young people are obliged to accept the theories they learn as undergraduates on the terms they are given, and as usual appeals to authority play a big role. And when they turn to the internet, as they do these days, they often find the same assumptions and lack of interest in specific examples and concrete computations.

Here, to illustrate, is the Example section of the Wikipedia entry on Measure, which is what you get when you search for Measure Theory (from Wikipedia):



Some important measures are listed here.

Other ‘named’ measures used in various theories include: Borel measure, Jordan measure, ergodic measure, Euler measure, Gaussian measure, Baire measure, Radon measure, Young measure, and strong measure zero.

In physics an example of a measure is spatial distribution of mass (see e.g., gravity potential), or another non-negative extensive property, conserved (see conservation law for a list of these) or not. Negative values lead to signed measures, see “generalizations” below.

Liouville measure, known also as the natural volume form on a symplectic manifold, is useful in classical statistical and Hamiltonian mechanics.

Gibbs measure is widely used in statistical mechanics, often under the name canonical ensemble.


(Back to the regular channel) Now one of the serious problems with theories which float independent of examples is that it becomes harder to tell if we have overstepped logical bounds. This is a problem with many theories based on real numbers.

Here is a key illustration: modern measure theory asserts that the real numbers with which it is preoccupied actually fall into two types: the computable ones, and the uncomputable ones. Computable ones include rational numbers, and all irrational numbers that (supposedly) arise as algebraic numbers (solutions of polynomial equations), definite integrals, infinite sums, infinite products, values of transcendental functions; and in fact any number specified by any kind of computer program.

These include sqrt(2), ln 10, pi, e, sqrt(3+sqrt(5)), Euler’s constant gamma, values of the zeta function, gamma function, etc. etc. Every number that you will ever meet concretely in a mathematics course is a computable number. Any kind of decimal that is conjured up by some pattern, say 0.1101001000100001000001…, or even by some rule such as 0.a_1 a_2 a_3 … where a_i is 1 unless i is an odd perfect number, in which case a_i=2, is a computable number.
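Both of those digit rules are computable in the most down-to-earth sense: a short program produces any requested digit. Here is a sketch in Python (my own illustration; the function names are mine):

```python
def pattern_digit(i):
    """i-th decimal digit of 0.1101001000100001...: a 1 at positions
    1, 2, 4, 7, 11, ..., each 1 separated by one more 0 than the last."""
    k, pos = 1, 1
    while pos < i:
        pos += k
        k += 1
    return 1 if pos == i else 0

def rule_digit(i):
    """a_i = 2 if i is an odd perfect number, else 1 -- decided for any
    given i by a completely finite sum over its divisors."""
    divisor_sum = sum(d for d in range(1, i) if i % d == 0)
    return 2 if i % 2 == 1 and divisor_sum == i else 1

first_16 = "".join(str(pattern_digit(i)) for i in range(1, 17))
assert first_16 == "1101001000100001"

# 6 and 28 are perfect but even, and no odd perfect number is this small
assert all(rule_digit(i) == 1 for i in range(1, 100))
```

Every digit here is produced by a finite computation that starts and finishes.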

And what is then an uncomputable real number?? Hmm.. let’s just say this rather quickly and then move on to something more interesting, okay? Right: an uncomputable real number is just a real number that is not computable.

Uhh.. such as…? Sorry, but there are no known examples. It is impossible to write down any such uncomputable number in a concrete fashion. And what do these uncomputable numbers do for us? Well, the short answer is: nothing. They are not used in practical applications, and even theoretically, they don’t gain us anything. But they are there, my friends—oh yes, they are there — because the measure theory texts tell us they are!

And the measure theory texts tell us even more: that the uncomputable real numbers in fact swamp the computable ones measure-theoretically. In the interval [0,1], the computable numbers have measure zero, while the uncomputable numbers have measure one.

Yes, you heard correctly, this is a bona-fide theorem of modern measure theory: the computable numbers in [0,1] have measure zero, while the uncomputable numbers in [0,1] have measure one!

Oh, sure. So according to modern probability theory, which is based on measure theory, the probability of picking a random real number in [0,1] and getting a computable one is zero. Yet no measure theorist can give us even one example of a single uncomputable real number.

This is modern pure mathematics going beyond parody. Future generations are going to shake their heads in disbelief that we happily swallowed this kind of thing without even a trace of resistance, or at least skepticism.

But this is 2016, and the start of a New Year! I hope you will join me in an exciting venture to expose some of the many logical blemishes of modern pure mathematics, and to propose some much better alternatives — theories that actually make sense. Tell your friends, spread the word, and let’s not be afraid of thinking differently. Happy New Year.



A new logical principle

We are supposed to have a very clear idea about the `laws of logic’. For example, if all men are mortal, and Socrates is a man, then Socrates is mortal.

Are there in fact such things as the “laws of logic”? While we can all agree that certain rules of inference, like the example above, are reasonably evident, there are a whole lot of more ambiguous situations where clear logical rules are hard to come by, and things amount more to clever arguments, weight of public opinion and the authority of people involved.

It is not dissimilar to the situation with moral codes, where we can all agree that certain rules are self-evident in abstract ideal situations, but when we look at real-life examples, we often are faced with moral dilemmas characterized by ambiguity rather than certainty. One should not kill. Okay, fair enough. But what about when someone threatens one’s loved ones? What moral law guides us as to when we ought to flip from passivity to aggression?

Similar kinds of logical ambiguities surface all the time in mathematics with the modern reliance on axioms, limits, infinite processes, real numbers etc.

Let’s consider here the situation with “infinity”. Most modern pure mathematicians believe, following Bolzano, Cantor and Dedekind, that this is a well-defined concept, and indeed that it rightfully plays a major role in advanced mathematics. I, on the other hand, claim that it is a highly dubious notion; in fact not properly defined; unsupported by explicit examples; the source of innumerable controversies, paradoxes and indeed outright errors; and that mathematics can happily do entirely without it. So we have a major difference of opinion. I can give plenty of reasons and evidence, and have done so, to support my position. By what rules of logic is someone going to convince me of the errors of my ways?

Appeals to authority? That won’t wash. A poll to decide things democratically? No, I will not accept public opinion over clear thinking.

Perhaps they could invoke the Axiom of Infinity from the ZFC axiomfest! According to Wikipedia this Axiom is:

\exists X \left[\varnothing \in X \land \forall y \,(y \in X \Rightarrow S(y) \in X)\right].

In other words, more or less: an infinite set exists. But I am just going to laugh at that. This is supposed to be mathematics, not some adolescent attempt to create god-like structures by stringing words, or symbols, together.

As a counter to such nonsense, I would like to propose my own new logical principle. It is simple and sweet:

Don’t pretend that you can do something that you can’t.

This principle asks us essentially to be honest. To not get carried away with flights of fancy. To keep our feet firmly planted in reality.

According to this principle, the following questions are invalid logically:

If you could jump to the moon, then would it hurt when you landed?

If you could live forever, what would be your greatest hope?

If you could add up all the natural numbers 1+2+3+4+…, what would you get?

As a consequence of my new logical principle, we are no longer allowed to entertain the possibility of “doing an infinite number of things”. No “adding up an infinite number of numbers”. No creating data structures by “inserting an infinite number” of objects. No “letting time go to infinity and seeing what happens”.

Instead, we might add up 10^6 numbers, or insert a trillion objects into a data set, or let time equal t=883,244,536,000. In my logical universe, computations finish. Statements are supported by explicit, complete, examples. The results of arithmetical operations are concrete numbers that everyone can look at in their entirety. Mathematical statements and equations do not trail “off to infinity” or “converge somewhere beyond the horizon”, or invoke mystical aspects of the physical universe that may or may not exist.

In my view, mathematics ought to be supported by computations that can be made on our computers.
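For instance, adding up the first 10^6 natural numbers is a computation that starts, runs, and finishes, with one exact answer that anyone can verify (a trivial Python sketch of my own):

```python
# A finite sum: the computation terminates and yields one concrete number.
total = sum(range(1, 10**6 + 1))

# Gauss's closed form n(n+1)/2 gives an independent check.
assert total == 10**6 * (10**6 + 1) // 2
assert total == 500000500000
```

No limits, no infinite processes: the result is a number we can look at in its entirety.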

As a consequence of my way of thinking, the following is also a logically invalid question:

If you could add up all the rational numbers 1/1+1/2+1/3+1/4+…, what would you get?

It is nonsense because you cannot add up all those numbers. And why can you not do that? It is not because the sum grows without bound (admittedly not in such an obvious way as in the previous example), but rather because you cannot do an infinite number of things.

As a consequence of my way of thinking, the following is also a logically invalid question:

If you could add up all the rational numbers 1/1^2+1/2^2+1/3^2+1/4^2+…, what would you get?

And the reason is exactly the same. It is because we cannot perform an infinite number of arithmetical operations.

Now in this case someone may argue: wait Norman – this case is different! Here the sum is “converging” to something (to “pi^2/6” according to Euler). But my response is: no, the sum does not make sense, because the actual act of adding up an infinite number of terms, even if the partial sums seem to be heading somewhere, is not something that we can do.

And this is not just a dogmatic or religious position on my part. It is an observation about the world in which we live. You can try it for yourself. To give you a head start, here is the sum of the first one hundred terms of the above series:

(1589508694133037873112297928517553859702383498543709859889432834803818131090369901)/(972186144434381030589657976672623144161975583995746241782720354705517986165248000)

Please have a go, by adding more and more terms of the series: the next one is 1/101^2. You will find that no matter how much determination, computing power and time you have, you will not be able to add up all those numbers. Try it, and see! And the idea that you can do this in a decimal system will very likely become increasingly dubious to you as you proceed. There is only one way to sum this series, and that is using rational number arithmetic, and that only up to a certain point. You can’t escape the framework of rational number arithmetic in which the question is given. Try it, and see if what I say is true!
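If you would like a head start on that experiment, here is a sketch in Python (a different system from the one I use, but any exact-arithmetic package will do), working entirely within rational number arithmetic via the standard fractions module; the function name is mine:

```python
from fractions import Fraction

def partial_sum(n):
    """Exact value of 1/1^2 + 1/2^2 + ... + 1/n^2 as a rational number."""
    return sum(Fraction(1, k * k) for k in range(1, n + 1))

s100 = partial_sum(100)

# Every partial sum is an exact fraction; no limit is ever taken.
assert partial_sum(2) == Fraction(5, 4)
assert 1.63 < float(s100) < 1.64
```

Each call produces one exact rational number, and you will see the numerators and denominators grow rapidly, just as the hundred-term sum displayed above shows.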

There are many further consequences of this principle, and we will be exploring some of them in future blog entries. Clearly this new logical law ought to have a name. Let’s call it the law of (logical) honesty. Here it is again:

Don’t pretend that you can do something that you can’t.

As Socrates might have said, it’s just simple logic.




Infinity: religion for pure mathematicians

Here is a quote from the online Encyclopedia Britannica:

The Bohemian mathematician Bernard Bolzano (1781–1848) formulated an argument for the infinitude of the class of all possible thoughts. If T is a thought, let T* stand for the notion “T is a thought.” T and T* are in turn distinct thoughts, so that, starting with any single thought T, one can obtain an endless sequence of possible thoughts: T, T*, T**, T***, and so on. Some view this as evidence that the Absolute is infinite.

Bolzano was one of the founders of modern analysis, and with Cantor and Dedekind, initiated the at-the-time controversial idea that the `infinite’ was not just a way of indirectly speaking about processes that are unbounded, or without end, but actually a concrete object or objects that mathematics could manipulate and build on, in parallel with finite, more traditional objects.

A multitude which is larger than any finite multitude, i.e., a multitude with the property that every finite set [of members of the kind in question] is only a part of it, I will call an infinite multitude. (B. Bolzano)

Accordingly I distinguish an eternal uncreated infinity or absolutum which is due to God and his attributes, and a created infinity or transfinitum, which has to be used wherever in the created nature an actual infinity has to be noticed, for example, with respect to, according to my firm conviction, the actually infinite number of created individuals, in the universe as well as on our earth and, most probably, even in every arbitrarily small extended piece of space. (G. Cantor)

One proof is based on the notion of God. First, from the highest perfection of God, we infer the possibility of the creation of the transfinite, then, from his all-grace and splendor, we infer the necessity that the creation of the transfinite in fact has happened. (G. Cantor)

The numbers are a free creation of human mind. (R. Dedekind )

I hope some of these quotes strike you as little more than religious doggerel. Is this what you, a critical thinking person, really want to buy into??

From the initial set-up by Bolzano, Cantor and Dedekind, the twentieth century has gone on to enshrine the existence of `infinity’ as a fundamental aspect of the mathematical world. Mathematical objects, even simple ones such as lines and circles, are defined in terms of “infinite sets of points”. Fundamental concepts of calculus, such as continuity, the derivative and the integral, rest on the idea of “completing infinite processes” and/or “performing an infinite number of tasks”. Almost all higher and more sophisticated notions from algebraic geometry, differential geometry, algebraic topology, and of course analysis rest on a bedrock foundation of infinite this and infinite that.

This is all religion my friends. It is what we get when we abandon the true path of clarity and precise thinking in order to invoke into existence that which we would like to be true. We want our integrals, infinite sums, infinite products, evaluations of transcendental functions to converge to “real numbers”, and if belief in infinity is what it takes, then that’s what we have collectively agreed to, back somewhere in the 20th century.

What would mathematics be like if we accepted it as it really is? Without wishful thinking, imprecise definitions and reliance on belief systems?

What would pure mathematics be like if it actually lined up with what our computers can do, rather than with what we can talk about?

Let’s take a deep breath, shake away the cobwebs of collective thought, and engage with mathematics as it really is. Down with infinity!

Or somewhat less spectacularly: Up with proper definitions! 

The truth about polynomial factorization

In yesterday’s blog, called The Fundamental Dream of Algebra, I started to explain why modern mathematics is occupying a Pollyanna land of wishful dreaming, with its over-reliance on the FTA — the cherished, but incorrect, idea that any non-constant polynomial p(x) has a complex zero, that is, there is a complex number z satisfying p(z)=0. As a consequence, we all happily believe that every degree n polynomial has a factorization into linear factors over the complex numbers.

What a sad piece of delusional nonsense this is: a twelve year old ought to be able to see that we are trying to pull the wool over our collective eyes here. All that is required is to open up our computers and see what really happens!

Let’s start by looking at a polynomial which actually is a product of linear factors. This is easy to cook up, just by expanding out a product of chosen linear factors:

p(x)=(x-3)(x+1)(x+5)(x+11)= x⁴+14x³+20x²-158x-165.

No-one can deny that this polynomial does have exactly four zeroes, and they are x=3,-1,-5, and -11. Corresponding to each of these zeroes, there is indeed, just as Descartes taught us, a linear factor. If I don’t tell my computer where the polynomial is coming from, and just ask it to factor p(x)=x⁴+14x³+20x²-158x-165, then it will immediately inform me that

x⁴+14x³+20x²-158x-165= (x+11)(x-3)(x+5)(x+1).
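Expanding such a product is pure integer arithmetic, and you can replicate the computer’s answer in a few lines of Python (a sketch of my own, representing a polynomial as its coefficient list, highest degree first):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, highest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# expand (x-3)(x+1)(x+5)(x+11)
p = [1]
for factor in ([1, -3], [1, 1], [1, 5], [1, 11]):
    p = poly_mul(p, factor)

assert p == [1, 14, 20, -158, -165]  # x^4 + 14x^3 + 20x^2 - 158x - 165
```

Every coefficient is an exact integer; there is nothing approximate anywhere in this computation.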

Now let’s modify things just a tad. Let’s change that last coefficient of -165 to -166. So now if I ask my computer to factor q(x)=x⁴+14x³+20x²-158x-166, then it will very quickly tell me that

x⁴+14x³+20x²-158x-166= -158x+20x²+14x³+x⁴-166.

This is the kind of thing that it does when it cannot factor something. Did I tell you that my computer is very, very good at factoring polynomial expressions? I have supreme confidence in its abilities here.
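There is also a completely elementary way to confirm the computer’s verdict, at least as far as linear factors go. Since q(x) is monic with integer coefficients, any rational zero would have to be an integer dividing the constant term 166 = 2 × 83. A short Python check (my own sketch) runs through all eight candidates:

```python
def q(x):
    return x**4 + 14 * x**3 + 20 * x**2 - 158 * x - 166

# Rational root theorem: a monic integer polynomial has only integer
# rational zeroes, and each must divide the constant term 166 = 2 * 83.
candidates = [s * d for s in (1, -1) for d in (1, 2, 83, 166)]
rational_zeroes = [c for c in candidates if q(c) == 0]

assert rational_zeroes == []  # q has no linear factor over the rationals
```

So q(x) certainly has no factorization into linear factors with rational coefficients: a finite check settles it.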

But wait a minute, you say: clearly your computer is deluded, Norman! We know this polynomial factors, because all polynomials do. That is what the Fundamental Theorem of Algebra asserts, and it must be right, because everyone says so. Why don’t you find the zeroes first?

Okay, let’s see what happens if we do that: if I press solve numeric after the equation

x⁴+14x³+20x²-158x-166=0

the computer tells me that: Solution is: {[x=-1.006254],[x=-4.994786],[x=-11.00119],[x=3.002230]}

But these are not true zeroes, as we saw in yesterday’s blog, they are only approximate zeroes. True zeroes for this polynomial in fact do not exist.

True zeroes for this polynomial in fact do not exist.

True zeroes for this polynomial in fact do not exist.

Probably I will have to repeat this kind of mantra another few hundred times before it registers in the collective consciousness of my fellow pure mathematicians!
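The point can be made exactly, with no decimal round-off at all, by evaluating q at one of those approximate zeroes using rational arithmetic. A Python sketch (my own check):

```python
from fractions import Fraction

def q(x):
    return x**4 + 14 * x**3 + 20 * x**2 - 158 * x - 166

# the reported "solution" -1.006254, taken as the exact rational it is
x = Fraction(-1006254, 10**6)
residual = q(x)

assert residual != 0                  # so x is not a zero of q
assert abs(float(residual)) < 1e-3    # merely an approximate zero
```

Since q is monic with integer coefficients, no non-integer rational can ever make this residual exactly zero: the decimals on offer are approximations, and nothing more.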

Let us check if we do get factorization: we ask the computer to expand

(x+1.006254)(x+4.994786)(x+11.00119)(x-3.002230)

and it does so to get

x⁴+14.0x³+20.00000x²-158.0x-166.0.

Hurray! We have factored our polynomial successfully! [NOT]

Here is the snag: the coefficients are given as decimal numbers, not integers. That means there is the possibility of round-off. Let me up the default number of digits shown in calculations from 7 to 20 and redo that expansion. This time, I get

x⁴+14.0x³+19.999999656344x²-158.00000508013515776x-166.00001651911547081.

Sadly we see that the factorization was a mirage. Ladies and Gentlemen: a polynomial that does not factor into linear factors: q(x)=x⁴+14x³+20x²-158x-166.

Here is the true story of rational polynomial factorization: polynomials which factor into linear factors are easy to generate, but if you write down a random polynomial with rational coefficients of higher degree, the chances of it being of this kind are minimal. There is a hierarchy of factorizability of polynomials of degree n, whose levels correspond to partitions of n. For example if n=4, then there are five partitions of 4, namely 4, 3+1, 2+2, 2+1+1 and 1+1+1+1. Each of these corresponds to a type of factorizability for a degree four polynomial.

Here are example polynomials that fit into each of these kinds:


x⁴+14x³+20x²-158x-166 (our q(x), which does not factor at all)

x⁴+14x³+21x²-147x-165= (x³+3x²-12x-15)(x+11)

x⁴+14x³+18x²-172x-216= (x²-2x-4)(x²+16x+54)

x⁴+14x³+19x²-156x-162= (16x+x²+54)(x-3)(x+1)

x⁴+14x³+20x²-158x-165= (x+11)(x-3)(x+5)(x+1).
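Each of these identities is verifiable by finite integer arithmetic. A Python sketch of my own, multiplying coefficient lists (highest degree first), confirms all four factorizations:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, highest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def expand(factors):
    """Expand a product of polynomial factors into a single coefficient list."""
    p = [1]
    for f in factors:
        p = poly_mul(p, f)
    return p

assert expand([[1, 3, -12, -15], [1, 11]]) == [1, 14, 21, -147, -165]         # 3+1
assert expand([[1, -2, -4], [1, 16, 54]]) == [1, 14, 18, -172, -216]          # 2+2
assert expand([[1, 16, 54], [1, -3], [1, 1]]) == [1, 14, 19, -156, -162]      # 2+1+1
assert expand([[1, 11], [1, -3], [1, 5], [1, 1]]) == [1, 14, 20, -158, -165]  # 1+1+1+1
```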

So the world of polynomial factorizability is much richer than we pretend. But that also means that the theory of diagonalization of matrices is much richer than we pretend. The theory of eigenvalues and eigenvectors of matrices is much richer than we pretend. And many other things besides.

In distant times, astronomers believed that all celestial bodies moved around on a fixed celestial sphere centered on the earth. What a naive, convenient picture this was. In a thousand years from now, that is the way people will be thinking about our view of algebra: a simple-minded story, useful in its way, that ultimately just didn’t correspond to the way things really are.




The Fundamental Dream of Algebra

According to modern pure mathematics, there is a basic fact about polynomials called “The Fundamental Theorem of Algebra (FTA)”. It asserts, in perhaps its simplest form, that if p(x) is a non-constant polynomial, then there is a complex number z which has the property that p(z)=0 . So every non-constant polynomial equation p(x)=0 has at least one solution. [NOT!]

When we combine this with Descartes’ Factor Theorem, we can supposedly deduce that a degree n polynomial can be factored into exactly n linear factors. [NOT!]

What a useful and crucially important theorem this is! It finds immediate application to integration, allowing us to integrate rational functions by factoring the denominators and using partial fractions to reduce all such to a few canonical forms. In linear algebra, the characteristic polynomial of a matrix has zeroes which are the eigenvalues. In differential equations or difference equations, these eigenvalues allow us to write down solutions. [NOT!]

Unfortunately, the theorem is a mirage. It is not really true. The current belief and dependence on the “Fundamental Theorem of Algebra” is a monumental act of collective wishful dreaming. In fact the result is not even correctly formulated to begin with. It is a false and idealized shadow of a much more complex, subtle and (ultimately) beautiful result.

Wait a minute Norman! Do we not have a completely explicit and clear formulation of the FTA? Are there not watertight proofs? Surely we have lots of concrete and explicit verifications of the theorem!?

Sadly the answers/responses are no in each case. The theorem is not correctly stated, because it requires a prior theory of complex numbers, which in turn requires a prior theory of real numbers, and there is no such theory currently in existence, as you can get a pretty clear idea of by opening up a random collection of Calculus or Analysis texts. Or you could go through the fine-toothed comb dismantling of the subject in my MathFoundations YouTube series. And there are, contrary to popular belief, no watertight proofs. All current arguments rest on continuity assumptions that are not supported by proper theories, but ultimately only by appeals to physical intuition. And when we look at explicit examples, it quickly becomes obvious that the FTA is not at all true!

This issue confounded both Euler and Gauss: both struggled to give proofs, and both were defeated. Gauss returned over and over to the problem, each time trying to be more convincing, to be tighter, to overcome the subtle assumptions that can so easily creep into these arguments. He was not in fact successful.

Every few years someone comes up with a new “proof”, often unaware that the real difficulty is in framing the statement precisely in the first place! Unless you have a rock solid foundation for “real numbers”, all attempts to establish this result in its current form are ultimately doomed.

Most undergraduates learn from an early age to accept this theorem. Suspiciously, they are rarely presented with a proper proof. Sometime in a complex analysis course, if they get that far, they are exposed to a rather indirect argument involving Cauchy’s theory of integration. Are they convinced by this? I hope not: why should something so elementary have such a complicated proof? How do we know that some crucial circularity is not built into the whole game, if we only get around to “proving” a result in third year university that we have been happily assuming in all prior courses for years?

And what about explicit examples? Is this not the way to sort out the wheat from the chaff? Yes it is, and all we need to do is open our eyes clearly and look beyond our wishful dreaming to see things as they really are, not the way we would like them to be in our alternative Pollyanna land of modern pure mathematics!

We write down a polynomial equation of some low degree. Let’s say to be explicit x^5-2x^2+x-7=0. Now I open my computer’s mathematical software package (in my case that is Scientific Workplace, which uses MuPad as its computational engine).

This program is no slouch: it has innumerably many times performed what are essentially miracles of computation for me. It can factor polynomials in several variables of high degree with hundreds, indeed thousands of terms. I have just asked it to factor the randomly typed number 523452545354624677876876875744666. It took about 3 seconds for it to determine that the prime factorization is

2 x 3 x 191 x 2033466852397 x 224623712574400693.
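That claimed factorization is itself a completely checkable finite statement: just multiply the factors back together. In Python (my own check of the multiplication, though not of the primality of the two large factors):

```python
factors = [2, 3, 191, 2033466852397, 224623712574400693]

# multiply the claimed prime factors back together, in exact integer arithmetic
product = 1
for f in factors:
    product *= f

assert product == 523452545354624677876876875744666
```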

So let’s see what happens if I ask it to solve

x⁵-2x²+x-7=0
Actually that depends what kind of solving I ask for. If I ask for a numeric solution, it gives

{[x=-1.166+1.097i], [x=-1.166-1.097i], [x=0.3654-1.254i], [x=0.3654+1.254i], [x=1.601]}.

Indeed we get 5 “solutions”: two complex conjugate pairs, and one “real solution”. We only get 4-digit accuracy because of the current settings. Suppose I up the significant digits to 7. In that case, solve numeric returns

{[x=-1.166064+1.09674i], [x=-1.166064-1.09674i], [x=0.3654393-1.253958i], [x=0.3654393+1.253958i], [x=1.601249]}.

Are these really solutions? No they are NOT. They are rational (in fact finite decimal) numbers which are approximate solutions, but they are not solutions. Let us be absolutely clear about this. Here for example is

p(-1.166064+1.09674i) = 4.1649635867176190975×10⁻⁶ + 9.4683946751496789979×10⁻⁶ i

where I have upped the significant digits to 20, the maximum that is allowed. Do we get zero? Clearly we do NOT.
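None of this depends on an expensive package. Here is a sketch of the same experiment in plain Python (standard library only; the function names are my own): Newton’s method produces a floating-point number agreeing with the CAS’s 1.601249, and evaluating p there gives a tiny residual which is, as a rule, not exactly zero.

```python
# Approximate the real root of p(x) = x^5 - 2x^2 + x - 7 by Newton's method.
# The iterate x is always a rational number (an IEEE double), so p(x) can be
# driven very small, but there is no reason it should ever be exactly zero.

def p(x):
    return x**5 - 2*x**2 + x - 7

def dp(x):
    # derivative of p, used for the Newton step
    return 5*x**4 - 4*x + 1

x = 2.0                      # starting guess to the right of the root
for _ in range(50):
    x -= p(x) / dp(x)        # Newton step: x -> x - p(x)/dp(x)

print(x)      # ~ 1.6012490..., matching the CAS's numeric "solution" 1.601249
print(p(x))   # a tiny residual, limited by double precision
```

The point stands: what the iteration delivers is a finite decimal that makes p small, not a number that makes p vanish.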

How about if I ask it to solve for exact solutions? In that case, my computer asserts that

Solution is: ρ₁ where ρ₁ is a root of Z-2Z²+Z⁵-7.

In other words, the computer is not able to find true exact solutions to this equation. The computer knows something that most modern pure mathematicians seem to be unaware of. And what is that? Come closer, and I will whisper the harsh truth to you…

This .. equation .. does .. not .. have .. exact .. solutions.
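And this is not a quirk of MuPAD. As a sketch (assuming the third-party SymPy library is installed), Python’s SymPy does exactly the same thing when asked for exact solutions: it returns formal “root of” objects that merely restate the question as an answer.

```python
from sympy import symbols, solve

x = symbols('x')
solutions = solve(x**5 - 2*x**2 + x - 7, x)

# For a quintic like this one, SymPy can do no better than return formal
# CRootOf objects: "the k-th root of x^5 - 2x^2 + x - 7".
for s in solutions:
    print(s)

# Exactly one of the five roots is real, matching the numeric picture above.
print(sum(1 for s in solutions if s.is_real))
```

In other words: ask two independent computer algebra systems for exact solutions, and both hand the equation straight back to you.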




Maths for Humans: Linear, Quadratic and Inverse Relations

The final week of my FutureLearn MOOC, Maths for Humans: Linear, Quadratic and Inverse Relations, goes live tomorrow. This course feels like all I have been doing for the last three or four months, and I am very glad that it is now coming to a close.

A MOOC is a Massive Open Online Course: free online education, run potentially on a large scale. FutureLearn is a relatively new MOOC platform run by the Open University in the UK, which has vast experience with distance education over many decades. Maths for Humans is a course that Daniel Mansfield and I have put together at UNSW, supported by funding from UNSW Learning and Teaching (L&T).

Currently we have about 8500 people registered, but as is typical with MOOCs only a fraction of those are actively learning, perhaps around a third or so. Which is still a decent number.

And why is a pure mathematician who is re-configuring modern geometry, and also trying to steer the Ship of Mathematics to safer, more placid waters, spending his energies this way? Well, one reason is that I have some grant funding for this, and so some teaching relief. So actually I have not been teaching this semester, because I also have another big project going on to revamp some of our first-year tutorials, which I must tell you about some other time.

I happen to think mathematics is far and away the most interesting subject. I reckon a lot of people would be both pleased and enriched to have an opportunity to learn more mathematics in a systematic, structured, and thought-out way. Courses like the one Daniel and I have put together are exactly in this direction. And also they should help high school students and teachers, which is always a good thing from my point of view.

But you might be surprised that, in addition, I actually learn a lot of important things by trying to figure out how best to present material to people who are not necessarily very advanced in mathematics. That motivates a lot of my YouTube channel too. It turns out that having to explain something to someone who perhaps really has no idea about the subject forces me to think hard about what the essence of the matter is, what the key examples are, what to say and what not to say. And invariably I learn something.

What did I learn putting this MOOC together? A lot. I learnt about power laws in biology, about allometry (the study of scaling in biology), about Zipf’s law, thought some more about Benford’s law (which I have mused on from time to time), and reviewed some elementary supply-and-demand economics that I had more or less forgotten. I had a chance to review lots of things that officially I know, but that it is good to solidify. And I also learnt that the bowhead whale is the longest-lived mammal, with a record lifespan of 211 years.

And I believe that I also learnt the right way to think about the quadratic formula. Let me share with you what I would like to call al-Khwarizmi’s identity:

ax^2 + bx + c = a(x + b/(2a))^2 + (4ac - b^2)/(4a).
This is the heart of the matter as far as I am now concerned. The usual quadratic formula is just a sloppy consequence that results if one is cavalier about taking “square roots”, which I hope none of you are any more. Geometrically, this identity lets us identify the vertex of the parabola y = ax^2+bx+c, namely the point [-b/(2a), (4ac-b^2)/(4a)]. It is this identity, I’ll bet, that students will learn when they study quadratic equations, one thousand years from now.
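The completed-square identity can be checked exactly, with no floating-point fudging, using rational arithmetic. Here is a small sketch in Python (standard library only; the names are my own). Since both sides are quadratic polynomials in x, agreement at three sample points forces them to agree identically for the chosen coefficients.

```python
from fractions import Fraction

def lhs(a, b, c, x):
    # the quadratic a x^2 + b x + c
    return a * x * x + b * x + c

def rhs(a, b, c, x):
    # the completed-square form a(x + b/2a)^2 + (4ac - b^2)/4a,
    # computed with exact rational arithmetic throughout
    return a * (x + Fraction(b, 2 * a))**2 + Fraction(4 * a * c - b * b, 4 * a)

# Two quadratics agreeing at three points are identical, so checking three
# (here four, for good measure) sample x-values verifies the identity.
a, b, c = 3, -5, 7
print(all(lhs(a, b, c, Fraction(t)) == rhs(a, b, c, Fraction(t))
          for t in (-1, 0, 1, 2)))
```

Every equality here is an exact equality of rational numbers, which is precisely the kind of verification the numeric “solutions” above could not deliver.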

The course lasts only another week, but if you register before the end, the course contents will remain available to you after it closes. So check it out! This link ought to work:

Some thanks: Laura Griffin has been a big help as our project manager, and Iman Irannejad has done a cracker job with the videos. Ruslan Ibragimov has been splendid with technical assistance, and Joshua Capel and Galina Levitina have both been a big help running the course. Thanks to them, and to the folks at UNSW L&T who have supported the project from afar.