Upcoming talk on the Goldbach Conjecture

Some exciting news: next month I will be giving a talk which, amongst other things, will resolve the Goldbach Conjecture. That is a rather famous conjecture in number theory which asserts that every even number greater than 2 can be written as the sum of two primes.

The talk will be in the Pure Mathematics Colloquium on November 8, 2016, at the University of New South Wales (UNSW) in Sydney, probably at 3 pm. (Note the change in date from a previous announcement!)


Speaker: A/Prof N J Wildberger (UNSW)

Title: Primes, Complexity and Computation: How Big Number theory resolves the Goldbach Conjecture

Abstract: The Goldbach Conjecture states that every even number greater than 2 can be written as the sum of two primes, and it is one of the most famous unsolved problems in number theory. In this lecture, we look at the problem from the novel point of view of Big Number theory – the investigation of large numbers exceeding the computational capacity of our computers, starting from Ackermann’s and Goodstein’s hyperoperations, to the presenter’s successor-limit hierarchy which parallels ordinal set theory.

This will involve a journey to a distant, seldom visited corner of number theory that impinges very directly on the Goldbach conjecture, and also on quite a few other open problems. Along the way we will meet some seriously big numbers, and pass by vast tracts of dark numbers. We will also bump into philosophical questions about the true nature of natural numbers—and the arithmetic that is possible with them.
We’ll begin with a review of prime numbers and their distribution, notably the Prime Number Theorem of Hadamard and de la Vallee Poussin. Then we look at how complexity interacts with primality and factorization, and present simple but basic results on the compression of complexity. These ideas allow us to slice through the Gordian knot and resolve the Goldbach Conjecture: using common sense, an Aristotelian view on the foundations of mathematics as espoused by James Franklin and his school, and back of the envelope calculations.


This lecture will be live streamed on YouTube at
So anyone from around the world who is interested can watch if they like. Hope you all will be able to join us for this fun, invigorating, and enlightening event! If you are in Sydney on the day, and can head over to UNSW for the event, we will be delighted to see you there.

Viewers’ comments on infinity

One of the many pleasures in having my YouTube channel is getting to observe and participate in lots of spirited discussion by a wide range of viewers making comments on my videos. Here is my latest video in the MathFoundations series:

MathFoundations178: The law of (logical) honesty and the end of infinity

Even after one day, I have had many interesting comments. I would like to take the liberty of sharing with you two particularly cogent and insightful comments. The first is by Karma Peny, who writes (I have added some paragraph breaks):


Excellent video; I could not agree more that it is time to expel “infinity” from mathematics. Not only do we need to define fundamental concepts with more clarity, but we need to define exactly what mathematics is. After thousands of years we still have no clear statement to describe what mathematics is.

In the early days of mathematics, all fundamental axioms were derived from real-world objects and actions. Any dispute over axioms could be resolved by examination of real-world objects and actions. As such, fundamental axioms were to some extent ‘provable’ by studying real-world objects and actions. Mathematics was devised to solve real-world problems and it was underpinned by real-world physics. Essentially mathematics provided a modelling tool to help us manage quantities of objects, determine measurements and to make predictions about the real-world, such as for engineering purposes and in astronomy. Many real-world scenarios have the same underlying physics, and so the same general-purpose mathematics can be applied to all cases. The addition of 6 apples to 2 apples has the same underlying mathematics as the addition of 6 pears to 2 pears.

This apparent generic nature can create the illusion that mathematics has its own ‘existence’ and that it is not simply a tool based on real-world physics. This will annoy many mathematicians, but the fact is that to claim that something is not related to the physical universe is to believe in the supernatural. This is what a belief in the supernatural means… the acceptance of phenomena that are not of this world. Whether maths is in the chemistry of the brain, on a computer, or in written form, it consists of rules devised by humans and all of maths has a physical presence.

To claim it has its own inherent existence or that it is in some way detached from reality is to turn maths into a belief system. The axiom ‘an infinite set exists’ is of equal value to an axiom that states ‘the god of thunder exists’. We can claim it is consistent and cannot be disproved, but both these axioms are equally worthless and irrelevant in the real world, just as are any deductions derived using these axioms. It is often argued that the use of ‘infinity’ in mathematics has proven to be very successful, but the successes could be despite the use of ‘infinity’ rather than because of it. I suspect we will have more clarity and even more successes if we abandon the use of non-real-world axioms.


And now here is the response to Karma Peny’s comment by Amanojack A, a consistent contributor of well written and insightful comments. (I have made a single spelling correction.)


I think you have it exactly right. Math was born out of finding useful abstract objects and situations whose relations were isomorphic/homomorphic to various real-world situations. In other words, a mathematical field’s objects, “moving parts,” and those movements and relations usefully corresponded to certain objects, moving parts, and their movements and relations in the physical world. Pin down the math and now you have a powerful tool applicable to any real-world situation as long as it has an aspect with a homomorphic correspondent in the math. For example, pin down multiplication and you have a powerful tool for counting how many apples you have if they come in crates of 24 each.

So-called “pure math” was born out of the idea that it might be worth developing mathematical objects and relations that correspond to no physical situation yet discovered, but could. Seems noble enough. The problem came when people failed to keep track of context. They floundered into musing about things that not only had no known physical analog, but that couldn’t ever even conceivably have a physical analog. They were unpicturable, things we “only imagine that we can imagine,” as Wildberger said. Like infinity. In another comment I elaborate on how this mind trick is pulled off, making us think we can imagine something we really can’t.

When physicists objected, mathematicians like Hilbert decided to take over the physics departments as well – such has been the power of this social trick of intimidation by pretending to have a unique ability to imagine the nonsensical. Paradox thus became a badge of honor, a sign that you were approaching deep wisdom (rather than stumbling into incoherence). We live with results; they now affect every field, as people point to how physics – king of the sciences! – gets away with it. The infection started with math, spread to physics, and after a century has turned into an epidemic with tendrils extending even as far as the art world of all things.

Returning mathematics to a solid footing is of paramount importance to all fields, as math is the standard-bearer for rigor. It does a good job with logical rigor but tends to ignore semantic rigor as is convenient, which in turn lets all other disciplines off the hook in this regard, weakening all of academia (physics being the main conduit).

You hit the nail on the head when you say the successes of mainstream math have come in spite of infinity rather than because of it. Just like the axiom, “There exists a god of thunder,” the axiom of infinite sets functions as a cultural license; it simply allows those figures with the most authority to make up whatever fudges they want to make it look like they’ve proven something rigorously when they haven’t. The resulting mathematical world and its engineering applications retain the appearance of being held up by mathematical rigor, but they are actually held up variously by fudges handed down by fiat and by engineers adjusting them to avoid the cases where they break down. In other words it’s a big mess that is shoehorned into a usable framework, but not by the rigor of mathematicians – that is just smoke and mirrors (see calculus, for example: “we’ll prove it rigorously, with limits!” – no, we’ll just make a show of it and move on, knowing it already works well enough for engineering).

In a sense, then, infinity has been quite successful…as a tool for advancing people’s math careers and social standing.


Thanks to both Karma Peny and Amanojack A for these penetrating comments!

AlphaGo beats Lee Sedol in second game

I, along with many fans of Go around the world, have been amazed and surprised at the power of Google DeepMind’s AI program AlphaGo, which has burst on the international Go scene in a monumental way, and threatens to change the dynamic and thinking about this great game in a very big way.

Go is originally a Chinese game, but is played extensively also in Japan, Korea and other Asian countries, along with the rest of the world. Here in Sydney we are very lucky to have a high-ranking Korean professional, Young-gil An (8D), to help promote the game and give teaching lessons. I will be heading to the Sydney Go Club this evening to hear him analyse the second game in the historic match between AlphaGo and Lee Sedol, one of the world’s top-ranked professional Go players. AlphaGo has won both of the first two games of this groundbreaking series of 5, which are being played over the next week in Seoul.

I watched much of the second game on YouTube, and loved Michael Redmond’s analysis of the game, and the associated comments by Chris Garlock. You can find the entire game and commentary at https://www.youtube.com/watch?v=l-GsfyVCBu0.

I felt that the innovative aspects of AlphaGo’s opening play were particularly noteworthy. Lee Sedol knows that AlphaGo has records of hundreds of thousands of games in its database (okay, probably millions, since it has been playing itself a lot, which is a unique and interesting way for it to improve), and so if it departs from very standard and traditionally respected patterns in the opening, the question naturally arises: does it know something that he, or other professional Go players, don’t?

This was perhaps most striking with the shoulder hit move on the fourth line stone at B37. Most of us amateurs would have been delighted to press along the fourth line making territory, but I guess Lee Sedol perhaps thought that would be submissive. Great stuff though.

I must admit that the awe and respect I have for the DeepMind team in creating such a powerful program is tempered with a bittersweet sadness that one of the really fundamentally human intellectual disciplines has been caught up to by our computers.

We can’t help but think: when will pure mathematics fall?


Space-filling curves do not exist

When does ideology trump common sense?  This question is very relevant to the sad situation with modern pure mathematics, which is in a dire logical mess. All manner of dubious concepts and arguments are floating around out there, sustained by our fervent desire that the limiting operations underlying modern analysis actually make sense. We must believe — we will believe!

And there is hardly a more obviously suspicious case than that of space-filling curves. These are purportedly one-dimensional continuous curves that pass through every (real) point in the interior of a unit square.

But this contradicts ordinary common sense. It imbues mathematics with an air of disconnection with reality that lay people find disconcerting, just like the Banach-Tarski Paradox nonsense that I talked about a few blogs back.

In mathematics, dimension is an important concept: a line, or more generally a curve, is one-dimensional, while a square in the plane, or a surface, is two-dimensional, and of course we appear to live in a three-dimensional physical space. But from the 17th century onward, mathematicians started to realize that the correct definitions of “curve” and “surface” were in fact much more subtle and logically problematic than at first appeared, and that “dimension” was not so easy to pin down either.

In 1890 a new kind of phenomenon was introduced which cast additional doubt on our understanding of these concepts. This was the space-filling curve of Peano, which ostensibly fills up all of a square, without crossing itself. This was a contentious “construction” at the time, resting on the hotly debated new ideas of Georg Cantor on infinite sets and processes. But the influential German mathematician David Hilbert rose to defend it, and so generally 20th-century pure mathematicians fell into line, and today these curves are considered unremarkable, and just another curious aspect of the modern mathematical landscape.

But do these curves really exist? More fundamentally, are they even well defined? Or are we talking about some kind of mathematical nonsense here?

While Peano’s original article did not contain a diagram, Hilbert in the following year published a version with a picture, essentially the one reproduced below, so we will discuss this so-called space-filling curve of Hilbert. It turns out that the curve is created by iterating a certain process indefinitely. Along the way, we get explicit, finitely prescribed, discrete curves that twist and turn around the square in a predictable pattern. Then “in the limit”, as the analysts like to say—as we “go to infinity”—these concrete zig-zag paths turn into a continuous path that supposedly passes through every real point in the interior of the square exactly once. Does this argument really work??

The pattern can be discerned from the sequence of pictures below. Consider the square as being divided into 4 equal squares. At the first stage we join the centres of these four squares with line segments, moving say from the bottom left to the top left, then to the top right, and then to the bottom right. This gives us a U shape, opening down. Now at the next stage, we join four such U shapes, each in one of the smaller sub-squares of the original. The first opens to the left, the next two open down, and the last opens to the right, and they are linked with segments to form a new shape, which we call U_2 as shown in the second diagram. In the third diagram, we put four smaller U_2 shapes together, also oriented in a similar way to the previous stage, to create a new U_3 curve. And then we carry on doing the same: shrink whatever curve U_n we have just produced, and arrange four copies in the sub-squares oriented in the same way, and linked by segments to get the next curve U_{n+1}.


“Hilbert curve”. Licensed under CC BY-SA 3.0 via Wikipedia – https://en.wikipedia.org/wiki/File:Hilbert_curve.svg#/media/File:Hilbert_curve.svg

These are what we might call Hilbert curves, and they are pleasant and useful objects. Computer scientists sometimes use them to store data in a two-dimensional array in a non-obvious way, and they are also used in image processing. Notice that at this point all these curves are purely rational objects. No real number shenanigans are necessary to either define or construct them. Peano and Hilbert made a real contribution to mathematics in introducing these finite curves!
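To make the finite, entirely rational nature of these curves concrete, here is a minimal sketch that generates the cell centres of a finite curve U_n. It uses the standard bit-manipulation conversion from a position d along the curve to (x, y) grid coordinates (the orientation may differ from Hilbert's original picture, and the function name is my own); only integer arithmetic appears anywhere.

```python
def hilbert_curve(order):
    """Return the 4**order cell coordinates of a finite Hilbert curve.

    Converts each position d along the curve to an (x, y) cell in a
    2**order by 2**order grid, using only integer arithmetic.
    """
    n = 2 ** order                      # the square is an n-by-n grid of cells
    points = []
    for d in range(n * n):
        x = y = 0
        t = d
        s = 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                 # rotate the quadrant when needed
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        points.append((x, y))
    return points

# U_1 visits the four quadrant centres in a U shape
print(hilbert_curve(1))   # [(0, 0), (0, 1), (1, 1), (1, 0)]
```

The key finite properties are easy to check by computation: every cell is visited exactly once, and consecutive cells are adjacent, so each U_n really is a legitimate, fully constructed curve.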

And now we get to the critical point, where Hilbert, following Peano and ultimately Cantor, went beyond the bounds of reasonableness. He postulated that we could carry on this inductive process of producing ever more and more refined and convoluted curves to infinity. Once we have arrived at this infinity, we are supposedly in possession of a “curve” U_{infinity} with remarkable (read unbelievable) properties. [Naturally all of this requires the usual belief system of “real numbers”, which I suppose you know by now is a chimera.]

The “infinite Hilbert curve” U_{infinity} is supposedly continuous, but differentiable nowhere. It supposedly passes through every point of the interior of the square. By this, we mean that every point [x,y], where x and y are “real numbers”, is on this curve somewhere. Supposedly the curve U_{infinity} is “parametrized” by a “real number”, say t in the interval [0,1]. So given a real number t, presented as an infinite decimal trailing off in three dots, we get a point U_{infinity}(t) = [x,y], whose coordinates are likewise infinite decimals, in the unit square [0,1] x [0,1].

(Legal Disclaimer: these real numbers are for illustration purposes only and do not necessarily correspond to reality in any fashion whatsoever. In particular we make no comment on the meaning of the three dot sequences that appear. Perhaps there are oracles or slimy galactic super-octopuses responsible for their generation, perhaps computer programs. You may interpret as you like.)

The infinite Hilbert curve U_{infinity} cannot be drawn. Its “construction” amounts to an imaginary thought process akin to an uncountably infinite army of pointillist painters, each spending an eternity creating their own individual minute point contributions as infinite limits of sequences of rational dots. Unlike those actual, computable and constructible curves U_n, the fantasy curve U_{infinity} has no practical application. How could it, since it does not exist?

Or we could just apply the (surely by now well-known) Law of (Logical) Honesty, formulated on this blog last year, which states:

Don’t pretend that you can do something that you can’t.

While you are free to create curves U_n even for very large n if you have the patience, resources and time, it is both logically and morally wrong to assert that you can continue to do this for all natural numbers, with a legitimate mathematical curve as the end product. It is just not true! You cannot do this. Stop pretending, analysts!

But in modern pure mathematics, we believe everything we are told. Sure, let’s “go to infinity”, even if what we get is obvious nonsense.

Conceptual versus rhetorical definitions

Here are two definitions, both taken from the internet. Definition 1: A dog is a domesticated carnivorous mammal that typically has a long snout, an acute sense of smell, non-retractile claws, and a barking, howling, or whining voice. Definition 2: An encumbered asset is one that is currently being used as security or collateral for a loan.

These two definitions illustrate an important distinction which ought to be more widely appreciated: that some definitions bring into being a new concept, while others merely package conveniently and concisely what we already know.

Each of us from an early age understands what a dog is, by having many of them pointed out to us. We learn from experience that there are many different types of dog, but they mostly all have some common characteristics that generally separate them from other animals, say from cats. The definition of a dog given above is only a summary, short and sweet, of familiar properties of the animal.

Most of us know what an asset is. But the adjective “encumbered”, when applied to assets, is not one that is familiar to us. At some point in the history of finance someone thought up this particular concept and needed a word for it. How about encumbered? This might have been one of several terms proposed: borrowing a word from English with a related but different meaning, and giving it a precise new meaning here.

Let’s give a name to this distinction that I am trying to draw here. Let’s say that a definition that summarizes more concisely, or accurately, something that we already know is a rhetorical definition. Let’s also say that a definition that creates a new kind of object or concept by bringing together previously unconnected properties is a conceptual definition.

If I ask you what love is, you will draw upon your experience with life and the human condition, and give me a list of enough characteristics that characterize love in your view. Almost everyone would have an opinion on the worth of your definition, because we all have prior ideas about what love is, and will judge whether your definition properly captures what we already know from our a priori experience. This kind of definition is largely rhetorical.

If I ask you what a perfect number is, and you are a good mathematics student, you will tell me that it is a natural number which is equal to the sum of those of its divisors which are less than itself. So  6 is a perfect number since  6=1+2+3, and 28 is a perfect number since  28=1+2+4+7+14. This is not the usual colloquial meaning of perfect: we are just hijacking this word to bring into focus a formerly unconsidered notion (this was done by the ancient Greeks in this case). This is a conceptual definition.

In mathematics, we prefer conceptual definitions to rhetorical ones. When we define a concept, we want our statement of that concept to be so clear and precise that it invokes the same notion to all who hear it, even those who are unfamiliar with the idea in question. Prior experience is not required to understand conceptual definitions, except to the extent of having mastered the various technical terms involved as constituents of the definition.

We do not want a situation where, in order to properly understand a term, someone needs some (perhaps implicit) prior understanding of that very term. If I tell you that a number is officially something used for counting or measurement, you are probably not happy. While this kind of loose description is fine for everyday usage, it is not adequate in mathematics. Such a rhetorical definition is ambiguous, because it draws upon your prior loose experience with counting and measuring, and people could reasonably draw the boundaries of the definition differently. In mathematics we want to create fences around our concepts; our definitions ought to be precise, visible and unchanging.

If I tell you that a function is continuous if it varies without abrupt gaps or fractures, then you recognize that I am not stepping up to the plate mathematically speaking. This is a rhetorical definition: it relies on some prior understanding of notions that are loosely intertwined with the very concept we are attempting to frame.

And now we come to the painful reality: modern mathematics is full of rhetorical definitions. Of concepts such as: number, function, variable, set, sequence, real number, formula, statement, topological space, continuity, variety, manifold, group, field, ring, and category. These notions in modern mathematics rest on definitions that are mostly unintelligible to the uninitiated. These definitions implicitly assume familiarity with the topic in question.

The standard treatment in undergrad courses first shows you lots of examples. Then, after enough of these have been digested, you get a “definition” that captures enough aspects of the concept that we feel it characterizes the examples we have come to learn. The cumulative effect is that you have been led to believe you “know” what the concept is, but the reality is something else. This becomes clear quickly when you are presented with non-standard examples that fall outside the comfortable bounds of the textbooks.

This is a big barrier to the dissemination of mathematical knowledge. While modern books and articles give the appearance of precision and completeness, this is often a ruse: implicitly the reader is assumed to have gained some experience with the topic from another source. There is a big difference between a layout of a topic and a summary of that topic. An excellent example is the treatment of real numbers in introductory Calculus or Analysis texts. Have a look at how cavalierly these books just quickly gloss over the “definition”, essentially assuming that you already know what real numbers supposedly are. Didn’t you learn that way back in high school?

Understanding the rhetorical aspects of fundamental concepts in pure mathematics goes a long way to explaining why the subject is beset with logical problems. Sigh. I guess I have some work to do explaining this. But you can do some of it yourself by opening a textbook and looking up one of these terms. Ask yourself: without any examples, pictures or further explanations, does this definition stand up on its own two legs? If so, then it can claim to be a logical conceptual definition. Otherwise it is more likely a dubious rhetorical definition.


The Green-Tao theorem on arithmetical sequences of primes: is it true?

In 2004 Ben Green and Terence Tao ostensibly proved a result which is now called the Green-Tao theorem. It asserts that there are arbitrarily long arithmetical sequences of prime numbers.

That is, given a natural number n, there is a sequence of prime numbers of the form p+mk, k=0,1,…,n-1 where p and m are natural numbers. For example 5, 11, 17, 23, 29 is a sequence of 5 primes in arithmetical progression with difference m=6, while 199, 409, 619, 829, 1039, 1249, 1459, 1669, 1879, 2089 is a sequence of 10 primes in arithmetical progression, with difference m=210.

Up to now, the longest sequence of primes in such an arithmetical progression that I know about was found in 2010 by Benoît Perichon: it is

43,142,746,595,714,191 + (23,681,770)(223,092,870)k, for k = 0 to 25.
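These finite progressions, at least, are straightforwardly checkable. Here is a quick sanity check of the progressions above, using a deterministic Miller-Rabin primality test (with this particular set of witness bases, the test is known to be exact for all numbers below roughly 3.3 × 10^24, which comfortably covers Perichon's terms); the code is my own illustrative sketch.

```python
def is_prime(n):
    """Deterministic Miller-Rabin, exact for all n < about 3.3 * 10**24."""
    if n < 2:
        return False
    small_primes = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small_primes:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                  # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    for a in small_primes:             # these bases suffice below ~3.3e24
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# The two small progressions from the text
assert all(is_prime(5 + 6 * k) for k in range(5))
assert all(is_prime(199 + 210 * k) for k in range(10))

# Perichon's 26-term progression
assert all(is_prime(43142746595714191 + 23681770 * 223092870 * k)
           for k in range(26))
print("all three progressions verified")
```

Each of these verifications is a concrete, finite computation: no appeal to infinite sets is needed to establish them.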

The proof of Green and Tao is clearly a tour-de-force of modern analysis and number theory. It relies on a result called Szemeredi’s theorem along with other results and techniques from analytical number theory, combinatorics, harmonic analysis and ergodic theory. Measure theory naturally plays an important role.

Both Green and Tao are brilliant mathematicians, and Terence Tao is a Fields Medal winner. Terence is also originally Australian, and spent half a year at UNSW some time ago, where I had the pleasure of having some interesting chats over coffee with him.

Is the Green-Tao theorem true? This is actually quite an interesting question. The official proof was published in Annals of Math. 167 (2008), 481-547, and has been intensively studied by dozens of experts. No serious problems with the argument have been found, and it is now acknowledged that the result is firmly established. By the experts.

But is the Green-Tao theorem true? That depends not only on whether the arguments hang together logically when viewed from the top down, but also crucially on whether the underlying assumptions that underpin the theories in which those arguments take place are correct. It is here that one must accept that problems might arise.

So I am not suggesting that any particular argument of the Green-Tao paper is faulty. But there is the more unpleasant possibility that the whole edifice of modern analysis on which it depends is logically compromised, and that this theorem is but one of hundreds in the modern literature that actually don’t work logically if one descends right down to the fundamental level.

Let me state my position, which is rather a minority view. I don’t believe in real numbers. The current definitions of real numbers are logically invalid in my opinion. I am of the view that the arithmetic of such real numbers also has not been established properly.

I do not believe that the concept of a set has been established, and so consequently for me any discussion involving infinite sets is logically compromised. I do not accept that there is a completed set of natural numbers. I find fault with analysts’ invocation of limits, as often this brazenly assumes that one is able to perform an infinite number of operations, which I deny. I don’t believe that transcendental functions currently have a correct formulation, and I reject modern topology’s reliance on infinite sets of infinite sets to establish continuity. I believe that analysts are not being upfront in their approach to Wittgenstein’s distinction between choice and algorithms when they discuss infinite processes.

Consequently I find most of the theorems of Measure Theory meaningless. The usual arguments that fill the analysis journals are to me but possible precursors to a more rigorous analysis that may or may not be established some time in the future.

Clearly I have big problems.

But as a logical consequence of my position, I cannot accept the argument of the Green-Tao theorem, because I do not share the belief in the underlying assumptions that modern experts in analysis have.

But there is another reason why I do not accept the Green-Tao theorem, that does not depend on a critical analysis of their proof. I do not accept the Green-Tao theorem because I am sure that it is not true. I do not believe that there are arbitrarily long arithmetical progressions of prime numbers.

Let me be more specific. Consider the number z=10^10^10^10^10^10^10^10^10^10+23 that appeared in my debate last year with James Franklin called Infinity: does it exist?
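To get a feel for the size of z, here is a small sketch of the right-associated power tower (the function name is mine). Even the third level of the tower is already far beyond anything a physical computer could write down, and z sits seven levels higher still (leaving the +23 aside).

```python
def tower(base, height):
    """Right-associated power tower: tower(10, 3) = 10**(10**10)."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

print(tower(10, 1))    # 10
print(tower(10, 2))    # 10000000000
# tower(10, 3) = 10**(10**10) already has 10**10 + 1 decimal digits,
# far more digits than any physical computer could store.
```

Only the first two levels of the tower can actually be evaluated; the rest of z exists only as an unevaluated expression.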

My Claim: There is no arithmetical sequence of primes of length z.

This claim is to be distinguished from the argument that such a progression exists, but it would be just too hard for us to find it. My position is not based on what our computers can or cannot do. Rather, I assert that there is no such progression of prime numbers. Never was, never will be.

I do not have a proof of this claim, but I have a very good argument for it. I am more than 99.99% sure that this argument is correct. For me, the Green-Tao argument, powerful and impressive though it is, would be better rephrased in a more limited and precise way.

I do not doubt that with some considerable additional work, they, or others, might be able to reframe the statement and argument to be independent of all infinite considerations, real number musings, and dubious measure theoretic arguments. Then some true bounds on the extent and validity of the result might be established. That would be a lot of effort, but it might then be logically correct, from the ground up.

Uncomputable decimals and measure theory: is it nonsense?

Modern Measure Theory has something of a glitch. It asserts, as a main result, something which is rather obviously logically problematic (I am feeling polite this New Year’s morning!). Let’s talk a little about this subject today.

Modern measure theory studies, for example, the interval [0,1] of so-called real numbers. There are quite a lot of different ways of trying to conjure these real numbers into existence, and I have discussed some of these at length in many of my YouTube videos and also here in this blog: Dedekind cuts, Cauchy sequences of rationals, continued fractions, infinite decimals, or just via some axiomatic wishful thinking. In this list, and in what follows, I will suppress my natural inclination to put all dubious concepts in quotes. So don’t believe for a second that I buy most of the notions I am now going to talk about.

Measure theory texts are remarkably casual about defining and constructing the real numbers. Let’s just assume that they are there, shall we? Once we have the real numbers, measure theory asserts that it is meaningful to consider various infinite subsets of them, and to assign numbers that measure the extent of these various subsets, or at least some of them. The numbers that are assigned are also typically real numbers. The starting point of all this is familiar and reasonable: that a rational interval [a,b], where a,b are rational numbers and a is less than or equal to b, ought to have measure (b-a).
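That starting point, at least, can be carried out entirely within exact rational arithmetic. Here is a minimal sketch (my own, for illustration) that computes the total measure of a finite union of rational intervals by sorting and merging overlaps; no limiting process appears anywhere.

```python
from fractions import Fraction as F

def total_length(intervals):
    """Exact total measure of a finite union of rational intervals [a, b].

    Sorts by left endpoint and merges overlapping intervals, so the whole
    computation stays within rational arithmetic.
    """
    merged = []
    for a, b in sorted(intervals):
        if merged and a <= merged[-1][1]:          # overlaps the last interval
            merged[-1][1] = max(merged[-1][1], b)
        else:
            merged.append([a, b])
    return sum(b - a for a, b in merged)

# [0, 1/2] and [1/3, 3/4] overlap: their union is [0, 3/4]
print(total_length([(F(0), F(1, 2)), (F(1, 3), F(3, 4))]))   # 3/4
```

This finite, rational fragment of the theory is unproblematic; the logical difficulties discussed below only arrive with the passage to infinite subsets of "real numbers".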

So measure theory is an elaborate scheme that attempts to extend this simple primary school intuition to the rather more convoluted, and logically problematic, arena of real numbers and their subsets. And it wants to do this without addressing, or even acknowledging, any of the serious logical problems that people (like me) have been pointing out for quite a long time.

If you open a book on modern measure theory, you will find a long chain of definitions and theorems, so-called. But what you will not find, along with a thorough discussion of the logical problems, is a wide range of illustrative examples. This is a theory that floats freely above the unpleasant constraint of exhibiting concrete examples.

Your typical student is of course not happy with this situation: how can she verify independently that the ideas actually have some tangible meaning? Young people are obliged to accept the theories they learn as undergraduates on the terms they are given, and as usual appeals to authority play a big role. And when they turn to the internet, as they do these days, they often find the same assumptions and lack of interest in specific examples and concrete computations.

Here, to illustrate, is the Example section of the Wikipedia entry on Measure, which is what you get when you search for Measure Theory (from Wikipedia at https://en.wikipedia.org/wiki/Measure_(mathematics) ):

Some important measures are listed here.

Other ‘named’ measures used in various theories include: Borel measure, Jordan measure, ergodic measure, Euler measure, Gaussian measure, Baire measure, Radon measure, Young measure, and strong measure zero.

In physics an example of a measure is spatial distribution of mass (see e.g., gravity potential), or another non-negative extensive property, conserved (see conservation law for a list of these) or not. Negative values lead to signed measures, see “generalizations” below.

Liouville measure, known also as the natural volume form on a symplectic manifold, is useful in classical statistical and Hamiltonian mechanics.

Gibbs measure is widely used in statistical mechanics, often under the name canonical ensemble.


(Back to the regular channel.) Now one of the serious problems with theories that float free of examples is that it becomes harder to tell when we have overstepped logical bounds. This is a problem with many theories based on real numbers.

Here is a key illustration: modern measure theory asserts that the real numbers with which it is preoccupied actually fall into two types: the computable ones, and the uncomputable ones. Computable ones include the rational numbers, and all irrational numbers that (supposedly) arise as algebraic numbers (solutions of polynomial equations), definite integrals, infinite sums, infinite products, or values of transcendental functions; and in fact any number whose digits can be generated by some computer program.

These include sqrt(2), ln 10, pi, e, sqrt(3+sqrt(5)), Euler’s constant gamma, values of the zeta function, gamma function, etc. etc. Every number that you will ever meet concretely in a mathematics course is a computable number. Any kind of decimal that is conjured up by some pattern, say 0.1101001000100001000001…, or even by some rule such as 0.a_1 a_2 a_3 … where a_i is 1 unless i is an odd perfect number, in which case a_i = 2, is a computable number.
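To see just how concrete such a pattern decimal is, here is a minimal sketch in Python (the function name pattern_digits is my own choice) that generates the digits of 0.1101001000100001…, where the k-th block is a single 1 followed by k-1 zeros:

```python
def pattern_digits(n):
    # First n digits after the decimal point of 0.110100100010000...
    # The k-th block is a 1 followed by (k - 1) zeros, for k = 1, 2, 3, ...
    digits = []
    k = 1
    while len(digits) < n:
        digits.append(1)
        digits.extend([0] * (k - 1))
        k += 1
    return digits[:n]

print("".join(map(str, pattern_digits(22))))  # prints 1101001000100001000001
```

A finite program that produces the n-th digit on demand is exactly what makes such a decimal computable.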

And what is then an uncomputable real number?? Hmm.. let’s just say this rather quickly and then move on to something more interesting, okay? Right: an uncomputable real number is just a real number that is not computable.

Uhh.. such as…? Sorry, but there are no known examples. It is impossible to write down any such uncomputable number in a concrete fashion. And what do these uncomputable numbers do for us? Well, the short answer is: nothing. They are not used in practical applications, and even theoretically, they don’t gain us anything. But they are there, my friends—oh yes, they are there — because the measure theory texts tell us they are!

And the measure theory texts tell us even more: that the uncomputable real numbers in fact swamp the computable ones measure-theoretically. In the interval [0,1], the computable numbers have measure zero, while the uncomputable numbers have measure one.

Yes, you heard correctly, this is a bona fide theorem of modern measure theory: the computable numbers in [0,1] have measure zero, while the uncomputable numbers in [0,1] have measure one!
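For the record, the argument the texts offer is the standard covering trick. Since there are only countably many computer programs, the computable numbers in [0,1] may (so the story goes) be listed as x_1, x_2, x_3, …, and for any epsilon greater than 0 the n-th one can be covered by an interval of length epsilon/2^n, so that

```latex
\mu(\{x_1, x_2, x_3, \ldots\}) \le \sum_{n=1}^{\infty} \frac{\varepsilon}{2^n} = \varepsilon .
```

Since epsilon was arbitrary, the computable numbers get measure zero, and their complement in [0,1] gets measure one. Whether an "arbitrary epsilon" and a completed infinite listing of all programs actually make sense is, of course, precisely the kind of question being begged.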

Oh, sure. So according to modern probability theory, which is based on measure theory, the probability of picking a random real number in [0,1] and getting a computable one is zero. Yet no measure theorist can give us even one example of a single uncomputable real number.

This is modern pure mathematics going beyond parody. Future generations are going to shake their heads in disbelief that we happily swallowed this kind of thing without even a trace of resistance, or at least skepticism.

But this is 2016, and the start of a New Year! I hope you will join me in an exciting venture to expose some of the many logical blemishes of modern pure mathematics, and to propose some much better alternatives — theories that actually make sense. Tell your friends, spread the word, and let’s not be afraid of thinking differently. Happy New Year.