One of the many pleasures in having my YouTube channel is getting to observe and participate in lots of spirited discussion by a wide range of viewers making comments on my videos. Here is my latest video in the MathFoundations series:

MathFoundations178: The law of (logical) honesty and the end of infinity

Even after one day, I have had many interesting comments. I would like to take the liberty of sharing with you two particularly cogent and insightful comments. The first is by Karma Peny, who writes (I have added some paragraph breaks):

***************

Excellent video; I could not agree more that it is time to expel “infinity” from mathematics. Not only do we need to define fundamental concepts with more clarity, but we need to define exactly what mathematics is. After thousands of years we still have no clear statement to describe what mathematics is.

In the early days of mathematics, all fundamental axioms were derived from real-world objects and actions. Any dispute over axioms could be resolved by examination of real-world objects and actions. As such, fundamental axioms were to some extent ‘provable’ by studying real-world objects and actions. Mathematics was devised to solve real-world problems and it was underpinned by real-world physics. Essentially mathematics provided a modelling tool to help us manage quantities of objects, determine measurements and make predictions about the real world, such as for engineering purposes and in astronomy. Many real-world scenarios have the same underlying physics, and so the same general-purpose mathematics can be applied to all cases. The addition of 6 apples to 2 apples has the same underlying mathematics as the addition of 6 pears to 2 pears.

This apparent generic nature can create the illusion that mathematics has its own ‘existence’ and that it is not simply a tool based on real-world physics. This will annoy many mathematicians, but the fact is that to claim that something is not related to the physical universe is to believe in the supernatural. This is what a belief in the supernatural means… the acceptance of phenomena that are not of this world. Whether maths is in the chemistry of the brain, on a computer, or in written form, it consists of rules devised by humans and all of maths has a physical presence.

To claim it has its own inherent existence or that it is in some way detached from reality is to turn maths into a belief system. The axiom ‘an infinite set exists’ is of equal value to an axiom that states ‘the god of thunder exists’. We can claim it is consistent and cannot be disproved, but both these axioms are equally worthless and irrelevant in the real world, just as are any deductions derived using these axioms. It is often argued that the use of ‘infinity’ in mathematics has proven to be very successful, but the successes could be despite the use of ‘infinity’ rather than because of it. I suspect we will have more clarity and even more successes if we abandon the use of non-real-world axioms.

*****************

And now here is the response to Karma Peny’s comment by Amanojack A, a consistent contributor of well written and insightful comments. (I have made a single spelling correction.)

*****************

I think you have it exactly right. Math was born out of finding useful abstract objects and situations whose relations were isomorphic/homomorphic to various real-world situations. In other words, a mathematical field’s objects, “moving parts,” and those movements and relations usefully corresponded to certain objects, moving parts, and their movements and relations in the physical world. Pin down the math and now you have a powerful tool applicable to any real-world situation as long as it has an aspect with a homomorphic correspondent in the math. For example, pin down multiplication and you have a powerful tool for counting how many apples you have if they come in crates of 24 each.

So-called “pure math” was born out of the idea that it might be worth developing mathematical objects and relations that correspond to no physical situation yet discovered, but could. Seems noble enough. The problem came when people failed to keep track of context. They floundered into musing about things that not only had no known physical analog, but that couldn’t ever even conceivably have a physical analog. They were unpicturable, things we “only imagine that we can imagine,” as Wildberger said. Like infinity. In another comment I elaborate on how this mind trick is pulled off, making us think we can imagine something we really can’t.

When physicists objected, mathematicians like Hilbert decided to take over the physics departments as well – such has been the power of this social trick of intimidation by pretending to have a unique ability to imagine the nonsensical. Paradox thus became a badge of honor, a sign that you were approaching deep wisdom (rather than stumbling into incoherence). We live with the results; they now affect every field, as people point to how physics – king of the sciences! – gets away with it. The infection started with math, spread to physics, and after a century has turned into an epidemic with tendrils extending even as far as the art world of all things.

Returning mathematics to a solid footing is of paramount importance to all fields, as math is the standard bearer for rigor. It does a good job with logical rigor but tends to ignore semantic rigor as is convenient, which in turn lets all other disciplines off the hook in this regard, weakening all of academia (physics being the main conduit).

You hit the nail on the head when you say the successes of mainstream math have come in spite of infinity rather than because of it. Just like the axiom, “There exists a god of thunder,” the axiom of infinite sets functions as a cultural license; it simply allows those figures with the most authority to make up whatever fudges they want to make it look like they’ve proven something rigorously when they haven’t. The resulting mathematical world and its engineering applications retain the appearance of being held up by mathematical rigor, but they are actually held up variously by fudges handed down by fiat and by engineers adjusting them to avoid the cases where they break down. In other words it’s a big mess that is shoehorned into a usable framework, but not by the rigor of mathematicians – that is just smoke and mirrors (see calculus, for example; “we’ll prove it rigorously, with limits!” – no, we’ll just make a show of it and move on, knowing it already works well enough for engineering).

In a sense, then, infinity has been quite successful…as a tool for advancing people’s math careers and social standing.

********************

Thanks to both Karma Peny and Amanojack A for these penetrating comments!

# AlphaGo beats Lee Sedol in second game

I, along with many fans of Go around the world, have been amazed and surprised at the power of Google DeepMind’s AI program AlphaGo, which has burst onto the international Go scene in a monumental way, and threatens to change the dynamics and thinking about this great game.

Go is originally a Chinese game, but is also played extensively in Japan, Korea and other Asian countries, along with the rest of the world. Here in Sydney we are very lucky to have a high-ranking Korean professional, Young-gil An (8D), to help promote the game and give teaching lessons. I will be heading to the Sydney Go Club this evening, to hear him analyse the second game in the historic match between AlphaGo and Lee Sedol, one of the world’s top-ranked professional Go players. AlphaGo has won both of the first two games of this groundbreaking series of five, which are being played over the next week in Seoul.

I watched much of the second game on YouTube, and loved Michael Redmond’s analysis of the game, and the associated comments by Chris Garlock. You can find the entire game and commentary at https://www.youtube.com/watch?v=l-GsfyVCBu0.

I felt that the innovative aspects of AlphaGo’s opening play were particularly noteworthy. Lee Sedol knows that AlphaGo has records of hundreds of thousands of games in its database (okay, probably millions, since it has been playing itself a lot, which is a unique and interesting way for it to improve), and so if it departs from very standard and traditionally respected patterns in the opening, the question naturally arises: does it know something that he, or other professional Go players, don’t?

This was perhaps most striking with the shoulder hit move on the fourth line stone at B37. Most of us amateurs would have been delighted to press along the fourth line making territory, but I guess Lee Sedol perhaps thought that would be submissive. Great stuff though.

I must admit that the awe and respect I have for the DeepMind team in creating such a powerful program is tempered with a bittersweet sadness that one of the really fundamentally human intellectual disciplines has been caught up to by our computers.

We can’t help but think: when will pure mathematics fall?

# Space-filling curves do not exist

When does ideology trump common sense?  This question is very relevant to the sad situation with modern pure mathematics, which is in a dire logical mess. All manner of dubious concepts and arguments are floating around out there, sustained by our fervent desire that the limiting operations underlying modern analysis actually make sense. We must believe — we will believe!

And there is hardly a more obviously suspicious case than that of space-filling curves. These are purportedly one-dimensional continuous curves that pass through every (real) point in the interior of a unit square.

But this contradicts ordinary common sense. It imbues mathematics with an air of disconnection from reality that lay people find disconcerting, just like the Banach-Tarski Paradox nonsense that I talked about a few blogs back.

In mathematics, dimension is an important concept; a line, or more generally a curve, is one-dimensional, while a square in the plane, or more generally a surface, is two-dimensional, and of course we appear to live in a three-dimensional physical space. But from the 17th century onwards, mathematicians started to realize that the correct definitions of “curve” and “surface” were in fact much more subtle and logically problematic than at first appeared, and that “dimension” was not so easy to pin down either.

In 1890 a new kind of phenomenon was introduced which cast additional doubt on our understanding of these concepts. This was the space-filling curve of Peano, which ostensibly fills up all of a square, without crossing itself. This was a contentious “construction” at the time, resting on the hotly debated new ideas of Georg Cantor on infinite sets and processes. But the influential German mathematician David Hilbert rose to defend it, and so generally 20th-century pure mathematicians fell into line, and today these curves are considered unremarkable, and just another curious aspect of the modern mathematical landscape.

But do these curves really exist? More fundamentally, are they even well defined? Or are we talking about some kind of mathematical nonsense here?

While Peano’s original article did not contain a diagram, Hilbert in the following year published a version with a picture, essentially the one reproduced below, so we will discuss this so-called space-filling curve of Hilbert. It turns out that the curve is created by iterating a certain process indefinitely. Along the way, we get explicit, finitely prescribed, discrete curves that twist and turn around the square in a predictable pattern. Then “in the limit”, as the analysts like to say—as we “go to infinity”—these concrete zig-zag paths turn into a continuous path that supposedly passes through every real point in the interior of the square exactly once. Does this argument really work??

The pattern can be discerned from the sequence of pictures below. Consider the square as being divided into 4 equal squares. At the first stage we join the centres of these four squares with line segments, moving say from the bottom left to the top left, then to the top right, and then to the bottom right. This gives us a U shape opening down, which we call U_1. Now at the next stage, we join four such U shapes, each in one of the smaller sub-squares of the original. The first opens to the left, the next two open down, and the last opens to the right, and they are linked with segments to form a new shape, which we call U_2, as shown in the second diagram. In the third diagram, we put four smaller U_2 shapes together, oriented in a similar way to the previous stage, to create a new curve U_3. And then we carry on doing the same: shrink whatever curve U_n we have just produced, arrange four copies in the sub-squares oriented in the same way, and link them with segments to get the next curve U_{n+1}.

“Hilbert curve”. Licensed under CC BY-SA 3.0 via Wikipedia – https://en.wikipedia.org/wiki/File:Hilbert_curve.svg#/media/File:Hilbert_curve.svg

These are what we might call Hilbert curves, and they are pleasant and useful objects. Computer scientists sometimes use them to store data in a two-dimensional array in a non-obvious way, and they are also used in image processing. Notice that at this point all these curves are purely rational objects. No real number shenanigans are necessary to either define or construct them. Peano and Hilbert made a real contribution to mathematics in introducing these finite curves!
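The recursion just described is easy to carry out explicitly. Here is a minimal Python sketch (my own illustration; the function name `hilbert_points` is just a label I chose) that generates the 4^n centre points of U_n in visiting order, using exact rational coordinates throughout, which underlines the point that these finite curves are purely rational objects:

```python
from fractions import Fraction

def hilbert_points(n):
    """Centre points of the n-th Hilbert curve U_n, in visiting order.

    Returns 4**n points (x, y) in the unit square, all with exact
    rational coordinates -- no real numbers required.
    """
    if n == 0:
        return [(Fraction(1, 2), Fraction(1, 2))]
    prev = hilbert_points(n - 1)
    h = Fraction(1, 2)
    pts = []
    # bottom-left copy, transposed so it opens to the left
    pts += [(y * h, x * h) for x, y in prev]
    # top-left copy, opening down
    pts += [(x * h, y * h + h) for x, y in prev]
    # top-right copy, opening down
    pts += [(x * h + h, y * h + h) for x, y in prev]
    # bottom-right copy, reflected so it opens to the right
    pts += [(1 - y * h, h - x * h) for x, y in prev]
    return pts

# U_1 is the U shape joining the four quadrant centres
print(hilbert_points(1))

# At stage n there are 4**n points, and consecutive points are
# always exactly 1/2**n apart (all steps are axis-aligned)
pts = hilbert_points(4)
steps = {abs(a[0] - b[0]) + abs(a[1] - b[1])
         for a, b in zip(pts, pts[1:])}
print(len(pts), steps)
```

Running this for increasing n reproduces exactly the pictures in the diagram: each stage contains four shrunken copies of the previous one, linked by three short segments.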

And now we get to the critical point, where Hilbert, following Peano and ultimately Cantor, went beyond the bounds of reasonableness. He postulated that we could carry on this inductive process of producing ever more and more refined and convoluted curves to infinity. Once we have arrived at this infinity, we are supposedly in possession of a “curve” U_{infinity} with remarkable (read unbelievable) properties. [Naturally all of this requires the usual belief system of “real numbers”, which I suppose you know by now is a chimera.]

The “infinite Hilbert curve” U_{infinity} is supposedly continuous, but differentiable nowhere. It supposedly passes through every point of the interior of the square. By this, we mean that every point [x,y], where  x and y are “real numbers”, is on this curve somewhere. Supposedly the curve U_{infinity} is “parametrized” by a “real number”, say t in the interval [0,1]. So given a real number such as

t=0.52897750910859340798571569247120345759873492374566519237492742938775…

we get a point U_{infinity}(t)=

[0.68909814147239785423401979874234…,0.36799574952335879124312358098423435…]

in the unit square [0,1] x [0,1].

(Legal Disclaimer: these real numbers are for illustration purposes only and do not necessarily correspond to reality in any fashion whatsoever. In particular we make no comment on the meaning of the three dot sequences that appear. Perhaps there are oracles or slimy galactic super-octopuses responsible for their generation, perhaps computer programs. You may interpret as you like.)

The infinite Hilbert curve U_{infinity} cannot be drawn. Its “construction” amounts to an imaginary thought process akin to an uncountably infinite army of pointillist painters, each spending an eternity creating their own individual minute point contributions as infinite limits of sequences of rational dots. Unlike those actual, computable and constructible curves U_n, the fantasy curve U_{infinity} has no practical application. How could it, since it does not exist?

Or we could just apply the (surely by now well-known) Law of (Logical) Honesty, formulated on this blog last year, which states:

Don’t pretend that you can do something that you can’t.

While you are free to create curves U_n even for very large n if you have the patience, resources and time, it is both logically and morally wrong to assert that you can continue to do this for all natural numbers, with a legitimate mathematical curve as the end product. It is just not true! You cannot do this. Stop pretending, analysts!

But in modern pure mathematics, we believe everything we are told. Sure, let’s “go to infinity”, even if what we get is obvious nonsense.

# Conceptual versus rhetorical definitions

Here are two definitions, both taken from the internet. Definition 1: A dog is a domesticated carnivorous mammal that typically has a long snout, an acute sense of smell, non-retractile claws, and a barking, howling, or whining voice. Definition 2: An encumbered asset is one that is currently being used as security or collateral for a loan.

These two definitions illustrate an important distinction which ought to be more widely appreciated: that some definitions bring into being a new concept, while others merely package conveniently and concisely what we already know.

Each of us from an early age understands what a dog is, by having many of them pointed out to us. We learn from experience that there are many different types of dog, but they mostly all have some common characteristics that generally separate them from, say, other animals, typically cats. The definition of a dog given above is only a summary, short and sweet, of familiar properties of the animal.

Most of us know what an asset is. But the adjective “encumbered”, when applied to assets, is not one that is familiar to us. At some point in the history of finance someone thought up this particular concept and needed a word for it. How about encumbered? This might have been one of several terms proposed: borrowing a word from English with a related but different meaning, and giving it here a precise new meaning.

Let’s give a name to this distinction that I am trying to draw here. Let’s say that a definition that summarizes more concisely, or accurately, something that we already know is a rhetorical definition. Let’s also say that a definition that creates a new kind of object or concept by bringing together previously unconnected properties is a conceptual definition.

If I ask you what love is, you will draw upon your experience with life and the human condition, and give me a list of characteristics that capture love in your view. Almost everyone would have an opinion on the worth of your definition, because we all have prior ideas about what love is, and will judge whether your definition properly captures what we already know from our a priori experience. This kind of definition is largely rhetorical.

If I ask you what a perfect number is, and you are a good mathematics student, you will tell me that it is a natural number which is equal to the sum of those of its divisors which are less than itself. So  6 is a perfect number since  6=1+2+3, and 28 is a perfect number since  28=1+2+4+7+14. This is not the usual colloquial meaning of perfect: we are just hijacking this word to bring into focus a formerly unconsidered notion (this was done by the ancient Greeks in this case). This is a conceptual definition.
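A nice feature of a conceptual definition like this is that it can be checked mechanically, with no prior acquaintance with the idea. Here is a small Python sketch of my own (not part of the original discussion) that implements the definition verbatim:

```python
def is_perfect(n):
    """True when n equals the sum of its divisors less than itself."""
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

# The perfect numbers below 500, per the Greek definition
print([n for n in range(1, 500) if is_perfect(n)])  # [6, 28, 496]
```

The function knows nothing about the colloquial meaning of “perfect”; the conceptual definition alone determines its behaviour, which is exactly the point being made above.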

In mathematics, we prefer conceptual definitions to rhetorical ones. When we define a concept, we want our statement of that concept to be so clear and precise that it invokes the same notion to all who hear it, even those who are unfamiliar with the idea in question. Prior experience is not required to understand conceptual definitions, except to the extent of having mastered the various technical terms involved as constituents of the definition.

We do not want a situation where, in order to properly understand a term, someone needs some (perhaps implicit) prior understanding of that very term. If I tell you that a number officially is something used for counting or measurement, you are probably not happy. While this kind of loose description is fine for everyday usage, it is not adequate in mathematics. Such a rhetorical definition is ambiguous, because it draws upon your prior loose experience with counting and measuring, and different people may draw the boundaries of the definition differently. In mathematics we want to create fences around our concepts; our definitions ought to be precise, visible and unchanging.

If I tell you that a function is continuous if it varies without abrupt gaps or fractures, then you recognize that I am not stepping up to the plate, mathematically speaking. This is a rhetorical definition: it relies on some prior understanding of notions that are loosely intertwined with the very concept we are attempting to frame.

And now we come to the painful reality: modern mathematics is full of rhetorical definitions. Of concepts such as: number, function, variable, set, sequence, real number, formula, statement, topological space, continuity, variety, manifold, group, field, ring, and category. These notions in modern mathematics rest on definitions that are mostly unintelligible to the uninitiated. These definitions implicitly assume familiarity with the topic in question.

The standard treatment in undergrad courses first shows you lots of examples. Then, after enough of these have been digested, you get a “definition” that captures enough aspects of the concept that we feel it characterises the examples we have come to learn. The cumulative effect is that you have been led to believe you “know” what the concept is, but the reality is something else. This becomes clear quickly when you are presented with non-standard examples that fall outside the comfortable bounds of the textbooks.

This is a big barrier to the dissemination of mathematical knowledge. While modern books and articles give the appearance of precision and completeness, this is often a ruse: implicitly the reader is assumed to have gained some experience with the topic from another source. There is a big difference between a layout of a topic and a summary of that topic. An excellent example is the treatment of real numbers in introductory Calculus or Analysis texts. Have a look at how cavalierly these books just quickly gloss over the “definition”, essentially assuming that you already know what real numbers supposedly are. Didn’t you learn that way back in high school?

Understanding the rhetorical aspects of fundamental concepts in pure mathematics goes a long way to explaining why the subject is beset with logical problems. Sigh. I guess I have some work to do explaining this. But you can do some of it yourself by opening a textbook and looking up one of these terms. Ask yourself: without any examples, pictures or further explanations, does this definition stand up on its own two legs? If so, then it can claim to be a logical conceptual definition. Otherwise it is more likely a dubious rhetorical definition.

# The Green-Tao theorem on arithmetical sequences of primes: is it true?

In 2004 Ben Green and Terence Tao ostensibly proved a result which is now called the Green-Tao theorem. It asserts that there are arbitrarily long arithmetical sequences of prime numbers.

That is, given a natural number n, there is a sequence of prime numbers of the form p+mk, k=0,1,…,n-1 where p and m are natural numbers. For example 5, 11, 17, 23, 29 is a sequence of 5 primes in arithmetical progression with difference m=6, while 199, 409, 619, 829, 1039, 1249, 1459, 1669, 1879, 2089 is a sequence of 10 primes in arithmetical progression, with difference m=210.

Up to now, the longest sequence of primes in such an arithmetical progression that I know of was found in 2010 by Benoît Perichon: it is

43,142,746,595,714,191 + (23,681,770)(223,092,870)k, for k = 0 to 25.
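Each such claimed progression is a finite object, so anyone can check it directly. Here is a short Python sketch of my own, using a standard deterministic Miller–Rabin primality test (valid for all numbers far beyond these sizes), that verifies the two small examples above and Perichon’s 26-term progression:

```python
def is_prime(n):
    """Deterministic Miller-Rabin, correct for all n below ~3.3 * 10**24."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def is_prime_ap(p, m, length):
    """Check that p, p + m, ..., p + (length - 1) * m are all prime."""
    return all(is_prime(p + k * m) for k in range(length))

print(is_prime_ap(5, 6, 5))        # True: 5, 11, 17, 23, 29
print(is_prime_ap(199, 210, 10))   # True: 199, 409, ..., 2089
print(is_prime_ap(43142746595714191, 23681770 * 223092870, 26))  # True
print(is_prime_ap(5, 6, 6))        # False: 5 + 5*6 = 35 = 5*7
```

Note that verifying any one finite progression like this is entirely unproblematic; it is the claim of arbitrarily long progressions that is at issue in what follows.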

The proof of Green and Tao is clearly a tour-de-force of modern analysis and number theory. It relies on a result called Szemerédi’s theorem along with other results and techniques from analytic number theory, combinatorics, harmonic analysis and ergodic theory. Measure theory naturally plays an important role.

Both Green and Tao are brilliant mathematicians, and Terence Tao is a Fields Medal winner. Terence is also originally Australian, and spent half a year at UNSW some time ago, where I had the pleasure of having some interesting chats over coffee with him.

Is the Green-Tao theorem true? This is actually quite an interesting question. The official proof was published in Annals of Math. 167 (2008), 481-547, and has been intensively studied by dozens of experts. No serious problems with the argument have been found, and it is now acknowledged that the result is firmly established. By the experts.

But is the Green-Tao theorem true? That depends not only on whether the arguments hang together logically when viewed from the top down, but also crucially on whether the underlying assumptions that underpin the theories in which those arguments take place are correct. It is here that one must accept that problems might arise.

So I am not suggesting that any particular argument of the Green-Tao paper is faulty. But there is the more unpleasant possibility that the whole edifice of modern analysis on which it depends is logically compromised, and that this theorem is but one of hundreds in the modern literature that actually don’t work logically if one descends right down to the fundamental level.

Let me state my position, which is rather a minority view. I don’t believe in real numbers. The current definitions of real numbers are logically invalid in my opinion. I am of the view that the arithmetic of such real numbers also has not been established properly.

I do not believe that the concept of a set has been established, and so consequently for me any discussion involving infinite sets is logically compromised. I do not accept that there is a completed set of natural numbers. I find fault with analysts’ invocation of limits, as often this brazenly assumes that one is able to perform an infinite number of operations, which I deny. I don’t believe that transcendental functions currently have a correct formulation, and I reject modern topology’s reliance on infinite sets of infinite sets to establish continuity. I believe that analysts are not being upfront in their approach to Wittgenstein’s distinction between choice and algorithms when they discuss infinite processes.

Consequently I find most of the theorems of Measure Theory meaningless. The usual arguments that fill the analysis journals are to me but possible precursors to a more rigorous analysis that may or may not be established some time in the future.

Clearly I have big problems.

But as a logical consequence of my position, I cannot accept the argument of the Green-Tao theorem, because I do not share the belief in the underlying assumptions that modern experts in analysis have.

But there is another reason why I do not accept the Green-Tao theorem, that does not depend on a critical analysis of their proof. I do not accept the Green-Tao theorem because I am sure that it is not true. I do not believe that there are arbitrarily long arithmetical progressions of prime numbers.

Let me be more specific. Consider the number z=10^10^10^10^10^10^10^10^10^10+23 that appeared in my debate last year with James Franklin called Infinity: does it exist?

My Claim: There is no arithmetical sequence of primes of length z.

This claim is to be distinguished from the argument that such a progression exists, but it would be just too hard for us to find it. My position is not based on what our computers can or cannot do. Rather, I assert that there is no such progression of prime numbers. Never was, never will be.

I do not have a proof of this claim, but I have a very good argument for it. I am more than 99.99% sure that this argument is correct. For me, the Green-Tao argument, powerful and impressive though it is, would be better rephrased in a more limited and precise way.

I do not doubt that with some considerable additional work, they, or others, might be able to reframe the statement and argument to be independent of all infinite considerations, real number musings, and dubious measure theoretic arguments. Then some true bounds on the extent and validity of the result might be established. That would be a lot of effort, but it might then be logically correct, from the ground up.

# Uncomputable decimals and measure theory: is it nonsense?

Modern Measure Theory has something of a glitch. It asserts, as a main result, something which is rather obviously logically problematic (I am feeling polite this New Year’s morning!). Let’s talk a little about this subject today.

Modern measure theory studies, for example, the interval [0,1] of so-called real numbers. There are quite a lot of different ways of trying to conjure these real numbers into existence, and I have discussed some of these at length in many of my YouTube videos and also here in this blog: Dedekind cuts, Cauchy sequences of rationals, continued fractions, infinite decimals, or just via some axiomatic wishful thinking. In this list, and in what follows, I will suppress my natural inclination to put all dubious concepts in quotes. So don’t believe for a second that I buy most of the notions I am now going to talk about.

Measure theory texts are remarkably casual about defining and constructing the real numbers. Let’s just assume that they are there, shall we? Once we have the real numbers, measure theory asserts that it is meaningful to consider various infinite subsets of them, and to assign numbers that measure the extent of these various subsets, or at least some of them. The numbers that are assigned are also typically real numbers. The starting point of all this is familiar and reasonable: that a rational interval [a,b], where a,b are rational numbers and a is less than or equal to b, ought to have measure (b-a).

So measure theory is an elaborate scheme that attempts to extend this simple primary school intuition to the rather more convoluted, and logically problematic, arena of real numbers and their subsets. And it wants to do this without addressing, or even acknowledging, any of the serious logical problems that people (like me) have been pointing out for quite a long time.

If you open a book on modern measure theory, you will find a long chain of definitions and theorems: so-called. But what you will not find, along with a thorough discussion of the logical problems, is a wide range of illustrative examples. This is a theory that floats freely above the unpleasant constraint of exhibiting concrete examples.

Your typical student is of course not happy with this situation: how can she verify independently that the ideas actually have some tangible meaning? Young people are obliged to accept the theories they learn as undergraduates on the terms they are given, and as usual appeals to authority play a big role. And when they turn to the internet, as they do these days, they often find the same assumptions and lack of interest in specific examples and concrete computations.

Here, to illustrate, is the Example section of the Wikipedia entry on Measure, which is what you get when you search for Measure Theory (from Wikipedia at https://en.wikipedia.org/wiki/Measure_(mathematics) ):

Examples

___________________________

Some important measures are listed here.

Other ‘named’ measures used in various theories include: Borel measure, Jordan measure, ergodic measure, Euler measure, Gaussian measure, Baire measure, Radon measure, Young measure, and strong measure zero.

In physics an example of a measure is spatial distribution of mass (see e.g., gravity potential), or another non-negative extensive property, conserved (see conservation law for a list of these) or not. Negative values lead to signed measures, see “generalizations” below.

Liouville measure, known also as the natural volume form on a symplectic manifold, is useful in classical statistical and Hamiltonian mechanics.

Gibbs measure is widely used in statistical mechanics, often under the name canonical ensemble.

_______________________________

(Back to the regular channel) Now one of the serious problems with theories which float independently of examples is that it becomes harder to tell if we have overstepped logical bounds. This is a problem with many theories based on real numbers.

Here is a key illustration: modern measure theory asserts that the real numbers with which it is preoccupied actually fall into two types: the computable ones, and the uncomputable ones. Computable ones include the rational numbers, and all irrational numbers that (supposedly) arise as algebraic numbers (solutions of polynomial equations), definite integrals, infinite sums, infinite products, or values of transcendental functions; in fact any number whose digits can be generated by a computer program.

These include sqrt(2), ln 10, pi, e, sqrt(3+sqrt(5)), Euler’s constant gamma, values of the zeta function, gamma function, etc. etc. Every number that you will ever meet concretely in a mathematics course is a computable number. Any kind of decimal that is conjured up by some pattern, say 0.1101001000100001000001…, or even by some rule such as 0.a_1 a_2 a_3 … where a_i is 1 unless i is an odd perfect number, in which case a_i = 2, is a computable number.
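That last pattern really is algorithmic, and that is all "computable" means: there is a finite program that emits the digits one after another. Here is a minimal Python sketch (the function name is my own) generating the digits of 0.1101001000100001…:

```python
def pattern_digits(n):
    """First n digits of 0.1101001000100001..., formed by
    concatenating the blocks "1", "10", "100", "1000", ..."""
    digits = []
    zeros = 0  # number of zeros following the "1" in the current block
    while len(digits) < n:
        digits.append("1")
        digits.extend("0" * zeros)
        zeros += 1
    return "".join(digits[:n])

print(pattern_digits(16))  # -> 1101001000100001
```

Any finite prefix of the decimal is produced by a terminating run of this loop, which is exactly the sense in which such a number is concrete.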

And what is then an uncomputable real number?? Hmm.. let’s just say this rather quickly and then move on to something more interesting, okay? Right: an uncomputable real number is just a real number that is not computable.

Uhh.. such as…? Sorry, but there are no known examples. It is impossible to write down any such uncomputable number in a concrete fashion. And what do these uncomputable numbers do for us? Well, the short answer is: nothing. They are not used in practical applications, and even theoretically, they don’t gain us anything. But they are there, my friends—oh yes, they are there — because the measure theory texts tell us they are!

And the measure theory texts tell us even more: that the uncomputable real numbers in fact swamp the computable ones measure-theoretically. In the interval [0,1], the computable numbers have measure zero, while the uncomputable numbers have measure one.

Yes, you heard correctly, this is a bona-fide theorem of modern measure theory: the computable numbers in [0,1] have measure zero, while the uncomputable numbers in [0,1] have measure one!

Oh, sure. So according to modern probability theory, which is based on measure theory, the probability of picking a random real number in [0,1] and getting a computable one is zero. Yet no measure theorist can give us even one example of a single uncomputable real number.
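For the record, the "measure zero" side of this theorem rests on a simple covering computation, whose finite stages at least are completely explicit: cover the i-th number of any enumeration by an interval of length eps/2^(i+1), and the total length used stays below eps no matter how many terms are taken. A sketch in Python with exact rational arithmetic (the enumeration itself is left abstract):

```python
from fractions import Fraction

def total_cover_length(n, eps=Fraction(1, 100)):
    """Total length of intervals covering the first n points of any
    enumeration, the i-th point getting an interval of length eps/2^(i+1)."""
    return sum(eps / 2 ** (i + 1) for i in range(n))

total = total_cover_length(50)
print(total < Fraction(1, 100))  # True: the geometric series never reaches eps
```

The leap from these finite covers to a statement about "all" computable numbers at once is, of course, exactly the kind of infinite step under dispute here.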

This is modern pure mathematics going beyond parody. Future generations are going to shake their heads in disbelief that we happily swallowed this kind of thing without even a trace of resistance, or at least disbelief.

But this is 2016, and the start of a New Year! I hope you will join me in an exciting venture to expose some of the many logical blemishes of modern pure mathematics, and to propose some much better alternatives — theories that actually make sense. Tell your friends, spread the word, and let’s not be afraid of thinking differently. Happy New Year.

# Let alpha be a real number

PM (Pure Mathematician): Let alpha be a real number.

NJ (Me): What does that mean?

PM: Surely you are joking. What do you mean by such a question? Everyone uses this phrase all the time, probably you also.

NJ: I used to, but now I am not so sure anymore what it means. In fact I suspect it is nonsense. So I am asking you to clarify its meaning for me.

PM: No problem, then. It means that we are considering a real number, whose name is alpha. For example alpha = 438.0457897416622849… .

NJ: Is that a real number, or just a few decimal digits followed by three dots?

PM: It is a real number.

NJ: So a real number is a bunch of decimal digits followed by three dots.

PM: I think you know full well what a real number is, Norman. You are playing devil’s advocate. Officially a real number is an equivalence class of Cauchy sequences of rational numbers. The above decimal representation was just a shorthand.

NJ: So the real number alpha you informally described above is actually the following: {{32/141,13/55234,-444123/9857,…},{-62666626/43,49985424243/2,7874/3347,…},{4234/555,7/3,-424/55,…},…}?

PM: Well obviously that equivalence class of Cauchy sequences you started writing here is just a random collection of lists of rational numbers you have dreamed up. It has nothing to do with the real number alpha I am considering.

But now that I think about it for a minute, I suppose you are exploiting the fact that Cauchy sequences of rationals can be arbitrarily altered in a finite number of places without changing their limits, so you could argue that yes, my real number does look like that, although naturally alpha has a lot more information.

PM: If you like.

NJ: What if I don’t like?

PM: Look, there is no use quibbling about definitions. Modern pure mathematicians need real numbers for all sorts of things, not just for analysis, but also for modern geometry, algebra, topology, you name it. Real numbers are not going away, no matter what kind of spurious objections you come up with. So why don’t you spend your time more fruitfully, and write some papers?

NJ: Have you heard of Wittgenstein’s objections to the infinite shenanigans of modern pure mathematics?

PM: No, but I think I am about to.

NJ: Wittgenstein claimed that modern pure mathematicians were trying to have their cake and eat it too when it came to specifying infinite processes, bouncing back and forth between believing that infinite sequences could be described by algorithms and believing that they could be defined by choice. Algorithms are the stuff of computers and programming, while choice is the stuff of oracles and slimy intergalactic super-octopi. Which camp are you in? Is your real number alpha given by some finite code, or by the infinite musings of a god-like creature?

PM: I think you are trying to ensnare me. You want me to say that I am thinking about decimal digits given by a program, but then you are going to say that that repudiates the Axiom of Choice. I know your strategy, you know! Don’t think you are the first to try to weaken our resolve, or our faith in the Axioms. Mathematics has to start somewhere, after all.

PM: Sorry, my laundry is done now, and then I have to finish my latest paper on Dohomological Q-theory over twisted holographic pseudo-morphoids. Cheers!

NJ: Cheers. Don’t forget to take alpha with you.

# The Banach-Tarski paradox: is it nonsense?

How can you tell when your theory has overstepped the bounds of reasonableness? How about when you start telling people your “facts” and their faces register incredulity and disbelief? That is the response of most reasonable people when they hear about the “Banach-Tarski paradox”.

From Wikipedia:

The Banach–Tarski paradox states that a ball in the ordinary Euclidean space can be doubled using only the operations of partitioning into subsets, replacing a set with a congruent set, and reassembly.

The “theorem” is commonly phrased in terms of two solid balls, one twice the radius of the other, in which case it asserts that we can subdivide the smaller ball into a small number (usually 5) of disjoint subsets, perform rigid motions (combinations of translations and rotations) on these sets, and obtain a partition of the larger ball, or alternatively of two balls the same size as the original. It is to be emphasized that these are cut and paste congruences! This was first stated by S. Banach and A. Tarski in 1924, building on earlier work of Vitali and Hausdorff.

This “theorem” contradicts common sense. In real life we know that it is not easy to get something from nothing. We cannot take one dollar, subtly rearrange it in some clever fashion, and end up with two dollars. It doesn’t work.

That is why most ordinary people, when they hear about this kind of result, are at first disbelieving, and then, when told that the “proof” involves “free groups of rotations” and the “Axiom of Choice”, and that the resulting sets are in fact impossible to write down explicitly, just shake their heads. Those pure mathematicians: boy they are smart, but what arcane things they get up to!

This theorem is highly dubious. It really ought to be taken with a grain of salt, or at least generate some controversy. This kind of logical legerdemain probably should not go unchallenged for decades.

The logical flaws involved in the usual argument are actually quite numerous. First there are confusions about what “free groups” are and how we specify them. The definition of a finite group and the definition of an “infinite group” are vastly different kettles of fish. An underlying theory of infinite sets is assumed, but as usual a coherent theory of such infinite sets is missing.

Then there is a claim that free groups can be found inside the group of rotations of three dimensional space. This usually involves some discussion involving real numbers and irrational rotations. All the usual difficulties with real numbers that students of my YouTube series MathFoundations will be familiar with immediately bear down.

And then finally there is an appeal to the Axiom of Choice, from the ZFC axiomfest, which claims that one can make an infinite number of independent choices. But this contradicts the Law of (Logical) Honesty that I put forward several days ago. I remind you that this was the idea:

Don’t pretend that you can do something that you can’t.

You cannot make an infinite number of independent choices. Cannot. Impossible. Never could. Never will be able to. No amount of practice will help. Whistling while you do it won’t make it happen. You cannot make an infinite number of independent choices.

So we ought not to pretend that we can; that is what the Law of (Logical) Honesty asserts. We can’t just say: and now let’s suppose that we can make an infinite number of independent choices. That is just an empty phrase if we cannot support it in ways that people can observe and validate.

The actual “sets” involved in the case of transforming a ball of radius 1 to a ball of radius 2 are not sets that one can write down in any meaningful way. They exist only in a kind of no-man's-land of speculative thinking, entirely dependent on the set-theoretic assumptions that prop them up. Ask for a concrete example, with explicit specifications, and you only get smiles and shrugs.

And so the Banach-Tarski nonsense has no practical application. There is no corresponding finite version that helps us do anything useful, at least none that I know of. It is something like a modern mathematical fairy tale.

Shouldn’t we be discussing this kind of thing more vigorously, here in pure mathematics?

# The Alexander Horned Sphere: is it nonsense?

Modern topology is full of contentious issues, but no one seems to take any notice. There are many weird, even absurd, “constructions” and “arguments” which really ought to generate vigorous debate. People should have differences of opinion. Alternatives ought to be floated. The logical structure of the entire enterprise ought to be called into question.

But not in these days of conformity and meekness, amongst pure mathematicians anyway. Students are indoctrinated, not by force of logic, clarity of examples and the compelling force of rigorous computations, but by being browbeaten into thinking that if they confess to “not understanding”, then they are tacitly admitting failure. Why don’t you understand? Don’t you have what it takes to be a professional pure mathematician?

Let’s have a historically interesting example: the so-called “Alexander Horned Sphere”. This is supposedly an example of a “topological space” which is “homeomorphic”… actually do you think I could get away with not putting everything in quotes here? Pretty well everything that I am now going to be talking about ought to be in quotes, okay?

Right, so as I was saying, the Alexander Horned Sphere is supposedly a topological space which is homeomorphic to a two-dimensional sphere. It was first constructed (big quotation marks missing on this one!) by J. W. Alexander in 1924, who was interested in the question of whether the complement of a simply connected surface could fail to be simply connected.

Simply-connected means that any loop in the space can be continuously contracted to a point. The two-dimensional sphere is simply connected, but the one-dimensional sphere (a circle) is not. Alexander’s weird construction gives a surface which is topologically a two-sphere, but its complement is like the complement of a torus: if we take a loop around the main body of the sphere, then we cannot contract it to a point. And why not? Because there is a nested sequence, an infinitely nested sequence of entanglements that our contracting loop can’t get around.

Here is a way of imagining what is (kind of) going on. Put your two arms in front of you, so that your hands are close. Now with both hands, make a near circle with thumb and index finger, almost touching but not quite, and link these two almost-loops. Now imagine each of your fingers/thumbs as being like a little arm, with a new finger/thumb pair growing from the end of each, the new pairs also almost enclosing each other. And keep doing this, as the diagram suggests better than I can explain.

At any finite stage, none of the little almost-loops is quite closed, so we could still untangle a string that was looped around, say, one of your arms, just by sliding it off your arm, past the finger and thumb, around the other arm's finger and thumb, and also navigating around all the little fingers and thumbs that you have grown, something like Swamp Thing.
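The bookkeeping behind "any finite stage" is easy to make precise: starting from one linked pair, each stage replaces every finger and thumb with a new almost-closed pair, so the count of open pairs doubles each time but always remains finite. A toy count in Python (my own formulation):

```python
def open_pairs(stage):
    """Number of almost-closed finger/thumb pairs after `stage` doublings:
    one pair to start, and each pair sprouts two new pairs at the next stage."""
    return 2 ** stage

counts = [open_pairs(n) for n in range(6)]
print(counts)  # [1, 2, 4, 8, 16, 32]
```

Every one of these pairs is open, which is why a loop can always be slid free in finitely many moves; only the leap to all infinitely many stages at once produces the alleged obstruction.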

Yes… but Alexander said: “Let’s go to infinity!” And most of the topologists chorused: “Yes, let’s go to infinity!” And most of their students dutifully repeated: “Yes, let’s go to infinity, … I guess!” And lo… there was the Alexander Horned Sphere!

But of course, it doesn’t really make sense, does it? Because it blatantly contravenes a core Law of Logic, in fact the one we enunciated two days ago, called the Law of (Logical) Honesty:

Don’t pretend that you can do something that you can’t.

The construction doesn’t work because it requires us to grow, or create, or construct, an infinite number of pairs of littler and littler fingers, and you just can’t do that!! All that we can logically contemplate is a finite version, as shown actually in the above diagram. And for any finite version, the supposed property that Alexander thought he constructed disintegrates.

The Alexander Horned Sphere: but one example of the questionable constructs that abound in modern pure mathematics.

# A new logical principle

We are supposed to have a very clear idea about the “laws of logic”. For example, if all men are mortal, and Socrates is a man, then Socrates is mortal.

Are there in fact such things as the “laws of logic”? While we can all agree that certain rules of inference, like the example above, are reasonably evident, there are a whole lot of more ambiguous situations where clear logical rules are hard to come by, and things amount more to clever arguments, weight of public opinion and the authority of people involved.

It is not dissimilar to the situation with moral codes, where we can all agree that certain rules are self-evident in abstract ideal situations, but when we look at real-life examples, we are often faced with moral dilemmas characterized by ambiguity rather than certainty. One should not kill. Okay, fair enough. But what about when someone threatens one’s loved ones? What moral law guides us as to when we ought to flip from passivity to aggression?

Similar kinds of logical ambiguities surface all the time in mathematics with the modern reliance on axioms, limits, infinite processes, real numbers etc.

Let’s consider here the situation with “infinity”. Most modern pure mathematicians believe, following Bolzano, Cantor and Dedekind, that this is a well-defined concept, and indeed that it rightfully plays a major role in advanced mathematics. I, on the other hand, claim that it is a highly dubious notion; in fact not properly defined; unsupported by explicit examples; the source of innumerable controversies, paradoxes and indeed outright errors; and that mathematics can happily do entirely without it. So we have a major difference of opinion. I can give plenty of reasons and evidence, and have done so, to support my position. By what rules of logic is someone going to convince me of the errors of my ways?

Appeals to authority? That won’t wash. A poll to decide things democratically? No, I will not accept public opinion over clear thinking.

Perhaps they could invoke the Axiom of Infinity from the ZFC axiomfest! According to Wikipedia this Axiom is:

$\exists X \left[\, \varnothing \in X \land \forall y\, (y \in X \Rightarrow S(y) \in X) \right].$

In other words, more or less: an infinite set exists. But I am just going to laugh at that. This is supposed to be mathematics, not some adolescent attempt to create god-like structures by stringing words, or symbols, together.

As a counter to such nonsense, I would like to propose my own new logical principle. It is simple and sweet:

Don’t pretend that you can do something that you can’t.

This principle asks us essentially to be honest. To not get carried away with flights of fancy. To keep our feet firmly planted in reality.

According to this principle, the following questions are invalid logically:

If you could jump to the moon, then would it hurt when you landed?

If you could live forever, what would be your greatest hope?

If you could add up all the natural numbers 1+2+3+4+…, what would you get?

As a consequence of my new logical principle, we are no longer allowed to entertain the possibility of “doing an infinite number of things”. No “adding up an infinite number of numbers”. No creating data structures by “inserting an infinite number” of objects. No “letting time go to infinity and seeing what happens”.

Instead, we might add up 10^6 numbers, or insert a trillion objects into a data set, or let time equal t=883,244,536,000. In my logical universe, computations finish. Statements are supported by explicit, complete, examples. The results of arithmetical operations are concrete numbers that everyone can look at in their entirety. Mathematical statements and equations do not trail “off to infinity” or “converge somewhere beyond the horizon”, or invoke mystical aspects of the physical universe that may or may not exist.

In my view, mathematics ought to be supported by computations that can be made on our computers.
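A minimal example of what is meant here, a computation that finishes, with a result anyone can inspect in its entirety:

```python
# Adding up 10^6 numbers: a computation that terminates, with an exact
# answer that can be checked against the closed form n*(n+1)/2.
n = 10 ** 6
total = sum(range(1, n + 1))
assert total == n * (n + 1) // 2
print(total)  # 500000500000
```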

As a consequence of my way of thinking, the following is also a logically invalid question:

If you could add up all the rational numbers 1/1+1/2+1/3+1/4+…, what would you get?

It is nonsense because you cannot add up all those numbers. And why can you not do that? It is not because the sum grows without bound (admittedly not in such an obvious way as in the previous example), but rather because you cannot do an infinite number of things.
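The parenthetical aside deserves one concrete check: the harmonic partial sums do grow without bound, just very slowly. The classical doubling bound H(2^k) ≥ 1 + k/2 is a family of finite, verifiable statements, and each instance can be computed exactly:

```python
from fractions import Fraction

def harmonic(n):
    """Exact partial sum 1/1 + 1/2 + ... + 1/n as a rational number."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# Each instance of the doubling bound H(2^k) >= 1 + k/2 is a finite check.
for k in range(1, 11):
    assert harmonic(2 ** k) >= 1 + Fraction(k, 2)
print(float(harmonic(1024)))  # about 7.5, after a thousand terms
```

Every such check terminates; no infinite act of summation is invoked anywhere.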

As a consequence of my way of thinking, the following is also a logically invalid question:

If you could add up all the rational numbers 1/1^2+1/2^2+1/3^2+1/4^2+…, what would you get?

And the reason is exactly the same. It is because we cannot perform an infinite number of arithmetical operations.

Now in this case someone may argue: wait Norman – this case is different! Here the sum is “converging” to something (to “pi^2/6” according to Euler). But my response is: no, the sum does not make sense, because the actual act of adding up an infinite number of terms, even if the partial sums seems to be heading somewhere, is not something that we can do.

And this is not just a dogmatic or religious position on my part. It is an observation about the world in which we live. You can try it for yourself. To give you a head start, here is the sum of the first one hundred terms of the above series:

(1589508694133037873112297928517553859702383498543709859889432834803818131090369901)/(972186144434381030589657976672623144161975583995746241782720354705517986165248000)

Please have a go, by adding more and more terms of the series: the next one is 1/101^2. You will find that no matter how much determination, computing power and time you have, you will not be able to add up all those numbers. Try it, and see! And the idea that you can do this in a decimal system will very likely become increasingly dubious to you as you proceed. There is only one way to sum this series, and that is using rational number arithmetic, and that only up to a certain point. You can’t escape the framework of rational number arithmetic in which the question is given. Try it, and see if what I say is true!
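For anyone taking up the invitation, the exact partial sums can be generated mechanically with rational arithmetic; only the number of terms is a parameter in this sketch, and each run is itself a finite computation:

```python
from fractions import Fraction

def partial_sum(n):
    """Exact rational value of 1/1^2 + 1/2^2 + ... + 1/n^2."""
    return sum(Fraction(1, k * k) for k in range(1, n + 1))

s = partial_sum(100)
print(s.numerator, "/", s.denominator)  # compare with the fraction quoted above
print(float(s))  # about 1.63498, while pi^2/6 = 1.64493...
```

Pushing n higher makes the numerators and denominators grow rapidly, which is the point: at no finite stage does the computation leave the framework of rational arithmetic.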

There are many further consequences of this principle, and we will be exploring some of them in future blog entries. Clearly this new logical law ought to have a name. Let’s call it the law of (logical) honesty. Here it is again:

Don’t pretend that you can do something that you can’t.

As Socrates might have said, it’s just simple logic.