Space-filling curves do not exist

When does ideology trump common sense?  This question is very relevant to the sad situation with modern pure mathematics, which is in a dire logical mess. All manner of dubious concepts and arguments are floating around out there, sustained by our fervent desire that the limiting operations underlying modern analysis actually make sense. We must believe — we will believe!

And there is hardly a more obviously suspicious case than that of space-filling curves. These are purportedly one-dimensional continuous curves that pass through every (real) point in the interior of a unit square.

But this contradicts ordinary common sense. It imbues mathematics with an air of disconnection from reality that lay people find disconcerting, just like the Banach-Tarski Paradox nonsense that I talked about a few posts back.

In mathematics, dimension is an important concept: a line, or more generally a curve, is one-dimensional; a square in the plane, or more generally a surface, is two-dimensional; and of course we appear to live in a three-dimensional physical space. But from the 17th century onward, mathematicians began to realize that the correct definitions of “curve” and “surface” were in fact much more subtle and logically problematic than they at first appeared, and that “dimension” was not so easy to pin down either.

In 1890 a new kind of phenomenon was introduced which cast additional doubt on our understanding of these concepts. This was the space-filling curve of Peano, which ostensibly fills up all of a square without crossing itself. It was a contentious “construction” at the time, resting on the hotly debated new ideas of Georg Cantor on infinite sets and processes. But the influential German mathematician David Hilbert rose to defend it, and so 20th-century pure mathematicians generally fell into line; today these curves are considered unremarkable, just another curious aspect of the modern mathematical landscape.

But do these curves really exist? More fundamentally, are they even well defined? Or are we talking about some kind of mathematical nonsense here?

While Peano’s original article did not contain a diagram, Hilbert in the following year published a version with a picture, essentially the one reproduced below, so we will discuss this so-called space-filling curve of Hilbert. The curve is created by iterating a certain process indefinitely. Along the way, we get explicit, finitely prescribed, discrete curves that twist and turn around the square in a predictable pattern. Then “in the limit”, as the analysts like to say—as we “go to infinity”—these concrete zig-zag paths turn into a continuous path that supposedly passes through every real point in the interior of the square exactly once. Does this argument really work??

The pattern can be discerned from the sequence of pictures below. Consider the square as being divided into 4 equal squares. At the first stage we join the centres of these four squares with line segments, moving say from the bottom left to the top left, then to the top right, and then to the bottom right. This gives us a U shape opening downward, which we call U_1. At the next stage, we join four such U shapes, one in each of the four sub-squares of the original: the first opens to the left, the next two open down, and the last opens to the right, and they are linked with segments to form a new shape U_2, as shown in the second diagram. In the third diagram, we put four smaller U_2 shapes together, oriented in the same fashion as at the previous stage, to create a new curve U_3. And then we carry on in the same way: shrink whatever curve U_n we have just produced, arrange four copies in the sub-squares oriented as before, and link them by segments to get the next curve U_{n+1}.

640px-Hilbert_curve.svg

“Hilbert curve”. Licensed under CC BY-SA 3.0 via Wikipedia – https://en.wikipedia.org/wiki/File:Hilbert_curve.svg#/media/File:Hilbert_curve.svg

These are what we might call Hilbert curves, and they are pleasant and useful objects. Computer scientists sometimes use them to store data in a two-dimensional array in a non-obvious way, and they are also used in image processing. Notice that at this point all these curves are purely rational objects: no real number shenanigans are necessary to either define or construct them. Peano and Hilbert made a real contribution to mathematics in introducing these finite curves!
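In fact these finite curves are so concrete that a few lines of code generate them exactly. Here is a sketch in Python (my own illustrative code, not Peano’s or Hilbert’s) that produces the vertices of U_n by the four-copies recursion described above, using only rational arithmetic — no real numbers required:

```python
from fractions import Fraction

def hilbert_points(n):
    """Vertices of the n-th Hilbert curve U_n: the centres of the 4^n
    sub-squares of side 1/2^n, visited in curve order, as exact rationals."""
    def rec(x0, y0, xi, xj, yi, yj, level):
        # (x0, y0): corner of the current sub-square; (xi, xj) and (yi, yj):
        # the two vectors spanning its sides.  At level 0, emit the centre.
        if level == 0:
            yield (x0 + (xi + yi) / 2, y0 + (xj + yj) / 2)
            return
        # Four half-size copies, rotated/reflected so they link end to end.
        yield from rec(x0, y0, yi / 2, yj / 2, xi / 2, xj / 2, level - 1)
        yield from rec(x0 + xi / 2, y0 + xj / 2,
                       xi / 2, xj / 2, yi / 2, yj / 2, level - 1)
        yield from rec(x0 + xi / 2 + yi / 2, y0 + xj / 2 + yj / 2,
                       xi / 2, xj / 2, yi / 2, yj / 2, level - 1)
        yield from rec(x0 + xi / 2 + yi, y0 + xj / 2 + yj,
                       -yi / 2, -yj / 2, -xi / 2, -xj / 2, level - 1)
    zero, one = Fraction(0), Fraction(1)
    # Initial frame chosen so U_1 runs bottom-left, top-left, top-right, bottom-right.
    return list(rec(zero, zero, zero, one, one, zero, n))
```

Each U_n has exactly 4^n vertices, and consecutive vertices are centres of adjacent sub-squares, so every segment has length exactly 1/2^n. All perfectly finite and verifiable.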

And now we get to the critical point, where Hilbert, following Peano and ultimately Cantor, went beyond the bounds of reasonableness. He postulated that we could carry on this inductive process of producing ever more and more refined and convoluted curves to infinity. Once we have arrived at this infinity, we are supposedly in possession of a “curve” U_{infinity} with remarkable (read unbelievable) properties. [Naturally all of this requires the usual belief system of “real numbers”, which I suppose you know by now is a chimera.]

The “infinite Hilbert curve” U_{infinity} is supposedly continuous, but nowhere differentiable. It supposedly passes through every point of the interior of the square: every point [x,y], where x and y are “real numbers”, is on this curve somewhere. Supposedly the curve U_{infinity} is “parametrized” by a “real number” t in the interval [0,1]. So given a real number such as

t=0.52897750910859340798571569247120345759873492374566519237492742938775…

we get a point U_{infinity}(t)=

[0.68909814147239785423401979874234…,0.36799574952335879124312358098423435…]

in the unit square [0,1] x [0,1].

(Legal Disclaimer: these real numbers are for illustration purposes only and do not necessarily correspond to reality in any fashion whatsoever. In particular we make no comment on the meaning of the three dot sequences that appear. Perhaps there are oracles or slimy galactic super-octopuses responsible for their generation, perhaps computer programs. You may interpret as you like.)

The infinite Hilbert curve U_{infinity} cannot be drawn. Its “construction” amounts to an imaginary thought process akin to an uncountably infinite army of pointillist painters, each spending an eternity creating their own individual minute point contributions as infinite limits of sequences of rational dots. Unlike those actual, computable and constructible curves U_n, the fantasy curve U_{infinity} has no practical application. How could it, since it does not exist?

Or we could just apply the (surely by now well-known) Law of (Logical) Honesty, formulated on this blog last year, which states:

Don’t pretend that you can do something that you can’t.

While you are free to create curves U_n even for very large n if you have the patience, resources and time, it is both logically and morally wrong to assert that you can continue to do this for all natural numbers, with a legitimate mathematical curve as the end product. It is just not true! You cannot do this. Stop pretending, analysts!

But in modern pure mathematics, we believe everything we are told. Sure, let’s “go to infinity”, even if what we get is obvious nonsense.

Conceptual versus rhetorical definitions

Here are two definitions, both taken from the internet. Definition 1: A dog is a domesticated carnivorous mammal that typically has a long snout, an acute sense of smell, non-retractile claws, and a barking, howling, or whining voice. Definition 2: An encumbered asset is one that is currently being used as security or collateral for a loan.

These two definitions illustrate an important distinction which ought to be more widely appreciated: that some definitions bring into being a new concept, while others merely package conveniently and concisely what we already know.

Each of us from an early age understands what a dog is, by having many of them pointed out to us. We learn from experience that there are many different types of dog, but they mostly share some common characteristics that generally separate them from other animals, say cats. The definition of a dog given above is only a summary, short and sweet, of familiar properties of the animal.

Most of us know what an asset is. But the adjective “encumbered”, when applied to assets, is not one that is familiar to us. At some point in the history of finance someone thought up this particular concept and needed a word for it. How about encumbered? This might have been one of several terms proposed: borrowing a word from English with a related but different meaning, and giving it here a precise new meaning.

Let’s give a name to this distinction that I am trying to draw here. Let’s say that a definition that summarizes more concisely, or accurately, something that we already know is a rhetorical definition. Let’s also say that a definition that creates a new kind of object or concept by bringing together previously unconnected properties is a conceptual definition.

If I ask you what love is, you will draw upon your experience with life and the human condition, and give me a list of characteristics that capture love in your view. Almost everyone would have an opinion on the worth of your definition, because we all have prior ideas about what love is, and will judge whether your definition properly captures what we already know from our a priori experience. This kind of definition is largely rhetorical.

If I ask you what a perfect number is, and you are a good mathematics student, you will tell me that it is a natural number which is equal to the sum of those of its divisors which are less than itself. So 6 is a perfect number since 6=1+2+3, and 28 is a perfect number since 28=1+2+4+7+14. This is not the usual colloquial meaning of perfect: we are simply hijacking the word to bring into focus a formerly unconsidered notion (in this case the hijacking was done by the ancient Greeks). This is a conceptual definition.
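Notice how the conceptual definition is precise enough to hand straight to a computer. A deliberately naive sketch in Python (illustrative only) recovers the classical examples by brute force:

```python
def divisor_sum(n):
    """Sum of the divisors of n that are less than n itself."""
    total = 1 if n > 1 else 0          # 1 divides every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:            # avoid double-counting a square root
                total += n // d
        d += 1
    return total

def is_perfect(n):
    """A natural number equal to the sum of its smaller divisors."""
    return n > 1 and divisor_sum(n) == n

small_perfect_numbers = [n for n in range(1, 10000) if is_perfect(n)]
# small_perfect_numbers == [6, 28, 496, 8128]
```

No prior acquaintance with perfect numbers is needed to write or check this; the definition does all the work. That is the mark of a conceptual definition.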

In mathematics, we prefer conceptual definitions to rhetorical ones. When we define a concept, we want our statement of that concept to be so clear and precise that it invokes the same notion to all who hear it, even those who are unfamiliar with the idea in question. Prior experience is not required to understand conceptual definitions, except to the extent of having mastered the various technical terms involved as constituents of the definition.

We do not want a situation in which, to properly understand a term, someone needs some prior, perhaps implicit, understanding of that very term. If I tell you that a number officially is something used for counting or measurement, you are probably not happy. While this kind of loose description is fine for everyday usage, it is not adequate in mathematics. Such a rhetorical definition is ambiguous, because it draws upon your prior loose experience with counting and measuring, and different people may draw the boundaries of the definition in different places. In mathematics we want to create fences around our concepts; our definitions ought to be precise, visible and unchanging.

If I tell you that a function is continuous if it varies without abrupt gaps or fractures, then you recognize that I am not stepping up to the plate, mathematically speaking. This is a rhetorical definition: it relies on some prior understanding of notions that are loosely intertwined with the very concept we are attempting to frame.

And now we come to the painful reality: modern mathematics is full of rhetorical definitions. Of concepts such as: number, function, variable, set, sequence, real number, formula, statement, topological space, continuity, variety, manifold, group, field, ring, and category. These notions in modern mathematics rest on definitions that are mostly unintelligible to the uninitiated. These definitions implicitly assume familiarity with the topic in question.

The standard treatment in undergraduate courses shows you lots of examples first. Then, after enough of these have been digested, you get a “definition” that captures enough aspects of the concept that we feel it characterises the examples we have come to know. The cumulative effect is that you have been led to believe you “know” what the concept is, but the reality is something else. This becomes clear quickly when you are presented with non-standard examples that fall outside the comfortable bounds of the textbooks.

This is a big barrier to the dissemination of mathematical knowledge. While modern books and articles give the appearance of precision and completeness, this is often a ruse: implicitly the reader is assumed to have gained some experience with the topic from another source. There is a big difference between laying out a topic and merely summarizing it. An excellent example is the treatment of real numbers in introductory Calculus or Analysis texts. Have a look at how cavalierly these books gloss over the “definition”, essentially assuming that you already know what real numbers supposedly are. Didn’t you learn that way back in high school?

Understanding the rhetorical aspects of fundamental concepts in pure mathematics goes a long way to explaining why the subject is beset with logical problems. Sigh. I guess I have some work to do explaining this. But you can do some of it yourself by opening a textbook and looking up one of these terms. Ask yourself: without any examples, pictures or further explanations, does this definition stand up on its own two legs? If so, then it can claim to be a logical conceptual definition. Otherwise it is more likely a dubious rhetorical definition.

 

The Green-Tao theorem on arithmetical sequences of primes: is it true?

In 2004 Ben Green and Terence Tao ostensibly proved a result which is now called the Green-Tao theorem. It asserts that there are arbitrarily long arithmetical sequences of prime numbers.

That is, given a natural number n, there is a sequence of prime numbers of the form p+mk, k=0,1,…,n-1 where p and m are natural numbers. For example 5, 11, 17, 23, 29 is a sequence of 5 primes in arithmetical progression with difference m=6, while 199, 409, 619, 829, 1039, 1249, 1459, 1669, 1879, 2089 is a sequence of 10 primes in arithmetical progression, with difference m=210.
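These finite claims, at least, anyone can check directly. A quick Python sketch (illustrative, using simple trial division):

```python
def is_prime(n):
    """Trial division: perfectly adequate for small numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_progression(p, m, length):
    """The arithmetical sequence p, p+m, ..., p+(length-1)m,
    together with a flag saying whether every term is prime."""
    terms = [p + m * k for k in range(length)]
    return terms, all(is_prime(t) for t in terms)

# prime_progression(5, 6, 5)      -> every term prime
# prime_progression(199, 210, 10) -> every term prime
# prime_progression(5, 6, 6)      -> fails: 35 = 5 * 7 is not prime
```

Note that the first progression cannot be extended: the very next term, 35, is composite. Each such progression is a finite, fully verifiable fact.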

Up to now, the longest such arithmetical sequence of primes that I know about was found in 2010 by Benoît Perichon: it is

43,142,746,595,714,191 + (23,681,770)(223,092,870)k, for k = 0 to 25.
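Even this record-holding progression is a finitely checkable statement. Here is a sketch using a deterministic Miller-Rabin test (the first twelve primes are known to be sufficient witnesses for numbers below roughly 3.3 x 10^24, which comfortably covers these 18-digit terms):

```python
def is_prime_mr(n):
    """Miller-Rabin, deterministic for n < 3.3 * 10**24 with these bases."""
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for p in bases:
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False              # a witnesses that n is composite
    return True

start = 43142746595714191
diff = 23681770 * 223092870           # 23,681,770 times the primorial 223,092,870
ap26 = [start + diff * k for k in range(26)]
all_prime = all(is_prime_mr(t) for t in ap26)
# all_prime == True
```

So Perichon’s progression of 26 primes stands on entirely concrete, finite ground. Nothing remotely like this exists for progressions of truly enormous lengths.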

The proof of Green and Tao is clearly a tour-de-force of modern analysis and number theory. It relies on a result called Szemerédi’s theorem, along with other results and techniques from analytic number theory, combinatorics, harmonic analysis and ergodic theory. Measure theory naturally plays an important role.

Both Green and Tao are brilliant mathematicians, and Terence Tao is a Fields Medal winner. Terence is also originally Australian, and spent half a year at UNSW some time ago, where I had the pleasure of having some interesting chats over coffee with him.

Is the Green-Tao theorem true? This is actually quite an interesting question. The official proof was published in Annals of Math. 167 (2008), 481-547, and has been intensively studied by dozens of experts. No serious problems with the argument have been found, and it is now acknowledged that the result is firmly established. By the experts.

But is the Green-Tao theorem true? That depends not only on whether the arguments hang together logically when viewed from the top down, but also crucially on whether the underlying assumptions that underpin the theories in which those arguments take place are correct. It is here that one must accept that problems might arise.

So I am not suggesting that any particular argument of the Green-Tao paper is faulty. But there is the more unpleasant possibility that the whole edifice of modern analysis on which it depends is logically compromised, and that this theorem is but one of hundreds in the modern literature that actually don’t work logically if one descends right down to the fundamental level.

Let me state my position, which is rather a minority view. I don’t believe in real numbers. The current definitions of real numbers are logically invalid in my opinion. I am of the view that the arithmetic of such real numbers also has not been established properly.

I do not believe that the concept of a set has been established, and so consequently for me any discussion involving infinite sets is logically compromised. I do not accept that there is a completed set of natural numbers. I find fault with analysts’ invocation of limits, as often this brazenly assumes that one is able to perform an infinite number of operations, which I deny. I don’t believe that transcendental functions currently have a correct formulation, and I reject modern topology’s reliance on infinite sets of infinite sets to establish continuity. I believe that analysts are not being upfront in their approach to Wittgenstein’s distinction between choice and algorithms when they discuss infinite processes.

Consequently I find most of the theorems of Measure Theory meaningless. The usual arguments that fill the analysis journals are to me but possible precursors to a more rigorous analysis that may or may not be established some time in the future.

Clearly I have big problems.

But as a logical consequence of my position, I cannot accept the argument of the Green-Tao theorem, because I do not share the belief in the underlying assumptions that modern experts in analysis have.

But there is another reason why I do not accept the Green-Tao theorem, that does not depend on a critical analysis of their proof. I do not accept the Green-Tao theorem because I am sure that it is not true. I do not believe that there are arbitrarily long arithmetical progressions of prime numbers.

Let me be more specific. Consider the number z=10^10^10^10^10^10^10^10^10^10+23 that appeared in my debate last year with James Franklin, “Infinity: does it exist?”

My Claim: There is no arithmetical sequence of primes of length z.

This claim is to be distinguished from the argument that such a progression exists, but it would be just too hard for us to find it. My position is not based on what our computers can or cannot do. Rather, I assert that there is no such progression of prime numbers. Never was, never will be.

I do not have a proof of this claim, but I have a very good argument for it. I am more than 99.99% sure that this argument is correct. For me, the Green-Tao argument, powerful and impressive though it is, would be better rephrased in a more limited and precise way.

I do not doubt that with some considerable additional work, they, or others, might be able to reframe the statement and argument to be independent of all infinite considerations, real number musings, and dubious measure-theoretic arguments. Then some true bounds on the extent and validity of the result might be established. That would be a lot of effort, but it might then be logically correct, from the ground up.

Uncomputable decimals and measure theory: is it nonsense?

Modern Measure Theory has something of a glitch. It asserts, as a main result, something which is rather obviously logically problematic. (I am feeling polite this New Year’s morning!) Let’s talk a little about this subject today.

Modern measure theory studies, for example, the interval [0,1] of so-called real numbers. There are quite a lot of different ways of trying to conjure these real numbers into existence, and I have discussed some of these at length in many of my YouTube videos and also here in this blog: Dedekind cuts, Cauchy sequences of rationals, continued fractions, infinite decimals, or just via some axiomatic wishful thinking. In this list, and in what follows, I will suppress my natural inclination to put all dubious concepts in quotes. So don’t believe for a second that I buy most of the notions I am now going to talk about.

Measure theory texts are remarkably casual about defining and constructing the real numbers. Let’s just assume that they are there, shall we? Once we have the real numbers, measure theory asserts that it is meaningful to consider various infinite subsets of them, and to assign numbers that measure the extent of these various subsets, or at least some of them. The numbers that are assigned are also typically real numbers. The starting point of all this is familiar and reasonable: that a rational interval [a,b], where a,b are rational numbers and a is less than or equal to b, ought to have measure (b-a).

So measure theory is an elaborate scheme that attempts to extend this simple primary school intuition to the rather more convoluted, and logically problematic, arena of real numbers and their subsets. And it wants to do this without addressing, or even acknowledging, any of the serious logical problems that people (like me) have been pointing out for quite a long time.

If you open a book on modern measure theory, you will find a long chain of (so-called) definitions and theorems. But what you will not find, along with a thorough discussion of the logical problems, is a wide range of illustrative examples. This is a theory that floats freely above the unpleasant constraint of exhibiting concrete examples.

Your typical student is of course not happy with this situation: how can she verify independently that the ideas actually have some tangible meaning? Young people are obliged to accept the theories they learn as undergraduates on the terms they are given, and as usual appeals to authority play a big role. And when they turn to the internet, as they do these days, they often find the same assumptions and lack of interest in specific examples and concrete computations.

Here, to illustrate, is the Example section of the Wikipedia entry on Measure, which is what you get when you search for Measure Theory (from Wikipedia at https://en.wikipedia.org/wiki/Measure_(mathematics) ):

Examples 

___________________________

Some important measures are listed here.

Other ‘named’ measures used in various theories include: Borel measure, Jordan measure, ergodic measure, Euler measure, Gaussian measure, Baire measure, Radon measure, Young measure, and strong measure zero.

In physics an example of a measure is the spatial distribution of mass (see e.g. gravity potential), or another non-negative extensive property, conserved (see conservation law for a list of these) or not. Negative values lead to signed measures, see “generalizations” below.

Liouville measure, known also as the natural volume form on a symplectic manifold, is useful in classical statistical and Hamiltonian mechanics.

Gibbs measure is widely used in statistical mechanics, often under the name canonical ensemble.

_______________________________

(Back to the regular channel) Now one of the serious problems with theories which float independent of examples is that it becomes harder to tell if we have overstepped logical bounds. This is a problem with many theories based on real numbers.

Here is a key illustration: modern measure theory asserts that the real numbers with which it is preoccupied actually fall into two types: the computable ones and the uncomputable ones. Computable ones include all rational numbers, and all irrational numbers that (supposedly) arise as algebraic numbers (solutions of polynomial equations), definite integrals, infinite sums, infinite products, or values of transcendental functions; in fact, any number whose decimal digits are generated by some computer program.

These include sqrt(2), ln 10, pi, e, sqrt(3+sqrt(5)), Euler’s constant gamma, values of the zeta function, the gamma function, etc. etc. Every number that you will ever meet concretely in a mathematics course is a computable number. Any kind of decimal that is conjured up by some pattern, say 0.1101001000100001000001…, or even by some rule such as 0.a_1 a_2 a_3 … where a_i is 1 unless i is an odd perfect number, in which case a_i=2, is a computable number.
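To make the point concrete, the first pattern above really is given by a finite program. A sketch in Python: the 1s sit at positions 1, 2, 4, 7, 11, 16, 22, ..., with each gap one zero longer than the last.

```python
def pattern_digit(i):
    """The i-th decimal digit (i >= 1) of 0.1101001000100001000001...:
    a 1 at positions 1, 2, 4, 7, 11, 16, 22, ... and 0 everywhere else."""
    pos, gap = 1, 1
    while pos < i:        # advance through the positions of the 1s
        pos += gap
        gap += 1
    return 1 if pos == i else 0

first_digits = "".join(str(pattern_digit(i)) for i in range(1, 23))
# first_digits == "1101001000100001000001"
```

The second rule is likewise programmable, since “is i an odd perfect number” is a finite divisor computation for each given i. Every such rule yields a computable number.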

And what is then an uncomputable real number?? Hmm.. let’s just say this rather quickly and then move on to something more interesting, okay? Right: an uncomputable real number is just a real number that is not computable.

Uhh.. such as…? Sorry, but there are no known examples. It is impossible to write down any such uncomputable number in a concrete fashion. And what do these uncomputable numbers do for us? Well, the short answer is: nothing. They are not used in practical applications, and even theoretically, they don’t gain us anything. But they are there, my friends—oh yes, they are there — because the measure theory texts tell us they are!

And the measure theory texts tell us even more: that the uncomputable real numbers in fact swamp the computable ones measure-theoretically. In the interval [0,1], the computable numbers have measure zero, while the uncomputable numbers have measure one.

Yes, you heard correctly, this is a bona-fide theorem of modern measure theory: the computable numbers in [0,1] have measure zero, while the uncomputable numbers in [0,1] have measure one!
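For the record, here is the standard textbook argument, so you can judge for yourselves how much completed infinity it quietly assumes. The computable numbers are countable, since each is specified by one of countably many finite programs; enumerate them as c_1, c_2, c_3, ... and, given any epsilon > 0, cover them:

```latex
\{c_1, c_2, c_3, \dots\} \;\subseteq\; \bigcup_{n=1}^{\infty}
\left( c_n - \frac{\varepsilon}{2^{n+1}},\; c_n + \frac{\varepsilon}{2^{n+1}} \right),
\qquad
\text{total length} \;\le\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^{n}} \;=\; \varepsilon .
```

Since epsilon was arbitrary, the computable numbers are assigned measure zero, and their complement in [0,1], the uncomputable numbers, measure one. The entire conclusion rests on an infinite enumeration, an infinite union, and an infinite sum.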

Oh, sure. So according to modern probability theory, which is based on measure theory, the probability of picking a random real number in [0,1] and getting a computable one is zero. Yet no measure theorist can give us even one example of a single uncomputable real number.

This is modern pure mathematics going beyond parody. Future generations are going to shake their heads in disbelief that we happily swallowed this kind of thing without even a trace of resistance, or at least disbelief.

But this is 2016, and the start of a New Year! I hope you will join me in an exciting venture to expose some of the many logical blemishes of modern pure mathematics, and to propose some much better alternatives — theories that actually make sense. Tell your friends, spread the word, and let’s not be afraid of thinking differently. Happy New Year.

 

 

Let alpha be a real number

PM (Pure Mathematician): Let alpha be a real number.

NJ (Me): What does that mean?

PM: Surely you are joking. What do you mean by such a question? Everyone uses this phrase all the time, probably you also.

NJ: I used to, but now I am not so sure anymore what it means. In fact I suspect it is nonsense. So I am asking you to clarify its meaning for me.

PM: No problem, then. It means that we are considering a real number, whose name is alpha. For example, alpha = 438.0457897416622849…

NJ: Is that a real number, or just a few decimal digits followed by three dots?

PM: It is a real number.

NJ: So a real number is a bunch of decimal digits followed by three dots.

PM: I think you know full well what a real number is, Norman. You are playing devil’s advocate. Officially a real number is an equivalence class of Cauchy sequences of rational numbers. The above decimal representation was just a shorthand.

NJ: So the real number alpha you informally described above is actually the following: {{32/141,13/55234,-444123/9857,…},{-62666626/43,49985424243/2,7874/3347,…},{4234/555,7/3,-424/55,…},…}?

PM: Well obviously that equivalence class of Cauchy sequences you started writing here is just a random collection of lists of rational numbers you have dreamed up. It has nothing to do with the real number alpha I am considering.

But now that I think about it for a minute, I suppose you are exploiting the fact that Cauchy sequences of rationals can be arbitrarily altered in a finite number of places without changing their limits, so you could argue that yes, my real number does look like that, although naturally alpha has a lot more information.

NJ: An infinite amount of more information?

PM: If you like.

NJ: What if I don’t like?

PM: Look, there is no use you quibbling about definitions. Modern pure mathematicians need real numbers for all sorts of things, not just for analysis, but also modern geometry, algebra, topology, you name it. Real numbers are not going away, no matter what kind of spurious objections you come up with. So why don’t you spend your time more fruitfully, and write some papers?

NJ: Have you heard of Wittgenstein’s objections to the infinite shenanigans of modern pure mathematics?

PM: No, but I think I am about to.

NJ: Wittgenstein claimed that modern pure mathematicians were trying to have their cake and eat it too, when it came to specifying infinite processes, by bouncing around between believing that infinite sequences could be described by algorithms, or they could be defined by choice. Algorithms are the stuff of computers and programming, while choice is the stuff of oracles and slimy intergalactic super-octopi. Which camp are you in? Is your real number alpha given by some finite code or by the infinite musings of a god-like creature?

PM: I think you are trying to ensnare me. You want me to say that I am thinking about decimal digits given by a program, but then you are going to say that that repudiates the Axiom of Choice. I know your strategy, you know! Don’t think you are the first to try to weaken our resolve, or our faith in the Axioms. Mathematics has to start somewhere, after all.

NJ: And your answer is?

PM: Sorry, my laundry is done now, and then I have to finish my latest paper on Dohomological Q-theory over twisted holographic pseudo-morphoids. Cheers!

NJ: Cheers. Don’t forget to take alpha with you.

 


The Banach-Tarski paradox: is it nonsense?

How can you tell when your theory has overstepped the bounds of reasonableness? How about when you start telling people your “facts” and their faces register with incredulity and disbelief? That is the response of most reasonable people when they hear about the “Banach-Tarski paradox”.

From Wikipedia:

The Banach–Tarski paradox states that a ball in the ordinary Euclidean space can be doubled using only the operations of partitioning into subsets, replacing a set with a congruent set, and reassembly.

The “theorem” is commonly phrased in terms of two solid balls, one twice the radius of the other: it asserts that we can subdivide the smaller ball into a small number (usually 5) of disjoint subsets, perform rigid motions (combinations of translations and rotations) on these sets, and obtain a partition of the larger ball. Or, in another version, of two balls the same size as the original. It is to be emphasized that these are cut-and-paste congruences! This was first stated by S. Banach and A. Tarski in 1924, building on earlier work of Vitali and Hausdorff.

Doubling_of_a_sphere,_as_per_the_Banach-Tarski_Theorem (1)

This “theorem” contradicts common sense. In real life we know that it is not easy to get something from nothing. We cannot take one dollar, subtly rearrange it in some clever fashion, and end up with two dollars. It doesn’t work.

That is why most ordinary people, when they hear about this kind of result, are at first disbelieving, and then, when told that the “proof” involves “free groups of rotations” and the “Axiom of Choice”, and that the resulting sets are in fact impossible to write down explicitly, just shake their heads. Those pure mathematicians: boy they are smart, but what arcane things they get up to!

This theorem is highly dubious. It really ought to be taken with a grain of salt, or at least generate some controversy. This kind of logical legerdemain probably should not go unchallenged for decades.

The logical flaws involved in the usual argument are actually quite numerous. First there are confusions about what “free groups” are and how we specify them. The definition of a finite group and the definition of an “infinite group” are vastly different kettles of fish. An underlying theory of infinite sets is assumed, but as usual a coherent theory of such infinite sets is missing.

Then there is a claim that free groups can be found inside the group of rotations of three dimensional space. This usually involves some discussion involving real numbers and irrational rotations. All the usual difficulties with real numbers that students of my YouTube series MathFoundations will be familiar with immediately bear down.

And then finally there is an appeal to the Axiom of Choice, from the ZFC axiomfest, which claims that one can make an infinite number of independent choices. But this contradicts the Law of (Logical) Honesty that I put forward several days ago. I remind you that this was the idea:

Don’t pretend that you can do something that you can’t.

You cannot make an infinite number of independent choices. Cannot. Impossible. Never could. Never will be able to. No amount of practice will help. Whistling while you do it won’t make it happen. You cannot make an infinite number of independent choices.

So we ought not to pretend that we can; that is what the Law of (Logical) Honesty asserts. We can’t just say: and now let’s suppose that we can make an infinite number of independent choices. That is just an empty phrase if we cannot support it in ways that people can observe and validate.

The actual “sets” involved in the case of transforming a ball of radius 1 to a ball of radius 2 are not sets that one can write down in any meaningful way. They exist only in a kind of no-man’s-land of speculative thinking, entirely dependent on the set-theoretic assumptions that prop them up. Ask for a concrete example, with explicit specifications, and you only get smiles and shrugs.

And so the Banach-Tarski nonsense has no practical application. There is no corresponding finite version that helps us do anything useful, at least none that I know of. It is something like a modern mathematical fairy tale.

Shouldn’t we be discussing this kind of thing more vigorously, here in pure mathematics?

The Alexander Horned Sphere: is it nonsense?

Modern topology is full of contentious issues, but no one seems to pay any notice. There are many weird, even absurd, “constructions” and “arguments” which really ought to generate vigorous debate. People should have differences of opinion. Alternatives ought to be floated. The logical structure of the entire enterprise ought to be called into question.

But not in these days of conformity and meekness, amongst pure mathematicians anyway. Students are indoctrinated, not by force of logic, clarity of examples and the compelling force of rigorous computations, but by being browbeaten into thinking that if they confess to “not understanding”, then they are tacitly admitting failure. Why don’t you understand? Don’t you have what it takes to be a professional pure mathematician?

Let’s have a historically interesting example: the so-called “Alexander Horned Sphere”. This is supposedly an example of a “topological space” which is “homeomorphic”… actually do you think I could get away with not putting everything in quotes here? Pretty well everything that I am now going to be talking about ought to be in quotes, okay?

Right, so as I was saying, the Alexander Horned Sphere is supposedly a topological space which is homeomorphic to a two-dimensional sphere. It was first constructed (big quotation marks missing on this one!) by J. W. Alexander in 1924, who was interested in whether the complement of a simply connected surface must itself be simply connected.

Simply connected means that any loop in the space can be continuously contracted to a point. The two-dimensional sphere is simply connected, but the one-dimensional sphere (a circle) is not. Alexander’s weird construction gives a surface which is topologically a two-sphere, but whose complement is like the complement of a torus: if we take a loop around the main body of the sphere, we cannot contract it to a point. And why not? Because there is an infinitely nested sequence of entanglements that our contracting loop can’t get past.
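In the standard textbook language (which, to be fair to the reader, I record here in the usual notation), the definition being invoked is:

```latex
% X is simply connected when its fundamental group is trivial:
\pi_1(X, x_0) \cong \{e\}
\quad\Longleftrightarrow\quad
\text{every continuous loop } \gamma : S^1 \to X
\text{ extends to a map } \bar{\gamma} : D^2 \to X.
```

The claim about Alexander’s example is then that the surface itself has trivial fundamental group, while its complement does not.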

[Figure: The Alexander Horned Sphere]

This image was made by Ryan Dahl, Creative Commons license.

Here is a way of imagining what is (kind of) going on. Put your two arms in front of you, so that your hands are close. Now with both hands, make a near-circle with thumb and index finger, almost touching but not quite, and link these two almost-loops. Now imagine each of your fingers/thumbs as being like a little arm, with a new finger/thumb pair growing from the end of each, also almost enclosing each other. And keep doing this, as the diagram suggests better than I can explain.

At any finite stage, none of the little almost-loops is quite closed, so we could still untangle a string that was looped around, say, one of your arms, just by sliding it off your arm, past the finger and thumb, around the other arm’s finger and thumb, and also navigating around all the little fingers and thumbs that you have grown, something like Swamp Thing.

Yes… but Alexander said: “Let’s go to infinity!” And most of the topologists chorused: “Yes, let’s go to infinity!” And most of their students dutifully repeated: “Yes, let’s go to infinity, … I guess!” And lo… there was the Alexander Horned Sphere!

But of course, it doesn’t really make sense, does it? Because it blatantly contravenes a core Law of Logic, in fact the one we enunciated two days ago, called the Law of (Logical) Honesty:

Don’t pretend that you can do something that you can’t.

The construction doesn’t work because it requires us to grow, or create, or construct, an infinite number of pairs of littler and littler fingers, and you just can’t do that!! All that we can logically contemplate is a finite version, as actually shown in the diagram above. And for any finite version, the supposed property that Alexander thought he had constructed disintegrates.
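The finite version can at least be enumerated honestly. A minimal sketch (my own illustration, not part of Alexander’s argument): each interlocked finger/thumb pair spawns two new pairs at the next stage, so stage n contains 2^n pairs — always a finite number, and at every finite stage every almost-loop remains open.

```python
def horn_pairs(stage: int) -> int:
    """Number of interlocked finger/thumb pairs after `stage` branchings."""
    pairs = 1  # stage 0: the two original linked "hands" form one pair
    for _ in range(stage):
        pairs *= 2  # each pair branches into two new pairs
    return pairs

for n in range(5):
    print(n, horn_pairs(n))  # prints 1, 2, 4, 8, 16 pairs
```

However large the stage, the count stays finite, which is exactly the author’s point: the “completed” infinite object is never actually exhibited.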

The Alexander Horned Sphere: but one example of the questionable constructs that abound in modern pure mathematics.