# Logical difficulties in modern mathematics

Modern mathematics is enormously complicated and sophisticated. It takes some courage, and perhaps some foolishness, to dare to suggest that behind the fancy theories lie serious logical gaps, and indeed error. But this is the unfortunate reality. Around the corner, however, is a new and more beautiful mathematics, a more honest mathematics, in which everything makes complete sense! It is my job to give people glimpses of this better, more logical alternative, and to empower young people especially not to be afraid to question the status quo and the dubious thinking that currently holds sway over the subject. My MathFoundations series of videos will investigate these problems in a systematic way; let me here at least briefly outline some of the problems, so you can get an initial idea, and so that perhaps some of you will start to think more seriously about these important issues. I will be saying a lot more about these topics in future posts.

The notion of rigour in mathematics is a difficult one to pin down. Certain historical periods accepted notions or arguments that were later deemed insufficiently precise, or even incorrect, but this often became clear only once a more accurate way of thinking emerged. A familiar illustration is the geometry of Euclid’s Elements, which for most of the last two thousand years was considered the model for the logical presentation of mathematics. Only in the nineteenth century was it acknowledged that Euclid’s definitions of point and line were imprecise, that he implicitly used rigid motions in proofs without defining them, that intersections of circles were taken for granted, that notions of betweenness were used without supporting definitions, that arguments by pictures were implicitly relied upon, and that most of the three-dimensional parts of the geometry were logically unsubstantiated. Talking about alternative ways of thinking in each of these cases became possible thanks to non-Euclidean geometries, linear algebra, and the idea of geometry over finite fields. Einstein’s theory no doubt played a big role in loosening people’s conviction that Euclidean geometry was somehow God-given.

The foundations of trigonometry are also suspect as soon as one inquires carefully into the nature of an angle, a difficult concept that Euclid purposefully avoided. It requires either the notion of arc-length or of area contained by a curve, and both of these require calculus. The usual pastiche of trigonometric relations thus depends logically on a prior theory of analysis, a point that even most undergraduates never properly see. Indeed the very notion of a curve was problematic for seventeenth and eighteenth century mathematicians, and even today it is not straightforward. For example, one of the supposedly basic results about curves is the Jordan curve theorem: a simple closed curve in the plane separates the plane into two regions; but it is the rare undergraduate who can even state this result correctly, let alone prove it.

There are even surprising and serious logical gaps in first year calculus. The foundations of the “real number line” are notoriously weak, with continued confusion as to the nature of the basic objects and the operations on them. Attempts to define “real numbers” in the way applied mathematicians and physicists would prefer—as decimal expansions—run into the serious problems of how to define the basic operations and prove the usual laws of arithmetic. [Try to define multiplication between two infinite decimals, and then prove that this law is associative!] The approaches using equivalence classes of Cauchy sequences, or Dedekind cuts, suffer from an inability to identify when two “real numbers” are the same, and purposefully side-step the crucial issue of how we actually specify these objects in practice. Dedekind cuts in particular amount to picking oneself up by one’s own bootstraps, with a notable poverty of examples. The continued fractions approach, while in many ways the most enlightened path, also suffers from difficulties. The result of these ambiguities is a kind of fantasy arithmetic of real numbers, a thought-experiment floating above and beyond the reach of concrete examples and computations. This is why computer scientists have such a headache trying to encode these “real numbers” and their arithmetic on our computers.
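The bracketed challenge can be made concrete. Below is a minimal Python sketch (my illustration, using exact rational arithmetic) of why finitely many digits of the factors need not settle even the leading digits of a product:

```python
from fractions import Fraction

def truncate(x: Fraction, digits: int) -> Fraction:
    """Keep only the first `digits` decimal places of x (round toward zero)."""
    scale = 10 ** digits
    return Fraction(int(x * scale), scale)

# x = 1/3 = 0.333..., y = 3; the true product is exactly 1.
x, y = Fraction(1, 3), Fraction(3)
for d in (1, 5, 20):
    approx = truncate(x, d) * y
    print(d, approx, approx < 1)
# Every truncated product (9/10, 99999/100000, ...) falls short of 1:
# no finite prefix of 0.333... decides whether the product's expansion
# should begin "0.9..." or "1.0...".
```

Any honest definition of multiplication of infinite decimals has to confront exactly this carrying problem.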

The serious problems with the continuum are reflected in an attendant state of denial by our first year calculus texts, which try to bluff their way through these difficulties by pretending that the foundations either have been laid out properly elsewhere, can be replaced by some suitable belief system dressed up using “axiomatics”, or can be glossed over by appeals to authority. The lack of examples and illustrative computations is illuminating. A challenge to those pure mathematicians who object to these claims: can you show us some explicit first year examples of arithmetic with real numbers??

The Fundamental Theorem of Algebra, a key result in undergraduate mathematics, that every polynomial of degree n ≥ 1 has a zero in the complex plane, is almost never proved properly. While it ostensibly appears to be `proved’ in complex analysis courses, it is doubtful that this is convincing to students: after all, by the time one has studied complex analytic functions to the point of being able to apply Liouville’s theorem, who can say for sure that one has not already used, perhaps implicitly, the very result one is ostensibly proving? In fact complex analysis as laid out in undergraduate courses is very much open to criticism, and not just because of the nebulous situation with `real numbers’. Yet this crucial result (FTA) is used all the time to simplify arguments.

Closely connected with all of this is Cantor’s theory of `infinite sets’ and its current acceptance by the majority as the foundation of mathematics. The essential problem that ultimately overwhelmed Cantor is still with us: what exactly is an “infinite set”? For a long time now it has been well-known that Cantor’s initial “definition” of an infinite set was far too vague; consideration of the “set of all sets”, or the “set of all groups” or the “set of all topological spaces” are fraught with difficulty and indeed paradox. The modern attitude is to slyly substitute some other terms like “class” or “family” or “category” when possible contradictions might arise. Hopefully fellow citizens will have the decency to not raise the question of what exactly these words mean! If everyone plays along, there is no problem, right?

Other weaknesses of modern analysis arise with issues of constructibility and specification. What do we actually mean when we say “Let G be a Lie group”, or “Consider the space of all analytic functions on the circle”, or “Now take the nth homology group”?? Terminology is important: I have never seen a proper discussion of what the words let, consider or take actually mean in pure mathematics, despite their universal usage. Difficulties with terminology also affect the core set-up: the modern mathematician likes to frame her subject in terms not only of sets but also of functions. The latter term is almost as problematic as the former.

What precisely is a “function”? Okay, the usual definition is something like “a rule that inputs one kind of object and outputs a possibly different kind of object”. But this passes the buck from defining the term “function” to defining the term “rule”. Are we thinking about a computer program here? If so, what kind of program? What language and syntax? What conventions about how to specify a program, and how does one tell if my program defines the same “function” as your program??
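A toy illustration of the last question (a hypothetical pair of programs, in Python): the two texts below are plainly different programs, yet deciding whether they define the same “function” already calls for a proof, not a computation.

```python
def f(n: int) -> int:
    """Sum 1 + 2 + ... + n by explicit looping."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def g(n: int) -> int:
    """The same values via Gauss's closed form."""
    return n * (n + 1) // 2

# The two *programs* are different texts; showing they define the same
# *function* requires an argument (here, induction), since finite
# testing can only ever check finitely many inputs:
print(all(f(n) == g(n) for n in range(100)))  # True -- but only for these inputs
```

In general, determining whether two arbitrary programs compute the same function is not even algorithmically decidable.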

The modern analyst likes to go further, and also talk about “arbitrary functions”, allowing not only those that can be described in some concrete way by an arithmetical expression or a computer program, but also all those “functions which are not of this form”. What exactly this means, if anything, is highly debatable. The lack of clear examples that can be brought to bear on such a discussion is a hint that we are chatting here about something other than mathematics. Surely a distinction ought to be drawn between “functions” which one can concretely specify and “functions” which one can only talk about. Even better would be to cease discussion about the latter entirely, or at least relegate them to philosophy!

The theoretical use of limits in calculus is generally lax. This despite all the huffing and puffing with epsilons and deltas, whose seeming precision obscures the more devious sleights of hand, of which there are many. For example, while care is often used to `prove’ the Intermediate Value Theorem (which is obvious to any engineer or physicist), the use of `limit’ in the usual definition of the Riemann integral is almost a complete cheat. Have a look at your calculus book carefully in this section, and see what I mean! Most first year students are blissfully unaware of the vast logical gaps in their courses. Most mathematicians do not go out of their way to point these out.
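For concreteness, here is a sketch (my illustration in Python, using exact rational arithmetic and uniform partitions only, which already dodges the general limit over all partitions) of the Riemann sums a calculus text has in mind:

```python
from fractions import Fraction

def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum of f on [a, b] with n equal subintervals."""
    a, b = Fraction(a), Fraction(b)
    h = (b - a) / n
    return sum(f(a + k * h) * h for k in range(n))

# Riemann sums for f(x) = x^2 on [0, 1], computed exactly as rationals.
for n in (10, 100, 1000):
    print(n, float(riemann_sum(lambda x: x * x, 0, 1, n)))
# The sums approach 1/3 -- but note this is a limit indexed by
# partitions, a different notion from the "f(x) as x -> a" limit
# defined earlier in the course.
```

Each individual sum is unproblematic finite arithmetic; the question the text raises is what kind of “limit” of these sums is being invoked, and where it was defined.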

Of course there is much more to be said about these issues. All of them will be addressed in my MathFoundations YouTube series, but I think it useful to also begin a discussion of them here in this blog. There is another, more beautiful, mathematics waiting to be discovered, but first before we can properly see it, we need to clean out the cobwebs that currently obstruct our vision.

## 15 thoughts on “Logical difficulties in modern mathematics”

1. Kernel

I don’t understand your point about the Riemann integral. I’ll try to understand by posing you a question.

Knowing you like rational numbers, let’s define the function f(x): I -> I (the unit interval to the unit interval) to be 1 if and only if x is a rational number for which the denominator of its standard form is even. That is, if x is a rational number, f(x)=1 if and only if there exists an odd integer k and a non-zero even integer n so that x=k/n. For every other x in I, we set f(x)=0.
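For what it’s worth, this rule is indeed algorithmic on the rationals; a Python sketch (reading “standard form” as lowest terms with positive denominator, which is exactly what Python’s Fraction stores):

```python
from fractions import Fraction

def f(x: Fraction) -> int:
    """1 if the denominator of x in lowest terms is even, else 0."""
    # Fraction normalizes to lowest terms with a positive denominator,
    # so the 'standard form' denominator is available directly.
    return 1 if x.denominator % 2 == 0 else 0

print(f(Fraction(1, 2)), f(Fraction(2, 4)), f(Fraction(1, 3)))  # 1 1 0
```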

Obviously the Riemann integral of f over I doesn’t exist, which I’m sure is the “cheat” to which you allude. What ought to be the integral?

1. njwildberger: tangential thoughts Post author

There are difficulties in trying to make precise what you are saying, but they are not entirely relevant to my criticism of the Riemann integral, which concerns the confusion between two very different usages of the idea of a limit. The first is usually defined rather carefully in the context of the limit of a function f(x) as x -> a, where a is a particular number. But then for the definition of the integral we introduce a markedly different usage: the limit of a sum as certain partitions get small.

This latter notion is almost never properly defined, and no properties of it are established. So in other words, if you have very carefully studied and understood the classical definition of lim_{x->a} f(x), you ought to be completely mystified by how this notion is applied to the definition of a Riemann integral.

The point is made clearer by actually looking at a specific calculus text and seeing explicitly the two different usages, rarely with even a sideways glance towards a justification or explanation. This pretence that nothing underhanded is going on is just as annoying as the actual subterfuge, at least to me.

1. Kernel

Interesting answer. Thinking about it now, I expect that I should be able to define a directed set of partitions (if a partition is a set of points, any superset of the partition is a ‘greater’ partition) and make an argument about convergence of a net rather than convergence of a sequence. That said, I’m not an analyst (or even a mathematician — I’m a physics PhD student with some maths training) and don’t know if there is still subterfuge happening.

I’m curious to know the issue with my suggested integral. Being in quantum physics, I’ve gotten used to the idea that different researchers can disagree violently about the philosophy underlying the subject and yet reach totally consistent answers when solving problems. I chose f(x) so that it could be defined just as easily for rational numbers as for real numbers. In the rational case, my definition is completely algorithmic (I think). I’m curious to know if your views lead to a different answer than that of Lebesgue.

2. njwildberger: tangential thoughts Post author

Indeed, more serious attempts try to use “nets of partitions”, but beginning students have no clue about this, since it is invariably never mentioned. Even after you have identified the problem, however, there are logical concerns with “nets of partitions”, as well as the more serious underlying problems with the nature of the “real numbers” which are supposedly the “limits” to which all this hocus pocus “converges”.

3. Jim

A little apart from the real numbers, there’s this other notion of complex numbers. Early on in your book, in discussing the finite fields Fp, you show how, depending on the specific field chosen, numbers like -1 or -3 can be square numbers. I found this idea intriguing. Of course, in algebra, one use of imaginary numbers is to show when there are no possible solutions to a problem. Another seems to be that using them as a tool in algebra can reveal other hidden truths. I’m sure there are more uses for them to someone more versed in math than I. But it was surprising and interesting to me that, viewed through the lens of finite number fields, imaginary numbers can, in certain situations, cease to be “imaginary” and instead represent actual values. Would you say the reigning theory of complex numbers is also due some reimagining (pardon the pun)?
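The observation about -1 and -3 can be checked directly; a brute-force Python sketch (assuming p is prime) of when a number is a square mod p:

```python
def is_square_mod(a: int, p: int) -> bool:
    """Brute-force check whether a is a square in the field Z/pZ (p prime)."""
    return any((x * x - a) % p == 0 for x in range(p))

# -1 turns out to be a square mod p exactly for p = 2 and primes p = 1 (mod 4):
for p in (3, 5, 7, 11, 13, 17):
    print(p, is_square_mod(-1, p))
# e.g. mod 5 we have 2*2 = 4 = -1, so "i" is simply the number 2 there.
```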

1. njwildberger: tangential thoughts Post author

I predict that one major trend in mathematics in the next century will be the universalization of much of geometry and analysis. By this I mean the realization that when you set up certain theories in a particular way, they manifest themselves not just over the “real or complex numbers” but more generally over other fields, including even finite fields. In my book I try to promote the idea that trigonometry is one such subject.

I think that calculus is another. This may seem a bit strange, but there are interesting aspects of calculus that can still be done over more general fields. One also learns something by seeing more clearly how things are different.

Almost all of traditional analysis can certainly be reformulated over the rational numbers, for example, where it inevitably leads to greater insight into the relations between theory and practice, and between exact and approximate solutions. One can no longer hide behind the easy idealizations involved in the current framework using “real numbers”.

You ask an important question about how to define functions and whether they relate to computer algorithms. This video series does a good job of stepping back and looking at the bigger picture of what it means to say something is an algorithm:
http://channel9.msdn.com/Series/C9-Lectures-Yuri-Gurevich-Introduction-to-Algorithms-and-Computational-Complexity
From experience, people expect functions and algorithms to compute a value or perform a task. But when you step back and look at it from a scientific standpoint, the way a physicist would, you see that an algorithm doesn’t have to produce anything at all; a video game, for example, does not produce any result we need. Nor does an algorithm have to follow an ordered path of steps; it can run in parallel and still produce results.

5. Thomas Fuhrmann

Concerning your considerations about limits and integrals above – what about starting out with an algebraic definition of an integral based on polynumbers, like:
I(x)=x^2/2
exploring possible geometric interpretations of this definition and extending this definition to rational polynumbers. Maybe that’s also what you’re thinking about. I’m curious to read and see more about your very interesting ideas!

1. njwildberger: tangential thoughts Post author

That kind of algebraic approach to integrals is very much in the direction I will be taking in the MathFoundations series. It is somewhat surprising that the basics of integration theory can be established without either “real numbers” or “limits”.
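As one rough illustration of what a limit-free, “real number”-free integration could look like (my sketch, not the actual development in the series): the formal antiderivative of a polynomial needs only rational arithmetic.

```python
from fractions import Fraction

def integrate(coeffs):
    """Formal antiderivative of c0 + c1*x + c2*x^2 + ..., given as a
    coefficient list; returns [0, c0, c1/2, c2/3, ...].  Purely
    algebraic: exact rational arithmetic, no limits anywhere."""
    return [Fraction(0)] + [Fraction(c) / (k + 1) for k, c in enumerate(coeffs)]

print(integrate([0, 1]))  # antiderivative of x: coefficients 0, 0, 1/2, i.e. x^2/2
```

The example recovers I(x) = x^2/2 from the comment above by pure bookkeeping with coefficients.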

1. Thomas Fuhrmann

That is interesting, and for polynumbers it is not that difficult, I think. But I was wondering about extending it to rational polynumbers, beginning with “simple” ones like 1/x, 1/x^2, 1/x^3, … Already 1/x is a little bit problematic, I think, because of the non-uniqueness of the polynomial on-sequence corresponding to log(x). So the question is: is it possible to find a unique representation, or do we have to content ourselves with the fact that there is no such unique representation?

6. Paul Miller

(I’m really buying into this style of thought of yours, Norman)

I was looking into the diagonal argument of Cantor (admittedly from Wikipedia) and, whilst clearing up my understanding of what the argument does claim – and it is elegant – the problem you highlight was right there.

It starts off ‘reasonably’ enough showing that any binary sequence you could write down can be put into correspondence with a unique natural number, which would make the sequences (whilst ‘infinite’ in themselves) ‘countable’ as objects.

It then proceeds to construct a ‘zeroth’ binary sequence, using the information from the diagonal elements to ensure that we produce a result which cannot be listed. And then that sleight of hand occurs: “Let T be a set consisting of all infinite sequences of 0s and 1s”. (That should be ‘the’ set surely?)
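The diagonal step itself is a finite, mechanical rule, as a short Python sketch (my illustration, on finite prefixes of finitely many listed sequences) makes plain:

```python
def diagonal_flip(listing):
    """Given the first n listed sequences (each a list of at least n bits),
    return n bits of the 'zeroth' sequence: flip the k-th bit of the
    k-th listed sequence."""
    return [1 - seq[k] for k, seq in enumerate(listing)]

listing = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
print(diagonal_flip(listing))  # [1, 0, 1, 0] -- differs from row k in position k
```

The contested leap is from this finite computation to the completed “set T of all infinite sequences”, which no such computation ever exhibits.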

Having somehow ‘defined’ a way to construct ‘infinite sequences’ that relies only on the natural numbers, and painstakingly shown that every such sequence will be ‘in addition’ to any possible listing of sequences we may have (an appeal to construction) it leaps off a theological cliff edge by invoking a ‘set T’ (i.e. not an ‘object’ so much as a mental category).

This reminded me immediately (and the article makes mention) of Russell’s paradoxical ‘definition’ of the ‘set of all sets’. Doesn’t that just show the level of distraction required to accept this? There really is no diagonal argument without an appeal to logic (and ‘logic’ is not much of a foundation for mathematics, actually.)

7. Relike868p

Dear Sir,
Would you suggest theory of fields to be introduced into secondary education to fill up the holes of “irrational numbers” or “real numbers”?

1. njwildberger: tangential thoughts Post author

I would not suggest that the general theory of fields be introduced into secondary education. But I do believe that mention of the finite (prime) fields is a natural and good thing to do. Arithmetic mod p, where p is a prime, is a fun and useful topic that enlivens arithmetic and has plenty of practical applications too.
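For instance, a few lines of Python show arithmetic mod 7, where even division stays inside the finite field (using Fermat’s little theorem for inverses):

```python
p = 7  # any prime

# Addition, multiplication, and even division all stay inside {0, ..., p-1}:
a, b = 3, 5
print((a + b) % p, (a * b) % p)   # 1 1
inv_b = pow(b, p - 2, p)          # Fermat: b^(p-2) is the inverse of b mod p
print(inv_b, (a * inv_b) % p)     # 3 2, i.e. 3/5 = 2 in the field F_7
```

Every element has an exact reciprocal, and there are no “gaps” in this arithmetic to paper over.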

8. Maths Lover

I want to ask: what is the problem with “let G be a group”? Why not treat “let” as “suppose”, or as an “if … then …” statement? And what do “suppose” or “if … then …” mean?! This is a meta-language, which we use to deal with our formal language. In the end, to deal with any language (a mathematical one, for instance) you must have a meta-language with which to construct your new language. And if we define some term t in terms of other terms, say t1 and t2, then we must define t1 and t2, say using s1, s2, s3, and then we must define those s’s! So we might continue defining things forever, which isn’t sensible, to me at least! We must stop at some level and consider some things as “primitive notions”, or construct our mathematical system to be true regardless of what those terms actually mean!

1. njwildberger: tangential thoughts Post author

The terms `let’, `consider’, `suppose’ `given’ are often used in an ambiguous way in modern mathematics. I would not be too confident about making statements about a `meta-language’; it seems more straightforward to just talk about plain English, which we are mostly using here.
An example of a common confusion: students often hear the phrase `let x be a real number’. But what does this concretely mean? Is it a thought experiment? If so which of the many possible `models’ of the `real numbers’ might we have in mind? Is there any conflict with the embarrassing fact that most models of real numbers have nowhere been written down completely and clearly? There are many examples when us pure mathematicians conjure stuff from nowhere with a wave of the `let there be’ phrase, or some variant of it.