A new logical principle

We are supposed to have a very clear idea about the "laws of logic". For example, if all men are mortal, and Socrates is a man, then Socrates is mortal.

Are there in fact such things as the "laws of logic"? While we can all agree that certain rules of inference, like the example above, are reasonably evident, there are plenty of more ambiguous situations where clear logical rules are hard to come by, and matters come down more to clever arguments, the weight of public opinion and the authority of the people involved.

It is not dissimilar to the situation with moral codes, where we can all agree that certain rules are self-evident in abstract ideal situations, but when we look at real-life examples, we often are faced with moral dilemmas characterized by ambiguity rather than certainty. One should not kill. Okay, fair enough. But what about when someone threatens one’s loved ones? What moral law guides us as to when we ought to flip from passivity to aggression?

Similar kinds of logical ambiguities surface all the time in mathematics, with the modern reliance on axioms, limits, infinite processes, real numbers and so on.

Let’s consider here the situation with “infinity”. Most modern pure mathematicians believe, following Bolzano, Cantor and Dedekind, that this is a well-defined concept, and indeed that it rightfully plays a major role in advanced mathematics. I, on the other hand, claim that it is a highly dubious notion; in fact not properly defined; unsupported by explicit examples; the source of innumerable controversies, paradoxes and indeed outright errors; and that mathematics can happily do entirely without it. So we have a major difference of opinion. I can give plenty of reasons and evidence, and have done so, to support my position. By what rules of logic is someone going to convince me of the errors of my ways?

Appeals to authority? That won’t wash. A poll to decide things democratically? No, I will not accept public opinion over clear thinking.

Perhaps they could invoke the Axiom of Infinity from the ZFC axiomfest! According to Wikipedia this Axiom is:

\exists X \left[ \varnothing \in X \land \forall y \, (y \in X \Rightarrow S(y) \in X) \right]

In other words, more or less: an infinite set exists. But I am just going to laugh at that. This is supposed to be mathematics, not some adolescent attempt to create god-like structures by stringing words, or symbols, together.

As a counter to such nonsense, I would like to propose my own new logical principle. It is simple and sweet:

Don’t pretend that you can do something that you can’t.

This principle asks us essentially to be honest. To not get carried away with flights of fancy. To keep our feet firmly planted in reality.

According to this principle, the following questions are invalid logically:

If you could jump to the moon, then would it hurt when you landed?

If you could live forever, what would be your greatest hope?

If you could add up all the natural numbers 1+2+3+4+…, what would you get?

As a consequence of my new logical principle, we are no longer allowed to entertain the possibility of “doing an infinite number of things”. No “adding up an infinite number of numbers”. No creating data structures by “inserting an infinite number” of objects. No “letting time go to infinity and seeing what happens”.

Instead, we might add up 10^6 numbers, or insert a trillion objects into a data set, or let time equal t=883,244,536,000. In my logical universe, computations finish. Statements are supported by explicit, complete examples. The results of arithmetical operations are concrete numbers that everyone can look at in their entirety. Mathematical statements and equations do not trail "off to infinity" or "converge somewhere beyond the horizon", or invoke mystical aspects of the physical universe that may or may not exist.
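A finite computation of this kind can be carried out in full and its result inspected in its entirety. Here is a minimal Python sketch (the function name is my own) summing the first 10^6 natural numbers with exact integer arithmetic:

```python
# Sum the first 10**6 natural numbers with exact integer arithmetic.
# Every step is a concrete, finite operation; the computation terminates,
# and the result is a single number we can look at in full.
def finite_sum(n):
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

print(finite_sum(10**6))  # prints 500000500000, matching n*(n+1)//2
```

The answer agrees with the closed form n(n+1)/2, and nothing here appeals to any infinite process.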

In my view, mathematics ought to be supported by computations that can be made on our computers.

As a consequence of my way of thinking, the following is also a logically invalid question:

If you could add up all the rational numbers 1/1+1/2+1/3+1/4+…, what would you get?

It is nonsense because you cannot add up all those numbers. And why can you not do that? It is not because the sum grows without bound (admittedly not in such an obvious way as in the previous example), but rather because you cannot do an infinite number of things.
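The observation that these partial sums grow without bound, though very slowly, can itself be checked one concrete partial sum at a time, using exact rational arithmetic. A small Python sketch (the function name is mine):

```python
from fractions import Fraction

# Exact partial sums of 1/1 + 1/2 + 1/3 + ... + 1/n: each one is a
# concrete rational number that can be written out in full.
def harmonic(n):
    return sum(Fraction(1, k) for k in range(1, n + 1))

print(harmonic(4))        # prints 25/12
print(harmonic(100) > 5)  # prints True: the partial sums creep past any bound
```

Each partial sum is a perfectly definite rational number; it is only the "sum of all the terms" that has no meaning here.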

As a consequence of my way of thinking, the following is also a logically invalid question:

If you could add up all the rational numbers 1/1^2+1/2^2+1/3^2+1/4^2+…, what would you get?

And the reason is exactly the same. It is because we cannot perform an infinite number of arithmetical operations.

Now in this case someone may argue: wait Norman – this case is different! Here the sum is "converging" to something (to "pi^2/6" according to Euler). But my response is: no, the sum does not make sense, because the actual act of adding up an infinite number of terms, even if the partial sums seem to be heading somewhere, is not something that we can do.

And this is not just a dogmatic or religious position on my part. It is an observation about the world in which we live. You can try it for yourself. To give you a head start, here is the sum of the first one hundred terms of the above series:

(1589508694133037873112297928517553859702383498543709859889432834803818131090369901)/(972186144434381030589657976672623144161975583995746241782720354705517986165248000)

Please have a go, by adding more and more terms of the series: the next one is 1/101^2. You will find that no matter how much determination, computing power and time you have, you will not be able to add up all those numbers. And the idea that you can do this in the decimal system will very likely become increasingly dubious to you as you proceed. There is only one way to sum this series, and that is using rational number arithmetic, and then only up to a certain point. You cannot escape the framework of rational number arithmetic in which the question is posed. Try it, and see if what I say is true!
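This experiment is easy to set up with exact rational arithmetic, for example in Python (a sketch; the function name is mine). Each partial sum is one concrete fraction, and the denominators grow at a ferocious rate:

```python
from fractions import Fraction

# Exact partial sums of 1/1^2 + 1/2^2 + ... + 1/n^2 in rational arithmetic.
def basel_partial(n):
    return sum(Fraction(1, k * k) for k in range(1, n + 1))

print(basel_partial(3))  # prints 49/36
s = basel_partial(100)
# After only one hundred terms the denominator already runs to dozens of digits.
print(len(str(s.denominator)))
```

Pushing n higher makes the point of the post vividly: every partial sum is a definite rational number, but the computation always stops somewhere.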

There are many further consequences of this principle, and we will be exploring some of them in future blog entries. Clearly this new logical law ought to have a name. Let’s call it the law of (logical) honesty. Here it is again:

Don’t pretend that you can do something that you can’t.

As Socrates might have said, it’s just simple logic.

3 thoughts on "A new logical principle"

  1. Vincent Marciante

    You asked "If you could add up all the rational numbers 1/1+1/2+1/3+1/4+…, what would you get?" and regarding the question stated that "It is nonsense because you cannot add up all those numbers." If one were asked to say something interesting about summing 1/1 + 1/2 + 1/4 + 1/8 + …, do you think that saying "2 is the smallest rational number that is greater than the partial sum at any stage in the process" is logically valid/honest? Also, what about the more succinct phrase "the value of the sum approaches 2"?

    1. njwildberger: tangential thoughts (post author)

      I would agree that a statement like "2 is the smallest rational number that is greater than the partial sum at any stage in the process" is logically valid, or can be made so. As for "the value of the sum approaches 2", that is somewhat more problematic: one is flagging here that the meaning of the expression 1/1 + 1/2 + 1/4 + 1/8 + … is going to be defined in a certain way. I am not saying that cannot be done, but one has to be very, very careful not to build into the definition the assumption that one is "going to do an infinite number of things".
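The commenter's first formulation can indeed be checked mechanically: for 1/1 + 1/2 + 1/4 + …, each partial sum falls short of 2 by exactly the last term taken, so 2 bounds every partial sum from above. A small Python sketch (the function name is mine):

```python
from fractions import Fraction

# Partial sums of 1 + 1/2 + 1/4 + ... : after k terms the sum is
# exactly 2 - 1/2**(k-1), so every partial sum stays strictly below 2.
def geometric_partial(k):
    return sum(Fraction(1, 2**i) for i in range(k))

for k in range(1, 11):
    assert geometric_partial(k) == 2 - Fraction(1, 2**(k - 1))
print(geometric_partial(5))  # prints 31/16
```

Note that this check involves only finitely many exact rational computations at each stage; no appeal to a completed infinite sum is made.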

  2. gentzen

    "In my logical universe, computations finish. Statements are supported by explicit, complete examples. The results of arithmetical operations are concrete numbers that everyone can look at in their entirety."

    Even the concrete numbers may be less explicit than expected. After all, there are many possible representations of a concrete natural number. You might write it down in unary notation, in binary notation, or in the commonly used decimal system.

    I wanted a more canonical notation system instead, and wanted it to also support rational numbers (and maybe modular arithmetic and q-adic numbers). This essentially gave me a notation as acyclic graphs, but deciding whether two numbers are equal in this system leads to questions related to probabilistic identity testing.

    For me, this shows that even explicit representations of concrete numbers quickly lead back to questions about the "law of excluded middle". For the intuitionistic logic started by Brouwer, we have since "learned" how to refute the "law of excluded middle" (in that context). For the ultrafinitist position you encourage, we strongly suspect that the "law of excluded middle" also fails, but being able to prove this would basically be equivalent to resolving some famous open problems in computational complexity.

