Here is the video that presents this new result, at Wild Egg mathematics courses. The video description contains the following:

********************

The very first and arguably most important calculation in Calculus was Archimedes’ determination of the slice area of a parabola in terms of the area of a suitably inscribed triangle, involving the ratio 4/3. Remarkably, Archimedes’ formula extends to the cubic case once we identify the right class of cubic curves. These are the de Casteljau Bezier cubic curves with an additional Archimedean property, characterized either by the nature of the point at infinity on the curve, or alternatively by the geometry of the quadrilateral of control points.

This is a very pleasant situation, and shows the power of the Algebraic Calculus to not only explain current theories more carefully and correctly, but also to discover novel results and open new directions.

I should have mentioned in the video that this Archimedean situation also covers the special case of a cubic function of one variable, that is, a curve with equation y=a+bx+cx^2+dx^3.

*************************
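As a small numerical check of the classical ratio mentioned in the description (my sketch, not part of the video): for the parabola y = x^2, the area of the slice cut off by a chord is exactly 4/3 of the area of the inscribed triangle whose apex sits over the midpoint of the chord. A minimal illustration in Python, using exact rational arithmetic:

```python
from fractions import Fraction

def segment_area(a, b):
    # Area between the chord and the parabola y = x^2 over [a, b];
    # integrating (chord - curve) gives (b - a)^3 / 6.
    return (Fraction(b) - Fraction(a)) ** 3 / 6

def triangle_area(a, b):
    # Archimedes' inscribed triangle: vertices on the parabola at x = a,
    # x = b, and at the midpoint x = (a + b)/2.
    m = Fraction(a + b, 2)
    x1, y1 = Fraction(a), Fraction(a) ** 2
    x2, y2 = m, m ** 2
    x3, y3 = Fraction(b), Fraction(b) ** 2
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2

# For any chord, segment_area(a, b) / triangle_area(a, b) is exactly 4/3.
```

For example, over [0, 2] the segment area is 4/3 and the triangle area is 1; the ratio 4/3 is independent of the chord chosen, which is Archimedes' result.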

Posting research directly to YouTube, or some other place on the web, is quite an important development, I believe. Here I am forgoing the usual refereeing process and uploading the material to the world, or in practice to anyone interested who can find it. Should academics be allowed to do this?

On the one hand the work has not been peer reviewed, but these days peer review is often problematic, with most papers in pure mathematics almost certainly not being reviewed carefully and critically. This is not due to laziness or negligence, rather it is a necessary consequence of the increasing specialization and complexity of the subject. Most reviewers do not have the several weeks, or months, that it would typically require to delve into the details of a longish and complicated paper. It is understandable that on average they only skim the results and try to selectively check accessible proofs.

On the other hand, this new process completely sidesteps the usual gatekeepers of knowledge, namely editors and referees. Journals are often oriented to certain points of view or orthodoxies, unstated yet omnipresent. Perhaps they are entering a new phase when they will have to share relevance with the wiki processes of people deciding directly which content creators they value and trust.

In the meantime, I hope you enjoy the idea of a two thousand three hundred year old calculus result being extended to the next level!

These are remarkable claims that are sure to raise eyebrows not just in the historical community, but also in the mathematical one. How can an unknown scribe, writing almost 4000 years ago with a cuneiform wedge on a small clay tablet, possibly have understood trigonometry not only before anyone else, but in a fashion quite different from anything since (at least, anything before my book on Rational Trigonometry published in 2005)?

Could it really be that this ancient form of ratio-based trigonometry, which completely avoids all mention of angles, actually contains a more profound understanding of this fundamental subject than all those hundreds of subsequent tables? Might it be that we are on the verge of a major shift in our understanding of how to teach trigonometry to high school students, by incorporating this new, yet very old, understanding? And could it be that the powerful sexagesimal system that the ancient Sumerians first devised, and that is essential to the understanding of P322, holds powerful advantages for modern computing?

And of course: do we need to seriously re-evaluate the role of OB mathematics in the history of the subject? How much other important mathematics that is currently credited to the Greeks is actually due to the much earlier cultures of the Sumerians/Akkadians/Babylonians and/or the Egyptians?

These are fascinating questions that we hope will be among those discussed as the result of our work. But we do hope that people debate these and other important issues after at least having looked at our paper in some detail. Unfortunately some serious historical academics, as well as at least one science journalist, have leapt to negative conclusions without giving our paper a serious reading.

Eleanor and Evelyn: here is the link to the paper again — please have a go at digesting our arguments, which we have spent two years carefully crafting, and which we are confident will change your orientation to this tablet: Plimpton 322 is far more than a teaching aid for teachers to cook up quadratic problems for their students. It is a work of undisputed genius which required a deep understanding of the trigonometry of a right triangle, and took a huge amount of effort to compile.

Anyway, I anticipate quite a few more posts on this fascinating development.

This happy arrangement ensures that the generous lifestyles that we have voted upon ourselves can continue well into our long and satisfying retirement years; that we can look forward to an efficient health care system oriented towards our greying needs; and that eventually our children can pass on the ever-increasing cumulative burden of debt that our parents started, onto our grandkids.

We now have several highly effective strategies to ensure our offspring’s indebtedness. The first idea is both simple and foolproof, and revolves around the key idea of government bonds and debt. We borrow money from rich people to fund the development of our lifestyles, and promise to pay those rich people back at a higher rate of return than they would otherwise get. That way we get to party now, guarantee ourselves generous pensions once we retire, and ensure a lavish health care system is in place once we start to get decrepit. And the remarkable beauty is: we don’t have to pay for it — our children will! And of course the rich people are happy too, as they get even richer from the scheme.

This would not be such a clever idea if it were done on an individual basis, since it is hard to get your children to agree to take on the personal debts that you have accumulated over a lifetime. Instead, we would find ourselves after some years paying through the nose for indulgences past. But when we do this at a societal level, we get to stretch the life of these bond debts out from our generation to the next one, and crucially we can just issue more bonds to service the debts from the old ones! So we never actually have to pay the piper, but just get to pass the increasing mess on to our kids.

The other very cool strategy now in place worldwide is to jack up the price of real estate everywhere, so that young people have to enslave themselves to purchase a place to live. We do this first of all by crucially restricting supply: governments have careful “zoning laws” in place that ensure that empty land, even if it is in abundance, cannot be accessed for housing. We also ensure that more and more people are squeezed into a few mega-cities, where the obvious restrictions on land availability ensure that prices will only ever go upwards. And we orient the tax structures to favour “investors” (i.e. older people), allowing them to speculate advantageously and ensuring that young people who actually want to live in houses or flats to raise families have to juggle two jobs apiece to manage it.

And finally, we count on pliant governments to maintain our interests, so that if anything comes along to threaten our real estate bubbles, they quickly enact first-home-buyer loans, reductions in stamp duty, and the like, to heat up flagging demand and keep prices moving upwards. Every government knows that whoever is in power when the bubble breaks will be in the electoral wilderness for a generation afterwards, such is the power of our greying voting bloc.

The tax system and superannuation laws are set up to advantage senior citizens. Younger people pay for older people’s retirement. In agrarian times this was a societal convention that was more or less understood: grandma and grandpa were given a room at the back of the house, were kept supplied with enough food, and were tended when sick. Now we have managed to hardwire something rather more insidious into the system: we want to keep our original residences, with all their accumulated junk, into extreme old age; we want mobile health care as well as dialysis machines; we want travel reductions for the elderly; we want a good range of senior cruises and holidays; and we want tax breaks at every opportunity.

Then there’s university or college education. That used to be free, or almost free, when the baby boomers were going through the system. But now we have decided that students need to pay for a good part of their higher education, and clearly we can just keep jacking up the prices, forcing them into greater and greater debt before they have even landed their first jobs. Someone has to pay for my retirement, and why should it be me?

Increasingly, young people are starting to wake up to the shoddy deal that we have dealt them. But there is little use in complaining, since with a demographic as large as ours, we baby boomers have democracy on our side. Our children can console themselves with the realization that in the fullness of time they too can pass the accumulated debts on to their children, and with the possibility that when we pass on, the family house will go to them—at least whatever is left of it after the reverse mortgage we took out to finance that half year in Tuscany.

Australia is one of the world’s most economically advantaged countries, with 25 years of continuous “economic growth”. And we have a mountain of debt: public (governments at various levels), business, and private (all those expensive homes). Have a look at the Australian Debt Clock at http://www.australiandebtclock.com.au/. The estimate is a total of about 6 trillion AUS$ of debt, which works out to every man, woman and child owing, on average, around

$(6 x 10^12) / (25 x 10^6) = $24 x 10^4 = $240,000.

That includes, of course, those young children just coming into the world. But of course the pain is not spread equally, since a lot of people (often rich retirees) hold a good amount of that debt, and so benefit from the larger problem afflicting our society as a whole.

It is a sad situation. Perhaps we will see the day when kids are automatically born into slavery, to look forward to a life of working their way out of it. Perhaps that day is already here?

The talk will be in the Pure Mathematics Colloquium on **November 8 2016** at the University of New South Wales, Sydney (UNSW), probably at 3 pm. (Note the change in date from a previous announcement!)

**********************************

Speaker: A/Prof N J Wildberger (UNSW)

Title: Primes, Complexity and Computation: How Big Number theory resolves the Goldbach Conjecture

Abstract: The Goldbach Conjecture states that every even number greater than 2 can be written as the sum of two primes, and it is one of the most famous unsolved problems in number theory. In this lecture, we look at the problem from the novel point of view of *Big Number theory* – the investigation of large numbers exceeding the computational capacity of our computers, starting from Ackermann’s and Goodstein’s hyperoperations and continuing to the presenter’s successor-limit hierarchy, which parallels ordinal set theory.

This will involve a journey to a distant, seldom visited corner of number theory that impinges very directly on the Goldbach conjecture, and also on quite a few other open problems. Along the way we will meet some seriously big numbers, and pass by vast tracts of *dark numbers*. We will also bump into philosophical questions about the true nature of natural numbers—and the arithmetic that is possible with them.

We’ll begin with a review of prime numbers and their distribution, notably the Prime Number Theorem of Hadamard and de la Vallée Poussin. Then we look at how complexity interacts with primality and factorization, and present simple but basic results on the *compression of complexity*. These ideas allow us to slice through the Gordian knot and resolve the Goldbach Conjecture: using common sense, an Aristotelian view of the foundations of mathematics as espoused by James Franklin and his school, and back-of-the-envelope calculations.

*******************************
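For readers unfamiliar with the hyperoperations mentioned in the abstract: each level of the hierarchy iterates the one below it (addition iterates the successor, multiplication iterates addition, exponentiation iterates multiplication, tetration iterates exponentiation, and so on). A rough sketch for small natural-number arguments (the function name and encoding here are mine, not the talk's):

```python
def hyper(k, a, b):
    # Level k = 0: successor; 1: addition; 2: multiplication;
    # 3: exponentiation; k >= 4: fold level k-1 over b copies of a.
    if k == 0:
        return b + 1
    if k == 1:
        return a + b
    if k == 2:
        return a * b
    if k == 3:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = hyper(k - 1, a, result)
    return result

# hyper(4, 2, 3) is tetration: 2^(2^2) = 16. Values explode very quickly,
# which is the point: they soon exceed any computer's capacity.
```

Even modest inputs at level 4 or 5 already produce numbers no physical computer can write down, which is what motivates the talk's distinction between accessible and "dark" numbers.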

This lecture will be live streamed on YouTube at

So anyone from around the world who is interested can watch if they like. Hope you all will be able to join us for this fun, invigorating, and enlightening event! If you are in Sydney on the day, and can head over to UNSW for the event, we will be delighted to see you there.

MathFoundations178: The law of (logical) honesty and the end of infinity

Even after one day, I have had many interesting comments. I would like to take the liberty of sharing with you two particularly cogent and insightful comments. The first is by *Karma Peny*, who writes (I have added some paragraph breaks):

***************

Excellent video; I could not agree more that it is time to expel “infinity” from mathematics. Not only do we need to define fundamental concepts with more clarity, but we need to define exactly what mathematics is. After thousands of years we still have no clear statement to describe what mathematics is.

In the early days of mathematics, all fundamental axioms were derived from real-world objects and actions. Any dispute over axioms could be resolved by examination of real-world objects and actions. As such, fundamental axioms were to some extent ‘provable’ by studying real-world objects and actions. Mathematics was devised to solve real-world problems and it was underpinned by real-world physics. Essentially mathematics provided a modelling tool to help us manage quantities of objects, determine measurements and to make predictions about the real-world, such as for engineering purposes and in astronomy. Many real-world scenarios have the same underlying physics, and so the same general-purpose mathematics can be applied to all cases. The addition of 6 apples to 2 apples has the same underlying mathematics as the addition of 6 pears to 2 pears.

This apparent generic nature can create the illusion that mathematics has its own ‘existence’ and that it is not simply a tool based on real-world physics. This will annoy many mathematicians, but the fact is that to claim that something is not related to the physical universe is to believe in the supernatural. This is what a belief in the supernatural means… the acceptance of phenomena that are not of this world. Whether maths is in the chemistry of the brain, on a computer, or in written form, it consists of rules devised by humans and all of maths has a physical presence.

To claim it has its own inherent existence or that it is in some way detached from reality is to turn maths into a belief system. The axiom ‘an infinite set exists’ is of equal value to an axiom that states ‘the god of thunder exists’. We can claim it is consistent and cannot be disproved, but both these axioms are equally worthless and irrelevant in the real-world, just as are any deductions derived using these axioms. It is often argued that the use of ‘infinity’ in mathematics has proven to be very successful, but the successes could be despite the use of ‘infinity’ rather than because of it. I suspect we will have more clarity and even more successes if we abandon the use of non-real-world axioms.

*****************

And now here is the response to *Karma Peny’s* comment by *Amanojack A*, a consistent contributor of well-written and insightful comments. (I have made a single spelling correction.)

*****************

I think you have it exactly right. Math was born out of finding useful abstract objects and situations whose relations were isomorphic/homomorphic to various real-world situations. In other words, a mathematical field’s objects, “moving parts,” and those movements and relations usefully corresponded to certain objects, moving parts, and their movements and relations in the physical world. Pin down the math and now you have a powerful tool applicable to any real-world situation as long as it has an aspect with a homomorphic correspondent in the math. For example, pin down multiplication and you have a powerful tool for counting how many apples you have if they come in crates of 24 each.

So-called “pure math” was born out of the idea that it might be worth developing mathematical objects and relations that correspond to no physical situation yet discovered, but could. Seems noble enough. The problem came when people failed to keep track of context. They floundered into musing about things that not only had no known physical analog, but that **couldn’t ever even conceivably** have a physical analog. They were unpicturable, things we “only imagine that we can imagine,” as Wildberger said. Like infinity. In another comment I elaborate on how this mind trick is pulled off, making us think we can imagine something we really can’t.

When physicists objected, mathematicians like Hilbert decided to take over the physics departments as well – such has been the power of this social trick of intimidation by pretending to have a unique ability to imagine the nonsensical. Paradox thus became a badge of honor, a sign that you were approaching deep wisdom (rather than stumbling into incoherence). We live with the results; they now affect every field, as people point to how physics – king of the sciences! – gets away with it. The infection started with math, spread to physics, and after a century has turned into an epidemic with tendrils extending even as far as the art world of all things.

Returning mathematics to a solid footing is of paramount importance to all fields, as math is the standard-bearer for rigor. It does a good job with logical rigor but tends to ignore semantic rigor as is convenient, which in turn lets all other disciplines off the hook in this regard, weakening all of academia (physics being the main conduit).

You hit the nail on the head when you say the successes of mainstream math have come in spite of infinity rather than because of it. Just like the axiom, “There exists a god of thunder,” the axiom of infinite sets functions as a cultural license; it simply allows those figures with the most authority to make up whatever fudges they want to make it look like they’ve proven something rigorously when they haven’t. The resulting mathematical world and its engineering applications retain the appearance of being held up by mathematical rigor, but they are actually held up variously by fudges handed down by fiat and by engineers adjusting them to avoid the cases where they break down. In other words it’s a big mess that is shoehorned into a usable framework, but not by the rigor of mathematicians – that is just smoke and mirrors (see calculus, for example; “we’ll prove it rigorously, with limits!” – no, we’ll just make a show of it and move on, knowing it already works well enough for engineering).

In a sense, then, infinity has been quite successful…as a tool for advancing people’s math careers and social standing.

********************

Thanks to both Karma Peny and Amanojack A for these penetrating comments!

Go is originally a Chinese game, but it is also played extensively in Japan, Korea, and other Asian countries, along with the rest of the world. Here in Sydney we are very lucky to have a high-ranking Korean professional, Young-gil An (8D), to help promote the game and give teaching lessons. I will be heading to the Sydney Go Club this evening to hear him analyse the second game in the historic match between AlphaGo and Lee Sedol, one of the world’s top-ranked professional Go players. AlphaGo has won both of the first two games of this groundbreaking series of 5, which are being played over the next week in Seoul.

I watched much of the second game on YouTube, and loved Michael Redmond’s analysis of the game, and the associated comments by Chris Garlock. You can find the entire game and commentary at https://www.youtube.com/watch?v=l-GsfyVCBu0.

I felt that the innovative aspects of AlphaGo’s opening play were particularly noteworthy. Lee Sedol knows that AlphaGo has records of hundreds of thousands of games in its database (okay, probably millions, since it has been playing itself a lot, which is a unique and interesting way for it to improve), and so if it departs from very standard and traditionally respected patterns in the opening, the question naturally arises: does it know something that he, or other professional Go players, don’t?

This was perhaps most striking with the shoulder hit move on the fourth line stone at B37. Most of us amateurs would have been delighted to press along the fourth line making territory, but I guess Lee Sedol thought that would be submissive. Great stuff though.

I must admit that the awe and respect I have for the DeepMind team in creating such a powerful program is tempered with a bittersweet sadness that one of the really fundamentally human intellectual disciplines has been caught up to by our computers.

We can’t help but think: when will pure mathematics fall?


And there is hardly a more obviously suspicious case than that of *space-filling curves*. These are purportedly one-dimensional continuous curves that pass through every (real) point in the interior of a unit square.

But this contradicts ordinary common sense. It imbues mathematics with an air of disconnection with reality that lay people find disconcerting, just like the Banach-Tarski Paradox nonsense that I talked about a few blogs back.

In mathematics, dimension is an important concept: a line, or more generally a curve, is one-dimensional, while a square in the plane, or a surface, is two-dimensional, and of course we appear to live in a three-dimensional physical space. But from the 17th century onwards, mathematicians started to realize that the correct definitions of “curve” and “surface” were in fact much more subtle and logically problematic than at first appeared, and that “dimension” was not so easy to pin down either.

In 1890 a new kind of phenomenon was introduced which cast additional doubt on our understanding of these concepts. This was the **space-filling curve of Peano**, which ostensibly fills up all of a square, without crossing itself. This was a contentious “construction” at the time, resting on the hotly debated new ideas of Georg Cantor on infinite sets and processes. But the influential German mathematician David Hilbert rose to defend it, and so 20th-century pure mathematicians generally fell into line; today these curves are considered unremarkable, just another curious aspect of the modern mathematical landscape.

But do these curves really exist? More fundamentally, are they even well defined? Or are we talking about some kind of mathematical nonsense here?

While Peano’s original article did not contain a diagram, Hilbert in the following year published a version with a picture, essentially the one reproduced below, so we will discuss this so-called **space-filling curve of Hilbert**. It turns out that the curve is created by iterating a certain process indefinitely. Along the way, we get explicit, finitely prescribed, discrete curves that twist and turn around the square in a predictable pattern. Then “in the limit”, as the analysts like to say—as we “go to infinity”—these concrete zig-zag paths turn into a continuous path that supposedly passes through every real point in the interior of the square exactly once. Does this argument really work??

The pattern can be discerned from the sequence of pictures below. Consider the square as being divided into 4 equal squares. At the first stage we join the centres of these four squares with line segments, moving say from the bottom left to the top left, then to the top right, and then to the bottom right. This gives us a U shape, opening down. Now at the next stage, we join four such U shapes, each in one of the smaller sub-squares of the original. The first opens to the left, the next two open down, and the last opens to the right, and they are linked with segments to form a new shape, which we call U_2 as shown in the second diagram. In the third diagram, we put four smaller U_2 shapes together, also oriented in a similar way to the previous stage, to create a new U_3 curve. And then we carry on doing the same: shrink whatever curve U_n we have just produced, and arrange four copies in the sub-squares oriented in the same way, and linked by segments to get the next curve U_{n+1}.

“Hilbert curve”. Licensed under CC BY-SA 3.0 via Wikipedia – https://en.wikipedia.org/wiki/File:Hilbert_curve.svg#/media/File:Hilbert_curve.svg

These are what we might call **Hilbert curves**, and they are pleasant and useful objects. Computer scientists sometimes use them to store data in a two-dimensional array in a non-obvious way, and they are also used in image processing. Notice that at this point all these curves are purely rational objects. No real number shenanigans are necessary to either define or construct them. Peano and Hilbert made a real contribution to mathematics in introducing these finite curves!
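Indeed, the finite curves U_n are concrete, computable objects. Here is a minimal sketch (using a standard index-to-coordinate conversion, not taken from Hilbert's paper) that lists the centres of the sub-squares visited by U_n on a 2^n by 2^n grid:

```python
def hilbert_points(order):
    # Centres of the sub-squares visited by the curve U_order,
    # as (x, y) grid coordinates on a 2^order x 2^order grid.
    def d2xy(n, d):
        # Convert position d along the curve to grid coordinates.
        x = y = 0
        s = 1
        t = d
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:  # reflect and swap to rotate the quadrant
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    n = 2 ** order
    return [d2xy(n, d) for d in range(n * n)]

# U_1 is the single U shape: [(0, 0), (0, 1), (1, 1), (1, 0)].
```

Each U_n visits every sub-square exactly once, and consecutive points are always adjacent, which is exactly what makes these finite curves useful for data storage and image processing.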

And now we get to the critical point, where Hilbert, following Peano and ultimately Cantor, went *beyond the bounds of reasonableness*. He postulated that we could carry on this inductive process of producing ever more and more refined and convoluted curves *to infinity*. Once we have arrived at this infinity, we are supposedly in possession of a “curve” U_{infinity} with remarkable (read unbelievable) properties. [Naturally all of this requires the usual belief system of “real numbers”, which I suppose you know by now is a chimera.]

The “infinite Hilbert curve” U_{infinity} is supposedly continuous, but differentiable nowhere. It supposedly passes through every point of the interior of the square. By this, we mean that every point [x,y], where x and y are “real numbers”, is on this curve somewhere. Supposedly the curve U_{infinity} is “parametrized” by a “real number”, say t in the interval [0,1]. So given a real number such as

t=0.52897750910859340798571569247120345759873492374566519237492742938775…

we get a point U_{infinity}(t)=

[0.68909814147239785423401979874234…,0.36799574952335879124312358098423435…]

in the unit square [0,1] x [0,1].

(Legal Disclaimer: these real numbers are for illustration purposes only and do not necessarily correspond to reality in any fashion whatsoever. In particular we make no comment on the meaning of the three dot sequences that appear. Perhaps there are oracles or slimy galactic super-octopuses responsible for their generation, perhaps computer programs. You may interpret as you like.)

The infinite Hilbert curve U_{infinity} cannot be drawn. Its “construction” amounts to an imaginary thought process akin to an uncountably infinite army of pointillist painters, each spending an eternity creating their own individual minute point contributions as infinite limits of sequences of rational dots. Unlike those actual, computable and constructible curves U_n, the fantasy curve U_{infinity} has no practical application. How could it, since it does not exist?

Or we could just apply the (surely by now well-known) *Law of (Logical) Honesty*, formulated on this blog last year, which states:

Don’t pretend that you can do something that you can’t.

While you are free to create curves U_n even for very large n if you have the patience, resources and time, it is both logically and morally wrong to assert that you can continue to do this for all natural numbers, with a legitimate mathematical curve as the end product. It is just not true! You cannot do this. Stop pretending, analysts!

But in modern pure mathematics, we believe everything we are told. Sure, let’s “go to infinity”, even if what we get is obvious nonsense.

These two definitions illustrate an important distinction which ought to be more widely appreciated: that some definitions bring into being a *new concept*, while others merely package conveniently and concisely *what we already know*.

Each of us from an early age understands what a dog is, by having many of them pointed out to us. We learn from experience that there are many different types of dog, but they mostly all have some common characteristics that generally separate them from, say, other animals, typically cats. The definition of a dog given above is only a summary, short and sweet, of familiar properties of the animal.

Most of us know what an asset is. But the adjective “encumbered”, when applied to assets, is not one that is familiar to us. At some point in the history of finance someone thought up this particular concept and needed a word for it. How about *encumbered*? This might have been one of several terms proposed—borrowing a word from English with a related but different meaning, and giving it here a precise new meaning.

Let’s give a name to this distinction that I am trying to draw here. Let’s say that a definition that summarizes more concisely, or accurately, something that we already know is a **rhetorical definition**. Let’s also say that a definition that creates a new kind of object or concept by bringing together previously unconnected properties is a **conceptual definition**.

If I ask you what **love** is, you will draw upon your experience with life and the human condition, and give me a list of characteristics that capture love in your view. Almost everyone would have an opinion on the worth of your definition, because we all have prior ideas about what love is, and will judge whether your definition properly captures what we already know from our a priori experience. This kind of definition is largely rhetorical.

If I ask you what a **perfect number** is, and you are a good mathematics student, you will tell me that it is a natural number which is equal to the sum of those of its divisors which are less than itself. So 6 is a perfect number since 6=1+2+3, and 28 is a perfect number since 28=1+2+4+7+14. This is not the usual colloquial meaning of *perfect*: we are just hijacking this word to bring into focus a formerly unconsidered notion (this was done by the ancient Greeks in this case). This is a conceptual definition.
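A conceptual definition like this is precise enough to compute with directly, which a short sketch makes plain:

```python
def is_perfect(n):
    # A natural number is perfect when it equals the sum of its
    # divisors that are less than itself.
    return n > 0 and n == sum(d for d in range(1, n) if n % d == 0)

# 6 = 1 + 2 + 3 and 28 = 1 + 2 + 4 + 7 + 14 are perfect; the next
# perfect number after 28 is 496.
```

No prior acquaintance with the idea is needed to apply the test: the definition alone determines the answer for every natural number.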

In mathematics, we *prefer conceptual definitions to rhetorical ones*. When we define a concept, we want our statement of that concept to be so clear and precise that it invokes the same notion to all who hear it, even those who are unfamiliar with the idea in question. Prior experience is not required to understand conceptual definitions, except to the extent of having mastered the various technical terms involved as constituents of the definition.

We do not want a situation where, in order to properly understand a term, someone needs some, perhaps implicit, prior understanding of that very term. If I tell you that a **number** officially is something used for counting or measurement, you are probably not happy. While this kind of loose description is fine for everyday usage, it is not adequate in mathematics. Such a rhetorical definition is ambiguous, because it draws upon your prior loose experience with counting and measuring, and different people could draw the boundaries of the definition differently. In mathematics we want to create fences around our concepts; our definitions ought to be precise, visible and unchanging.

If I tell you that a function is **continuous** if it varies without abrupt gaps or fractures, then you recognize that I am not stepping up to the plate, mathematically speaking. This is a rhetorical definition: it relies on some prior understanding of notions that are loosely intertwined with the very concept we are attempting to frame.

And now we come to the painful reality: **modern mathematics is full of rhetorical definitions.** Of concepts such as: *number, function, variable, set, sequence, real number, formula, statement, topological space, continuity, variety, manifold, group, field, ring, *and* category*. These notions in modern mathematics rest on definitions that are mostly unintelligible to the uninitiated. These definitions implicitly assume familiarity with the topic in question.

The standard treatment in undergraduate courses shows you lots of examples first. Then, after enough of these have been digested, you get a “definition” that captures enough aspects of the concept that we feel it characterises the examples we have come to learn. The cumulative effect is that you have been led to believe you “know” what the concept is, but the reality is something else. This becomes clear quickly when you are presented with non-standard examples that fall outside the comfortable bounds of the textbooks.

This is a big barrier to the dissemination of mathematical knowledge. While modern books and articles give the appearance of precision and completeness, this is often a ruse: implicitly the reader is assumed to have gained some experience with the topic from another source. There is a big difference between a layout of a topic and a summary of that topic. An excellent example is the treatment of real numbers in introductory Calculus or Analysis texts. Have a look at how cavalierly these books just quickly gloss over the “definition”, essentially assuming that you already know what real numbers supposedly are. Didn’t you learn that way back in high school?

Understanding the rhetorical aspects of fundamental concepts in pure mathematics goes a long way to explaining why the subject is beset with logical problems. Sigh. I guess I have some work to do explaining this. But you can do some of it yourself by opening a textbook and looking up one of these terms. Ask yourself: without any examples, pictures or further explanations, does this definition stand up on its own two legs? If so, then it can claim to be a logical conceptual definition. Otherwise it is more likely a dubious rhetorical definition.


That is, given a natural number n, there is a sequence of prime numbers of the form p+mk, k=0,1,…,n-1 where p and m are natural numbers. For example 5, 11, 17, 23, 29 is a sequence of 5 primes in arithmetical progression with difference m=6, while 199, 409, 619, 829, 1039, 1249, 1459, 1669, 1879, 2089 is a sequence of 10 primes in arithmetical progression, with difference m=210.

Up to now, the longest sequence of primes in arithmetical progression that I know about was found in 2010 by Benoît Perichon: it is

43,142,746,595,714,191 + (23,681,770)(223,092,870) *k*, for *k* = 0 to 25.
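The progressions quoted above, including Perichon’s record, can be verified directly; here is a sketch in Python using a deterministic Miller–Rabin primality test (this witness set is known to be deterministic for all n below roughly 3.3 × 10^24, comfortably covering these 17-digit terms):

```python
def is_prime(n):
    """Miller-Rabin test; deterministic for n < 3.3 * 10**24
    with this witness set."""
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for p in witnesses:
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd:
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

# The 5-term and 10-term progressions quoted above:
assert all(is_prime(5 + 6 * k) for k in range(5))
assert all(is_prime(199 + 210 * k) for k in range(10))

# Perichon's 26-term progression:
assert all(is_prime(43142746595714191 + 23681770 * 223092870 * k)
           for k in range(26))
print("all three progressions consist of primes")
```

Of course, checking any particular finite progression like this is entirely concrete and uncontroversial; it is the leap to *arbitrarily long* progressions that is at issue below.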

The proof of Green and Tao is clearly a tour-de-force of modern analysis and number theory. It relies on a result called Szemerédi’s theorem along with other results and techniques from analytic number theory, combinatorics, harmonic analysis and ergodic theory. Measure theory naturally plays an important role.

Both Green and Tao are brilliant mathematicians, and Terence Tao is a Fields Medal winner. Terence is also originally Australian, and spent half a year at UNSW some time ago, where I had the pleasure of having some interesting chats over coffee with him.

Is the Green-Tao theorem true? This is actually quite an interesting question. The official proof was published in *Annals of Math.* 167 (2008), 481-547, and has been intensively studied by dozens of experts. No serious problems with the argument have been found, and it is now acknowledged that the result is firmly established. By the experts.

But is the Green-Tao theorem true? That depends not only on whether the arguments hang together logically when viewed from the top down, but also crucially on whether the underlying assumptions of the theories in which those arguments take place are correct. It is here that one must accept that problems might arise.

So I am not suggesting that any particular argument of the Green-Tao paper is faulty. But there is the more unpleasant possibility that the whole edifice of modern analysis on which it depends is logically compromised, and that this theorem is but one of hundreds in the modern literature that actually don’t work logically if one descends right down to the fundamental level.

Let me state my position, which is rather a minority view. I don’t believe in real numbers. The current definitions of real numbers are logically invalid in my opinion. I am of the view that the arithmetic of such real numbers also has not been established properly.

I do not believe that the concept of a set has been established, and so consequently for me any discussion involving infinite sets is logically compromised. I do not accept that there is a completed set of natural numbers. I find fault with analysts’ invocation of limits, as often this brazenly assumes that one is able to perform an infinite number of operations, which I deny. I don’t believe that transcendental functions currently have a correct formulation, and I reject modern topology’s reliance on infinite sets of infinite sets to establish continuity. I believe that analysts are not being upfront in their approach to Wittgenstein’s distinction between choice and algorithms when they discuss infinite processes.

Consequently I find most of the theorems of Measure Theory meaningless. The usual arguments that fill the analysis journals are to me but possible precursors to a more rigorous analysis that may or may not be established some time in the future.

Clearly I have big problems.

But as a logical consequence of my position, I cannot accept the argument of the Green-Tao theorem, because I do not share the belief in the underlying assumptions that modern experts in analysis have.

But there is another reason why I do not accept the Green-Tao theorem, that does not depend on a critical analysis of their proof. I do not accept the Green-Tao theorem because *I am sure that it is not true*. I do not believe that there are arbitrarily long arithmetical progressions of prime numbers.

Let me be more specific. Consider the number z=10^10^10^10^10^10^10^10^10^10+23 that appeared in my debate last year with James Franklin called *Infinity: does it exist?*

My Claim: There is no arithmetical progression of primes of length z.

This claim is to be distinguished from the argument that such a progression exists, but it would be just too hard for us to find it. My position is not based on what our computers can or cannot do. Rather, I assert that there is no such progression of prime numbers. Never was, never will be.

I do not have a proof of this claim, but I have a very good argument for it. I am more than 99.99% sure that this argument is correct. For me, the Green-Tao argument, powerful and impressive though it is, would be better rephrased in a more limited and precise way.

I do not doubt that with some considerable additional work, they, or others, might be able to reframe the statement and argument to be independent of all infinite considerations, real number musings, and dubious measure theoretic arguments. Then some true bounds on the extent and validity of the result might be established. That would be a lot of effort, but it might then be logically correct, from the ground up.

Modern measure theory studies, for example, the interval [0,1] of so-called real numbers. There are quite a lot of different ways of trying to conjure these real numbers into existence, and I have discussed some of these at length in many of my YouTube videos and also here in this blog: Dedekind cuts, Cauchy sequences of rationals, continued fractions, infinite decimals, or just via some axiomatic wishful thinking. In this list, and in what follows, I will suppress my natural inclination to put all dubious concepts in quotes. So don’t believe for a second that I buy most of the notions I am now going to talk about.

Measure theory texts are remarkably casual about defining and constructing the real numbers. Let’s just assume that they are there, shall we? Once we have the real numbers, measure theory asserts that it is meaningful to consider various infinite subsets of them, and to assign numbers that measure the extent of these various subsets, or at least some of them. The numbers that are assigned are also typically real numbers. The starting point of all this is familiar and reasonable: that a rational interval [a,b], where a,b are rational numbers and a is less than or equal to b, ought to have measure (b-a).
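That primary-school starting point, at least, supports completely exact computation; here is a minimal sketch for finite disjoint unions of rational intervals, using exact rational arithmetic (the function name `measure` is mine):

```python
from fractions import Fraction as F

def measure(intervals):
    """Total length of a list of pairwise disjoint rational intervals [a, b]."""
    return sum((b - a for a, b in intervals), F(0))

# The measure of [1/3, 1/2] is 1/2 - 1/3 = 1/6:
print(measure([(F(1, 3), F(1, 2))]))                 # 1/6

# Additivity on disjoint pieces: [0, 1/4] and [1/2, 1] together measure 3/4:
print(measure([(F(0), F(1, 4)), (F(1, 2), F(1))]))   # 3/4
```

Everything here stays within rational arithmetic; no appeal to real numbers or infinite sets is needed.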

So measure theory is an elaborate scheme that attempts to extend this simple primary school intuition to the rather more convoluted, and logically problematic, arena of real numbers and their subsets. And it wants to do this without addressing, or even acknowledging, any of the serious logical problems that people (like me) have been pointing out for quite a long time.

If you open a book on modern measure theory, you will find a long chain of definitions and theorems: so-called. But what you will not find, along with a thorough discussion of the logical problems, is a *wide range of illustrative examples*. This is a theory that floats freely above the unpleasant constraint of exhibiting concrete examples.

Your typical student is of course not happy with this situation: how can she verify independently that the ideas actually have some tangible meaning? Young people are obliged to accept the theories they learn as undergraduates on the terms they are given, and as usual appeals to authority play a big role. And when they turn to the internet, as they do these days, they often find the same assumptions and lack of interest in specific examples and concrete computations.

Here, to illustrate, is the Example section of the Wikipedia entry on Measure, which is what you get when you search for Measure Theory (from Wikipedia at https://en.wikipedia.org/wiki/Measure_(mathematics) ):

**Examples**

___________________________

Some important measures are listed here.

- The counting measure is defined by *μ*(*S*) = number of elements in *S*.
- The Lebesgue measure on **R** is a complete translation-invariant measure on a *σ*-algebra containing the intervals in **R** such that *μ*([0, 1]) = 1; and every other measure with these properties extends Lebesgue measure.
- Circular angle measure is invariant under rotation, and hyperbolic angle measure is invariant under squeeze mapping.
- The Haar measure for a locally compact topological group is a generalization of the Lebesgue measure (and also of counting measure and circular angle measure) and has similar uniqueness properties.
- The Hausdorff measure is a generalization of the Lebesgue measure to sets with non-integer dimension, in particular, fractal sets.
- Every probability space gives rise to a measure which takes the value 1 on the whole space (and therefore takes all its values in the unit interval [0, 1]). Such a measure is called a *probability measure*. See probability axioms.
- The Dirac measure δ_{a} (cf. Dirac delta function) is given by δ_{a}(*S*) = χ_{S}(a), where χ_{S} is the characteristic function of *S*. The measure of a set is 1 if it contains the point *a* and 0 otherwise.

Other ‘named’ measures used in various theories include: Borel measure, Jordan measure, ergodic measure, Euler measure, Gaussian measure, Baire measure, Radon measure, Young measure, and strong measure zero.

In physics an example of a measure is spatial distribution of mass (see e.g. gravity potential), or another non-negative extensive property, conserved (see conservation law for a list of these) or not. Negative values lead to signed measures, see “generalizations” below.

Liouville measure, known also as the natural volume form on a symplectic manifold, is useful in classical statistical and Hamiltonian mechanics.

Gibbs measure is widely used in statistical mechanics, often under the name canonical ensemble.

_______________________________

(Back to the regular channel) Now one of the serious problems with theories which float independent of examples is that it becomes harder to tell if we have overstepped logical bounds. This is a problem with many theories based on real numbers.
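To be fair, a couple of the measures in that list do admit completely finite, concrete computation when restricted to finite sets; here is a minimal sketch (the function names are mine):

```python
def counting_measure(S):
    """mu(S) = the number of elements of the finite set S."""
    return len(S)

def dirac_measure(a, S):
    """delta_a(S) = 1 if the set S contains the point a, else 0."""
    return 1 if a in S else 0

S = {2, 3, 5, 7}
print(counting_measure(S))    # 4
print(dirac_measure(5, S))    # 1
print(dirac_measure(4, S))    # 0
```

Notice that these tangible examples live entirely in the finite world; the difficulties begin once the sets being measured are claimed to be infinite.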

Here is a key illustration: modern measure theory asserts that the real numbers with which it is preoccupied actually fall into two types: the *computable* ones, and the *uncomputable* ones. Computable ones include rational numbers, and all irrational numbers that (supposedly) arise as algebraic numbers (solutions of polynomial equations), definite integrals, infinite sums, infinite products, values of transcendental functions; and in fact any number specified by some kind of computer program.

These include sqrt(2), ln 10, pi, e, sqrt(3+sqrt(5)), Euler’s constant gamma, values of the zeta function, gamma function, etc. etc. Every number that you will ever meet concretely in a mathematics course is a computable number. Any kind of decimal that is conjured up by some pattern, say 0.1101001000100001000001…, or even by some rule such as 0.a_1 a_2 a_3 … where a_i is 1 unless i is an odd perfect number, in which case a_i=2, is a computable number.
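The first of those pattern decimals is computable in the most direct sense: a short program emits any desired number of its digits. A sketch, assuming the visible pattern continues (the 1s sit at positions 1 + k(k+1)/2 for k = 0, 1, 2, …):

```python
def pattern_digits(n):
    """First n digits after the point of 0.1101001000100001...,
    where the k-th 1 (k = 0, 1, 2, ...) sits at position 1 + k*(k+1)//2."""
    ones = set()
    k = 0
    while 1 + k * (k + 1) // 2 <= n:
        ones.add(1 + k * (k + 1) // 2)
        k += 1
    return "".join("1" if i in ones else "0" for i in range(1, n + 1))

print(pattern_digits(22))   # 1101001000100001000001
```

Such a program is exactly what "computable" means: a finite rule that produces as much of the decimal as anyone cares to ask for.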

And what is then an uncomputable real number?? Hmm.. let’s just say this rather quickly and then move on to something more interesting, okay? Right: *an uncomputable real number is just a real number that is not computable*.

Uhh.. such as…? Sorry, but there are no known examples. It is impossible to write down any such uncomputable number in a concrete fashion. And what do these uncomputable numbers do for us? Well, the short answer is: nothing. They are not used in practical applications, and even theoretically, they don’t gain us anything. But they are there, my friends—oh yes, they are there — because the measure theory texts tell us they are!

And the measure theory texts tell us even more: that the uncomputable real numbers in fact swamp the computable ones *measure-theoretically.* In the interval [0,1], the computable numbers have measure zero, while the uncomputable numbers have measure one.

Yes, you heard correctly, this is a bona-fide theorem of modern measure theory: *the computable numbers in [0,1] have measure zero, while the uncomputable numbers in [0,1] have measure one!*
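For reference, the standard textbook argument behind that theorem (granting the very notions under dispute here) is a covering argument: the computable numbers are countable, since each is specified by a finite program, so list them as x_1, x_2, x_3, … and, for any ε > 0, cover them by intervals of rapidly shrinking length:

```latex
x_n \in \left( x_n - \frac{\varepsilon}{2^{n+1}},\; x_n + \frac{\varepsilon}{2^{n+1}} \right),
\qquad
\text{total length} \;\le\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^{n}} \;=\; \varepsilon .
```

Since ε is arbitrary, the computable numbers in [0,1] are assigned measure zero, and their complement measure one. Note that this argument openly invokes a completed infinite listing and an infinite summation, which is precisely the kind of move at issue in this post.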

Oh, sure. So according to modern probability theory, which is based on measure theory, the probability of picking a random real number in [0,1] and getting a computable one is zero. Yet no measure theorist can give us even one example of a single uncomputable real number.

This is modern pure mathematics going beyond parody. Future generations are going to shake their heads in disbelief that we happily swallowed this kind of thing without even a trace of resistance, or at least skepticism.

But this is 2016, and the start of a New Year! I hope you will join me in an exciting venture to expose some of the many logical blemishes of modern pure mathematics, and to propose some much better alternatives — theories that actually make sense. Tell your friends, spread the word, and let’s not be afraid of *thinking differently*. Happy New Year.
