Let H be a load of hogwash. By which we mean, of course, that H is an unbounded category of fuzzy schemas, expressed in the first order language of obfuscation with only countably many incompleted disjunctions.
Now take the space L of all cohomological Aleph one completions of H, partially ordered by increasing complexity—the de facto mathematical convention of the early twenty-first century, but we spell it out for grad students—and consider the set N of all normalized functors from L to its contragredient.
The model space of N clearly has an adelic inductive boundary, which we denote by N_infinity. Let M be the infinite unstable tensor product of Aleph squared many copies of N_infinity, and take G to be the stable homotopy group of the measure zero projection of the affine homological dual of the K-theory retract of M upon its enveloping quantum C*-algebra.
While there are many fascinating questions arising from the inverse scattering problem of the functorial pair (L,G), we are naturally interested in considering the projective Hom groups of M into the space of all transcendental harmonic twistings of G mod its radical.
Assuming the Axiom of Unrestricted Freedom with NP dominance, the associated cardinality of all semi-stable injections of H into the perverse sheaf of pseudo-differential connections of the cotangent bundle T(L,G) ought to be wildly inaccessible, making the whole subject a bonanza for further investigations and grant applications. Which of course goes to show yet again that ZFC is indeed finger-licking good.
Just some thoughts I had the other day, which I thought I might share with you.
A little “extra” intense today?
Just kidding. Keep up the videos and blogging!
Guess I was feeling a bit combative this morning 🙂
I understand. And I happen to sympathize with a great deal of what you’ve explained in your videos. I’ve done a few laps around real analysis, ZF+C set theory, measure theory and topology, and still I have a handful of key questions after watching the “Dedekind cuts and computational difficulties with real numbers” video. Would it be possible to contact you to address these? Or would you prefer I just keep watching the videos?
If your questions are concise, and relate to a particular video, I suggest you pose the question in a comment there, so others can view it too. Or you could even pose the question(s) here if you like.
On the topic of various non-specific subsets of H, I was starting to get a picture of what you have been saying about angles and how they complicate things. It never occurred to me before, for instance, that the x and y on the circle are our “cosine” and “sine”. For rational t, we get rational sine and cosine. Who knew? It’s when we try to break up the circle into “equal increments of rotation”, i.e., angles, that, in most cases, we fail to get rational t. And we attempt to process the theta value as an “arc length”, which introduces pi. But at its simplest, spread, defined via quadrance, rationally expresses the value we seek, while sine is really its square root. In a sense, then, the infinite algorithms used to “find” sines when there is no rational t must connect to the infinite algorithms used to express “continuous” square roots of generally non-square numbers.
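The rational-parametrization point above can be checked with exact arithmetic. A minimal sketch in Python, using the standard parametrization x = (1 − t²)/(1 + t²), y = 2t/(1 + t²) (the function name `circle_point` is my own, for illustration):

```python
from fractions import Fraction

def circle_point(t: Fraction):
    """Rational point on the unit circle for rational parameter t:
    x = (1 - t^2)/(1 + t^2),  y = 2t/(1 + t^2)."""
    d = 1 + t * t
    return (1 - t * t) / d, 2 * t / d

t = Fraction(1, 2)
x, y = circle_point(t)
assert x * x + y * y == 1   # exactly on the circle, no rounding
# The spread (in rational trigonometry) is y^2: rational, no square roots.
spread = y * y
print(x, y, spread)  # 3/5 4/5 16/25
```

Rational t gives an exactly rational (x, y), whereas dividing the circle into equal angles forces irrational coordinates in almost all cases; the spread y² stays rational precisely where sine itself does not.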
It occurs to me that maybe circles are really fairly simple objects. It’s certain questions we ask about them that cannot be answered so simply: questions like “how do we find the ratio of the radius to the circumference?” or “how do we locate points [approximately] on the circle at equal increments of rotation about the center?” The latter is usually presented as trivial. It appears so when we cut a pie into 8 slices, or examine a protractor that someone else engineered for us and take it for granted. When we try to cut the pie into 7 slices, we might begin to observe the problem!
A very insightful and lovely comment. I think you are right on the money.
I think you have said it very well here. It is a sizable mental adjustment to go from the inherently vague parametrization involving cos and sin to the more accurate and logical rational parametrization, because it teaches us about our limitations too. Just because we can string words together does not mean that the mathematics reflects those grammatical constructions.
Is this a pun (or a fun) on univalence homotopy type theory?
Can I ask for your thoughts? Do you think we still need to do Foundations of Mathematics, i.e., the Hilbert program of reconstructing maths from the ground up? It seems to me maths has been happily flying along for centuries without it.
Constructing proper foundations is far and away the most important problem in modern pure mathematics. Have you checked out my MathFoundations series at my YouTube channel Insights into Mathematics?
LOL! “We guys” consider the same in… software engineering. C.A.R. Hoare, come to our rescue and help us! 😉
I really like your courage in calling out mathematicians for being cavalier with respect to rigor. They pay lip service to it, but in their hearts don’t believe that it is really possible to be rigorous without being utterly boring. It’s sad that young students believe themselves to be more rigorous than Euclid, just because he is sometimes non-committal, while they themselves never even try to be explicit about all details. However, book authors sometimes still know what rigor really means, and can be explicit about the details. Even the HoTT book might fit into this category of books which try to be rigorous.
Those are good points. We need more examples of people being less ambitious content-wise and more ambitious rigour-wise!
Mathematicians are generally great when it comes to logical rigor (ensuring that theorems follow from their definitions and assumptions), but make a mockery of semantic rigor (ensuring that fundamental terms are clearly defined so that everyone understands the same thing by them, and that terms are only used in well-defined contexts).
This is a game of using surface-level rigor to plaster over fundamental waffling. It is the old Platonist error of placing words as fundamental, rather than realizing that words are just communication signals to get people to visualize the same thing. In modern math, if you can string together a grammatically correct sentence with accepted terms, it is automatically assumed to mean something. “Let S be the set of all sets that do not contain themselves.” Never is it asked whether this can even be coherently interpreted; instead we immediately jump to considering the set as if the coherence of the utterance were apparent simply by its being grammatically correct.
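The incoherence being pointed at can be made explicit. Under unrestricted comprehension, the quoted sentence is not merely odd but contradictory (this is Russell’s paradox); a two-line check:

\[
S = \{\, x : x \notin x \,\} \quad\Longrightarrow\quad \bigl( S \in S \iff S \notin S \bigr),
\]

since asking whether S belongs to itself forces both answers at once. The grammatically correct phrase fails to denote anything coherent, which is exactly the point.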
Very well put! Thanks.
I found this from a link on MathOverflow:
Comment by Todd Trimble:
The Sokal hoax was meant to expose actual intellectual bankruptcy in certain academic circles, where nonsense dressed up in jargon could pass muster. I can’t think of examples where “abstruse” fields of mathematics are ripe for a similar kind of parodizing, since we make a point of being careful and at least somewhat rigorous, unless we’re talking about the output of outright incompetents. And I confess that I don’t understand the boxed question; what is meant by “(in another universe) make mathematical sense”? Could you give an example of what you mean?
Reply by me: @ToddTrimble “since we make a point of being careful and at least somewhat rigorous, unless we’re talking about the output of outright incompetents.” Or outright something else beginning with F, as in examples I could give of publications in otherwise respectable journals with well-known editors. Not that I could actually post them here without inviting serious repercussions, but I can be reached by email.
Write to me if you want to see some truly egregious examples of peer review failure.
Pingback: Notes on Galois Theory – Programming, Made Complicated