1. The law of excluded middle is invalid intuitionistically. [Brouwer; Heyting]
2. The axiom of choice is a theorem of intuitionistic type theory. [Martin-Löf, 1984, p. 50]
3. The axiom of choice implies the law of excluded middle in the internal logic of a topos, and so in particular in any intuitionistic set theory. [Diaconescu, 1975]
We appear to have a contradiction. But this is an illusion: the ‘axioms of choice’ in (2) and (3) are actually subtly different. Indeed, the axiom of choice of (3) follows from what Martin-Löf calls the ‘extensional axiom of choice’, but not from the intensional axiom of choice of (2), which, according to Bishop, is evident ‘because a choice is implied by the very meaning of existence.’
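One way to see why (2) is a theorem: in Martin-Löf type theory the existential is a Σ-type, so a witness can always be projected out of a proof of existence. A sketch in Lean 4 (reading the existential as the dependent pair `Σ'` rather than Lean's propositional `∃`, for which this is not provable without `Classical.choice`; the theorem name is mine):

```lean
-- The intensional axiom of choice, as a theorem of dependent type theory:
-- from a proof that every x has a y with R x y, where the proof is a
-- Σ-type carrying its witness, we can simply read off a choice function.
theorem intensional_choice {A B : Type} {R : A → B → Prop}
    (h : ∀ x : A, Σ' y : B, R x y) :
    Σ' f : A → B, ∀ x : A, R x (f x) :=
  ⟨fun x => (h x).1, fun x => (h x).2⟩
```

This is exactly Bishop's point: with this reading, a choice is implied by the very meaning of existence. The extensional version of (3) additionally demands that the choice function respect setoid equality, and that extra demand is what powers Diaconescu's argument.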
Read more …
From Scott Aaronson's recent essay, Why philosophers should care about computational complexity:
While the details differ, what most formal accounts of knowledge have in common is that they treat an agent’s knowledge as closed under the application of various deduction rules like the ones above. In other words, agents are considered logically omniscient: if they know certain facts, then they also know all possible logical consequences of those facts.
Sadly and obviously, no mortal being has ever attained or even approximated this sort of omniscience (recall the Turing quote from the beginning of Section 1). So for example, I can know the rules of arithmetic without knowing Fermat’s Last Theorem, and I can know the rules of chess without knowing whether White has a forced win.
Intuitively, we want to say that your “knowledge” consists of various non-logical facts (“grass is green”), together with some simple consequences of those facts (“grass is not pink”), but not necessarily all the consequences, and certainly not all consequences that involve difficult mathematical reasoning. Unfortunately, as soon as we try to formalize this idea, we run into problems.
The most obvious problem is the lack of a sharp boundary between the facts you know right away, and those you “could” know, but only after significant thought. (Recall the discussion of “known primes” from Section 3.3.) A related problem is the lack of a sharp boundary between the facts you know “only if asked about them,” and those you know even if you’re not asked.
Of course, the same is obviously true of belief systems. If one holds beliefs which can be shown to imply a logical contradiction, and if beliefs were deductively closed (and logic classical, or at least intuitionistic), then one would believe anything and everything, by the principle of ex falso quodlibet. But then again we have such things as paraconsistent logic to deal with this particular problem.
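The principle in question is a one-liner in Lean 4, and, as the parenthetical notes, it needs no classical axioms:

```lean
-- Ex falso quodlibet: a contradiction entails every proposition.
-- This is valid intuitionistically; no classical axioms are used.
theorem explosion {P Q : Prop} (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```

A paraconsistent logic is precisely one in which this derivation is blocked.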
Je pourrais illustrer la deuxième approche, en gardant l’image de la noix qu’il s’agit d’ouvrir. La première parabole qui m’est venue à l’esprit tantôt, c’est qu’on plonge la noix dans un liquide émollient, de l’eau simplement pourquoi pas, de temps en temps on frotte pour qu’elle pénètre mieux, pour le reste on laisse faire le temps. La coque s’assouplit au fil des semaines et des mois – quand le temps est mûr, une pression de la main suffit, la coque s’ouvre comme celle d’un avocat mûr à point !
— Alexandre Grothendieck, Récoltes et semailles
I can illustrate the second approach with the same image of a nut to be opened. The first analogy that came to my mind is of immersing the nut in some softening liquid, and why not simply water? From time to time you rub so the liquid penetrates better, and otherwise you let time pass. The shell becomes more flexible through weeks and months—when the time is ripe, hand pressure is enough, the shell opens like a perfectly ripened avocado!
— as translated by Colin McLarty, The rising sea: Grothendieck on simplicity and generality
Some cute examples of complexity theory applied to specific problems.
- We now know that, if an alien with enormous computational powers came to Earth, it could prove to us whether White or Black has the winning strategy in chess. To be convinced of the proof, we would not have to trust the alien or its exotic technology, and we would not have to spend billions of years analyzing one move sequence after another. We’d simply have to engage in a short conversation with the alien about the sums of certain polynomials over finite fields.
- There’s a finite (and not unimaginably-large) set of boxes, such that if we knew how to pack those boxes into the trunk of your car, then we’d also know a proof of the Riemann Hypothesis. Indeed, every formal proof of the Riemann Hypothesis with at most (say) a million symbols corresponds to some way of packing the boxes into your trunk, and vice versa. Furthermore, a list of the boxes and their dimensions can be feasibly written down.
- Supposing you do prove the Riemann Hypothesis, it’s possible to convince someone of that fact, without revealing anything other than the fact that you proved it. It’s also possible to write the proof down in such a way that someone else could verify it, with very high confidence, having only seen 10 or 20 bits of the proof.
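The "short conversation about sums of certain polynomials over finite fields" in the first item refers to the sum-check protocol, the engine behind interactive proofs for games like chess. A minimal sketch in Python (honest prover only; `P`, the function names, and the toy polynomial are my own choices, not from any standard library):

```python
import random

P = 2**31 - 1  # prime modulus; a toy parameter, not a production choice

def lagrange_eval(ys, r):
    """Evaluate at r the unique polynomial through (0, ys[0]), (1, ys[1]), ..."""
    total = 0
    for i, yi in enumerate(ys):
        num, den = 1, 1
        for j in range(len(ys)):
            if j != i:
                num = num * (r - j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def sumcheck(g, n, deg, rng=random):
    """Sum-check protocol for H = sum of g over {0,1}^n, with an honest prover.

    g   : function from a list of n field elements to a field element
    deg : upper bound on g's degree in each variable
    The verifier does only O(n * deg) field operations plus ONE evaluation
    of g; only the prover ever sums over all 2^n points."""
    def prover_sum(prefix):
        # brute-force sum of g over all boolean settings of the remaining vars
        k = n - len(prefix)
        return sum(g(prefix + [(m >> j) & 1 for j in range(k)])
                   for m in range(2 ** k)) % P

    H = claim = prover_sum([])       # prover announces the total H
    prefix = []
    for _ in range(n):
        # prover sends the round polynomial via its values at 0, 1, ..., deg
        ys = [prover_sum(prefix + [t]) for t in range(deg + 1)]
        if (ys[0] + lagrange_eval(ys, 1)) % P != claim:
            return H, False          # verifier rejects
        r = rng.randrange(P)         # verifier's random challenge
        claim = lagrange_eval(ys, r)
        prefix.append(r)
    # final check: a single evaluation of g at the random point
    return H, claim == g(prefix) % P
```

For example, `sumcheck(lambda x: (x[0] * x[1] + x[2]) % P, 3, 2)` returns the correct sum 6 together with an accepting verdict. A cheating prover who announces a wrong `H` must lie about some round polynomial, and a random challenge catches the lie with overwhelming probability.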
An interesting paper from Peter Freyd, of adjoint functor theorem fame.
The title is wishful thinking; there ought to be a subject that deserves the name “algebraic real analysis.”
Herein is a possible beginning.
Not a short paper, nor easy! However, his final appendix makes for interesting reading. A quote:
I was appalled by the gap between applied mathematical experience and what we could even imagine proving. How does one integrate over all continuous functions to arrive at the expected error of a particular method?
Of course one can carve out finite dimensional vector spaces of continuous functions and compute an expected error thereon. But all continuous functions? It’s easy to prove that there is no measure—not even a finitely additive measure—on the set of all continuous functions assuming at least that we ask for even a few of the most innocuous of invariance properties. Yet experience said that there was, indeed, such a measure on the set of functions one actually encounters.
But it wasn’t just a problem in mathematics: I learned from physicists that they succeed in coming to verifiable conclusions by pretending to integrate over the set of all paths between two points. Again it is not hard to prove that no such “Feynman integral” is possible once one insists on a few invariance properties.
Even later I learned (from the work of David Mumford) about “Bayesian vision”: in this case one wants to integrate over all possible “scenes” in order to deduce the most probable interpretations of what is being seen. A scene is taken to be a function from, say, a square to shades of gray. It would be a mistake to restrict to continuous functions—sharp contour boundaries surely want to exist. Quite remarkable “robotic vision” machines had been constructed for specific purposes by judiciously cutting down to appropriate finite-dimensional vector spaces of scenes. But once again, there is no possible measure on sets of all scenes which enjoy even the simplest of invariance conditions.
Thus three examples coming with quite disparate origins—math, science, engineering—were shouting that we need a new approach to measure theory.
One line of hope arose from the observation that the non-existence proofs all require a very classical foundation. There’s the enticing possibility that a more computationally realistic setting—as offered, say, by effective topoi—could resolve the difficulties. A wonderful dream presents itself: the role of foundations in mathematics—and its applications—could undergo a transformation similar to the last two centuries’ transformation of geometry.
An amusing observation from Todd Trimble’s essay on the elementary theory of the category of sets:
My own reaction is that ZFC is perhaps way too powerful! For example, the fact that ∈ is an endo-relation makes possible the kind of feedback which can result in things like Russell’s paradox, if one is not careful. Even if one is free from the paradoxes, though, the point remains that ZFC pumps out not only all of mathematics, but all sorts of dross and weird by-products that are of no conceivable interest or relevance to mathematics. One might think, for example, that to understand a model of ZFC, we have to be able to understand which definable pairs (a, b) satisfy a ∈ b. So, in principle, we can ask ourselves such otherwise meaningless gibberish as “what in our model and implementation is the set-theoretic intersection of the real number π and Cantor space?” and expect to get a well-defined answer. When you get right down to it, the idea that everything in mathematics (like say the number e) is a “set” is just plain bizarre, and actually very far removed from the way mathematicians normally think. And yet this is how we are encouraged to think, if we are asked to take ZFC seriously as a foundations.
This has been bugging me ever since I understood that the ontology of ZFC contains sets, and only sets. A bit too barren for my taste. I expect the objects of my universe to have types other than just the one!
Although topology is usually motivated as the study of spatial structure, you can also interpret topological spaces as a particular kind of logic, and give a purely logical, non-spatial reading to a good deal of topology.
I should note that the ‘semi-decidable logic’ he describes in this post is in fact the propositional fragment of geometric logic. A model of propositional geometric logic would be precisely the frame of open sets of a locale (a ‘pointless’ topological space), or equivalently, a complete Heyting algebra.
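One way to make the "open = semi-decidable" slogan concrete is a toy Python sketch, with a fuel bound standing in for computation steps (all names here are my own; this is an illustration, not a standard library):

```python
# Toy model of "open = semi-decidable": a semi-decider is a function
# s(x, fuel) -> bool, monotone in fuel, that returns True for some fuel
# iff x has the property (and stays False forever otherwise).

def intersect(s, t):
    """Finite intersections of semi-decidable properties are semi-decidable:
    run both checks with the same fuel."""
    return lambda x, fuel: s(x, fuel) and t(x, fuel)

def union(family):
    """Arbitrary (enumerable) unions are semi-decidable by dovetailing:
    with fuel f, try members 0..f of the family, each with fuel f."""
    return lambda x, fuel: any(family(i)(x, fuel) for i in range(fuel + 1))

# Example: U_k holds of x exactly when x == k, verified once fuel >= k.
# Their union U covers every natural number: x is verified once fuel >= x.
U = union(lambda k: lambda x, fuel: x == k and fuel >= k)
```

Finite intersections plus arbitrary unions are exactly the frame axioms, which is why the semi-decidable properties organize themselves into the open sets of a locale, as noted above. Note also what fails: an infinite *intersection* would need unbounded fuel all at once, just as an infinite intersection of opens need not be open.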
To be precise: in a category with a zero object, there cannot be exponential objects once some pair of objects has at least two distinct arrows between them. (Proof below.) This shows, for example, that the set of all homomorphisms from one group to another is not “naturally” a group. On the other hand, the set of all homomorphisms from one abelian group to another is naturally an abelian group, but it is not an exponential object in the strict sense.
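The standard computation behind this fact (a sketch only):

```latex
% A zero object 0 is both initial and terminal, so 0 \cong 1.
% For any objects A, B, if the exponential B^A exists, then
\mathrm{Hom}(A, B)
  \;\cong\; \mathrm{Hom}(1 \times A, B)
  \;\cong\; \mathrm{Hom}(1, B^A)
  \;\cong\; \mathrm{Hom}(0, B^A)
  \;\cong\; \{\ast\},
% since 0 is initial. Hence any two parallel arrows A \to B coincide,
% contradicting the assumed pair of distinct arrows.
```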
Read more …