I assume the reason this article has been published this week in particular is that a proof of the geometric Langlands conjecture has just been announced. [1]
The first paper in the series was posted to the arXiv yesterday. [2]
It’s obviously far too soon to say whether the proof is correct, but I’m sure experts will be taking a keen interest.
Geometric Langlands, while inspired by questions Weil was interested in, actually answers somewhat different questions, which are not arithmetic in nature: hence the 'geometric' moniker. The actual Langlands program, which deals with number fields and hence with questions of an arithmetic flavor (meaning solving equations over the rational numbers rather than the real or complex numbers), is still very much unexplored in its full generality.
Edit: it appears that I might have spoken too soon. At the link below is a paper by Sam Raskin establishing some consequences for arithmetic questions, though over function fields (such as those formed by rational functions over finite fields).
An obligatory quote on the Langlands Program from a well-known category theorist: "The thing is, I’ve never succeeded in understanding the slightest thing about it."
You know, it's always kind of bothered me that we constantly run into these "it's far too soon to say whether the proof is correct" issues. They're proofs. It should be apparent whether they are correct. Otherwise, are they really proofs? As a layman who's only familiar with Hilbert's work as it pertains to Godel and CS, I'm curious what makes expressing these proofs using some sort of formal logic so difficult.
In this particular case, the proof literally hasn’t been written up yet! The papers are works in progress, only one of which has even been posted as a preprint so far.
But in general it’s reasonably common for a proof to have been written up to the best of the author’s ability, and for it still to be impossible for anyone else to understand it well enough to be confident in its correctness.
Formal logic was no help, for two reasons. First of all, the formal logical systems of the early 20th century were really a sort of existence proof, to show that a formal logical system can in principle be defined, and were too unwieldy and verbose for practical use in non-trivial situations. Secondly, it would then become necessary for the reader to check the correctness of a proof in formal logic, which is an even more difficult task than checking a well-written informal proof.
But this is changing: modern formal proof systems are designed with user-friendliness in mind, and produce proofs that can be verified by computer. It’s now starting to happen that formalisation can be used as a way out of the impasse where a proof is too complicated for anyone but the author to understand. I can think of two examples of this: Thomas Hales’s Flyspeck project [1] was the first such case, and required a monumental effort from Hales and his collaborators. Much more recently, Randall Holmes’s proof that NF is consistent was formalised [2], finally putting to rest a very old question in the foundations of logic.
Expressing proofs in formal logic is a great tool for making them more rigorous. Formal proofs are easier to check with a computer, provided that all the cases have been fleshed out and that the proof has been written in a proof-assistant language. However, those proofs are tedious to write, and a human reader has a limited capacity for handling a large number of cases. Omitting cases, and explaining informally why they can be omitted, is crucial to help the reader follow the proof. That is a step where errors can arise.
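As a toy illustration of what "written in a proof-assistant language" looks like in practice, here is a short Lean 4 sketch of my own (the statements are deliberately trivial and have nothing to do with any of the proofs discussed here); the point is only that every case must be discharged explicitly before the checker accepts the file.

```lean
-- Toy Lean 4 file: trivial statements, but nothing can be waved away as "obvious".

-- A lemma proved by appealing to an existing library fact.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n

-- A case split a human would skip still has to be spelled out, one case per bullet.
example (b : Bool) : b = true ∨ b = false := by
  cases b
  · exact Or.inr rfl   -- case b = false
  · exact Or.inl rfl   -- case b = true
```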
An example of a wrong proof caused by quickly sketching over some cases is the classic "proof" that all triangles are isosceles [1].
Here is also an example of an apparently obvious result with a non-trivial proof, where all the cases are written out formally in the proof assistant Coq [2]: the proof that if the multiplication and the square root are computed in floating-point arithmetic in base 2, then sqrt(a*a) is exactly |a|. This had been assumed without proof in some previous papers, and it is easy to understand how the authors may have missed it. Note that this result is not true in base 10.
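As a quick numerical illustration of that claim (this is just a sketch of mine, not the Coq development from [2]; the 4-digit precision and the search range are arbitrary choices):

```python
# Binary vs. decimal floating point for sqrt(a*a): spot-check the binary case,
# brute-force a counterexample in the decimal case.
import math
import random
from decimal import Decimal, getcontext

# IEEE 754 binary64: sqrt(a*a) == |a| for every value we sample
# (the cited result says this holds whenever a*a neither overflows nor underflows).
for _ in range(100_000):
    a = random.uniform(-1e150, 1e150)
    assert math.sqrt(a * a) == abs(a)

# Base 10 with 4 significant digits: search for a counterexample.
getcontext().prec = 4
for i in range(1000, 10000):           # every 4-digit significand 1.000 .. 9.999
    a = Decimal(i).scaleb(-3)          # a = i / 1000, exactly representable
    if (a * a).sqrt() != a:            # both the product and the root are rounded
        print("base-10 counterexample:", a)
        break
```

On my understanding of the rounding rules, the decimal search does find a counterexample (a value just above sqrt(10), namely 3.163, whose square rounds down to 10.00), while the binary assertions never fire.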
Finally, sometimes a non-formal proof can actually be more convincing than a formal one. For example, it was proven formally in 1957 that it is possible to turn a sphere inside out continuously, without cutting it, tearing it or creating any crease [3]. With more work, this result was later demonstrated in a video. The video proof is arguably easier to follow and more convincing for a human. The formal proof has not yet been checked by a proof assistant, although work in this direction is under way [4].
Proofs are literature. They need to go through a peer review process so that someone other than the author verifies their correctness. Nowadays, tools like Lean are often used to convert the argument into what you would call formal logic. It's nontrivial and takes time.
> As a layman who's only familiar with Hilbert's work as it pertains to Godel and CS, I'm curious what makes expressing these proofs using some sort of formal logic so difficult.
For one thing, they'd be much longer, and likely not human-readable.
Totally tangent: one of my favorite nitpicks (some others are the use of two spaces after a period, the Oxford comma, and the idea that light slows down in a medium) is that the Rosetta Stone is not trilingual: the two texts in Ancient Egyptian use different scripts - Hieroglyphs and Demotic. The former is a stilted form mimicking Middle Egyptian while the latter would be accessible to more people.
So the two are not word-for-word the same, but they are not different languages either.
This only holds if you consider written language the same as spoken language. Which I think is an interesting idea. Languages are symbolic systems of representation. Whether they map onto a spoken language, a signed language, or reality itself, aren't written languages languages in their own right?
Earnest thoughts from someone with no training in the philosophy of language or linguistics.
By the time the Rosetta Stone was written, the literary language had drifted pretty far from spoken Egyptian anyway, which was well on the way to becoming Coptic.
Thank you for clearing this up: a book I was reading attributed this to Andrew Weils, when it was actually Andre Weil, and later in the book it discussed Andrew Weils, who was really Sir Andrew Wiles - who benefited greatly from the contributions of Andre Weil. Let us keep our w(e)iles straight.
There's another Weyl. Hermann Weyl. A mathematician almost as great as Hilbert. In fact, he is the person who rigorously introduced Riemann surfaces. He is also the originator of gauge theory -- so fundamental to modern physics.
This is not my field, and my understanding is loose, so I'll type out a hot take off the top of my head and prepare to be corrected about it, but… some of the things in the article seemed to need clarification and some context for why they are relevant. Hope I get it right.
Weil is most often encountered in the field of elliptic curves, which the article alludes to by the form of the polynomial, though it mixes up references to two types of polynomials. The equivalence of finite fields and polynomials was already established by Galois, and is maybe the central theory of abstract algebra. Weil connected this to algebraic geometry through 'torsion points' on an elliptic curve (itself a specific form of polynomial defined over a (finite) field): points whose order (the number of times you add a point to itself before reaching the identity) is a prime power. Addition here is elliptic-curve addition: reflect the third intersection of the curve with the secant through the two points.

This gives a group of prime order, and to get a field, Weil defined a 'pairing' function that works like multiplication between two group elements and returns an element of an 'extended' curve belonging to a group of the same order, with the same coefficient as if the two original group coefficients had been multiplied - this is known as the bilinearity property. The extension is, I think, what the article means by 'complex', in that it doubles the degree of the field polynomial, which gives a Cartesian product of groups from the original definition, and this can be used to divide through 'torsion points' back to the field element that defines the product point.

This opens up an elliptic form of the zeta function, which was used to prove the Riemann hypothesis for elliptic curves (though it has not yet been extended to the integers) and Fermat's last theorem. The most common practical use of all this is probably pairing-based cryptography, which relies on the bilinearity and the computational asymmetry of the pairing operation: you can 'multiply' through the pairing, but you cannot efficiently reverse it.
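To make the 'addition' and 'order' above concrete, here is a tiny sketch of the chord-and-tangent group law over a small prime field (the curve, the modulus, and the sample point are arbitrary choices for illustration; they have nothing to do with Weil's papers, and the pairing itself, which is much more involved, is not implemented here):

```python
# Elliptic-curve group law over F_p for the toy curve y^2 = x^3 + A*x + B.
P_MOD, A, B = 97, 2, 3                            # arbitrary small modulus and coefficients

def ec_add(P, Q):
    """Add two curve points; None plays the role of the point at infinity (the identity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                               # P + (-P) = identity
    if P == Q:                                    # tangent slope when doubling a point
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                         # secant slope through the two points
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD             # reflect the third intersection point
    return (x3, y3)

def point_order(P):
    """Smallest n >= 1 with n*P = identity: the 'order' referred to above."""
    n, Q = 1, P
    while Q is not None:
        Q, n = ec_add(Q, P), n + 1
    return n

G = (3, 6)                                        # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
print(ec_add(G, G), point_order(G))
```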
So now you have some context of why it’s relevant and some basic terms to look up and you’ll probably find a lot more precise but convoluted language about these things to correct me on.
[1] https://people.mpim-bonn.mpg.de/gaitsgde/GLC/
[2] https://arxiv.org/abs/2405.03599