Hacker News
Math is running out of problems (medium.com/jpolak)
62 points by vouaobrasil on July 16, 2024 | 89 comments


Math is mind-bogglingly huge. We've been at it for thousands of years. The area of our knowledge is enormous. But so is its boundary. We aren't running out of problems any time soon, including "interesting" problems.

> These days, I can look at any of these journals and find at most one or two papers that are even remotely amusing, and algebra was my specialty. On the other hand, I can take a journal in biology like the Journal of Animal Behavior and still find quite a few papers in each journal that are interesting to me even though I’m not even a research biologist! Keep in mind I still like mathematics a lot, and I still enjoy algebra.

I'm sure that if you had more than a passing knowledge of animal behavior, you would also find most of the papers dull. Learning completely new things is of course always a blast. Learning about the latest bleeding-edge advances in a field where you already know a lot is not as exciting. I'm not sure what point you were trying to make there. When I read papers in physics I'm always thinking "holy shit, electrons are so cool and crazy" because I'm discovering something new in basically every paragraph. But for an expert the novelty eventually wears off.


I second this last paragraph. I want to add my pet theory that it is not a purely psychological phenomenon. It's like how we only remember the good movies (the "classics"), while the others are forgotten. When you first get started in a field, you're reading just the "classics". By the time you reach the current state of the art, the classics haven't been sorted out from the rest yet, so you get a lot of trash.


> I'm sure that if you had more than passing knowledge in animal behavior, you would also find most of the papers dull

Disagree. I read papers in biology and ecology regularly, and their relevance is much greater, or at least much easier to appreciate. (I do some science popularization now, and people care much more about recent research in biology than math graduate students care about the latest papers in neighbouring fields of math.)


The article suggests this test to establish that math is running out of interesting research: "Take a fairly generalist journal, like the Journal of Algebra (take a topic in which you have expertise — my doctoral thesis was in algebra). Look at some of the papers. How many of them are truly interesting to you?".

But it seems to me fairly likely that applying the same test to math journals from 100 or 200 years ago would produce similar results. Most published papers will not be of great interest to any particular one person.


No way. If you take a generalist algebra journal from 50 years ago, chances are many more graduate students interested in abstract algebra will be able to understand and appreciate the nature of the research. I mean, the best and most interesting papers in algebra were published between 1950 and 1990, IMO.


Is the test about ability to understand or about finding it very interesting? Of course it's easier to understand what has already become old hat and popularized in the field.


I bet salami slicing[1] explains part of it. Before publish-or-perish culture, it was more common to see more complete papers that gave a holistic view of the subject, easier to understand and appreciate. Now those papers are sliced into N least publishable units, each less interesting and harder to fit into the big picture in isolation. But hey, why have one paper when you can have five?

[1]. https://en.m.wikipedia.org/wiki/Salami_slicing_tactics


I would say both are dwindling. Few people can understand modern results and few people find them interesting.


Again, it's not clear to me that if you asked 1950s mathematicians to look at math journals in the 1950s, they would react very differently. You haven't presented any evidence of this.

It's certainly true that math is a more fragmented specialized field today than it was in the eras of Euler or Gauss, but without some more concrete evidence or objective claim, I don't know what is so bad about math journals today compared to 1950.


I will attempt a more rigorous study with citation figures and data analysis and write part two then.


For what it's worth, I also have a PhD in mathematics and I also ultimately left academia with some disappointment at the gap between what it is and my sense or fantasy of what it once was or could be.


Do tell more... does it have to do with hyperspecialization, not being able to get fluent in a large enough proportion of the field as was the case in Euler's time, say?


No, while it may have been fun to be a generalist in Euler's era, that wasn't bothersome to me. To be clear, the issues I found in academia had little to do with math specifically, and affected academia broadly. The usual issues you've likely heard about dwindling ability to make a comfortable career of it without a great deal of luck.


The 20th century was just a completely abnormal period, where people decided to change their understanding of math at the same time that physics, statistics, and engineering were demanding more and more different ideas from it. On top of that, CS was created and branched off from math.

Most of our history wasn't like that.


Is that based on the average level of maths at the time? I would argue that there are far more mathematicians now who understand the results from that period than there were then, because our mathematical literacy, especially in higher education and in the developing world, has increased significantly.

The fact that those results are easier to understand is because of our increased literacy. Trigonometry was the cutting edge of maths at one point and math literacy was even less then. Now it’s material for tweens.


it could be that the buffers are emptying: more people working on solutions are draining them faster than new problems are coming in?


I don't think so because even all the recent solved problems are rather dull.


My take on this is that each narrow field goes through an S curve with a very exciting exponential beginning followed by a long tail of slow and boring progress. Then there is a new field with a new S curve.

The current hotness is experimental and theoretical investigations of large language models. This is just maths, and the papers coming out recently have been amazing.

I’ve read papers showing things like that neurons pack information into nearly-orthogonal spaces with beautiful geometric symmetries.

Just yesterday there was a paper showing that the middle layers of a deep language model can be interchanged and still work!
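
The "nearly-orthogonal" observation is easy to demo yourself: in high dimensions, random unit vectors are almost orthogonal with overwhelming probability, which is what lets a model pack far more feature directions than it has dimensions. A quick sketch of my own (not from any of the papers mentioned; assumes numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024        # hidden dimension
n = 10_000      # many more "feature" directions than dimensions

# Draw random directions and normalize each row to unit length
v = rng.standard_normal((n, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Cosine similarity of the first vector against all the others.
# It concentrates near 0 at scale ~1/sqrt(d), so every pair is
# nearly orthogonal even though n >> d.
cos = v[1:] @ v[0]
print(abs(cos).max())  # small, despite 10,000 vectors in 1,024 dims
```

The variable names and sizes here are arbitrary choices for illustration, but the concentration effect is the standard geometric fact behind that "superposition" line of work.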


I think this is a great point. The author seems to have a selection bias. All the “great” problems in maths are the ones that have remained hard to solve.

All the stuff in the middle has been solved and then taught and is no longer “interesting”. As we build on those results, the new problems sit a bit further from the fundamentals, so you have to look in more specialized domains to find new areas.

What the author seems to forget is that most of the stuff we take for granted now was at one point the cutting edge of maths and obscure to all but the leading mathematicians of the time.


I think this note also misses that there are idiosyncratic factors related to the Journal of Algebra. This used to be a quite good generalist journal focused on algebra -- the Tits Alternative appeared there in the 70s, for example. Elsevier greatly increased the page count in the ensuing decades and it's now mostly dreck. These are papers that might be good to have in print for the sake of completeness of the literature, but nobody is going to send an actual interesting result in algebra there anymore. - An algebraist


I will say after looking at hundreds of journals and papers, it's also true of the others. -Another algebraist.


I think if you look at a new algebra paper in a good journal, it's as likely to be interesting as a random algebra paper in a good journal from 1980. (Of course neither is anywhere near 100%, there were many boring papers back then too.)

If you look at a new algebra paper in J Algebra, of course it's not going to be interesting, what do you expect.


Dude, I spent seven years doing nothing but that in hundreds of journals. I still maintain my view.


I don't think the average person from the 1600s cared much about John Napier and his treatise on logarithms published in 1614.

But we care a lot about logarithms now.

Maybe people never cared about current mathematics. Maybe that's just the pace of progress.

If most of our current problems are solved by results from 50 years ago, could it just be that our future problems will be solved by results from right now?


Maybe not the average butcher or baker, but to engineers and scientists it was incredibly important.

The slide rule was invented just a few years later based on Napier's work and was used continuously for the next 350 years, until the invention of the modern calculator/computer.

I don't think I disagree with your overall point I just think you chose the worst example :)


So average people from the 2000s care about John Napier and his treatise on logarithms? Or even just logarithms?


Ah shit, I guess not

Maybe the average 17 year old but that's it


Firstly: this can probably be neither proven nor disproven, but my intuition tells me that it's by definition impossible for mathematics to run out of problems.

Secondly:

> It cannot remain healthy with its incredible publication rate today of mostly useless generalizations.

So the issue isn't that mathematics is running out of problems. The issue is that there are more publications than there are new problems being discovered / solved, and, ergo, the majority of publications are of limited value / interest. And that isn't an issue unique to mathematics, that's just how academic research is in the 21st century!


My definition of "running out of problems" as I stated was "running out of problems that more than a handful of people care about", and I think this is definitely true in math.


I don't want to be snarky, but I do seriously wonder how many people really cared about calculus when Newton/Leibniz developed it. It honestly couldn't have been more than a handful, because Newton slept on it for the better part of twenty years.

I honestly think Math as a field has always been defined by "problems that only a handful of people care about".

The only exception I can think about is maybe basic addition and multiplication.


Yes, exactly. A whole lot of math problems were not interesting until we got relatively fast digital computers as they would have taken till the end of the universe if done by hand. Then suddenly it becomes a product you can implement in a library and perform simulations of reality on. Now suddenly a lot more people are interested in the math because improving the algorithm could lead to millions in power saving, or it could lead to far higher accuracy.

Simply put, it's very hard to predict the usefulness of math at the time it's created.


Which problems people work on is dictated to a large extent by the need to publish to keep your job. There is a lot of incentive to work on publishable low-hanging fruit problems. Hence the abundance of “write-only” journals in mathematics.

I don’t think there is by any means a shortage of hard, interesting problems. But working on them directly comes with significant career risk.



Well, a general statement about technology and invention is not exactly the same as a highly specific branch of knowledge becoming mature and not having anything innovative left to add to it.


If there's one thing that history has taught us, regardless of field, it's that anything we manage to answer raises at least half a dozen other questions. Saying that math, of all fields, is running out of problems is one of the most absurd statements in this day and age. I was blown away by the capabilities of dumb SVMs in university, and today SVMs look like child's play - just over a decade later.

For as long as humanity has existed, whenever we feel like we've reached our peak, something happens and completely shatters our understanding. Take prime numbers: declaring their occurrence completely unpredictable is like Aristotle being convinced that objects come to a rest because they get tired. It's not about a specific branch of knowledge, it's an outrageous claim, as many have already pointed out. Could it be that math is running out of problems? Yes - in the same way that the universe might vanish tomorrow. Both claims are equally absurd.


What about organic chemistry? Do you think there are a lot of open and interesting problems there?


I am sort of surprised that no one has countered with the quote often misattributed to Lord Kelvin. More interesting was the advice given to Max Planck:

> One of Jolly's students at the University of Munich was Max Planck, whom he advised in 1878 not to go into theoretical physics. Nevertheless, Planck's later work led to the discovery of quantum mechanics. Later in life Planck reported:

> As I began my university studies I asked my venerable teacher Philipp von Jolly for advice regarding the conditions and prospects of my chosen field of study. He described physics to me as a highly developed, nearly fully matured science, that through the crowning achievement of the discovery of the principle of conservation of energy it will arguably soon take its final stable form. It may yet keep going in one corner or another, scrutinizing or putting in order a jot here and a tittle there, but the system as a whole is secured, and theoretical physics is noticeably approaching its completion to the same degree as geometry did centuries ago. That was the view fifty years ago of a respected physicist at the time.

https://en.wikipedia.org/wiki/Philipp_von_Jolly

Related: https://en.wikipedia.org/wiki/Timeline_of_geometry#20th_cent...

Math is not running out of problems, just like physics didn't stop advancing (or merely become more precise measurement) in the 19th and 20th centuries. Something new and innovative might be around the corner — it might not. It might result in a new field entirely. It might not.


Math is one of the least specific fields of all…

I’m struggling to think of a field that is more general. Information theory? Oh wait, that’s a sub-field of math. Communications? Eh it’s pretty general but math has it beat handily I think.

The idea that it is “done” or “out of interesting problems” is just absurd.

I’m open to the possibility that it’s harder than it was before to find and articulate interesting problems, but that is a very different claim.


> Or take a look at any undergraduate text in mathematics. How many of them will mention recent research in mathematics from the last couple decades? I’ve never seen it. Now take an undergraduate text in biology and you’ll still find quite a few citations to modern research.

That’s because, in the natural sciences, a lot of what was considered knowledge long ago has been found out to be incorrect.

If you study Galen (https://en.wikipedia.org/wiki/Galen) or Hippocrates (https://en.wikipedia.org/wiki/Hippocrates), or Newton’s works on alchemy, you aren’t studying medicine or chemistry, but the history thereof.

On the other hand, look at the Pythagorean theorem. There has been a bit of chipping at its corners when non-Euclidean geometry was discovered/invented, but it remains true in large branches of mathematics.

And this isn’t a matter of centuries. A lot of genetics work that predates the discovery of the structure of DNA isn’t worth studying anymore.

> At what point can we still say with a straight face that it makes sense to pour millions of dollars into mathematics research when its only objective seems reaching the next highest peak of hyper-specialization?

Luckily, lots of mathematics research is fairly cheap. As Alfréd Rényi said (https://en.wikipedia.org/wiki/Alfréd_Rényi#Quotations) it runs on coffee.


> It runs on coffee

Or for Erdös, amphetamine.

Also, I find it ironic for them to gripe about alleged "hyper-specialization", given that part of the beauty of math is discovering how seemingly unrelated areas are in fact connected AND discovering generalizations that can be easily applied.

Caveat: I am nowhere close to being a mathematician.


> Or take a look at any undergraduate text in mathematics. How many of them will mention recent research in mathematics from the last couple decades?

The Einstein Tile was discovered in 2022, and that's received a decent amount of press


Well, first, few undergraduate texts mention the Einstein tile, and secondly, my point was that almost no modern results are mentioned in undergraduate texts, whereas many undergraduate biology texts mention modern results.


Probably because it was too recent a discovery to be in undergrad texts yet


My Theory of Computation text mentions a lot of unsolved problems. Some are in CS of course, but I was just editing an example about whether for all n there is a prime between n-squared and (n+1)-squared. That seems like a problem a US sophomore could appreciate.
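
That problem is Legendre's conjecture, and it's still open. Verifying small cases is itself a nice sophomore exercise; a rough Python sketch (helper names are my own, nothing standard):

```python
def is_prime(k: int) -> bool:
    # Trial division; fine for the small k used here
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def has_prime_between_squares(n: int) -> bool:
    # Legendre's conjecture asserts a prime exists in (n^2, (n+1)^2)
    return any(is_prime(k) for k in range(n * n + 1, (n + 1) ** 2))

# Holds for every n anyone has checked; proving it for all n is the open problem.
assert all(has_prime_between_squares(n) for n in range(1, 1000))
```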


That's not really a "lot" of problems though.


If you run out of (solvable!) problems in your given logic space, just start branching out your space, until you find yourself in spheres so esoteric that not even your best math co-researcher knows what's happening anymore, and vice versa.


Well, that is what mathematicians do, which is why I said "interesting" problems. I mean, I can think of several problems, like "classify projective modules over rings of global dimension 4 that have no zero divisors".

Even popular problems in Langlands like "explicitly find a trace formula for theta groups" will appeal to the fifteen people in the world that can understand what I'm even talking about.


What about tetration with arbitrary real or complex bases and exponents, or is that too fringe?

Anyway, if mathematicians indeed have less to do, perhaps they could start working on standardizing tau over pi, to make radian angles less confusing for everyone.


I'd say it's kind of a cool topic and would probably lead to some interesting recreational math but it probably won't lead to new, significant research, which means a new theoretical understanding of some abstract structure or another.


While I agree with the author broadly, he is definitely overselling his thesis. Going from 'fewer grad students now care about problems in Journal of Algebra' to 'Math is running out of problems' is quite a stretch.

Take any period of time - some subfields will run hot & others will be fallow. Doesn't mean we have run out of problems. Trace formula for theta groups will appeal to only fifteen people - ok, so what's the issue? Math isn't some popularity contest.

We have a ballroom at the university which is reserved for talks from visiting professors. When we have an economics lecture, usually it is jampacked. All 100 seats are taken, not even standing room. Then the next talk is by some topologist. The room practically empties out in real time. If you watched it live, you would be shocked at how fast people rush out of the room - you would think a stinkbomb had been thrown. Finally, nobody is left other than the topologist himself & 5 grad students, 4 of whom look like they literally jumped out of bed & grabbed a coffee mug on the way. That's math for you. That's how it's always been.


Mathematicians work on problems which are interesting to other people, usually other mathematicians. Some people don't seem to be aware of this, but math is a graph with a general trend flowing from the pure to the applied. Show me any math paper and I can tell you how it could potentially help solve real world problems.


Okay: "Integral p-adic Hodge Theory" [1]. Go ahead.

[1] https://link.springer.com/article/10.1007/s10240-019-00102-z


I am not an expert on this, so I obviously can't give a good judgement, but this is about cohomology and p-adic numbers. Whatever it is they prove, they are clearly furthering our understanding of these subjects. When you want to apply a theory, it really helps if the theory is well-developed: people know different approaches to define things, what the standard results are, which things are equivalent, etc. These kinds of theoretical results work in the background to allow people to effectively and comfortably apply their linear algebra or calculus, for example.

So now I will argue that Cohomology and p-adic numbers are interesting and useful.

Cohomology and Hodge theory are about geometry and partial differential equations. This can have applications in AI, for example, since data lies on manifolds. I saw a paper a while ago claiming that layers in neural networks fold the data manifold onto itself to reduce its topological complexity, and that this can be measured by computing some "Betti numbers", which are related to homology and thereby also cohomology. Now, is this really true or useful? I don't know, but having a mathematical theory makes it possible to even start thinking about such ideas. Also, partial differential equations have obvious applications. By the way, most theories have finite-dimensional/discrete analogues - discrete Hodge theory exists, for example - and usually when you understand something about the smooth version of a theory there are equivalents for the discrete theory. So if you have data as a graph, then you may want to investigate some discrete forms of homology/cohomology on it and may wonder how the different types are equivalent, etc.

P-adic numbers are related to modular arithmetic and therefore pretty useful just because of computers and cryptography and these things.
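
The modular-arithmetic connection is concrete: a p-adic integer is exactly a compatible sequence of residues mod p, p^2, p^3, and so on. A tiny illustration (a sketch with my own helper name, not any standard library API):

```python
def padic_valuation(n: int, p: int) -> int:
    # Largest k such that p**k divides n (n nonzero)
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

# -1 as a 5-adic integer: its residues mod 5, 25, 125, 625 each
# lift the previous one - the "compatible sequence" picture.
# (Python's % always returns a nonnegative residue here.)
print([(-1) % 5**k for k in range(1, 5)])  # [4, 24, 124, 624]

print(padic_valuation(250, 5))             # 250 = 2 * 5**3 -> 3
```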


Speaking as a math amateur, I find good problems are entirely subjective. If math isn't scratching some itch, go play in another science or do some engineering for awhile and it will come back.


My claim is that the number of people who find professional level problems interesting in any given field is only a few dozen and hence doesn't deserve public funding.


Meanwhile, the economy is investing billions of dollars into researching and using neural networks, an almost purely mathematical construct that nobody yet really understands.


They do it because people believe there is an immediate economic advantage: namely, to strangle ordinary people and concentrate all that is worth trading in the hands of the already rich.


You’re changing the subject! There is a really good problem in there.

Why does this stuff work so well, for what classes of problems, and can we say anything about the fundamental energy/accuracy limits of this approach?

I think of it as an engineering problem, but there are mathematical rephrasings of this problem, as Michel Talagrand’s work has shown.

Don’t let the greed/stampede obscure the very interesting technical principles here.


The author of the Medium article is running out of imagination!


Should have started with the last paragraphs first, and used the top paragraphs to support its argument -- of course, this approach would not be clickbait.


Well, I think my statement is not clickbait because it is just what I believe to be true. I was even at a talk where a senior mathematician said this also, that "math is running out of problems".

I could have said "You won't believe what these mathematicians are doing to obtain new problems WAAH!" -- now that would be clickbait :)


The title is far more general than what you actually mean. What's the reason for that if not click bait?


> How many of them will mention recent research in mathematics from the last couple decades

I think computer science, especially TCS, will mention recent research from the last couple of years. Technically, TCS is a branch of maths too.


If you’ve ever read Harvests and Sowings in conjunction with Pursuing Stacks then you’re already aware of this problem.

Possibly the greatest intellectual troll of all time. Rip to a real one. Miss you Grothy baby.


> To see this, here’s an exercise you can do yourself, if you have any training at all in advanced mathematics. Take a fairly generalist journal, like the Journal of Algebra (take a topic in which you have expertise — my doctoral thesis was in algebra). Look at some of the papers. How many of them are truly interesting to you?

> These days, I can look at any of these journals and find at most one or two papers that are even remotely amusing, and algebra was my specialty. On the other hand, I can take a journal in biology like the Journal of Animal Behavior and still find quite a few papers in each journal that are interesting to me even though I’m not even a research biologist! Keep in mind I still like mathematics a lot, and I still enjoy algebra.

Can't you also say this directly disproves his point? It might be that there are so many open interesting problems that we can be highly picky about which problems get solved, to the point that these preferences are shared between fewer people. That would indicate an expansive set of problems rather than an exhausted one.


Math is an art form? On art/science demarcation I'd definitely rate math a science. I'm unsure what they're going for in that statement.


I'm nowhere near capable of solving them, but my understanding is that we still have a sea of uncharted territory when it comes to research here. It has become apparent now more than ever that what seems alien or niche today might become seminal in the future.

Long story short - we just need the link between theory and real life. You will find plenty of problems, interesting ones even (at least to someone).


As long as people are getting paid to be mathematicians I'm sure this will never be the case.


That is true, and that was part of my point.


Might I suggest adding Mo Money


We now go live to our expert on running out of problems https://en.m.wikipedia.org/wiki/Georg_Cantor


"Math is running out of problems you can get famous for solving" FTFY.


I would replace "famous" with "known beyond thirty researchers".


I routinely attend conferences where way more than thirty people take several hours out of their days to listen to the new problems and their solutions. In a single field of math. In my little corner of the world. With people who managed to free themselves that week and find travel money.


one of the funniest titles.


t. Kelvin


This is one of the most obviously untrue blog posts I've read on medium which is some sort of achievement I suppose.

The null hypothesis has to be that the number of interesting and important open research problems in mathematics is expanding without limit. If the author thinks that's not the case it's up to them to actually justify their position rather than just blandly state it with a "No true Scotsman" addition that the number of problems that are interesting to "a fair number of people" is diminishing on the basis that they find "The Journal of Algebra" to contain things that are not interesting to them.

Most mathematicians I know seem acutely aware that the field of mathematics as a serious intellectual endeavour is over 3000 years old at this point, and are therefore aware of its maturity as a field.


Well, it is difficult to obtain data on this because it would require a survey of mathematicians, or a rigorous study of citation figures but maybe I'll do it if I have time.

Incidentally, I am a mathematician, or at least was a practicing one for some years and have published in the field...and I can say that from personal experience, math is exceptionally fragmented.

Yes, you point out a valid criticism but I don't really have the time to collect data on this.

Edit: I also wonder if the immediate denial of my statement is due to the emotional attachment that some people have to the purity and beauty of mathematics. I also think mathematics is beautiful, but that doesn't mean it doesn't have problems.


> Yes, you point out a valid criticism but I don't really have the time to collect data on this.

You did have time to make a blog post, though. Apparently, it's more important to you to broadcast your opinion than to make sure it is correct?


> You did have time to make a blog post, though. Apparently, it's more important to you to broadcast your opinion than to make sure it is correct?

Well, I know it's true from experience as a researcher.


And I know it's false from experience as a researcher. What now?


You don't believe him and that's it.

It's a blog post and not a published research paper for a reason. And reality also exists outside the scope of what's published in journals.


I guess nothing. I'm trying to convince people of one thing, you can do another.


While I also disagree with the post, my take was that it is more a critique of the increasing hyperspecialization of mathematical problems. Of course the number of mathematical problems is endless, but I also feel that they are becoming more and more inaccessible.

In my current university (and others), undergraduate math students do not even have to write a thesis anymore because of the number of people who reached the end without any new results to show. Instead, to obtain their degree they are tasked with things like rewriting existing papers into chapters for lecture notes (what the author suggests). For graduate students, theses are still a thing, but the situation is not much better. Talking to my peers, I got the impression that many (but not all, of course) in their theses are either generalizing something they themselves already thought was too general, or are overanalyzing an obscure problem using tools that can only be found in a maze of unreadable papers.

I'm not saying that papers should be easy, but if even most graduate students can't read the research material there is probably an underlying problem.


I'm not sure if that makes it obviously untrue or obviously not backed up by something solid


I will do some serious data analysis on this topic and get back here with part two when I have time.


I think you missed the point.


I thought math was obliterated at the foundations by Gödel's incompleteness theorems. Did they fix that?


There being some limits and some unprovable statements doesn't mean the whole domain is doomed or broken, or isn't huge, or even infinite.

Throwing away math because of those limits would be throwing the baby out with the bathwater.

What would you like to see fixed?



