Some context: the Invariant Subspace Problem is an important open problem in functional analysis [1]. There is a MathOverflow thread discussing its significance [2].
For those not familiar with functional analysis, the problem is basically a generalization of the basic linear algebra fact that, in complex finite-dimensional vector spaces, every matrix (or, more precisely, every linear operator) has at least one eigenvector.
Cowen and Gallardo announced [3] in December that they had proved the theorem, but the proof apparently contains an error, so the problem is still open.
> For those not familiar with functional analysis, the problem is basically a generalization of the basic linear algebra fact that, in complex finite-dimensional vector spaces, every matrix (or, more precisely, every linear operator) has at least one eigenvector.
I sense some sarcasm :) Luckily these things have nice geometric interpretations; let me try to clarify, in the hope of stimulating some curiosity about the subject.
Imagine a linear transformation of 3-dimensional space; that means rotations, "flips", and scalings are allowed. For example, take a rotation: points on the axis of rotation remain on that line. Likewise, consider the plane orthogonal to the axis of rotation: points on that plane rotate, but they stay in that plane. The axis of rotation and its orthogonal plane are both invariant subspaces.
In a 3-dimensional space, and in general in every odd-dimensional space, every linear transformation has a nontrivial invariant subspace. This is not true for even-dimensional spaces, such as the 2D plane: a 45-degree rotation, for example, has no invariant subspace (except for {0} and the whole space, of course). However, if you allow the coordinates to be complex numbers instead of real ones, then there is always an invariant subspace, although it is harder to visualize.
To express this in linear algebra terms: every complex matrix has an eigenvector. This is a fundamental fact with applications pretty much everywhere in mathematics.
The Invariant Subspace Problem generalizes this statement to infinite-dimensional spaces. Infinite-dimensional spaces are infinitely messy, so we restrict our attention to spaces with some additional structure; for example, Hilbert spaces, the ones addressed in the paper, are those in which a formula very similar to the Pythagorean theorem holds.
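That Pythagorean-style property is easy to check numerically. A minimal sketch (using numpy, an assumption on my part): for orthogonal vectors u and v, the squared length of u + v equals the sum of the squared lengths.

```python
import numpy as np

# The "Pythagorean" property of an inner-product (Hilbert) space:
# if u and v are orthogonal, then ||u + v||^2 = ||u||^2 + ||v||^2.
u = np.array([3.0, 0.0, 4.0])
v = np.array([0.0, 2.0, 0.0])  # orthogonal to u: np.dot(u, v) == 0

lhs = np.linalg.norm(u + v) ** 2
rhs = np.linalg.norm(u) ** 2 + np.linalg.norm(v) ** 2
print(np.isclose(lhs, rhs))  # True
```

In a Hilbert space the same identity holds for infinite-dimensional vectors, as long as the relevant sums converge.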
It's a lot to digest. As I explained in another reply, my mathematics is very poor indeed.
> Likewise, think of the plane orthogonal to the axis of rotation, then points that are on that plane rotate, but they stay on that plane.
Why only the orthogonal plane? I don't see how any set of points behaves differently to any other. For example, the stars could be said to rotate around the earth (I know, but pretend). They don't move in relation to one another, though; they only change relative to the earth. I think I'm grossly misunderstanding. Does 'rotation' have a special meaning in this context?
> 45-degree rotation for example has no invariant subspace
because you can rotate points on a line by 45 degrees and the only one that stays the same is (0,0), which you call {0}?
So... someone tried to prove that for n-dimensional (odd?) spaces with n <= infinity, there is always some subspace which doesn't change when you apply complex transformations to it? But was wrong.
To get better intuition without a math degree I'd recommend: Concepts of Modern Mathematics which only requires elementary algebra, but will give you great intro treatments of everything from groups to number theory.
The important thing about the orthogonal plane is that when you rotate it, every point that was in the orthogonal plane beforehand is still in it afterward. The same is true of the axis of rotation. Any other plane through the origin gets carried to a different plane by the rotation.
A linear transformation is one that leaves the origin where it is and maps straight lines to straight lines. It is a theorem that every complex linear transformation has some line through the origin that gets mapped to itself: an eigen-line.
I think so, thanks. My maths skills are generally very poor indeed, but OTOH I have an MSc in Computer Science* so I should understand some of this sort of stuff.
I always feel like I'm missing detail, eg:
> every complex linear transformation
Is that "complex" important?
> line that gets mapped to itself
So a trivial example might be a 360 degree rotation? Or am I just way misunderstanding this stuff?
* and I do mean real CS, not software engineering.
>> line that gets mapped to itself
> So a trivial example might be a 360 degree rotation?
That's a very trivial example, and therefore a good one. Less trivial examples are a shear, an expansion, or a rotation in 3D space.
A rotation in 2D space generally leaves only the origin fixed, and that's where we need complex numbers. To find a fixed line in that case we need the roots of a quadratic with a negative discriminant, hence the need for square roots of negative numbers.
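To make that quadratic explicit (a sketch using Python's standard math and cmath modules): the characteristic polynomial of a rotation by theta is lambda^2 - 2*cos(theta)*lambda + 1, whose discriminant is negative for any theta that isn't a multiple of 180 degrees.

```python
import cmath
import math

theta = math.radians(45)
# Characteristic polynomial of the 2D rotation matrix:
#   lambda^2 - 2*cos(theta)*lambda + 1 = 0
b, c = -2 * math.cos(theta), 1.0
discriminant = b * b - 4 * c
print(discriminant)  # negative for any theta not a multiple of 180 degrees

# cmath.sqrt handles the negative discriminant, yielding complex roots
root1 = (-b + cmath.sqrt(discriminant)) / 2
root2 = (-b - cmath.sqrt(discriminant)) / 2
print(root1, root2)  # cos(theta) ± i*sin(theta)
```

The two complex roots are the eigenvalues of the rotation; each comes with a complex eigen-line, which is exactly the "fixed line" that doesn't exist over the reals.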
Complex numbers are just numbers that can have an imaginary component. Like 5+3i. All real numbers are complex (they just have an imaginary component of 0), but the converse is obviously not true.
I've bookmarked the wiki page on this stuff for later. Otherwise my next question would just be "what's an imaginary component?"
I don't think my tutors at Oxford really understood exactly what I meant by "I can't do all this maths". Basic gaps in my knowledge. Still passed though, fuck yeah! :D
"Imaginary" numbers are the square roots of negative numbers.
Because (-1)^2 = 1^2 = 1, these numbers don't exist on the regular ("real") number line. Instead we wave our hands and posit the existence of the magical number i, the imaginary unit, with the property i^2 = -1. Through simple algebra, this unit is enough to give us square roots for all the negative numbers. For example, (3i)^2 = -9.
Imaginary numbers are "independent" of the real numbers, which means that the sum of 5 and 3i is simply 5+3i. That expression cannot be simplified further. Such sums are called "complex" numbers. The complex numbers can be imagined as a plane, with the horizontal axis consisting of the pure reals, and the vertical axis consisting of the pure imaginaries.
Complex numbers support all the regular arithmetic operations, and you can figure them out yourself as long as you remember i^2 = -1. In the complex plane, addition looks like vector addition, and multiplying by i is a 90-degree rotation (multiplying by a general imaginary number rotates by 90 degrees and also scales).
It is possible to briefly explain enough to people to fool them into thinking that they have a sense of the discussion. However this is a form of intellectual entertainment, and not actual explanation.
Several times I began trying to write an actual explanation of what a vector space is, a linear function, a subspace, a Banach space, a Hilbert space, and what the actual conjecture is. I got bogged down every time.
I don't like the snarky tone of your comment. However, I do want to encourage learning. So if you are genuinely curious, try asking for a better explanation here: http://www.reddit.com/r/explainlikeimfive/
I don't like the patronising tone of your comment. However, I do want to encourage discourse. So if you're genuinely interested in helping people understand, try rephrasing your comment to not imply that five year olds should be conversant with linear algebra.
PS: Now that's a snarky tone. My original comment was merely lightly sarcastic.
I don't know many six-year-olds that would have no issue with that description. ot's reply clears things up nicely, but again, I doubt he needs the simplicity you're suggesting.
Dr. Cowen works, coincidentally, at the same institution as Dr. Louis de Branges, http://www.math.purdue.edu/~branges/site//Papers , who has claimed a proof of the Riemann hypothesis for many years now. You can read his "Apology for the Proof of the Riemann hypothesis" (Apology in the sense of "defense") to get an idea of who can be blamed for gaps in proofs.
(de Branges also claims a proof of the invariant subspace problem, but since his proof of RH is disputed and his proof of ISP uses the same technique, to my knowledge few people have examined it.)
I guess the alternative would be to claim that your results are incomplete rather than wrong. This does happen from time to time. Vinay Deolalikar claimed a proof that P is not equal to NP, which received intense scrutiny online two years ago; multiple errors were found. He has never withdrawn the claim.
The more objective the discipline, the less you see that, because, metaphorically, the light is too bright to hide in. Mathematics is arguably the most objective discipline there is, so I don't think you see much of this. Particle physics is the most concrete branch of physics, so you see admirable scientific discipline there too, as with the recent FTL-neutrino episode.
As you get fuzzier and fuzzier, you get more and more blabbering, until eventually, ahem, the "science" consists of little more than blabber; I'll leave it as an exercise for the reader to decide where to draw that line.
By Kant's reckoning, math is analytic and a priori, meaning that it neither depends on facts about the world nor requires knowledge of the world to explore. You start with some axioms, and you just go from there.
Fields like physics are synthetic and a posteriori; the search for a grand unified theory pulling together the fundamental forces we know about is an intellectual endeavor that is about the universe, and contingent upon the nature of the universe. If the universe were different, the facts and conclusions we would arrive at would be different.
None of this has to do with objectivity per se. Physics is objective(ish), but it's not an analytical field (as Kant defines it).
And you can claim that biology or linguistics or sociology aren't "science". But biology was still a field of scientific inquiry before Watson & Crick, or Darwin (although a less rigorous and less well-developed field). We have ways to know the world, and we should avail ourselves of them and endeavor to improve their rigor.
So, part of what i'm saying is that it'd be nice if people would stop dismissing other fields as "not science" :P
Social scientists try very hard to capitalize on the epistemological clout associated with the word "science", epistemological clout derived mostly from the successes of physics, chemistry, and biology in the industrial/medical/policy arena.
The "not science" squabbles are an inevitable result of physics/chemistry/biology not wanting their brand diluted, and the social sciences wanting to, IMO, inflate the value of their own brand.
You weren't imputing to me a claim that some other fields are "not science", I hope? I have opinions, but I left it as an exercise for the reader on purpose; I only care that there is a line somewhere, not about defending a particular one.
(This is long, but the tl;dr is that there actually is a surprising and reasonable criterion for what is a science, and it probably doesn't draw the line where you want.)
> So, part of what i'm saying is that it'd be nice if people would stop dismissing other fields as "not science" :P
Are you, then, going to argue that we should call Scientology a science? If not, why not? And aren't you calling other fields "not really science" once you have done so?
I have an unambiguous definition of "science" that seems kind of obvious. What journal would people in your field aspire to publish their best research in? Of the people who publish there, what journal would they like to publish their best work in? And so on. If this chain does not lead in short order to the journal Nature, you are not a science.
By this criterion, mathematics is a scientific field. (That's the biggest downside of this classification: math is not a science, but it gets lumped in as one here.) So is physics. So is chemistry. Biology. Climate science. And many more. But linguistics and sociology are not. Nor is any "social science". Nor, of course, is Scientology.
Why would that work? The reason goes back to an observation of Thomas Kuhn's. In a mature science, there is a shared paradigm, creating shared beliefs about what work is likely to be important, what work is not. Because of that paradigm, work of very different kinds can be compared reasonably fairly. (Not perfectly, but fairly enough that you can reasonably identify the top papers, and have a top journal.) Furthermore the relative unanimity from that field over time leads to people outside that field believing that people in that field actually have made progress. Eventually that can lead to a paradigm shared across that field and others, which allows a fair amount of agreement about the relative value of research in very, very different areas.
The integration of a field into the cross-disciplinary value systems shared across sciences will lead to that field's top papers becoming acceptable to broader cross-disciplinary journals. And, once that has happened, you have been fitted into a chain of relative prestige that goes right up to what is generally recognized as the top journal of all time, Nature.
This structure used to be less obvious. But in recent decades the emergence of the scientific citation index, and the measurement of journal impact factors, have put precise numbers on "how prestigious is this journal" and "how broadly is this journal read". This has led to a feedback loop making researchers more inclined to put papers into top journals, which leads to better sorting by value. And this same work has made the differing citation structures between "real sciences" and less mature fields of study abundantly clear. (A clarity that was promptly exploited by publishers: now that everyone knows the relative value of given journals, prices could be raised for top ones, giving rise to the serials crisis.)
One important disclaimer. Nothing in what I say should be taken by people in "soft sciences" as a blanket indictment of the quality of their work. Their lack of shared paradigms reflects the complexity of their subjects, and the resulting difficulty for someone taking approach A to present convincing arguments that it is a better way to look at things than approach B, to people committed to approach B. This does not mean that there is not really good work going on in both groups. But it does not result in agreement, simple externally visible signs of progress, or shared value judgements about the relative importance of different kinds of research. And outsiders don't have easy access to any signposts suggesting what work is likely to be high quality, and what work is mostly BS.
And a final note. In my experience, what I just said is obvious to people in the hard sciences, and unbelievable to people in the social sciences. For people in the hard sciences, having a reasonably shared value system about very different research across a variety of fields is just how things are. For people in the social sciences, it doesn't make sense that two people studying radically different things will have any kind of shared value system. As a fun example, go to one psychologist, and ask for a list of the 10 most important discoveries in psychology in the last 40 years. Take that list to another random psychologist, see how few of them the second has even heard of. Repeat the same experiment with two physicists, and see how many of the first one's top 10 would be in the top 10 list for the other. (The exact order will vary, but there will be substantial agreement.)
As a Mathematics graduate I was going to call you out on your comment, because I feel the subject does not lend itself to the kind of wiggle room necessary for maneuvering such as you describe. Then I remembered this: http://www.futilitycloset.com/2013/01/23/the-indiana-pi-bill..., and thought better of it.
As a mathematical physicist, it's fascinating to me that multiple people in this thread have reacted in this way. From a mathematician's point of view, there simply isn't such a thing as "deflecting blame" for errors in a proof: if someone identifies a flaw in the reasoning, there's no sensible way of denying it. To the extent that they're based on math, the sciences often work in the same way (so physics comes closer than biology, for example).
But clearly people who aren't specialists in math don't recognize that math is "special" in this way: it's just more academics arguing over ideas. I wonder if this is part of what feeds into public distrust in science, when that arises.
Another recent example: Edward Nelson of Princeton retracted his claim of having a proof of the inconsistency of Peano Arithmetic. Replying to Terence Tao, he wrote: "You are quite right, and my original response was wrong. Thank you for spotting my error. I withdraw my claim."
I came here to say just that :) Good thing we have set the right expectations in the software industry - everything can be explained by "it's a computer, duh" because users expect everything to be full of bugs anyways :)
[1] http://en.wikipedia.org/wiki/Invariant_subspace_problem
[2] http://mathoverflow.net/questions/48908/is-the-invariant-sub...
[3] http://aperiodical.com/2013/01/the-invariant-subspace-proble...