
Why should it be in linear algebra? You mean as in doing linear algebra with functions in infinite-dimensional spaces? What I recall mostly (not) learning from linear algebra were various classifications of matrices and their properties (which made very little sense to me at the time too).

I'm not exactly sure where the FT should belong in the math syllabus. It's heavily related to trigonometry, of course, but it is an integral transform, so it needs a bit of calculus. The discrete versions are probably easier to grasp, though, since they need only multiplications and sums, and in practice there is rarely much actual integrating, in the find-the-closed-form-antiderivative sense, going on.
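
To make the "just multiplications and sums" point concrete, here's a minimal naive DFT sketch in plain Python (toy 8-sample signal of my own choosing, no calculus required):

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform: nothing but multiplications and sums."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A real cosine at frequency 1 puts all its energy in bins 1 and n-1.
signal = [math.cos(2 * math.pi * t / 8) for t in range(8)]
spectrum = dft(signal)
```

The inner sum is literally the Riemann-sum analogue of the Fourier integral, which is why the discrete version feels so much more elementary.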

Maybe trying to have too "unified" a syllabus for engineers/scientists is not the best way in practice. Instead there could be more discipline-specific teaching (something TFA hints at too). With ML (and 3D graphics earlier), something like this seems to be happening to linear algebra. The "mathy" linear algebra (that I studied, at least) is mostly about properties and decompositions of (complex) matrices, but ML/3D engineers very rarely use such things (though they do need stuff outside "traditional" linear algebra, like tensor products, projective coordinates and rotation groups).




In fact, there is a lack of linear algebra in infinite-dimensional spaces in the general curriculum, and it is an important subject for physics and a few engineering areas, besides its relevance for mathematicians. It wasn't clear to me how the dependency between calculus and algebra was supposed to go, but your comment makes it crystal clear.

The discrete transformation fits finite-dimensional algebra quite well, and honestly I can't understand how anybody ever thought that teaching the continuous transformation first, and the discrete one never, was a good idea. The only explanation is that since everything is thrown into the calculus package without any consideration, there's only time for one, so people kept the most general one.
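
The finite-dimensional view really is just matrix algebra: the n-point DFT is multiplication by an n-by-n matrix of roots of unity. A small sketch (using NumPy's FFT only as a cross-check):

```python
import numpy as np

n = 8
k = np.arange(n)
# DFT matrix: F[j, t] = exp(-2*pi*i * j * t / n); the transform is just F @ x.
F = np.exp(-2j * np.pi * np.outer(k, k) / n)

x = np.random.default_rng(0).normal(size=n)
assert np.allclose(F @ x, np.fft.fft(x))

# F / sqrt(n) is unitary, so inversion is plain conjugate-transpose algebra:
assert np.allclose((F.conj().T / n) @ (F @ x), x)
```

Eigenvalues, unitarity, change of basis: all the standard linear algebra vocabulary applies directly, with no integrals in sight.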


Infinite-dimensional spaces do indeed come up in e.g. Gaussian processes and the kernel trick. I have to admit I've never really understood them in their rigorous form. For ML, for example, they would likely be more useful than e.g. most matrix decompositions.
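
As a toy illustration of the kernel trick: an RBF kernel evaluates inner products in an infinite-dimensional feature space without ever constructing it. A minimal kernel ridge regression sketch, with data and hyperparameters made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))  # toy inputs
y = np.sin(X[:, 0])                   # toy target

def rbf(A, B, length=1.0):
    """RBF kernel: inner products in an infinite-dimensional feature space."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

# Everything is expressed through the n-by-n kernel matrix; the feature
# space itself never appears.
K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)  # ridge regularizer

X_test = np.array([[0.5]])
pred = rbf(X_test, X) @ alpha
```

The same kernel-matrix machinery is what a Gaussian process posterior mean computes, just with a probabilistic interpretation attached.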

I'd guess math uses continuous forms because that's where the mathematical tools are, and many things tend to get simpler, in a mathematical sense, when you let something go infinitesimal or infinite. This might have been different if digital computers had been invented before calculus.

I've learned to appreciate that mathematicians think of maths quite differently from engineers or scientists. To them the interest is in the "mathematical objects" and their (provable) properties, not the applications or relationships to "the real world" (and this is probably a good thing in itself), while for engineers/scientists it's the opposite. Maybe a bit like how linguists vs. novelists approach language.


It also comes up a lot in the foundations of RL: the justification (and in some cases proof) that it works rests on contraction mappings and operators on function spaces.
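
A minimal sketch of that idea, on a made-up toy MDP with a fixed policy: the Bellman operator T(v) = r + γPv is a γ-contraction in the sup norm, so iterating it converges to the unique fixed point v* = (I − γP)⁻¹r:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, gamma = 5, 0.9
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
r = rng.random(n_states)

# Repeatedly apply the Bellman operator; contraction guarantees convergence
# regardless of the starting point.
v = np.zeros(n_states)
for _ in range(500):
    v = r + gamma * P @ v

# The fixed point, computed directly by linear algebra.
v_star = np.linalg.solve(np.eye(n_states) - gamma * P, r)
```

The error shrinks by a factor of at least γ per iteration, which is exactly the Banach fixed-point argument behind value iteration.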


Is there a good book for learning the linear algebra used in ML? I've taken linear algebra courses at uni and most of it went in one ear and out the other.

I'm looking for a book that helps me understand the algebra of ML, so that when I read research papers I'm not just lost and nodding my head aimlessly. Such a book may not exist; maybe it's a group of books. But if ANYONE has pointers in this direction, I would be in your debt.



