Hacker News | evanb's comments

In case the author is reading this: if you're going to introduce the complex-valued harmonics, you should be careful to put the complex conjugate in the inner product

    <f, g> = ∫ f(ω)^* g(ω) dω
which does match the corresponding linear-algebra inner product if the vectors are over the complex numbers

    p . q = Σ_i p^*_i q_i
which guarantees that p.p ≥ 0 even for complex p (and does not change the only-real case).
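A quick numerical check of that claim (a NumPy sketch; note that `np.vdot` conjugates its first argument for exactly this reason):

```python
import numpy as np

# A complex vector; without the conjugate, p . p need not be real or nonnegative.
p = np.array([1 + 2j, 3 - 1j])

naive = np.sum(p * p)             # no conjugate: (1+2j)^2 + (3-1j)^2 = 5 - 2j
proper = np.sum(np.conj(p) * p)   # conjugate first slot: |1+2j|^2 + |3-1j|^2 = 15

# np.vdot conjugates its first argument, matching the proper inner product.
assert np.isclose(np.vdot(p, p).real, proper.real)
```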

What level of math do I need to understand this, or the rest of the math in the post? Is it something I can catch up on in a weekend? I barely remember the last math class I took seriously: trig, like 18 years ago.

I think the steps would be like this:

- get an understanding of ordinary vector linear algebra.

- understand what the vector dot product does and why.

- understand why an orthogonal set of basis vectors for the space you're working in is useful, what properties it has, and how it's used; e.g. the basic Euclidean 3D basis vectors (1,0,0), (0,1,0), (0,0,1).

- get a refresher on basic calculus, in particular integrals

- understand the inner product; it's a generalization of the dot product, except you can now think of your vectors as having an infinite number of dimensions.

- the properties of the dot product you know (like two vectors being perpendicular when their dot product is 0) hold for the inner product too. Or perhaps it's better to say that the general inner product is defined to have similar properties.

- there are functions that are orthogonal to each other in the same way vectors can be orthogonal to each other, and you can use the inner product to tell which ones.

- spherical harmonics are constructed to be orthogonal to each other by design. How to show this, and where the intuition for finding them comes from, is a whole topic...

- but once you have them, just like you can project vectors onto basis vectors (to essentially transform them into the coordinate system described by those basis vectors), you can project functions into the coordinate system represented by those orthogonal functions.

- then you have to figure out why you would even want to do this. In short, it has a lot of useful properties and applications. In the graphics case you can compress some quite complex functions into just a few coefficients (not perfectly, there is some 'information loss', but still). Integrating over the product of two functions becomes cheaper when they are projected into the SH basis, and it lets you do some unintuitive stuff like combining light that goes in different directions into one common set of coefficients.
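The middle steps above can be sketched in a few lines of NumPy. This is an illustration with sines on [0, 2π) rather than actual spherical harmonics; the integral inner product becomes a plain Riemann sum over the grid:

```python
import numpy as np

# Discretize [0, 2*pi); the integral inner product becomes a weighted dot product.
x = np.linspace(0, 2 * np.pi, 10000, endpoint=False)
dx = x[1] - x[0]

def inner(f, g):
    return np.sum(f * g) * dx  # approximates the integral of f*g over one period

s1, s2 = np.sin(x), np.sin(2 * x)
# Orthogonal "function vectors": <s1, s2> is ~0, while <s1, s1> is ~pi.

# Projecting f onto s1 recovers its coefficient, just like with basis vectors.
f = 3 * np.sin(x) + 0.5 * np.sin(2 * x)
coeff = inner(f, s1) / inner(s1, s1)  # ~3
```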


> what level of math do I need to understand this?

A basic understanding of differential equations is all that's really necessary, but knowing about orthogonal polynomials would be helpful too.

> something I can catch up on in a weekend?

Probably not. If you know any calculus (even a basic high school class should be enough), two weekends would probably be enough; if you don't know calculus, then double it.

My advice would be to use an introductory level quantum physics textbook or an advanced chemistry textbook, since the spherical harmonics are used quite a bit in those fields. You could use a math textbook too, but those will tend to focus on details that are irrelevant to you.

An alternate path would be to learn about Fourier series/transformations, then what's discussed in the article will follow as a natural consequence. This is probably a harder option, but there's lots of really good learning materials online for Fourier transformations (and comparatively little for spherical harmonics), so it may end up being easier for you.
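As a taste of the Fourier route, here is a NumPy sketch that computes Fourier sine coefficients as inner products; for a square wave the odd harmonics carry weight 4/(nπ) and the even ones vanish:

```python
import numpy as np

# Square wave on [0, 2*pi); its sine coefficients are 4/(n*pi) for odd n, 0 for even.
x = np.linspace(0, 2 * np.pi, 20000, endpoint=False)
dx = x[1] - x[0]
f = np.sign(np.sin(x))

# b_n = (1/pi) * integral of f(x) sin(n x) dx, approximated by a Riemann sum.
coeffs = {n: np.sum(f * np.sin(n * x)) * dx / np.pi for n in range(1, 6)}

# Rebuilding the wave from a handful of coefficients is the "compression" idea.
approx = sum(b * np.sin(n * x) for n, b in coeffs.items())
```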


This is basic integral calculus, and the sigma symbol (Σ) indicates a discrete summation.

You could package all your data into a zip using this language, but you would also end up with a worthless stretch of memory seemingly filled with noise, i.e. things you're not interested in.

Why do you think so? The code example shows that you can do RLE (run-length encoding) without noise or additional space. I'm pretty sure you can do zip as well; it would just be very hard to implement, but it wouldn't necessarily require that the output contain noise.

[1] https://topps.diku.dk/pirc/?id=janusP


Hmm. As a physicist my intuition is that information-preserving transformations are unitary (unitary transformations are 1-to-1). If a compression algorithm is going to yield a bit string (the zip file, for example) shorter than the original it can't be 1-to-1. So it must yield the zip file and some other stuff to make up for the space saved by the compression.
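To make the injectivity point concrete (a plain-Python sketch, not the Janus version): a lossless scheme must be invertible on its inputs, i.e. `decode(encode(x)) == x` for every x. RLE satisfies this without emitting any extra "garbage"; by pigeonhole, no scheme can do that while making *every* output strictly shorter than its input.

```python
# Run-length encoding: "aaab" -> [("a", 3), ("b", 1)]. The encoder is injective
# because the decoder below inverts it exactly; no information is discarded.
def rle_encode(s):
    pairs, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1  # extend the current run of identical characters
        pairs.append((s[i], j - i))
        i = j
    return pairs

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

assert rle_decode(rle_encode("aaabccccd")) == "aaabccccd"
```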

I don't remember much about music theory but I know enough about symmetry to know that there's a mistake in the diagram at 9 o'clock.

Yeah, but only in the heart-shaped circles. The middles of the hearts are the classic circle of fifths; in this diagram they show where the half steps in the scales are.

Bug report: I tried 6.999999̅ and got false. So there's some nonstandard model of the reals being leveraged here.


I have always anthropomorphized my computer as me to some extent. "I sent an email." "I browsed the web." Did I? Or did my computer do those things at my behest?


I think this is a fairly uncommon outlook, not one shared by most people.

If you use a tool to automate sending emails, unrelated to LLMs, in most scenarios the effect on the receiver is different.

- If I get a mass email from a company and it's signed off by the CEO, I don't think the CEO personally emailed me. They may have glanced over it and approved it, maybe not even that, but they didn't "send an email". At best, one might think that "the company" sent an email.

- I randomly send my wife cute stickers on Telegram as a small sign that I'm thinking of her. If I set up a script to do that at random intervals and she finds out, from her point of view I "didn't send them" and she would be justifiably upset.

I know this might be a difficult concept for many people who browse this forum, but the end product/result is not always the point. There are many parts of our lives and society in general where the act of personally doing something is the entire point.


Of course that's true, but (in the context of the GP) code's bespoke artisanal nature is not the quality most people value.


I drove to the supermarket!


As a term-rewriting system the rule x-x=0 presumably won’t be in Simplify, it’ll be inside - (or Plus, actually). Instead I’d expect there to be strategies. Pick a strategy using a heuristic, push evaluation as far as it’ll go, pick a strategy, etc. But a lot of the work will be normal evaluation, not Simplify-specific.
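A toy sketch of that idea in plain Python (purely illustrative, not Mathematica's actual internals): rules live with the head symbol and are applied bottom-up, so `x - x -> 0` fires during ordinary evaluation rather than inside Simplify.

```python
# Terms are tuples like ("Plus", a, b); atoms are strings or numbers.
# Arguments are rewritten first, then the head's own rules are tried.
def rewrite(term):
    if not isinstance(term, tuple):
        return term
    head, *args = term
    args = [rewrite(a) for a in args]
    if head == "Minus" and args[0] == args[1]:
        return 0  # the x - x -> 0 rule lives with Minus, not with Simplify
    if head == "Plus":
        args = [a for a in args if a != 0] or [0]  # drop additive zeros
        if len(args) == 1:
            return args[0]
    return (head, *args)

expr = ("Plus", ("Minus", "x", "x"), "y")
assert rewrite(expr) == "y"
```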


Mathematica has Infix [0], which expresses adjacency with a ~ (because Mathematica reserves plain whitespace for multiplication). It works fine to do e.g. `"hello"~StringJoin~" world"`. I was always surprised that many other languages only offer the predefined operators and don't let us define our own.

This seems like a great attempt. I would be worried about how much parsing and backtracking might be required to infer infix precedence in a totally general system (like garden-path sentences [1]), or about genuinely ambiguous parse trees (which can be cured by adopting some rule like right precedence and parentheses, but whichever rule you pick makes some 'natural language' constructions work over others).

[0] https://reference.wolfram.com/language/ref/Infix.html

[1] https://en.wikipedia.org/wiki/Garden-path_sentence


Haskell supports user-defined operators (made up of symbols) and also lets you use functions in infix position by surrounding the name in backticks, e.g.

    2 `plus` 3
rather than

    plus 2 3


In Postgres you can define custom operators via `create operator`[0]

    -- infix
    select a <!!!> b;

    -- prefix
    select <||> a;
A lot of custom types end up using this [1].

    select @-@ '[(0,0),(1,0),(1,1)]'::path;
    -- 2

[0] https://www.postgresql.org/docs/current/sql-createoperator.h... [1] https://www.postgresql.org/docs/current/functions-geometry.h...


Similarly, Agda has a well-typed mixfix operator syntax – you define a function like (_foo_bar_baz) and can automatically write "X foo Y bar Z baz". It does mean that the operator parser has to be extensible at runtime, but it's not a huge cost for a dependently-typed language.


Mathematica has symbolic and infinite-precision addition, so you can't automatically take advantage of obvious compiled code.


What? Arbitrary precision arithmetic implemented in a compiled language will be faster than the alternative. This is no great mystery. The same is true of essentially all low-level symbolic or numerical math algorithms. You need to get to a fairly high level before this stops being true.


Of course. The point is that whether you interpret a call to arbitrary_precision_add or compile the call doesn't matter much.


> The initial focus is to implement a subset of the Wolfram Language so that it can be used for CLI scripting and notebooks.

If you have Mathematica installed you can write CLI scripts and notebooks.


They did consider it, got a contract that affirmed that the military would be bound by the same pre-existing terms of service as every other user, and want to resist the military's pressure to renegotiate.

Sure, that might be naive, but the entire issue is that they want to stick to the original contract, which is of course the purpose of a contract in the first place.

