Hacker News | oddthink's comments

Looks interesting, but it's not at all responsive. I can't view it without making my window larger than my screen and manually sliding it back and forth. Even basic TUIs know their output size. I know it's a UI demo, but that seems pretty basic.

How is this still so hard? Tk basically had this figured out 25 years ago.


It's a super early release and the project itself is pretty fresh; it was only announced today at FOSDEM, so we can expect improvements.


It goes way too far, IMHO.

It ends up sounding like a smarmy Sunday-morning talk show conversation, with over-exaggerated affect and no content.

So far I've just fed it technical papers, which may be part of the problem, but what I got back was, "Gosh, imagine if a recommender system really understood us? Wow, that would be fantastic, wouldn't it?"


Already in the sample embedded by Simon. "Gosh", "wow", "like", "like", "like", "[wooooaaaawiiiing, woooooooaawiiiiiiing]", "Oh my god", "I was so, like...".

https://www.youtube.com/watch?v=ssDdqq_9TzI&t=34s [April Ludgate meets Tynnifer, Parks and Rec]


While it's impressive, I agree that it tends to make over-the-top comments or reactions about everything. It could probably make a Keurig machine sound like a revolutionary coffee maker.


I always come back to Schutz's Geometrical Methods of Mathematical Physics as my reference for notation, but I agree. I came to this by way of General Relativity, so that colors my perceptions. The few treatments of GA that I've looked at (briefly) weren't very clear about the distinction between 1-forms and 1-vectors and seemed to assume Euclidean metric everywhere, so I left thinking that it seemed a little weird and not quite trusting it.

In any case, my experience is that the coordinate-free manipulations only go so far, but that you pretty quickly need to drop to some coordinates to actually get work done. d*F=J is nice and all, but it won't calculate your fields for you.
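For anyone reading along, the equation being alluded to is Maxwell's equations in exterior-calculus form (sign and unit conventions vary from text to text; this is one common choice):

```latex
% F: the Faraday 2-form, J: the current 3-form, \star: the Hodge dual
\[
  \mathrm{d}F = 0, \qquad \mathrm{d}{\star}F = J .
\]
```

Elegant as that is, actually extracting field values from F still means picking coordinates, which is the point above.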



I know at least one team at work is using the Assistants API, and I'm talking with another team that is leaning pretty heavily towards using it over building a custom RAG solution themselves, or even over other in-house frameworks.


Interesting! Just over the past month, I've reduced my emacs usage a lot. I'd switched to VSCode for most coding a while ago but still kept my daily journal notes in org-mode; recently I switched to Logseq for that.

I don't know if I'll stick with it, or if it'll get sluggish after a while, but right now the searching, rich text, and PDF annotation seem refreshing.


Thank you! The Lagrangian as projection of energy-momentum actually makes sense, unlike the "let's just subtract potential from kinetic. No reason, it just works" story. I'd been idly wondering that for a while (and this is as someone with a physics degree, though in astro, which is a good bit more applied).


Hamilton introduced what he called the "Principal Function S", which is used in the variational principle on which the Lagrangian formulation is based.

Nowadays this function is frequently called "Hamilton's action", though this is not a good idea, because it causes confusion with what Hamilton, like all his predecessors, called "action": the integral of the kinetic energy.

The "Principal Function S", which is a scalar value, i.e. a relativistic invariant quantity, is the line integral of the Lagrangian over the trajectory in space-time, i.e. it is the line integral of the energy-momentum 4-vector over the trajectory in space-time.

Like any line integral of a vector, the line integral of the energy-momentum 4-vector is equal to the line integral over the trajectory of its projection on that trajectory.

This is why the Lagrangian is the projection of the energy-momentum 4-vector. Hamilton found the correct form of this line integral in relativistic theory, even though this was about three quarters of a century before the concept of 4-vectors became understood.

The "Principal Function S", i.e. the integral of the energy-momentum, can be considered as a more fundamental quantity than the Lagrangian, which is its derivative (the energy-momentum vector is its gradient). In quantum mechanics the "Principal Function S" is the phase of the wave function, so it is even more obvious that it must be an invariant quantity.
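The claim can be written compactly (a sketch; metric-signature and sign conventions differ between texts):

```latex
% Hamilton's principal function as a line integral of the
% energy-momentum 4-vector p_\mu = (-E, \mathbf{p}) along the worldline:
\[
  S = \int L \,\mathrm{d}t
    = \int \left( \mathbf{p}\cdot\dot{\mathbf{x}} - E \right) \mathrm{d}t
    = \int p_\mu \,\mathrm{d}x^\mu ,
\]
% so L\,dt is the projection of p_\mu onto the worldline element
% dx^\mu, and conversely p_\mu = \partial S/\partial x^\mu
% (the energy-momentum vector as the gradient of S).
```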


There is a way of _arriving_ at that subtraction, rather than just throwing it out there.

A resource I created:

Calculus of Variations as applied in physics: http://cleonis.nl/physics/phys256/calculus_variations.php

Hamilton's stationary action: http://cleonis.nl/physics/phys256/energy_position_equation.p...

In that resource I show why it works.

In an earlier answer I gave more information about that resource. To find that earlier answer: go up to the entire thread, and search on the page for my nick: Cleonis


This is the exact point that confused me a lot (and still confuses me) when I tried to read "The Theoretical Minimum: What You Need to Know to Start Doing Physics": "Hey, let's just fix/define the Lagrangian as T - V, and you'll see that after some magical math stuff in the following chapter, we'll find the Newtonian equations again. Trust me for now."

If anyone has a reference/book/paper that allows you to learn this concept more intuitively, I'd be grateful.


I have created a resource for the purpose of making Hamilton's stationary action transparent.

It is possible to go in all forward steps from F=ma to Hamilton's stationary action; that is what I present.

The path from F=ma to Hamilton's stationary action consists of two stages: (1) derivation of the work-energy theorem from F=ma; (2) demonstration that when the conditions are such that the work-energy theorem holds, Hamilton's stationary action holds as well.
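Stage (1) in symbols, for a single particle in one dimension:

```latex
% Integrate F = m\,\mathrm{d}v/\mathrm{d}t along the trajectory, using
% \mathrm{d}x = v\,\mathrm{d}t, so the integrand becomes m\,v\,\mathrm{d}v:
\[
  \int_{x_0}^{x_1} F \,\mathrm{d}x
  = \int_{t_0}^{t_1} m \frac{\mathrm{d}v}{\mathrm{d}t}\, v \,\mathrm{d}t
  = \tfrac{1}{2} m v_1^2 - \tfrac{1}{2} m v_0^2 .
\]
```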

I recommend that you first absorb the presentation of the subset of Calculus of Variations that is applied in physics: http://cleonis.nl/physics/phys256/calculus_variations.php

Discussion of Hamilton's stationary action: http://cleonis.nl/physics/phys256/energy_position_equation.p...

These presentations are illustrated with interactive diagrams. Each diagram has one or more sliders for manipulation of the contents of the diagram. That way a single diagram can offer a range of cases/possibilities.

About my approach: I think of Hamilton's stationary action as an engine with moving parts. To show how an engine works: construct a model out of translucent plastic, so that the student can see all the way inside, and see how all of the moving parts interconnect. My presentation is in that spirit.


Thank you.


Why would you? For most forward-looking calculations, the uncertainty of the future completely swamps any cent-rounding.

Even for plain-vanilla bond price calculations, floats are the right tool for the job. Say you have a bond that pays $5 every year for 10 years, then $100. What's that worth today?

Well, you have a forecast yield curve of interest rates. Say it's quoted as continuously compounded rates; then you get something like price = sum_{t=1..10}($5*exp(-r(t)*t)) + $100*exp(-r(10)*10).
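As a sketch of that formula in code (the flat 4% curve here is made up purely for illustration):

```python
import math

def bond_price(coupon, face, years, r):
    """Present value of a coupon bond under a continuously
    compounded yield curve; r is a function of t in years."""
    pv_coupons = sum(coupon * math.exp(-r(t) * t) for t in range(1, years + 1))
    pv_face = face * math.exp(-r(years) * years)
    return pv_coupons + pv_face

# Hypothetical flat 4% curve, just for illustration.
price = bond_price(coupon=5.0, face=100.0, years=10, r=lambda t: 0.04)
```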

But wait, say you actually have 1000 different potential paths of interest rates, and you want to average over all of them.

Oh, and there's a 1% chance of default every year.

Oh, and actually these are mortgages, so there's a path-dependent chance of them refinancing every year, if the rates get low enough.

And then there's an overall economic forecast, so if you have a bunch of mortgages, there's a bigger chance they'll all default at the same time.

And so on. Rounding the cents isn't really worth the worry once you're putting noisy forecasts through `exp` (or worse, special functions).
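A toy Monte Carlo version of the above, with made-up parameters (random-walk rates, a flat 1% default hazard, zero recovery), just to illustrate how quickly modelling noise dwarfs cent-level rounding:

```python
import math
import random

random.seed(0)

def mc_bond_price(coupon=5.0, face=100.0, years=10,
                  n_paths=1000, r0=0.04, vol=0.01, default_p=0.01):
    """Average discounted bond value over simulated rate paths,
    killing the remaining cashflows if the issuer defaults."""
    total = 0.0
    for _ in range(n_paths):
        r, df, pv, alive = r0, 1.0, 0.0, True
        for t in range(1, years + 1):
            r = max(0.0, r + random.gauss(0.0, vol))  # random-walk short rate
            df *= math.exp(-r)                        # one-year discount step
            if random.random() < default_p:           # default, zero recovery
                alive = False
                break
            pv += coupon * df
        if alive:
            pv += face * df
        total += pv
    return total / n_paths
```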

This applies for vanilla bond valuation, any option, any future. More so if you want risk measures (what if rates go up 0.10%? volatility increases?), and so on.

Floats work just fine for this.


This is true in complex scenarios, but not in other finance scenarios. For example, there is a reason why any electronic exchange will use integers with implied decimal precision as the wire format, and will continue to use such representations before and after encoding/decoding. We do not need to do hugely complex operations; it is mostly simple comparisons and some simple maths operations. We absolutely need exact precision and speed, and it is difficult to get that when using doubles.
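A minimal sketch of the implied-decimal idea (the 2dp scale here is illustrative; real exchanges specify the scale per instrument in their protocol docs):

```python
# Prices as integers in an implied-decimal fixed-point format
# (here 2 decimal places, i.e. cents).
SCALE = 100

# Floats fail exact comparison:
assert 0.10 + 0.20 != 0.30           # 0.30000000000000004

# Implied-decimal integers don't:
bid_cents = 10 + 20                  # $0.10 + $0.20, held as cents
assert bid_cents == 30               # exact, and a fast integer compare

def to_display(cents: int) -> str:
    """Render an implied-2dp integer price for humans."""
    return f"{cents // SCALE}.{cents % SCALE:02d}"
```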

In parts of the stack where things are more complex and outside of the critical path, then yes, you use floating point.

Also, it isn't just that rounding the cents isn't worth it, it is that if you work with implied decimal integers with an implied 2DP, then you're going to end up with massively inaccurate results after a few operations.


Is the selling point for this easier custom widgets than in other frameworks?

Maybe my standards are too low, but for quick internal tools, it's hard to beat 30-year-old Tcl+Tk.


The selling point is described quite well under "The Pitch" section of the README.

Unlike most other toolkits, this one doesn't actually handle its own rendering/contexts. It outputs the low-level vertex buffers and textures that you can pull into your own graphics pipeline. Which means you can integrate it into any sort of 3d application or backend that you want.


This may be too late to this thread to matter, but I've been dabbling with GT for a few months, and I like it so far.

I'm using a few of the modalities mentioned in the article.

I use it as a "Pharo development" / Smalltalk IDE. I've tried a few times to learn Pharo and Smalltalk, but I've stuck with GT more than I ever did with raw Pharo. It feels nicer graphically, and, most importantly, it lets me write notes about what I'm doing and exploring and save them as part of the image. That means I can open GT and continue right from where I left off, even if it's been a month since I last looked.

I use it for "Personal knowledge management", mostly. Little projects and analyses that occur to me. Nothing fancy, things like writing up why it's easier to get heads-tails in a sequence of coin flips than heads-heads, with a bit of simulation, a writeup of the Markov chain, derivation of expected times, etc. I've done that many times before, but it feels good to have it all together in an aesthetically-pleasing form. Bits on game odds, leetcode-ish algorithms exploration, general computational fiddling.
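As an aside, that coin-flip asymmetry (expected waiting time of 4 flips for HT vs 6 for HH) checks out in a few lines of simulation, sketched here in Python rather than GT/Pharo:

```python
import random

random.seed(1)

def flips_until(pattern: str, trials: int = 100_000) -> float:
    """Average number of fair-coin flips until `pattern` first appears."""
    total = 0
    for _ in range(trials):
        window, n = "", 0
        while not window.endswith(pattern):
            window += random.choice("HT")
            n += 1
        total += n
    return total / trials

# Theory: E[flips to HT] = 4, E[flips to HH] = 6.
```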

Once I figured it out, I like the git integration for both code and notebook pages. That goes a long way to reassuring me that the things I do won't be lost. I still don't quite get the Metacello / BaselineOfX dependency management, but I'll get there.

I did a little API browsing, mostly following the "Exploring the GitHub REST API in 7'" video by Oscar Nierstrasz (https://www.youtube.com/watch?v=-vFwfwy5WZA). That series is excellent and I think does a good job of illustrating the power of the system and how it's different.

But, in general, it feels like a good "tool for thought".

