DHolzer's comments | Hacker News

I love Chatterbox, it's my favourite. While the generation speed is quick, I wonder what performance optimizations I could try on my 3090 to improve throughput. It's not quite enough for real-time.


somewhere


How is that technological progress not fueling resource scarcity?


Without wanting to sound overly sceptical, what exactly makes you think it performs so much better compared to Claude and ChatGPT?

Is there any concrete example that makes it really obvious? I had no such success with it so far, and I would really like to see the clear difference between Gemini and the others.


I was thinking that too. I am really not a professional developer, though.

Of course it would be nice to just write Python and have everything be 12x accelerated, but I don't see how there would not be drawbacks that interfere with what makes Python so approachable.


What do you mean?

Production is not distribution. Also, there are plenty of analog options for musicians if they want to go that route.


I was under the impression that one could only do real-time stuff with analog tech in this day and age, but apparently people still use tape recorders and stuff. That's just outside of everything I can imagine. I work in signal processing and everything around here is always immediately sampled and discretized.


I see, yeah, I am repairing a 16-channel tape machine right now. It's really nice.

For me it's about the recording and mixing process itself. I have diagnosed ADHD, and I have a really hard time focusing on the production when I am working on my PC. I don't commit, I always tweak, and I lose sight of the big picture of the song because there is always some detail I can polish for the next three hours.

On a mixer and tape, I can turn down the lights and just sit there and listen and record. I don't care about the analog sound, I don't care about the inherent noise, I only care about the fact that I can sit down and actually write music. I can basically do it all without even opening my eyes.


Consider telling us upfront about the login requirement, and/or have the three demo stories prepared in all combinations; that would save you money too.


Buying HDDs these days feels a little like navigating the dark web.


A tax-evading soft spot, that is.


That, and cultural.

Irish-Americans are less ambivalent about their roots than many other people of European descent (being excited about being part-German in Missouri is tantamount to being excited about watching paint dry, for example). And the fact that a man of Irish descent was President of the United States within two centuries of the "famine", after centuries of oppression by the English, boosts the credibility of America as a "land of opportunity", even if only in retrospect.


The EU could change that if it wanted to. I imagine more money is lost due to tax avoidance than from US tariffs.


> The EU could change that if it wanted to.

They definitely can't, given that tax decisions require unanimity. Like, if they didn't get this sorted when they had Ireland over a barrel in 2011, it's probably not going to happen. Full disclosure: I am an Irish citizen.


Still, it would not be very hard for individual countries to find indirect ways to tax them. Like putting a 50% VAT on expensive phones to target Apple, or on app store revenue, etc.


Again, no. VAT is also managed at the EU level. A digital services tax could be done. The best thing would be to implement the full BEPS agreement, but the US doesn't want that.


Is it? It looks like it ranges from 17% to 27% depending on the country, so I assumed it was still a national decision.


I recently checked out uv, and it's impressively fast. However, one challenge that keeps coming up is handling anything related to CUDA and Torch.

Last week, I started developing directly in PyTorch containers using just pip and Docker. With GPU forwarding on Windows no longer being such a hassle, I'm really enjoying the setup. Still, I can’t shake the feeling that I might be overlooking something critical.

I’d love to hear what the HN crowd thinks about this type of env.
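
A minimal sketch of what that looks like for me; the image tag, mount path, and requirements file are placeholders rather than an exact recipe:

    # launch an official PyTorch CUDA image with the GPU forwarded into the container
    docker run --gpus all -it --rm \
      -v "$(pwd)":/workspace -w /workspace \
      pytorch/pytorch:2.4.0-cuda12.1-cudnn9-runtime bash

    # inside the container: plain pip for everything else
    pip install -r requirements.txt
    python -c "import torch; print(torch.cuda.is_available())"

On Windows this assumes Docker Desktop with the WSL2 backend handling the GPU passthrough, which is the part that used to be the hassle.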


I assume you've seen this:

https://docs.astral.sh/uv/guides/integration/pytorch/

If the platform (OS) solution works for you, that's probably the easiest. It doesn't for me because I work on multiple Linux boxes with differing GPUs/CUDA versions. So I've used the optional-dependencies solution, and it's mostly workable, but with one annoyance: uv sync forgets which --extra has been applied to the venv, so if you "uv add" something, it will uninstall the installed torch and install the wrong one until I re-run uv sync with the correct --extra again (uv add with --extra does something different). And honestly, I appreciate not having hidden venv state, but it is a bit grating.
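
For anyone following along, the optional-dependencies setup I mean looks roughly like this in pyproject.toml. I'm writing it from memory of the uv guide above, so treat the extra names, torch version, and index URLs as illustrative:

    [project.optional-dependencies]
    cpu = ["torch>=2.4"]
    cu124 = ["torch>=2.4"]

    [tool.uv]
    # the extras are mutually exclusive, and uv needs to be told that explicitly
    conflicts = [[{ extra = "cpu" }, { extra = "cu124" }]]

    [tool.uv.sources]
    torch = [
      { index = "pytorch-cpu", extra = "cpu" },
      { index = "pytorch-cu124", extra = "cu124" },
    ]

    [[tool.uv.index]]
    name = "pytorch-cpu"
    url = "https://download.pytorch.org/whl/cpu"
    explicit = true

    [[tool.uv.index]]
    name = "pytorch-cu124"
    url = "https://download.pytorch.org/whl/cu124"
    explicit = true

Each machine then picks its flavor with "uv sync --extra cpu" or "uv sync --extra cu124", and that --extra choice is exactly the state a later plain uv sync forgets about.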

There are some ways to set up machine- or user-specific overrides with machine-level and user-level uv.toml configuration files.

https://docs.astral.sh/uv/configuration/files/

That feels like it might help, but I haven't figured out how to configure it to pick (or hint at) the correct torch flavor for each machine. Similar issues with PaddlePaddle.

Honestly I just want an extras.lock at this point but that feels like too much of a hack for uv maintainers to support.

I have been pondering whether nesting uv projects might help, so that I don't build venvs of the main code directly and the wrapper project depends on specific extras of the wrapped projects. But I haven't looked into this yet; I'll try that after giving up on the uv.toml attempts.


I've used uv with pytorch and cuda fine. What problem have you had?

I also use it in docker to build the container.


At least my kind of problems were solved by

https://docs.astral.sh/uv/guides/integration/pytorch/#instal...

