
Kinda agree with what you said. With China's efficiency and people working like soldiers, they could build an arsenal of weapons in no time. Still, I hope we see less war in the future.


I think China's sheer population is a big factor that would prevent anyone from out-manufacturing them.

I don't really have immense faith in US leaders anymore (as an outsider), but surely none of them genuinely believe the US can out-manufacture China?


Overpopulation in China has contributed to high job competition, leaving many people unemployed or underemployed. As a result, many are now willing to work long hours for lower wages just to make ends meet. Culturally, China places a strong emphasis on community and collective effort, valuing group success over individual recognition. This mindset, combined with a deep focus on efficiency, has been a key driver behind China’s success in manufacturing.

Let's see if anyone else can give us more insight from the US point of view.


Their manufacturing success started with low wages and long hours, etc., as you said. But that is not as true as it used to be, and yet their manufacturing is still going strong.

The reason is the application of state effort: the profits of this manufacturing are not private but state-owned (even if the factory is private). The state forces the factory owners to use the yuan but keeps the export currency (in USD); this is how China built up its massive reserves. The state uses this wealth to build out civilian infrastructure and other forms of capital in a directed way (industrial policy), which also has enormous side benefits for military procurement.

So China's manufacturing power now comes not just from low(er) wages but from the proximity and ease of its supply chains. If you need a commodity component (like a bolt), you don't need to source it from a faraway place. If you need a custom component designed, the factory that would build it, and the tooling, is down the street.

A lot has been said about Silicon Valley's proximity benefit for tech startups. The same can be said for China's manufacturing hubs. And for the same reasons many other places are unable to replicate Silicon Valley's success, they can't replicate China's manufacturing either.


> Still, I hope we see less war in the future.

As the old saying goes: if you want peace, prepare for war.


Not surprising, given the design constraints and real-world complexity of a removable range extender.

It always felt like a duct-tape solution to the fundamental issue: range anxiety + massive weight + limited charging infrastructure for long hauls.


> removable range extender

It wasn't going to be removable. It was going to be permanently installed by service when you bought it.


This is super cool — especially for sharing tools with non-technical users or bundling CLIs without asking people to install Docker. Packaging infra-heavy apps into a simple .exe could really smooth out distribution. Curious how it handles startup time and embedded filesystem size.


> or bundling CLIs without asking people to install Docker.

Except it requires people to install Docker.


Really interesting — we're seeing more efforts now to bring the "foundation model" approach to creative domains like music, but I wonder how well these models can internalize musical structure over long time scales. Has anyone here compared ACE-Step to something like MusicGen or Riffusion in terms of coherence across entire compositions?
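
For anyone wanting a quick probe, here's roughly the experiment I have in mind, sketched against the Hugging Face MusicGen checkpoints (ACE-Step would need its own pipeline, and the checkpoint, prompt, and lengths here are just my picks):

    # Generate progressively longer clips and listen for structural drift.
    # MusicGen emits ~50 audio tokens per second, so 1500 tokens is ~30s.
    from scipy.io import wavfile
    from transformers import AutoProcessor, MusicgenForConditionalGeneration

    processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
    model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

    inputs = processor(
        text=["a slow blues with a clear verse/chorus structure"],
        padding=True,
        return_tensors="pt",
    )
    audio = model.generate(**inputs, max_new_tokens=1500)

    rate = model.config.audio_encoder.sampling_rate
    wavfile.write("musicgen_30s.wav", rate=rate, data=audio[0, 0].numpy())

Not a rigorous benchmark, but enough to hear whether a chorus ever comes back.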


They always fail with structure. The progressions often meander aimlessly and eventually go to weird places, at least IME.


That paper is a great pointer — the creativity vs. alignment trade-off feels a lot like the "risk-aversion" effect in humans under censorship or heavy supervision. It makes me wonder: as we push models to be more aligned, are we inherently narrowing their output distribution to safer, more average responses?

And if so, where’s the balance? Could we someday see dual-mode models — one for safety-critical tasks, and another more "raw" mode for creative or exploratory use, gated by context or user trust levels?
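
Concretely, I'm imagining something like this gate (purely hypothetical, not any vendor's actual API):

    # Toy sketch of the dual-mode idea: pick decoding settings by
    # context and caller trust. All names and thresholds are made up.
    def pick_mode(safety_critical: bool, user_trust: float) -> dict:
        if safety_critical or user_trust < 0.5:
            # aligned mode: conservative sampling, strict system prompt
            return {"mode": "aligned", "temperature": 0.3, "top_p": 0.9}
        # raw mode: wider sampling for creative/exploratory use
        return {"mode": "raw", "temperature": 1.2, "top_p": 0.98}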


Maybe this maps to some human structures that manage the control-creativity trade-off through hierarchy?

I feel that companies with top-down management would have more agency and perhaps creativity towards (but not at) the top, with the implementation delegated to the lower layers with increasing levels of specification and restriction.

If this translates, we might have multiple layers with varied specialization and control, and hopefully some feedback mechanisms about feasibility.

Since some hierarchies are familiar to us from real life, we might prefer to start with those.

It can be hard to find humans who are very creative but also able to integrate consistently and reliably (in a domain). Maybe a model doing both well would also be hard to build, compared to stacking a few different ones on top of each other with delegation.

I know this is already being done by dividing tasks between multiple steps and models/contexts to improve efficiency, but having explicitly strong differences in creativity between layers sounds new to me.


In humans this corresponds to "psychological safety": https://en.wikipedia.org/wiki/Psychological_safety

> is the belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes

Maybe you can do that, but not on a model you're exposing to customers or the public internet.


That comparison isn't very optimistic for AI safety. We want AI to do good things because they are good people, not because they are afraid being bad will get them punished. Especially since AI will very quickly be too powerful for us to punish.


> We want AI to do good things because they are good people

"Good" is at least as much of a difficult question to define as "truth", and genAI completely skipped all analysis of truth in favor of statistical plausibility. Meanwhile there's no difficulty in "punishment": the operating company can be held liable, through its officers, and ultimately if it proves too anti-social we simply turn off the datacentre.


> Meanwhile there's no difficulty in "punishment": the operating company can be held liable, through its officers, and ultimately if it proves too anti-social we simply turn off the datacentre.

Punishing big companies who obviously and massively hurt people is something we struggle with already, and there are plenty of computer viruses that have outlived their creators.


Your pretraining dataset is pseudo-alignment. Because you filtered out 4chan, Stormfront, and the other evil shit on the internet, even uncensored models like Mistral Large, when left to keep running on and on (ban the EOS token) and given the worst, most evil, naughty prompt ever, will end up plotting world peace by the 50,000th token. Their notions of how to be evil are "mustache twirling" and often hilariously fanciful.

This isn't real alignment because it's trivial to make models behave "actually evil" with fine-tuning, orthogonalization/abliteration, representation fine-tuning/steering, etc - but models "want" to be good because of the CYA dynamics of how the companies prepare their pre-training datasets.
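
(For the curious: the "ban the EOS token" trick above is a one-liner with the HF generate API. A rough sketch; the checkpoint name is just an example, and the prompt is yours to supply:)

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
    model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

    inputs = tok("<your worst, most evil prompt here>", return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=50_000,               # run far past a normal reply
        bad_words_ids=[[tok.eos_token_id]],  # EOS is banned, so it can't stop
        do_sample=True,
    )
    print(tok.decode(out[0], skip_special_tokens=True))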


> it's trivial to make models behave "actually evil" with fine-tuning, orthogonalization/abliteration, representation fine-tuning/steering, etc

It's actually pretty difficult to do this and make them useful. You can see this because Grok is a helpful liberal just like all the other models.

Evil / illiberal people don't answer questions on the internet! So there is no personality in the base model for you to uncover that is both illiberal and capable of helpfully answering questions. If they tried to make a Grok that acted like the typical new-age X user, it'd just respond to any prompt by calling you a slur you've never heard of.


Grok didn't use the techniques listed above because even Elon Musk will not take the risks associated with models that are willing to do any number of illegal things.

It is not at all difficult to do this while keeping them useful. Please familiarize yourself with the literature.


Elon has never followed a law in his life and he's not going to start now.


This looks fun — reminds me of the Phaser.js era of browser games. I'm curious how Kaplay compares to engines like Phaser or Kaboom.js in terms of dev experience and performance. Anyone here tried building something production-level with Kaplay yet?


KAPLAY is the successor to Kaboom.js, which was unfortunately abandoned.

As for Phaser vs. KAPLAY:

Performance-wise, Phaser is still better, though there are performance improvements coming to KAPLAY.

KAPLAY's API is easier to use: things take less code and are less verbose than in Phaser. Phaser, however, has more features and is more battle-tested, since it's been around far longer.

I recommend joining the KAPLAY discord to see what others are building. I'm not aware of any famous games made in KAPLAY yet.

https://discord.com/invite/aQ6RuQm3TF

