
> 512GB of RAM

Keep these things the hell away from the people who develop Chrome and desktop JS apps.



In 2025 the question isn't "will it run Crysis", it's "will it run a simple CRUD app".


Will it run Electron?


Gonna pile on:

At this point we may need TSMC to make a specialized chip to run Electron.



This was discussed before and it's interesting, but apparently the name of that instruction is misleading. Someone chimed in and pointed out that having JavaScript in its name is unnecessary, as that exact same floating-point representation is commonly used outside JavaScript as well.

If you disassemble some armv8 binaries that aren't dealing with JavaScript, you do still see FJCVTZS.
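
For anyone wondering what the instruction actually buys you: it's the double-to-int32 conversion with JS-style semantics, done in one instruction. A rough C sketch of the same behavior (the function name is mine, purely for illustration):

    #include <math.h>
    #include <stdint.h>

    /* Approximates what FJCVTZS does in hardware: truncate toward zero,
       wrap modulo 2^32, map NaN/Inf to 0, reinterpret as signed. */
    static int32_t js_double_to_int32(double d)
    {
        if (!isfinite(d))                   /* NaN, +Inf, -Inf -> 0 */
            return 0;
        double m = fmod(trunc(d), 4294967296.0);
        if (m < 0)
            m += 4294967296.0;              /* now in [0, 2^32) */
        return (int32_t)(uint32_t)m;        /* two's-complement wrap */
    }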


It has been done for Java [1] and as we make smaller and smaller chips who knows.

[1] https://en.wikipedia.org/wiki/Jazelle


There are already specialized instructions in the Apple Silicon chips. IIRC there's something tailored for the Objective-C runtime, and something useful for Javascript runtimes.


Uncontended acquire-release atomic operations are basically free on Apple Silicon, which synergizes with the Objective-C (and Swift!) runtimes, where every retain/release is an atomic increment/decrement.

https://web.archive.org/web/20201119143547/https://twitter.c...
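
For concreteness, here's roughly what that retain/release fast path looks like as a plain C11 sketch (names are mine, not the actual Obj-C/Swift runtime):

    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct {
        atomic_uint refcount;   /* starts at 1 when the object is created */
    } object_t;

    static void obj_retain(object_t *o)
    {
        /* The increment only needs atomicity, not ordering. */
        atomic_fetch_add_explicit(&o->refcount, 1, memory_order_relaxed);
    }

    static bool obj_release(object_t *o)
    {
        /* Release on the decrement, acquire before teardown, so every
           write to the object happens-before its destruction. */
        if (atomic_fetch_sub_explicit(&o->refcount, 1,
                                      memory_order_release) == 1) {
            atomic_thread_fence(memory_order_acquire);
            return true;        /* last reference: caller should free it */
        }
        return false;
    }

The claim is that when no other core is touching the same cache line, those two RMW ops cost roughly as much as ordinary loads/stores on Apple Silicon.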


    > Uncontended acquire-release atomic operations are basically free on Apple Silicon
While I don't doubt you, the poster, specifically, how is this possible? To be clear, my brain is x86-wired, not ARM-wired, so I may have some things wrong. Most of the expense of atomic inc/dec is "happens before", which essentially says that before the current core reads that memory address, it is guaranteed to be updated to the latest shared value. How can this be avoided? Or is it not avoided, but just much, much faster than x86? If the shared value was updated in a different core, some not-significant CPU cycles are required to update the L1 cache on the current core with the latest shared value.


EDIT:

    > some not-significant CPU cycles
should say:

    > some not-insignificant CPU cycles


JSOC - javascript on a chip


You need an AWS Region for that...


In the future, we’ll decide HTML, CSS, and JS are too much of an inconsistent burden; so every website will bundle their own renderers into a <canvas> tag running off a WASM blob. Accessibility will be figured out later - just like it was for the early JavaScript frontends.

I am looking forward to the HTML Frameworks explosion. You thought there were too many JS options? Imagine when anyone can fork HTML.


<canvas> is already a middle finger in the direction of accessibility. You don't need wasm to put blind people at an extra disadvantage. SVG Accessibility anyone? No? What a surprise. Classical web accessibility has basically ended. We (blind people) are only using sites which are sufficiently old to be still usable.


I'm genuinely trying to do something about <canvas> element accessibility. Whether it's enough ...? Probably not. But if I can do the work to try and show that <canvas> elements can be made more accessible, then there's no excuse for developers working on far more popular JS canvas libraries not to make an attempt to better my efforts.

I do strongly agree that <canvas> elements should not be used to replace HTML/CSS! My personal web hierarchy is 1. HTML/CSS/images; 2. Add (accessibility-friendly) JS if some fancy interaction is useful; 3. More complex - try SVG/CSS; 4. use <canvas> only if nothing else meets the project requirements.

https://github.com/KaliedaRik/Scrawl-canvas


You are blind? Could you perhaps point me to a good resource for making my websites or apps more accessible, perhaps even for testing them in that regard?

I’ve found some resources but when I look at them I also hear stories of blind people saying these guidelines only make things worse.


Well, I am not a web dev... At least, my know-how ends when SPAs begin. All I can point you to are the WCAG, but I am sure you already know about them...

Regarding the vague criticism you mention, I'd need something more concrete to tell you if the rumors are truish...


Ah my bad. Yes I was aware of the WCAG, but I also read some criticisms regarding them. I guess it’s still a good starting point then, thanks!


There has been some exploration around developing a JavaScript API for accessibility. If implemented, that would allow <canvas> renderers to be accessible. I hope people will consider that a blocker for shipping canvas renderers, but we'll see.


Deaf person here working full time to try and make some consumer websites not terrible at the least.


Great, thanks. Keep up the good work, we need everybody motivated!


Why stop there? LLMs will free us from the shackles of having to ship actual code, instead we'll ship natural language descriptions and JIT them at runtime. It may use orders of magnitude more resources and only work half of the time but imagine the Developer Velocity™


The LLM created code will then be consumed by my AI agent which will rewrite the application to filter all of the bullshit and be fit for my minimalist preferences like a Reader Mode for CRUD apps.


In fact, with AI becoming more powerful, the <canvas> tag might soon become even more viable; because nobody will need ARIA tags or similar to tell them what’s on screen. The AI screen reader will look at the website as a whole and talk to the user. With accessibility no longer required, and with any UI being just a dumb framebuffer, we’ll finally see perfect chaos.


And blind people will be the first test subjects for the "we see everything you read" project. Sweet. A small enough group that has no way out. Besides, after the initial giveaways, imagine the revenue if you can charge for every single pageview.


can't wait for all the imaginary features


You can hallucinate them right now already. Just ask WebGPT.


The state of web deployment in 2025 is the universe punishing me for calling java applets and other java web deployment tech "heavyweight" back in the day.


What web dev areas are important to you?


> every website will bundle their own renderers into a <canvas> tag running off a WASM blob

Isn't that Flutter?


Not that I intend to scale this in any way, but I'm working on an in-game UI rendered on the canvas, and I'm thinking I might be able to hack something together based on this youtuber's library and excellent explainer video [0]. The thought had definitely occurred to me that if someone wanted to really roll up their sleeves and maintain a JS port of the library, it would provide a translatable UI from native C to native JS and back. At least, I can imagine a Vite/Webpack-like CLI that reads the C implementation and spits out a JS implementation.

Of course, I could also imagine one that reads the C and provides the equivalent html/css/js. And others might scoff "why not just compile the whole C app into wasm", which would certainly be plenty performant in a lot of cases. So I guess I don't know why it isn't already being done, and that usually means I don't know enough about the problems to have any clue what it would actually take to make such things.

In any case, I'm also looking forward to a quantum leap in web app UI! I'm not quite as optimistic that it's ever going to happen, I guess, but I can see a lot of benefit, if it did.

[0]https://www.youtube.com/watch?v=by9lQvpvMIc


I'm thinking about this space now. Ideally, I want a new browser-like platform with stricter security properties than browsers but better out-of-the-box rendering capabilities.


You jest, but isn't this Web Components? Or alternatively, Flutter?


Web Components was too verbose and nobody could figure it out. Flutter is just the beginning of the newest scheme by RAM manufacturers to bloat our RAM usage. We’ve stagnated at 8GB on midrange computers for too long.


Web Components aren't that bad, but they could definitely use a DX makeover.

For simple components, I much prefer them to firing up the React ecosystem.


Soon it'll be all 3D content anyway... the old world of a graph of documents is going away. The web breathed a sigh of relief when Apple's Vision Pro bombed.


Speaking of CRUD, would Apple’s on-chip memory have significant advantages for running Postgres vs a threadripper with a mobo full of ram?

It seems like vertical scaling has fallen out of fashion lately, and I’m curious if this might be the new high-water mark for “everything in one giant DB”.


Better get to the bottom of the mystery surrounding Apple's patents on LPDDR ECC, or you will have to make a leap of faith that your database on their chips won't wind up cruddy in a Bad Way. All we have now are assumptions and educated guesses about what they may be doing. It's also going to be an issue with the AMD 395+ and Nvidia+MediaTek GB10 (but I would assume NO ECC on those SoCs, based on their history).

It may only be a few mm to the LPDDR5 arrays inside the SoC, but there are all sorts of environmental/thermal/power and RFI considerations, especially on tiny (3-5nm) nodes! Switch on the numerical control machine on the other side of the wall from your office and hope your data doesn't change.


There are already big servers designed for huge single databases, for example the 8-socket Xeon types. Tbh I don't understand exactly why RAM is such a concern, but these machines have TBs of it.


Woah, 8x Xeon CPUs on a single motherboard. That is a new record for me.

I found one here from Supermicro: https://www.supermicro.com/en/products/motherboard/X13OEI-CP...

Has anyone seen one of these in action? What was the primary use case? A monolithic database server?


I think a bigger business case is virtual machine hosting. Say one of these is maxed out (8 Xeons with 56 cores each, i.e. 448 cores, and 32 TB of memory) and divided into 1,000 machines: you could run each VM with 40% CPU utilization and 3 GB of memory. Considering many VM offerings have less RAM (and adding a bit of overselling on top with regards to CPU), it could probably house over 2,000 VMs.


You can do that more cheaply with separate machines. The use case for this mega one really is a monolithic DB or server.


I'm not sure how this would impact the server market in any way, considering that EPYC/Threadripper has supported 4 TB for over 5 years now.

Is it the usual Apple distortion effect where fanboys just can't help themselves?

It's definitely a sizeable amount of RAM though, and definitely enough to run the majority of websites running on the web. But so would a budget Linux server costing maybe 100-200 bucks per month.


The question is about embedded DRAM, not trying to put a Mac in the data center. I am unaware of an apples to apples comparison here, but on the same Intel and AMD platform there can be a performance increase associated with embedded high speed LPDDR5 vs something on an SODIMM, which is why CAMM is being developed for that space.

I would be interested as well in what an on chip memory bank would do for an EPYC or similar system since exotic high performance systems are fun even if all I’ll ever touch at this point is commodity stuff on AWS and GCP.


He edited his comment. The previous version did reference the 512 GB being so big that it'd be a game changer for servers.


Yeah, 512GB was a game changer for servers... with DDR3...

And that wasn’t even where it topped out, there were servers supporting 6TB of DDR3 in a single machine. DDR4 had at least 12TB in a single (quad-CPU) machine that I know of (not sure if there were any 96*256GB DDR4 configs). These days, if money’s no object, there exist boards supporting 24TB of DDR5. I think even some quad-CPU DDR2-era SKUs could do 1.5TB. 512GB is nothing.

(Not directly in response to you, just adding context.)


While I did make a couple of cosmetic edits within a few minutes of posting (before there were any replies), even the original was referring to the speed of the memory ("on-chip"), not its size.

You misunderstood my post, and I don't appreciate the tone of your reply.


I didn't appreciate you removing everything I responded to either, replacing it with something making my comment look entirely out of context.

While I believe you that you meant to write about the different performance profile of on-chip memory, that's not what you did at the time I wrote my reply. What you actually did write was how 512 GB of RAM might revolutionize e.g. database servers. Which I addressed.

And if you hadn't written that, I wouldn't have written my comment either, because I'm not a database developer who could speculate on performance side-grades of that kind (less memory, but closer to the CPU).


This is ridiculous, I changed like 3 words. While I did originally mention 512GB, the context (“on-chip”) made it clear I was referring to the speed, not the size.


Will it run Discord?


They should make a “webdev” edition with like 4 GB.


Raspberry Pi 4GB


The Mac Mini M2.


Chrome has to run on Chromebooks, quite a few of which are still-supported models with 4GB of non-upgradeable RAM.


So that means it can run with 4GB. Is there a way to block it from using more?


If you have unused ram, why would you want an app not to use it?


Yes, but I don't want one app to use all of it.


You could try to use cgroups to accomplish that.
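
On Linux you could do it through the cgroup v2 filesystem; here's a rough C sketch (the group name, the 4G cap and the PID are illustrative, and it needs root or a delegated cgroup):

    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    /* Create a cgroup, cap its memory at 4 GiB, and move one process into it. */
    static int write_str(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f)
            return -1;
        int ok = fputs(value, f) >= 0;
        return (fclose(f) == 0 && ok) ? 0 : -1;
    }

    int main(void)
    {
        pid_t browser_pid = 12345;   /* hypothetical PID of the browser */
        char pid_str[32];

        mkdir("/sys/fs/cgroup/browser", 0755);
        write_str("/sys/fs/cgroup/browser/memory.max", "4G");

        snprintf(pid_str, sizeof pid_str, "%d", (int)browser_pid);
        write_str("/sys/fs/cgroup/browser/cgroup.procs", pid_str);
        return 0;
    }

In practice you'd more likely just echo into those files from a shell, or launch the browser under something like systemd-run with a MemoryMax= limit; the mechanism underneath is the same.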


Now wouldn't that be the dream.


Run it in a VM.


These chromebooks won’t run chrome, they’ll meander it.


I wouldn't even call it meandering.

Know that scene from one episode of Aqua Teen Hunger Force where George Lowe (RIP) is a police officer and has his feet amputated, so he drags himself while pursuing a suspect?

Yeah. It does that.


Hmm, that hasn't been my experience. My MediaTek 4GB Chromebook is surprisingly snappy (and gets incredible battery life, better than my MacBook that cost 10x as much). It starts to slow down a bit if I go over a dozen tabs while having a video playing, but otherwise it's solid.

I can even use VS Code remote on it in a pinch, though that's pushing it...


That’s almost the full DeepSeek R1!


Almost is a painful word in this case. Imagine if it could actually run R1. They'd make so much money. Falling short by a few dozen GB is such a shame.


My first thought was, "what does it look like fully specced out? 512 GB RAM cannot be cheap." Fully specced out it's ~$15k. Now, I bet that'd be a fine $15k AI machine, but if I wanted a CPU AI rig, a cobbling of multi-core motherboards could get higher performance at a lower cost, and/or some array of used Nvidia cards. The good news is 3 or 4 years from now hardware specs such as this will be much cheaper, which is exciting.


512GB only available on M3


$10k and up


Who do you think buys these? :)


Render farms / animation studios.

We had some hefty rigs at the last studio I worked at.


Are these really cost-effective for that use case?


Not really. For smaller scenes you'd use an Nvidia GPU; for larger scenes you'd probably save money using normal servers.


You run a render farm made of macs?


Did. Until I accidentally locked all the artists out of their work when I went out one lunch break. Screwed up a firewall config.

The old Xeon stations were powerhouses.


I come here for the tech news, but also the assmad potshots you guys always take at JS. Never change, HN.


If legitimate complaints about your fave language hurt your feelings, then how do you survive code reviews?


Not OP, and not my favorite language, but I don’t see how “Apple ships large amount of RAM in expensive workstation” is a legitimate complaint about any language. It isn’t even in the same universe of topics. Completely off-topic JS (and Rust!) drama permeating every single discussion isn’t something that happens in code review. It’s very much an expression of the HN community and its culture. And it’s really tiresome, especially when there are both better complaints and better topical venues for these languages and more.


Everyone knows that an Electron app consumes a lot of RAM. Take Slack, for example: running Slack in a browser tab uses less RAM than running the Electron app for Slack.

The joke being that Apple realized that so many apps are built in Electron and made a decision to provide a shit ton of RAM just to handle Electron. It seems very on point to the discussion.


A joke whose punchline can be and frequently is retrofitted to any setup… isn’t a particularly funny joke.


A joke is not deemed funny by everyone who hears it. Those who do find it funny enjoy it.

At this point, it's more satirical than haha funny. Electron is so bloated that it requires way more RAM than, say, native apps. Poking fun at its inefficiencies isn't going to win Last Comic Standing, but it is valid criticism even if delivered in a humorous manner. Just because it's stuck in your craw doesn't mean the rest of us are in the same place as you, yet you are unwilling to accept that your view isn't the only view.


It’s not stuck in my craw. It permeates damn near every discussion no matter how remote the connection. It detracts from actual discussion of the actual topic in the process.

I actually almost totally agree with the perspective the “joke” comes from! I just don’t see it as a topic that warrants so frequently disrupting otherwise interesting discussion.


Something I’ve been surprised to find over the years working at software companies is just how many C++-writing, Linux-using senior engineers there are who simply do not understand how allocation works and what htop is actually telling them.

I really think a sizable chunk of people in the “omg my RAM!” camp are basing it on vibes, backed up by a misread of reported usage.

This reminds me of a long time ago when I was trying to figure out why the heck my Intel Mac was allocating all my RAM and most of my swap to Preview or Chess.


I’m surprised how many people bring up “Erm, that memory is actually not being used” as if there aren’t plenty of knock-on effects from how memory pressure is actually handled. For example, if I keep chrome open long enough, my builds slowly use fewer and fewer threads because the build thinks there’s less memory available, so I have to periodically close chrome and reopen and restore last session.

It’s true reported memory allocation does not equal actual memory used and that’s very clever of everyone who brings it up, but it does actually cause real annoyances.


>so I have to periodically close chrome and reopen and restore last session.

I thought uBlock was forced out of Chrome months ago... how are you people still using it? I switched back to Firefox a couple of years ago already, even if it's occasionally painful.


Yeah, I actually use Firefox almost exclusively; I only use Chrome when websites mysteriously don't work in Firefox, and 99% of the time it's lack of Firefox support.

But in my example I was thinking of a particular 2-month stretch where this kept biting me and I was using Chrome at that point. In terms of memory usage, Firefox is no better though (at one point it was, but not any more).

Now I'm afraid of saying "memory usage" lest someone pops out to comment "that's not how memory works" like whack-a-mole.


Don't visit sites with ads.


You don't visit youtube?


I do, but I don't get ads because of my subscription to that effect.


uBlock Origin Lite


Maybe we're all idiots who look at some irrelevant numbers and declare the sky is falling. Or maybe we notice everything running really slowly because the computer is constantly paging.


Top-ish utilities should just be preconfigured to only show RSS unless you absolutely need to know what's virt. A lot of griping would diminish.

There are many specialized allocation patterns -- especially for larger system things like DBs, virtual machines / runtimes, etc. -- that will mmap large regions and then only actually use part of them. Many angry fingers get pointed, often without justification.
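
The mmap point is easy to see directly; a toy C sketch (sizes arbitrary, assumes 4 KiB pages): the mapping inflates VIRT immediately, but RSS only grows for the pages you actually touch.

    #define _DEFAULT_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 1UL << 30;   /* reserve 1 GiB of address space */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;

        /* Touch only the first 10 pages: VIRT shows the full ~1 GiB,
           but RSS grows by only ~40 KiB. */
        for (size_t i = 0; i < 10; i++)
            p[i * 4096] = 1;

        pause();   /* park here so you can inspect it in top/htop */
        return 0;
    }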


I think the griping occurs when things actually slow down, or when the system's perceived available memory cripples resources. Maybe they point the finger at the wrong place due to misreading virtual memory, but I doubt people are getting angry when their system is running smoothly.

And this attitude of "oh, memory usage problems are just a misreading of top" promotes poor memory management hygiene. I think there's a strong argument that that's all fine in server applications / controlled environments, but for desktop environments this attitude causes all sorts of knock-on effects.



