
I had an X1 Carbon like this, only it'd crash for no apparent reason. The internet consensus, which Lenovo wouldn't own up to, was that the i7 CPUs were overpowered for the cooling, so your best bet was either underclocking them or getting an i5.

There's something that feels seductive and clever about taking the contrarian, usually pessimist stance—like you're the only one who sees things for how they really are.

> 6 GB/s

Samsung is selling NVMe SSDs claiming 14 GB/s sequential read speed.


> 14 GB/s

Yes, those numbers are real, but only in very short bursts of strictly sequential reads; sustained speeds will be closer to 8-10 GB/s. And real workloads will be lower than that, because they contain random access.

Most NVMe drivers on Linux actually DMA the pages directly into host memory over the PCIe link, so it is not actually the CPU that is moving the data. Whenever the CPU is involved in any data movement, the 6 GB/s per core limit still applies.


I feel like you are pulling all sorts of nonsense out of nowhere. Your numbers seem made up: 6 GB/s is outlandishly tiny, and your justifications don't really wash. Zen 4 here shows single-core bandwidth, at absolute worst, dropping to 57 GB/s, basically 10x what you are spinning. You are correct that memory limits are problematic, but we have also had technology like Intel's Data Direct I/O (2011) that lets the CPU talk to peripherals without going through main memory at all (big security disclosure on that in 2019, yikes). AMD is building what it calls "Smart Data Cache Injection," which similarly keeps memory speed from being the gating factor. So even if you do divide the 80 GB/s memory speed across 16 cores on desktop and get 5 GB/s, that still doesn't have to tell the whole story. https://chipsandcheese.com/p/amds-zen-4-part-2-memory-subsys... https://nick-black.com/dankwiki/index.php/DDIO

As for SSDs: for most drives, it's true that they cannot sustain writes indefinitely. They often write in SLC mode first, then have to re-pack things into denser storage configurations, which takes more time to write. They'll do that in the background, given the chance, so it's often not seen. But write, write, write and the drive won't have the time.

That's very well known, very visible, and most review sites worth their salt test for it and show sustained write performance. Some drives are much better than others. Even still, a Phison E28 will let you keep writing at 4 GB/s until just before the drive is full. https://www.techpowerup.com/review/phison-e28-es/6.html

Drive reads don't have this problem. When review sites benchmark, they are not benchmarking some tiny nanosliver of data. Common benchmark utilities will test sustained performance, and it doesn't suddenly change 10 seconds in or 90 seconds in or whatever.

These claims just don't feel straight to me.


6 GB/s may well be in the ballpark for throughput per core whenever all cores are pegged to their max. Yes, that's equivalent to approx. 2-3 bytes per potential CPU cycle; a figure reminiscent of old 8-bit processors! Kind of puts things in perspective when phrased like that: it means that for wide parallel workloads, CPU cycle-limited compute really only happens with data already in cache; you're memory bandwidth limited (and should consider the iGPU) as soon as you go out to main RAM.
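
The "approx. 2-3 bytes per potential CPU cycle" figure is easy to sanity-check. A quick sketch, using the 6 GB/s per-core number from the thread and an assumed ~2.5 GHz sustained all-core clock (both illustrative, not measured):

```python
# Back-of-envelope check of "2-3 bytes per CPU cycle".
# Both inputs are assumptions for illustration.
per_core_bw = 6e9   # bytes/s per core, the figure under discussion
clock_hz = 2.5e9    # assumed sustained all-core clock, cycles/s

bytes_per_cycle = per_core_bw / clock_hz
print(f"{bytes_per_cycle:.1f} bytes per cycle")  # 2.4
```

Swap in your own clock speed and measured per-core bandwidth to see where your machine lands.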

DMAing as opposed to what?

Issuing extremely slow PCIe reads? GP probably meant “all” rather than “most.”

yep that makes more sense.

Letting the CPU burn cycles by accessing memory mapped device memory instead of using DMA like in the good old days? What even is this question?

> accessing memory mapped device memory instead of using DMA like in the good old days

That's not actually an option for NVMe devices, is it? The actual NAND storage isn't available for direct memory mapping, and the only way to access it is to set up DMA transfers.


I think he means simply that the CPU speed bottleneck doesn't apply, since DMA doesn't 'transfer data' via the CPU directly.

It's phrased a bit weird.


What? NVMe doesn't care about sequential access. If that slows you down, it is the fault of the operating system and the APIs it provides.

In Linux you can use direct IO or RWF_UNCACHED to avoid paying extra for unwanted readahead.
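
For the direct I/O route, here's a minimal sketch in Python, assuming Linux. RWF_UNCACHED (via preadv2) only exists on very recent kernels, so this uses the older O_DIRECT flag instead; O_DIRECT requires block-aligned buffers and lengths, and some filesystems (e.g. tmpfs) reject it, so the sketch falls back to a normal buffered read. The function name and 4096-byte alignment are made up for illustration:

```python
import os, mmap

def read_uncached(path, size, align=4096):
    """Read up to `size` bytes while trying to bypass the page cache.

    Sketch only: uses O_DIRECT (Linux-specific), falling back to a
    plain buffered read if the flag is unavailable or the filesystem
    rejects it (e.g. tmpfs).
    """
    direct = getattr(os, "O_DIRECT", 0)
    if direct:
        try:
            fd = os.open(path, os.O_RDONLY | direct)
        except OSError:
            direct = 0  # filesystem doesn't support O_DIRECT
    if not direct:
        fd = os.open(path, os.O_RDONLY)
        try:
            return os.read(fd, size)
        finally:
            os.close(fd)
    try:
        # Anonymous mmap gives page-aligned memory, satisfying
        # O_DIRECT's alignment rules; round the length up too.
        buf = mmap.mmap(-1, ((size + align - 1) // align) * align)
        n = os.readv(fd, [buf])
        return bytes(buf[:min(n, size)])
    finally:
        os.close(fd)
```

In real code you'd keep the aligned buffer around and reuse it rather than allocating per read.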


Sequential read speed is attainable while still having a (small) number of independent sequential cursors. The underlying SSD translation layer will be mapping to multiple banks/erase blocks anyway, and those are tens of megabytes each at most (even assuming a 'perfect' sequential mapping, which is virtually nonexistent). So you could be reading 5 files sequentially, each only producing blocks at 3 GB/s. A not totally implausible access pattern for e.g. an LSM database or object store.
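
That access pattern (a handful of independent sequential cursors) can be sketched like this; the file names, 1 MiB chunk size, and one-thread-per-cursor design are illustrative assumptions, not anything prescribed:

```python
# Several independent sequential readers, each streaming one file in
# large chunks -- roughly the cursor pattern an LSM compaction or
# object store might issue.
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20  # 1 MiB sequential reads per cursor (assumed size)

def stream_file(path):
    total = 0
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)  # stand-in for real processing
    return total

def stream_many(paths):
    # One thread per cursor: the kernel sees several concurrent
    # sequential streams rather than one.
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        return sum(pool.map(stream_file, paths))
```

Each cursor on its own may only see a fraction of the drive's headline sequential rate, but together they can approach it.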

Any code that's reading/writing to SSD needs to use multiple cores. The SSD is faster than a single CPU core.

That doesn't sound right. A single core should be more than fast enough to saturate IOPS (particularly with io_uring) unless you're doing something insane like a lot of small writes. A write of 16 MiB or 32 MiB should still be about one SSD op; more CPUs shouldn't help (and in fact two 16 MiB ops should be slower than one 32 MiB op).
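
The arithmetic behind "large writes barely dent the drive's IOPS budget" is quick to show. Assuming a 4 GiB/s target bandwidth and 16 MiB requests (both numbers picked for illustration):

```python
# How many requests/second does it take to move 4 GiB/s in
# 16 MiB chunks? Far fewer than any modern SSD's IOPS limit.
bw = 4 * 2**30   # target write bandwidth, bytes/s (assumed)
req = 16 * 2**20 # bytes per write request (assumed)

iops = bw / req
print(f"{iops:.0f} requests per second")  # 256
```

A few hundred large requests per second is trivial for one core to issue, which is the point: small random writes are what push IOPS into the millions and make submission CPU-bound.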

Do you want to process that data, or just let it hang out in memory?

SSDs are not faster than a DMA core.

Be careful not to confuse using the material and distributing it. There are open legal cases sorting out what fair use means for generative AI. Distribution (seeding in the case of torrents) of this material isn't legal. It got Meta in trouble, and it's getting Anna's archive in trouble.

Apple just reduced Vision Pro production, but Liquid Glass was in motion well before that. What leaves me scratching my head is that I never got the impression Apple believed in Vision Pro. It launched because, after years of research, management wanted to see if the effort was worth continuing to invest in, but that wasn't a vote of confidence.

I'll have to second this. It's not even on Apple's homepage! I hadn't heard it mentioned for months before today. It had its niche share of users who actually found it useful, but apart from them it seems that the world is not ready for spatial computing (or maybe current spatial computing isn't ready for people, who knows?).

The hardware seems good, but with it being tied to the Apple ecosystem there's just no way.

I'd buy one if I could use it with my Linux (KDE) workstation, but there's no chance I'm going to be using it via a mac.


I'm hoping the new Valve headset will be, like, 60% of what the Apple Vision Pro is. My boss got one on launch day, and it really is premier hardware: visuals almost exactly like seeing the thing you're looking at in real life, and the best hand sensing / interactivity I have experienced, even though it still had flaws.

But being tied to Apple's ecosystem, not being really useful for PC connection, and the fact that at least at the time developers were not making any groundbreaking apps for it all makes it a failure in my book.

If Valve can get 60% of that and be wirelessly tethered to a PC for VR gaming, then even if they charge $1800 for their headset it will likely be worth it.


If it weren’t $3500+ I’d love one. The world isn’t ready for that price point.

Exactly. More expensive than a high end desktop or laptop while having less useful software than an iPad. No thanks.

If it were around the $500 point I’d pick one up in a heartbeat. Maybe even $1000. But $3500 is nuts for how little they’re offering. It seems like a toy for the ultra rich.

I assumed the price would eventually come down. But it seems like they’ll just cancel the project entirely. Pity.


I’m assuming Vision Pro is viewed as what the Newton was to the iPhone. It will provide some useful insight way ahead of its time but the mainstream push will only happen after a number of manufacturing breakthroughs happen allowing for a comfortable daily driver UX. Optics and battery tech will need multiple generational leaps to get to a lightweight goggle / sunglasses form factor with Apple-tier visuals, tracking, and battery life…

Magic Leap 2 and HoloLens 2 proved that we still haven't cracked the code on AR/XR. Similar price point, plenty of feasible enterprise use cases for folks willing to pony up money to hire Unity or Unreal devs. And I'm sure there are enough of them tired of being flogged to death by the gaming industry. But they both went splat.

It's going to take a revolution on miniaturization AND component pricing for XR to be feasible even for enterprise use cases, it seems.


Apple can afford to improve and cheapify this thing for a decade.

That’s sort of what they did with the Watch.

It has incrementally improved, and gotten cheaper, to the point that I now see them everywhere. When they first came out, they were pretty expensive. Remember the $17,000 gold Watch (which is now obsolete)? The ceramic ones were over a couple of grand.

But the dream of selling Watch apps seems to have died. I think most folks just use the built-in apps.


The $17,000 Apple Watch was a (rather silly) attempt to compete in the high end watch space. However, they also launched the base "Sport" model at US$349.

Not really anything like the watch, the existence of a stupidly expensive "luxury" version doesn't change the fact that the normal one started at $350.

I think the current rumor is that development of a cheaper XR headset has been shelved in favor of working on something to compete with Meta's AI glasses.


I have a vision pro (obtained on day 1 for development purposes), and have given demos of it to a number of non enthusiast/non techie people.

All of them immediately hate that it’s bulky, it’s heavy, it messes with your hair, messes with your makeup, doesn’t play well with your glasses, it feels hot and sweaty. Everyone wants to take it off after 5-10 minutes at most, and never asks to try it again (even tho the more impressive 3D content does get a “that’s kinda cool” acknowledgment).

The headset form factor is just a complete dud, and it’s 100% clear that Apple knew that but pushed it anyway to show that they were doing “something”.


Did they commit to additional production of the Vision Pro? I read their announcement as quiet cancellation of VR products. They announced some kind of vaporware pivot, but I didn't read a single analyst projection that Apple ever intended to bring another wearable to market. Customer usage statistics of the Vision Pro are so low Apple hasn't even hinted about reporting on them.

Wearable products, outside of headphones, have a decade-long dismal sales record and even more abysmal user retention story. No board is going to approve significant investment in the space unless there's a viable story. 4x resolution and battery life alone is not enough to resuscitate VR/AR for mass adoption.


> outside of headphones, have a decade-long dismal sales record

Outside of headphones and watches


Do they sell many Apple Watches? Maybe it is a euro thing, but I only very rarely see people wearing one.

I would see nine Garmins for one Apple Watch, for instance, and many more people wearing cheap Casios or no watch at all.


I dunno. I see them all the time here (Seattle). Wikipedia estimates 267 million sold as of 2023.

I see mostly Apple watches, a few Samsungs, a small smattering of Pixel watches, and then rarely other brands like Garmin and what not around me.

That's probably regional then. In my area most people using watches nowadays are usually into sports.

I must admit I don't understand the point of a smartwatch when most people have their smartphone in hand a significant amount of the day, and smartphone screen sizes have been increasing over the years because people want to be able to doom-scroll pictures and videos and interact with WhatsApp all day. I don't know how you can do that from a tiny screen on a watch.

Those like me who don't subscribe to that way of living don't want distractions and notifications, so they use regular watches and would see a device that needs to be charged every few days as a regression.

Some people said payments, but I see people paying with their smartphone all the time; since they have it at hand or in a pocket anyway, having it in a watch doesn't look like a significant improvement. I'd be curious to see a chart of smartwatch adoption by country.


Apple Watches have the highest market share in a lot of the world's markets. According to this analysis [1], watchOS (Apple Watch) makes up around half of all smartwatches used in Europe. Global sales put Apple at around 20-30% market share, with brands like Samsung and Garmin around 8% [2]. I haven't found good US-only statistics to show what the market share of watchOS is, but I'd imagine it's probably close to 50% or more.

I do agree though, anecdotal experiences will vary depending on the kind of people you hang out with. For the people I know heavily into running and cycling, brands like Garmin are over represented. Meanwhile lots of other consumers practically don't even know these are options.

[1] https://www.mordorintelligence.com/industry-reports/europe-s...

[2] https://scoop.market.us/smartwatch-statistics/


I'm in India and the Xiaomi watches are everywhere (they probably don't sell those in the States/EU?) But also Apple and Samsung.

Claiming watches, phones and absolutely everything else they make are everywhere in Poland.

They need to work out how to drop the price. I want one, but really can't justify that price.

Recent moves have convinced me that Apple is getting ready to push Vision Pro substantially harder.

In recent weeks, I’ve been getting push notifications about VP.

They hired Alex Lindsay for a position in Developer Relations.

And there’s the M5 update.

Just remember, it's a lot cheaper than the original Mac (inflation-adjusted). Give it 40 years – hell, given the speed of change in tech these days, it won't even take 10.


I think they bought the metaverse hype and hurried it up. If only they had put half the energy into AI, we'd have a Create ML with something other than YOLOv2 in 2026.

DVD also supports 352x480. These pixels are very non-square.

Why would you want this? VHS. NTSC has 480-ish visible scanlines, but VHS only has bandwidth for about 350 samples per line.
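
Just how non-square those pixels are falls out of the display geometry. A quick calculation, assuming the full 352x480 frame fills a standard 4:3 NTSC display:

```python
# Pixel aspect ratio of 352x480 DVD video on a 4:3 display.
display_aspect = 4 / 3
width, height = 352, 480

# Each sample has to cover this much more width than height.
pixel_aspect = display_aspect / (width / height)
print(f"pixel aspect ratio ~ {pixel_aspect:.2f}")  # ~1.82
```

So each stored pixel is nearly twice as wide as it is tall when displayed.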


I remember the HDD shortage after flooding in Thailand. There was a price surge for a year or so, capacity came back online, and the price slowly eased. If AI crashes, prices might quickly collapse this time. If it doesn't, it'll take time, but new capacity will come online.

https://en.wikipedia.org/wiki/Bullwhip_effect


I would think so because fab capacity is constrained, and if you make an on-die SoC with less memory, it uses fewer transistors, so you can fit more on a wafer.

But bigger chips mean lower yields because there's just more room for errors?
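
Yes, and the usual way to ballpark it is the classic Poisson yield model, yield = exp(-D * A), where D is defect density and A is die area. A toy sketch with invented numbers (the defect density and die areas here are illustrative only):

```python
# Poisson yield model: doubling die area more than doubles the
# expected defect count, so yield drops superlinearly.
import math

def poisson_yield(area_mm2, defects_per_mm2=0.001):
    # Probability a die of the given area has zero defects.
    return math.exp(-defects_per_mm2 * area_mm2)

small, big = 100, 200  # die areas in mm^2 (assumed)
print(f"small die yield: {poisson_yield(small):.1%}")  # ~90.5%
print(f"big die yield:   {poisson_yield(big):.1%}")    # ~81.9%
```

On top of the per-die yield hit, bigger dies also tile a fixed-size wafer less efficiently, so both effects cut the number of good chips per wafer.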

This doesn't scale, though. It can work if you're a superpower or a bloc, but most countries don't have enough resources to each run their own cloud, mines, energy production, and food production.

> We’re building Helix, an AI platform where autonomous coding agents work in cloud sandboxes. Users need to watch their AI assistants work. Think “screen share, but the thing being shared is a robot writing code.”

This feels like a fast dead end. Agents will get much faster pretty quickly, so synchronous human supervision isn't going to scale. I'd focus on systems that make high-signal asks of humans asynchronously.

