
Building complex apps is hard, and user-facing, feature-rich apps especially so. It takes a lot of engineering effort, but also management (which implies some kind of corporate structure). Coincidentally, it also doesn't align well with open (or any) standards.

There is also a technique called logarithmic depth buffer (which should be self-explanatory): https://threejs.org/examples/?q=dept#webgl_camera_logarithmi...


That's a pretty stunning visualization too


I wasn't aware that a logarithmic depth buffer could be implemented in WebGL, since it lacks glClipControl(). It's cool that someone found a way to do it eventually (apparently by writing to gl_FragDepth).
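
For reference, the gl_FragDepth approach looks roughly like the sketch below (my own illustration in WebGL2-style GLSL embedded in TypeScript, not the actual three.js source; the uniform/varying names are invented):

    // Sketch only: a WebGL2 (GLSL ES 3.00) fragment shader that overrides the
    // hardware depth with a logarithmic value. uLogDepthFC would be
    // 2.0 / log2(farPlane + 1.0); vFragDepth is 1.0 + clip-space w,
    // interpolated from the vertex shader.
    const logDepthFragmentShader = /* glsl */ `#version 300 es
    precision highp float;

    in float vFragDepth;        // 1.0 + gl_Position.w from the vertex stage
    uniform float uLogDepthFC;  // 2.0 / log2(farPlane + 1.0)
    out vec4 outColor;

    void main() {
      outColor = vec4(1.0);
      // Writing gl_FragDepth is what disables early-Z (see the reply below).
      gl_FragDepth = log2(vFragDepth) * uLogDepthFC * 0.5;
    }
    `;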


> apparently by writing to gl_FragDepth

If they do that, it disables the early-Z rejection optimization implemented in most GPUs. For some scenes, the performance cost of that can be huge: when rendering opaque objects in front-to-back order, early-Z rejection sometimes saves many millions of pixel shader calls per frame.
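
For context, the usual way to benefit from early-Z with opaque geometry is simply to sort draws by camera distance before submitting them; a tiny sketch, with a hypothetical Draw type:

    // Sketch: submit opaque draw calls front-to-back so the GPU's early-Z test
    // can reject occluded fragments before the pixel shader runs.
    // (The Draw interface is hypothetical, purely for illustration.)
    interface Draw {
      distanceToCamera: number;
      submit(): void;
    }

    function drawOpaquePass(draws: Draw[]): void {
      [...draws]
        .sort((a, b) => a.distanceToCamera - b.distanceToCamera) // nearest first
        .forEach((d) => d.submit());
    }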


Not to mention that floating-point numbers are already roughly logarithmically distributed. A logarithmic distribution matters most when values span widely differing orders of magnitude, so the piecewise-linear approximation of the logarithm that floats give you is good enough for depth buffers.
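
A quick way to see that piecewise-linear-log property (my own illustration; the helper name is invented):

    // The raw bits of a positive IEEE-754 float, read as an unsigned integer,
    // are approximately 2^23 * (log2(x) + 127): exact at powers of two,
    // linearly interpolated in between.
    const f32 = new Float32Array(1);
    const u32 = new Uint32Array(f32.buffer);

    function bitsAsLog2(x: number): number {
      f32[0] = x;
      return u32[0] / 2 ** 23 - 127;
    }

    for (const x of [0.001, 0.1, 1, 3, 100, 1e6]) {
      console.log(x, bitsAsLog2(x).toFixed(4), Math.log2(x).toFixed(4));
    }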


Indeed, logarithmic depth is pretty much useless on modern hardware, but it wasn’t always the case.

On Windows, support for DXGI_FORMAT_D32_FLOAT is required on feature level 10.0 and newer, but missing (not even optional) on feature level 9.3 and older. Before Windows Vista and Direct3D 10.0 GPUs, people used depth formats like D16_UNORM or D24_UNORM_S8_UINT, i.e. 16-24 bit integers. Logarithmic Z made a lot of sense with those integer depth formats.
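
To make that concrete, a rough sketch with made-up near/far values, showing how a 16-bit integer depth buffer spends nearly all of its values on the first few units of a standard (non-logarithmic) perspective depth mapping:

    // Made-up numbers: near = 0.1, far = 10000, standard 0..1 window depth
    // d(z) = far * (z - near) / (z * (far - near)), stored as D16_UNORM.
    const near = 0.1;
    const far = 10000;
    const depth = (z: number) => (far * (z - near)) / (z * (far - near));
    const d16 = (z: number) => Math.round(depth(z) * 65535);

    console.log(d16(1));       // ~58982: 90% of the 65536 values used up by z = 1
    console.log(d16(100));     // ~65470
    console.log(d16(far / 2)); // ~65534: the entire back half of the scene
    console.log(d16(far));     //  65535  collapses into a couple of values
    // Hence distant z-fighting, and why a logarithmic Z was attractive with
    // these integer formats.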


Yeah, I agree, but I guess it's fine for a demo, which otherwise would not have been possible.


> which otherwise would not have been possible

I wonder whether it's possible to implement logarithmic depth in the vertex shader, as opposed to the pixel shader? After gl_Position is computed, adjust the vector to apply the logarithm, preserving `xy/w` to keep the 2D screen-space position.

To be clear, I have never tried that, and there could be issues with that approach, especially with large triangles. I'm not sure it would work, but it might.
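
For what it's worth, a sketch of that idea (untested, names invented): after the usual transform, replace z with a logarithmic term and pre-multiply it by w so the perspective divide cancels out, leaving xy/w alone.

    // Sketch of the vertex-shader-only variant described above (untested;
    // uFcoef and the other names are made up for illustration).
    const logDepthVertexShader = /* glsl */ `#version 300 es
    uniform mat4 uModelViewProjection;
    uniform float uFcoef;  // e.g. 2.0 / log2(farPlane + 1.0)
    in vec4 aPosition;

    void main() {
      gl_Position = uModelViewProjection * aPosition;
      // Pre-multiply by w so the perspective divide leaves
      // log2(1 + w) * uFcoef - 1 as the NDC z; xy and w are untouched,
      // so the 2D screen-space position is preserved.
      gl_Position.z = (log2(max(1e-6, 1.0 + gl_Position.w)) * uFcoef - 1.0)
                      * gl_Position.w;
    }
    `;
    // A likely source of the large-triangle issues: depth now gets
    // interpolated linearly across the triangle, which doesn't match the
    // logarithmic curve for vertices far apart in depth.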


I haven't really studied this either so I could be mistaken, but I think it's because OpenGL does the perspective divide between the vertex and fragment shaders, going from clip space to NDC space (which is in the [-1,1] interval and can only be changed by glClipControl()).

The value in gl_FragDepth, if written to, is final, but gl_Position is not: it still goes through the clip-to-NDC transformation. And since the trick to get the extra precision is to put the depth range into [0,1] instead of [-1,1], this would fail.

So my guess is, it probably wouldn't work on WebGL/OpenGLES without also setting gl_FragDepth, which is, as you mentioned, impractical performance-wise.


At least on my GPU this is extremely slow compared to disabling it.


I'm getting obvious Z-fighting issues on that.


On a Quest 3, the DPI looks somewhat similar to a non-Retina Apple display.

It felt really futuristic to log into my office computer from a synthesized apartment via Virtual Desktop. The latency was fine for typing. Even YouTube video/audio was bearable. But the device itself is still too heavy. The fixed lens system can't be good for your eyes.

Apple Vision Pro is even heavier. Even as a hardcore VR fan, I cannot imagine the current generation of hardware being useful for productivity tasks.


What is the main cause of the heavy weight? Battery? Auxiliary hardware?

I wonder if those could be moved to some package attached to, e.g., your waist and connected via a cable. If done right, it shouldn't be too irritating.


AVIF does not support progressive encoding.


Assuming you meant "de"coding: it does, although I like JPEG XL's more. To me, that point seems like an argument for things being the other way around anyway (i.e. general-purpose, but not a good delivery format).


> Assuming you meant "de"coding

Is there something wrong with the other phrasing? You need to have a progressively encoded image before you can decode it progressively.



I am not sure how exactly "progressive AVIF" works, but it looks like something optional that comes at a cost in the final file size, whereas in JXL some basic progressive encoding is always available (at least for lossy images) and doesn't come at a cost in file size.


iOS and Android allow apps to process push notifications before displaying them.


> what percentage of those billions are correctly using the ridiculously long numbers (60 decimal digits) used to represent identities (WhatsApp calls them "security codes") in that system to ensure they are actually communicating end to end?

Assuming one of those billions of users is a motivated security enthusiast, WhatsApp is not able to perform MITM attacks at scale, as it would be trivial to prove. If WhatsApp decides to MITM your chats, it can't do so retroactively due to the properties of the protocol. If you're a high-profile target, you should verify your keys.


> If WhatsApp decides to MITM your chats, it can't do so retroactively due to the properties of the protocol.

Can't they just set you up as a new device? The user wouldn't know if they left the notification at the default setting.

WhatsApp would not MITM every single user. They would carefully target particular individuals.


Well, it's difficult to say with WhatsApp because it's closed source, so they can do whatever they want.

But let's assume the client app were open source and WhatsApp decided to reset the key for some targeted users. Most users wouldn't notice, but if one did, that would be very bad for WhatsApp. It would be all over the media. That's why it cannot be done at scale.

That's why it cannot be done at scale with Signal either, even if users mostly ignore the "new key exchange" notification. If Signal MITMed conversations and one person managed to prove it, then Signal would be done. That's a pretty strong incentive for them not to do it.


I think they could do literally anything, because it is closed source, including forging random keys or ignoring the notification setting, ...


Cargo is insufficient if a project has cross-language dependencies.


But the M1 did provide a fundamental increase in performance. It had only 4 performance cores, yet it could build my code 1.5 times faster than the mightiest i9 of the time.


> But the M1 did provide a fundamental increase in performance.

No, it didn't. Look at the Mac mini benchmarks.[0] It is a smooth curve of increasing performance from the oldest models up to the M1. Performance is increasing faster, as should be expected, but the increases in performance from model to model are incremental, always resulting in "experts" being surprised and disappointed. The M1 Mac mini is not ten or five times faster than the 2018 Mac mini; it is not even twice as fast. It is 26% faster in multi-core performance and 36% faster in single-core performance. These are increments, not massive leaps in performance.

[0] https://browser.geekbench.com/macs/mac-mini-late-2020


Are you looking at the link you posted? A 36% (quoting your number) single-core increase over the 2018 model would mean a score jump from 1098 to 1493. The real number is 1715 (a 56% increase instead). You may argue that's just incremental too, but what would your threshold be?

And let's be honest: synthetic benchmarks are bullshit. In this thread, you have a number of different people describing how big the perceived performance gains were, and how they didn't feel incremental. You are bending over backwards to try and dismiss those, and I don't get why. My perception, going from a maxed-out 16" Intel MBP to an M1 mini at the time, was nothing short of "holy shit, this thing smokes my $4000 machine". Call it incremental; call it whatever feels right to you, but I know what it felt like. My sample-size-of-1 analysis is: it was not incremental at all.

(An aside: M1 to M2? definitely incremental.)


I agree that claims of 5x and 10x are probably exaggerated a bit. An awful lot of people were trading in a dual core i5 (Air), or a quad core i3 (mini) or i5 (MBP or Mini) for an M1. You could pay more for a six core i5 or i7 in the mini, and those models performed much better than the lower price points...

Coming from the lower end of the product line, the M1 was an incredible upgrade, and the price point really didn't change (I paid $997 for the Intel, $949 for the M1). I swapped an Intel Air for an M1 Air, and could build Go and Node apps in 25% of the time it took the Intel Air... and could do so all day on battery.


A 36% increase between generations is absolutely massive, though.

Prior steps were about 10% or less.


It is an incremental increase in performance compared to the last incremental release. Performance increases should get larger and larger, and arrive more and more frequently; that is how computing technology advances, along an exponential curve, yet bounded by Moore's Law and the limits of what can be done with the technology of the hour. But there was never once a 100% increase, or even a 50% increase, in performance between one model and its immediate next revision. And that many seem to be expecting this is tremendously unrealistic.


A consistent 10% improvement at a regular cadence is already exponential growth. Ever-increasing improvements at an ever-increasing frequency would look exponential even on a log plot. I don't know how you could possibly expect that and then complain that other people are crazy for expecting exponential growth.
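
To put a number on the first sentence (plain arithmetic, values purely illustrative):

    // A steady 10% per generation compounds: roughly 2x after ~7.3 generations.
    const gensToDouble = Math.log(2) / Math.log(1.1);
    console.log(gensToDouble.toFixed(2)); // ≈ 7.27
    console.log((1.1 ** 10).toFixed(2));  // ≈ 2.59x after 10 generations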


For the last 5 years and the next 5, I expect less than 30% performance increases from one generation to the next. But it took 20 years to get to that level. A decade ago it was 15% increases in performance between generations.


I'm not sure if this changes anything here, but when I think about performance, at least for laptops but even for other products, I think not just actual CPU operations but energy efficiency as well.

I do agree that Apple's CPU releases are incremental updates (and mostly always have been), but when I used an M-series Mac for the first time it was a step-change upgrade from the previous gen, even if it's somehow not accounted for in CPU performance metrics (though I think it would be if you accounted for energy as well).


> I think not just actual CPU operations but energy efficiency as well.

Since when? Seriously. Since the first Apple Silicon release, undoubtedly.


Yup. Apple really set the bar. I’ve never expected battery life like this from my laptop. Now that I know I can get the same(or better) performance with much higher efficiency, I’m never going back.

My laptop doesn’t heat up anymore. It doesn’t die. If it has fans, they’re absolutely silent. All this, and I haven’t had to change my workload at all.


No one cared (not entirely true, but for most) until Apple made power efficiency relevant. There were efficient processors prior to Apple Silicon. No one (again, exaggerating) cared until Apple made them care.


> No one cared (not entirely true, but for most) until Apple made power efficiency relevant. There were efficient processors prior to Apple Silicon. No one (again, exaggerating) cared until Apple made them care.

I don't think that's really true.

The major cause of the failure of the MacBook was a lack of power efficiency. Intel just didn't make a performant enough low-power chip to make the concept work outside of a small group of people who value portability over everything else.

But even beyond that, there have been frequent complaints of Apple's laptops running hot. Those complaints don't necessarily show up in benchmark numbers, but I've run across them many, many times.


> The major cause of the failure of the MacBook was a lack of power efficiency

In what world or category did the MacBook fail? It's consistently top-selling and top-rated.


I think he's referring specifically to the most recent machine branded simply MacBook, with no Air or Pro suffix. That was a 12" fanless notebook introduced in 2015, a few years before the MacBook Air got a Retina Display upgrade.


For all of the commenters: I'm fairly sure the MacBook being referred to is the 12-inch MacBook. That was absolutely a failure, due to the lack of a power-efficient and performant CPU.


> For all of the commenters: I'm fairly sure the MacBook being referred to is the 12-inch MacBook.

Yeah, sorry. I should have said "12-inch MacBook". It's unfortunate that Apple's naming in that instance is so confusing.


If by failure you mean the best selling laptop of its model year, sure.


Mac mini is a poor example to use here since it was not thermally throttled to nearly the same extent, if at all, as the MacBook Pro.


Choose any other model. It is true for all of them, always. Though performance gains are getting larger and larger over time, each subsequent release of any model is an incremental performance bump from the last. Not just true with Apple hw, true for all hw. IOW, we have not seen a leap in performance of 100% or even 50%, from one model to its next revision, and idk that we will.


That's just not true. Take any of the last Intel MacBook Pro (maxed-out) models and compare it to a similarly priced M1 Max. The difference in real-world usage is night and day, although a lot of it is caused by the horrible thermal throttling that made the Intel models almost unusable. It was definitely the biggest performance boost I have ever seen going from one generation to the next.

Example: on the last Intel MBP I could barely run Teams with video; the Intel MacBooks (I tried many) immediately got super hot and started throttling to a point that made the machines unusable. The M1 Max doesn't even turn on the fan.


> Take any of the last Intel MacBook Pro (maxed-out) models and compare it to a similarly priced M1 Max.

What you have done is leapfrog a couple of generations there. Compare the benchmarks[0] of one model of the last Intel MBP, say the 2019 13" MBP (MacBookPro15,4), with the very next generation of that model, the 13" 2020 M1 MBP (MacBookPro17,1), and you will see it is an incremental increase in performance: a 36% increase in single-core and a 40% increase in multi-core performance. Impressive, but these are not exponential gains, nor even a doubling of performance, like what everyone seems to expect between the benchmarks of the M1 and M2. In fact this foolishness is not new; it has been going on since the 68k models were new, all through the PPC era, and into the Intel era up to today.

[0] https://browser.geekbench.com/macs/macbook-pro-13-inch-retin...


I'm talking about this one-generation jump (the 13" models have usually been limited in multi-core options; only now, with the introduction of the 14" MBP, do they offer the exact same options as the 16"):

https://browser.geekbench.com/macs/macbook-pro-16-inch-late-...

https://browser.geekbench.com/macs/macbook-pro-16-inch-2021-...

And the real-world usage implications feel even crazier than the numbers look.

Also, even the one you linked looks crazy. An almost 40% increase: when did you ever get something like that in a year-to-year upgrade?

Additionally, these benchmarks don't really capture the thermal throttling problems the Intel machines had, which are completely gone with Apple Silicon. So all in all I'd definitely not call the Apple Silicon jump incremental; for Apple hardware it was revolutionary.


tbf, the mightiest i9 mobile chip at the time was pretty bad thermals-wise and throttled extremely fast; besides that, it was basically offered for ultrabooks... I mean, the M1 drew only half or even less of the power of the mightiest i9 mobile chip at the time (and still does lol)


The M1 is fast for a few reasons:

* It uses a fabrication process that no one else has gotten their hands on yet. These have always been massive jumps, and giving AMD and Intel access to that node gets them to similar performance (actually, they already are at similar performance on the same gen).

* You're buying an un-upgradeable SoC. Soldering everything on means a fast interconnect, while others have to play ball with standards that allow me to change components whenever I want.

* It's a pretty damn good CPU.

So, out of these three, Apple is responsible for 1/3. Dump an i9 on a SoC with a 3nm process and it'll eat the M1 alive. There's no "fundamental" increase.


I'm not sure this is true. Looking at, say, the i7-1260P in the XPS 13, it appears to turn in performance similar to the base M2 with half the battery life. The M2 Max is twice as fast as the i7 and still gets 2x-3x the battery life, though it's also twice as expensive. I don't see any case that Intel's designs are somehow better than anyone else's, let alone enough to "eat alive" any competitor design on an equal process.


The fab process being theirs for grabs is not an accident; they made the right call to invest in a long-term relationship with TSMC a long time ago, so they have dibs on all the cutting-edge tech.

The un-upgradable SoC is a strategic design choice, so it is basically part of the third reason, which is their responsibility, as you pointed out.

So basically their success with the M1, M2, etc. is a well-executed implementation of a very ambitious strategy to disrupt the market. I don't see why it should be disputed.


I think you’re saying I shouldn’t be impressed that my M1 machine is much faster than my previous Intel machine, while having 3x the battery life, because Apple cheated or something.

I don’t care. I’m impressed.


Yep. A lot of this thread feels like I’m being gaslit into believing the M1 machines were not the ridiculously huge jumps we all knew they were at the time.

I don’t really care about benchmarks, these things have allowed me to do twice the work with half as much pain. That’s not incremental.

Maybe they’re not great processors and it’s just Apple cheating in software/process-node/whatever. Great! Let me know when other manufacturers figure out how to cheat in software/process-node/whatever and I’ll consider them.


They're not a ridiculously large jump, not on the overall scale of performance. There's nothing in an M1 that we couldn't do before, or haven't done before. High end SoCs have always had insane performance. However, slap literally any high end CPU of today on a SoC and you get the same results. Apple didn't invent new tech, didn't create performance out of thin air.

However, they did just force you to buy a new $2000 machine next time you want to upgrade in 3 years because it's a single, monolithic block.

>Let me know when other manufacturers figure out how to cheat in software/process-node/whatever and I’ll consider them.

Unfortunately, they all accept being part of a greater ecosystem that doesn't attempt to fuck you over by being un-upgradable and incompatible with each other, so cheating is off the table.


> slap literally any high end CPU of today on a SoC and you get the same results.

They should do that!

> they did just force you to buy a new $2000 machine next time you want to upgrade in 3 years because it's a single, monolithic block.

You’re absolutely right. I love my really fast, cold-running, forever-battery monolithic block. I’m very happy to pay ($2000/365/3 = $1.82) per day for it, minus the resale value it’ll still have.

I understand the value of the ideals regarding end-user upgrades, but at the end of the day the tradeoffs just don’t make sense for me.


I think some people exaggerate their performance gains, but the battery life they provide is ridiculous.


I hope you are correct, as I'm all for competition and don't really care about architectures. In my own experience it's the thermals and power use that are the most impressive parts of the M-series.


What about ZCash-like networks with anonymous transactions?


That's a privacy coin. It mostly attracts agorists, not businesses.


But, given a choice, wouldn't you want your own transactions to be untraceable?


Of course. I used to like Monero (XMR) despite its shortcomings, but Lightning is now more promising. It essentially solves all the flaws in Bitcoin: scalability, fungibility and privacy.


Those single-frame next-gen formats don't support progressive rendering.

