
Out of interest, what would the physical setup look like? Hard to imagine you could achieve isotropic temperature approaching +/- 0.01 degrees over the size of a typical PCB.

Does it have to be isotropic though? The temperature must be constant over time, but a spatial gradient shouldn’t influence the stability of the crystal.

BTW, check out my other comment in this thread about a GPSDO PCB with a resistor grid on the backside to evenly heat it.


A spatial gradient between the crystal and the temperature sensor, if it varies, can cause an error.

But is the error constant or does it vary over time? If it's the former, it can be calibrated away. If it's the latter, what is the mechanism behind it?

It'll vary over time with the ambient temperature. When you set up a temperature control loop, you have a heater which creates a temperature gradient between it and the ambient. This gradient depends on the power the heater is putting out and on the thermal resistances of the box and any insulation around it. You then have a temperature sensor, which you will presumably put somewhere in the box, hopefully near the crystal, but it won't be exactly at the crystal. Then, as the ambient temperature varies, the power output of the heater will vary to try to keep the temperature that the sensor sees constant. But because that power also changes the thermal gradient, the gradient across the system will change too, and so the temperature of the crystal is never completely insensitive to the ambient temperature, even if the temperature sensor reading doesn't change at all.

How big and/or meaningful this effect is depends on what the thermal resistances in the system are, and where the temperature sensor is relative to what you want to temperature control. Generally you want a very conductive box that's then insulated very well, since this means the temperature won't vary much across the box (all of the temperature gradient between the heater and ambient is 'taken up' by the insulation). But if you're talking sufficiently high precision this can be quite difficult to achieve.
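To put rough numbers on it, here's a minimal steady-state sketch (the setpoint and thermal resistances are made-up values, purely for illustration). The controller pins the sensor at its setpoint, the heat flows out through the sensor-to-crystal and crystal-to-ambient resistances, and the crystal ends up sitting partway down the gradient toward ambient:

    // Minimal steady-state oven model: sensor -> crystal -> ambient, linked by
    // thermal resistances (K/W). All numbers are made up for illustration.
    #include <cstdio>

    int main() {
        const double t_set = 60.0;  // controller holds the sensor at this temperature (deg C)
        const double r_sc  = 0.5;   // thermal resistance, sensor to crystal (K/W)
        const double r_ca  = 50.0;  // thermal resistance, crystal to ambient through insulation (K/W)

        // With the sensor pinned at t_set, the heat flowing out passes through
        // r_sc + r_ca, so the crystal sits partway down the gradient to ambient:
        //   t_crystal = t_set - (t_set - t_amb) * r_sc / (r_sc + r_ca)
        const double ambients[] = {15.0, 25.0, 35.0};
        for (double t_amb : ambients) {
            double t_crystal = t_set - (t_set - t_amb) * r_sc / (r_sc + r_ca);
            std::printf("ambient %.1f C -> crystal %.4f C\n", t_amb, t_crystal);
        }
        // The sensor reading never moves, but the crystal tracks about
        // r_sc / (r_sc + r_ca) ~= 1% of the ambient swing: ~0.2 C over a 20 C change here.
        return 0;
    }

Which is also why the usual advice follows directly from the model: put the sensor as close as possible to the crystal (small r_sc) and insulate heavily (large r_ca).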


Thanks for the explanation. Having thermal elements (say, resistors) spread as a regular grid all over the backside of the PCB would help with reducing the gradient.

It makes me wonder if it would make sense to have a slowly rotating fan inside the box. Not to get rid of excess heat but to compress the gradient against the well-insulated wall of the box. It's probably overkill for most cases, since there are other factors that influence frequency stability...


4. Voltage-controlled filter (diode ladder VCF), as used in the Roland TB303

Diode ladder, but also in various Sallen-Key designs like the Steiner-Parker Synthacon, which we all now know from the Arturia Minibrute (Yves Usson probably made more of these filters than Nyle Steiner ever did!), and, as I've mentioned elsewhere, the Korg MS50. I think the Yamaha GX1 filters used a diode bridge too, probably using discrete transistors similar to the Korg 700S filter.

The C++ (draft) standards are open source:

https://github.com/cplusplus/draft

Last time I looked I could not find an equivalent repository for the C standards.


There isn't one. They publish completed drafts on the working group website:

https://www.open-std.org/jtc1/sc22/wg14/www/wg14_document_lo...


.pdf unfortunately. The C++ sources that I linked to are machine-readable .tex.

I reckon it makes some sense for Apple users. You have to be willing (and financially able) to upgrade when Apple says. Apple forcefully obsoletes their products way too quickly to be a viable option if you care about longevity[0]. I have five excellent-condition still-perfectly-working Apple products next to me, none of which have current operating system support from Apple.

[0] EDIT: for reference, my previous ThinkPad lasted me 14 years.


Out of about a dozen Apple devices in our household, none of them can be updated to the latest operating system. It's a huge problem with the Apple ecosystem, I'd say one of the biggest problems. Their hardware vastly outlasts their software, comically so.

14 years as your main driver? Because that's what we're talking about.

14 is indeed very long. Let's instead assume 12: it's 2013 and you got a top-specced T440 with a 4th-gen i7. That's actually not bad, and the build quality is tank-like, as with all ThinkPads. Nothing I would use as a daily driver myself, but having used many other ThinkPads of that generation I can see why others are still getting by with one today.

Since we are talking about OS support: 4th-gen Intel isn't supported by Windows 11, so you'd have to upgrade to Linux.


Out of curiosity, how much of that thinkpad were you able to upgrade? Could that be the difference between 5 and 14 years here?

Describe “forcefully obsoletes”?

I ran a 2008 MBP until 2019. Then…gave it to my wife, who used it until 2022. Finally retired it after the battery swelled. I suspect I could have replaced the battery and she could have got another couple of years out of it if I'd really needed to.

Not once did that device ever feel obsolete.


I think we have different definitions of obsolescence. Mine is basically "the ability to continue to use the device for the original purpose that I purchased it for." In my case, developing Mac OS software (that's the only reason I own Macs; I do all of my day-to-day computing on Windows and Linux). I completely agree with you that the hardware is in no sense obsolete; the software is a different matter. For example, my original Retina iPad started feeling obsolete as soon as Safari stopped rendering modern websites, even though the device hardware does not feel obsolete. Another example is that every Mac Mini that I have bought was useless to me as soon as I could not upgrade the OS and run the latest Xcode on it.

In my view the forceful obsolescence mechanism comprises the following strongly interacting practices:

1. Not supplying operating system updates for "older" hardware (actually not that old). Depending on your security posture this point alone may be sufficient.

2. Aggressively deprecating APIs and nudging developers to use the new APIs (i.e. nudging applications to not support the older OS versions that you have to run on the old hardware -- see point 1)

3. Ratcheting operating system upgrades with new hardware. There is no way to control the OS version that you use independent of hardware: replacing a machine always forces an OS upgrade.

4. Requiring the latest OS to run the latest development tools.

In combination, these practices create a treadmill that keeps everything "new" and renders anything older than 3 years incompatible. There's probably more to it than this, but that's what I could write down quickly.


It makes sense for some people, and doesn't for others. Not particularly surprising or insightful.

Meh, I've been a Mac user for 15 years professionally, usually alongside some desktop PC for gaming, and I upgrade when I damn well please: typically when there's a notable leap in performance, my laptop gets stolen, or my needs change, which should hardly be surprising over the course of a career.

Their recent hardware is proving much more capable a tool than the budget i5 I had before, so I upgraded. In terms of machinery expenses, I spend more on RAM and SSD than I'd prefer (their pricing ladder is comical), but the product is amazing. I'm going to wait as long as possible before I upgrade to Tahoe though; it seems almost DoA.


>I have five excellent-condition still-perfectly-working Apple products next to me, none of which have current operating system support from Apple.

If they're working perfectly, why does it matter if they have current operating support? It doesn't seem like you're dependent on Apple.


I use Macs to develop Mac OS software. I need a recent OS to run the latest Xcode, and I need to test on the latest OS that my customers are using.

Software drops support for certain OS versions even if the device still can run it.

The first iPad Pro can’t run Adobe products, for example.

The Mac is a bit more resilient to this, but it’s still worrying as yearly improvements become subtler.


Yea, this is the bigger problem: 3rd party software developers drop support for "too old" operating systems WAY too early. Especially on mobile. Some developers only support one major previous version, which is insane.

So, Apple leaves old hardware high and dry by not supporting them with operating systems, and 3P software leaves users high and dry by dropping support for operating systems. It's like they are working together to create e-waste.


> The first iPad Pro can’t run adobe products for example.

That’s more on Adobe than Apple though right?


I'd argue it's a bit of both. In my case, I have an iPad 3, which runs iOS 9-compatible apps, but iOS itself doesn't back up the app files, so when various developers pulled their apps off the App Store for those old iOS versions, I lost access to the old software that did work, which really doesn't make me want to buy another iOS device. Less of a problem on Mac though.

I guess if it was a paid app I can see the irritation that would cause. Free apps not so much.

In terms of what I'd hypothetically feel entitled to, absolutely, but it only matters because that basic level of control isn't there

Why would the actor model require malloc and/or threads?

You basically have a concurrency-safe message queue. It would be pretty limited without malloc (fixed max queue size).
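For what it's worth, a fixed-capacity mailbox is still workable if senders can tolerate rejection (or blocking) when it fills up. A rough sketch, with a made-up Message type and a mutex rather than anything lock-free:

    // Sketch of a malloc-free actor mailbox: a fixed-capacity ring buffer behind a
    // mutex. The Message type and capacity are illustrative, not from any library.
    #include <array>
    #include <cstdint>
    #include <mutex>
    #include <optional>

    struct Message { std::uint32_t sender; std::uint32_t payload; };

    template <std::size_t Capacity>
    class Mailbox {
    public:
        bool send(const Message& m) {       // false when full: no malloc, so we must reject (or block)
            std::lock_guard<std::mutex> lock(mutex_);
            if (count_ == Capacity) return false;
            buffer_[(head_ + count_) % Capacity] = m;
            ++count_;
            return true;
        }

        std::optional<Message> receive() {  // empty when nothing is queued
            std::lock_guard<std::mutex> lock(mutex_);
            if (count_ == 0) return std::nullopt;
            Message m = buffer_[head_];
            head_ = (head_ + 1) % Capacity;
            --count_;
            return m;
        }

    private:
        std::array<Message, Capacity> buffer_{};
        std::size_t head_ = 0;
        std::size_t count_ = 0;
        std::mutex mutex_;
    };

The fixed max queue size mentioned above is exactly the price: a full mailbox forces the sender to drop, retry, or block, whereas a malloc-backed queue lets you defer that decision (until you run out of memory instead).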

> I don't think they're claiming surjectivity here.

What definition of invertible doesn't include surjectivity?


Many? Just add a "by restricting to the image, we may wlog assume surjectivity".

The question is usually more about whether the inverse is also continuous, smooth, easy to compute, etc.
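A concrete example (mine, just to illustrate the convention): e^x viewed as a map into all of R is injective but not surjective; restrict the codomain to the image and it is invertible, with an inverse that happens to be continuous and smooth as well.

    % Injective but not surjective onto the stated codomain:
    f : \mathbb{R} \to \mathbb{R}, \qquad f(x) = e^x
    % Restricting the codomain to the image makes it a bijection,
    % and here the inverse is also continuous and smooth:
    f : \mathbb{R} \to (0, \infty), \qquad f^{-1}(y) = \ln y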


Can you enable TSO for ARM executables?


Yes but I don't see how that is relevant


> Not all GCs are born alike.

True. However, in the bounded-time GC space few projects share the same definition of low-latency or real-time. So you have to find a language that meets all of your other desiderata and provides a GC that meets your timing requirements. Perc looks interesting; Metronome made similar promises about sub-ms latency. But I'd have to get over my JVM runtime phobia.


I consider one where human lives depend on it, for good or worse depending on the side, real-time enough.


Human lives often depend on processes that can afford to be quite slow. You can have a real time system requiring only sub-hour latency; the "realness" of a real-time deadline is quite distinct from the duration of that deadline.



> f(z) = (z − 1)/(z + 1)

Also known as the Bilinear Transform https://en.wikipedia.org/wiki/Bilinear_transform

Used for mapping between s-plane and z-plane when discretising using the trapezoidal rule.
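As a worked example (my own, not from the article): applying the transform with the usual prewarping to a one-pole RC lowpass H(s) = wc / (s + wc) gives the familiar first-order digital filter coefficients.

    // Bilinear-transform discretisation of a one-pole lowpass H(s) = wc / (s + wc).
    // Substituting s = (2/T)(z - 1)/(z + 1), with frequency prewarping, yields
    //   H(z) = (K + K z^-1) / ((1 + K) + (K - 1) z^-1),  K = tan(pi * fc / fs).
    // Cutoff and sample rate below are arbitrary example values.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double pi = 3.14159265358979323846;
        const double fs = 48000.0;  // sample rate, Hz
        const double fc = 1000.0;   // analogue cutoff, Hz
        const double K  = std::tan(pi * fc / fs);  // prewarped frequency

        const double b0 = K / (1.0 + K);
        const double b1 = K / (1.0 + K);
        const double a1 = (K - 1.0) / (1.0 + K);
        std::printf("b0=%f b1=%f a1=%f\n", b0, b1, a1);

        // Difference equation: y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]
        double x1 = 0.0, y1 = 0.0;
        for (int n = 0; n < 5; ++n) {
            double x = 1.0;  // unit step input
            double y = b0 * x + b1 * x1 - a1 * y1;
            x1 = x;
            y1 = y;
            std::printf("y[%d] = %f\n", n, y);  // settles toward 1.0 (unity DC gain)
        }
        return 0;
    }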

