Imagine something like writing a server with a /metrics HTTP endpoint that Prometheus can then scrape -- but you bind it to a separate port only inside a tailnet, using an ephemeral tailnet key, and name it "metrics-service-blahblah".
Now you can simply write a script that uses the Tailscale API to find all "metrics-service-*" nodes in your tailnet and add their IPs/DNS names to your Prometheus scrape list. Run it every 60 seconds. Done: now you can deploy your app anywhere, on any cloud, it will get scraped, and that route will never be exposed to the public internet.
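A minimal sketch of that discovery script, using only the stdlib. The tailnet name, API key, port, and output path are all placeholders; it writes a Prometheus file_sd target file rather than editing prometheus.yml directly:

```python
import json
import urllib.request

# Hypothetical values -- substitute your own tailnet name and API key.
TAILNET = "example.com"
API_KEY = "tskey-api-XXXX"
PREFIX = "metrics-service-"

def build_targets(devices, prefix=PREFIX, port=9100):
    """Turn a Tailscale device list into Prometheus file_sd target groups."""
    targets = [
        f"{d['addresses'][0]}:{port}"          # first address is the tailnet IP
        for d in devices
        if d.get("hostname", "").startswith(prefix)
    ]
    return [{"targets": targets, "labels": {"source": "tailnet"}}]

def fetch_devices():
    """Fetch the device list from the Tailscale API."""
    req = urllib.request.Request(
        f"https://api.tailscale.com/api/v2/tailnet/{TAILNET}/devices",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["devices"]
```

Run `json.dump(build_targets(fetch_devices()), open(path, "w"))` from a cron job or systemd timer every 60 seconds, point a `file_sd_configs` entry at that path, and Prometheus picks up membership changes automatically.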
This will basically just let you attach bespoke applications and not just "computers" to your network. I suspect I will get a lot of use from it.
Tailscale and WireGuard are great. I'm an OpenZiti maintainer and I've written/spoken about application-embedded zero trust for many, many years. Still, it seems most devs don't think it's important, for whatever reason... It'll make me happy if Tailscale is successful here and can get the word out, making more devs interested in embedding secure connectivity directly into their apps instead of relying on the classic underlay network and bolting security on afterwards. If that sort of thing interests you, you could check out OpenZiti. It's not WireGuard-based -- for better or for worse, you can decide (if you do end up checking it out).
Yes, almost all JJ users do this constantly. Just "track" the particular branch. JJ has the idea that only some commits are immutable -- the set of "immutable heads" -- and the default logic is something like: the main branch is always immutable, untracked remote branches are immutable, and tracked remote branches are mutable. In other words, tracking a remote branch removes it from the set of immutable heads.
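For reference, that default is expressed as a revset alias in jj's config, and you can override it there too. A sketch, assuming a recent jj where the builtin set is trunk, tags, and untracked remote bookmarks:

```toml
# In jj's config.toml -- the default is equivalent to:
[revset-aliases]
# trunk() and tags() stay immutable; so do untracked remote bookmarks.
# Tracking a branch (`jj bookmark track <name>@<remote>`) pulls it out of
# untracked_remote_bookmarks(), making it mutable.
"immutable_heads()" = "present(trunk()) | tags() | untracked_remote_bookmarks()"
```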
Jujutsu is not "VC funded". But some of the developers, including me, work at East River Source Control (I worked on Jujutsu before that, too). The majority of the code in the project doesn't come from us -- or Google, for that matter. We don't allow people to approve patches when the author is from the same company, anyway.
jj is not "one guy who works at Google" and the vast majority of submitted code comes from non-Google developers. Even if Google were to stop developing jj (they won't) the project would be healthy and strong.
There are some legal annoyances around e.g. the CLA, a result of the project originally being a Google side project. Hopefully we'll work through that in due time. But realistically it's a much larger project at this point and has grown up a lot; it's not Martin's side project anymore.
Part of the reason Astral as a team is so well liked is precisely because they are not part of the main fold or related to "Core Python"; they are an independent vendor, one that delivered high-quality code and listened directly to users and their own (extensive) experience to do so, and they succeeded at that repeatedly. Python packaging has been seen as (and at times actually been) miserable for years, and so by the same token the capacity to believe in/buy into solutions from the "core project" has dwindled. "If it took Astral to fix it, why would it be any different going forward?"
So that's all it really comes down to; uv isn't loved just because it's great but because it is in good hands. This real/perceived change of hands pretty much explains all the downstream responses to the news that you see in this thread. Regardless of who bought them, any fork is going to have very, very big shoes to fill, and filling those shoes appropriately is the big worry.
Does that not also suggest (cautious, make sure we back it up with our actions) optimism about this acquisition? We're not breaking up the band. These tools will be in the same hands as before. And it would be extremely value-destructive to bring in a team like ours and then undermine what made us valued and successful.
Just to be clear: I think uv and the other Astral projects will probably be just fine! I don't really think this is the end at all.
I was just trying to explain why people are so upset by the perceived change of hands; that perception isn't perfect, of course -- it's a mixture of fear, honesty, skepticism, truth, etc. I think some people here are just being absurd (e.g. the idea that community projects are magically more sustainable by virtue of being community projects is literally just wishcasting mixed with red-dots-on-the-plane survivorship bias). But I can definitely understand the source of it.
Fair enough. But that does seem like something that'd depend more on the people than the organization. Whoever forks it will need to be trusted to continue to be "good hands", whatever organization they operate under the auspices of.
Not quite; just the pixel/vertex shaders and the algorithm are public domain. Slug "the software package" is not open source (though you can get a copy of it along with the C4 Engine for $100 to take a peek, if you want).
I feel like that was much more true in the past, but the X925 was only spec'd 18 months ago(?) and you can buy it today (I've been using one since October). Intel and AMD also give lots of advance notice on new designs, well ahead of anything you can buy. ARM is also moving towards providing completely integrated solutions, so customers like Samsung don't have to take only the CPU core and fill in the blanks themselves. They'll probably only get better at shipping complete solutions faster.
Honestly, Apple is the strange one because they never discuss CPUs until they are available to buy in a product; they don't need to bother.
When you do a cache lookup, part of the address is used as an index to pick a "bucket" (a set). Once you've picked the bucket, you may need to walk a few entries in it, comparing each entry's stored "tag" against the address, to find the matching cache line. The number of entries you walk is the associativity of the cache, e.g. 8-way or 12-way associativity means there are 8 or 12 entries in that bucket. Higher associativity means a larger cache (for the same number of buckets), but it also worsens latency, as you have to walk through the bucket. These are the two points you can trade off: do you want more total buckets, or do you want each bucket to have more entries?
To do this lookup in the first place, you pull a number of bits from the virtual/physical address you're looking up, and those bits tell you which bucket to start at. The minimum page size determines how many bits of the virtual address are guaranteed to match the physical address, and therefore how many bits you can safely use to refer to unique buckets. If you don't have a lot of bits, then you can't count very high (6 bits = 2^6 = 64 buckets) -- so to increase the size of the cache, you need to instead increase the associativity, which makes latency worse. For L1 cache, you basically never want to make latency worse, so you are practically capped here.
Platforms like Apple Silicon instead set the minimum page size to 16K, so you get more bits to count buckets (8 bits = 256 buckets). Thus you can increase the size of the cache while keeping associativity low; L1 cache on Apple Silicon is something crazy like 192KB, and L2 (for the same reasons) is 16MB+. x86 machines and software, for legacy reasons, are very much tied to a 4K page size, which puts something of a practical limit on the size of their downstream caches.
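The arithmetic above can be sketched out. Assuming 64-byte cache lines and the usual VIPT constraint that the index bits plus line-offset bits must fit within the page offset:

```python
def max_vipt_l1_bytes(page_bytes, ways, line_bytes=64):
    """Largest VIPT L1 for a given page size and associativity.

    Index + line-offset bits must fit in the page offset, so the
    number of buckets (sets) is capped at page_bytes / line_bytes.
    """
    max_sets = page_bytes // line_bytes
    return max_sets * line_bytes * ways  # simplifies to page_bytes * ways

# 4K pages: 64 buckets; 8-way gives a 32KB L1 (the classic x86 L1d size).
print(max_vipt_l1_bytes(4096, ways=8))    # 32768
# 16K pages: 256 buckets; 12-way gives 192KB (Apple-class L1).
print(max_vipt_l1_bytes(16384, ways=12))  # 196608
```

The exact way counts here are illustrative, but they show why bigger pages let you grow the cache without piling on associativity.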
Look up "Virtually Indexed, Physically Tagged" (VIPT) caches for more info if you want it.
It’s not a hard limit, especially if you aren’t pushing the frequency wall like Intel. AMD used to use a 2-way 64KB L1, Intel has an 8-way 64KB L1i on Gracemont, and, more to the point, high-end ARM Cortex has had 4-way 64KB L1 caches since before they even supported 16KB pages.
Yeah, I was more just trying to paint a broad picture. Nvidia in particular I think had fast and large-ish L1 on Tegra (X2?) despite being tied to 4k pages.
"PAYGO API access" vs "Monthly Tool Subscription" is just a matter of different unit economics; there's nothing particularly unusual or strange about the idea on its own, specific claims against Google notwithstanding.
Of course, Google is still in the wrong here for instantly nuking the account instead of just billing them for the API usage (largely because an autoban or whatever is easier, I'm sure).
I am afraid of using any Google service in an experimental way, for fear that my whole Google existence will be banned.
I think temporarily blocking access with a warning would be much more suitable. Unblocking could even be conditioned on a request to pay for the abused tokens.
Knights Landing is a major outlier; the cores there were extremely small and had very few resources dedicated to them (e.g. 2-wide decode) relative to the vector units, so of course that will dominate. You aren't going to see 40% of the die dedicated to vector register files on anything looking like a modern, wide core. The entire vector unit (with SRAM) will be in the ballpark of like, cumulative L1/L2; a 512-bit register is only a single 64 byte cache line, after all.
True! But even if only 20% of the die area goes to AVX-512 in larger cores, that makes a big difference for high core count CPUs.
That would be like having a 50-core CPU instead of a 64-core CPU in the same space. For these cloud-native CPU designs, anything that takes significant die area translates to reduced core count.
You're still grossly overestimating the area required for AVX-512. For example, on AMD Zen4, the entire FPU has been estimated as 25% of the core+L2 area, and that's including AVX-512. If you look at the extra area required for AVX-512 vs 256-bit AVX2, as a fraction of total die area including L3 cache and interconnect between cores, it's definitely not going to be a double digit percentage.