Piggdekk in Norway are the equivalent of North American studded tires. When I lived in the northern parts of the U.S., I had a set of these for the periods around freezing rain.
Beyond the questions of winter weather properties, there are adjacent tradeoffs between the tire types (outside of studded):
1. Fuel economy
2. Noise
3. Degree of particulate pollution emission
I'm sure that all-season tires have some negative tradeoffs in these regards too, which argues for choosing the product best suited to the time of year. All-season tires seem to me like convenience food for places where the weather can be legitimately bad.
One other difference that is hard to articulate to North American drivers with respect to understanding Scandinavia and roads: there are places where snow and ice will literally not be removed (maybe not even removeable) from the road when plowed (I presume until spring melt). It just becomes a thick ice pack over the course of weeks. I never encountered any roads in my life (including Northern Minnesota) that were this inclement. North American roads tend to be cleared (plowing or melting) to asphalt or pavement.
All-season tires aren't simply a matter of convenience; they offer a safety benefit. If you are driving at normal highway speeds, even in the dead of winter with the air below freezing, your tires will heat up, and winter tires won't have as much traction. The disadvantage on dry roads can be several times what the advantage was on contaminated roads, even during the winter.
Driving discipline, culture, and rules in North America are Mickey Mouse.
The reality of car dependency there means that there are people driving and owning cars who can't really afford to do it properly, nor do they know they need to do it properly (e.g., having a second set of tires for the winter). You can see this evidenced by the rust buckets on the road that look like they are one pothole away from losing part of the vehicle body. Deferred maintenance and investment everywhere and in everything …
The United States also covers a vast range of climates. What good are snow tires for people in South Florida, or Texas, or New Mexico? Where I live I switch between summer and all-season tires because we only get enough snow to justify snow tires once a decade, for a couple of days. This year has been the worst, with two weekends of a decent amount of snow that was cleared off the roads by Tuesday.
Yeah, it was interesting to see some above-ground-to-the-premises power delivery in some of the smaller Norwegian villages above the arctic circle. Things looked rather robust, though.
I lived in Oklahoma and in Minnesota, and the difference there is already stark:
* OK suffered from plenty of storm-induced winter power outages (massive freezing rain cycles were common in my life). My mother's cotton bath robe, which she kept using until late in her life, had burn marks from when she reached for something over a lit candle during a power outage when I was four years old.
* MN suffers some, but people knew to develop meaningful contingency plans.
Both states have varied use of buried power delivery to the premises. It's not really to be expected as the norm in either place, but MN has far more of it than OK (funnily enough, I grew up in a place in OK that had it). Either way, infrastructure robustness in North America looks like the product of a dismal cost-benefit analysis rather than a societal-welfare consideration.
I left North America about 14 years ago for Europe. The difference is stark. We've only had one significant power interruption in that time (not even in winter), whereas stochastic neighborhood outages were commonplace in North America. What really freaks me out about the situation in North America is the poor insulation of the structures and their low thermal mass. They will get cold fast.
Aside: A lot of friends and family in North America balked at the idea of getting a heat pump due to performance during a power outage: "when the power goes out, I can still run my gas." When I asked them whether the house was heated with forced air or used electronic thermostatic switches, the snarky smile turned to a grimace.
When you live in a cold place, you learn to do things differently. You're naive if you don't pack warm blankets and water in your vehicle, for instance. You never know when you might find yourself stranded somewhere due to vehicular breakdown …
> whereas stochastic neighborhood outages were commonplace in North America
I believe this has to do with the design of the North American split-phase vs. European three-phase grid. The European grid has more centralized, larger neighborhood step-down transformers, whereas the US has many more decentralized, smaller pole-mounted transformers. NA proponents say any given outage will affect fewer people; EU proponents say it's easier to maintain fewer pieces of infrastructure.
(That said, I live in Japan, where we have a US-style grid, and we've only had something like two sub-five-minute outages during typhoons and nothing else, so maybe it's just the quality of the maintenance.)
or might find SOMEONE ELSE stranded somewhere due to vehicular breakdown.
yes, obviously "put on your own oxygen mask before helping others" (so you remain an asset instead of a liability), but please remember the "helping others" part.
If you are going to get into the business of introducing order dependence to test cases through global state (see my other reply on the parent), you will always want the cleanup to work correctly.
1. Using (testing.TB).Cleanup is a good defensive habit to have if you author test helpers, especially if the test helpers (see: (testing.TB).Helper) themselves do something (e.g., resource provisioning) that requires ordered teardown. Using (testing.TB).Cleanup is better than returning a cancellation or cleanup function from them.
2. (testing.TB).Cleanup has stronger guarantees about when it is called, especially when the test case itself crashes. Example: https://go.dev/play/p/a3j6O9RK_OK.
I am certain that I am forgetting another edge case or two here.
Generally nobody should be designing their APIs to be testable through mutable global state. That solves half the problem here.
One of the worst things a developer accustomed to Bazel (and its relatives) can do with a modern language (say Go or Rust) is to model and organize code through the Bazel concept of a build target (https://bazel.build/concepts/build-ref) first, and only then represent it with the language's local organization concepts. One should preferentially model the code with the language-local organizing concept in an idiomatic way (e.g., a Go package; https://go.dev/ref/spec#Package_clause) and THEN map that unit of organization to a build target (e.g., go_library).
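As a sketch of the intended order, assuming a hypothetical `auth` package: write the idiomatic Go package first, then mirror that one package with one go_library target. Names, paths, and the load label here are illustrative (the label depends on your rules_go setup):

```
load("@io_bazel_rules_go//go:def.bzl", "go_library")

# One target mirrors one idiomatic Go package, not the other way around.
go_library(
    name = "auth",
    srcs = [
        "auth.go",
        "token.go",
    ],
    importpath = "example.com/project/auth",
    visibility = ["//visibility:public"],
)
```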
When you do this in the wrong order, you end up with very poorly laid out concepts from a code-organization standpoint, which is why caveats like this needed to be written:
In languages that operate on a flat namespace of compilable units (e.g., C++ or Java), build-target sizing and grouping in Bazel (and its relatives) largely doesn't matter from a namespace-naming and findability/ergonomics perspective. But the moment Bazel starts interfacing with a language that has strict organization and namespacing concepts, this can get rather hairy. The flat-namespace practice with Bazel has (IMO) led to code-organization brain rot:
> Oh, I created another small feature; here, let me place it in another (microscopic) build target (without thinking about how my users will access the symbols, locate the namespace, or have an easy way of finding it).
— — —
Note: The above is not a critique of Bazel and such. More of a meta-comment on common mispractices I have seen in the wild. The build system can be very powerful for certain types of things (e.g., FFI dependency preparation and using Aspects as a form of meta-building and -programming).
Within the Google codebase and in projects using Bazel, directory layout for Go code is different than it is in open source Go projects: you can have multiple go_library targets in a single directory. A good reason to give each package its own directory is if you expect to open source your project in the future.
:o :o :o
are there really people saying that "giving each package its own directory" is in any way optional?? it is literally part of the language spec, what on earth would make anyone think otherwise??
edit: ok so bazel folks are just on a completely alternative timeline it seems
Bazel ignores go.mod files, and all package dependencies must be expressed through deps attributes in targets described with go_library and other rules.
Maybe better put: Bazel (and its predecessor) do support Go, but they don't support the traditional directory-structure-to-import-path semantic that we've come to expect in the outside world. And even then, the terminal directory needn't match the package name, but accomplished Go developers seldom violate that convention these days, thankfully.
All of this makes it paramount for developers of Go tools to use a first-party package-loading library like go/packages (https://pkg.go.dev/golang.org/x/tools/go/packages), which can paper over this problem through the GOPACKAGESDRIVER environment variable, supporting alternative build systems and import-path layouts. (The worst thing someone can do is attempt to reverse engineer how package loading works rather than delegating it to a library like this.)
> One of the worst things a developer accustomed to Bazel (and its relatives) can do with a modern language (say Go or Rust) is to model code and organize it through the Bazel concept of a build target (https://bazel.build/concepts/build-ref) first
And that's exactly what I was arguing against in the article! I've seen this happen a few times already (in Java and TypeScript specifically) where Bazel's fine-grained target definitions are pushed as "best practice" and everybody ends up hating the results, for good reasons.
There are _different_ ways in which one can organize the Bazel build rules that go against those best practices (like the 1:1:1 rule for Java), and I think you can end up with something that better maps to first principles or to what native build tooling does.
You tend to end up with way too many targets that don't actually "mean anything" to a human. In one codebase I have to deal with, the Bazel build has ~10k targets whereas the previous non-Bazel build had ~400. Too many targets have an impact in various dimensions. Some examples:
* The build files become unreadable. If targets don't mean anything to a human, updates to build files become pure toil (which is when devs ask for build files to be auto-generated from source).
* IDE integrations (particularly via the IntelliJ Bazel plugin) become slower because generating metadata for those targets takes time.
* Binary debugging is slower because the C/C++ rules generate one intermediate .so file per target and GDB/LLDB take a long time to load those dependencies vs. a smaller set of deps.
* Certain Java operations can be slower. In the case of Java, the rules generate one intermediate JAR file per target, which has a direct impact on CLASSPATH length and that may matter when you do introspection. This tends to matter for tests (not so much for prod where you use a deploy JAR which collapses all intermediate JARs into just one).
My intuition was wrong, my naive understanding was that:
* Non-human intermediate targets would either be namespaced and available only in that namespace, or could be marked as hidden, and not clutter auto-completion
* IDE integrations would benefit, since they only have to deal with Bazel and not Bazel + cargo/go/Makefile/CMake/etc
* I thought C/C++ rules would generate .o files, and only the final cc_shared_library would produce an .so file
* Similar for .jar files
I guess my ideal build system has yet to be built. :(
> * Non-human intermediate targets would either be namespaced and available only in that namespace, or could be marked as hidden, and not clutter auto-completion
This is actually possible, but you need the new JetBrains-owned Bazel plugin _and_ you need to leverage visibility rules. The latter are unique to Bazel (none of the other language-specific package managers I've touched upon in these replies offers them) and are even harder to explain to people somehow... because they only start making sense once you pass a certain codebase size/complexity.
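A minimal sketch of the visibility idea, with hypothetical package paths: an internal target can be locked down so that only a chosen subtree may depend on it, which keeps implementation details out of everyone else's dependency graph:

```
# //mylib/internal/BUILD.bazel (paths hypothetical)
cc_library(
    name = "impl_details",
    srcs = ["impl.cc"],
    hdrs = ["impl.h"],
    # Only targets under //mylib may depend on this; everyone else gets a
    # build error instead of a new accidental dependency.
    visibility = ["//mylib:__subpackages__"],
)
```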
> * I thought C/C++ rules would generate .o files, and only the final cc_shared_library would produce an .so file
> * Similar for .jar files
These are possible too! Modern Bazel has finally pushed all language-specific logic out of the core and into Starlark rules (and Buck2 has been doing this from the ground up). There is nothing preventing you from crafting your own build rules that behave in these specific ways.
In any case... as for dynamic libraries per target, I do not think what I described earlier is the default behavior in Bazel (we explicitly enable dynamic libraries to make remote caching more efficient), so maybe you can get what you want already by being careful with cc_shared_library and/or about tagging individual cc_library targets as static/dynamic.
For Java, I've been tempted to write custom rules that do _not_ generate intermediate JARs at all. It's quite a bit of work, though, so I haven't, but it could be done. BTW I'll actually be describing this problem in a BazelCon 2025 lightning talk :)
indeed, no idea why so many folks seem to think `bazel` is some co-equal alternative to language-native build tooling/processes. it's a fine tool for certain (niche) use cases, but in no way is it ubiquitous or anything approaching a common standard
Seriously. I don't know if folks remember this Java desktop research project from 25-some years ago: https://en.wikipedia.org/wiki/Project_Looking_Glass. To say that it was slow would be an understatement (it was a real PITA to get it installed and built at the time; I spent an afternoon in college doing that out of boredom).
I imagine FyneDesk is plenty fine for what it is doing in comparison.
Also this was mostly interpreted back then, without JIT compiler support.
Also to note,
> Regardless of the threat, Sun determined that the project was not a priority and decided not to put more resource to develop it to product quality. The project continued in an experimental mode, but with Sun's finances deteriorating, it became inactive in late 2006
Written from a Java userspace powered mobile phone, with 75% worldwide market share.
That was a really cool project but yeah the Java couldn’t hack it.
FyneDesk aims to compete on performance with the lightweight window managers whilst offering the rich experience of complete desktops.
We are close on performance in most areas; once Fyne v2.7.0 is out we will do a new release that is going to blow our previous one out of the water. Just a few thread-handling bugs to iron out for optimal results first…
Java is fast enough for legions of kids playing games written in it, and for a full OS userspace. It is a matter of implementation and of how much work gets done in JNI, no different from reaching out to cgo or Plan 9 assembler while keeping most of the code in Go.
Oh yes, I didn't mean to knock the language; I also worked on amazing things in Java before I moved to Go.
But the runtime of a Go app is, by default, faster than Java's, and my experience has shown much, much better performance with the sort of multi-window, full-screen throughput we need for building a desktop.
You're implying that Stage Manager is Java. I don't think that's true though?
Isn't it only that the _design_ of Stage Manager somewhat resembles some design choices from Project Looking Glass?
This design has also been adopted by other OSes: Win+Tab previously (in the Win7 days) created a similar-looking view, though it no longer looks like that nowadays.
> This design has also been adopted by other OSes: Win+Tab previously (in the Win7 days) created a similar-looking view, though it no longer looks like that nowadays.
Looking Glass-like switchers are still available in Plasma
The Project Looking Glass UI != The Project Looking Glass
They are talking about the UI which could have inspired Stage Manager. Apple also had the purple window button before Project Looking Glass so there is that.