Excellent post, I enjoyed it quite a lot. I often tell people that if you want to meet new people, get a dog, and if you want to meet girls, get a puppy. When I was in college I would see other guys my age (early 20s at the time) walking a new puppy around campus and they became magnets for girls.
Plus, humans were MEANT to have dogs, and dogs were meant to have humans. We grew up together; dogs are one of our first forays into genetic engineering. We created an animal which was perfectly acclimated to human companionship.
Since puppies turn into full grown dogs quite quickly, how often do you suggest I replace the puppy?
> We created an animal which was perfectly acclimated to human companionship
This is why I think having a cat is so much more satisfying. A dog loves you unconditionally, not by choice but because it was literally bred to do so. Despite this you still have to keep it on a leash. Cats by contrast stay with you because they want to, despite having every opportunity to move in with someone else.
I hate to say it, but pet cats kill an estimated 390 million animals annually in Australia alone. They should also be on a leash and not allowed to roam freely.
Cats, like any other predators, are born to kill other animals. It’s the natural order of things. My cat regularly brings back mice, birds, lizards and little snakes. I do have to get rid of them, but he makes me proud.
Natural, in the same sense that air conditioning and pickup trucks are natural.
Feral cats mostly live in cities, or are fed by farmers to encourage them to stay where they can protect feed stores. Like modern corn (maize), they don't naturalize into the wild.
Don't worry too much about it. Women are nearly as easily impressed by a full-grown dog.
Building a loving bond with a dog still takes work and energy. It's just that, genetically and instinctually they have a communication advantage: they can read humans better and emote to humans in more obvious ways than cats (having even evolved special muscles around the eyes to do so!). It's not like earning a dog's respect and love is inherently trivial. It's just that most people haven't moved past the "he's plotting to murder me" type assumptions about cats.
You have to keep a dog on a leash because it wants to explore the world around it. You can train a dog to be by your side no matter what, which you definitely cannot do with a cat, which needs a leash no matter how hard you’ve worked for its approval.
Cats won’t alert you if a stranger sneaks into your tribe’s living area or your apartment. I have a 10 pound mix and I guarantee you she would take on a thousand pound grizzly, knowing it was fruitless, in order to protect her pack, me included. A cat is going to just run away from things like that, and yeah, I’ve seen the cat-saves-toddler-from-bobcat stories, but those are the exception and probably more about territory. Who do you want: a person who was raised to be a loyal friend and would sacrifice themselves for you, or a person who is fickle and hard to win the approval of, and who, even when you do win their approval, starts eating you within a few hours of your death?
Not all dogs can be trained to stay next to you. At least, not by all owners.
I have a husky mix whose prey drive means he will charge at other dogs and kids, usually playfully but not always. Positive reinforcement is meaningless because he cares more about the chase than any treat.
Also, dog owners don’t seem to understand that the sweet, loving animal they have at home can be very aggressive towards anyone not part of their pack.
I have had off-leash dogs start scrapping with mine on walks and I GO OFF on those owners. 100% of the time they never apologize and instead tell me how gentle their dog is.
Well lady, your dogs charged mine, barking and growling. And I don’t like breaking up dog fights.
Point is dogs and owners come in all kinds. Lots of bad ones out there.
You can go on walks with a cat without a leash. I do it every night. It's a bit different to walking a dog, sure, but he never strays too far from me. It might not be for everyone, but it works for us.
> Since puppies turn into full grown dogs quite quickly, how often do you suggest I replace the puppy?
I’ll bite - you replace the puppy when the one you have expires after 14 years. Don’t be that guy.
Cats have also been engineered for companionship in areas where dogs weren’t economical. Cats were big in Egypt. However, like you said, cats have their own agenda - murder. Your outdoor cat has killed more small animals than your vet. Neuter/spay them and keep them inside.
Obviously you shouldn't get a pet if you can't take care of it. But getting a dog to meet people seems more fruitful than using dating apps, which seem almost completely pointless to me.
Buying a pug, I’d agree because it more directly comes from the breeder. As far as I know there’s no way a breeder or previous owner would know their dog is being adopted unless they’re actually making an effort to check in with the shelter. Even if they choose to, they no longer legally own the dog so that info is confidential. Regardless, they get no money out of sending them to a shelter.
Also, dogs that don’t get adopted are often killed, making it an ethical choice to adopt.
I think Europe is moving toward digital independence even more. They've still got Instagram and WhatsApp and everything, but I can see them moving more aggressively against Facebook's core business model (i.e., programmatic advertising) and essentially creating a European internet bubble. Would be nice for Europeans to be able to have their own social media sites which operate less aggressively on your attention span - like Facebook circa 2009 or even MySpace. Chronological feeds, less invasive advertising practices.
We are building a social network with a chronological feed.
I am thinking about how to make it sustainable without raising VC funding and without doing aggressive ad targeting. Obviously people are not going to pay for a social network. So maybe just very generic sponsored links?
Maybe the model should be more like public roads - for-profit companies are involved in building parts of them but they don't control the whole system.
Republicans do not have principles, only an unceasing desire for power. Any time they quote some principle at you, they are lying. They are trying to manipulate your sense of fairness to cynically get what they want. They will stab you in the back at the first opportunity. Republicans cannot be trusted under any circumstances.
docker and podman expect to extract images to disk, then use fancy features like overlayfs, which doesn't work on network filesystems -- and in hpc, most filesystems users can write to persistently are network filesystems.
apptainer images are straight filesystem images with no overlayfs or storage driver magic happening -- just a straight loop mount of a disk image.
this means your container images can now live on your network filesystem.
Do the compute instances not have hard disks? Because it seems like whoever's running these systems doesn't understand Linux or containers all that well.
If there's a hard disk on the compute nodes, then you just run the container from the remote image registry, and it downloads and extracts it temporarily to disk. No need for a network filesystem.
If the containerized apps want to then work on common/shared files, they can still do that. You just mount the network filesystem on the host, then volume-mount that into the container's runtime. Now the containerized apps can access the network filesystem.
This is standard practice in AWS ECS, where you can mount an EFS filesystem inside your running containers in ECS. (EFS is just NFS, and ECS is just a wrapper around Docker)
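For illustration only, here is a minimal sketch of that pattern using the Docker SDK for Python; the image name, command, and mount paths are made up, and it assumes the shared filesystem is already mounted on the host:

    import docker  # Docker SDK for Python: pip install docker

    client = docker.from_env()

    # Assume the NFS/EFS share is mounted on the host at /mnt/shared.
    # Bind-mount it into the container so the job can read/write the common
    # data, while the image itself lives in the node's local Docker storage.
    client.containers.run(
        "registry.example.com/lab/analysis:latest",             # hypothetical image
        command="python run_job.py --input /data/sample.csv",   # hypothetical job
        volumes={"/mnt/shared": {"bind": "/data", "mode": "rw"}},
        remove=True,  # discard the container afterwards; cached layers stay on disk
    )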
At the scale of data we see on our HPC, it is way better performance per £/$ to use Lustre mounted over a fast network. We would spend far too much time shifting data otherwise. Local storage should be used for tmp and scratch purposes.
It's called caching layers bruv, container images do it. Plus you can stagger registries in a tiered cache per rack/cage/etc. OTOH, constantly re-copying the same executable over and over every time you execute or access it over a network filesystem wastes bandwidth and time, and a network filesystem cache is both inefficient and runs into cache invalidation issues.
yes, nodes have local disks, but any local filesystem the user can write to is often wiped between jobs as the machines are shared resources.
there is also the problem of simply distributing the image and mounting it up. you don't want to waste cluster time at the start of your job pulling down an entire image to every node, then extract the layers -- it is way faster to put a filesystem image in your home directory, then loop mount that image.
> yes, nodes have local disks, but any local filesystem the user can write to is often wiped between jobs as the machines are shared resources.
This is completely compatible with containerized systems. Immutable images stay in a filesystem directory users have no access to, so there is no need to wipe them. Write-ability within a running container is completely controlled by the admin configuring how the container executes.
> you don't want to waste cluster time at the start of your job pulling down an entire image to every node, then extract the layers -- it is way faster to put a filesystem image in your home directory, then loop mount that image
This is actually less efficient over time, as there's a network access tax every time you use the network filesystem. On top of that, 1) you don't have to pull the images at execution time, you can pull them as soon as they're pushed to a remote registry, well before your job starts, and 2) containers use caching layers so that only changed layers need to be pulled; if only one layer changed in a new image, you only pull that layer, not the entire image.
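As a sketch of point 1, assuming a Docker-style setup (the registry, repo, and tag are placeholders):

    import docker  # Docker SDK for Python

    client = docker.from_env()

    # Run this from a cron job or a registry webhook handler, well before the
    # batch job is scheduled: only layers that changed since the last pull come
    # over the network, and the job later starts against a warm local cache.
    client.images.pull("registry.example.com/lab/analysis", tag="nightly")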
there generally is no central shared immutable image store because every job is using its own collection of images.
what you're describing might work well for a small team, but when you have a few hundred to a thousand researchers sharing the cluster, very few of those layers are actually shared between jobs
even with a handful of users, most of these container images get fat at the python package installation layer, and that layer is one of the most frequently changed layers, and is frequently only used for a single job
1. Create an 8 GB file on network storage which is loopback-mounted. Accessing files inside it requires block reads over the network on every access. According to your claim now, these giant blobs are rarely shared between jobs?
2. Create a Docker image in a remote registry. Layers are downloaded as necessary. According to your claim now, most of the containers will have a single layer which is both huge and changed every time python packages are changed, which you're saying is usually done for each job?
Both of these seem bad.
For the giant loopback file, why are there so many of these giant files which (it would seem) are almost identical except for the python differences? Why are they constantly changing? Why are they all so different? Why does every job have a different image?
For the container images, why are they having bloated image layers when python packages change? Python files are not huge. The layers should be between 5-100MB once new packages are installed. If the network is as fast as you say, transferring this once (even at job start) should take what, 2 seconds, if that? Do it before the job starts and it's instantaneous.
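Back-of-envelope, with assumed numbers (the layer size and link speed are mine, not from this thread):

    # transfer time for one changed layer over the node's network link
    layer_mb = 100        # assumed worst-case size of a changed python-package layer
    link_gbps = 10        # assumed 10 Gb/s Ethernet; HPC interconnects are faster
    seconds = (layer_mb * 8) / (link_gbps * 1000)
    print(f"~{seconds:.2f} s")   # ~0.08 s; even a 1 Gb/s link gives ~0.8 s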
The whole thing sounds inefficient. If we can make kubernetes clusters run 10,000 microservices across 5,000 nodes and make it fast enough for the biggest sites in the world, we can make an HPC cluster (which has higher performance hardware) work too. The people setting this up need to optimize.
100 nodes.
500gb nvme disk per node. maybe 4 gpus per node. 64 cores?
all other storage is network. could be nfs, beegfs, lustre.
100s of users that change over time. say 10 go away and 10 new ones come every 6 months.
everyone has 50tb of data.
tiny amount of code.
cpu and/or gpu intensive.
all those users do different things and use different software. they run batch jobs that go for up to a month. and those users are first and foremost scientists. they happen to write python scripts too.
edit:
that thing about optimization... most of the folks who set up hpc clusters turn off hyperthreading.
Container orchestrators all have scheduled jobs that clean up old cached layers. The layers get cached on the local drive (only 500gb? you could easily upgrade to 1tb, they're dirt cheap, and don't need to be "enterprise-grade" for ephemeral storage on a lab rackmount. not that the layers should reach 500gb, because caching and cleanup...). The bulk data is still served over network storage and mounted into the container at runtime. GPU access works.
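For example, with a Docker-based node agent the cleanup pass can be as simple as the sketch below (the retention window is an assumption, and apptainer/podman setups would look different):

    import docker  # Docker SDK for Python

    client = docker.from_env()

    # Run periodically on each node (cron, or the scheduler's epilog script):
    # drop unused image layers older than a week so the local NVMe cache stays bounded.
    report = client.images.prune(filters={"until": "168h"})
    print(report.get("SpaceReclaimed", 0), "bytes reclaimed")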
This is how systems like AWS ECS, or even modern CI/CD providers, work. It's essentially a fleet of machines running Docker, with ephemeral storage and cached layers. For the CI/CD providers, they have millions of random jobs running all the time by tens of thousands of random people with random containers. Works fine. Requires tweaking, but it's an established pattern that scales well. They even re-schedule jobs from a particular customer to the previous VM for a "warm cache". Extremely fast, extremely large scale, all with containers.
It's made better by using hypervisors (or even better: micro-VMs) rather than bare metal. Abstracting the allocation of host, storage and network makes maintenance, upgrades, live migration, etc. easier. I know academia loves its bare metal, but it's 2025, not 2005.
Well, call them lazy, but once you have e.g. BioContainers, in which individual bioinformatics programs are prepackaged, hardly any scientist in that field is going to reinvent the wheel and waste time trying to install all the requirements and compile a program when it already runs "good enough" from a downloaded SIF.
Sure, at times, with limited resources, one can try to speed up some frequently used software by creating a SIF from scratch with, say, a newer or more optimized Linux distro (if memory serves me right, containers using Alpine Linux/musl were a bit slower than containers using Ubuntu). But in the end, splitting the input into smaller chunks, running e.g. genome mapping on multiple nodes, then combining the results should be way faster than "turbo-charging" the genome mapping program on a single node, even with a big number of cores.
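A rough sketch of that split/map/combine shape, where the file names, record format, chunk count, and the submit step are all placeholders for whatever scheduler and mapper you actually use:

    from pathlib import Path

    N_CHUNKS = 16  # assumed: one chunk per node/job

    # FASTQ records are 4 lines each; split them round-robin into N chunk files.
    lines = Path("sample.fastq").read_text().splitlines()   # hypothetical input
    records = [lines[i:i + 4] for i in range(0, len(lines), 4)]
    chunks = [records[i::N_CHUNKS] for i in range(N_CHUNKS)]

    for i, chunk in enumerate(chunks):
        Path(f"chunk_{i:02d}.fastq").write_text(
            "\n".join("\n".join(rec) for rec in chunk) + "\n"
        )
        # submit one mapping job per chunk here (e.g. an sbatch array task),
        # each running the same container image against its chunk file

    # once all jobs finish, merge the per-chunk outputs
    # (for BAM output that would be `samtools merge` rather than plain concatenation)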
the "network tax" is not really a network tax. the network is generally a dedicated storage network using infiniband or roce if you cheap out. the storage network and network storage is generally going to be faster than local nvme.
I'm so glad to hear that from someone unprompted. I tried WPF and it was a million times harder to use than WinForms, and I couldn't even be bothered to try out MAUI (although I accept it as an apology for WPF lol). I'm still using a WinForms application every day (Git Extensions) and have been able to contribute to it not least because it's the good old familiar WinForms.
This is not to say that WinForms is without its problems. I often wonder what it could be like if all the effort of making WPF and MAUI had gone into maintaining, modernizing and improving it.
I think that the native GUI development APIs provided by OS vendors need a kind of "headless" implementation first, where you can build UI in pure code like winforms, and then they should offer a framework on top of that. I, personally, hate XAML. It's stricter than HTML/CSS and very opinionated about how to organize your application. I feel that XAML frameworks should have a common Winforms-like API behind them that you can switch to any time you want. But I've found that using the C# code-behind APIs manually for WPF, UWP, MAUI, etc, is far more verbose than Winforms was.
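Not C#, but as a stand-in for the "UI in pure code" shape, here's the same idea in Python/Tkinter; WinForms code-behind reads much like this, just with controls, properties and event handlers instead of XAML:

    import tkinter as tk

    # Build the whole UI imperatively: create widgets, set properties, wire events.
    root = tk.Tk()
    root.title("Pure-code UI")

    name_label = tk.Label(root, text="Name:")
    name_entry = tk.Entry(root, width=30)
    ok_button = tk.Button(root, text="OK", command=lambda: print(name_entry.get()))

    name_label.grid(row=0, column=0, padx=4, pady=4, sticky="w")
    name_entry.grid(row=0, column=1, padx=4, pady=4)
    ok_button.grid(row=1, column=1, padx=4, pady=4, sticky="e")

    root.mainloop()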
My only major problem with winforms is that it's still using GDI under the hood which, despite what many people believe, is actually still primarily software-rendered. If they could just swap out GDI for Direct2D under the hood (or at least allow a client hint at startup to say "prefer Direct2D") it would really bring new life to Winforms, I think.
I would also like a C++ native GUI API that's more modern than MFC.
"C# Markup" [1] [2] sounds a lot like what you are looking for. As the only "second party" option in this space it's interesting that it is so MAUI only/MAUI focused, but I suppose that's the "new hotness".
There have been similar F# libraries and third-party C# libraries for a while that seem nice to work with in similar ways.
Unfortunately that is something Microsoft seems incapable of.
MFC was already relatively bad versus OWL. Borland[0] kept improving theirs with VCL and nowadays FireMonkey.
Then there is Qt as well.
Microsoft instead came up with ATL, and when they finally had something that could rival C++ Builder, with C++/CX, a small group managed to replace it with C++/WinRT because they didn't like language extensions; the irony.
With a complete lack of respect for paying customers, as C++/WinRT never ever had the same Visual Studio tooling experience as C++/CX.
Nowadays it is in maintenance mode, stuck in C++17, working just well enough for WinUI 3.0 and WinAppSDK implementation work, and the group responsible is having fun with Rust's Windows bindings.
So don't expect anything good coming from Microsoft in regards to modern C++ GUI frameworks.
[0] - Yes nowadays others are at the steering wheel.
Borland was pretty good on the GUI front; I think we're forgetting how easy it was to get something rolling in Delphi. It's baffling Microsoft still hasn't gotten their stuff together on this. They've been just releasing new frameworks since the WinRT era and hoping something sticks.
Firstly, that nobody believes them when they swear that {new GUI framework} will be the future and used for everything. Really. Because this time is not like those other times.
Secondly, pre-release user feedback. Ironic, given other parts of Microsoft do feedback well.
Imho, the only way MS is going to truly displace WinForms at this point is to launch a 5-year project, developed in the open, and guided in part by their community instead of internally.
And toss a sweetener in, like free app signing or something.
We spent the better part of a calendar year researching what framework to update our MFC app to. We really liked the idea of staying first-party since our UI is explicitly Windows-only, and we looked at every framework - MAUI, winforms or WPF with a C# layer, WinUI3...
It quickly became apparent that WinUI3 was the only one even close to viable for our use case, and we tried getting a basic prototype running with our legacy backend code. We kept running into dealbreakers we hoped would be addressed in the alleged future releases, like the lack of tables, or the baffling lack of a GUI designer (something every previous Windows framework had).
Agreed it is the easiest; however, it is also possible to use WPF in the same style as Forms, with more features, no need to go crazy with MVVM, just stay with plain code-behind.
Having said this, from 3rd parties, Avalonia is probably the best option.
While I think Uno is great as well, they lose a bit by betting on WinUI as the foundation on Windows, which has been nothing but disappointment after disappointment since Project Reunion.
My guess is that this is for impatient people; people who think that the prescribed use cases are somehow necessary for their "workflows"; people who subscribe to terms like "cognitive friction" within the context of these use cases; people who are...sort of lazy.
That's a really good question. Maybe it's because laziness is associated with a lack of intellect? And certain technologies, like AI and other software, are meant to augment our intellect.
These fancy words carry an intellectual/productive effect. When they're put to use it probably makes people feel like they're getting things done. And they never feel lazy because of this.
Smartphones replaced laptops. A huge number of people don't own a laptop or desktop PC - they do all their computing via smartphone or maybe tablet. My wife almost never opens her laptop, nor does my mom.
It is hard to say when the peak of laptops in circulation was, right? Because simultaneously the tech has been maturing (longer product lifetimes) and smartphones have taken some laptop niches.
I’m not even clear on what we’re measuring when we say “replace.” Every non-technical person I know has a laptop, but uses it on maybe a weekly basis (instead of daily, for smartphones).
It depends on how good they are. I'd pay a little to not have to prompt and wait for a bunch of icons, a little more to not have to curate and reroll, and more to not have to train a LoRA, all assuming those are buying me quality thresholds I care about.
Personally I wouldn't use AI-generated images in production. To me AI-generated images are simply a curiosity or toy. Maybe placeholders while you get actual art created, but to use them in a final product is just anti-humanist.
I see this sort of argument a lot, but I think it's oversimplified; there's definitely a category difference between the kinds of automation we were doing before the recent generative AI boom and what we are now doing with generative AI. The type of stuff we were automating before was largely (though obviously, not entirely) stuff that I think nobody really wanted to have to do in the first place, like manually inputting a bunch of data or what have you. If there weren't a category difference, then this wouldn't have been a big deal worth so much VC funding and market cap.
That said, that's not my real reason to not use generative AI. My real reason is that it still kinda sucks at fine details, and that annoys me greatly. These images have a great consistent style to them, but you can see that they're not really that clean. It's possible to tell at a glance, but it's really obvious when you zoom in on them, especially depending on the icon. Whether this matters to you is up to taste; personally, I'd rather have less detailed vector icons that are less technical but are very clean. If I really could do it, though, what I'd actually prefer is hand-crafted icons that are similar to these but with careful attention to detail and no weird artifacts when you pay close attention.
I can see that people largely don't care. Some people just have no taste and will jam ugly image generations with obvious, blatant artifacts into their blog posts; you do you. Others will use generative AI carefully in a way where you're not immediately sure if it's gen AI or not, but you probably suspect it; I kinda dislike this, but I can 100% understand it. Thiings is kind of in that group.
Very possible that some day soon genAI will be able to just produce perfect looking icons like this, no text errors, no weird artifacts, maybe even produce them in flawless looking vector SVGs or something. Maybe. For now though, it's tempting, but probably not for me.
I would happily use these as inspiration or reference, though. The broad strokes are good, it's the details that bug me to no end.
It only doesn't require work if you want a one-off amusement. If you want something specific that matches the surrounding graphics and shows exactly what you want, you're going to spend time and effort speccing it and iterating.