Just adding that JavaScript is also the best at something: running sandboxed code, especially in a web browser. Like SQL for databases, JS has no competition for web development (the web needs full compatibility, and adding another language doesn't have a good value proposition when you can just compile to JS), and so it is immensely popular.
Most languages have something they are the best at. SQL is probably THE language with the strongest value proposition - relational databases are even more important and ubiquitous than web browsers. But why doesn't SQL have any competition?
I would love to see an alternative to SQL in the style Jamie suggests. Maybe SQL would immediately not be the best anymore?
However, if you expect that in the future the population will keep growing by orders of magnitude, what does that imply?
A) you're just in an unlikely position
B) the population will rapidly shrink and never recover
C) this is the most "interesting" time in history, and there are so many simulations of it by the people of the far future that we are more likely to exist in this time.
B; the argument goes: if humans are going to take over the galaxy and become a multi-trillion-population species, then a dart thrown anywhere into the population of all humans who ever lived would most likely land in the region of greatest population. Therefore the time you live in is probably the time of highest population, and so we never become a galaxy-spanning species; we only dwindle from here.
And it's a daft argument because if you don't have a soul, you are the product of your environment. You couldn't be born as someone else, or somewhere else, or somewhen else, just like the River Amazon couldn't be on Mars or in Pangaea, because it's defined as "the thing in Brazil, currently". You couldn't be born in the Wild West because you are defined as "the child of your parents" and they weren't there, then. You didn't end up /in/ that meat body, you /are/ that meat body.
(And if you do have a soul, and they are randomly assigned to meat bodies, this argument is still like saying "roll two dice, the most likely combined outcome is a 7, I got two dots and one dot so that must be what 7 is")
There was early work on using IPFS as a substitute distribution mechanism, but the API changed and nobody has picked up the existing work yet. But I agree that this would be a great feature and I hope someone will feel motivated to pick up and assemble the pieces.
The implementation requires that you have at least one authorized substitute server advertising the same hash.
In simplified terms, if ci.guix.gnu.org advertises a substitute for /gnu/store/abc123-foo, with the checksum "xyz789" (and the cryptographic signature of that advertisement checks out), your daemon can safely download that file over P2P.
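That trust model can be sketched in miniature. This is a toy, not Guix's actual protocol: the real daemon verifies public-key signatures on substitute metadata, whereas this sketch uses an HMAC with a shared key, and all names and values are made up. The point it illustrates is that the peer you download from doesn't need to be trusted, only the signed advertisement does, because the content can be re-hashed locally.

```python
import hashlib
import hmac

# Hypothetical key shared with the substitute server, for illustration only.
TRUSTED_KEY = b"example-key"

def advertise(store_path: str, content: bytes) -> dict:
    """What a substitute server publishes: path, checksum, signature."""
    checksum = hashlib.sha256(content).hexdigest()
    sig = hmac.new(TRUSTED_KEY, f"{store_path}:{checksum}".encode(),
                   "sha256").hexdigest()
    return {"path": store_path, "checksum": checksum, "sig": sig}

def verify_download(advert: dict, blob: bytes) -> bool:
    """Accept a blob fetched from any untrusted peer iff it matches
    the signed advertisement."""
    expected_sig = hmac.new(TRUSTED_KEY,
                            f"{advert['path']}:{advert['checksum']}".encode(),
                            "sha256").hexdigest()
    if not hmac.compare_digest(expected_sig, advert["sig"]):
        return False  # advertisement wasn't signed by a trusted key
    return hashlib.sha256(blob).hexdigest() == advert["checksum"]

good = b"compiled foo"
ad = advertise("/gnu/store/abc123-foo", good)
print(verify_download(ad, good))         # True
print(verify_download(ad, b"tampered"))  # False
```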
Ah, I think I'm misunderstanding the intent here. Clearly P2P distribution of checksummed binaries can be safe, I was just wondering if there were a solution to the build farm being behind. It seems like you can't really trust the first build of any artifact unless it comes from a central source.
There have been discussions of an "N of P" distribution, i.e. if 80% of available peers (or substitute servers) advertise the same build result, then treat it as safe.
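A hypothetical "N of P" quorum check along those lines could look like the following sketch; the 80% threshold and the flat list of advertised checksums are made up for illustration, not anything from the actual discussions.

```python
from collections import Counter

def quorum_checksum(adverts: list, threshold: float = 0.8):
    """Return the checksum advertised by at least `threshold` of peers,
    or None if no single checksum reaches the quorum."""
    checksum, n = Counter(adverts).most_common(1)[0]
    return checksum if n / len(adverts) >= threshold else None

peers = ["xyz789"] * 8 + ["evil00"] * 2   # 8 of 10 peers agree
print(quorum_checksum(peers))             # xyz789
print(quorum_checksum(["a", "b", "c"]))   # None -- no quorum
```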
I expect that both will be implemented, and the choice left up to the user.
I would also like to know. I think "massive amount of drawbacks" is actually "one not-very-important-in-practice drawback". Which is just that when changing the behaviour of a class, you have to change its API.
Mutation is not merely a performance optimization; it also makes code more concise and expressive. Mutation makes things like `cfg.tabs[2].title = "Suspended"` possible to write in one line. Trying to achieve the same thing with only immutable data structures forces you to clone cfg.tabs[2] with a new title, then clone cfg.tabs with a new element at [2], then clone cfg with the new tabs.
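For illustration, here is that clone chain spelled out in Python with frozen dataclasses; the `Config`/`Tab` shape is invented just to mirror the `cfg.tabs[2].title` example.

```python
from dataclasses import dataclass, replace
from typing import List

@dataclass(frozen=True)
class Tab:
    title: str

@dataclass(frozen=True)
class Config:
    tabs: List[Tab]

cfg = Config(tabs=[Tab("Home"), Tab("News"), Tab("Mail")])

# The clone chain, step by step, with only immutable values:
new_tab = replace(cfg.tabs[2], title="Suspended")    # clone tabs[2]
new_tabs = cfg.tabs[:2] + [new_tab] + cfg.tabs[3:]   # clone tabs
new_cfg = replace(cfg, tabs=new_tabs)                # clone cfg

print(new_cfg.tabs[2].title)  # Suspended
print(cfg.tabs[2].title)      # Mail -- the original is untouched
```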
You don't need mutation to write that expressively; you can achieve the same thing with lenses. With the advantage that if you ever get confused about what's happening you can break the lens down into what it's "really" doing and reason about that, in a way that you just can't with language-level mutation.
Syntax shortcuts are good when they're built on a rigorous foundation. But if you build a shortcut directly into the language, you'll never be able to retrofit a reasonable model for how it works.
cfg.lens(tabs).composeOptional(index(2)).lens(_.title) set "Suspended"
You could certainly define a shortcut to simplify the `composeOptional` part, but doing it explicitly like this makes it clear that we actually have to make an important choice: what do you want to happen when there is nothing at index 2?
Thank you. To me that seems significantly more verbose; is it possible at least to pack it into a generic function and apply it to most/all assignments?
The mixing of the array access (which is another piece of language-level special-case syntax) with the properties is what makes it verbose - "normally" you could just do something like
config.lens(tabs[2].title) set "Suspended"
Admittedly that relies on a macro, but the macro is pretty lightweight syntax sugar - if you wanted to do it in 100% vanilla code you'd need a .lens at every step and a slightly more explicit way to name the "properties", i.e.
config.lens(_.tabs).lens(_(2)).lens(_.title) set "Suspended"
> is it possible at least to pack it into a generic function and apply it to most/all assignments?
I don't quite understand? You can certainly write generic functions that work for any lens whose "target" is a given type.
I've heard about lenses, but never actually worked with them. This syntax seems to construct a "path" of sorts from the root to a field. What language is this? What type does the `set` operator/keyword return?
It's Scala with Monocle (and it's off the top of my head, so apologies for any mistake). `set` usually returns a function for transforming values of the root type, but this simplified syntax will apply it immediately to `config`, so it'll return a copy of config with the modification applied to it.
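For readers who, like the parent, haven't worked with lenses, the mechanics can be sketched in a few lines of Python. This is a toy, not Monocle, and all the names are made up: a lens pairs a getter with an immutable setter, and lenses compose into the "path" from the root to a field.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Lens:
    get: callable  # get(whole) -> part
    set: callable  # set(whole, part) -> new whole

    def compose(self, other: "Lens") -> "Lens":
        """Focus deeper: a lens from whole to part-of-part."""
        return Lens(
            get=lambda whole: other.get(self.get(whole)),
            set=lambda whole, part: self.set(
                whole, other.set(self.get(whole), part)),
        )

def field(name: str) -> Lens:
    """Lens focusing on one field of a frozen dataclass."""
    return Lens(get=lambda o: getattr(o, name),
                set=lambda o, v: replace(o, **{name: v}))

@dataclass(frozen=True)
class Title:
    text: str

@dataclass(frozen=True)
class Tab:
    title: Title

title_text = field("title").compose(field("text"))
tab = Tab(Title("Home"))
print(title_text.set(tab, "Suspended"))  # Tab(title=Title(text='Suspended'))
print(title_text.get(tab))               # Home -- original unchanged
```

Unlike the Scala example, `set` here takes the root value directly and returns the modified copy.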
You're right, it's the same synchronisation/consistency problem at a different scale, which I honestly find pretty mindblowing. However there is one major difference. When you add fault tolerance requirements (which you typically do in distributed databases and web applications, but don't always in multithreaded applications) then it changes the approaches and algorithms.
Thank you for your advice. I put this list together, though there are some libraries on it I haven't actually used. I've also considered using the GitHub API for searching (GitHub has a huge user base and a huge number of repositories), but I haven't tried that yet.
This would be solved if python used an (OS-specific) cache directory for its .pyc files. I have always disliked .pyc files... here's a concrete reason!
Question: what does python do if it doesn't have write permission in the current working directory? Not write the cache?
E.g., if you want to back up your home dir, and omit caches, since they can be regenerated. It's a lot easier if programs write their cache data to ~/.cache / $XDG_CACHE_HOME than if they intermix it / scatter it about.
But how does it help to have one directory for all Linux-based OSes (or one directory for RHEL, one for Ubuntu, one for Debian, etc.), and one for FreeBSD and one for OpenIndiana?
At least, that's what I interpret "OS-specific" to mean.
More along the lines of the original comment you're replying to: if the cache were in, say, ~/.cache, it wouldn't get swept up in the repository's commits, since the cache data would no longer be inside the repository's working directory. Then it would never get uploaded to GitHub, and this security issue would never happen.
I have seen a surprising number of people — some who are engineers by profession too, and ought to know better — just git add everything, and then commit it all without looking. One should review the diff one has staged to see if it is correct, but alas…
That's possible with 3.8's PYTHONPYCACHEPREFIX, yes?
Perhaps it's worthwhile for someone to blog about this more/promote it as a best practice? Though what's missing is the hook to connect it up appropriately for the given platform.
I see now that "OS-specific" was meant to be interpreted as "the OS-defined mechanism to find a cache directory", not "a cache directory which differs for each operating system".
I would not have been confused by the term "platform dependent", which is what Python's tmpdir documentation uses, as in: "The default directory is chosen from a platform-dependent list" at https://docs.python.org/3/library/tempfile.html?highlight=tm... .
That env var is new in 3.8. I've been looking forward to it to stop docker writing root owned cache files back to a bind mounted file system. I might set it to $HOME/.cache/python globally as well.
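For anyone wanting to try it, a quick sketch (assumes CPython 3.8+; the directories and module name are temp paths invented for illustration) showing that `sys.pycache_prefix`, the in-process counterpart of PYTHONPYCACHEPREFIX, redirects .pyc files away from the source tree:

```python
import pathlib
import sys
import tempfile

cache_dir = tempfile.mkdtemp()
sys.pycache_prefix = cache_dir  # same effect as PYTHONPYCACHEPREFIX

src_dir = tempfile.mkdtemp()
pathlib.Path(src_dir, "demo_mod.py").write_text("VALUE = 42\n")
sys.path.insert(0, src_dir)

import demo_mod  # compiling this module writes its .pyc under cache_dir

print(demo_mod.VALUE)                                    # 42
# No __pycache__ next to the source; the .pyc lands under cache_dir instead.
print((pathlib.Path(src_dir) / "__pycache__").exists())  # False
print(any(pathlib.Path(cache_dir).rglob("*.pyc")))
```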
> what does python do if it doesn't have write permission in the current working directory? Not write the cache?
In the case where the interpreter can't find any place with write access to store the cache, it will not write it, yes. That means it will have to re-parse the source file into bytecode every time it is loaded (and fail to write the bytecode to the cache each time).
That sounds like a recipe for even more people accidentally storing and distributing that bytecode without wanting to, because it won't be immediately visible that it's there; many tools don't show or highlight the existence of those files.