Joe was such a creative thinker, and he was never (that I could tell) embarrassed by the prospect of an idea of his proving to be a poor one, so it was always interesting to hear him talk.
Genuinely curious, thinking outside the box (writing a new programming language on top of Prolog?), and he treated people with utmost respect. I was a nobody lucky enough to escort him around Chicago one day when he was attending a conference, and we spent a couple of hours talking about art, Erlang, Riak, and man I wish I could remember what else.
Really? And every developer will create a custom, broken, non-standard "namespace" system: admin_get_user vs. readers_get_user, or maybe admin.get_user vs. readers.get_user, or maybe get_user_admin, get_user_readers, etc. Surely this will spur a lot of creativity, but I am not sure we need that.
I've thought a lot about storing program structures in distributed hash tables and came to the conclusion that the only viable languages that can be safely stored are purely functional. If you consider OOP languages, there are many stateful dependencies; for example, a method may rely on a constructor initializing a private variable in some specific way, so the smallest unit of modularity cannot be a method. Similarly, an entire class could rely on methods being called in a certain order. Even though classes are designed to be self-contained, stateful behavior really mucks up the ability to separate things into constituent parts.
Purely functional code, on the other hand, has none of these problems. This is the approach taken by the Unison Language people, which I think makes the right design decisions.
It's stateful without transactions in the simplest implementation, but there's no reason you couldn't create a procedural language that maintains state and enforces transactional separation.
In either language class, you need to manage transactions, usually implicitly, if you want to do any meaningful work. This is a gap that either language class can solve, and in many cases there are working implementations of these ideas that do just that.
Purely functional introduces lots of issues on its own. (Look at all the monad insanity Haskell has to do to get the equivalent of a print statement.)
It's very hard to do that if you want to use higher-order functions ergonomically. In fact, that's the entire reason this area of language design is difficult at all. Specifically, suppose you define a higher-order function (e.g. `f :: (a -> b) -> c` in Haskell notation) and you want to specify that it is "pure" if and only if its argument is "pure". How do you do that in D? I bet you can't!
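To make that concrete (a minimal Haskell sketch of my own): the purity of the argument is carried in its type, so pure and effectful arguments go through different combinators:

    -- map's argument must be pure: its type is (a -> b)
    squares :: [Int]
    squares = map (^ 2) [1, 2, 3]

    -- mapM_'s argument is effectful: its type is (a -> m b),
    -- and the effect shows up in the result type
    printAll :: [Int] -> IO ()
    printAll = mapM_ print

Haskell sidesteps the "pure iff the argument is pure" question by making the effect part of the argument's type rather than an attribute of the enclosing function.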
Not the OP but I’m guessing the problem they are referring to is when people think they need to understand what a monad is from a category theory pov before they can write meaningful code. Or worse yet, when they are trying to teach the language to others and think they need to explain that in order when a newcomer just wants to print a string.
It’s like a built in bikeshedding bait for the whole programming language.
It's no more insane than async programming. In fact, it's less insane: it's what you'd get if you let async and await be first-class citizens in your language instead of awkward special cases.
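For instance (a rough sketch), a do-block in IO reads much like an async function body, except the sequencing is ordinary first-class syntax rather than a special case:

    main :: IO ()
    main = do
      line <- getLine                      -- roughly: const line = await getLine()
      putStrLn ("you said: " <> line)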
1. You don't need a monad to do IO in Haskell; you can do that directly.
2. A monad is a tool that offers a "grip" on IO, i.e. your code still yields easily to formal proofs despite using IO, for example.
3. The sanity and insanity of controlled and uncontrolled IO is, IMHO, the exact opposite of what you seem to imply.
You understand the data structure. Probably the chaining rule too, maybe even the "monoid in the category of endofunctors" bit. What you don't understand is why it exists.
Monads are a solution for dealing with IO in a strongly typed programming language where we want all functions to be pure and we don't have access to linear types. (Almost) every other programming language, when it had the chance, just threw in the towel without a fight and decided to allow IO everywhere. Haskell decided to stick to its principles instead, and produced something different, guided by a different set of constraints. Different means different tradeoffs in different places. If Haskell had instead decided to allow mutability everywhere, like most programming languages, we wouldn't be talking about it, because it would be just another language with weird syntax, and you can find a lot of those in the list of defunct programming languages on Wikipedia.
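To sketch what sticking to those principles buys you (my toy example): the type alone tells you whether a function can perform IO:

    double :: Int -> Int     -- pure: no IO can happen here
    double x = 2 * x

    readAndDouble :: IO Int  -- effectful: the IO tag is part of the type
    readAndDouble = fmap (2 *) readLn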
And the juice is worth the squeeze if you want to see breakthrough things in programming, and not just another Algol with updated syntax.
For reference, other samples of groundbreaking (to some degree) stuff:
* Rust: mutability only when the borrow checker allows it
* Coroutines/continuations: functions that depart from the entry/exit model
* Prolog: a programming language constructed around predicates with four ports: call/exit/fail/redo
Although I disagree with TylerE about the issue that started this thread,
I agree that assuming that someone doesn't understand something is generally considered rude and would advise people against that. I would doubly advise them against it if they are from the Haskell community, because it is one way our community could get a poor reputation. Instead I hope we can be welcoming and tolerant, even of people who may not understand things, in the hope that they will come to understand them through interacting with us, or at the very least enjoy the experience.
> I agree that assuming that someone doesn't understand something is generally considered rude.
I think assuming that someone doesn't understand something is only rude if there are no indicators for a misunderstanding.
javcasas seemed to assume that TylerE doesn't understand Monads because TylerE stated that there is a Monad insanity in Haskell to replicate a print statement.
I would also assume javcasas' comment wasn't made in good faith, but I still don't see that an insult was voiced.
Not every rudeness can be interpreted as an insult, I think.
You don't need to understand monads to do IO. From [0]:
> There is nothing that has to do with monads at all in printing a string. The idea that `putStrLn "hello world"` is monadic is as absurd as saying that `[1,2,3]` is monadic.
You haven't linked any exemplar tutorials, but I feel confident in saying the ones you're referring to teach the general concept of monads, and not how to do IO with monads.
Quite a number of JavaScript developers learned how to use `andThen` with Promises/A+ back in the day, and they didn't need to learn about monads either. (In fact, when it was raised that promises are really monads, there was serious drama and outrage [1] -- an existence proof if there ever was one, that you can use something productively without understanding it as a monad.)
Likewise, nobody would claim that you need to understand monoids before you can concatenate lists. Concatenating lists is as native to lists as sequencing commands is to IO; it's just part of how those types work. The monad interface abstracts over that idea, and learning that abstraction in its full generality is, typically, what people stumble on.
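A minimal illustration of that point:

    nums :: [Int]
    nums = [1, 2] ++ [3]   -- concatenating lists, no monoid theory required

    greet :: IO ()
    greet = do             -- sequencing IO actions, no monad theory required
      putStrLn "hello"
      putStrLn "world"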
You seem to have formed an uninformed opinion: you saw some people struggling to do one thing, and just assumed they were doing something completely different.
Those people writing monad tutorials (those are written about as often as they are read) aren't trying to learn how to do I/O.
I think there was mostly a problem where people refused to stop trying to explain it with confusing burrito analogies. It's not too bad if you read the type definition and maybe read some compiler IL.
(And of course "it's just a monoid in the category of endofunctors".)
    f x y z = do
      let
        foo = x + y
        bar = foo * z
      putStrLn $ "Here's bar" <> (show bar)
      pure bar
If all you want is to log to the console for debugging from pure code, it would look like
    import Debug.Trace (trace)

    f x y z =
      let
        foo = x + y
        bar = foo * z
      in
        trace ("Here's bar" <> (show bar)) bar
Not recommended, but if you're dead set on commingling your effectful and pure code, you can freely mix them with the function unsafePerformIO.
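Something like this (a sketch of that escape hatch, emphatically not recommended):

    import System.IO.Unsafe (unsafePerformIO)

    -- The IO action runs whenever (and however many times) this thunk
    -- is forced; evaluation order is now at the compiler's mercy.
    hostname :: String
    hostname = unsafePerformIO (readFile "/etc/hostname")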
In my experience writing large applications with significant test coverage, the time saved by not screwing around with stubs and mocks, or dealing with the external effects of test runs, far outweighs the cost: the relative annoyance of a couple of "do"s and "pure"s, and of separating data transformation from IO, is a small price to pay.
Ah, so now your original statement becomes clearer! When you said
> Look at all the monad insanity Haskell has to do to get the equivalent of a print statement
perhaps you really meant
> Look at all the monad insanity Haskell has to do to get the equivalent of a print statement within pure code
I still don't agree as such, but yes, there is indeed ceremony involved in turning something that was pure into something effectful. That is in fact the whole point!
The entire point of monads is restricting the ability to do these operations to functions that are tagged with having this ability, precisely so you _cannot_ invoke IO in a random pure function. It's the entire point of the language, in fact.
If you want to just write IO, you can just define a function with an IO () value and use it in any other function that resolves to IO (), or call other functions that live in IO a, or any pure functions, etc. etc.
I am no Haskell buff, but I believe the common complaint is that "just need to mark your function as IO ()" has non-local effects, you end up needing to propagate a bunch of extra stuff everywhere.
AFAIU the Haskell community recognized the issue as common enough to grant the existence of Debug.Trace.
> I am no Haskell buff, but I believe the common complaint is that "just need to mark your function as IO ()" has non-local effects, you end up needing to propagate a bunch of extra stuff everywhere.
That's a feature. It forces you to separate pure functions from I/O. If you spend time thinking about how you (re)factor your work you can end up with most of the interesting logic being in pure functions that are then very easy to write tests for (because you don't have to mock network services, clocks, etc.), and all the interesting I/O logic gets segregated and made [hopefully] small.
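A toy example of that factoring (my sketch, not from the parent): keep the interesting logic pure and push IO to a thin shell:

    -- Pure core: trivially testable, no mocks, stub clocks, or network fakes.
    nextInvoiceNumber :: Int -> Int
    nextInvoiceNumber n = n + 1

    -- Thin IO shell: the only part that touches the outside world.
    main :: IO ()
    main = do
      n <- readLn
      print (nextInvoiceNumber n)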
> I believe the common complaint is that "just need to mark your function as IO ()" has non-local effects, you end up needing to propagate a bunch of extra stuff everywhere
Yes, indeed you do. But it's a bit strange that that is a complaint. It's the whole point of fine-grained effect tracking: if you change what effects a function performs, then you have to acknowledge that by changing the code that uses that function (directly or indirectly)!
> you end up needing to propagate a bunch of extra stuff everywhere.
Declaring a function as non-IO is a contract to your callers that you don't do IO. You don't need to do it. You can write everything in IO if you choose. You can also call into the wonderful ecosystem of libraries, because IO functions can call other IO functions, as well as non-IO functions.
There is only ever friction if you declare your function to be IO-free. If you declare your function to be IO-free, but call IO from inside it, it's a compile error because of course it is.
So why bother declaring anything IO-free? If you do arbitrary IO in a parser, you can't backtrack. If you do arbitrary IO in parallel code, you invite race conditions. Transactions are my favourite example, though:
.NET [1]
> Disillusionment Part I: the I/O Problem
> It wasn’t long before we realized another sizeable, and more fundamental, challenge with unbounded transactions [...] What do we do with atomic blocks that do not simply consist of pure memory reads and writes? (In other words, the majority of blocks of code written today.) This was not just a pesky question of how to compile a piece of code, but rather struck right at the heart of the TM model.
Scala: [2]
> ScalaSTM does not have the goal of running arbitrary existing code, which is where most of their problems arose.
Java/Akka: [3]
> STM is considered as a failed experiment
Clojure: [4]
> Very simply the side-effects will happen again. In the above case this probably doesn’t matter, the log will be inconsistent but the real source of data (the ref) will be correct.
Sorry for the rant, but I really needed to highlight that restricting non-IO code from calling IO is not some language-design flaw created by out-of-touch academics. It's a fundamental problem.
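To illustrate with Haskell's STM (a minimal sketch using the stm package): the transaction body lives in the STM type, and that type is exactly what stops you from doing arbitrary IO inside it:

    import Control.Concurrent.STM

    -- The runtime may retry this transaction safely precisely because
    -- its type rules out arbitrary IO: no log line can "happen again".
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to n = do
      modifyTVar' from (subtract n)
      modifyTVar' to (+ n)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 25)
      readTVarIO b >>= print   -- prints 25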
    printGreeting = print "Hello, World!"
    main = printGreeting
works every bit as much as
    main = print "Hello, World"
The only difference for main is that it's run because it's the entry point, but that's true of most languages and almost certainly not what you're complaining about, I think?
I suppose it's only natural, having solved distributed systems with Erlang, Joe would move on to tackling the hardest problem in computer science - Naming Things.
There’s a spectrum of what different programming languages call “modules” and what they use them for. On one end, languages like ML have “true” modules in the conceptual sense, where a module is a unit of abstraction, the way that classes and interfaces in many languages are units of abstraction (though when you dig into it, I believe modules are more fundamental and more powerful). On the other end you have the situation of: “Programs are collections of text files, people don’t want to put all their code in one file, let’s call a file a module.”
Point being, some ways that programming languages use the concept of a module are deep and let you do things you couldn’t otherwise do, some are more dispensable.
Then there’s Smalltalk where I believe programs aren’t collections of text files, you actually browse your code in a sort of code browser and it’s stored as runtime objects in a virtual machine image, basically!
Then there’s the matter of how code is released and distributed… packaging and libraries. (In practice, the words “module” and “package” are used in overlapping ways by different languages.) Are the single-function modules/packages of NPM a good thing? In practice it doesn’t seem that way.
It’s sort of analogous to trying to “gig-ify” all the jobs. Every individual function outsourced.
Wikipedia is probably the most prominent example of a flat namespace that works at scale. But those names are pretty long, particularly when they need to disambiguate. Also, it's an encyclopedia where article editors are forced to collaborate, and it's leveraging large vocabularies that already exist. (One for each language.)
For programming languages, even with a flat module namespace, you get a land rush where good names get taken early by packages that might end up unpopular or abandoned.
Leveraging DNS seems like the answer. Java did it badly, but Go's approach seems fine, perhaps because it leverages GitHub's namespace too (for most modules).
> Leveraging DNS seems like the answer. Java did it badly, but Go's approach seems fine, perhaps because it leverages GitHub's namespace too (for most modules).
On the contrary: Go's approach will lead to lots of problems, while Java's is actually simpler and safer. The fact that you depend on the current status of DNS every time you build your code, for each and every one of your dependencies, including transitive ones, is completely nuts. Even Google realized this, and they solved it Google-style: they added another automated system on top to try to add some stability (the Go module proxy). And then they had to add holes to that system, because it turns out not all dependencies are public, so they can't solve this from on high (GOPRIVATE).
And still, if one of your dependencies decides to switch hosting provider for their source code, or loses their domain name, you have to make (small) changes in every code file that referenced that dependency.
Maven's solution is much simpler for everyone involved: DNS is only involved in registering a new module, it only serves as a form of authentication. After the initial registration, the module name is allocated to your Maven Central account, and it won't be revoked if you later lose that domain. If someone gets access to your domain, they don't also automatically become able to push malware to people who used your module for years, neither retroactively (which Go also handles) nor when they next upgrade (which Go will happily allow).
I've been trying to make this point several times now; people always doubt that this problem has been solved, and by Maven of all places. The NIH syndrome is for some reason rampant among package managers/registries.
I'm not sure Go works that way. Are you confusing 'go get' (downloading code) with compiling code?
Maven and Gradle are part of the reason I don't use Java anymore. Java seems to have gone through multiple unfortunate build systems without settling on a good one.
I was thinking more of a CI pipeline build, which would probably do `go mod download`. You're right of course that `go build` itself doesn't download code.
What is it about Maven that you think makes it not a good build system? As for me, I moved from Java to Go and I still miss Maven to this day.
I'd argue that "mental anguish" is probably less of a problem in the latter design, since the additional parenthetical material is used purely for disambiguation (and so is optional), not for categorization. So, in the latter design we can have:
/Francis Bacon
/Francis Bacon (artist)
without having to answer anguish-inducing questions that would be raised by
/Francis Bacon
/artist/Francis Bacon
e.g. questions such as:
1. "Maybe we should put Lord Bacon under a category also, maybe `philosopher/Francis Bacon`"
2. "But he's also a statesman ... what about `statesman/Francis Bacon`? Is he more of a philosopher or a statesman"?
3. What about ordinary people (who happen to be involved in historical events) that have wikipedia entries, like George Floyd? Should he be assigned something like `person/George Floyd`? If so, should the two Francis Bacons be assigned `person/philosopher/Francis Bacon` and `person/artist/Francis Bacon` instead?
In programming, it’s useful to have namespaces for overall organization and conflict resolution. But it’s a trade-off: you are nudged into a hierarchical ontology, with all the implied issues.
Wikipedia titles are free text (more or less). So they can afford not to introduce hierarchical naming and still have nice, easily addressable names without conflicts.
This avoids all sorts of problems. Most things simply can’t be categorized in a strict, hierarchical manner.
Practically speaking, when you want to link to the latter in Markdown that already utilises parens, you encounter more friction: cutting and pasting a URL with parens from the address bar needs manual correction if used to create a link in (say) Reddit's markdown syntax.
I realise this is somewhat tangential to your point, but shows how easy it is for innocent looking choices to end up creating annoyances.
I still have to listen to Joe's talk on this, but from the OP this is more accurately "Why do we need Erlang modules at all?" or, more generously, "Modules in FP languages". A key motivating example (fib/3) is an FP pattern.
More directly in terms of Joe's brainstorming:
- I don't see how the versioning problem is simplified by a flat space of functions. Before, you had Nm modules to track; now you have Nf functions to track, with Nf >> Nm. Aggregating functions into libraries/modules for versioning actually helps with the versioning effort, not hinders it. More generally, the versioning of multi-component systems is a complex affair that can only be addressed by constraints; general engineering disciplines have standards + catalogs as the means of addressing this issue.
- Broadly, I disagree with conflating modules and libraries. They are conceptually distinct. Modules could have state (and metadata state), conceptually. Modules could also have active elements internally. Modules can have life-cycles. To sum up: modules conceptually are not just collections of (related) functions.
So the general question is 'can we live with just libraries of functions?'
I think the PLT excitement here is not 'a k/v bag of richly annotated functions' -- the guaranteed end result of that approach is n variants of elaborate 'structure' encoded into the metadata Joe is talking about -- but rather pushing modules to the extreme, to make the distinction from libraries crystal clear.
You could go the route of having a zoo of many small libraries and just version each library, so mod_x becomes lib_x. But that, dependency management, is not a convincing argument for something 'new' called a "module". You can do it with libraries as well.
The question (then) remains: are modules really just libraries? Was it always just about coexistence of related functions?
> contribution to open source can be as simple as contributing a single function
Are there meaningful contributions to open source that are the size of a single function? Isn't this what leads to the terrible failure modes of NPM (hello left-pad)? If you want to contribute a single function, what's wrong with a blog post?
Also, has the browser not solved this with ESM imports from URLs? I can write a web page containing
    import {someFunction} from "https://your-website.com/your-script.js";
This also solves the namespacing issues - at least to the extent that the domain name system solves namespacing issues.
(Deno, of course, inherits this ability to import code from any URL.)
I'm being provocative. I've heard people make similar arguments over time, but never understood why anyone would want open source contributions to be a single function.
The API page lists numerous functions, but the bottom of the page says "In most cases, the only thing you need to import from Immer is produce".
My thought is, immer does something novel, but the vast majority of functionality can be covered in one function.
I think another problem with the "left-pad" situation is the lacking "JavaScript standard library", but that situation is improving over time, especially now that IE11 is deprecated so it's a reasonable expectation to develop against Chrome / Firefox / Safari which are actively maintained and continue to implement new JavaScript features.
(Three functions rather than one, but it’s one main entrypoint and a couple of minor variants.)
I think this is a really well-designed API, and greatly preferable to a more fine-grained OO approach. It does a lot of work under the hood but keeps it carefully contained so you don’t get a bunch of dependency sprawl.
The API may be a single function, but that package contains a non-trivial amount of code. The contribution this package makes to your codebase is not "a single function". However, at a glance, I agree with your reasons for liking it! And I'd be unlikely to vendor it directly into an app or dependent library for example.
EDIT: it feels to me like the CRC32 implementation contained in that library is a prime example of what the original article had in mind. There should be one global CRC32 function which every person could pull and reuse, instead of bundling it with their own package!
Code is much harder to maintain than it is to create, overall. Adding lines of code to a project is a net negative in the long run if they are not actively maintained by the original author. So as a project maintainer, you don't really want lots of small contributions from random people who don't stick around, at least not when they are adding new behavior to your project.
And having thousands of one-line dependencies is actually much worse than having one thousand-lines dependency, since you now depend on the whims of each of those creators not to stop distributing their work, not to change their license, not to inject malicious code, to address zero-days etc.. Ultimately external dependencies are a form of collaboration, and it's very hard work to efficiently collaborate with thousands of people.
Counter argument: 1000 one-liners may each need less maintenance. And bugs are more self contained.
The example in the article is the Fibonacci algorithm and general helper methods. A “remove whitespace” method could easily be set and forgotten (especially with testing). A 1000-line library probably needs maintenance because there are more moving parts, and if there’s a bug, you’ll have to test/update a lot more things.
I think the solution to the problems raised is not small libraries but a large standard library with good composable tools. I don’t know the specifics of Erlang, but in many programming languages things like removing whitespace from a string are easy to do inline with the STDLIB instead of downloading a helper method, either in a bigger context or alone.
So you don’t like single-line node modules. I don’t think I’ve ever used one. Maybe as a transitive dependency, though.
But a function could be written in multiple ways, and I’m not sure we actually mean “single line” here. You’re referring to something of arbitrarily small complexity.
But, getting back to the original comment, what’s to say that one exported function isn’t supported by 100 more non-exported functions? Or, worst case, a 1000-line function that could be decomposed into smaller blocks.
Now, I’m sure I’ve leaned on these sorts of dependencies. Especially ones that have a peer dependency on a broken release, where they monkey-patch the problem. Why would I invest in maintaining short lived code like that myself? My version manager would certainly tell me when things break.
> I don’t think I’ve ever used one. Maybe as a transitive dependency, though.
The left-pad debacle proved that much of the ecosystem actually depends on such packages, at least indirectly.
> But, getting back to the original comment, what’s to say that one exported function isn’t supported by 100 more non-exported functions? Or, worst case, a 1000-line function that could be decomposed into smaller blocks.
I'm not sure what the point is here. Of course a huge module might only expose one public function. That's perfectly OK.
The context we were discussing was the question of whether an open-source project should be happy to accept a PR that only adds one (small) function to the project.
And the same question then extends to publishing that single small function as a separate module that others should add dependencies on, since that is essentially what TFA is arguing for.
But either way, the discussion was about modules that expose a small amount of (simple) functionality. And you're right, I'm saying "one line" as a proxy for that.
> Why would I invest in maintaining short lived code like that myself? My version manager would certainly tell me when things break.
I don't understand this part at all.
Perhaps what wasn't clear was what "ideal" I am proposing instead. My ideal is having a dependency tree that is both small (few direct dependencies) and flat (my direct dependencies should also have a small and flat dependency tree of their own). That necessarily gravitates towards fat modules that include lots of functionality under the maintenance of a single team. Of course, this also presupposes a package manager and build system that can efficiently ignore parts of those modules that are not being used in a particular program.
I'm very familiar with that specific discussion. I find Sindre's arguments very convincing in regard to small modules. But he then assumes that small modules ~= small NPM packages. That's where his argument fails, IMO.
Let's assume the alternative to "small NPM packages" is not "large NPM packages", but "small modules, vendored into my own repository". In this comparison, the benefits of small NPM packages become:
- Transitive dependencies are automatically managed for you
- You can pull updates from the repository to get new features and bug fixes, and when you do this, as above, changes in transitive dependencies are automatically managed for you
- The package gets a vanity download count on npmjs.com
There are meaningful tradeoffs here, but they're different to trying to rehash the benefits of modular code. Of course modular code is good. But packaging and distributing code is a whole different story.
EDIT: I'm having a bit of a reaction against the excesses of the NPM ecosystem at the moment. As, I think, are a lot of developers. I hope the pendulum swings from its current state more towards "fewer, better dependencies". Not towards "dependencies the size of a single function".
| - do away with modules
| - all functions have unique distinct names
| - all functions have (lots of) meta data
| - all functions go into a global (searchable) Key-value database
So, like MP3 tags for functions, and then you get to import by theme/genre/whatever. The system can automatically generate any dependency information not fully made explicit in the given metadata.
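A hypothetical sketch of such a store (all names here are made up, purely illustrative):

    -- Every function gets a globally unique name plus searchable tags,
    -- like MP3 metadata for code.
    data FnEntry = FnEntry
      { fnName :: String     -- globally unique name
      , fnTags :: [String]   -- "genre" tags
      , fnBody :: String     -- the source itself
      }

    findByTag :: String -> [FnEntry] -> [FnEntry]
    findByTag t = filter (elem t . fnTags)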
> Now I find this very convenient when I write a new small utility function I stick it in elib1_misc.erl - no mental anguish in choosing a module name is involved.
No, just the anguish of navigating 1000 non-namespaced functions for the one you need. We might not need modules, but it's good to have namespaces - and just having metadata that are not part of the name doesn't cover that aspect.
They would be namespaced by "metadata" (tags). You still have to pick tags and/or create new ones, so it's not necessarily easier than picking or creating modules for your functions. But it might be useful for consumers -- or not, because it might only increase their cognitive load.
Has NPM been the closest attempt at something similar to what's described here? Lots of single-function modules like is-even, is-odd, and left-pad.
Although given potential problems with function deletion, maybe there's a case to be made for careful curation of these functions in a single place, we could call it the standard library.
They have a single function but don’t have to, so package managers can’t assume they are single-function. If they could, tree shaking might not be needed?
> (( managing namespaces seems really tricky, a lot of people seem
> to think that the problem goes away by adding "." 's to the name
Well, it becomes someone else's problem... which is about as far as a technical solution to this problem can go: the namespace problem is actually a social problem.
> but how do we discover
> the initial name www.a.b? - there are two answers - a) we are given the name
> (ie we click on a link) - b) we do not know the name but we search for it ))
The third way is that you look through the list of everything that "www.a" provides, on a hunch there should be something like "b" in it. Usually this list is short enough that you can do this, and also recognize that "ph" is a synonym so you can use it.
Sure, renaming-at-import would help with globally unique names being unwieldy to use, but we can already do this with module imports in pretty much any language, even in C (preprocessor macros or, when you run into identical symbol names at the ABI level, symbol renaming).
So I think modules are one of those "worse is better" things: when you imagine how different and wonderful the world could be, it's kinda obvious that modules are pretty mediocre. But they work, right here and now; simple things are possible, and complex things require hacks on top but are still mostly possible too.
Except now you can handle name collisions by renaming imports. If you have two modules that share a name, you can rename them in the import line (e.g. in Haskell or Go). Without modules though if you have two libraries which used the same prefix for their functions, now the compiler will complain about redefinitions and the linker will complain about symbol collisions.
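In Haskell, for example, a qualified import resolves the clash in one line:

    import qualified Data.List as L
    import qualified Data.Map  as M   -- both export an `insert`, no collision

    example :: M.Map Int String
    example = M.insert 1 "one" M.empty

    sorted :: [Int]
    sorted = L.insert 2 [1, 3]   -- [1,2,3]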
Modules are obviously the best solution for this purpose, but not the only one. Even in C, which has neither modules nor namespaces, you can avoid prefix collisions with a little bit of macro usage: make the prefix configurable, so that the consumer of the library only has to define a preprocessor constant before including it if they're unlucky enough to have stumbled upon two different libraries with the same prefix, with no action required otherwise. Even for the author of a library, this approach doesn't require any unreasonable level of macro usage. In practice this approach is rare, but I've seen a couple of libraries use it, so it's not completely theoretical.
That will only work for open source libraries. It won't work for libraries provided in binary form (which is common in commercial contexts I've worked in so far). I also tend to dislike the preprocessor and other forms of syntactic programming in favour of solutions working on the semantic level. The more I know about the preprocessor, the less simple and more puzzling it is to me, especially after implementing it.
This was also part of the motivation behind (plug) https://GitHub.com/srikumarks/inai. While the current incarnation focuses on REST packaged services as a unit, earlier versions did it at function level. Inai's service code is stored and updated through Redis but equivalent KV stores will do as well.
This is actually a good point. On the project I am working on we do exactly that: we give every function a unique name, which is the hash of its source code, and it works well.
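A minimal sketch of that naming scheme (assuming the cryptonite and bytestring packages; the names are illustrative):

    import Crypto.Hash (hashWith, SHA256 (..))
    import qualified Data.ByteString.Char8 as BS

    -- Name a function by the hash of its source text, Unison-style.
    nameBySource :: String -> String
    nameBySource src = show (hashWith SHA256 (BS.pack src))

    main :: IO ()
    main = putStrLn (nameBySource "f x = x + 1")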
I think this is the only time I've read the word "database" and thought "blockchain might be a superior solution".
One could imagine replacing proof of work or whatever the "confirmation" process is with some sort of testing or peer review consensus protocol that would append reviewed code to the chain.
A blockchain contains a full history of all modifications, which sounds a bit like version control if you look at it funny.
I get your point, and I agree with it for most cryptocurrencies.
In this case, the proof-of-work would be peer-review instead of computation, and the "blocks" would be as frequent as "minor releases". The need for computation would be pretty light.
As it is, I go on npmjs.com, look at the number of downloads, and hope for the best. Then ‘npm audit’ runs automatically, spitting out anywhere from a few dozen to several hundred warnings which I then ignore, still hoping for the best.
"A blockchain is a distributed ledger with growing lists of records (blocks) that are securely linked together via cryptographic hashes.[1][2][3][4] Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a Merkle tree, where data nodes are represented by leaves)"
This is exactly what I am working on. Code is addressed by the hash of its source and stored in IPFS, and commits can optionally be confirmed by writing a confirmation to a blockchain.