What you linked to is WASI-libc, a C standard library implementation on top of WASI, which itself is lower-level and less POSIX-like. https://github.com/WebAssembly/WASI
Fortunately the idea of WASI is for it to be modular. It makes sense for them to have the APIs required to make porting existing POSIX-ish-compatible apps easy, but there are also WASI modules (I forget the proper name) in the works for much more general IO objects etc. that better abstract away the underlying OS if you're willing to invest in that.
So I guess you could summarise it as: backward compatible-ish first, and then figuring out what a neater, more idiomatic world would look like second.
It's a pragmatic approach to bootstrapping the ecosystem, and there's no need for the POSIX-alike parts to hold anyone back, at least long term. Pretty soon you will be able to write apps targeting WASI that, in practice, end up reading and writing files without the app even knowing what a "file" is (the runtime can just hand it an abstract IO object). That's a step beyond just not knowing about things like symlinks, and seems to me like a more "web-flavoured" future that can work even for CLI/desktop apps.
This point was recognized quite early. There is an influential article by Jack W. Reeves from 1992 [1] that looks at the manufacturing analogy and argues that "the code is the design", whereas the "manufacturing" is just the compilation and packaging (that is, super cheap and automated). I believe it's this line of thinking that inspired Agile practices, where software developers are seen as engineers rather than code monkeys.
To be clear, the difference is that a URI generally only allows you to refer to a resource ("Identifier"), whereas a URL also tells you where to find and access it ("Locator").
For instance, `https://example.com/foo` tells you that the resource can be accessed via the HTTPS protocol, at the server with the hostname example.com (on port 443), by asking it for the path `/foo`. It is hence a URL. On the other hand, `isbn:123456789012` precisely identifies a specific book, but gives you no information about how to locate it. Thus, it is just a URI, not a URL. (Every URL is also a URI, though.)
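If it helps, here's a small Scala sketch that pokes at the two examples above with `java.net.URI`:

```scala
import java.net.URI

object UriVsUrl extends App {
  // A URL: scheme, host and path tell you how to fetch the resource.
  val url = new URI("https://example.com/foo")
  println(url.getScheme) // https
  println(url.getHost)   // example.com
  println(url.getPath)   // /foo

  // A URI that is not a URL: it identifies a book, but carries no
  // information about where or how to retrieve it.
  val urn = new URI("isbn:123456789012")
  println(urn.getScheme) // isbn
  println(urn.getHost)   // null
  println(urn.isOpaque)  // true
}
```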
I started this to share various learnings from my dev work, as well as thoughts on software engineering and architecture. The current two posts are on Redux, but I plan to publish posts on other topics (Kafka Streams, architecture, etc.) soon.
I recommend "Capital in the 21st Century" by Thomas Piketty, an economist who has deeply studied wealth and income inequality using historical data spanning three centuries. This book has all the evidence you need.
Spoiler alert: income from wealth is on its way to becoming nearly as concentrated as it was in the 19th century (especially in the US), and the share of income from work in total national income is decreasing almost everywhere. So yes, increasingly you can only accrue significant wealth by already having significant wealth.
I really enjoyed the comparison of dependency injection with dynamic scoping, and the explanation of how the latter can take over the use cases of the former with less boilerplate.
But one benefit of dependency injection that goes unacknowledged in this article is the explicitness of dependencies: the need to pass them in forces the caller to be aware of which dependencies exist, and changes in dependencies cannot be ignored (at least in statically typed languages, where they lead to compilation errors).
Managing dependencies with dynamic variables, on the other hand, is implicit. It's impossible to know which parts of the dynamic environment are used by a module without inspecting its source code. And changes to the module's dependencies are not noticed by callers, which may lead to cases where tests fail to stub out particular side effects without anyone noticing.
Given this drawback, dependency injection still seems like the better trade-off to me, despite the extra boilerplate it requires. Perhaps it is possible to bring some of the explicitness to the dynamic scoping approach, though.
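To illustrate the contrast, here's a rough Scala sketch (`Mailer` and `Signup` are made-up names, and `scala.util.DynamicVariable` stands in for the dynamic scoping approach):

```scala
import scala.util.DynamicVariable

// A hypothetical dependency, just for illustration.
trait Mailer { def send(to: String, body: String): Unit }

// Dependency injection: the dependency is part of the signature, so callers
// (and tests) cannot construct Signup without knowing a Mailer is involved.
class Signup(mailer: Mailer) {
  def register(email: String): Unit = mailer.send(email, "Welcome!")
}

// Dynamic scoping: the dependency lives in the ambient environment; nothing
// in register's signature tells the caller that a Mailer is used.
object Env { val mailer = new DynamicVariable[Mailer](null) }

object SignupDynamic {
  def register(email: String): Unit = Env.mailer.value.send(email, "Welcome!")
}

object DiVsDynamic extends App {
  val console: Mailer = (to, body) => println(s"mail to $to: $body")

  new Signup(console).register("a@example.com")   // dependency is explicit at the call site
  Env.mailer.withValue(console) {                 // dependency is set up out of band
    SignupDynamic.register("b@example.com")
  }
}
```

Forget the `withValue` block and `SignupDynamic.register` blows up at runtime (null `Mailer`) rather than at compile time, which is exactly the explicitness trade-off.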
Now I'm flashing back to a system that passed important configuration via globals: before calling functions that relied on these globals, you had to carefully make sure that the global environment was in the right state.
One common pattern in this system was that you'd first save the current environment, run the special "environment preparation" function, then the real function, and then write back the saved environment over the modified one, so that you didn't leave the environment changed after you returned. Unless of course you meant to change it. This was sometimes documented: you'd note in a docstring which globals a function expected and which ones it modified.
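Roughly the dance each call site had to do, sketched in Scala with a single made-up global:

```scala
object Globals {
  var reportingCurrency: String = "USD"   // hypothetical piece of global configuration
}

def callWithPreparedEnvironment[A](realFunction: => A): A = {
  val saved = Globals.reportingCurrency      // 1. save the current environment
  Globals.reportingCurrency = "EUR"          // 2. the "environment preparation" step
  try realFunction                           // 3. call the real function
  finally Globals.reportingCurrency = saved  // 4. write the saved environment back
}
```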
This was probably only in the top ten of the problems that this thing had, but I do remember it vividly. Making any change was like pulling out a Jenga block and replacing it without toppling the tower.
> One common pattern in this system was that you'd first save the current environment, run the special "environment preparation" function, then the real function, and then write back the saved environment over the modified one, so that you didn't leave the environment changed after you returned.
Oh man. It's not as bad now with shaders, but I remember how horrible learning fixed pipeline OpenGL was. You'd try to write some simple code, the result would be a black screen, and you'd just keep adding glEnable/glDisable calls to your code over and over until you figured out which invisible piece of global state was ruining your day.
> But one benefit of dependency injection that goes unacknowledged in this article is the explicitness of dependencies: the need to pass them in forces the caller to be aware of which dependencies exist, and changes in dependencies cannot be ignored (at least in statically typed languages, where they lead to compilation errors).
This no longer appears to be true, as most projects nowadays do dependency injection via frameworks like Guice or Spring. Instead of the caller injecting all dependencies, the caller simply tells the framework that it wants an instance of FooBar, and relies on the framework to magically retrieve all dependencies and use them to construct the FooBar instance.
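Something like this Guice-flavoured sketch (in Scala; the class names are made up):

```scala
import com.google.inject.{AbstractModule, Guice, Inject}

class Database
class FooBar @Inject() (val db: Database)   // dependencies are declared, not passed by the caller

object Wiring extends AbstractModule {
  override def configure(): Unit = bind(classOf[Database]).toInstance(new Database)
}

object Main extends App {
  val injector = Guice.createInjector(Wiring)
  // The caller never mentions Database; the framework assembles the object graph.
  val fooBar = injector.getInstance(classOf[FooBar])
}
```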
In production, yes - but in most projects' unit and integration test code you still provide explicit constructor parameters.
Many frameworks also provide dependency validation that can be performed when the program first starts up - you can then extend your build process to execute this validation as part of a CI/CD pipeline - so while it's not strictly speaking compiler-enforced correctness, it's still better than getting a nasty surprise in production.
Aside: I'd love to see a T4-based static DI object factory for my .NET projects, which would be the best of both worlds: full static, compiler-enforced dependency correctness without needing to code it by hand. It should be possible to build using EnvDTE or Roslyn. Would anyone be interested in that?
The framework will fail fast at startup if it can't wire everything up, though - thankfully, Spring defaults to eager initialization of all beans. But yes, you're in deep water when you make beans lazy, or if you have no choice but to make most things lazy for performance reasons, like in PHP.
I think this is a very good point, but I also think good code organization can go a long way toward addressing it.
In my opinion, a good test suite should consist mostly of module-level tests [1]. You stub out interactions with other modules (if you both read from and write to another module, you should use a handwritten test double rather than a mock) but leave your own module mostly unchanged. Perhaps you replace some configuration (like changing the DB driver to run against an in-memory database), perhaps you replace the entire persistence layer, but that should be about it.
The points where your module interacts with other modules and external systems can be isolated to a single file or package. Sometimes this means just aliasing a function or class, sometimes it means writing a proper facade. This makes it easy to figure out what needs to change when testing.
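For example, a handwritten double for the persistence layer might be no more than this (Scala sketch, names made up):

```scala
// The module talks to persistence only through this small trait.
trait UserRepository {
  def save(id: String, name: String): Unit
  def find(id: String): Option[String]
}

// Handwritten test double: read/write consistent, so whatever the module
// writes during a test it can also read back (unlike per-call mock stubs).
class InMemoryUserRepository extends UserRepository {
  private var users = Map.empty[String, String]
  override def save(id: String, name: String): Unit = users += (id -> name)
  override def find(id: String): Option[String] = users.get(id)
}
```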
I also don't think that this is a problem that dependency injection frameworks are helpful in addressing. Looking at constructors is not much better than reading the method implementations -- you still have to look at every file in the codebase. Manual dependency injection with handwritten or generated (a la Dagger) factories does solve the problem completely, though.
[1] I think proper unit tests should be reserved for functions that are computational or algorithmic in nature, and complicated calculations/algorithms are rare in the domain I'm currently working in, though this would be different in other domains. You'll also always want some real system-level tests, but not too many, since they're darn slow.
> Given this drawback, dependency injection still seems like the better trade-off to me, despite the extra boilerplate it requires. Perhaps it is possible to bring some of the explicitness to the dynamic scoping approach, though.
There's a difference between a program that's industrial-strength versus a toy project.
Any program that's industrial-strength will require effort in strengthening, regardless of the pattern chosen.
(Honestly, I really wish dependency injection was a language feature.)
It seems like this could be a language feature. The key would be an explicit declaration of the identifiers that the function expects to be bound in the calling context. So when declaring the function you optionally specify the dynamically scoped arguments (separately from regular arguments) that callers must have in scope. Doing so allows you to use those dynamic variables within the body of the function.
Scala implicits. It's a curried arg list you can omit, but the compiler checks that you have implicit values of all the needed types in scope. Or you can pass whatever you want explicitly.
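A rough sketch of what that looks like (Scala 2 syntax; `Logger` is a made-up dependency):

```scala
trait Logger { def log(msg: String): Unit }

// The trailing implicit parameter list is the "dynamically scoped" part:
// callers must have a Logger in scope, or pass one explicitly.
def transfer(from: String, to: String, amount: BigDecimal)(implicit logger: Logger): Unit = {
  logger.log(s"transferring $amount from $from to $to")
  // ... actual work ...
}

object ImplicitsDemo extends App {
  implicit val consoleLogger: Logger = msg => println(msg)

  transfer("alice", "bob", BigDecimal(10))             // filled in by the compiler
  transfer("alice", "bob", BigDecimal(10))(msg => ())  // or passed explicitly, e.g. in a test
}
```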
I'm not sure how you'd easily override the typeclass instance you are going to get? (Haskell typically only allows a single instance per type.)
OCaml lets you swap out the equivalent of the typeclass instance. But it's a pain, because their system also means that you always have to specify which instance you want.
The crazy high numbers are due to a fairly Microsoft-specific definition of "API" in this context. What they count here is class members; so if they ported a class with 15 methods and 3 properties, they'd count that as "18 APIs" (or perhaps 19 - not sure if the class itself counts as an "API" as well).
That being said, I'm sure there was indeed an impressive amount of code that had to be ported.
I think the point was that often "an API" means a larger collection of functions, classes, properties, etc. Like the COM API, the DirectX API, the MFC API and so on.
Yes, that's what I meant. The use of the term "API" for a single public code element is something I have not come across in any other language community, hence the clarification for those who are not familiar.