As far as I understand the idea behind scrum it's not that you don't plan, it's that you significantly shorten the planning-implementation-review cycle.
Perhaps that was the ideal when it was laid out, but the reality of the common implementation is that planning is dispensed with. It gives some management a great excuse to look no further than the next Jira ticket, if that.
The ideal implementation of a methodology is only relevant for the small number of managers who would do well with almost any methodology, because they will take the initiative to improve whatever they are doing. The best methodology for wide adoption is the one that works okay for the largest number of managers who struggle to take responsibility or initiative.
That is to say, the methodology that still requires management to take responsibility in its "lowest energy state" is the best one for most people, because they will migrate to the lowest energy state. If the "lowest energy state" allows management to do almost nothing, then they will. If the structure allows being clueless, a lot of managers will migrate to pointy-haired Dilbert-manager cluelessness.
With that said, I do agree with getting products to clients quickly, getting feedback quickly, and being "agile" in adapting to requirements; but having a good plan based on actual knowledge of the requirements is important. Strict adherence to any extreme methodology is probably going to fail in edge cases, so knowing when to apply which methodology is a characteristic of good management. You've got to know your domain, know your team, and use the right tool for the job.
I've got a bridge to sell. It's made from watered-down concrete and comes with blueprints written on site. It was very important to get the implementation started asap to shorten the review cycle.
First sentence of the Wikipedia article for Node.js:
> Node.js is a cross-platform, open-source JavaScript runtime environment that can run on Windows, Linux, Unix, macOS, and more.
First sentence of the Wikipedia article for Deno:
> Deno is a runtime for JavaScript, TypeScript, and WebAssembly that is based on the V8 JavaScript engine and the Rust programming language.
First line of hero text from Node.js's site:
> Node.js® is a free, open-source, cross-platform JavaScript runtime environment that lets developers create servers, web apps, command line tools and scripts.
First line of hero text from Deno's site:
> Deno is the open-source JavaScript runtime for the modern web.
I've also seen discussions about wrapping the Servo browser engine in a UI layer where the UI layer was referred to as a runtime, though I think that's a substantially less canonical use of the word than using it for the part of an implementation that takes requests from the interpreter and executes them in the surrounding environment.
It's a measure of inequality, not poverty as such, sure.
But practically it's obvious just by looking at the lives of "poor" people that, yeah, they are materially still struggling. I can't speak for Britain but I can speak for the USA: if you did both the "relative poverty" analysis and the "basket of goods" analysis, you'd find a lot of overlap. Splitting hairs over how exactly poverty is defined is just being dismissive of the actual people who are actually experiencing some form of material poverty, and shifting focus away from making things better.
We need more information. How does this work for internal combustion truck engines?
Is the regulation well intentioned but poorly designed? Is it anti-competitive gatekeeping drafted by lobbyists? Is the author misrepresenting something? All of the above? Hard to say.
I imagine that the variation is in the internal combustion engines the system is being paired with. In that scenario, it may be that the regulator is treating each combined unit as a new drivetrain and requiring certification of each combination as if it were a new engine.
It would be interesting to see a breakdown of what larger operators have in their fleets. It could be that a few certifications go a long way, since those operators are going to be at least somewhat inclined to avoid variation.
It's not about prefix notation, it's that the fully uniform syntax has legitimate ergonomic problems for editing, human reading, and static analysis. Sexprs are better for computers than for humans in a lot of ways.
Only when not using one of the many Lisp editors that have existed since the Lisp Machines (Symbolics, TI) and Interlisp-D (Xerox), and that survive today in Emacs with SLIME, Cursive, LispWorks, Allegro Common Lisp, Racket, and VS Code's Calva.
Not true at all IMO. Reading code is reading code regardless of whether you have a fancy IDE or not.
S-expressions are indisputably harder to learn to read. Most languages have some flexibility in how you can format your code before it becomes unreadable or confusing. C has some, Lua has some, Ruby has some, and Python has maybe fewer but only because you're more tightly constrained by the whitespace syntax. Sexpr family languages meanwhile rely heavily on very very specific indentation structure to just make the code intelligible, let alone actually readable. It's not uncommon to see things like ))))))))) at the end of a paragraph of code. Yes, you can learn to see past it, but it's there and it's an acquired skill that simply isn't necessary for other syntax styles.
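To make that concrete, here is a toy Scheme sketch (hypothetical function and names, not code from anyone in this thread); note how the closing delimiters for the define, lambda, let, and cond all pile up on the last lines:

    (define (classify-temps readings)
      ;; tag each Kelvin reading with a rough temperature bucket
      (map (lambda (r)
             (let ((c (- r 273)))
               (cond ((< c 0)  'freezing)
                     ((< c 25) 'mild)
                     (else     'hot))))
           readings))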
And moreover, the attitude in the Lisp community that you need an IDE kind of illustrates my point.
To write a Python script you can pop open literally any text editor and have a decent time just banging out your code. This can scale up to 100s or even 1000s of LoC.
You can do that with Lisp or Scheme too, but it's harder, and the stacks of parentheses can get painful even if you know what you're doing, at which point you really start to benefit from a paren matcher or something more powerful like Paredit.
You don't really need the full-powered IDE for Lisp any more than you need it for Python. In terms of runtime-based code analysis, Python or Ruby are about on par with Lisp, especially if you use a commercial IDE like JetBrains'. IDEs can and do keep a running copy of any of those interpreters in memory and dynamically pull up docstrings, look for call sites, rename methods, run a REPL, etc. Hot-reloading is almost as sketchy in Lisp as it is in Python; it's just more culturally acceptable to do it in Lisp.
The difference is that Python and Ruby syntax is not uniform and therefore is much easier to work with using static analysis tools. There's a middle ground between "dumb code editor" and "full-power IDE" where Python and Ruby can exist in an editor like Neovim and a user can be surprisingly productive without any intelligent completion, or using some clunky open-source LSP integration developed by some 22-year-old in his spare time. With Lisp you don't have as much middle ground of tooling, precisely because it's harder to write useful tooling for it without a running image. And this is even more painful with Scheme than with Lisp, because Scheme dialects are often not equipped to do anything like that.
All that is to say: s-exprs are hard for humans to deal with. They aren't meant for humans to read and write. They never were. And that's OK! I love Lisp and Scheme (especially Gauche). It's just wrong to assert that everyone is brain-damaged and that's why they don't use Lisp.
"One can even conjecture that LISP owes its survival specifically to the fact that its programs are lists, which everyone, including me, has regarded as a disadvantage"
Not the first time someone didn't realize what they had.
I view code in many contexts though: diffs in emails, code snippets on web pages, GitHub's web UI; there are countless ways in which I need to read a piece of code outside of my preferred editor. And it is nicer, in my opinion, to read languages that have visually distinct parts to them. I'm sure it's because I'm used to it, but it really makes it hard to switch to a language that looks so uniform and requires additional tools outside of my brain to take a look at it.
You are free to criticize or look down on the way other people work. That doesn't change the silliness of the assertion that infix operator brain damage is the reason we aren't all using Lisp right now. It's totally valid to argue that we are all missing out on the benefits of superpowered interpreter-compilers like SBCL and Chez. But prefix math operators are only a small part of the reason why.
> S-expressions are indisputably harder to learn to read.
Has this been studied? This is a very strong claim to make without any references.
What if you took two groups of software developers, one with 5-10 years of experience in a popular language of choice, let's say C, and another made up of people who write Lisp professionally (maybe Clojure? Common Lisp? Academics who work with Scheme/Racket?), and then had scientists who know how to evaluate cognitive effort measure the difference in reading difficulty?
I see you are debating Lisp's ergonomics, but that doesn't dismiss the paradigm. Erlang, Haskell, and Prolog have far better syntax readability, so I don't see this as really relevant in discussing the alternative to the von Neumann model.
There are other ergonomic issues beyond syntax that get in the way of adoption (Haskell in production has become something of a running gag). Moving the paradigm into mixed languages alongside procedural code seems to have helped its adoption a lot in recent years (Swift, Rust, Python, C++).
I am responding to the assertion that the reason we don't all use Lisp is because we all have brain damage. My claim is that there are broader ergonomic issues with the language family. You could argue that maybe the system architecture and execution model of the Lisp machines should be debated separately from its syntax, but I am responding to an argument about its syntax.