rjknight's comments | Hacker News

Surely the answer is "the other side of the Gobi Desert".


Why did the chicken cross the Gobi wall?


This isn't reddit.


No, to get to the other side.


I always think of it as the "Perez plateau"[1], but I will grant that this is less catchy.

[1]: https://www.researchgate.net/figure/Phases-of-the-S-Curve-Pe...


The "domain expert" is the business-person who is, it is suggested, more capable of reading and comprehending the Clojure code than the Haskell code.

Since there is an equivalence between types and propositions, the Clojure program also models a "type", in the sense that the (valid) inputs to the program are obviously constrained by what the program can (successfully) process. One ought, in principle, to be able to transform between the two, and generate (parts of) one from the other.

We do a limited form of this when we do type inference. There are also (more limited) cases where we can generate code from type signatures.

I think the OP's point is that the Clojure code, which lays the system out as a process with a series of decision points, is closer to the mental model of the domain expert than the Haskell code, which models it as a set of types. This seems plausible to me, although it's obviously subjective (not all domain experts are alike!).

The secondary point is that the Clojure system may be more malleable - if you want to add a new state, you just directly add some code to handle that state at the appropriate points in the process. The friction here is indeed lower. But this does give up some safety in cases where you have failed to grasp how the system works; a type system is more likely to complain if your change introduces an inconsistency.

The cost of that safety is that you have two representations of how the system works: the types and the logic, and you can't experiment with different logic in a REPL-like environment until you have fully satisfied the type-checker. Obviously a smarter system might allow the type-checker to be overridden in such cases (on a per-REPL-session basis, rather than by further editing the code) but I'm not aware of any systems that actually do this.


> The secondary point is that the Clojure system may be more malleable - if you want to add a new state, you just directly add some code to handle that state at the appropriate points in the process.

That's all certainly possible. But the same could be said of Python or JS. So if the big point here is "we can model business decisions as code!", I fail to see the innovation because we've been doing that for 50 years. Nothing unique to Clojure.

You could even do it in Haskell if you want: just store data as a Map of properties and values, emulating a JS object.


Yes, the point wasn’t “Clojure rules, Haskell drools”; it’s that at a high enough level of abstraction, encoding business rules with static types is brittle. It’s not some huge revelation; enterprises have done this for decades with SQL and *gasp* stored procedures.


I honestly doubt a business person would be able to read Clojure. I’ve been programming for 15 years and it doesn’t make any sense to me.


I think it depends a lot on the org. In enterprise software development, there's definitely a type of "business analyst" or "domain expert" who is capable of reading code, at least to the extent that the code resembles a flow chart. Clojure's small syntax means that it's fairly easy to write code that is obviously just a flow-chart in text form.


There is absolutely no chance I could show the enterprise spaghetti I work with to a domain expert and they would understand any of it.


I've been reading and writing English for half a century and Chinese doesn't make any sense to me, so I doubt any ordinary human could read it.


That’s quite the false equivalence.


How so?


Alphabet? Generality?


One of the neat properties of base58 (and, strictly speaking, of base62 as well) is that it does not contain any characters that require special encoding to be used in either a URL or a filename. Nor does it contain any characters that are considered to be "word-breaking" by most user interfaces, so you can do things like double-click on a base58 string to select the entire string. Base64 has none of the above properties, while being only very slightly more efficient.
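To make the alphabet property concrete, here is a minimal base58 encoder using the Bitcoin alphabet (a sketch for illustration, not any particular library's implementation):

```javascript
// Minimal base58 encoder (Bitcoin alphabet). Note what's absent: no '+',
// '/', or '=' (unlike base64), and no 0/O/I/l lookalikes, so the output
// is URL-safe, filename-safe, and double-click-selectable.
const ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

function base58Encode(bytes) {
  // Treat the byte array as one big-endian integer.
  let n = 0n;
  for (const b of bytes) n = n * 256n + BigInt(b);
  let out = "";
  while (n > 0n) {
    out = ALPHABET[Number(n % 58n)] + out;
    n /= 58n;
  }
  // Each leading zero byte is encoded as the zeroth character, '1'.
  for (const b of bytes) {
    if (b !== 0) break;
    out = "1" + out;
  }
  return out;
}
```

For example, `base58Encode(new Uint8Array([0, 0, 1]))` yields `"112"`, and nothing in the output ever needs percent-encoding.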


I wonder whether anyone has ever covered all of the tradeoffs in one place. There are quite a few of these encodings.

UUencoding worked even when passed through non-ASCII mechanisms/protocols that didn't do lowercase, or that were case-insensitive; but at the expense of using what in some contexts would be reserved metacharacters. Whereas XXencoding did not have a problem with metacharacters, only using plus and minus in addition to the alphanumerics, but at the expense of being case-sensitive.

viz encoding can avoid whatever metacharacters one chooses, with no changes to the decoder, the choice being entirely at the encoding end, and is similarly used in scenarios where one does not want to break at whitespace or general word-breaking punctuation; but has a lot of overhead for each such encoded character and requires at minimum an alphabet of three punctuation characters (caret, minus, and backslash), the octal digits, and the letter 'M'.


"I want an end to corruption, or a chance to participate in it!"


One thing I've noticed about working with LLMs is that it's forcing me to get _better_ at explaining my intent and fully understanding a problem before coding. Ironically, I'm getting less vibey because I'm using LLMs.

The intuition is simple: LLMs are a force multiplier for the coding part, which means that they will produce code faster than I will alone. But that means that they'll also produce _bad_ code faster than I will alone (where by "bad" I mean "code which doesn't really solve the problem, due to some fundamental misunderstanding").

Previously I would often figure a problem out by trying to code a solution, noticing that my approach doesn't work or has unacceptable edge-cases, and then changing track. I find it harder to do this with an LLM, because it's able to produce large volumes of code faster than I'm able to notice subtle problems, and by the time I notice them there's a sufficiently large amount of code that the LLM struggles to fix it.

Instead, now I have to do a lot more "hammock time" thinking. I have to be able to give the LLM an explanation of the system's requirements that is sufficiently detailed and robust that I can be confident that the resulting code will make sense. It's possible that some of my coding skills might atrophy - in a language like Rust with lots of syntactic features, I might start to forget the precise set of incantations necessary to do something. But, correspondingly, I have to get better at reasoning about the system at a slightly higher level of abstraction, otherwise I'm unable to supervise the LLM effectively.


Yes, writing has always been great practice for thinking clearly. It's a shame it isn't more common in the industry - I do believe that the lack of practice in it is one of the reasons why we have to deal with so much bullshit code.

The "hammock time thinking" is exactly what a lot of programmers should be doing in the first place⸺you absorb the cost of planning upfront instead of the larger costs of patching up later, but somehow the dominant culture has been to treat thoughtful coding with derision.

It's a real shame that AI beat human programmers at the game of thinking, and perhaps that's a good reason to automate us all out of our jobs.


One problem is that one person’s hammock time is another person’s overthinking time, which calls for the opposite advice. Of course it’s about finding the right balance, and that’s hard to pin down in words.

But I take your point and the trend definitely seems to be towards quicker action with feedback rather than thinking things through in the first place.

In that sense LLMs present this interesting middle ground: it’s a faster cycle than actually writing the code, but still more active and externalising than getting lost in your own thoughts (notwithstanding how productive that can still be).


All good software engineers learn this. Unless you’re actively working in a given language, you don’t need to worry about syntax (that’s what reference manuals are for). Instead, grow your capacity to solve problems and to define precise solutions. Most time is spent doing that: realizing you don’t have a precise idea of what you’re working on and doing research about it. Writing code is just translating that.

But there are other concerns in code that you ought to pay attention to. Will it work in all cases? Will it run efficiently? Will it be easily understood by someone else? Will it easily adapt to a change of requirements?


Through LLMs, new developers are learning the beauty of writing software specs :')


It's weird, but LLMs really do gamify the experience of doing software engineering properly. With a much faster feedback loop, you can see immediate benefits from having better specs, writing more tests, and keeping modules small.


But it takes longer. Taking a proper course in software engineering or reading a good book about it is like going through a game tutorial, while people leaning on LLMs skip it. The former lets you reach the intended objectives faster, learning how to play properly. You may have some fun doing the latter, but you may also spend years and your only gain will be an ad-hoc strategy.


And they’re making it much easier to build comprehensive test suites. It no longer feels like grunt work.


Ha! I just ran into this when I had a vague notion of a statistical analysis that I wanted to do


Writing the code already didn't feel like the bottleneck for me, so...


I think it depends on whether you think there's low-hanging fruit in making the ML stack more efficient, or not.

LLMs are still somewhat experimental, with various parts of the stack being new-ish, and therefore relatively un-optimised compared to where they could be. Let's say we took 10% of the training compute budget, and spent it on an army of AI coders whose job is to make the training process 12% more efficient. Could they do it? Given the relatively immature state of the stack, it sounds plausible to me (but it would depend a lot on having the right infrastructure and practices to make this work, and those things are also immature).

The bull case would be the assumption that there's some order-of-magnitude speedup available, or possibly multiple such, but that finding it requires a lot of experimentation of the kind that tireless AI engineers might excel at. The bear case is that efficiency gains will be small, hard-earned, or specific to some rapidly-obsoleting architecture. Or, that efficiency gains will look good until the low-hanging fruit is gone, at which point they become weak again.


It may sound plausible, but the actual computations are very simple, dense and highly optimised already. The model itself has room for improvements, but this is not necessarily something that an engineer can do, it requires research.


> very simple, dense and highly optimised already

Simple and dense, sure. Highly optimized in a low level math and hardware sense but not in a higher level information theoretic sense when considering the model as a whole.

Consider that quantization and compression techniques can achieve on the order of 50% size reduction. That strongly suggests to me that current models aren't structured in a very efficient manner.
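For a sense of the arithmetic (a toy sketch, nothing like a real quantizer): storing float32 weights as int8 plus one shared scale factor quarters the size; starting from fp16, the same trick gives roughly the 50% figure mentioned above.

```javascript
// Toy symmetric int8 quantization of float32 weights (illustrative only).
const weights = new Float32Array([0.12, -0.5, 0.33, 0.9]);

// One scale factor maps the largest-magnitude weight onto 127.
const scale = Math.max(...weights.map(Math.abs)) / 127;
const quantized = new Int8Array(weights.map((w) => Math.round(w / scale)));

// Dequantize to approximate the originals.
const restored = Float32Array.from(quantized, (q) => q * scale);

// 16 bytes of weights become 4 bytes (plus the one shared scale),
// at some loss of precision for the smaller weights.
```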


Some of these seem good to me:

- "everything is an expression" is a nicer solution for conditional assignments and returns than massive ternary expressions

- the pipe operator feels familiar from Elixir and is somewhat similar to Clojure's threading macros.

- being able to use the spread operator in the middle of an array? Sure, I guess.

I want to like the pattern matching proposal, but the syntax looks slightly too minimal.

The other proposals are either neutral or bad, in my opinion. Custom infix operators? Unbraced object literals? I'm not sure that anyone has a problem that these things solve, other than trying to minimize the number of characters in their source code.

Still, I'm glad that this exists, because allowing people to play with these things and learn from them is a good way to figure out which proposals are worth pursuing. I just won't be using it myself.


I'll pass on the pipe operator, but it's not particularly objectionable.

Agreed, there are some good ideas. Pattern matching looks like a great idea with the wrong syntax - let's just get a match statement similar to the switch statement, if we can't reuse switch.

String dedent and chained comparisons look nice, though I think the latter would be a breaking change if done in JS. I'd also be fine with default const for loop variables.

"Export convenience" is going to confuse people. The syntax looks different than named exports and looks closer to the export form of default imports which is begging for trouble.


Using the pipe operator in Elixir is very nice, even more so for building up complex multi operations and such


Exactly. It also massively improves readability by reordering the code to match the order of operations. With long nested chains of function calls, the arguments for each call get spread out, so your eyes have to go back and forth to determine which function receives what.
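The reordering is easy to see even with a hypothetical `pipe` helper in plain JavaScript (JS has no pipe operator yet; this just mimics the reading order):

```javascript
// A tiny stand-in for Elixir's |> in plain JavaScript.
const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);

const trim = (s) => s.trim();
const shout = (s) => s.toUpperCase();
const exclaim = (s) => s + "!";

// Nested form: read inside-out, with arguments far from their functions.
const nested = exclaim(shout(trim("  hello  ")));

// Piped form: reads left to right, in execution order.
const piped = pipe("  hello  ", trim, shout, exclaim);
// Both produce "HELLO!".
```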


For “everything is an expression” https://github.com/tc39/proposal-do-expressions may be of interest, though discussion seems to have paused.


The odd thing is that Civet includes both do expressions and everything is an expression. I'd be happy with either, but both seems like a bad idea.


Do blocks in Civet are mainly to provide shielded scopes (with local declarations). They're currently also necessary to handle declarations (const/let) in the middle of expressions, but hopefully they won't be necessary eventually - only when the user wants to shield scopes. They also have useful forms like `async do` that let you build Promises using await-style code.


I would really like to know whether this feature gets any (non-accidental) use. It's certainly an important problem to solve, and I can see the technical merit in the solution proposed. What I'm left wondering is how this solution is most effectively communicated to the people who need to know about it, such that they're able to make use of it correctly in the critical moments when they need to use it. For obvious reasons there are probably no good statistics on this, but I wonder what the user research was like.


Valibot is really nice, particularly for avoiding bundle size bloat. Because Zod uses a "fluent builder" API, all of Zod's functionality is implemented in classes with many methods. Importing something like `z.string` also imports validators to check if the string is a UUID, email address, has a minimum or maximum length, matches a regex, and so on - even if none of those validators are used. Valibot makes these independent functions that are composed using the "pipe" function, which means that only the functions which are actually used need to be included in your JavaScript bundle. Since most apps use only a small percentage of the available validators, the bundle size reduction can be quite significant relative to Zod.
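To make the contrast concrete, here is a schematic sketch of the two API shapes (hypothetical code, not Zod's or Valibot's actual implementation):

```javascript
// Fluent-builder style (Zod-like): every validator is a method on one
// class, so importing the class drags in all of them, used or not.
class StringSchema {
  constructor(checks = []) { this.checks = checks; }
  min(n) { return new StringSchema([...this.checks, (s) => s.length >= n]); }
  max(n) { return new StringSchema([...this.checks, (s) => s.length <= n]); }
  uuid() {
    const re = /^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$/;
    return new StringSchema([...this.checks, (s) => re.test(s)]);
  }
  parse(s) { return this.checks.every((c) => c(s)); }
}

// Functional style (Valibot-like): each validator is a standalone
// function, so a bundler can drop the ones you never import.
const minLength = (n) => (s) => s.length >= n;
const maxLength = (n) => (s) => s.length <= n;
const pipeChecks = (...checks) => (s) => checks.every((c) => c(s));

// Both validate the same way; only the functional form tree-shakes well.
const fluent = new StringSchema().min(2).max(5);
const functional = pipeChecks(minLength(2), maxLength(5));
```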


Is there a reason why tree shaking algorithms don't prune unused class members? My IDE can tell me when a method is unused, it seems odd that the tree shaker can't.


Because you can write `this[anything()]()`, which is impossible to analyze statically. An IDE false negative won't do anything bad, but a tree-shaker false negative would introduce a bug, so tree shakers have to be conservative.
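A minimal example of the kind of dynamic access that defeats static analysis (the `anything` helper here is a hypothetical stand-in for user input, config, etc.):

```javascript
// Why tree shakers can't safely prune class methods: the call target
// below is computed at runtime.
class Greeter {
  hello() { return "hello"; }
  goodbye() { return "goodbye"; }
}

const anything = () => (Date.now() >= 0 ? "good" : "") + "bye";

const g = new Greeter();
// No static analysis of this line can prove which method is being
// called, so deleting 'goodbye' as "unused" could break the program.
const result = g[anything()]();
```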


That's not entirely true. Tree-shaking algorithms could have a “noDynamicAccess” option that errors on such use (only viable for applications, not libraries). Alternatively, the algorithm could be integrated with the TypeScript compiler API to allow dynamic access in some cases (e.g. where the `anything` function in your example only returns a “string literal” constant type or a union thereof, instead of the `string` type).


Does that work for all class members? I think I've only ever seen that on private members, though I don't know whether that's because it's so much easier to check whether a private member is used or because an unused public member isn't an issue for eg a library.

This feels like an issue that reduces to the Halting Problem, though. Halting is a function that could be made a member of a class, so if you could tell whether that method is used or not then you could tell whether the program will halt or not. I think it's one of those things that feels like it should be fairly easy, and it's really really not.


Comparing this to the halting problem isn't really meaningful here because even if you could make a full mapping (which yours isn't), you can prove that a rather large subset of programs halt, which is good enough for a tree shaker.

I don't need to be able to eliminate every single unused function in every situation, but if I can prove that certain functions are unused then I can delete just those functions. We're already doing this regularly with standalone functions, so my question is just why this isn't done with class members.


Ah, I see your question now. Prototypes maybe? I’m not nearly a good enough JS dev to have a reasonable guess at that specifically.

Being able to access class members using square bracket syntax with a variable also seems like it would make it really difficult to prove that something isn’t used. I’m thinking something unhinged like using parameters to build a string named after a class member and then accessing it that way.

Dunno, I would be curious if someone has a definitive answer as well.


It requires flow analysis, which is really hard to get right. I don't think there's a tree-shaking library that uses the TypeScript compiler API for static analysis purposes. Maybe because it would be slow?

edit: The creator of Terser is working on flow analysis for his new minifier, according to him[1].

[1]: https://github.com/terser/terser/issues/1410#issuecomment-17...


* Not the creator of Terser, but the lead maintainer

