A career change, or a job change?


> Just do some reasoning about the side effects.

"Wat?"

Have you ever inherited a 1MLoC codebase that you yourself didn't write from scratch, or worked on a large project with at least one other person? How do you know what the side-effects are and where they happen when you aren't intimately familiar with the code you're interacting with? That's the whole point of making effects explicit and available for analysis in the type system.
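For example, in a language like Haskell (a minimal sketch; the function names here are made up for illustration):

    -- The type says this function can't touch the outside world:
    normalize :: [Double] -> [Double]
    normalize xs = map (/ total) xs
      where total = sum xs

    -- The IO in this type says it can; a reader of a 1MLoC codebase
    -- can see that without reading the body or anything it calls:
    loadAndNormalize :: FilePath -> IO [Double]
    loadAndNormalize path = do
      contents <- readFile path
      pure (normalize (map read (lines contents)))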


Are you saying you've inherited 1MLoC functional codebases, and due to their shining purity you haven't had to get familiar with them? Or haven't had to understand their "side effects", i.e. their outputs and effects on the world?

Side effects are what you care about in a program. The simplest 8-bit CPU has side effects with every single instruction. It's baked into the bedrock; I don't think you can paper it over. Trying to shoehorn the complexity into nothing but function parameters and return values doesn't make the complexity go away; it just necessitates all these other weird contortions that add it back twice over.


Ah yes, the good old days of spaghetti code, when everything was simpler before it got complicated for me. Let me write my soup of ifs in methods returning void / Unit, there's nothing better, and may these FP lesson-givers leave us alone.


Ok but just make sure nobody else has to inherit it, and it’ll be fine.


Yeah, some maintenance projects have been much easier than others. Implicit knowledge that you have to “just” hunt down is generally the cause of difficulty and delay. It has little to do with “purity” and more to do with not wasting my time.


> Lists and trees can be fully captured by sum and product types, but extending this representation style to DAGs and graphs doesn't work--you either get inefficiency (for DAGs) and then infinite regress (for cyclic graphs) attempting to continue the "syntactic" style of representation

You also need "inductive" recursive types to represent lists and trees, in addition to sums and products.

One way of representing the type of a list of T is:

mu X.1+T*X

(Hence sum and product types, but also inductive or "least fixed point" recursion.)
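In Haskell, for example, that least fixed point is just the ordinary recursive data declaration (a sketch):

    -- mu X. 1 + T*X: Nil is the "1", Cons is the "T*X"
    data List t = Nil | Cons t (List t)

    len :: List t -> Int
    len Nil        = 0
    len (Cons _ r) = 1 + len r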

But you can also use "coinductive" recursive types to represent "processes" (or "greatest fixed point" recursion) with almost the same notation:

nu X.1+T*X

This represents an FSM which at any point yields either termination (the "1" on the left side of the recursive sum) or a value and a continuation (the "T" in "T*X" is the value and the "X" is the continuation).
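In a lazy language like Haskell the same shape of declaration can be read coinductively, so a sketch of such a process might look like:

    -- nu X. 1 + T*X: Stop is the "1", Yield pairs a value with a continuation
    data Process t = Stop | Yield t (Process t)

    -- An infinite process, which is fine coinductively as long as you
    -- only ever observe a finite prefix of it:
    countFrom :: Int -> Process Int
    countFrom n = Yield n (countFrom (n + 1))

    takeP :: Int -> Process t -> [t]
    takeP 0 _           = []
    takeP _ Stop        = []
    takeP n (Yield x k) = x : takeP (n - 1) k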

This doesn't answer every question about graph representation, obviously, but it's a useful tool for attacking some graph problems you'd like to represent "syntactically", as you say (though you have to think in terms of "coinduction" instead of "induction", e.g. bisimilarity instead of equality).


Yeah but, how is an assembly index different than an LZW code?


I think they discussed that in the podcast.


I don't think they explained how it's different; I did listen to the podcast before posting my comment.

It's a pretty basic question that IMHO demands a clear and adequate explanation. If you read the 'abracadabra' example here: https://en.wikipedia.org/wiki/Assembly_theory

... you could be reading the abstract to an early paper explaining LZW codes.
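To make the comparison concrete, here's a minimal textbook-style LZW encoder sketch in Haskell (dictionary seeded with single bytes; this is my own illustration, not code from the paper):

    import qualified Data.Map as M

    -- Minimal LZW: emit a code for the longest known prefix, then
    -- extend the dictionary with that prefix plus the next character.
    lzw :: String -> [Int]
    lzw = go initDict 256 ""
      where
        initDict = M.fromList [([c], fromEnum c) | c <- ['\0' .. '\255']]
        go dict _    w []     = [dict M.! w | not (null w)]
        go dict next w (c:cs)
          | M.member (w ++ [c]) dict = go dict next (w ++ [c]) cs
          | otherwise = dict M.! w
                      : go (M.insert (w ++ [c]) next dict) (next + 1) [c] cs

    main :: IO ()
    main = print (lzw "abracadabra")

The dictionary grows by reusing previously seen substrings, which is exactly why the "abracadabra" assembly example reads so similarly.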


> Formal verification can work in areas where you have a long time to do development, or where your problem space is amenable to having a formal standard (like math problems). In most other cases it just runs completely opposite to the actual problems that software engineering is trying to solve.

Have you actually tried, or is this just the standard line?

I've heard this from people I've worked with, and when I followed up to ask what kinds of projects they've worked on with Agda, Idris, Coq, Lean, whatever, they had nothing.

I don't have anything either, other than that I've toyed around with some of these systems a little bit. But it seems to me like there's a lot of potential in dependently-typed programming, and we just don't have a lot of experience with it -- which is fine, but that's a very different situation than what seems to be the standard line, "all formal methods take a ton of time and are only viable in a few niche projects" (what are people imagining here? NASA space probes and stacks of binders? really it doesn't seem obvious to me, I'm not trolling).


Why would you post this here instead of the github page?


Because the founders came here and solicited feedback and he provided feedback directly to them.


Many (most?) humans eschew Microsoft products


I feel like those days are long past.


I don't think those days ever really existed.

Sure, lots of people (myself included) used to refuse to run windows, refuse to use any micro$hit software or anything bill g@te$ wrote and would loudly decry them.

But, the reality is, most people just ran whatever their employer gave them and for many many years that was almost definitely Microsoft stuff (especially on the desktop OS and core productivity suite).

Now I think the universe is just more pragmatic: we realize we can and should meet people where they are, and that arguing over tooling is just bikeshedding. At least I have.

I'm never going to stop asking people to please quit emailing spreadsheets and instead send links to spreadsheets we can all edit online together, but I no longer care if you send me gdocs or excel.

I'm no longer going to reply to your email with a .doc file with copypasta about how bill gates and microsoft are evil.

But I'm firmly into greybeard territory these days, so maybe I've just grown up; I do think (and hope) most of the internet has too.


Eh, no. They passed for a bit when Microsoft seemed to be playing nice and accepting the fact that the world had moved on from Windows.

Now they've regressed.


This piece focuses on complaints about ocaml, and the last half seems to just be syntax complaints (which I mostly ignore just because every language seems to have awkwardness in its syntax -- certainly Haskell, a language otherwise held up as an example, can be shown to accept some really weird text).

Good things about ocaml that I think should be mentioned are Hindley-Milner type inference (some might argue this, but it's a very effective system for inferring types), and also straightforward semantics (it's usually pretty easy to look at some code and make a reasonably accurate guess about what it will do on real hardware -- compare with Haskell).

I'm sympathetic toward the initial complaints here, which I also think are the most substantive. Type classes and qualified types, as in Haskell, would be very useful in ocaml/ML. The examples offered (show_of_int, show_of_double, ...) are valid, and the related complaint about dishonestly broad types like structural equality (where runtime errors are raised for cases the types don't actually support) makes an equally compelling case for qualified types in ML.
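For comparison, a small Haskell sketch of what qualified equality looks like (the Eq constraint travels with the type, so an unsupported comparison is a compile error rather than a runtime one):

    -- 'Eq a =>' qualifies the type: it only applies to types with equality
    allEqual :: Eq a => [a] -> Bool
    allEqual []       = True
    allEqual (x : xs) = all (== x) xs

    -- allEqual [id, id] is rejected at compile time (no Eq instance for
    -- functions), whereas OCaml's polymorphic (=) applied to closures
    -- raises an exception at runtime.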

I made this project ~10 years ago after making similar observations: https://github.com/morganstanley/hobbes

Things like structural equality and ordering are user functions here rather than magic internal definitions (e.g. equality is defined as a type class, and type class instances can deconstruct algebraic types at compile time to decide how to implement instances). But evaluation is eager by default, so it's pretty easy to reason about performance. And data structures can be persisted to files and/or shared transparently in memory with C/C++ programs without translation, which was also very convenient for the places where this was used. We built some very big and complicated time series databases with it, used in both pre- and post-trade settings where ~15% of daily US equity trades happen. So I think these observations are useful and have passed some pretty significant real-world tests.


> Programming is child's play compared to undergraduate mathematics taught in math departments.

One thing you might learn in math is to avoid making overgeneralized statements that you can’t support.

A valid substitution in your statement for “programming” is writing a compiler. And for “undergraduate mathematics taught in math departments”, basic differential calculus.

Yet we regularly teach smart high school students and first-year undergraduates calculus, and almost never try to teach them to write a compiler, contradicting your proposition.

But what do I know? I’m just a dumb programmer. I can’t read your mind, so maybe you had something a little more specific you wanted to say.


> "Yet we regularly teach smart high school students and first-year undergraduates calculus..."

High school students are taught plug-and-chug calculus where one uses rules and formulae without any real understanding of the underlying subtleties that make calculus work.


Bulletproof counter argument, you sure showed me.


I think this is the goal of proof assistants based on the Curry-Howard isomorphism, which the original author thought to denigrate for some reason.
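For anyone unfamiliar: under Curry-Howard, a type is a proposition and a program of that type is a proof. A trivial Haskell sketch:

    -- A proof that (A and B) implies A: the pair type is conjunction,
    -- and the function itself is the proof
    proj1 :: (a, b) -> a
    proj1 (x, _) = x

    -- A proof that A implies (B implies A)
    weaken :: a -> b -> a
    weaken x _ = x

Proof assistants like Coq, Agda, and Lean scale this up with dependent types, where propositions can mention values.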


> But I would advise taking time to consider whether it's actually, within the lens of yesterday (or 2 years ago)... good?

Other than to soothe egos, this almost never matters, because the person inheriting someone else’s code doesn’t live 2 years ago and can’t be blamed for not having the prior context (establishing that context is the job of the original author and is why documentation matters).

If you find yourself in the thankless position of having to reverse engineer intent and especially if you have an unsympathetic manager ready to lecture you about how you should magically read the mind of the original author, start looking for a new job and hack/slash away as much of the old code as possible to do the job that you’re asked to do.

