
Developers need to learn how to think algorithmically. I still spend most of my time writing pseudocode and making diagrams (before with pen and paper, now with my iPad). It's the programmers' version of the Abraham Lincoln quote: "Give me six hours to chop down a tree and I will spend the first four sharpening the axe."


I don’t really know what “think algorithmically” means, but what I’d like to see as a lead engineer is for my seniors to think in terms of maintenance above all else. Nothing clever, nothing coupled, nothing DRY. It should be as dumb and durable as an AK-47.


>I don’t really know what “think algorithmically” means

I would say thinking about algorithms and data structures so that algorithmic complexity doesn't explode.

>Nothing clever

A lot of devs use nested loops and List.remove()/indexOf() instead of maps; the terrible performance gets accepted as the state of the art, and then you have to do complex workarounds to avoid calling some treatments too often, which increases the complexity.

Performance yields simplicity: a small increase in cleverness in some code can allow for a large reduction in complexity in all the code that uses it.
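As a minimal sketch of that trade-off (the User/Order shapes and function names here are hypothetical): replacing a nested scan with a Map turns a quadratic join into a linear one, and the calling code doesn't have to care.

    // Hypothetical example: labelling orders with user names.
    interface User { id: number; name: string }
    interface Order { userId: number; total: number }

    // Quadratic: scans the whole users array once per order.
    function labelOrdersSlow(orders: Order[], users: User[]): string[] {
      return orders.map(o => {
        const match = users.find(u => u.id === o.userId); // O(n) per order
        return `${match?.name ?? "unknown"}: ${o.total}`;
      });
    }

    // Linear: one pass to build a Map, then O(1) lookups.
    function labelOrdersFast(orders: Order[], users: User[]): string[] {
      const byId = new Map<number, User>();
      for (const u of users) byId.set(u.id, u);
      return orders.map(o => `${byId.get(o.userId)?.name ?? "unknown"}: ${o.total}`);
    }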

Whenever I write a library, I make it as fast as I can, so that user code can use it as carelessly as possible, and to avoid another library popping up when someone wants better performance.


We need this to be more prevalent. But the sad fact is most architects try to justify their positions and high salaries by creating "robust" software. You know what I mean: factories upon factories, microservices and whatnot. If we kept it simple, I don't think we would need many architects. We would just need experienced devs who know the codebase well and help with PRs and design processes; no need to call such a person an 'architect', as there's not much to architect in such a role.


I was shown what it means to write robust software by a guy with a PhD in... philosophy, of all things (so a literal philosophiae doctor).

Ironically enough, it was nothing like what some architecture astronauts write: just a set of simple-to-follow rules, like organizing files by domain, using immutable data structures, and using pure functions where reasonable.

Also, I never saw him use dependent types in the one project we worked on together, and generics appeared only when they really made sense.

Apparently it boils down to using the right tools, not everything you've got at once.


I love how so much of distributed systems/robust software wisdom is basically: stop OOP. Go back to lambda.

OOP was a great concept initially. Somehow it got equated with the corporate-driven insanity of attaching functions to data structures in arbitrary ways, and all the folly that follows. Because "objects" are easy to imagine and pure functions aren't? I don't know, but I'd like to understand why corporations keep peddling programming paradigms that fundamentally detract from what computer science knows about managing complex distributed systems.


> Nothing clever, nothing coupled

Yes, simple is good. Simple is not always easy though. A good goal to strive for nevertheless.

> nothing DRY

That's interesting. Would you prefer all the code to be repeated in multiple places?


Depends. I haven’t come up with the rubric yet, but it’s something like “don't abstract out functionality across data types”. I see this all the time: “I did this one thing here with data type A, and I’m doing something similar with data type B; let’s just create some abstraction for both of them!” Invariably it ends up collapsing, and if the whole program is constructed this way, it becomes monstrous to untangle, exponentially complicated in the number of abstractions. I think it’s just a breathtaking misunderstanding of what DRY means. It’s not literally “don’t repeat yourself”. It’s “encapsulate behaviors that you need to synchronize.”

Also, limit your abstractions’ external knowledge to zero.
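A minimal sketch of that failure mode (the Invoice/Report types and validator names are hypothetical): two data types coincidentally share a shape today, one abstraction gets forced over both, and a flag creeps in the moment they diverge.

    // Premature abstraction: Invoice and Report happen to share a shape.
    interface Invoice { title: string; lines: string[] }
    interface Report  { title: string; lines: string[] }

    // The coupling point: a flag appears as soon as the rules diverge.
    function validateDoc(doc: Invoice | Report, isInvoice: boolean): boolean {
      if (doc.title.length === 0) return false;
      if (isInvoice && doc.lines.length === 0) return false; // invoice-only rule
      return true;
    }

    // The "repeated" version stays dumb and durable: each rule lives
    // with its own data type and can change independently.
    function validateInvoice(inv: Invoice): boolean {
      return inv.title.length > 0 && inv.lines.length > 0;
    }
    function validateReport(rep: Report): boolean {
      return rep.title.length > 0;
    }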


Very good explanation!

> “I did this one thing here with data type A, and I’m doing something similar with data type B; let’s just create some abstraction for both of them!”

I'm guilty of this. I even fought hard against the people who wanted to keep the code duplicated for the different data types.

> “encapsulate behaviors that you need to synchronize.”

I like that!


The problem is that most developers don't actually understand DRY. They see a few lines repeated a few times in different functions and create a mess of abstraction just to remove the repeated code. Eventually more conditions are added to the abstracted functions to handle more cases, and the complexity increases, all to avoid having to look at a couple of lines of repeated code. This is not what DRY is about.


Yep, exactly. I went into further detail in another comment.


Not OP, but they probably mean “no fancy silver-bullet acronyms”.


In my mind this is breaking down the problem into a relevant data structure and algorithms that operate on that data structure.

If, for instance, you used a tree but were constantly looking up an index in it, you likely needed a flat array instead. The most basic example of this is sorting, obviously, but the same basic concepts apply to many, many problems.
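As a minimal sketch of the idea (using a linked structure to stand in for the tree; the names are hypothetical): if lookups by position dominate, walking nodes is O(n) per access while a flat array is O(1).

    // Hypothetical sketch: positional lookup in a linked structure.
    interface ListNode { value: number; next: ListNode | null }

    // O(n) per access: walk the chain to reach index i.
    function nth(head: ListNode | null, i: number): number | undefined {
      let cur = head;
      for (let k = 0; k < i && cur !== null; k++) cur = cur.next;
      return cur?.value;
    }

    // O(1) per access: if positional lookups dominate, store it flat.
    const flat: number[] = [3, 1, 4, 1, 5];
    const third = flat[2]; // direct indexing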

I think the issue that happens in modern times, especially in webdev, is we aren't actually solving problems. We are just gluing services together and marshalling data around, which fundamentally doesn't need to be algorithmic... Most "coders" are glorified secretaries who now just automate what would have been done by a secretary before.

Call service A (database/ S3 etc), remove irrelevant data, send to service B, give feedback.

It's just significantly harder to do this in a computer than for a human to do it. For instance, if I give you a list of names but some of them have letters swapped around, you could likely easily see that and correct it. To do that "algorithmically" is likely impossible, and hence ML and NLP became a thing. And data validation on user input.

So "algorithmically" in the modern sense is more: follow these steps exactly to produce this outcome, and generate user flows where that is the only option.

Humans do logic much, much better than computers, but I think the conclusion has become that the worst computer program is probably better at it than the average human. Just look at the many niche products catered to X wealth group: I could have a cheap bank account and do exactly what is required by that bank account, or I can pay a lot of money and have a private banker whom I can call, and they will interpret what I say into the actions that actually need to happen... I feel I am struggling to actually write what's in my mind, but hopefully that gives you an idea...

To answer your "nothing clever": well, clever is relative. If I have some code which is effectively an array and an algorithm to remove index 'X' from it, would it be "clever" code to you if that array were labeled "Carousel" and I used the exact same generic algorithms to insert or remove elements from the carousel?

Most developers these days expect to have a class of some sort with .append and .remove methods, but why isn't it just an array of structs operated on by the exact same functions as every single other array? People will generally complain that such code is "clever", but in reality it is really dumb: I can see it's clearly an array being operated on. But OOP has caused brain rot, and developers don't actually know what that means... Wait, maybe that was OP's point... People no longer think algorithmically.
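A minimal sketch of the "dumb" version (the Slide fields are hypothetical): the carousel is just an array of plain structs, mutated with the same generic array operations used everywhere else.

    // Hypothetical sketch: a Carousel is just an array of plain structs.
    interface Slide { imageUrl: string; caption: string }
    type Carousel = Slide[];

    const carousel: Carousel = [
      { imageUrl: "a.png", caption: "first" },
      { imageUrl: "b.png", caption: "second" },
    ];

    // No class, no .append/.remove: the standard array algorithms
    // (splice) insert and remove elements like on any other array.
    carousel.splice(1, 0, { imageUrl: "c.png", caption: "inserted" });
    carousel.splice(0, 1); // remove index 0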

---

ML: machine learning; NLP: natural language processing


> I think the issue that happens in modern times, specially in webdev, is we aren't actually solving problems. We are just glueing services together and marshalling data around which fundamentally doesn't need to be algorithmic...

This is true and is the cause of much frustration everywhere. Employers want “good” devs, so they do complicated interviews testing advanced coding ability. And then the actual workload is equal parts gluing CRUD components together, cosmetic changes to keep stakeholders happy, and standing round the water cooler raging at all the organisational things you can’t change.


I still use pen and paper. Actually, as I progress in my career and knowledge, I use pen and paper more and digital counterparts less.

It might just be me not taking the time to learn Mathematica/Julia, though...


It's an odd analogy, because programs are complex systems and involve interaction between countless people. With large software projects you don't even know where you want to go or what's going to happen until you start working. A large project doesn't fit into some pre-planned algorithm in anyone's head; it's a living thing.

Diagrams and this kind of planning are mostly a waste of time, to be honest. You just need to start working, and rework if necessary. This article is basically the peak of the bell-curve meme. It's not 90% thinking; it's 10% thinking and 90% "just type".

Novelists for example know this very well. Beginners are always obsessed with intellectually planning out their book. The experienced writer will always tell you, stop yapping and start typing.


The first part of your comment doesn't fit with the rest. With complex projects, you often don't even know exactly what you're building, so it doesn't make sense to start coding right away. You first need to build a conceptual model, discuss it with the interested parties, and only then start building. Diagrams are very useful for solidifying your design and communicating it to others.


There's a weird tension between planning and iterating. You can never foresee anywhere close to enough with just planning. But if you just start without a plan, you can easily work yourself into a dead end. So you need enough planning to avoid the dead ends, while starting early enough to get the reality checks that give you enough information to reach an actual solution.

Relevant factors here are how cheaply you can detect failure (in terms of time, material, political capital, team morale) and how easily you can backtrack out of a bad design decision (in terms of political capital, how much other things need to be redone due to coupling, and other limitations).

The earlier you can detect bad decisions, and the easier you can revert them, the less planning you need. But sometimes those are difficult.

It also suggests that continuous validation and looking ahead to detect bad decisions early can be warranted. Something I myself need to get better at.


> Novelists for example know this very well. Beginners are always obsessed with intellectually planning out their book. The experienced writer will always tell you, stop yapping and start typing.

This is not true in general. Brandon Sanderson for example outlines extensively before writing: https://faq.brandonsanderson.com/knowledge-base/can-you-go-i...


> You just need to start to work, and rework if necessary

And making changes on paper is cheaper than in code.


I'm tempted to break out the notebook again, but... beyond something that's already merged, what situations make paper changes cheaper than code changes? I can type way faster than I can write.


Do you have any resources for this? Especially for the ADHD kind: I end up going down rabbit holes in the planning part. How do you deal with information overload and overwhelm, or the exploration/exploitation dilemma?


There are two bad habits in programming: people who start writing code the first second, and people who keep thinking and investigating for months without writing any code. My solution: force yourself to do the opposite. In your case: start writing code immediately, no matter how bad or good. Look at the YouTube channel "Tsoding Daily"; he just goes ahead. The code is not always the best, but he gets things done. He does research offline (you can tell), but if you find yourself doing just research, reading, and thinking, force yourself to actually start writing code.


Or his Twitch videos. That he starts writing immediately, and that we're able to watch the process, is great. Moreover, the tone is friendly and funny.


I wonder if good REPL habits could help the ADHD brain?

It still feels like you are coding so your brain is attached, but with rapid prototyping you are also designing, moving parts around to see where they would fit best.


Does it really take four hours to sharpen an axe? I've never done it.


Doing it right, with only manual tools, I believe so, remembering back to one of the elder firefighters who taught me (he was also an old-school forester).

Takes about 20 minutes to sharpen a chainsaw chain these days though...


10 to 20 minutes to sharpen a pretty dull kitchen knife with some decent whetstones.

Also, as someone famous once said: if I had 4 hours to sharpen an axe, I'd spend 2 hours preparing the whetstones.


If I had 2 hours to prepare whetstones I’d do 1 hour of billable work and then order some whetstones online.


If I had 1 hour of billable work, I'd charge per project and upfront, to allow me to claim unemployment for the following weeks.


The question in my head is: can LLMs think algorithmically?


LLMs can't think.


Source?


LLMs string together words using probability and randomness. This makes their output sound extremely confident and believable, but it may often be bullshit. This is not comparable to thought as seen in humans and other animals.


Unfortunately, that is exactly what humans are doing an alarming fraction of the time.


One of the differences is that humans are very good at not doing word associations if we think they don't exist, which makes us able to outperform LLMs even without a hundred billion dollars worth of hardware strapped into our skulls.


that's called epistemic humility, or knowing what you don't know, or at least keeping your mouth shut, and in my experience actually humans suck at it, in all those forms


Ask an LLM.


LLMs can think.


Source?


I use them a lot. They sure seem thinky.

The other day I had one write a website for me. Totally novel concept. No issues.


I have a similar experience. Just thought it'd be cute to ask you both for sources. Interesting that asking you for sources got me upvoted, while asking the other guy for sources got me downvoted :)


Interesting question.

LLMs can be cajoled into producing algorithms.

In fact, this is the chain-of-thought optimisation.

LLMs give better results when asked for a series of steps to produce a result than when just asked for the result.

Whether LLMs “think” is an open question, and it requires a definition of thinking :-)


Like a bad coder with a great memory, yes


The problem is the word “producing” in the parent comment; it should be “reproducing”.



