PaulRobinson's comments

Outsource things that aren't valuable to you and your core mission. Do the things that are valuable to you and your core mission.

This applies at a business level (most software shops shouldn't have full-time bookkeepers on staff, for example), but applies even more in the AI age.

I use LLMs to help me code the boring stuff. I don't want to write CDK, I don't want to have to code the same boilerplate HTML and JS I've written dozens of times before - they can do that. But when I'm trying to implement something core to what I'm doing, I want to get more involved.

Same with writing. There's an old joke in the writing business that more people want to be published authors than want to go through the process of writing. People who say they want to write don't actually want to do the work of writing; they just want the cocktail parties and the stroked ego of seeing their name in a bookshop or library. LLMs are making that more possible, but at a rather odd cost.

When I write, I do so because I want to think. Even when I use an LLM to rubber duck ideas off, I'm using it as a way to improve my thinking - the raw text it outputs is not the thing I want to give to others, but it might make me frame things differently or help me with grammar checks or with light editing tasks. Never the core thinking.

Even when I dabble with fiction writing: I enjoy the process of plotting, character development, dialogue development, scene ordering, and so on. Why would I want to outsource that? Why would a reader be interested in that output rather than something I was trying to convey? Art lives in the gap between what an artist is trying to say and what an audience is trying to perceive - having an LLM involved breaks that.

So yeah, coding, technical writing, non-fiction, fiction, whatever: if you're using an LLM you're giving up and saying "I don't care about this", and that might be OK if you don't care about this, but do that consciously and own it and talk about it up-front.


> Outsource things that aren't valuable to you and your core mission.

When you outsource the generation and thinking, you're also outsourcing the self-review that comes along with evaluating your own output.

In the office, that review step gets outsourced to your coworkers.

Having a coworker who generates slides, design docs, or PRs with ChatGPT is terrible, because you realize their primary contribution is prompting Claude and then sending the output to other people to review. I could have done that myself. Reviewing their Claude or ChatGPT output so they can prompt Claude or ChatGPT to fix it is just a way to get me to do their work for them.


I learned to code on my school's BBC Micro. [0]

8-bit. 16KiB of RAM. BASIC as the programming language. 640x256 resolution in 8 colours.

I could make that thing sing in an hour. It was hard to get it to do much, but then the difficulty was the fun thing.

By the time we got to the early 2000s and I could buy something with more RAM, CPU and storage than I could ever reasonably max out for the problems I was interested in at the time, I lost something.

Working within constraints teaches you something, I think. Doing more with less makes you appreciate the "more" you eventually end up with. You develop intuitions and instincts and whole skillsets that others never had to develop. You get an advantage.

I don't think we should be going back to 8-bit days any time soon, but in the context of this post, I want novices to try and build software on an A18 chip. I want learners to be curious enough to build a small word game (Hangman will do at first, but the A18 will let them push way, way past that, into the limits of something that starts to feel hard all of a sudden), and to develop the intuition of writing code on a system that isn't quite big enough for their ideas. It'll make them thirsty for more, and better at using it when they get it.

[0] https://en.wikipedia.org/wiki/BBC_Micro


> Working within constraints teaches you something, I think.

It absolutely does. But every system has constraints; even when provided with massive resources, humans tend to try things that exceed those resources - the data corollary of Parkinson's law: https://en.wikipedia.org/wiki/Parkinson%27s_law


It was worse than you remember. You could have 640x256 in monochrome, or 320x256 with 4 colours, or 160x256 with 16 colours (which IIRC was actually 8 distinct colours plus 8 flashing versions of them).
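The trade-off falls straight out of the arithmetic: modes 0, 1 and 2 all shared the same fixed 20KiB framebuffer, at 1, 2 and 4 bits per pixel respectively, so resolution traded directly against colour depth:

    640 x 256 x 1 bit  = 163,840 bits = 20KiB  (2 colours)
    320 x 256 x 2 bits = 163,840 bits = 20KiB  (4 colours)
    160 x 256 x 4 bits = 163,840 bits = 20KiB  (16 "colours", flashing included)

On a 32KiB Model B, that was nearly two thirds of your RAM gone on the screen alone.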

The game Elite did something extremely evil and clever: it was actually able to switch between modes partway through each frame, so that it could display higher-resolution wireframe graphics in the upper part of the screen and lower-resolution more-colourful stuff for the radar/status display further down.
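If anyone's wondering about the mechanics: the Video ULA's control register is memory-mapped at &FE20, so an interrupt timed to fire partway down the frame can simply poke a new mode value into it. A rough embedded-style C sketch of the idea - the handler names and timer plumbing are made up for illustration, and the mode control values are the ones I remember from the Advanced User Guide; this is not Elite's actual code:

    #include <stdint.h>

    /* Video ULA control register, memory-mapped at &FE20 on the BBC Micro. */
    static volatile uint8_t *const VIDEO_ULA_CTRL = (volatile uint8_t *)0xFE20;

    #define ULA_MODE4 0x88  /* MODE 4: 320x256, 2 colours - crisp wireframes  */
    #define ULA_MODE5 0xC4  /* MODE 5: 160x256, 4 colours - radar/status area */

    /* Hypothetical handler names; the real machine hooks the IRQ vector
       and uses a 6522 VIA timer counted down from vsync. */
    void on_vsync(void) {
        *VIDEO_ULA_CTRL = ULA_MODE4;   /* hi-res for the top of the frame */
        /* ...arm a timer to expire N scanlines into the frame... */
    }

    void on_timer(void) {
        *VIDEO_ULA_CTRL = ULA_MODE5;   /* switch mid-frame for the bottom */
    }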


AlexandertheOk's documentary on Elite and the BBC Micro: https://www.youtube.com/watch?v=lC4YLMLar5I


Switching modes like that was common practice on the Amstrad CPC (which used the same 6845 video chip), but as time went on, people also learned how to change the base address of screen RAM part way through each frame. This gave you super-smooth hardware scrolling for the main game area while still retaining a static score display. Unfortunately it came too late in the machine's history to be used for more than a handful of games, but demo coders used it extensively (and still do).
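For the curious, the scrolling trick boils down to reprogramming the 6845's start-address pair (R12/R13), so the CRTC fetches the frame from a different place in RAM without a single byte being copied. A hedged, embedded-style C sketch - on the CPC the CRTC sits behind I/O ports &BCxx (register select) and &BDxx (data), and outp() here is a hypothetical port-write helper standing in for a Z80 OUT:

    #include <stdint.h>

    /* Hypothetical port-write helper; the CPC decodes I/O on the
       high byte of the port address. */
    void outp(uint16_t port, uint8_t value);

    /* Write one 6845 CRTC register: select at &BC00, data at &BD00. */
    static void crtc_write(uint8_t reg, uint8_t value) {
        outp(0xBC00, reg);
        outp(0xBD00, value);
    }

    /* Move the screen base address (R12 high / R13 low). The 6845
       normally latches this per frame, so the whole visible area
       scrolls for free; keeping a static score panel on top of that
       takes further mid-frame CRTC trickery ("rupture"), which varies
       by CRTC revision. */
    void set_screen_base(uint16_t base) {
        crtc_write(12, (base >> 8) & 0x3F);  /* R12: start address, high */
        crtc_write(13, base & 0xFF);         /* R13: start address, low  */
    }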


I hear you, having learned programming on a machine even more constrained than the BBC Micro. But learners today are more likely to say, "Siri, build me a Hangman app."


I'm waiting for somebody to come and tell us about the time they punched cards by hand, one hole at a time, and then threw coal in the furnace to have the cards interpreted by a steam-powered computer.


Is this close enough? It's from 1969; I wonder what became of them:

“Tomorrow's World: Nellie the School Computer 15 February 1969 - BBC”

https://www.youtube.com/watch?v=f1DtY42xEOI


Do you have a substantive argument against any points made by parent?


It should be clear I'm not arguing along the points made by the parent, nor against them.


That's a wild ride of passive aggressive academia in a field I know something about. A rare treat. Thanks for sharing!


Great example of allowing perfect to be the enemy of good.

If major advanced economies are able to move their entire grids away from coal, then the entire grid, globally, can move away from coal.

"Ah", the critics say, "but manufacturing is so much more complex!"

Really? These are not countries without manufacturing. They have data centres stacked with the latest generation of Nvidia chips, electric rail, major capital cities, populations of millions...

... and of course, China agrees and is trying to move towards decarbonisation of their grid.

Yes, it'll take time, but it'll take even longer if you never start.


Coal is so deeply irrational. Only by plugging your ears and screaming can you block out the massive local externalities that make it inefficient compared to other energy options. It is cheap to set up with minimal access to highly skilled professionals, so it was a good option for bootstrapping economies - until recently, when solar, wind and NG became easy to access and cost-competitive. It's perfectly reasonable to have a phase-out timeline to avoid under-utilizing paid-for infrastructure, but it is a dead technology.


Just this afternoon I was reading an account of one of the earliest known betting ledgers, the "Betting Book" at White's, a private members' club in London. In the 18th century, one of the most common bets taken up by members was which Lord or nobleman would outlive another. One bet had a note under it that the wager was never settled, because the subjects both died by suicide within a few months of each other.


I say this as somebody who regularly travels around EMEA and the US: there is airport security at the same or a higher level all around the world, and yet fewer people travelling in those countries seem to have the same level of problems.

My hot take is that it's almost certainly a recruitment and training issue: there seem to be just enough bad apples getting through - and not having poor behaviours trained out of them - to make the self-reported "these guys are idiots" numbers higher than in other parts of the world.


Yeah, it is security theater, but other countries are way more relaxed than the US, especially at small airports with few international flights.

When I was ~17, I had a friend with a false leg with metal in it. We were late for our plane at a Moroccan airport (Agadir, I think), and we burst through the scanner gate, which started beeping. He looked at the agent and tapped his leg; the agent made a "you can go" sign, and we managed to get to the plane without any issue. I have seen a very similar scene at Porto. It might be the Mediterranean temper, but I really think it has more to do with airport size (Lisbon airport agents seem more thorough).


> I'm just tired.

> "I'm so lost, anxious and filled with doubt"

You sound burned out. Deal with that first.

Moving country at this point might not have been the optimal thing to do, but I wouldn't suggest you give up just yet: it does give you a new environment where all those old habits and circles and things are no longer around you. You get to reset. You get to define a new you. But you're going to have to do it slowly, and you're unlikely to get a whole lot of answers from LLMs or shrooms - you'll get a lot more from asking yourself, and answering honestly and openly, some questions you might not have thought about deeply (as in, repeatedly over many weeks or months, without distraction) in a long time, if ever.

What interests you?

Each word matters.

"What" points to a thing, and is a more interesting question than "Why am I tired?", or "How do I fix this?". You can probably write a list of things, but "Why" is about blame or justification and "How" is about method, technique or skill. "What", just is.

"Interests" is not about "passion" or "love" or "desire" or "think will make the most money". It is about what makes your brain feel tickled. It's the thing you can start to create (not what you consume), where you start diving in for 5 minutes and you're still there 2 hours later. I don't mean doom scrolling or media you like - rule out anything where you are not learning deeply about something that will help you create, or creating something directly.

"You" is obviously important. Don't try and build your direction based on what other people do if you're feeling like this. Don't try and copy - try and be your authentic self. You can ask others what interests them and think "Huh, me too, I hadn't thought of that", but don't be diving deep into internals of crypto or LLMs or buying a farm or becoming a buddhist unless those things interest you.

Again: What. Interests. You?

The answer might be "nothing". That's a sign of definite burn-out. It would not surprise me based on what you have written.

Take some time for yourself, explore your new home, go and see some sights and read some books (fiction as well as non-fiction - there's more truth in them, in my experience), and for a while (a month or two, maybe longer), just allow yourself to follow your nose. Focus on your physical and mental health for a while. Eat good quality food. Rest. Consider avoiding stimulants like alcohol and recreational drugs. See the next few months as a sort of extended vacation where you get a chance to reset.

You ask how not to "waste your time" and how to "focus" - maybe the best thing you can do for yourself in the long term right now is to waste your time and focus on nothing. Had you considered that as an option?

After a while - because you're capable, intelligent, conscientious, this is almost inevitable - an idea will start to emerge that you want to focus on. It might not be what you were expecting. It might be building something for yourself (I love writing software for an audience of one: me), or learning a new skill or applying for a job. It could be writing a book or producing art, or learning a musical instrument. It might be in your comfort zone, it might not be.

Whatever it is, you'll look at it and think "This interests me".


I've thought about doing this with software:

"I will write X. Once I have sold $Y worth of licenses, I'll open source it".

Every purchaser is contributing towards the future state of it being open sourced. It balances developers' need to live and pay bills against most of us wanting to get our code out there. It breaks the monthly recurring revenue model most customers hate. And it incentivises early adopters to "invest" by getting early access, while the uncertain just have to wait.

Doing this with articles, books, music, whatever - it all sounds pretty cool, to be honest. It requires creators to radically transform their very human need to maximise revenue from "hits", though.


Somebody should tell him that the character of Mr Burns in The Simpsons was meant to be a satirical parody of evil tycoons, not a role model.

I'd wager that one day his grandchildren (possibly even his children) are going to call for his arrest and imprisonment, as a means to stop themselves being judged for his sins.


History shows it's pretty rare for tech founders to be personally punished in the dramatic way people imagine.


Developers are - on average - terrible at this. If they weren't, TPMs, Product Managers, CTOs, none of them would need to exist.

It's not specific to software; it's the entire world of business. Most knowledge work is translation from one domain/perspective to another. Not even just knowledge work, actually. I've been reading some works by Adler [0] recently, and he makes a strong case for "meaning" only making sense to humans, with each human having a completely different and isolated "meaning" for even the simplest of things, like a piece of stone. If there is difference and nuance to be found when it comes to a rock, what hope have we got when it comes to deep philosophy or the design of complex machines and software?

LLMs are not very good at this right now, but if they became a lot better at it, they would a) become more useful, and b) the work done to get them there would tell us a lot about human communication.

[0] https://en.wikipedia.org/wiki/Alfred_Adler


> Developers are - on average - terrible at this. If they weren't, TPMs, Product Managers, CTOs, none of them would need to exist.

This is not really true; in fact, products become worse the farther away from the problem a developer is kept.

The best products I worked with and on (early in my career, before getting digested by big tech) had developers working closely with the users of the software. The worst were things like banking software for branches, where developers were kept as far as possible from the actual domain (and the decision making) and driven by endless sterile spec documents.


Yet IDEs are some of the worst things in the world. From EMacs to Eclipse to XCode, they are almost all bad - yet they are written by devs for devs.


Unfortunately, they are written by IDE-devs for non IDE-devs.


I disagree, I feel (experienced) developers are excellent at this.

It's always about translating between our own domain and the customer's, and with every other new project there's a new domain to get up to speed with, in enough detail to understand what to build. What other professions do that?

That's why I'm somewhat scared of AIs - they know like 80% of the domain knowledge in any domain.


I think developers are usually terrible at it only because they are way too isolated from the user.

If they had the chance to take the time to have a good talk with the actual users it would be different.


The typical job of a CTO is nowhere near "finding out what the business needs and translating that into pieces of software". The CTO's job is to maintain an at least remotely coherent tech stack in the grand scheme of things, to develop the technological vision of a company, and to anticipate larger shifts in the global tech world and project those onto the locally used stack, constantly distilling that into the next steps to take with the local stack in order to remain competitive in the long run. And, of course, to communicate all of that to the developers, to set guardrails for the less experienced, and to allow and even foster experimentation and improvements by the more experienced.

The typical job of a Product Manager is also not to directly perform this mapping, although the PM is much closer to that activity. PMs mostly need to enforce coherence, across an entire product, in the ways business needs are mapped to the software features being developed by individual developers. They still usually involve developers in the actual mapping, and don't really do it themselves. But the Product Manager must "manage" this process, hence the name, because without anyone coordinating the work of multiple developers, they will quickly construct mappings that may work and make sense individually, but won't fit together into a coherent product.

Developers are indeed the people responsible for finding out what the business actually wants (which is usually not equal to what they say they want) and mapping that onto a technical model that can be implemented as a piece of software - or multiple pieces, if we're talking about distributed systems. Sometimes they get some help from business analysts, a role very similar to a developer that puts more weight on the business side of things and less on the coding side - but in a lot of team constellations they're also single-handedly responsible for the entire process.

Good developers excel at this task and find solutions that really solve the problem at hand (even if they don't exactly follow the requirements, or have to fill in gaps), fit well into an existing solution (even if that means bending some requirements again, or changing parts of the solution), are maintainable in the long run, and maximize the chance of being extendable in the future when the requirements change. Bad developers just churn out some code that might satisfy some tests, and may even roughly do what someone else specified, but fails to be maintainable, impacts other parts of the system negatively, and often fails to actually solve the problem, because what the business described they needed turns out, once again, not to be what they actually needed. The problem is that most of these negatives don't show their effects immediately, but only weeks, months or even years later.

LLMs currently are on the level of a bad developer. They can churn out code, but not much more. They fail at the more complex parts of the job - basically all the parts that make "software engineering" an engineering discipline and not just a code-generation endeavour - because those parts require adversarial thinking, which is what separates experts from everyone else. The following article was quite an eye-opener for me on this particular topic, and I highly suggest anyone working with LLMs read it: https://www.latent.space/p/adversarial-reasoning

