Hacker News | fallous's comments

The question is whether this eventually leads us back to genetic programming, and whether we can adequately avoid the problems of over-fitting to specific hardware that tended to crop up in the past.
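For anyone who hasn't played with it, here's a toy sketch of the genetic-programming loop in question: evolve a small expression tree to fit f(x) = x*x + x. Everything here is invented for illustration; real GP systems add crossover, typed nodes, and careful fitness design, and it's exactly the fitness function that tends to bake in hardware-specific over-fitting.

```python
import random
import operator

# Toy genetic programming: evolve an expression tree toward f(x) = x*x + x.
# Illustrative only -- real systems add crossover and richer primitives.
OPS = [(operator.add, '+'), (operator.mul, '*'), (operator.sub, '-')]

def random_tree(depth=3):
    # Leaves are the variable x or a small integer constant.
    if depth == 0 or random.random() < 0.3:
        return ('x',) if random.random() < 0.7 else ('const', random.randint(-2, 2))
    return ('op', random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree[0] == 'x':
        return x
    if tree[0] == 'const':
        return tree[1]
    _, (fn, _), left, right = tree
    return fn(evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Squared error against the target on a few sample points (lower is better).
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    # Replace a random subtree with a freshly generated one.
    if tree[0] != 'op' or random.random() < 0.3:
        return random_tree(2)
    _, op, left, right = tree
    if random.random() < 0.5:
        return ('op', op, mutate(left), right)
    return ('op', op, left, mutate(right))

def evolve(generations=100, pop_size=40):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=fitness)
```

If the sample points in `fitness` were replaced with a benchmark run on one particular machine, the evolved program would happily exploit that machine's quirks, which is the over-fitting problem being referenced.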

The investor being the customer rather than actual paying customers was something I noticed occurring in the late 90s in the startup and tech world. Between that shift in focus and the influx of naive money the Dot Bomb was inevitable.

Sadly the fallout from the Dotcom era wasn't a rejection of the asinine Business 2.0 mindset but instead an infection that spread across the entirety of finance.


Heh, I was at Netscape when the Sun-Netscape Alliance was created. Tip of the hat to a fellow gray beard. ;)


You just described the burden of outsourcing programming.


Outsourcing development and vibe coding are incredibly similar processes.

If you just chuck ideas at the external coding team/tool you often get rubbish back.

If you're good at managing the requirements and defining things well you can achieve very good things with much less cost.


With the basic and enormous difference that the feedback loop is 100 or even 1000x faster. Which changes the type of game completely, although other issues will probably arise as we try this new path.


That embeds an assumption that the outsourced human workers are incapable of thought, and experience/create zero feedback loops of their own.

Frustrated rants about deliverables aside, I don't think that's the case.


No. It just reflects the harsh reality: what's really soul-crushing in outsourced work is having endless meetings to pass down and get back information, and having to wait days/weeks/months to get some "deliverable" back on which to iterate. Yes, outsourced human workers are totally capable of creative thinking that makes sense, but their incentive will always be throughput over quality, since their bosses usually quote fixed prices (at least in what I lived personally).

If you are outsourcing to an LLM in this case YOU are still in charge of the creative thought. You can just judge the output and tune the prompts or go deep in more technical details and tradeoffs. You are "just" not writing the actual code anymore, because another layer of abstraction has been added.


Also, with an LLM you can tell it to throw away everything and start over whenever you want.

When you do this with an outsourced team, it can happen at most once per sprint, and with significant pushback, because there's a desire for them to get paid for their deliverable even if it's not what you wanted or suffers some other fundamental flaw.


Yep, just these past two weeks. I tried to reuse an implementation I had used for another project; it took me a day to modify it (with Codex), and when I tried it out it worked fine with a few hundred documents.

Then I tried to push 50,000 documents through it, and it crashed and burned like I suspected. It took one day to go from my second spec, more complicated but more scalable and not dependent on an AWS managed service, to working, scalable code.

It would have taken me at least a week to do it myself.


>If you are outsourcing to an LLM in this case YOU are still in charge of the creative thought.

The meetings will continue regardless. Those meetings aren't purely there for productivity reasons. It's the incentives for middle managers and directors and whatnot.

It's just that all the throughput of an outside team is pushed onto you and 0-3 more engineers. You will optimize accordingly for your job and get something out to satisfy their demands, likely sacrificing quality in the process. It's no different from domesticating the outsourcing.

And that's for now. Maybe in a few decades' time the engineer is extinct and it's simply cheap scripters who run it.


It doesn't have to be soul crushing.

Just like people more, and have better meetings.

Life is what you make it.

Enjoy yourself while you can.


It's not strictly soul-crushing for me, but I definitely don't like to waste time in non-productive meetings where everyone bullshits everyone else. Do you like that? Do you find it a good use of your time and brain attention capacity?


I think there's a certain kind of irony in being asked externally to enjoy the rubbish I've been given to eat. It's still rubbish.


You sit at a desk.

You get paid in the top 1% globally

You have benefits

Some hope or dreams for what to do with your future, life after work, retirement.

You get to work with other people, overseas.

Talk to those contractors sometimes. They are under tremendous pressure. They are mistreated. One wrong move and they're gone. They undergo tremendous prejudice and soft racism every day, especially from us FTEs.

You find out that they struggle with the drudgery as well, looking for solutions, better understanding, etc.

We all feel disposable by our corporate masters, but they feel it even more so.

Be the change you want to see in the world.


> Be the change you want to see in the world.

Gladly! I think what I would choose is building on-shore teams exclusively. That's the change I'd like to see more of, while overseas teams build their own economies instead of ripping away jobs from domestic citizens in an already difficult job market.


almost feels like this could be a good political slogan for a campaign… like “america first” or something like that… oh wait… :)


If it was really America First, they might not be so screwed in a free and fair election in November.

If it was really America First, their priorities wouldn't be to attack free and fair elections; they'd reflect and actually practice what they preach.


>You sit at a desk.

Not any longer. Got laid off.

>Be the change you want to see in the world.

Easy to say when you're at a desk and not scrounging up pennies to pay rent.


Just have better meetings

If we could I think we would be doing that...


It's going to come across as very naive and dumb, but I believe we can; people just aren't aware of the basics, or simply aren't implementing them.

Harvard Business Review and probably hundreds of other outlets publish simple rules for meetings, yet people don't even follow them.

1. Have a purpose/objective for the meeting. I consider meetings to fall into one of three broad categories: information distribution, problem solving, or decision making. Knowing which one it is makes the meeting go a lot smoother, or lets it be replaced with something like an email and be done with it.

2. Have an agenda for the meeting. Put the agenda in the meeting invite.

3. If there are any pieces of pre-reading or related material to be reviewed, attach it and call it out in the invite. (But it's very difficult to get people to spend the time preparing for a meeting.)

4. Take notes during the meeting and identify any action items and who will do them (preferably with an initial estimate). Review these action items and people responsible in the last couple of minutes of the meeting.

5. Send out the notes and action items.

Why aren't we doing these things? I don't know, but I think if everyone followed these for meetings of 3+ people, we'd probably see better meetings.


Probably, like most business issues, it's a people problem. They have to care in the first place, and I don't know if you can make people who don't care start caring.

I agree the info is out there about how to run effective meetings.


Bingo -- 95% of work is people problems.

The coding is the easy part.

With LLMs and advanced models, even more so.


You can make people care easily. But people these days aren't incentivized to care. Companies announce layoffs and get a stock boost many times. You leave a company from the C-suite and get paid millions. You speak corporate BS in meetings and get promoted. You bribe a government and you get tax breaks. I could go on for paragraphs about influencers, grifters, government, etc. It's entrenched everywhere.

We in tech like talking about meritocracy, but that's all collapsed, and even the illusion of it has collapsed now.


Not really; it's just obviously true that the communication cycle with your terminal/LLM is faster than with a human over Slack/email.


100%! There is significant analogy between the two!


There is a reason management types are drawn to it like flies to shit.


Working with and communicating with offshored teams is a specific skill too.

There are tips and tricks for managing them, and not knowing them will bite you later on. Like the basic rule of never asking yes-or-no questions, because in some cultures saying "no" isn't a thing. They'd rather default to yes and effectively lie than admit failure.


YES!

AI assistance in programming is a service, not a tool. You are commissioning Anthropic, OpenAI, etc. to write the program for you.


Yes, but as with outsourcing those who are making such decisions often lack the awareness, or even skills, to properly specify the requirements and be able to evaluate the results.


We need a new word for on-premise offshoring.

On-shoring ;)


> On-shoring

I thought "on-shoring" was already commonly used for the process that undoes off-shoring.


How about "in-shoring"? We already have "insuring" and "ensuring", so we might as well add another confusingly similar sounding term to our vocabulary.


How about we leave "...shoring" alone?


Ha, my inexperience is showing :)


En-shoring?


Corporate has been using the term "best-shoring" for a couple of years now. My best guess is that it means "off-shoring or on-shoring, whichever of the two is cheaper".


If the on-premise offshoring centers around the use of LLMs then I suggest the term "off-braining." :)


Rubber-duckying... although a rubber ducky can't write code... infinite-monkeying?


In silico duckying


NIH-shoring?


Ai-shoring.

Tech-shoring.


Would work, but with "snoring". :D


vibe-shoring


eshoring


We already have a perfect one

Slop ;)


So many are desperately wishing to be the next Tom Wolfe rather than striving to find their own voice and style (as Wolfe did).


"My architecture depends upon a single point of failure" is a great way to get laughed out of a design meeting. Outsourcing that single point of failure doesn't cure my design of that flaw, especially when that architecture's intended use-case is to provide redundancy and fault-tolerance.

The problem with pursuing efficiency as the primary value prop is that you will necessarily end up with a brittle result.


> "My architecture depends upon a single point of failure" is a great way to get laughed out of a design meeting.

This is a simplistic opinion. Claiming services like Cloudflare are modeled as single points of failure is like complaining that your use of electricity to power servers is a single point of failure. Cloudflare sells a global network of highly reliable edge servers running services like caching, firewall, and image processing, and more importantly acts as a global firewall that protects services against distributed attacks. Until a couple of months ago, it was unthinkable to casual observers that Cloudflare was such an utterly unreliable mess.


Your electricity to servers IS a single point of failure, if all you do is depend upon the power company to reliably feed power. There is a reason that co-location centers have UPS and generator backups for power.

It may have been unthinkable to some casual observers that creating a giant single point of failure for the internet was a bad idea but it was entirely thinkable to others.


> Your electricity to servers IS a single point of failure, if all you do is depend upon the power company to reliably feed power.

I think you didn't quite get the point. The whole point is that designing a system architecture that treats Cloudflare as a single point of failure is like designing one that treats your power supplier as a single point of failure. Technically they can be considered that if you really, really want to, but not only are things irredeemably broken when those failure modes trigger, they are also by far the components expected to be the most reliable in your system, thanks to their design and SLAs, so it is pointless to waste time and resources mitigating such a scenario.


You're arguing from an end-user perspective, I'm pointing out that the Internet wasn't designed to solve easy but fragile problems but instead was intended to be a resilient network capable of surviving failures and route around them.

"I want to use a power tool and simply plug it into a wall" is not the same class of problem as "we're using a heart-lung machine during this bypass operation and power loss results in dead patients."

The widespread dependence upon Cloudflare has resulted in the "heart-lung machine" problem of DNS, among other things, being "solved" by a "power tool" class of solution.
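The mitigation being argued for is the same one data centers apply to power: assume the single provider will fail and keep a fallback path ready. A minimal sketch in Python (the mirror URLs are hypothetical placeholders for two independent providers):

```python
import urllib.request

# Try a list of mirrors in order instead of hard-coding a single provider.
# Both URLs below are made-up placeholders.
MIRRORS = [
    "https://cdn-primary.example.com/asset.js",
    "https://cdn-backup.example.net/asset.js",
]

def fetch_with_fallback(urls, timeout=3):
    """Return the first successful response body, or raise if all mirrors fail."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError as err:  # URLError and socket timeouts are OSError subclasses
            last_error = err
    raise RuntimeError(f"all mirrors failed, last error: {last_error}")
```

This doesn't make the dependency disappear, but it turns "provider X is down" from a total outage into a latency blip, which is the difference the surrounding comments are arguing about.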


> You're arguing from an end-user perspective,

No. I am arguing from a software engineer's perspective, tackling a systems design problem.

> I'm pointing out that the Internet wasn't designed to solve easy but fragile problems but instead was intended to be a resilient network capable of surviving failures and route around them.

Irrelevant. Engineers design systems that keep functioning in spite of their failure modes, but some failure modes are irredeemable. Even structural engineers don't design structures to withstand all conceivable earthquakes, because they understand that mitigating those failure modes is unrealistic.

The same goes for software. You do not build your web apps to keep working when half of the internet dies, and that includes scenarios such as AWS, GCP, or Cloudflare being down.


You do know that data centers use backup generators because electricity is a single point of failure right? They even have multiple power supplies plugged into different circuits.


> You do know that data centers use backup generators because electricity is a single point of failure right?

How many times do you account for the existence of backup generators in a data center when you design, say, a SPA? How do you expect to be treated if you even suggest such a thing in a design requirements meeting? Do you understand what I am saying?


All of the time? Backup power is table stakes for anyone serious about hosting.

It’s not even clear what you’re trying to say, but redundancy is one of the primary things you are paying for when shopping for a datacenter to use.

Even tiny 1000 sqft data centers for colleges had UPSes with 1-2 generators 20+ years ago.


If you combine a 3 year-old, whose favorite word is "why?", and the ambition of a 7 year-old you might just end up with the most productive genius possible.


Add the social insecurity of an adolescent, and you suddenly have a madman know-it-all (with almost no actual knowledge) who slows that learning to a drip.

It's a miracle high schools are able to achieve anything, really.


And, in some circumstances, they'll oscillate.


If you think that resolution is terrible, you should've tried my lightpen-based "scanner." ;)


It precludes many of the advantages of a flatbed scanner (such as scanning book pages without requiring removal of the pages), which existed at the same time as the Thunderscan. Things like hand scanners established themselves at the low-end by the early 90s.


Prior to the iPhone (but within the years Jobs was in charge), Apple was a company whose target demographic was the professional/semi-professional creative market. Once iPod and iPhone demonstrated a huge sales potential the company abandoned the creatives market and became a consumer-oriented company that provided means of media consumption.

