gregfjohnson's comments | Hacker News

PuTTY is a superb tool. Thank you so much for your efforts over the years.


IBM wrote APL\360 in 360 assembly language. The IBM 5100 personal computer had a small cpu. They wanted APL on the 5100, so they implemented a 360 emulator and ran the original implementation of APL on that.


I thought it was pretty cool when they had a PCI(?) card you could add to a PC that could run VM/SP.


The later generation mainframe-in-a-PC cards were PCI, but they’d also done MCA and ISA ones, going back to the XT/370.

The XT/370 is particularly bonkers, as it uses a combo of a 68000 and an 8087 with custom microcode in them to run System 370 code.


And that’s why John Titor came back :)


This brings to mind something I've wondered about for a while. Sunrise on the shortest day of the year is earlier than sunrise for several days after it.

Sunset on these days after the shortest day is of course even later than sunset of the shortest day.

On the beautiful image of the OP, you can see that after dawn of December 21, dawn continues to get later over the next few days.

In my area, sunrise on 12/21/2024 was 6:54am, and it will continue to get later until 1/8/2025, when it is at 6:59am.

Length of day on 12/21/2024 is 9 hours, 54 minutes, and length of day on 1/8/2025 is 10 hours, 2 minutes.

Searching the web, I haven't found an explanation for this that "clicks" for me as both intuitive and rigorous. Any thoughts or pointers on this?


OK here is my attempt at intuition:

Here is an approximation that captures the main effect (the 23.5 degree tilt of the earth's rotation axis) and overlooks secondary effects.

Consider the equator. Imagine a circle in the X-Y plane, centered at the origin. Now tilt the circle up 23.5 degrees about the x axis. The projection of this tilted circle back onto the X-Y plane is an ellipse whose vertical axis is about 92% of the length of its horizontal axis. Next, consider a series of vectors in the X-Y plane, starting on the X axis, with angles in steps of 0.986 degrees. (This is approximately the angle the earth progresses around the sun each day.)

Where each vector hits the unit circle, move the point up or down (within the plane) so that it lands on the ellipse. The angle to that adjusted point will change a bit for most of the rays: in some cases it is a bit smaller, and in some cases a bit larger. These discrepancies are the variations in the clock time of sunrise and sunset over the course of a year at the equator.
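
Here's a quick numeric sketch of that picture (my own code, and the constant names are mine): it walks the year in 0.986-degree steps, projects each point on the tilted circle back onto the plane as described above, and converts the angle discrepancy to clock minutes (360 degrees of rotation = 24 hours, so 1 degree = 4 minutes). It models only the tilt term; the eccentricity of the orbit adds a second term of comparable size.

    import math

    TILT = math.radians(23.5)        # axial tilt used in the description above
    STEP = 360.0 / 365.2422          # degrees the earth moves along its orbit per day

    def offset_minutes(day):
        """Clock-minute discrepancy for the given day, tilt term only.
        Day 0 is where the tilted circle crosses the X-Y plane (an
        equinox in this toy model)."""
        theta = math.radians(day * STEP)
        # point on the tilted circle, projected back onto the X-Y plane
        projected = math.atan2(math.sin(theta) * math.cos(TILT), math.cos(theta))
        diff = math.degrees(theta - projected)
        diff = (diff + 180.0) % 360.0 - 180.0   # wrap into (-180, 180]
        return diff * 4.0                       # 1 degree of rotation = 4 minutes

    for day in range(0, 366, 15):
        print(f"day {day:3d}: {offset_minutes(day):+6.2f} min")

The discrepancy peaks at roughly +/- 10 minutes; adding the eccentricity term on top of this gives the full equation of time.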


The apparent movement of the sun is influenced not only by the Earth's rotation, but also by the instantaneous velocity of its revolution around the sun, and by the fact that the Earth's axis is tilted w.r.t. the ecliptic.

Study this Wiki article, especially the components part: https://en.wikipedia.org/wiki/Equation_of_time


It’s partly because we have a standardised 24 hour clock and solar noon (when the sun is highest) is sometimes ahead and sometimes behind GMT noon. The sunrise and sunset times relate to solar noon so they vary accordingly.

See ‘equation of time’ and ‘analemma’ for underlying astronomical explanations, as hinted at by sibling posts.


I don’t know if this is sufficient but there was just a StarDate about the topic https://stardate.org/podcast/2024-12-3


Thanks, very much appreciate the thoughts and pointers!


I've been trying to find discussion of this topic on Hacker News between October 1582 and September 1752, but to no avail.

'cal 9 1752' is ... funny. I guess instead of doing this annoying aperiodic leap second business, they accumulated a bunch of leap seconds owed, and skipped 11 days at one go. Sysadmins at the time were of divided opinion on the matter.


This short book is (IMHO) one of the best on software design. To me the main point of the book is the importance of well-designed abstractions. The "surface area" of a well-designed abstraction is small, easy to understand, and helpful as you reason through your code when you use it. The underlying implementation may be deep and non-trivial, but you find that you don't have any need to worry about the underlying internal details.

In short:

A beautifully designed abstraction is easy to understand and use.

It is so trustworthy that you don't feel any need to worry about how it is implemented.

Finally, and most importantly, it enables you to reason with rigor and precision about the correctness of the code you are writing that makes use of it.


That book is an almost perfect summary of what is in my head after 30+ years of programming. I recommend it often to new people, as I see them making the same mistakes I did back then.

I recommend not losing time with “Clean X” books, but instead reading this book. Also, as noted in other comments, you can only “get it” after some real experience, so it is important to practice and develop a “common sense” of programming.


I disagree that the "Clean X" books are a waste of time. They lay a nice groundwork for what to aim for when writing code, particularly when you're early in your career.

When I was starting as a professional coder years ago, I had an intuitive sense of what good code was, but I had no idea how much actual thought had been put into it by other people. Reading those books was a good first step in seriously thinking about the subject and looking at code differently, as a craft ("it's not just me, this code smells!" or "hey, that's a neat idea, better keep this in mind").

Definitely would recommend to someone starting out their career.

Edit: getting downvoted for a reasonable, justified opinion. Classy.


Don’t know about the rest of the series, but Clean Code isn’t merely a waste of time, it’s worse — it’s actually a net negative, and lies at the root of a number of problems related to incidental complexity.


Care to elaborate?


Not GP but: Personally, I find that book's advice highly subjective and rooted in aesthetics rather than pragmatism or experimentation. It encourages an excessive number of very small methods and very small classes, and brushes off the problems that this causes.

Not about the book itself, but: its influence is malignant. Even Uncle Bob mentioned in a recent interview that he will break the "10 lines per method" rule if need be. But practitioners influenced by the book lack his experience, and are often very strict. I even remember a specific Ruby linter that capped methods at 5 or 6 lines max, IIRC. Working in such a fragmented codebase is pure madness. This comment from another user reminded me of some of those codebases: https://news.ycombinator.com/item?id=42486032

EDIT: After living in the "Clean Code world" for half a decade, I can categorically say that it produces code that is not only slower to run (as argued by Casey Muratori [1]) but also slower to understand, due to all the jumping around. The amount of coupling between incestuous classes and methods born out of "breaking up the code" makes it incredibly difficult to refactor.

[1] https://www.youtube.com/watch?v=tD5NrevFtbU


I think people get hung up on the small classes/methods and ignore all the rest. One important lesson is that aesthetics do matter and that you have to pay attention to writing maintainable code. These are important lessons for a beginning developer. If you think otherwise, you've never worked on a code base with 300-line functions and variables named temp, a, and myVar.

Regarding short functions: yes, having them too short will absolutely cause problems, and you should not use this as an absolute rule. But when writing code it's very useful to keep this in mind in order to keep things simple - when you see your functions doing 3 independent things, maybe it's time to break it in 3 sub functions.

Edit: I see some criticism concerning too small classes, class variables being used as de facto global variables and shitty inheritance. Fully agree that these are plain bad practices stemming from the OOP craze.


Sure, but nobody is saying that aesthetics don't matter. Quite the opposite. People have been saying this for decades, and even government agencies have code-style guidelines. Also, the idea that big procedures are problematic is as old as procedural programming itself.

The problem is that, when it comes to aesthetics, one of the two more-or-less-novel ideas of the book (and the one that is followed religiously by practitioners) is downright problematic when followed to the letter.

> when you see your functions doing 3 independent things, maybe it's time to break it in 3 sub functions

That's true, and I agree! But separation of concerns doesn't have much to do with 10-lines-per-method. The "One Level of Abstraction per Function" section, for example, provides a vastly better heuristic for good function-size than the number of lines, but unfortunately it's a very small part of the book.

> I see some criticism concerning [...] class variables being used as de facto global variables

The criticism is actually about the book recommending transforming local variables into instance/object variables... here's the quote: https://news.ycombinator.com/item?id=42489167


If the 3 things are related such that they will only ever be called in order one after the other (and they are not really complex) it’s better to just do all the work together.


Yep, if they're related then I agree 100%.


But this line of thinking is exactly what's wrong with Clean Code. Just seeing your function doing three independent things is not a signal that you should begin refactoring.

I've worked on code bases with functions that were longer than 300 lines with shorter variable names. Whether this is a problem is completely dependent on the context. If the function is 300 lines of highly repetitive business logic where the variable name "x" is used because the author was too lazy to type out a longer, more informative variable name, then maybe it's possible to improve the function by doing some refactoring.

On the other hand, if the function is an implementation of a complicated numerical optimization algorithm, there is little duplicated logic, the logic is all highly specific to the optimization algorithm, and the variable name "x" refers to the current iterate, then blindly applying Clean Code dogma will likely make the code harder to understand and less efficient.

I think the trick here is to cultivate an appreciation for when it's important to start refactoring. I see some patterns in how inexperienced developers approach refactoring in these two examples.

In the first example, the junior developer is usually a little unmoored and doesn't have the confidence to find something useful to do. They see some repetitive things in a function and they decide to refactor it. If this function has a good interface (in the sense of the book---it is a black box, and understanding the implementation is not required), refactoring may be harmful. They run the risk of broadening and weakening the interface by introducing a new function. Maybe they accidentally change the ABI. And if they have only changed the implementation, and no one spends any time looking at the details of this function anyway because it has a good interface... what's been gained?

In the second example, the junior developer is usually panicked and confused by a Big Complicated Function that's too hard for them to understand. They conflate their lack of understanding with the length and complexity of the function. This can easily be a sign of their lack of expertise. A person with appropriate domain knowledge may have no trouble whatsoever reading the 300 line function if it's written using the appropriate idioms etc. But if they refactor it, it now becomes harder to understand for the expert working on it because 1) it's changed and 2) it may no longer be as idiomatic as it once was.


One of the biggest issues with the book is that it is a Java-centric book that aspires to be a general-purpose programming book. Because it never commits to being either, it sucks equally at both. In much the same way, it's a "business logic"-centric book that aspires to be general purpose, so it sucks at both (and it especially sucks as advice for writing mostly-technical/algorithmic code). This is epitomized by how HashMap.java from OpenJDK[0] breaks almost every single bit of advice the book gives, and yet is one of the cleanest pieces of code I've ever read.

One fundamental misunderstanding, in the book and in some of his talks, is that he equates polymorphism with inheritance. I'll forgive him never coming across ad hoc polymorphism as present in Haskell, but the book was published in 2009, while Java got generics in 2004. Even if he didn't have the terminology to express the difference between subtype polymorphism and parametric polymorphism, five years is plenty of time to gain an intuitive understanding of how generics are a form of polymorphism.

His advice around preferring polymorphism (and, therefore, inheritance, and, therefore, a proliferation of classes) over switch statements and enums was probably wrong-headed at the time, and today it's just plain wrong. ADTs and pattern matching have clearly won that fight, and even Java has them now.
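
To make the contrast concrete, here is a hypothetical example (mine, not the book's): the kind of "switch" the book tells you to replace with a class hierarchy, written instead with dataclasses and structural pattern matching (Python 3.10+).

    from dataclasses import dataclass

    # A tiny sum type: a shape is either a Circle or a Rect.
    @dataclass
    class Circle:
        radius: float

    @dataclass
    class Rect:
        width: float
        height: float

    def area(shape) -> float:
        # The book's advice would push each case into its own subclass
        # with an area() method; the flat match keeps the cases together.
        match shape:
            case Circle(radius=r):
                return 3.141592653589793 * r * r
            case Rect(width=w, height=h):
                return w * h
            case _:
                raise TypeError(f"not a shape: {shape!r}")

    print(area(Circle(1.0)), area(Rect(2.0, 3.0)))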

Speaking of proliferation of classes, the book pays lip service to the idea of avoiding side-effects, but then the concrete advice consistently advocates turning stateless functions into stateful objects for the sake of avoiding imagined problems.

One particular bugbear of mine is that I've had literally dozens of discussions over the years caused by his advice that comments are always failures to express yourself in code. Many people accept that as fact from reading it first hand; with others you can clearly trace the brain rot back to the book through a series of intermediaries. This has the effect of producing programmers who don't understand that high-level strategy comments ("I'm implementing algorithm X") are incredibly information dense, where one single line informs how I should interpret the whole function.

Honestly, the list goes on. There are a few nuggets of wisdom buried in all the nonsense, but it's just plain hard to tell people "read this chapter, but not that one, and ignore these sections of the chapter you should read". Might as well just advise juniors against reading the book at all, and to only visit it once they've had the time to learn enough to cut through the bullshit themselves. (At which point it's just of dubious value instead of an outright negative.)

0. https://github.com/openjdk/jdk/blob/master/src/java.base/sha...


I think you are totally right. The clean X books are not a waste of time. I meant that in the sense of “start here, don’t delay this”. I would recommend: read aPoSD, then Clean X series, then again aPoSD ;)


There tend to be two camps with the Uncle Bob franchise as I see it:

Those that fall for the way he sells it, as the 'one true path', or are told to accept it as being so.

Those who view it as an opinionated lens, with some sensible defaults, but mostly as one lens to think through.

It is probably better to go back to the earlier SOLID idea.

If you view the SRP as trying to segment code so that only one group or person needs to modify it, to avoid cross-team coupling, it works well.

If you use it as a hard rule and, worse, listen to your linter, and mix it in with a literal interpretation of DRY, things go sideways fast.

He did try to clarify this later, but long after it had done its damage.

But the reality is that the way he sells his book, as the 'one true path', works.

It is the same reason scrum and SAFe are popular. People prefer hard rules over a pile of competing priorities.

Clean architecture is just ports and adapters or onion architecture repackaged.

Both of which are excellent default approaches, if they work for the actual problem at hand.

IMHO it is like James Shore's 'The Art of Agile Development', which is a hard sell compared to the security blanket feel of scrum.

Both work if you are the type of person who has a horses for courses mentality, but lots of people hate Agile because their organization bought into the false concreteness of scrum.

Most STEM curriculums follow this pattern too, teaching something as a received truth, then adding nuance later.

So it isn't just a programming thing.

I do sometimes recommend Uncle Bob books to junior people, but always encourage them to learn why the suggestions are made, and for them to explore where they go sideways or are inappropriate.

His books do work well for audiobooks while driving IMHO.

Even if I know some people will downvote me for saying that.

(Sorry if your org enforced these oversimplified ideals as governance.)


Which of these leads to 1 sentence per paragraph? I found this incredibly hard to read.


> A beautifully designed abstraction is easy to understand and use.

It's like in "Clean Code", where Ward Cunningham said that clean code is beautiful code.

Beautiful design, beautiful code, beautiful abstraction, beautiful class, beautiful function ... But isn't all that subjective and broad?


Yes, it's subjective, but not entirely. After you've done it for a couple of decades, you start to have a sense of taste, of aesthetics. Some things seem beautiful, and others ugly. It's "subjective", but it's also informed by two decades of practice, so it is far from being purely subjective.


Robert M. Pirsig discusses qualia in his writings. One objection raised by his antagonists is, "quality is just what you like", echoing the idea of broad subjectivity you raise. Yet there is broad agreement on what counts as quality. Among the aspects we agree on are complexity and subjective cognitive load.


This same identity can be used to provide geometric intuition as to why i*i must equal -1. This is shown in the diagrams at the bottom of http://gregfjohnson.com/complex/.
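
Not the diagrams from that page, but a quick numeric illustration of the rotation picture (my framing): multiplying by i rotates a point 90 degrees counterclockwise in the plane, so applying it twice sends 1 to -1.

    # Two 90-degree rotations make a 180-degree rotation: hence i*i == -1.
    print((1 + 0j) * 1j)        # (0+1j)  -- 1 rotated by 90 degrees
    print((1 + 0j) * 1j * 1j)   # (-1+0j) -- rotated by 180 degrees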


A nice way to think about eta-reduction is that it asserts something about the types of expressions in lambda calculus, namely that every expression is in fact a function.

If an expression M can appear in the left position of the function application operation, this implies that M is a function.

By way of analogy, if I have a formula x == x+0, this implies that x is a number.

Or, s == s + '' would imply that s is a string.

So, if M == lambda x. M(x), this is saying that M is a function.
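
A loose Python rendering of the idea (my sketch; Python is not the lambda calculus, but the shape carries over):

    def double(x):
        return 2 * x

    # Eta-expansion wraps a function in a do-nothing lambda;
    # eta-reduction says the wrapper can be dropped.
    eta_expanded = lambda x: double(x)

    assert all(double(n) == eta_expanded(n) for n in range(10))

    # Analogy from above: x == x + 0 only makes sense if x is a number;
    # M == lambda x: M(x) only makes sense if M is a function.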


I work at a medical device company that specializes in radiation treatment for brain tumors.

This thoughtful and profound essay brings home the lived reality of the patients who are treated by our systems.

The writer speaks lived truth that has a tone of heft and substantiality.

Human life is a fragile and temporary gift. Most of us are lucky enough to have a few moments of transporting and profound beauty and joy.

While life's journey has an inevitable end for all of us, we can help each other in innumerable ways to make the journey more bearable, and at times joyful.

I'm an old guy, and have an artificial hip and cataract implants. I'm deeply grateful for the quality of life I've been gifted to receive by the medical people who make these kinds of things possible.

I hope that the brain treatment system I work on will be a similar gift to the lives of at least some of the patients who require that kind of treatment.


>Human life is a fragile and temporary gift. Most of us are lucky enough to have a few moments of transporting and profound beauty and joy.

>While life's journey has an inevitable end for all of us, we can help each other in innumerable ways to make the journey more bearable, and at times joyful.

very nicely articulated. thank you.

the series of essays is moving and profound. what we take for granted is a miracle and we don't realize it, and are caught up in trifles.

thanks to whoever brought this to our collective attention.


What a beautiful, beautiful comment. Thanks for working on a brain treatment system. Who knows, one day I might benefit from it.


We used LwIP for a project some years ago, and found a very nice way to do system testing.

The project involved multiple microcontrollers communicating over an internal LAN. They ran a small embedded kernel named MicroC/OS, with LwIP as the IP stack.

We had cross-platform build tools set up, so we could build our stand-alone microcontroller applications either for the native target or, with gcc, as x64 executables that ran on developer boxes. In the latter case, we implemented the lowest-level link-layer part of LwIP as a mock that used standard TCP/IP! We wrote a small TCP server and would spool up the microcontroller applications, which would then talk to each other on the developer machines as though they were running inside the actual system.
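
The mock link layer itself was C code against LwIP's driver interface, but the "virtual wire" idea is easy to sketch. Here is a hypothetical hub in Python (port, framing, and names are all made up, and framing is simplified) that re-broadcasts every frame to all other connected emulated nodes, like a shared Ethernet segment:

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 5555       # made-up address for the virtual wire
    clients, lock = [], threading.Lock()

    def relay(conn):
        """Forward everything received from one node to all the others."""
        with lock:
            clients.append(conn)
        try:
            while (frame := conn.recv(2048)):
                with lock:
                    for other in clients:
                        if other is not conn:
                            other.sendall(frame)
        finally:
            with lock:
                clients.remove(conn)
            conn.close()

    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=relay, args=(conn,), daemon=True).start()

Each emulated node's mock link layer just connects to that port and treats send/recv as its Ethernet.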

This setup worked really well, and we used it for years during the development effort for the project.


A person close to me works at a law firm. She was feeling a bit stagnant, so she connected with a recruiter. She got a very solid offer with a significant pay bump. She gave her two weeks' notice to her firm, including appointments with the partners. One of the partners asked her for a half hour. They came back with a massive pay raise, and a promotion to partner if she would stay. She was in a state of shock, but then informed the other firm that she was staying at her current firm.

By way of contrast, an engineering firm I am familiar with had an employee who had been there six years, and knew the company's very complex product inside and out, every nook and cranny. He was one of the only people who had such deep understanding of the system that he could fix any issues that might come up, hardware, firmware, software, everything. He gave his two weeks' notice, and then went to a different job. He's a very talented guy, who would command a very attractive offer, but his talent to the current company is vastly greater than his generic value on the market, because of his detailed knowledge of the product. Although he diligently documented his knowledge, the company was still left in a jam after his departure. It would have been great if the company had fought for him the way the law firm fought for the other individual described above.


“but his talent to the current company is vastly greater than his generic value on the market”

This has always been one of my fears. I would not want to become indispensable for a no name company while paying the price of being average to the market.


The point is that the market is willing to pay more, which seems counterintuitive since the employee should be much more valuable to their current employer. But, this does seem to happen with some frequency.


What prevents engineering firms from acting like law firms?


Well...

Because the Industry thinks that Management/Marketing/Sales are the important "leaders" and Engineers are mere "foot soldiers", and hence replaceable/dispensable as needed. The above is justified as "Business needs", "Investor returns", "Profit next quarter", etc.


Very early on in my career, I had a few good mentors who all told me the same piece of advice - tech (IT, programming, QA, etc.) are __cost centers__ to most businesses. No matter how valuable you are, your pay is not going to scale the same way, no matter how good your performance reviews are. I'm very thankful for that advice because I was never shocked when raises were low or when I found out that new, lower-leveled coworkers had higher starting salaries than me. My default mindset is that I'm fungible to the business. Work hard, but also expect to fight to prove your worth. This is generally good advice in any career, but I think it's a must in tech.


But it wasn't like this at the beginning of the Industrial/Knowledge revolution. Engineers/Scientists were valued, and they founded companies with Engineering at their core and with Management/Marketing/Sales etc. playing their proper ancillary roles. It was much later that the system was manipulated to place Management on an (undeserved) pedestal, citing market/financial reasons. And Engineers have allowed this to happen, take root, and persist to this day.

We need to change the above status-quo.

However, the important point we need to be aware of is that in the current Economic/Financial system many events and their payoffs are no longer linear, and that is what companies are trying to optimize for. The best explanation of this is Nassim Taleb's Mediocristan (non-scalable) vs Extremistan (scalable) dichotomy. The video "Pareto, Power Laws, and Fat Tails—what they don't teach you in STAT 101" is a very nice overview of the essential points: https://www.youtube.com/watch?v=Wcqt49dXtm8


Well, yeah, and generally you need to seek new employment regularly, because your current company is counting on you not recognizing your worth, whereas other companies will have to make competitive offers in order to win you over.


> What prevents engineering firms from acting like law firms?

I wonder if it's harder to associate an employee's skill level with company profit.

If most law firms have established hourly billing rates for each employee, then the owners can more clearly see an employee's true value to the company?


And yet at software consulting firms, billable hours are tracked the same way. Still, you don't see the same level of compensation raises as with law firms.

I think it's true that engineers are treated as mere "foot soldiers" and hence replaceable/dispensable as needed (as mentioned in another sibling comment here).


Management perceives people to be more replaceable than they are. Years of working on, or being the architect of, the company's core product will make you a true expert in it.

But, from the company perspective, your value is based on the 'market rate' for your generically defined skills and experience.


This is precisely the reason for management: when priorities are internally competitive.


I think engineers tend to overestimate the value they bring. _Most_ companies make money from business deals that are written and signed into contracts. Sometimes those deals involve automating stuff, and that's where software engineers are useful. If a company loses some rock-star engineer, the automations they worked on don't break immediately; there's some time for other engineers to figure out how they work. If something cannot be delivered in a timely manner according to the contract because the engineering branch got weaker after a rock star left, companies either have to pay some fraction as fines or may agree on deadline extensions. Either way, the contracts have been signed, the money is flowing, and all the holes will eventually be plugged by other engineers.

Anecdotally, I was recently approached by someone who was very eager for me to consult on the product they were going to build. After a few hours of talking I quickly realized that they didn't really need to build anything complex. In fact, my advice was to focus only on the core functions, which are very simple, and leave the majority of the actual work to be done manually by a much cheaper secretary-type role until the product got enough traction to actually benefit from automation.


All attorneys that work for law firms have billable hours. Only agencies and consultancies have billable hours for engineers.

It’s easier to say “we will lose $X if this person leaves” or “$Y will be at risk if they leave due to personal relationships” than it is to quantify an engineer’s revenue impact to the company.

Hence the common cost center assumption.


in a law firm, the lawyers are partners or very important people to the firm even if they're not partners.

i.e. they're not cogs in a wheel that can be replaced by 'management'.

inasmuch as engineers here would like to believe otherwise -- most engineers at most firms are treated as replaceable/disposable resources.


People may hate this, but people leaving is valuable to companies as a whole.

Lawyers typically specialize and they work off the same body of work everywhere (the same set of laws). Having worked for 10 law firms doesn’t mean that they know something current employees don’t.

Tech isn’t like that. Everywhere is different, many of us touch multiple specializations, the body of knowledge we need is always shifting. An engineer that has worked 10 places very likely does know things your current employees don’t, like different tools.

Losing an engineer is bad for the individual business, but engineers moving around is good for businesses in general.


Partners at law firms know that the money comes from the lawyers, and they (probably) know who the rising stars are.

Managers at engineering firms too often think that the money comes from the wisdom of the managers and the hustle of the marketers, not from the work of the engineers.

This may be because partners at law firms are lawyers, but upper managers at engineering firms often are not engineers.


In theory, this could become some type of exploitable game.

Send out offer letters to competitors' most indispensable people, just to jack up your competitors' labor costs. Arms race. Eventual mutually assured destruction.


The fraudulent offer letters notwithstanding, I think insiders would see this as an opportunity for Moneyball rather than a risk of unraveling in confusion.

Moneyball as in an optimization technique when roles are static. Obviously this is generic because static data of any value inevitably ends up in a spreadsheet. Otherwise, I’m not sure game theory holds together here.


Companies did engage in the reverse of this, to keep salaries down.


That is capitalism.


They actually do.

I received and accepted a counter-offer myself, and I know quite a few people who were in such a position.


engineers either have patent quotas or are blue collar as far as any Enterprise is concerned.

do you have patent quotas at your position? if not, they are counting the days until they replace you with a machine, someone overseas, a script, or AI, depending on the decade.


Why is a lawyer position irreplaceable, but an engineering/blue collar one is not?

Not trying to be spiteful or angry, just genuinely curious about the business logic and psychology behind that kind of decision making. Isn't every job replaceable?


I think the reason is not that lawyers are irreplaceable; to the contrary, perhaps. The reason is most likely that law firms are always led by lawyers, while that's rarely true for the firms where most engineers work (or for most other professions). Typically, law firms can only be owned by lawyers due to regulatory constraints (the UK and Australia being notable, but not particularly successful, exceptions). While this rule restricts the possible size, scalability, efficiency and profitability of law firms compared to many other sectors, it can ensure a certain degree of independence (insulation) of the management from extra-professional influences, like "Das Kapital" trying to change the way the firm works to achieve better returns in the next 5 years.

That leads to law firm management and owners being composed mostly of people who have previously worked in that exact field, and who are PROBABLY better at judging a worker's worth in terms of the potential revenue he/she may earn, the client trust they have, and the chance that they may transfer that to another firm.

Maybe law firms also have fewer illusions about what is valuable in their firm, as opposed to companies relying on engineers, which have (or believe they have) some special moat outside their people (technology, source code, inventory, etc.). No professional law firm has any illusions that their lawyers are special - perhaps outside trial lawyers and some very special fields of litigation, the firm just needs burnout-resistant cannon fodder with the capacity to run for at least ten years, so they may acquire sufficient experience, plus a number of bland human skills and patience in coping with other human beings. There never was a genuine myth of the "10x lawyer"; that's something only ChatGPT would invent.

(Sidenote: I wouldn't call an engineering job blue collar as long as you work in an office - WFH, we are all collarless workers now.)


That makes a lot of sense, thanks for explaining!


Two main reasons;

1) Power/Politics - This is basically Human Nature at work, institutionalized as Management/Leadership/etc. to put themselves at the top. A lot of it is BS (see the books by https://jeffreypfeffer.com/), but unfortunately only the enlightened in the industry have woken up to this. It is also the case that in these domains many of the objectives are intangible/subjective and difficult to measure, thus allowing the actors to create an illusion of "Importance".

2) Nature of Engineering - The output of any Engineering activity has a well-defined boundary. This makes it more tangible/manageable/measurable and easier to reason about. All the main costs are paid upfront, and once gizmo-x/software-y is done the recurring costs are generally pretty low. This gives the illusion that the Engineer is no longer worth his pay compared to his current output, and hence is replaceable/dispensable based on bean-counter calculations. It also doesn't help that Engineers do such a good job that a product/software, once released and accepted in the market, is generally very stable and not in much need of rework. This is the reason "Planned Obsolescence", the "Subscription Model", etc. were invented by the Industry.


a lawyer very much can be, but 1. there's no machine or script capable of replacing one yet, and 2. they are half salespeople with their own client lists.


The fact that (in theory) the computers are doing all the work.

