snuxoll's comments

Being a SWE doesn't mean you have to be involved in the startup grinder. I work for a privately owned company, and while my TC would be way better working for a FAANG or dealing with the startup world, there's no way I would mentally survive working in either of those environments for the 12 years I've done where I am now.

I don't work on glamorous projects; most of what I make will never be seen outside of my company, even. But I've got a child, got to move back to the rural town I was born in after I inherited my late grandfather's house (since I've been WFH my 12+ years), and I can genuinely say I enjoy my work most days.

If it's the actual job you hated, I'd tell you to go back to school for something else or get into a trade, but it sounds more like you're just tired of the culture surrounding the startup and big tech scene. Go find work at companies that aren't "tech companies"; you're less likely to run into leetcode interviews, shitty work/life balance, and the constant fear of your employer folding any morning.


Idk how long it would take me to be interview ready - I know it's not a great look and above all it's really embarrassing... but leetcode seems like a waste of time and side projects seem like the only way to prep for non-leetcode pairing sessions where they always expect you to know all the bindings / syntax like the back of your hand.

Really depends on the company and interviewer. I can't teach problem solving and critical thinking skills, but those are what I focus on during interviews, and they're why I toss 98% of the applications that make it to my manager and myself after an interview. Given I'm a hybrid SRE/SWE, my role and team are a bit weird, but I can give somebody extra time on tasks while they're learning new tools or responsibilities. What I can't do is have somebody tell me "I don't know how to do that" and require that I sit with them and walk them through the entire thing, or create a detailed step-by-step design document that takes longer than doing the work itself. Being able to do the research, think about the problem, and design a solution is what separates the warm bodies that contracting firms provide from actual engineers - and I don't just need warm bodies.

Hell, if I had an open req right now I'd ask for your CV, because I think you're probably being a bit too hard on yourself and overthinking things. There are plenty of chill places that just need somebody to keep a couple of critical, 15-year-old pieces of software running, and there's nothing wrong with work just being a means to an end.

EDIT: Actually, send it to me anyway. If nothing else I can at least give you some more specific advice or a mock interview and see if there's anyone in my network that you'd fit with. Email's in my profile.


I really appreciate your thoughts here - I'll send my cv your way.

I'd say my strong suit is vaguely defined zero-to-one work, and scaling out more sharply scoped functionality. That said, if you ask me to write a React app from scratch, I'm likely just going to lean on Cursor / Claude to do the boilerplate and hop in when I need to make sure things are surgically correct.


> what I can't do is have somebody tell me "I don't know how to do that" and require I sit with them and walk them through the entire thing

Man is this underrated. My peers are so hyper-focused on whether or not new hires know SQL (of all things), and not whether they can adequately problem solve. And then they wonder why my hires, though they might start slower, end up being stars.

To your other point, I’m even fine spending a day explaining the business to someone new - it’s complicated. But I try to bring on people who only have to hear it once.

Anyways, I guess I just like to +1/amplify sensible hiring posts.


Explaining the business is part of onboarding, both for employment and for introducing somebody to an existing project, IMO. It took me years to fully grasp everything with the team that I started with, and that's not a knock on anything, just the reality of processes that grew organically for the 10+ years before I started there.

I love sitting down with newer employees, especially junior engineers, and going over stuff like that, along with explaining why specific design choices were made and concepts they may be unfamiliar with. One of the engineers who paired with me on many of my projects from my original team is now the maintainer of those very same projects, and hot damn does it bring a smile to my face that somebody can send me a message about something and I can just @mention him and pass the buck without worrying. We'd mostly been a .NET+MSSQL shop and these projects were a mix of Python, Kotlin, and PostgreSQL, but he ran with it after I helped get him set up, walked him through the code, and gave him some time to pick up the tools.

That success is what solidified my views that critical thinking skills, along with a commitment to lifelong learning, are the best indicators of success for new hires with my org. I can’t teach people these qualities, but those that have them will seek the answers and know how to apply them once they’ve picked them up.


Lisdexamfetamine is still a C2 in the US; as somebody with a script for it, the headache of pharmacies running out is real.

The biggest difference between the language-specific IDEs in my experience is how they expose the project structure, with GoLand, PyCharm, etc. providing a much more directory centric workflow while Rider by nature has to work around .sln and .*proj files.

But Rider is uniquely weird in its use of ReSharper for code analysis.


I’ve used it to make a UI for an ESP32 with one of those tiny B&W OLED displays, it happens to scale up from there as well.


Yeah, not a huge fan of error handling in Go - you're stuck relying on a linter to catch mistakes, and because of shadowing rules it's extremely difficult to make it look nice.

Rust's `?` operator on Result<T,E> types is flipping fantastic; it puts all of the following to shame.

    // can forget to check err
    thing, err := getThing()
    if err != nil {
      panic(err)
    }

    // More verbose, and now you could possibly forget to assign thing
    var thing Thing
    if t, err := getThing(); err != nil {
      panic(err)
    } else {
      thing = t
    }

    // What I end up doing half the time when I've got a string of many
    // calls that may return err as a result of this

    var whatIActuallyWant string
    if first, err := getFirst(); err != nil {
      return err
    } else if second, err := doWith(first); err != nil {
      return err
    } else if final, err := doFinally(second); err != nil {
      return err
    } else {
      whatIActuallyWant = final
    }
It's actually to the point that in quite a few projects I've worked on I've added this:

   func must[T any](value T, err error) T {
     if err != nil {
       panic(err)
     }
     return value
   }


> stuck relying on a linter to catch you

Isn't that what your tests are for? Linters aren't normally intended to stop you from creating undefined behaviour.

It is not like Rust negates the need for those tests. Remembering to handle an error is not sufficient. You also need to ensure that you handle it correctly and define a contract to ensure that the intent is documented for human consumption and remains handled correctly as changes are made. Rust is very much a language designed around testing like every other popular language.


Relying on (someone making) a test to ensure you use a variable is even worse than relying on a linter.


You are absolutely right. But why would anyone do that? That doesn't make sense and it is bizarre that you would even put in the time to post this.

What you do need to do is document how the function is intended to behave. If, for example, your function opens a file, you need to describe to other developers what is expected to happen when the file cannot be opened.

"The compiler won't let me forget to handle the error" is not sufficient to answer that. That you need to handle the error is a reasonable assumption, but upon error... Should it return a subsequent error? Should it try to open a file on another device? Should it fall back to using a network resource? That is what you need to answer.

And tests are the way to answer it. It is quite straightforward to do so: You write a test that sees the file open failure occur and check that the expected result happened (it returned the right error, it returned the right result from the network resource, etc.). Other programmers can then read your example to understand what is expected of the function. This is as necessary in Rust as it is in Go as it is in any other language you are conceivably going to be using. Otherwise, once you are gone, how will anyone ever know what it is supposed to do? As changes occur through the ongoing development cycle, how will they ever ensure that they haven't broken away from your original intent?

So, once you've written the necessary tests – those that are equally necessary in Rust as in any other language – how, exactly, are you going to forget to handle the error? You can't! It's impossible.

I don't know why this silly thought persists. It is so painfully contrived. If one is a complete dummy who doesn't understand the software development process, perhaps they can go out of their way to make it a problem, but if one is that much of a dummy they won't be able to grasp the complexities of Rust anyway, so...


You can do something like this:

   type errHandler struct {
      err error
   }

   func (eh *errHandler) getFirst() string {
      // stuff
      if err != nil {
         eh.err = err
      }
      return result
   }

   func (eh *errHandler) doWith(input string) string {
      if eh.err != nil {
         return ""
      }
      // stuff
      if err != nil {
         eh.err = err
      }
      return result
   }

   func (eh *errHandler) doFinally(input string) string {
      if eh.err != nil {
         return ""
      }
      // stuff
      if err != nil {
         eh.err = err
      }
      return result
   }

   func (eh *errHandler) Err() error {
      return eh.err
   }

   func main() {
      eh := &errHandler{}

      first := eh.getFirst()
      second := eh.doWith(first)
      final := eh.doFinally(second)

      if err := eh.Err(); err != nil {
         panic(err)
      }
   }


You may as well use exception handlers if you're going to go there.

   func foo() (final int, err error) {
      defer func() {
         if r := recover(); r != nil {
            if e, ok := r.(failure); ok {
               err = e
            } else {
               panic(r)
            }
         }
      }()
      first := getFirst()
      doWith(first)
      final = doFinally()
      return
   }
encoding/json does it. It's okay if you understand the tradeoffs.
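For completeness, the panicking side of that pattern might look something like this (a self-contained sketch; `failure`, `check`, and the step functions are assumed helpers, not anything from an actual codebase):

```go
package main

import (
	"errors"
	"fmt"
)

// failure is the sentinel the deferred recover looks for; wrapping an
// ordinary error this way lets unrelated panics propagate untouched.
type failure struct{ error }

// check converts a non-nil error into a panic carrying the sentinel.
func check(err error) {
	if err != nil {
		panic(failure{err})
	}
}

// getFirst is a hypothetical fallible step written in the panicking
// style: callers never see an error value directly.
func getFirst(fail bool) string {
	if fail {
		check(errors.New("boom"))
	}
	return "first"
}

// foo converts the panic back into a normal error return at the
// package boundary.
func foo(fail bool) (result string, err error) {
	defer func() {
		if r := recover(); r != nil {
			if f, ok := r.(failure); ok {
				err = f.error
				return
			}
			panic(r) // not ours; keep unwinding
		}
	}()
	return getFirst(fail), nil
}

func main() {
	fmt.Println(foo(false)) // first <nil>
	fmt.Println(foo(true))
}
```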

But look at what you could have written:

   func foo() (int, error) {
      first, err := getFirst()
      if err != nil {
         return 0, ErrFirst
      }

      err = doWith(first)
      if err != nil {
         return 0, ErrDo
      }


      final, err := doFinally()
      if err != nil {
         return 0, ErrFinally
      }

      return final, nil
   }
This one is actually quite nice to read, unlike the others, and provides a better experience for the caller too – which is arguably more important than all other attributes.


And with some error utilities you could do this:

  func foo() (int, error) {
      first := getFirst()?
      doWith(first)?
      return doFinally()
  }
or this:

  func foo() (int, error) {
      first := getFirst() % ErrFirst
      doWith(first) % ErrDo
      return doFinally() % ErrFinally
  }
The first one is a significant upgrade over the exception version. It cuts out half the code and makes the early return points explicit.

I think something similar to the second one is also nice to read, and it gives the same improved experience to the caller as your suggestion.


> and it gives the same improved experience to the caller as your suggestion

Albeit a contrived suggestion for the sake of brevity. In the real world you are going to need to write something more like:

   first, err := getFirst()
   var err1 *fooError
   var err2 *barError
   switch {
   case errors.As(err, &err1):
      return nil, FirstError1{err1.Blah()}
   case errors.As(err, &err2):
      return nil, FirstError2{err2.Meh()}
   case errors.Is(err, io.EOF):
      return nil, EOF{}
   // ...
   case err != nil:
      return nil, FirstError{err}
   }
And that is where eyes start to gloss over. The trouble with errors is that they quickly explode exponentially. Programmers long to distill all possible errors into one logical operation to not have to actually think about all the cases, since that is hard and programmers are lazy, but that is not sufficient for a lot of programming problems.

The cutesy shortcuts like ? and % operators are fine for some classes of programming problems, to be sure, but there are numerous languages that are already designed for those classes of problems. Does Go even need to consider travelling into those spaces? In the original Go announcement it was made explicitly clear that it was designed for a very particular need and was never intended to be a general purpose programming language.

I'm certainly not the gatekeeper. If Go wants to move away from its roots and become the must-have language for the classes of problems where something like ? is a wonderful fit, so be it. But, from my point of view, putting energy into tackling the big problems is more interesting. There should be plenty of room for improvement in the above code without losing what it stands for. But that is going to require a lot more deep thought than I've seen put in and programmers are lazy, so...


If you need different logic for different errors out of a function call you wouldn't use this, but your example code there... I think it's at the point where you've made things more complicated for your caller than just returning FirstError{err} no matter what the error is. The caller still has to deal with all the errors getFirst can cause, but they've been reorganized in a complicated bespoke way.

> Does Go even need to consider travelling into those spaces?

Oh come on. Changing how one common piece of boilerplate is written is not travelling into new spaces or moving away from Go's roots.


> You may as well use exception handlers if you're going to go there.

I didn't know about this trick, thanks for sharing.


> You can do something like this

Do people actually do this? Is it included in the standard library? If not, should it be?


I dislike the company, I like the product - I have built several small apps on the platform and nothing has ever been "no code" as they like to advertise, but hot damn was I able to throw together something useful for the business in less than a day and have people using it as quickly. As an engineer I find it incredibly valuable for making these low-to-medium value LOB apps that don't justify the man-hours to build, deploy, and fight tech debt for a full-stack web application, at least when factoring in the tradeoff for license costs.


Not really. 15A receptacles are required by code to be able to handle 20A of current at their terminals, so the larger wire and breaker allow more loads to be run instead of one big load drawing it all (and if you do need that, a 20A circuit with a single receptacle actually requires a 20A receptacle).


20A outlets are also expensive compared to 15A ones. When redoing the electrical in the house I inherited, I put a dedicated circuit in the living room for the TV/game consoles/etc., since my daughter's gaming rig would be going in the living room as well. Because I hadn't yet decided to put in the recessed panel for the TV wiring, it was the only outlet on the circuit, and code dictates that a 20A circuit with a single outlet must use a 20A receptacle.

I can buy a 10-pack of nice 15A Leviton Decora Edge outlets (that use lever lock connectors instead of screw terminals) for $26 ($2.60 each), but basic 20A tamper resistant outlets (which newer editions of the NEC require anywhere a small child can access them) are $6+ a pop.

When nothing outside of my electrical room (where the servers live) has need of a 20A receptacle it's kind of pointless to spend the extra money on them, but the extra couple bucks on 12 gauge copper is always wise.


It has long been my understanding that a single regular, bog-standard 15A duplex (ie, dual outlet) receptacle meets the multi-receptacle requirement of a 20-amp branch circuit.

If my understanding is correct, then you overkilled it and you could have saved a few dollars, at least one well-intentioned rant, and still have been compliant with the NEC.

A literally-single 15A outlet like a Leviton 16251-W would not pass muster, while one dual-outlet example of the 15-amp lever-lock devices you mention would.


Well, the rant still applies either way - the 20A outlets are 2x the cost, and that's the big reason why they aren't routinely installed. I was thinking when I embarked on the project, "oh, I'll just put 20A ones everywhere, because why not?" and immediately decided not to when I looked at the prices for the packs of outlets....


Oh, for sure.

I had the luxury (or curse, depending) once of owning a home that needed all of the wiring replaced.

Being the kind of person that I am, I overbuilt things as I felt was appropriate. As part of that, I certainly wanted to install 20 amp outlets (even though I've never held in my hand a 20 amp plug).

The cost of that, vs good spec-grade 15A duplex outlets, was insane.

I know that the only difference is using one T-shaped contact instead of a straight and some different molds for the plastics. The line producing T-shaped contacts already exists, and so do the molds. Every 15A outlet sold today can transfer 20A safely.

It should be pennies difference in cost, and it was instead whole dollars.

Sucks.

(I'm reasonably certain that we are going to be broadly stuck with this low-current, low-voltage business until something very different comes along, and that any of this is unlikely to change in my lifetime.)


Not a criticism, but a question. Did you consider adding a subpanel? If you're running a new circuit I assume there was already some drywall patching to be done, seems like it would have been more cost effective and removed future headaches to just give yourself more breaker space.


At least in the US, a sub panel is an easy grand or two even if it’s right next to the main panel.


A lot is going to depend on labor rates for your local electricians, but that costing more than $500 where I am would be outrageous. I do my own electrical, but even I paid a licensed electrician to handle installing a new panel, since I did not have an outdoor service disconnect and didn't feel like fighting with the utility company over de-energizing and re-energizing my service. Ended up needing a lot more done, but the whole thing cost me $2500 to get a new service drop, an outdoor meter main, and wiring run to the old panel (in the bedroom on the other side of the main) and the new panel (in the old furnace closet that's now my electrical/network room).

But really, doing a subpanel yourself to expand breaker capacity is a really simple project - most people if so inclined could do it themselves. Anywhere from $100-200 for the panel itself depending on how many spaces you feel like adding, up to $80 for a large enough breaker to feed it, and some tens of dollars for SER cable.


Agreed - I’ve ended up installing 4 (inspected) ones over the years myself, and one I paid an electrician for (they also had to upgrade the main service feed).

IMO, what usually drives up the price is the ancillary stuff - opening up a wall (and re-finishing it) because there isn’t enough physical space, or adding extra main panel capacity/service capacity because the main feed is insufficient, or having to run heavier than expected wiring because the only available space gets really hot (poorly ventilated attic space), or having to run surface conduit due to a specific challenge with framing.

Then add in labor (where I was in a high cost of labor area), and it can get expensive quick.

An actual surface mount subpanel and appropriate wires/breakers is usually only a couple hundred bucks total like you note.


> Really would like to see more safety margin with 14 gauge wire.

The wire itself really isn't the issue; the NEC in the US is notoriously cautious, and 15A continuous is allowed on 14AWG conductors. Poor connectors that do not ensure good physical contact are the real problem here, and I really fail to understand the horrid design of the 12VHPWR connector. We went decades with traditional PCIe 6-pin and 8-pin power connectors with relatively few issues, and 12VHPWR does what over them? Save a little bulk?

