
If the person writing the questions is already a Senior Engineer and knows how to evaluate the answers properly for correctness, you probably would not be able to tell, even if they've only had minimal exposure to, say, Ruby or Python as opposed to JavaScript/TypeScript, or to writing advanced SQL queries when they don't do that very often.


You're assuming the red flag relates to technical correctness. It doesn't. It's a red flag about mindset and diligence.

Using genAI is fine; using it to bolster a lack of underlying knowledge, as I read it, is a red flag.


Most engineers will come across something they haven't used before in most roles. Perhaps some legacy system in some dying language, for example. Previously, they might have spent hours on Google. Now, GPT-4 can unblock them in seconds.

It can't replace the mindset of a human, but what I'm basically saying is that with GPT-4 and good prompting skills, you can be a lot more brave when it comes to new, unfamiliar tech. That's an advantage in a fast-changing tech landscape.


GPT-4 doesn't work for legacy systems in dying languages. It only "works" for things that are well documented on the internet, or described in whatever books were included in the training data.

I can't think of a situation where you'd go from spending hours on Google to being unblocked in seconds. If you're spending hours on Google, then your first dozen search queries aren't turning up the information – in which case, GPT-4 wouldn't turn up the information immediately, either! (It would say something, but it's unlikely to be based on a true story.)


I can’t work out if your responses are being written by GPT or not.



