It's an interview question. If you draw a blank on that, rephrase the question for yourself to something like this: "When you think of bugs you've recently fixed in your code, what comes to mind? Which bug had the most interesting story?". If that feels illegal to you, you can check with your interviewer: "I can't remember many weird old bugs, but can I tell you about a bug I fixed the other week?".
Interviews aren't like tests in school. The point isn't to demonstrate knowledge. The point is to demonstrate to the interviewer that you're someone they'd want to work with. What are they looking for? They want someone they can trust technically, and someone they would enjoy working alongside.
This question is great because it's an opportunity to demonstrate both of those skills, by asking you to tell a story (social skills) about an experience you had programming. Use it!
Can you think of any bug from the last five months that you could talk about for five minutes in a way that makes you look smart for solving it? That's all you need to do here.
No, my mind goes completely blank. There's nothing wrong with my memory - I can tell you how I patched the timed Grand Theft Auto 1 demo over 25 years ago with SoftICE as a kid, but I assume my brain finds no value in tracking bugs.
You don't even need a bug where you look smart for solving it. E.g., as an interviewer I also love stories that go like this:
“There was this memory leak in some C++ code that took us weeks to track down that I’m still kicking myself for not finding earlier. It was totally my fault. Let me tell you what happened … … So the moral of the story is -fsanitize=address is smarter than me. But wow did I learn a lot about my tools from that.”
Exactly. I've been programming for a long time, almost 13 years since I started working on complex software, and that question just makes me draw a blank. There have been bugs in my career, but none that have "traumatized" me or been all that complicated to get through. Maybe I just don't put that much emphasis on negative experiences. I don't remember losing poker hands all that accurately either; I just don't care.
Also, if you've only ever worked in web dev or something easy like that, are there even that many really "weird" bugs anymore? Maybe if you work on embedded or low-level code bases with lots of race conditions... those could qualify, but if you're doing high-level work, that's rare.
Web dev still has tons of weird bugs; they tend to be in the interactions between multiple frameworks and the edge cases in the assumptions they make. Like, I had one issue with intermittent test failures in Ruby on Rails, not reproducible with any fixed seed, and not affecting production. The overall source of it was:
1. We had a table from a third-party API that used their ID as a primary key, I believe stored as a string.
2. Ruby on Rails believed that it should make sure that primary keys aren't a too-large-to-store integer before attempting to save a row.
3. Our test suite would generate fake hexadecimal strings for the primary key for testing purposes for the table.
4. Sometimes, the hexadecimal string would have enough leading 0-9 digits that when Rails coerced it to an integer for their own internal checks, it would exceed the integer limit it thought the table ought to have.
Part of the process is going over your own history and making a collection of stories about what you've done and having that ready. I've shipped over a dozen products in my 25+ years. I can't recall much of any of it off-hand.
If you don’t have at least a couple of weird enough debugging war stories to still remember years later, you probably don’t have very broad debugging experience.
(Not to say you aren’t a good developer, but you haven’t seen a lot of weird edge cases.)
You're generalising how your brain works to how everybody's brain works. My brain doesn't respond well to questions out of nowhere. I don't maintain a "weird bugs" index. If we are just talking and something comes up, there is a good chance I will have a related story, though.
Me to ChatGPT: reply to this interview question "What's the weirdest bug you debugged? What made it weird?" in first-person, make it a personal story.
ChatGPT: It was early on in my career as a software developer, and I was working on a web application that was designed to allow users to upload images and apply filters to them.
One day, a user reported a strange issue where some of the filters were producing unexpected results. Specifically, they were seeing images that appeared to be completely random and unrelated to the original image. At first, I assumed it was a problem with the filter algorithms, but as I dug deeper, I realized that the issue was actually caused by a very strange bug.
It turned out that the user's browser was caching a previously uploaded image and reusing it for subsequent filter operations, even if the user had selected a completely different image. This led to some very bizarre and unexpected results, as the filter algorithms were being applied to an entirely different image than what the user intended.
To fix the bug, I had to implement a workaround that forced the browser to always load the most recent version of the image from the server, rather than relying on its cache. It was definitely one of the weirdest bugs I've ever encountered, and it taught me the importance of always being vigilant when it comes to caching and browser behavior.
Me to ChatGPT: reply to this interview question "What's the weirdest bug you debugged? What made it weird?" in first-person, make it a personal story, make it using C++.
ChatGPT: I was working on a project that involved developing a custom compiler. One day, a user reported a very strange issue where the compiled program would crash whenever it encountered certain input files. At first, I assumed it was a problem with the input files themselves, but as I dug deeper, I realized that the issue was actually caused by a very strange bug in the compiler code.
It turned out that there was a subtle error in one of the class definitions that was causing memory corruption in certain circumstances. Specifically, the class had a virtual destructor, but the derived class failed to declare its own destructor as virtual. This led to undefined behavior when deleting objects of the derived class, which in turn caused the program to crash.
To fix the bug, I had to carefully analyze the affected class definitions and make sure that all derived classes properly declared their destructors as virtual. It was definitely one of the weirdest bugs I've ever encountered, and it taught me the importance of paying close attention to class hierarchies and virtual functions in C++.
Me to ChatGPT (since I haven't worked on a compiler): reply to this interview question "What's the weirdest bug you debugged? What made it weird?" in first-person, make it a personal story, make it using C++ about the time when I ported PyTorch from Python to R.
ChatGPT: One of the weirdest bugs I encountered was during my work on porting PyTorch from Python to R. One of the major challenges was ensuring that the R implementation matched the performance and functionality of the Python version. As I was going through the codebase, I noticed a particularly strange issue that was causing crashes in the R implementation.
The issue turned out to be related to differences in how Python and R handle memory allocation and deallocation. PyTorch relied heavily on dynamically allocated memory, but the R implementation was not properly managing this memory, leading to crashes and other unexpected behavior.
To fix the issue, I had to go through the entire codebase and carefully analyze all memory-related operations. This involved a lot of debugging and experimentation, as well as close collaboration with the rest of the development team.
Ultimately, we were able to resolve the issue and deliver a stable and performant port of PyTorch to R. It was definitely one of the weirdest bugs I've ever encountered, and it taught me the importance of paying close attention to memory management in C++ and other low-level languages.
Note: Likely ChatGPT can also generate a fake resume, etc. so yeah.
The C++ experts here will be able to confirm, but if my memory serves, I think the story about virtual destructors doesn't make sense:
> Specifically, the class had a virtual destructor, but the derived class failed to declare its own destructor as virtual.
The derived class doesn't have to declare its destructor as virtual. What matters is that the base class does so.
> This led to undefined behavior when deleting objects of the derived class, which in turn caused the program to crash.
No; the undefined-behavior case arises if the base class destructor is not declared as virtual, and you then try to delete a derived-class object via a pointer-to-base.[1]
So this is a nice example of ChatGPT producing something that looks plausible, yet on examination, betrays the fact that it has no actual "understanding" of the topic.
I know it's well-known and because it's a statistical model, but I like how ChatGPT basically fills in a Mad Lib with these replies.
> ChatGPT: I was working on a [project] that [description].
> One day, a user reported a strange issue where [issue description]. At first, I assumed it was a problem with the [feature], but as I dug deeper, I realized that the issue was actually caused by a very strange bug.
> It turned out that [bug cause]. This led to [problem].
> To fix the bug, I had to [solution]. It was definitely one of the weirdest bugs I've ever encountered, and it taught me the importance of [lesson].
The third example deviates from this template a bit but still has the key components ("strange issue", "To fix the issue, I had to", "It was definitely one of the weirdest bugs I've ever encountered, and it taught me the importance of").
I would 100% accept that as an answer since it tells me the candidate is not good with subtext, and that's really needed to solve a lot of interesting problems.
After all, if I ask "what's your favorite food you've ever eaten", there's an unspoken implication that it's a food you remember eating. I am not in fact asking you to recall every single food you've ever eaten and choose one...
-
From my comment below since every reply seems to be bent on ignoring the subtext even in a theoretical discussion about picking up subtext...:
Again, the subtext is "interesting example we're going to discuss".
If there's one you can't discuss for any reason (doesn't even have to be you forgot: could be an NDA) then it's already excused from the discussion.
An even half-decent interview is not adversarial: just like day to day work, it requires interpreting some level of useful subtext and some level of open communication
-
I mean, you forgot the details, so it's not like you're going to start monologuing if you just touch on it: "You know, there's a real doozy from X years ago where Y, but the details escape me; more recently, Z happened"
If there are none that you remember that are interesting: "There aren't many interesting bugs, but there was this really interesting product requirement, could we go over that?"
> I would 100% accept that as an answer since it tells me the candidate is not good with subtext, [...] if I ask "what's your favorite food you've ever eaten", there's an unspoken implication that it's a food you remember eating.
Will this filter cut many of the best engineers?
Our field is full of people who pull 'engineering' out of our behinds, to various degrees. I'd assert that the engineer who doesn't assume an unspoken implication, but instead qualifies their answer, or tells you when they cannot answer, or asks for clarification... is more likely to be the one who can make a system that works, and tell you when a system will not work.
> Will this filter cut many of the best engineers?
It won't cut out a single good engineer, let alone the best.
> I'd assert that the engineer who doesn't assume an unspoken implication, but instead qualifies their answer, or tells you when they cannot answer, or asks for clarification... is more likely to be the one who can make a system that works, and tell you when a system will not work.
You grouped the one option that a bad engineer would take, with several that a good engineer would take. "Tells you when they cannot answer" is not what a good engineer does.
They may say "I cannot answer the question as-is" as a jump-off for clarification.
In fact in my response to your sibling comment I explain that even if there were no interesting bugs you can give an answer that isn't lying, or pulling engineering out of your ass, or dishonest.
-
But flat out refusing or immediately jumping to "well but I can't remember everything!!!!" is still you interpreting subtext... except you've now interpreted the most negative possible subtext. You've assumed your interviewer is asking you to recall things you can't recall and that there is no further room for discussion.
A poor engineer is one that shuts completely down at the first hint of a broken invariant, rather than trying to surface that there is an invalid invariant, or learn more about the broken invariant.
That kind of curiosity to go further than shutting down is what the question is meant to tease out, so you're not beating the system by deciding not to engage, instead you're sending the exact signal being looked for as something to avoid bringing into your organization.
If you need to settle for technical correctness that's fine.
Fortunately we have the resources to hire for technical correctness and a bit more than the minimum when it comes to being well rounded with your ability to understand problems, communicate, etc.
We don't want people who jump to conclusions like "the interviewer is asking me to recall things I don't remember" under the guise of "precision" instead of just asking.
It takes commensurate pay/interesting work/an attractive workplace/etc. which are out of a single interviewer's control, so I never hold it against those who don't filter to any of that.
Ironically the "shutting down" part isn't the real test because it's not a trick question: Discussing an interesting bug really is a great way to dive into technicals and learn about the person's soft skills and learn about their background amongst a bunch of other stuff.
But specific to the "shutting down because the requirements weren't 100% totally perfect": I see it all the time, and it's even what we're seeing people attribute Google's slow decline to.
On one hand many hardcore engineers think we're seeing the slow and steady decline of software because of bootcamp kiddies ready to hack together any mess with a ball of Leftpad inspired libraries.
But on the other, so so many engineers struggle to see past the tip of their nose in larger organizations. There's this antagonistic co-existence with those outside of engineering where little effort is put into disseminating requirements if they don't agree with them to start.
Which ironically we're watching unfold here! People jumped to the conclusion that the interviewer is in fact asking you to select from "every bug ever", but in doing so refused to interpret that the interviewer might be asking about "things you remember"... because that would be jumping to conclusions?
-
For example: when estimating how long tasks take and finding that there's a disconnect between what the larger org expected and what an engineer produced, there's rarely any deep inclination of many otherwise brilliant engineers to find out why because it's assumed "non-engineers just don't know."
They might try to shave some time here or there, they might try to bake in some crunch time because they see themselves as being so brilliant and dedicated that they can make it work.
But rarely will they try discarding the notion that the disconnect was on the non-engineering side, self-directedly throwing out their entire proposed solution to try something that fits, on the assumption that their solution was what was wrong in the equation.
Because when they made the design: they designed it with all of their intelligence and skill and experience. And that's what they were hired for, to make brilliant things. So why should they cheapen all that? If that's what management wants they should go hire some junior devs or something.
And unfortunately, if the reality really is that the majority of the business value could be produced with orders of magnitude less effort, it's the engineering side that has to enable that kind of discovery. The engineering side is the source of the plays in the playbook.
-
The reality is not every engineer can ever reach that. There are brilliant brilliant people who will never have the communication skills or the inclination, or the patience for any of this, and a good interview process doesn't require 1 person to ace every single signal.
Also, some people will jump at me for implying engineers should need to zoom out, because in their minds management should be enabling them to stay completely heads-down writing code... but to me that mentality is not generally compatible with being a top-of-field company for the long haul.
Yes, you might catch lightning in a bottle by just enabling very smart people to build marvels in their silos, but business is more than having marvels to stare at.
I personally worked at a company that essentially succumbed to exactly this. A culture of exceptional engineering, hiring technically brilliant people at all costs... and dying a slow death because the engineers wouldn't leave room for business in their engineering.
-
I guess the tl;dr of all this is: A CEO will say "It's no use if we take 10 years to make a perfect product, if our competitor makes it to market with a decent product next year". And engineers will expect as much from business types.
But what they often forget is that the same is true for customers. No one benefits from your engineering if it never reaches the field. No one benefits from your answer if you willingly get stuck on every single speed bump.
Being a good engineer is being able to efficiently categorize which speed bumps are "just" bumps, and which ones are chasms that will swallow the ship whole if you don't change direction.
If the engineers at Boeing had the mentality that I see often in our field, each 727 would have cost a billion dollars, and no one would fly today.
I've just been assuming that this kind of product/customer-driven engineering in a business environment can be learned, if it's not already known. And the only questions are whether the org can teach it (with culture, onboarding, consistent messaging) and whether the candidate would be happy with that.
If a candidate came to me with no product/commercial experience (e.g., recent grad, or from a research environment), I'd try to characterize the nature of the work, and see whether I could get an honest discussion with them about how they'd feel about that (and whether they really understood what that means). I'm not wise enough to have figured out tests that will tell me.
And I'd have to hit some team-oriented discussion, too, since that's my biggest concern lately, even more than product-oriented. And it's something a lot of companies seem to do badly (e.g., people focused on their own appearance in sprint tasks or metrics or promotions, rather than the whole of the team's work coming together).
Again, the subtext is "interesting example we're going to discuss".
If there's one you can't discuss for any reason (doesn't even have to be you forgot: could be an NDA) then it's already excused from the discussion.
An even half-decent interview is not adversarial: just like day to day work, it requires interpreting some level of useful subtext and some level of open communication
-
I mean, you forgot the details, so it's not like you're going to start monologuing if you just touch on it: "You know, there's a real doozy from X years ago where Y, but the details escape me; more recently, Z happened"
If there are none that you remember that are interesting: "There aren't many interesting bugs, but there was this really interesting product requirement, could we go over that?"
While I'm not complaining that people are realizing they're dead... why is ChatGPT the final straw for those ridiculous "make a binary tree" questions?
Why wasn't it the fact that these questions became such a gameable system, that we started referring to them by the copyrighted name of a site where you can access nearly every permutation that will ever be asked of you, along with extremely detailed solutions with rationale: https://leetcode.com/
It's crazy to me that of everything that ChatGPT can do, regurgitating well known answers to well known interview questions is what kills anything off...
I mean, if you can't make a binary tree, what do you really know about programming? If ChatGPT is so great, why would you even need a job interview? Why not just eliminate the jobs altogether?
You know, the only thing I get more sure of over the years is that defining a "good programmer" is hard.
Now, I can't tell if you're being facetious or not, but if one seriously conflates being able to write a binary tree with knowing how to program... they're at least making defining the antonym a bit easier.
-
Also no one is saying ChatGPT is "so great" in this specific case, if anything the point is that ChatGPT can do impressive things, but again, regurgitating leetcode answers isn't one of them.
I’m not trying to be mean but “conflating” is not the same as “necessary.” There is a very common notion of “necessary but not sufficient”. Yes, I think if you don’t know how to make a binary tree, you don’t know much about programming. The converse is not true.
I view general algorithm problems as a "programming subfield", just like graphics/frontend/AI are. Certain coding techniques are overrepresented in each subfield, and each subfield has fundamental "base code", design paradigms, methods of testing, need-to-know algorithms, and canonical examples.
One of the smartest I ever felt was hearing some teammates a cube wall over saying "this one customer from Poland says he's crashing a lot" (in our UI). I was like, "Is he in the Polish locale/language?" They asked... yes... "I bet Polish has different separators and our split("/")[1] is crashing." They checked, had him change to English, and the crash went away. I felt real Yoda-like. We had the customer on chat, so it was much easier to debug.
Windows uses 0x5C as the path separator, and the character rendering of that codepoint is locale-dependent (most famously in Japanese encodings it's ¥). Apparently in most Polish encodings it's still \, although there's one obscure one where it becomes Ś.
Higher up in the thread, the question was about how to ask an interview question that could both test someone's skill and allow you to detect whether something is a bot or not. If I ask someone to code and explain something on a whiteboard and they give me ascii art, I think it did the job of identifying whether they were a bot.
"it involved a program that ran perfectly fine during testing but crashed whenever it was deployed to the production environment. After much investigation, it was found that the bug was caused by a combination of the production environment's security settings and the program's use of random number generation.
The program used a random number generator to generate a unique ID for each user that logged in. However, the production environment had security settings that blocked certain types of random number generation for security reasons. This caused the program to crash whenever it tried to generate a unique ID for a user.
The solution to this weird bug was to modify the program to use a different random number generator that was allowed by the production environment's security settings. The lesson learned from this bug is to always be aware of the production environment's settings and limitations and to thoroughly test software in the target environment before deploying it."
Back in my first job we had a database which worked fine in testing, worked fine on the customer’s server… then started crashing a week later.
Turns out their MS SQL Server install was configured for MM-DD-YYYY output which only crashed when we reached the 13th of the month. Important lessons were learned that day!
> The best interview question that will never die: "What's the weirdest bug you debugged? What made it weird?"
@ChatGPT: Give me three sample answers to the following question related to <insert your interview language> programming: "What's the weirdest bug you debugged? What made it weird?"
"One time, I was working on a project where the UI kept freezing when a certain button was pressed. I spent hours debugging the code, but couldn't figure out what was causing the problem. Eventually, I discovered that the button's event handler was accidentally triggering an infinite loop that was consuming all the CPU resources, causing the UI to freeze. It was a weird bug because the symptoms were not immediately obvious and it took a lot of digging to uncover the root cause.
I once spent an entire day trying to figure out why my code was behaving erratically when processing a certain data file. It turns out that the file had some hidden control characters in it that were causing my program to misinterpret the data. The bug was weird because I had never encountered a situation where hidden characters were causing issues before.
One time, I was working on a codebase that had been written by multiple developers over a long period of time. I was debugging a particularly tricky issue where certain data was being lost between different parts of the system. After a lot of investigation, I discovered that one of the earlier developers had implemented a workaround for a different issue that involved storing data in a global variable that was being accidentally cleared by another part of the code. It was a weird bug because it was caused by a seemingly innocuous change that had been made months earlier and was difficult to trace back to its origin."
Interesting. I played around with that kind of prompt a bit and asked follow-up questions on the answers, as one would in an interview. The answers to those were all very vague and almost "evasive". And within 1-2 follow-ups it started to contradict itself.
One weird thing about ChatGPT is that it consistently feels like a B+ student at everything you ask it. It's never top of the class, but its answers seem skilled enough to get by.
I've seen this asking about my own field (CRDTs), asking it to write comedy sketches, and with some of the academics I know asking it the sort of essay questions they'd ask their students.
I wonder if it's somehow doing this on purpose - like if it's pitching its answers to be the most "middle of the pack" of all its training data.
Why is it dead? An interview is live; apart from some kind of sci-fi implant thing, ChatGPT won't help you cheat your way through a live interview. In fact, using ChatGPT on a take-home interview test and then failing to answer basic questions in an interview may reveal aspects of your character and competence.
Yes, agreed! We ask a variant of this question (we call it "an analysis sample"); from the materials we ask candidates to submit[0]:
"A significant challenge of engineering is dealing with a system when it doesn't, in fact, work correctly. When systems misbehave, engineers must flip their disposition: instead of a creator of their own heaven and earth, they must become a scientist, attempting to reason about a foreign world. Please provide an analysis sample: a written analysis of system misbehavior from some point in your career. If such an analysis is not readily available (as it might not be if one’s work has been strictly proprietary), please recount an incident in which you analyzed system misbehavior, including as much technical detail as you can recall."
These samples are very revealing -- and it feels unlikely that generative AI is going to be of much help, even assuming a fabulist candidate. (And of very little assistance on our values-based questions like "when have you been happiest in your professional career and why?").
To be honest, this sounds extremely difficult and not in a good way. That sounds like many many hours of writing work, to describe a problem that might be many years in the past, that might have been solved by extremely intricate methods that are easy to forget, using technologies that are now not commonly in use, etc.
A good question to ask about each interview question might be: would a good liar have an easier time answering this than a person trying to answer honestly? And if so, retire the question.
Having read many, many, many answers to this question, I don't think that a good liar has a particularly easy time answering this question -- or certainly not in a way that gets them further consideration!
And yes, it's many hours of work -- but the work itself that we are doing is quite hard, and if someone washes out in the application process because it feels unduly arduous, we are likely not a fit for one another.
> I don't think that a good liar has a particularly easy time answering this question -- or certainly not in a way that gets them further consideration
How would you know?
> And yes, it's many hours of work -- but the work itself that we are doing is quite hard, and if someone washes out in the application process because it feels unduly arduous, we are likely not a fit for one another.
I sincerely hope that I never accidentally apply for a company that thinks an unpaid, long form writing prompt is an appropriate interview question because the work happens to be hard.
Eh, it also excludes people that don't have spectacular long-term memory, or people that don't keep a diary about bugs that they've chased down at work. Personally, I think you're overfitting to fight cheating, but maybe you work at a desirable enough place that you can afford to exclude so many people but still get enough good candidates.
IMO a good question provides the necessary context itself, and the candidate's thinking and reasoning skills are what's tested. With your question, it's basically turned into a competition of which candidate has tackled the most ridiculous/obscure/complex bug, so candidates aren't being judged on even footing.
Agreed completely. This problem borders on that common category of questions which test whether the applicant shares a specific fine-grained flavor of nerdiness as the interviewer, rather than whether the candidate is a good fit for the job.
Sounds like one where somebody could do pretty well out of finding an interesting example from the internet and tweaking it slightly. They might not survive being asked to expand upon it in an interview, but they could still succeed in passing the screening better than people recounting boring bugs they actually fixed...
(If ChatGPT wasn't busy I'd be tempted to see whether it can manage that, or whether your phrasing throws it off)
I'd be interested in seeing how we can incorporate AI into interviews. An example would be entry-level software engineers using AI to jam through a small project with tests in a limited time span. Lazy engineers won't check the work, while others will use whatever the AI generates as their draft and correct whatever bugs are in it.
I believe that we should be taking advantage of this productivity boost across the board.
Is a hard bug the best interview question? When I do my job right, 98% of the time is not debugging. This ratio changed dramatically over the course of my career.
I agree it's the best question because it's a good topic to begin a bigger conversation about your programming experience. It's hard to fake and you can take the discussion in many directions. Simply asking for their weirdest bug, getting an answer, and going "ok good, let's move on" is a bad interview question.
For example, you say the ratio of debugging:development changed dramatically over the course of your career. I would follow up and ask what key things you would attribute that to. Testing? Changing languages? Changing programming paradigms? Maybe it's simply that you have a wider knowledge of CS concepts and a stronger intuition for the correct way to model your logic. There are no right or wrong answers; I'm just trying to see that you actually do have some opinions of your own.
I find the ratio doesn't change, but the bugs get trickier as you solve harder problems and avoid the previously encountered ones.
In my last round of interviews, by far my favorite session was "here is a real project with a real bug, debug it, then if you have time, describe how to fix it, then if you have time, write the fix". (At Stripe FWIW.)
It was super representative of what my work is actually like, and what I'm good at, and ChatGPT would not have figured it out.
Yeah! I didn't end up taking a job at Stripe, but I came away very impressed with their process. Only one interview was leetcode-ish and it was a phone screen and purposefully not a high bar. Then the real interviews were: that debugging one which I loved, an "integration" one which involved live coding while plugging a few different unfamiliar libraries and APIs together (also something I really do in my job), a standard system design interview (still don't really love these...), and an in depth conversation with a hiring manager. Still a tough day, but pretty decent process I think!
Do you mean this debugging interview in particular, because it could potentially be useful work fixing a real bug for them? To be clear, it isn't an extant bug. I think the way they do it is they take an interesting (but fairly straightforward) real bug that they fixed at some point, and they back out the fix.
And I get it in every interview. Somehow my brain just doesn't care to remember the gritty details for tough bugs. I'll spend days on a bug but soon after I solve it I'll only remember the actionable takeaways like "next time I should try using X tool sooner" or "check your assumptions on Y part of the stack".
I think it's because I never spend any time revisiting that bug in my mind. I've got new problems to solve. You need to revisit something to remember it.
I've been quite outspoken about the impact that ChatGPT will have in the tech interview. My solution is not subtle: I believe we should burn the leetcode-style coding interview to the ground and replace it with something else entirely, possibly embracing LLMs.
I'm in the vast minority, though. My interviewer colleagues think this doesn't matter and that we should keep doing business as usual.
I guess we'll find out how it works out in the next couple of years.
God I hope so. The last time I went job hunting I spent a couple of months doing the leetcode stuff, and I got "pretty good", but I'm certain it would have taken me another couple of months to pass the gauntlet somewhere.
I learned that off-by-one errors, mixed-up arguments, and caches are hard to debug.
I have easily spent days debugging many such problems, which were almost always solved by a one-line change. And rarely did I find ways to prevent similar bugs in the future by improving testing or code factoring.
The best interview question that will never die: "What's the weirdest bug you debugged? What made it weird?"
For posterity: https://www.gamedeveloper.com/programming/my-hardest-bug-eve...