
I thought about learning Go and Fiber to build the backend of a side project of mine, and I did, but as with any new language/stack I wasn't feeling confident in it. Then ChatGPT came out and I thought, what the hell, let's see what all the fuss is about.

So I asked it to write me a struct for a table with the "id, name, longitude, latitude, news" columns. That worked well; I was surprised it automatically inferred the data types for those columns.
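
For reference, a Go struct along those lines might look like this (the struct name and exact field types here are my guesses, not the actual generated output):

    // Place maps a row of the table onto a Go struct.
    // Name and field types are assumptions; the real output may differ.
    type Place struct {
        ID        int     `json:"id"`
        Name      string  `json:"name"`
        Longitude float64 `json:"longitude"`
        Latitude  float64 `json:"latitude"`
        News      string  `json:"news"`
    }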

Then I asked it to write an endpoint for retrieving a record from that table and it did so perfectly, which again surprised me. I asked it to add endpoints for adding records and retrieving all records. Again, no bugs, perfect code. At the end I asked it to create a Python script to test the API, and it did so flawlessly.
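
To give a rough idea of the shape of that code, here is a sketch of such Fiber handlers using database/sql and the Place struct from above (the route paths, table name, use of database/sql rather than an ORM, and Postgres-style $1 placeholders are all assumptions on my part):

    import (
        "database/sql"
        "strconv"

        "github.com/gofiber/fiber/v2"
    )

    // registerRoutes wires up the endpoints described above.
    func registerRoutes(app *fiber.App, db *sql.DB) {
        // GET /places/:id - retrieve a single record by id.
        app.Get("/places/:id", func(c *fiber.Ctx) error {
            id, err := strconv.Atoi(c.Params("id"))
            if err != nil {
                return c.SendStatus(fiber.StatusBadRequest)
            }
            var p Place
            err = db.QueryRow(
                "SELECT id, name, longitude, latitude, news FROM places WHERE id = $1",
                id,
            ).Scan(&p.ID, &p.Name, &p.Longitude, &p.Latitude, &p.News)
            if err == sql.ErrNoRows {
                return c.SendStatus(fiber.StatusNotFound)
            }
            if err != nil {
                return err
            }
            return c.JSON(p)
        })

        // GET /places - retrieve all records.
        app.Get("/places", func(c *fiber.Ctx) error {
            rows, err := db.Query("SELECT id, name, longitude, latitude, news FROM places")
            if err != nil {
                return err
            }
            defer rows.Close()
            places := []Place{}
            for rows.Next() {
                var p Place
                if err := rows.Scan(&p.ID, &p.Name, &p.Longitude, &p.Latitude, &p.News); err != nil {
                    return err
                }
                places = append(places, p)
            }
            return c.JSON(places)
        })

        // POST /places - add a new record.
        app.Post("/places", func(c *fiber.Ctx) error {
            var p Place
            if err := c.BodyParser(&p); err != nil {
                return c.SendStatus(fiber.StatusBadRequest)
            }
            _, err := db.Exec(
                "INSERT INTO places (name, longitude, latitude, news) VALUES ($1, $2, $3, $4)",
                p.Name, p.Longitude, p.Latitude, p.News,
            )
            if err != nil {
                return err
            }
            return c.SendStatus(fiber.StatusCreated)
        })
    }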

The next day I created a Docker environment with Postgres and went to test the code, but it didn't work: it turns out ChatGPT had written it with MySQL in mind. So I went back and told it to rewrite the entire thing with Postgres in mind, and again it did so flawlessly. Overall, writing this small API took maybe 30-60 minutes.
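
The MySQL-to-Postgres rewrite mostly comes down to swapping the driver and changing ? placeholders to $1-style ones. A main function wiring it all together might look like this (the lib/pq driver, connection string, and port are placeholders/assumptions; any Postgres driver works the same way with database/sql):

    package main

    import (
        "database/sql"
        "log"

        "github.com/gofiber/fiber/v2"
        _ "github.com/lib/pq" // Postgres driver; assumed, pgx would also work
    )

    func main() {
        // Placeholder credentials for a local Docker Postgres instance.
        dsn := "host=localhost port=5432 user=postgres password=postgres dbname=app sslmode=disable"
        db, err := sql.Open("postgres", dsn)
        if err != nil {
            log.Fatal(err)
        }
        if err := db.Ping(); err != nil {
            log.Fatal(err)
        }

        app := fiber.New()
        registerRoutes(app, db) // handlers from the sketch above
        log.Fatal(app.Listen(":3000"))
    }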

Considering I was a total newbie at Go, this probably would have taken me several hours to complete, and this code is basically just boilerplate; I don't care to learn it by heart just so I can be more productive in the future. Now that I have ChatGPT, I basically don't have to. I don't have to write Python to speed up my dev time; I can just have ChatGPT write the basic stuff in a highly performant language. It removed the only drawback, which was more boilerplate.




> Again, no bugs, perfect code

How do you know? Either you know enough Go to be able to tell, and thus you'd be able to write this yourself, or you don't and thus you can't really judge.

I mean, it probably is right, but this "I don't know something, but I trust what the chatbot told me" is what worries me about the rise of the LLMs.


Reading and validating code is way easier than figuring out how to write it in the first place. A lot of programming is the same across languages - functions, conditionals, loops, libraries, etc. The hard part is mostly the differences in semantics between languages. It's way easier to have GPT do the first pass, then go through it and read up on the parts you don't understand.


It isn't in my experience. Understanding code that you did not write for a problem that you do not fully grasp is a lot of work. It's hard enough when you did write the code and when you do fully grasp the problem, which you'd have to if you were to write it yourself. Typically that's what first draft code is for: to see if you actually understand the problem.


Do you find it takes more time to review a PR than it took for someone to figure out the solution and implement it?


Understanding the code of a fully fledged application is hard.

But with ChatGPT you can build it piece by piece.


With a brain you can also build it piece by piece; in fact, I don't know of any other way of writing a large software system than doing it piece by piece.


Sure, but the argument was that reading an entire application's code is hard, therefore GPT-4 is counterproductive.


Validating code you understand, sure. Validating code for a language you don't know, I don't see how you could.

E.g. say you speak Python: how would you know how to clean up memory in C, since you never had to do it? Would you even know that you have to?


> Reading and validating code is way easier than figuring out how to write it in the first place

Not at all. I would even say it is harder because the code being reviewed is priming you.

All code is a leaky abstraction and if you don't know what to look for you just won't see it.


That's just not true. I expect software to become an order of magnitude shittier soon because of this attitude. After that I expect software to become unbelievably good, because thanks to AI we will have the ability to prove the correctness of software much more cheaply than before, and to design at a higher level than before. After all, you don't want the AI to help you generate boilerplate code; you want the AI to help you avoid boilerplate code.


It is true if you can understand the code it writes. Maybe you are worried junior programmers are gonna churn out code using ChatGPT, but to be honest I'd rather trust code from ChatGPT than from a junior; I feel ChatGPT writes better code on average.


I'd argue you only THINK you understand the code. If it generates 1000 lines which look like they are doing the right thing, will you be diligent enough to go through every single one of them? This really only works for extremely boilerplate code (which is maybe most of the code programmers write), but for most code I write I need a mental model of it, and constructing that model myself is easier than trying to learn it from someone else's code. Of course ChatGPT can work as inspiration, especially for working with unfamiliar APIs.


Because what the code does is simple enough: it connects to a database, runs some queries, and returns the results. I know it worked because I tested it, and for that kind of code there aren't really edge cases. Even if I can't catch syntax errors by reading, because I'm not yet experienced enough, I can see the overall structure of the code and what it does, and if it runs then there are no syntax errors.

It's like with riddles: someone asks you a riddle, you think and you think and you draw a blank, but if you are given the answer you can instantly validate it, even if you didn't know the answer beforehand. Same with this.


On one hand, one may not know the exact syntax for a function call but can still understand, looking at the code, where a function is defined and where it's called. Those are two different sets of knowledge; the first doesn't transfer from other languages, but the second does.

On the other hand, you can ask GPT to write tests and validate at the outside layer that data is transformed the way you need it to be.
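
Something like this, to stick with the Go/Fiber example from up-thread (a hypothetical black-box test; setupApp is an assumed helper that wires the Fiber app to a test database, and Place is the struct sketched above):

    import (
        "encoding/json"
        "net/http/httptest"
        "testing"
    )

    // Black-box test: it only looks at the response from the outside,
    // never at how the handler is implemented internally.
    func TestGetAllPlaces(t *testing.T) {
        app := setupApp(t) // hypothetical helper: Fiber app + test database

        req := httptest.NewRequest("GET", "/places", nil)
        resp, err := app.Test(req)
        if err != nil {
            t.Fatal(err)
        }
        if resp.StatusCode != 200 {
            t.Fatalf("expected 200, got %d", resp.StatusCode)
        }

        // Validate the shape of the data at the boundary.
        var places []Place
        if err := json.NewDecoder(resp.Body).Decode(&places); err != nil {
            t.Fatalf("response is not the expected JSON shape: %v", err)
        }
    }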


> I mean, it probably is right, but this "I don't know something, but I trust what the chatbot told me" is what worries me about the rise of the LLMs.

I share the same worry with regards to the way humans will use AI, along with worries about enabling various antisocial behaviours at scale.



