
So, don't leave us in suspense; what do you ask of it? Because I'm quite sure it can already pass it.

Your experience is very different from mine anyway. I am a grumpy old backend dev who uses formal verification in anger when I consider it needed, and who gets annoyed when things don't behave logically. We are working with computers, so everything is logical, but no; I mean things like a lot of frontend stuff. I ask our frontend guy, 'How do I center this text?', and he says 'text-align'. Obviously I tried that, because that would be logical, but it doesn't work, because frontend is, for me, absolutely illogical. Even frontend people have to try-and-fail; they cannot answer simple questions without experimenting, the way I can in backend systems.

Now, in this new world, I don't have to bother with it anymore. If Copilot doesn't just squirt out the answer, then ChatGPT-4 (and now my personal custom GPT, 'front-end hacker', which knows our codebase) will fix it for me. And it works, every day, all day.



I'm not the person you're responding to, but here's an example of it failing subtly:

https://chat.openai.com/share/4e958c34-dcf8-41cb-ac47-f0f6de...

finalAlice's Children have no parent. When you point this out, it correctly notes the immutable nature of these types in F#, then proceeds to produce a new solution that again has a subtle flaw: Alice -> Bob has the correct parent... but Alice -> Bob -> Alice -> Bob is missing a parent again.

Easy to miss this if you don't know what you're doing, and it's the kind of bug that will hit you one day and cause you to tear your hair out when half your program has a Bob-with-parent and the other half has an Orphan-Bob.
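Roughly, the failure mode looks like this (a minimal sketch with made-up record definitions; the actual code in the chat differs, but the shape of the bug is the same):

    type Person =
        { Name: string
          Parent: Person option
          Children: Person list }

    // Records are immutable, so the links have to be built in some order.
    let alice = { Name = "Alice"; Parent = None; Children = [] }
    let bob   = { Name = "Bob";   Parent = Some alice; Children = [] }

    // Copy-and-update creates a NEW Alice; bob.Parent still points at the
    // old, childless alice, so finalAlice -> Bob -> Parent has no children.
    let finalAlice = { alice with Children = [ bob ] }

Every copy-and-update leaves a stale reference somewhere in the graph, which is exactly the kind of thing that surfaces far away from where it was introduced.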

Phrase the question slightly differently, swapping "Age: int" with "Name: string":

https://chat.openai.com/share/df2ddc0f-2174-4e80-a944-045bc5...

Now it produces invalid code. Share the compiler error, and it produces code that doesn't compile, but in a different way: it has marked Parent mutable but then tries to mutate Children. Share the new error, and it concludes you can't have mutable properties in F#, when you actually can; it just marked the wrong field mutable. If you fix the error yourself, you end up with correct code, but ChatGPT-4 has misinformed you AND started down a wrong path...

Don't get me wrong - I'm a huge fan of ChatGPT, but it's nowhere near where it needs to be yet.


I'm not really sure what I'm looking at. It seems to perform flawlessly for me... when using Python: https://chat.openai.com/share/7e048acb-a573-45eb-ba6c-2690d2...

I only made two changes to your prompt: one to specify Python, and another to provide explicit instructions to trigger using the Advanced Data Analysis pipeline.

You also had a couple typos.

I'm not sure "programming-like tool that reflects programming-language popularity performs poorly on an unpopular programming language" is the gotcha you think it is. It performs extremely well authoring Kubernetes manifests and even produces passing Envoy configurations. There's a chance that configuration files for reverse-proxy DSLs have better representation in the training data than F# does. I guess if you disagree about how obscure F# is, you're observing a real, objective measurement of that obscurity in the fascinating performance of this stochastic parrot.


F# fields are immutable unless you specify they are mutable. The question I posed cannot be solved with exclusively immutable fields. This is basic computer science, and ChatGPT has the knowledge but fails to infer this while providing flawed code that appears to work.

An inexperienced developer would eventually shoot themselves in the foot, possibly long after integrating the code thinking it was correct and missing the flaws. FYI, your Python code works because of the mutation "extend()":

    alice.children.extend([bob, carol])
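For comparison, the straightforward F# fix is to mark the link fields mutable, which is what the Python version is implicitly relying on via extend(). A sketch, not the code from the chat:

    type Person =
        { Name: string
          mutable Parent: Person option
          mutable Children: Person list }

    let alice = { Name = "Alice"; Parent = None; Children = [] }
    let bob   = { Name = "Bob";   Parent = None; Children = [] }

    // Wire up both directions in place; no stale copies are created.
    bob.Parent <- Some alice
    alice.Children <- [ bob ]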


> F#

Barely exists in training data.

Might as well ask it to write some microcontroller-specific assembly, watch it fail, and claim victory.


> Barely exists in training data.

Irrelevant - this is basic computer science. As far as I know, you can't create a bidirectional graph node structure without a mutable data structure or language magic that ultimately hides the same mutability.
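The "language magic" variant looks something like this (a sketch using ref cells; the fields themselves stay immutable, but the mutation is still there, just one level down):

    type Node =
        { Name: string
          Parent: Node option ref
          Children: Node list ref }

    let alice = { Name = "Alice"; Parent = ref None; Children = ref [] }
    let bob   = { Name = "Bob";   Parent = ref None; Children = ref [] }

    // Still mutation, just hidden behind ref cells instead of mutable fields.
    bob.Parent.Value <- Some alice
    alice.Children.Value <- [ bob ]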

The fact that ChatGPT recognizes the mutability issue when I explain the bug tells you it has the knowledge, but it doesn't correctly infer the right answer and instead makes false claims and sends developers down the wrong path. This speaks to OP's claim about subtle inaccuracies.

I have used ChatGPT to write 10k lines of a static analyzer for a 1k AST model definition in F#, without knowing the language before I started. I'm a big fan, but there were many, many times a less experienced developer would have shot themselves in the foot using it blindly on a project with any degree of complexity.


I would agree with you if it were a model trained to do computer science, rather than a model trained to do basically anything, which just happens to be able to do computer science as well.

Also, code is probably one of the easiest use cases for detecting hallucinations, since the majority of the time you can literally just check whether it is valid or not.
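For instance, a crude automated check is just to see whether a generated snippet gets through the compiler at all (a sketch assuming the .NET SDK is on PATH; looksValid is a made-up helper, and passing the compiler obviously doesn't catch the subtle logic bugs discussed above):

    open System.Diagnostics
    open System.IO

    // Write a generated snippet to a temp script and see whether "dotnet fsi" accepts it.
    let looksValid (snippet: string) =
        let path = Path.ChangeExtension(Path.GetTempFileName(), ".fsx")
        File.WriteAllText(path, snippet)
        use p = Process.Start("dotnet", $"fsi \"{path}\"")
        p.WaitForExit()
        p.ExitCode = 0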

It's much harder for cases where your validation involves wikipedia, or academic journals, etc.


Then we are in agreement but bear in mind that I was replying to this comment:

> So, don't leave us in suspense; what do you ask of it? Because I'm quite sure it can already pass it.


If it can pass it when you ask it in a way only a coder can write, then we will still need coders.

If you need to tweak your prompt until you get the correct result, then we still need coders who can tell that the code is wrong.

Ask Product Managers to use ChatGPT instead of coders and they will ask for 7 red lines all perpendicular to each other with one being green.

https://www.youtube.com/watch?v=BKorP55Aqvg


I didn't say we don't need coders. We need fewer average/bad ones, and a very large share of the coders who came in after the worldwide 'coding makes $$$$' rush are not even average.

I'm not saying AI won't eventually make coding obsolete; even just two years ago I would've said we were 50-100 years away from that. Now I'm not so sure. However, I am saying that I can replace many programmers with GPT right now, and I am. The prompting and reprompting is still both faster and cheaper than many humans.


In my mind, we need more folks who have both the ability to code and the ability to translate business needs into business logic. That’s not a new problem though.


That's what we are doing all day, no? I mean, besides fighting tooling (which is taking up a larger and larger share of the time spent building stuff).


Only if you have access to the end user.

If, between you and your client, four people are playing a game of telephone (the client's project manager, our project manager, the team leader, and some random product guy just to get an even number), then that is actually not what you are doing.

I would argue that the thing that happens at this stage is more akin to manually transpiling business logic into code.

In this kind of organization, programmers become computer whisperers. And this is why there is a slight chance that GPT-6 or 7 will take their jobs.


TFA's point is not that «coders» won't be needed any more, it's that they will hardly spend their time «coding», that is «devot[ing themselves] to tedium, to careful thinking, and to the accumulation of obscure knowledge», «rob[bing them] of both the joy of working on puzzles and the satisfaction of being the one[s] who solved them».


You can ask it almost anything. Ask it to write a YAML parser in something a bit more complex like Rust and it falls apart.

Rust mostly because it's relatively new, and there isn't a native YAML parser in Rust (there is a translation of libfyaml). Also, you can't bullshit your way out of it in Rust by making a bunch of void* pointers.


How do you make a custom GPT which knows a specific codebase? I have been wanting to do this.


You fine-tune an existing model on your own set of input/output pairs.

Whatever you expect to start typing, and whatever you want the model to produce as output, should form those input/output pairs.

I'd start by using ChatGPT etc. to add comments throughout your codebase describing the code. Then break it into pairs where the input is the prefacing comment and the output is the code that follows. Create about 400-500 such pairs, and train a model for 3-4 epochs.
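Concretely, the pairs end up as one JSON object per line (JSONL). A rough sketch of producing that file in F# (TrainingPair and the field names prompt/completion are assumptions; the exact schema depends on which fine-tuning API you target):

    open System.IO
    open System.Text.Json

    // One pair: the prefacing comment is the input, the code that follows is the output.
    type TrainingPair = { prompt: string; completion: string }

    let pairs =
        [ { prompt = "// Attach a child node and set its back-reference"
            completion = "let addChild parent child = child.Parent <- Some parent" }
          { prompt = "// Return the parent of a node, if any"
            completion = "let parentOf node = node.Parent" } ]

    // Serialize each pair as its own line of JSON.
    pairs
    |> List.map (fun p -> JsonSerializer.Serialize p)
    |> String.concat "\n"
    |> fun text -> File.WriteAllText("training.jsonl", text)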

Some concerns: you're going to get output that looks like your existing codebase, so if it's crap, you'll create a function which can produce crap from comments. :-)


I use the new feature of creating a custom GPT, and I keep adding new information (files, structures, etc.) by editing the GPT. It seems to work well.


Ah, OK, so you have to paste entire files in one by one; you can't just add it locally somehow? Too bad you can't just upload a zip or something...


You can upload zips. Make a new GPT and go to the custom settings.



