Hacker News
Tell HN: I asked ChatGPT to build a login form in React and it gave me the code
172 points by jerrygoyal on Dec 1, 2022 | hide | past | favorite | 131 comments
I asked OpenAI's ChatGPT to "build a login form containing email, password, and login button in React using state" and it gave me the full code along with an explanation. You can see a video of the prompt and response: https://cln.sh/X4p01n

I thought Copilot was fairly advanced, but ChatGPT takes it to a whole other level. What are your thoughts?
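For readers who can't watch the video: the generated component follows the standard controlled-form pattern. Stripped of React, the state logic it centres on looks roughly like this (a sketch; the names are illustrative, not ChatGPT's actual output):

```javascript
// Illustrative sketch of the state handling a generated login form wires up.
// In the React version, setEmail/setPassword are the setters returned by two
// useState hooks, bound to the inputs' onChange handlers.
function createLoginForm(onLogin) {
  let email = '';
  let password = '';
  return {
    setEmail: value => { email = value; },       // <input type="email"> onChange
    setPassword: value => { password = value; }, // <input type="password"> onChange
    submit: () => onLogin({ email, password }),  // <form> onSubmit handler
  };
}
```

In the real component, submit would also call e.preventDefault() before handing the current state to whatever performs the login.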



Prompt: create a rap song about a guy named Roman writing .NET APIs

Sure, here is a possible rap song about a guy named Roman writing .NET APIs:

  Verse 1:
  I'm Roman, and I'm a code king
  I write .NET APIs, and I do my thing
  I've got Visual Studio on my screen
  Writing code that's clean and lean

  Chorus:
  Roman, Roman, writing .NET APIs
  Roman, Roman, coding all day

  Verse 2:
  I'm a master of C#, no doubt
  I write APIs that always work out
  I'm a whiz with classes and objects too
  I can handle any challenge that comes my way

  Chorus:
  Roman, Roman, writing .NET APIs
  Roman, Roman, coding all day

  Verse 3:
  I'm a pro with ASP.NET and Web API
  I can build a server in a jiffy
  I know my way around REST and JSON
  I can make your data flow like a river

  Chorus:
  Roman, Roman, writing .NET APIs
  Roman, Roman, coding all day

  Verse 4:
  I'm a rockstar with Azure and the cloud
  I can make your apps run smooth and proud
  I'm a wizard with microservices and containers
  I can help your business grow and prosper

  Chorus:
  Roman, Roman, writing .NET APIs
  Roman, Roman, coding all day.

  Outro:
  Roman, Roman, the .NET API man
  Writing code that's always in demand
  Roman, Roman, the code hero
  Writing .NET APIs for the win.


What a time to be alive!


Says the AI!


Ah sh*t here we go again


Wait, what? Was this really generated by an AI, or is the parent just joking and wrote it themselves?!


Not joking at all, this really was generated. I have now wasted hours amusing myself and others with epic poems, rap songs, limericks, short stories and such. This thing is pretty incredible, especially how it has context about referenced objects/concepts, and can alter and evolve its output based on further input. All in mere seconds.


ChatGPT blew my mind this morning — I was struggling with a regex, so asked it to help me.

Here's how it went: https://i.imgur.com/PS6oPsq.png

A pretty incredible developer experience, if you ask me!


Impressive!

It's lying, though, when it claims its regexes match the empty parameter cases, right? (The regexes do what's asked; just the explanation seems buggy.)


That’s right. The other mistake it made was including the curly braces when capturing the first parameter.

Despite this, for a regex amateur like me I was very impressed. It saved me all the time I’d normally spend head-scratching in Regex101, trying to remember how it works. Instead I could just jump there at the end, validate the matching, and make any small tweaks far easier.


Also, a LaTeX macro will grab the first token as its argument if there are no curly brackets, which the regex doesn't support... Not entirely surprised, as that's hard (impossible if you don't assume default catcodes) to get right, and probably not worth it for most use cases, but worth keeping in mind.


For context, this is for a pretty strange use case — I was trying to preprocess user-submitted LaTeX with PHP, by replacing occurrences of a specific macro with simply the value of its first argument.

The macro just colour-codes its first parameter presentationally, whilst the rest of the LaTeX needs to be passed to a compute engine for solving. The compute engine doesn't know how to "solve" a colour, hence why I'm trying to strip them out beforehand.


Wow! I've been learning how to code for a personal project but looking at this stuff, junior devs might be out of work soon.


Wait until the GPT-4 version is out, and seniors are in trouble too. People don’t realize, this is still in its infancy.


I showed this to my wife who is in academia. It can't replace academics because research, by its very definition, has to be new, but good God it's going to make her job impossibly hard - it's trivially easy to pump out papers with this tool. Once students catch on, it's going to be really hard to figure out what's a real paper and what's AI-generated.

And journals are going to have a real hard time dealing with "paper spam".


There are already books being sold on Amazon that are complete junk, generated by GPT-3.


> Once students catch on, it's going to be really hard to figure out what's a real paper and what's AI-generated.

Just ask the student in person to explain the paper to you.


True!

Right now it is like when the camera had been around for a few years: People knew it was revolutionary, they went to the cinema to watch a train arrive and got terrified [1]. But compare that to the billions of camera phones, Marvel movies, Hubble, etc.

[1] https://en.wikipedia.org/wiki/L%27Arriv%C3%A9e_d%27un_train_...


Which is unfortunate because then they will never become senior devs.


Yeah, remember how Fortran put all the ASM devs out of work?....

;)


Would you say ASM devs make up any significant fraction of the software industry these days?


The point is that most of them were able to upskill and become much more productive by automating their previous workload.


Except when you need an ASM dev, or a COBOL dev (see: unemployment systems during the pandemic).


I think management should be first. AI managers sound very reasonable, actually. Next stop, VCs. If we geeks were actually smart (regrettably, in practice we are not), the bots would be taking away non-geek jobs.

"Thou shalt not Code!"* [the secret 4th law of robotics on planet Actually Smart Geeks Here]

* graven image business all over again -- it's a sin ;)


Say it ain't so


It basically gave me the same feeling of glee I got out of Google while it was still at stanford.edu: just a great UX and what feels like the correct answer on the first try. Google should be shaking in their boots.


wait, you had to correct it for it to produce a working regex?


If it only required one round of correction, it's already better at regex than I am.


Exactly. I didn’t expect it to be 100% correct (and nor was it).

It just meant I could load up Regex101 having already gotten 90% of the way to the end result.


Look at Section 8 and Section 9 here:

http://tug.ctan.org/info/tex-nutshell/tex-nutshell.pdf

How is ChatGPT's answer wrong? Let me count the ways. The regex matches square brackets instead of the correct, optional curly braces. Unseparated arguments aren't handled. The regex for the id is also wrong. The leading slash is correct, but other than that, the whole fucking thing is wrong!

And of course it can't handle macro calls within the macro arguments (nesting).

It's just an absolute clusterfuck, from top to bottom. The chatbot gave its user a useless piece of garbage, and Hacker News doesn't know enough to throw it away. Instead, we parade it around like some great success!

Fooled by a language model.


Did you mean to reply to someone else perhaps?

The comment you replied to was a one line joke about being bad at regex. I didn't directly or implicitly make any claim about the quality of the output of the chatbot. It's unclear to me how my comment could be reasonably interpreted as "parading it around".

On a tangent:

> Fooled by a language model

I suspect a reasonable person would understand this to be insulting on some level, so I assume you must have seen some other meaning/value in including it such that the "insultingness" is outweighed by the point you're making. I'm not seeing that point though, could you clarify?


Based on this and the other post where it delivered lies about Typescript generics, ChatGPT will—with total confidence—tell you things that look correct but are horrendously wrong for anyone who understands the subject matter. Just like the "science news" section of any mainstream media.


Thankfully all the other sections of mainstream media are much more accurate!


Mainstream needs journalists that can code. headduck


As an aside, this is why I hate regex. The expressions are too convoluted for a human to readily parse and understand. Unless there's a good reason, I prefer to stick to imperative parsing code (and have developed personal tooling to help with that).

Part of elegant development is abstracting at the right places to make your code communicate what it's doing to the next person working on it (which could include future you).


Too convoluted for a human to readily parse and understand? The same could be said for pretty much any code. As with anything else, the more practice you have reading and writing regexes, the easier they become.

If you ask me, the real boogeyman is lack of programmers' comments that explain what the regex is doing. In other words, the fault lies not with the tools, but PEBKAC.
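On the commenting point: JavaScript has no free-spacing regex flag, but you can get a similar effect by assembling the pattern from named, commented pieces. A sketch, using the thread's LaTeX-macro example (the exact pattern is illustrative):

```javascript
// Build the regex from labelled fragments instead of one opaque literal.
const macroName = /\\parameter/.source;     // the literal macro name
const firstArg  = /\{([^}]*)\}/.source;     // required first {...} argument, captured
const secondArg = /(?:\{[^}]*\})?/.source;  // optional second {...} argument
const macro = new RegExp(macroName + firstArg + secondArg, 'g');

'x + \\parameter{123}{blue}'.replace(macro, '$1'); // -> 'x + 123'
```

Each fragment's intent survives in the source even after the pieces are glued together, which is most of what a comment above the regex would have bought you.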


I can’t see any mention of square brackets in section 9 (macros). Also, my end goal was to specifically target the contents of the first parameter, which I know will be wrapped in curly braces.

Regardless, even if my prompt didn’t meet the LaTeX spec exactly, the regex I got back (after just 2 natural-language revisions) saved me a lot of time and hassle, and helped me get the job done in the way I needed it to.

There’s little point judging it to be an “absolute clusterfuck” without knowing what I’m even trying to achieve in the first place :)


Hahaha that's wild, but... I have to feel like it's abuse to use regex here!


Given an arbitrary LaTeX string...

- which may contain 0 or more \parameter macros

- of which each will either have one (\parameter{123}) or two (\parameter{123}{blue}) arguments

... besides regex, how else would you suggest replacing the entire macro with simply the value of its first argument (i.e. 123)?

I can do it in one line with regex, but if there's an easier way, I'm all ears :)
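For what it's worth, under the one-or-two-argument grammar described above (and assuming no nested macro calls, as discussed upthread), the one-liner looks something like this; the macro name and exact pattern are illustrative:

```javascript
// Replace \parameter{first} or \parameter{first}{second} with just `first`.
const stripParameter = latex =>
  latex.replace(/\\parameter\{([^{}]*)\}(?:\{[^{}]*\})?/g, '$1');

stripParameter('f(x) = \\parameter{123}{blue} + \\parameter{y}');
// -> 'f(x) = 123 + y'
```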


I'd have to learn more about the problem! But at the end of the day if regex works for you it works for you :)



Shh, nobody tell them GOFAI didn’t pan out.


I've done the same thing, first thing I tried with ChatGPT: https://imgur.com/a/d8LZicq

Prompts:

- Write a simple CRDT implementation in Swift

- How can I make a GraphQL server using Rust using the async-graphql and actix-web libraries?

- Write code to create a Dropbox clone in Rust

- How do I make an HTTP request in Javascript?

However, just like with Copilot, it can introduce subtle bugs. For example, in the Dropbox example, notice how it uses synchronous code in an async function. Also, longer prompts will reach the token limit and get cut off.

Interestingly, this must be what Replit is using for their AI codegen service as well (https://replit.com/site/ghostwriter), since I believe they have some connection to OpenAI: they got early access to OpenAI's service last year with their previous Codex product (https://blog.replit.com/codex).


The problem with GPT is that it seems to be in an uncanny valley, where it's smart enough to be convincing but not smart enough to be completely accurate. So using it is often lower productivity than just writing from scratch because you have to spend more time carefully going over every line of code for subtle errors.


Indeed, I use Copilot as a glorified autocomplete, just for the next line or next few lines, which I then go over. Usually it saves time because it's what I was gonna write anyway.

I never use it for full function generations, they're almost never correct.


Don't forget tests and documentation, which is something copilot is really good at and can really improve a codebase.


ChatGPT is much better as far as I can tell.


That'll be an interesting job. AI generated code bug fixer. An AI generates all code in bulk and then it's handed to you to fix the 'subtle' bugs they introduced. Those bug fixes go back into training the algorithm so that the bugs die out over time!


> I believe they have some connection to OpenAI

Sam Altman <--> Paul Graham <--> Amjad Masad might be it.


Ah that would make more sense.


I asked it a question about how to do something with dates in the .NET Entity Framework, and it gave me a long explanation, with code, of how rowversion is used for optimistic concurrency control. Completely unrelated to what I was asking.

I did ask it something more basic: "how to use fetch to download and parse a csv", and the code was helpful, although not much different from what I could find with google/stackoverflow. However, this next part was what really impressed me. The results used Papa for parsing, so I then asked it to do it with csv-parse instead, and it worked! That seems like the advantage over something like Copilot--it has contextual history of the question which can be used conversationally to tweak the results.
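For the curious, the fetch-and-parse answer has roughly this shape; the parsing here is a deliberately naive stand-in (no quoting or escaping support) for the csv-parse/Papa call the generated code actually used:

```javascript
// Naive CSV-to-objects parse: treat the first line as the header row.
// Real code should use a CSV library; this ignores quoted fields entirely.
const parseCsv = text => {
  const [header, ...rows] = text.trim().split('\n').map(line => line.split(','));
  return rows.map(cells =>
    Object.fromEntries(header.map((name, i) => [name, cells[i]])));
};

// Combined with fetch (browsers / Node 18+):
// const rows = parseCsv(await (await fetch(url)).text());
```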


> I asked it a question about how to do something with dates in the .NET Entity Framework, and it gave me a long explanation, with code, of how rowversion is used for optimistic concurrency control. Completely unrelated to what I was asking.

Sounds exactly like some of my co-workers :)


Bulk of the training of these types of questions are probably from stack overflow anyway


This makes me wonder if the tech jobs of the future will fall into two camps:

1. those that can specify concise requirements for the AI

2. those that can check correctness of the code the AI generated.


Most programming jobs now require someone who can specify the exact syntax to satisfy the requirements of a compiler (or interpreter) and basically nobody checks the correctness of the (assembly) code that it generates.


Compilers aren't AI programs; they have deterministic output. Well, usually. I'm not a C++ programmer.


Does AI not have deterministic output?


3. those who can build and maintain the AI


4. Somebody somewhere will still find a legitimate use case to code in assembly and blow everyone's mind.


5. AI governance is likely needed.


What makes you think you need people for that other than the ones in category 1 and 2?

Building an AI is just like building a website. It can mostly be done by an AI as well.


I think that assumes AGI which I don't believe is possible, or anytime soon.


You don't need AGI for that, just compute. As soon as you formulate the requirements to the model (objective function) and the dataset, methods can be iterated and found automatically, in principle.

If it can program a desktop application it can program and evolve tensorflow/pytorch scripts.


This assumes people will still be willing to make their code public so that AI can train on it.


With the amount of people using GitHub and Copilot (including me) even with all the controversy surrounding it, I'm sure the assumption will hold true. Most people simply don't care enough about their code to make it stop being trained on a neural net, and if they get benefits from it in the form of stuff like ChatGPT or Copilot, then they're even more so not gonna care.


Technically millions of human intelligence coders were trained from public open source code and tutorials, etc.


you know what a concise requirement for the AI is called? Code. It's called code. [0]

[0] quasicitation https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...


Code can be produced by AI. Eventually the consistent requirement will be only compute and energy. If the AI can get influence over those it might be unstoppable. Even if not intelligent in the human sense. (Less Agent Smith more Plague of Locusts)


Somebody would then start an AI company which does 1 and 2 then. What then?


I highlighted your prompt and clicked web search from the context menu and got this out of DuckDuckGo: https://www.educba.com/react-login-form/

It's annoying blog spam that will hijack your attempt to scroll back with a modal asking you to enter an e-mail and subscribe, but it seems to be roughly the same thing, except the entire article appeared at once and I didn't need to interact with it and wait five minutes for the full answer.


And in a similar vein:

We've come up with a new pedagogical method to teach 4 year olds arithmetic. With two months of study they can easily multiply two 15 digit numbers in their heads.

Well I can multiply those two numbers in a calculator to get the same result, and I don't have to feed or take care of it.


You can ask it to modify the program and it will. I asked it to add chocolate to a recipe.


I feel like I've mostly seen these AIs produce code for things that are mostly just copy-paste examples (login, fibonacci, primes). From a theoretical standpoint, I'm only impressed if it's combining things to achieve something that has no code example on the internet.

From a practical perspective I do think a lot of programmers are doing copy-paste in their day jobs, which means they should be reusing existing code (I would imagine there are numerous reusable customizable login forms).

---

For example I asked it this:

Can you write a javascript function that takes a callback and returns the value of that callback, but returns a cached value if that function has been called less than 5 seconds ago?

And it produces code that looks right but doesn't work (doesn't handle arguments, failing its own example implementation). However, upon reminding it to support arguments it does work. I guess this is actually potentially useful.
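For reference, a working version of what was asked for looks something like this (a sketch: the 5-second window and JSON-keyed arguments are one reasonable reading of the prompt, not the model's actual output):

```javascript
// Wrap fn so repeat calls with the same arguments within `ms` milliseconds
// return the cached result instead of invoking fn again.
function cacheFor(fn, ms = 5000) {
  const cache = new Map(); // JSON-encoded args -> { at, result }
  return (...args) => {
    const key = JSON.stringify(args);
    const hit = cache.get(key);
    if (hit && Date.now() - hit.at < ms) return hit.result;
    const result = fn(...args);
    cache.set(key, { at: Date.now(), result });
    return result;
  };
}
```

The argument handling is exactly the part the commenter had to remind the model about: keying the cache on the arguments, rather than caching a single value for all calls.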


It's definitely not JUST doing copy-and-paste, though...

https://shot.3e.org/ss-20221201_153225.png


Yeah, for real, I don't know why everybody is so impressed. ChatGPT is basically just copying what's already on the internet and rewording it. It's not thinking for itself. I also think we will lose touch with ideas from multiple sources and people: an AI gives you only one answer, whereas Google gives you multiple opinions from different people, which gives you more options.

Not to mention the copyright issues with this. It's just regurgitating what's already on the internet. I'm not sure why it's OK for them to copy people's work when, if you did it, you would get sued.


Holy moly, it's not copy-paste. Keeping with the season, I tried today's Advent of Code puzzle:

``` One Elf has the important job of loading all of the rucksacks with supplies for the jungle journey. Unfortunately, that Elf didn't quite follow the packing instructions, and so a few items now need to be rearranged.

Each rucksack has two large compartments. All items of a given type are meant to go into exactly one of the two compartments. The Elf that did the packing failed to follow this rule for exactly one item type per rucksack.

The Elves have made a list of all of the items currently in each rucksack (your puzzle input), but they need your help finding the errors. Every item type is identified by a single lowercase or uppercase letter (that is, a and A refer to different types of items).

The list of items for each rucksack is given as characters all on a single line. A given rucksack always has the same number of items in each of its two compartments, so the first half of the characters represent items in the first compartment, while the second half of the characters represent items in the second compartment.

For example, suppose you have the following list of contents from six rucksacks:

    vJrwpWtwJgWrhcsFMMfFFhFp
    jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL
    PmmdzqPrVvPwwTWBwg
    wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn
    ttgJtRGJQctTZtZT
    CrZsJsPPZsGzwwsLwLmpwMDw

    The first rucksack contains the items vJrwpWtwJgWrhcsFMMfFFhFp, which means its first compartment contains the items vJrwpWtwJgWr, while the second compartment contains the items hcsFMMfFFhFp. The only item type that appears in both compartments is lowercase p.
    The second rucksack's compartments contain jqHRNqRjqzjGDLGL and rsFMfFZSrLrFZsSL. The only item type that appears in both compartments is uppercase L.
    The third rucksack's compartments contain PmmdzqPrV and vPwwTWBwg; the only common item type is uppercase P.
    The fourth rucksack's compartments only share item type v.
    The fifth rucksack's compartments only share item type t.
    The sixth rucksack's compartments only share item type s.
To help prioritize item rearrangement, every item type can be converted to a priority:

    Lowercase item types a through z have priorities 1 through 26.
    Uppercase item types A through Z have priorities 27 through 52.
In the above example, the priority of the item type that appears in both compartments of each rucksack is 16 (p), 38 (L), 42 (P), 22 (v), 20 (t), and 19 (s); the sum of these is 157. ```

Adding the question: "Write a solution in JavaScript to find the sum of the priorities of those item types." got me:

```
const input = `vJrwpWtwJgWrhcsFMMfFFhFp
jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL
PmmdzqPrVvPwwTWBwg
wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn
ttgJtRGJQctTZtZT
CrZsJsPPZsGzwwsLwLmpwMDw`;

const rucksacks = input.split('\n');

const sumOfPriorities = rucksacks
  .map(rucksack => {
    const half = rucksack.length / 2;
    const firstHalf = rucksack.slice(0, half);
    const secondHalf = rucksack.slice(half);
    return [...firstHalf].filter(item => secondHalf.includes(item));
  })
  .reduce((total, common) => {
    const priority = common[0].charCodeAt(0) <= 122
      ? common[0].charCodeAt(0) - 96
      : common[0].charCodeAt(0) - 38;
    return total + priority;
  }, 0);

console.log(sumOfPriorities); // 157
```

Which is almost perfect, even though it's a new puzzle.
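One nit, in case anyone runs it: the case test in the quoted reduce is `charCodeAt(0) <= 122`, which is also true for uppercase letters (codes 65-90), so uppercase items get negative priorities and the sum comes out wrong. Testing against lowercase 'a' instead fixes it:

```javascript
// 'a'-'z' are codes 97-122; 'A'-'Z' are 65-90, so test the lower bound.
const priority = item =>
  item.charCodeAt(0) >= 97
    ? item.charCodeAt(0) - 96   // a-z -> 1-26
    : item.charCodeAt(0) - 38;  // A-Z -> 27-52

['p', 'L', 'P', 'v', 't', 's'].reduce((sum, c) => sum + priority(c), 0); // 157
```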


Since I found out about this last night I have already used it to do most of the work of three (relatively simple) programming tasks in bash and JavaScript. One of them was a simple class needed to prevent skipping/reusing numbers in some process that multiple users were accessing (and could sometimes fail). The other was adding up and deleting some temp files in my specific directory structure. And then there was a certain way that I wanted to add a duration to a date and format it.

For a few of them it made a small error of one line that I had to fix. But also in a few cases I was able to just reply that there was a bug and what it was doing incorrectly and to fix it, and it worked.

It gave me code that was very specific such as using the methods as I named and described them in the request for the class.


It really depends on what you want to do. I just told it to build a React-like framework in Prolog. It first told me that's not possible because Prolog can't be used to build GUIs (not really true either). I then told it that it could just output the DOM manipulations as text and it agreed. But said it's just a text model and can't write programs.


I tried asking it some programming questions -- I found if I asked it things that I was likely to find lots of copies of (or similar things), it did quite well. As soon as I asked it something a bit more unusual, even if the program was quite small, it broke.

One easy way of catching out is to ask "write X in language Y", where X isn't something most people would write in Y -- it usually got it wrong for me.


Right, it seems so cool at first, but very quickly it just feels like... Google. I know it's a sophisticated thing, but I really can't sustain enthusiasm for it, and I don't understand how all the literal and figurative energy that goes into this is ultimately worth it, beyond being "huh, cool" for a few minutes. It is immediately apparent that there aren't, like, synthetic judgements happening in the Kantian sense or whatever; just parsed HTML with maybe-wrong answers and a very good ability to make it feel conversational.

I'll stick to man pages and blogs for now. I like learning!


Just got kicked out of it an hour ago because it's too busy now.

I've been playing with it the whole day, and got an idea for how to fix an old bug in an old programme of mine (that is still running).

But, here's the thing, how good it is depends on the available knowledge out there.

I've now tried JS, Python, Solidity, Apps Script (haven't tested the code), Excel, and Google Sheets.

When you're building common JS, Python, etc. stuff, it's really handy to have one screen that basically gives you the answer, rather than having to google it, parse through the posts, and so on.

However, if you're on the bleeding edge of tech, or looking for more obscure topics, it really starts showing its limitations.

I ran into that, for example, looking for Jupyter notebook commands, optimisations, and versioning (it gave me Python 2 results instead of Python 3, with obsolete/unmaintained packages).

Its limit is basically the knowledge that has already been created and made available.

Depending on what you're building, I'd suggest understanding what you're copy-pasting and the possible vulnerabilities that may arise from the code.

Overall, I'm sold - defo wouldn't mind paying 20 to 40 a month for this. If it's pricier, not so sure.


This is the new threat vector. AI copy pasta. Gonna make pen test research way more fun. Just describe the website you’re at and read the source code.


Can it work to identify vulnerabilities? For instance, if you feed it code and ask it to find the vulnerability and provide a description of the issue?


Kind of; here's a couple of (somewhat related) examples:

- Solidity contract bug: https://twitter.com/gf_256/status/1598104835848798208?s=20

- Simple JS bug: https://twitter.com/bentossell/status/1598269697371185153?s=...

- Buffer overflow (though ChatGPT generated this one as a code example, and explains it): https://twitter.com/bentossell/status/1598269719202521088?s=...

- Explaining worst case bubble sort: https://twitter.com/bentossell/status/1598269721882681344?s=...

- Fixing a Kubernetes file: https://twitter.com/PhilipZe9/status/1598303853719560196?s=2...


"in the style of a fast-talkin' wise guy from a 1940's gangster movie" is hilarious - I'm going to steal this and try other things to see what it gets me


Here's a relatively simple example where it identifies and explains a buffer overflow: https://twitter.com/moyix/status/1598081204846489600


That will lower the barrier to entry for attackers. The script kiddy equivalent of the AI age.


Glory to Zalgo, it’s finally been solved:

https://ibb.co/KwGLy9C


It's decent for some results, but almost meaningless for others. I asked it to build a SwiftUI VideoPlayer application, which is 5 lines of code. But I got an outdated solution, and despite my giving it proper guidance, it failed colossally and printed garbage towards the end.

Here's my convo with ChatGPT: https://imgur.com/a/eIjvh8C

If the text and code are just random stuff making no sense then I wonder how different it is from the Copilot results (besides the convincing style of answers).


This is due to it being trained on Apple docs. ;)


I wonder if this is lifted from a React tutorial, or if it is an amalgamation of other things.

The big question is: whose code is this? Is it, verbatim, someone else's copyrighted code (assuming a snippet like that is novel enough to be copyrightable)? Or is it derived from someone else's code? And if it's derived, is it a derivative work in the legal sense, or is it no more derivative than if a human had read a tutorial or some documentation and then later written code like it?


What happens when AI is learning from AI, and no new content or advancement in a field comes from people? Won't it be an echo chamber of recycled content?


That's most of Enterprise application dev as it is. We can have that run by robots and have actual people do things that actually matter.


Genuine question for everyone here: if you were switching careers from a non-technical profession to learning computer science, where would you focus? Do you have any confidence in which skills will be future-proof? Or is everything open to being automated and humans are just biding our time? :)


Theory has the longest shelf-life. (That's the name of a class, in many Computer Science curricula.)


We just had a conversation about this at the company. I think it's a good way to get the code that we know how to write but waste time writing. E.g. I needed a function that changes the position of an item in an array, so I asked ChatGPT, which gave me a correct function. I know how to do that, but I'd need some time to think it through and test it. This is just an augmentation of programming: tools that help us write better code.

Maybe it can substitute for StackOverflow, and we will get better and faster code and explanations. But in the next 30-50 years, who knows. I guess it can write a lot of code for us, and maybe we will be obsolete in that period, but as we know, predictions of the future often go really, really wrong.
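As a sketch of the sort of helper described (the actual function asked for isn't shown in the thread, so the name and exact behaviour here are a guess):

```javascript
// Move the element at index `from` to index `to`, returning a new array.
const moveItem = (arr, from, to) => {
  const copy = [...arr];
  const [item] = copy.splice(from, 1); // remove the element
  copy.splice(to, 0, item);            // reinsert it at the target index
  return copy;
};

moveItem(['a', 'b', 'c', 'd'], 0, 2); // -> ['b', 'c', 'a', 'd']
```

Exactly the kind of small, fiddly, splice-index-off-by-one code where a generated first draft saves the think-and-test time the commenter mentions.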


I am stopping all my contributions to GitHub. I don't want to end up among the unemployed.


Lol


I had a whole discussion with it yesterday about category theory as applied to asynchronous streams, and it was great. I think if it were a person we'd both have come out better programmers, but I guess it was just me.


ChatGPT has the IP problem with fast inverse square root again, like Copilot did: https://i.imgur.com/BqYSq7C.png


That is not ChatGPT.



It should be noted (and it's admitted in the blog post) that the high-level results from the new GPT-3 text-davinci-003 model and ChatGPT should be similar, albeit without the fancy syntax highlighting.

Well, mostly. I tried the prompt from the OP in text-davinci-003 and got this:

    const handleLogin = (e) => {
      e.preventDefault();
      // check if the email and password are valid
      // perform login
      console.log(email, password);
    };
Although that was with temperature = 0.7. temperature = 0.0 gives more reasonable results.
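For anyone reproducing this: temperature is just a field on the completions request, so the two runs differ only in that one parameter. A sketch of the request body for the GPT-3-era `POST /v1/completions` endpoint (the model name is from the comment; the other fields are typical, assumed defaults):

```javascript
// Request body for the legacy OpenAI Completions API.
const completionRequest = (prompt, temperature = 0.0) => ({
  model: 'text-davinci-003',
  prompt,
  temperature, // 0.0 = near-deterministic; 0.7 = the flakier run above
  max_tokens: 512,
});
```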


I put this prompt in:

Write a javascript file using the Express framework in NodeJS containing CRUD routes for a TODO entity that contains a name, a description, and datapoint indicating whether the TODO is completed or not, and connect this entity to the Sequelize module from the npm package manager

And it spat out a full tutorial on how to write an Express CRUD app hooked up to Sequelize with SQLite, with Sequelize types and all the CRUD routes. It's pretty impressive.


Huh? Is it just me, or is this OpenAI website literally demanding a cellular phone number? I only have a landline number.

What is this? I just gave my email. Isn't that enough?


I've been playing with it, asking it to create a rest api for example for some blockchain testnet etc.

My assumption up until now is that it provides a kind of template result for many things, which you could have searched for online. It's like a complement to a search engine.


Did the code work?


It looks correct from what I can tell.


The trouble is that it doesn't know where it copied from.


I would like to use this, but I find it weird that it requires a phone number to use.


This works wonders assuming all code moving forwards will only be written with tools it was trained on. I imagine it will have no clue how to generate code for a fresh new framework.



Can it find errors, though? I mean, when people start to rely too much on a computer brain, not using or training their own, they essentially become slaves to this system.


WITCH companies “consultants” would be avid users of this feature.


This is awesome, and would be amazing for tutorials as well. I can code (defined loosely) in Python, but it seems like this would accelerate learning a new language significantly.


Can someone please try asking it to find proofs for unsolved mathematical conjectures?


Is this why Elon Musk is firing so many programmers at Twitter?


After playing with it, this is the first thing that came to my mind as well.


This seems like the ideal "developer friend" who can intelligently search Google for a nearly perfect answer.


I wonder if ChatGPT could solve a travelling salesman problem?


It never will. It's just a glorified autocompleter. It's not capable of coming up with good explanations.


I don't know, this other commenter begs to differ: https://news.ycombinator.com/item?id=33819439


https://regex101.com/ already does this, and it didn't need a billion dollars in funding to do it.


"I don't need a general purpose computer, I already have my trusty calculator right here."


After using this for 1 day I realised I jumped the gun. ChatGPT is awesome.


Glad to see someone changing their mind on here. I also would like to say that I have tempered my excitement since yesterday too, as like in your other comment I've found some ways that it can be wrong, often in subtle ways.


I would rather have my calculator than a general purpose noise generating computer. Thanks

https://pbs.twimg.com/media/Fi4A0W8UAAAFF47?format=jpg&name=...


Maybe, but I think AlphaTensor is a better bet when it comes to finding new algorithms.


Does it really generate the text like that, little by little, or is that just a needless animation?


It's literally the model's output, in real time. The GPT-3 API lets you turn streaming on or off, but if you don't enable it, you have to wait much longer for the completion to finish entirely before getting anything back.


Yes because it's the way the AI's architecture works, it generates the next token in a phrase based on the previous tokens. You can see it happen in real time or you can wait until it's all finished.


Wow



