
AlphaFold is a game changer, but nowhere near the game changer ChatGPT (GPT-4) is, even if ChatGPT were only available to the subset of scientists who benefit from AlphaFold. We are literally arguing semantics if this is AGI, and you're comparing it to a bespoke ML model that solves a highly specific domain problem (as intractable and impressive as that problem was).


> We are literally arguing semantics if this is AGI,

And if it isn't? Literally every single argument I've seen towards this being AGI is "We don't know at all how intelligence works, so let's say that this is it!!!!!"

> nowhere near the game changer ChatGPT(4) is, even if ChatGPT was only available for the subset of scientists that benefit from Alpha Fold

This is utter nonsense. For anyone who actually knows a field, ChatGPT generates unhelpful, plausible-looking nonsense. Conferences are putting up ChatGPT answers about their fields to laugh at because of how misleadingly wrong they are.

This is absolutely okay, because it can be a useful tool without being the singularity. I'd wager that in a couple of years' time, most of what ChatGPT achieves will be in line with most of the tech industry's advances in the past decade - pushing the bottom out of the labor market and actively making the lives of the poorest worse in order to line their own pockets.

I really wish people would stop projecting hopes and wishes on top of breathless marketing.


Your experience and my experience do not align.

I asked GPT-4 to give me a POSIX-compliant C port of dirbuster. It spit one out with instructions for compiling it.

I asked it to make it more aggressive at scanning and it updated it to be multi-threaded.

I asked it for a word list, and it gave me the git command to clone one from GitHub and the command to compile the program and run the output with the word list.

I then told it that the HTTP service I was scanning always returned 200 with a "status=ok" body instead of a 404, and asked it for a patch file. It generated one and gave me the instructions for applying it to the program.
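The patch boiled down to something like the following. This is a rough sketch with illustrative names, not the code it actually generated:

    #include <stdio.h>
    #include <string.h>

    /* The server never 404s: it answers every path with 200 and a
     * canned "status=ok" body, so treat that body as a miss too.
     * (Illustrative sketch, not the actual generated patch.) */
    static int is_miss(long status, const char *body)
    {
        return status == 404 ||
               (status == 200 && strstr(body, "status=ok") != NULL);
    }

    int main(void)
    {
        printf("%d\n", is_miss(200, "status=ok"));         /* 1: treated as a miss */
        printf("%d\n", is_miss(200, "<html>pong</html>")); /* 0: a real hit */
        return 0;
    }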

There was a bug I had to fix: word-list entries aren’t prefixed with /. Other than that one-character fix, GPT-4 wrote a C program that used an open-source word list to scan the HTTP service running on the television in my living room for routes, and found the /pong route.
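The fix itself was of this shape (again, an illustrative sketch rather than the actual source):

    #include <stdio.h>

    /* Word-list entries come without a leading '/', so prepend one when
     * building the request path; this is the one-character fix mentioned
     * above. (Illustrative names, not dirbuster's actual code.) */
    static void build_path(char *out, size_t n, const char *word)
    {
        snprintf(out, n, "/%s", word);  /* was "%s" before the fix */
    }

    int main(void)
    {
        char path[256];
        build_path(path, sizeof path, "pong");
        printf("GET %s HTTP/1.1\n", path);  /* GET /pong HTTP/1.1 */
        return 0;
    }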

This week it’s written 100% of the API code that takes a CRUD-based REST API and maps it to and from SQL queries for me on a Cloudflare Worker. I give it the method signature and the problem statement, it gives me the code, and I copy and paste.

If you’re laughing this thing off as generating unhelpful nonsense, you’re going to get blindsided in the next few years as GPT gets wired into the workflows at every layer of your stack.

> pushing the bottom out of the labor market and actively making the lives of the poorest worse in order to line their own pockets.

I’m in a BNI group, and the majority of these blue-collar workers have very little to worry about with GPT right now. Until Boston Dynamics gets its stuff together and the robots can do drywalling and plumbing, I’m not sure I agree with your take. This isn’t coming for the “poorest” among us. This is coming for the middle class: from brand consultants and accountants to software engineers and advertisers.

Software engineers with GPT are about to replace software engineers without GPT. Accountants with GPT are about to replace accountants without GPT.

> Literally every single argument I've seen towards this being AGI is

Here is one: it can simultaneously pass the bar exam, port dirbuster to POSIX-compliant C, give me a list of competing brands for conducting a market analysis, get into deep philosophical debates, and help me file my taxes.

It can do all of this simultaneously. I can't find a human capable of the simultaneous breadth and depth of intelligence that ChatGPT exhibits. You can find someone in the 90th percentile of any profession and show that they can outcompete GPT-4. But you can't take that same person and have them outcompete someone in the bottom 50th percentile of four other fields with much success.

Artificial = machine: check. Intelligence = exhibits Nth-percentile intelligence in a single field: check. General = exhibits Nth-percentile intelligence in more than one field: check.

This is AGI, now we are nit-picking. It's here.


Maybe it's heavily biased towards programming and computing questions? I've tested GPT-4 on numerous physics questions and it fails spectacularly at almost all of them. It starts to hallucinate egregious things that are completely false, misrepresents articles it tries to quote as references, etc. It's impressive as a glorified search engine in those cases, but it can't at all be trusted to explain most things unless they're the most canonical curriculum questions.

This extreme difficulty in discerning what it hallucinates and what is "true" is its most obvious problem. I guess it can be fixed somehow, but right now it has to be heavily fact-checked manually.

It does this for computing questions as well, but there is some selection bias: people tend to post the success stories and not the failures. However, it's less dangerous in computing, as you'll notice the problems immediately, so it may require less manual labour to keep it in check.


> This is AGI, now we are nit-picking. It's here.

Hahaha, if you want nit-picking: all the language tasks ChatGPT is good at are strictly human tasks, not general tasks. Human tasks are all related to keeping humans alive and making more of us; they don't span the whole spectrum of possible tasks where intelligence could exist.

Of course, within language tasks it is as general as can be, yet it still needs to be placed inside a more complex system with tools to improve accuracy. An LLM alone is like a brain alone - not that great at everything.


On the other hand, if you browse around the web you will find various implementations of dirbuster, probably in C and certainly in C++, that are multi-threaded. This isn’t to take away from your experience, but without knowing what’s in the training set, it may already have been exposed to what you asked for, even several times over.

I have a feeling they had access to a lot of code on GitHub; who knows how much code they actually accessed. Copilot for a long time said it would use your code as training data, including context, if you didn’t explicitly opt out, so that’s already millions, maybe hundreds of millions, of lines of code scraped.

The conspiracy theorist in me wonders if MS didn’t just provide access to public and private code to train on. They wouldn’t even have told OpenAI, just said, “here’s some nice data.” It’s all secret and we can’t see the model’s inputs, so I’ll leave it at that. I mean, they’ve obviously prepared the data for Copilot, so it was there waiting to be trained on.

So yeah, I feel your enthusiasm, but if you think about it a little more, is it so hard to imagine that what you saw was actually rather simple? Every time I write code I feel kind of depressed, because I know almost certainly someone has already written the same thing, and it’s sitting on GitHub or somewhere else and I’m wasting my time.

ChatGPT just takes away the need to know where to find the thing you want (it’s already seen almost everything the average person can think of) and gives it to you directly. Have you never thought about this? Like, you knew all the code you wanted was already out there somewhere, but you just didn’t have an interface to get to it? I’ve thought about this for quite a while, and I suspected there would be big-data people running experiments who could show that probably 80-90% of the code on GitHub is pretty much identical.

Nothing is magic, right?


> If you’re laughing this thing off as generating unhelpful nonsense, you’re going to get blindsided in the next few years as GPT gets wired into the workflows at every layer of your stack.

Okay, now try being a scientist in a scientific field that isn't basic coding.

It's not people laughing at pretences, it's people who know even basic facts about their field literally looking at the output today and finding it deeply, fundamentally incorrect.


I do not believe that is a reasonable threshold for AGI. If it were, I believe a significant percentage of humans would individually fail to meet the threshold of AGI.

I wonder what your personal success rate would be if we did a Turing test with the “people” who “know basic facts about their field.” If they sat at a computer and asked you all these questions, would you get them right? Or would you end up in slide decks being held up as a reason why misnome doesn’t qualify as AGI?

I find comfort in knowing that it can’t “do science.” There is a massive amount of stuff it can do. I’m hopeful there will be stuff left for humans.

Maybe we’ll all be scientists in 10 years and I won’t have to waste my life on all this “basic coding” stuff.


> ChatGPT generates unhelpful, plausible-looking nonsense.

Absolutely not! I created a PowerShell script for converting one ASM label format to another for retro game development, and I used ChatGPT to write it. Now, it fumbled some of the basic program logic; however, it absolutely nailed all of the specific regex and obtuse PowerShell commands that I needed and that I merely described to it in plain English.

It essentially aced the "hard parts" of the script, and I was able to take what it generated and make it fit my needs perfectly with some minor tweaking. The end result was far cleaner and far beyond what I would have been able to write myself, all in a fraction of the time. This ain't no breathless marketing, dude: this thing is the real deal.

ChatGPT is an extremely powerful tool and an absolute game changer for development. Just because it is imperfect and needs a bit of hand-holding (which it may not for long), do not underestimate it, and do not discount the idea that it may become an absolute industry disruptor in the painfully near future. I'm excited... and scared.


>> ChatGPT generates unhelpful, plausible-looking nonsense.

> Absolutely not!

It does, quite often. It doesn't do that exclusively, as you describe, but it does do it.

For example, I asked it what my most cited paper is, and it made up a plausible-sounding but non-existent paper, along with fabricated Google Scholar citation counts. Totally unhelpful.

It can also produce very useful things.


Right, I think it's a question of how to use this tool in its current state, including prompting practice and learning its strengths. It can certainly be wrong sometimes, but man, it is already a game changer for writing, coding, and I'm sure other disciplines.

If you're a robotresearcher, maybe try getting it to whip up some... Verilog circuits or something? I don't know much about your field or what you do specifically, but it is absolutely brilliant at tasks like regular expressions or specific code syntax, whatever the equivalent of that is in hardware. ...I've only ever replaced capacitors and wired some guitar pickups.

> it made up a plausible-sounding but non-existent paper, along with fabricated Google Scholar citation counts

I ran into a similar issue: I asked it for codebases of romhacks similar to a project I'm doing, and it provided made-up GitHub repos, with completely unrelated authors, for romhacks that do actually exist: non-existent hyperlinks and everything.

Now, studying the differences across GPT generations, it seems like more horsepower and more data solve a lot of GPT's problems and produce emergent capabilities with the same or similar architecture and code. The current data points to this trend continuing. I find it both super exciting and super... concerning.


>> I asked it what my most cited paper is

You asked a machine learning model to tell you something about yourself that you already knew?


This seems like the perfect test, because it's something that does have information on the internet - but not infinite information - and you know precisely what is wrong with the answer.


I find it’s better at really mainstream things. The web is riddled with PowerShell examples.


I'd recommend reading the Sparks of AGI paper[1] and watching the accompanying video[2]

[1] - https://arxiv.org/abs/2303.12712

[2] - https://m.youtube.com/watch?v=qbIk7-JPB2c


> I'd wager that in a couple of years' time, most of what ChatGPT achieves will be in line with most of the tech industry's advances in the past decade - pushing the bottom out of the labor market and actively making the lives of the poorest worse in order to line their own pockets

This is not what any of the US economic stats have looked like in the last decade.

Especially since 2019, the poorest Americans are the only people whose incomes have gone up!


> ChatGPT generates unhelpful, plausible-looking nonsense.

I use ChatGPT daily to generate code in multiple languages. Not only does it generate complex code, but it can explain it and improve it when prompted to do so. It's mind-blowing.


GPT-4 can pass the neurosurgical medical boards; most of the people laughing at it are typically too dumb to note the difference between 3.5 and 4.

>pushing the bottom out of the labor market and actively making the lives of the poorest worse in order to line their own pockets.

This makes zero sense. GPT-4 has little effect on a janitor or truck driver. It doesn't pick fruit or wash cars.


FWIW, as a non-pathologist with a pathologist for a father, I can almost pass the pathology boards when taken as a test in isolation. Most of these tests are very easy for professionals in their fields, and are just a Jacksonian barrier to entry. Being allowed to sit for the test is the hard part, not the test itself.

As far as I know, the exception to this is the bar exam, which GPT-4 can also pass, but that exam plays into GPT-4's strengths much more than other professional exams.


> the exception to this is the bar exam

FWIW, this is more true for CA than most states.


What is a Jacksonian barrier to entry? I can't find the phrase "Jacksonian barrier" anywhere else on the internet except in one journal article that talks about barriers against women's participation in the public sphere in Columbia County NY during Andrew Jackson's presidency.


I may have gotten the president wrong (I was 95% sure it was named after Jackson until I Googled it), but the word "Jacksonian" was meant to refer to the addition of bureaucracy to a process to make it cost more, and thus discourage people. I guess I should have said "red tape" instead...

Either it's a really obscure usage of the word or I got the president wrong.


"It's difficult to attribute the addition of bureaucracy or increased costs to a specific U.S. president, as many presidents have overseen the growth of the federal government and its bureaucracy throughout American history. However, it is worth mentioning that Lyndon B. Johnson's administration, during the 1960s, saw a significant expansion of the federal government and the creation of many new agencies and programs as part of his "Great Society" initiative. This expansion led to increased bureaucracy, which some argue made certain processes more expensive and inefficient. But it's important to note that the intentions of these initiatives were to address issues such as poverty, education, and civil rights, rather than to intentionally make processes more costly or discourage people.

Signed,

Guess Who"


Yeah, I asked ChatGPT about it too, and unsurprisingly (and unhelpfully) got answers that pointed to every president.


Hmm, which version did you ask? GPT-4 went immediately to LBJ, whom I assume you had in mind originally.

That is not an instance of passing a standardized academic test through "autocompletion" or "regurgitation." It's rudimentary synthetic thought.

If it had named a different president, I could have argued with it, which is what I find especially interesting.


lmao, do you have any idea how much time medical students spend studying for the STEP exams (prior to them becoming P/F)?


Exams are designed to be challenging to humans because most of us don’t have photographic memories or RAM-based memory, so passing the test is a good predictor of knowing your stuff, i.e., deep comprehension.

Making GPT sit it is like getting someone with no knowledge, but with a computer full of past questions and answers and a search button, to sit the exam. It has metaphorically written its answers on its arm.


This is essentially true. I explained it to my friends like this:

It knows a lot of stuff, but it can't do much thinking, so the minute your problem and its solution are far enough off the well-trodden path, its logic falls apart. Likewise, it's not especially good at math. It's great at understanding your question and replying with a good plain-English answer, but it's not actually thinking.


That's a disservice to your friends, unless you spend a bunch of time defining thinking first - and even then, it's not clear that it, with what it knows and the computing power it has access to, doesn't "think". It totally does a bunch of problem solving; it fails on some and succeeds on others (just like a human that thinks), and GPT-4 is better than GPT-3. It's quite successful at simple reasoning (e.g. https://sharegpt.com/c/SCeRkT7) and moderately successful at difficult reasoning (e.g. solving the puzzle about the man, the fox, the chicken, and the grain trying to cross the river; GPT-3 fails if you substitute in different animals, but GPT-4 seems to be able to handle that). GPT-4 has passed the bar exam, and the LSAT has a whole section of logic puzzles (sample test questions from '07: https://www.trainertestprep.com/lsat/blog/sample-lsat-logic-... ).

It's able to define new concepts and new words. Its masters have gone to great lengths to prevent it from writing out particular types of judgements (e.g. https://sharegpt.com/c/uPztFv1). Hell, it's got a great imagination if you look at all the hallucinations it produces.

All of that adds up to many thinking-adjacent things, if not actual thinking! It all really hinges on your definition of thinking.


Exactly. It's almost like saying dictionaries are better at spelling bees and hence smarter than humans, or that computers can easily beat humans at Tetris and are smarter because of that.


> It has metaphorically written its answers on its arm.

See GPT-4's reply to pclmulqdq at https://news.ycombinator.com/item?id=35648144 .

That's not a response from someone who wrote the answers on the inside of their elbow before coming to class. That's genuine inductive reasoning at a level you wouldn't get from quite a few real, live human students. GPT-4 is using its general knowledge to speculate on the answer to a specific question that has possibly never been asked before, certainly not in those particular words.


It is hard to tell what is really happening. At some level, though, it is deep reasoning by humans, turned into intelligent text, and run through a language model. If you fed the model garbage, it would spit out garbage. Unlike a human child, who tends to know when you are lying to them.


> If you fed the model garbage, it would spit out garbage.

(Shrug) Exactly the same as with a human child.

> Unlike a human child, who tends to know when you are lying to them.

LOL. If that were true, it might have saved Fox News $800 million. Nobody would bother lying, either to children or to adults, if it didn't work as well as it does.


There are literally millions of kids who think a fat man with a beard delivers them presents once a year if they're nice.


> We are literally arguing semantics if this is AGI

It isn't, and nobody with any experience in the field believes this. This is the Alexa / IBM Watson syndrome all over again: people are obsessed with natural language because it's relatable and it grabs the attention of laypeople.

Protein folding is a major scientific breakthrough with big implications in biology. People pay attention to ChatGPT because it recites the Constitution in pirate English.


This is like all the other rocket companies dismissing what SpaceX is doing as not a big deal. You can keep arguing semantics while they keep putting actual satellites and people into orbit every month.

I use ChatGPT every day to solve real problems, as if it were my assistant, and most genuinely intelligent people I know do as well. People with “experience in the field”, in my opinion, can often get a case of sour grapes that they internalize and project through their seeming expertise, going blind to reality in order to preserve some sense of calm.


ChatGPT cannot reason from or apply its knowledge - it is nowhere near AGI.

For example, it can describe concepts like risk-neutral pricing and replication of derivatives, but it cannot apply that logic to show how to replicate something non-trivial (i.e., not just repeating well-published examples).
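To make "replication" concrete, here is the trivial, well-published case it can recite: the one-period binomial replication of a call. (My own illustrative sketch with textbook numbers, not GPT output.)

    #include <stdio.h>

    /* One-period binomial replication of a call option: hold delta
     * shares plus B in bonds so the portfolio matches the option's
     * payoff in both the up and down states. Textbook numbers. */
    int main(void)
    {
        double S0 = 100.0, Su = 120.0, Sd = 90.0; /* stock now, up, down */
        double K = 100.0, r = 0.05;               /* strike, one-period rate */
        double Cu = (Su > K) ? Su - K : 0.0;      /* option payoff up: 20 */
        double Cd = (Sd > K) ? Sd - K : 0.0;      /* option payoff down: 0 */

        double delta = (Cu - Cd) / (Su - Sd);     /* shares to hold: 2/3 */
        double B = (Cu - delta * Su) / (1.0 + r); /* bond position (borrowing) */

        /* The cost of the replicating portfolio is the option's price. */
        printf("delta = %.4f, bond = %.2f, price = %.2f\n",
               delta, B, delta * S0 + B);         /* price ~ 9.52 */
        return 0;
    }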


The domain is protein structure, something with potentially gigantic applications to life. Predicting proteins may yet prove more useful than predicting text.


“Predicting proteins”? I’m a biologist, and I can assure you that knowing the rough structure of a protein from its sequence is nowhere near as important to biology as everyone makes it out to be. It is Nobel-prize-worthy, to be sure, but Nobel prizes are awarded once a year, not once a century.


It could be interesting to correlate DNA with text produced by people. Both are self-replicating, self-evolving languages.


If so, where are the applications of this? Is it too early?



