Hacker News | vishal0123's comments

Just because you can't run it doesn't mean it wouldn't be useful. It will likely give insight into the kind of modelling, experiments, goal functions, deployment methods, resiliency, etc. And you could fairly easily reverse engineer the policy decisions from the code (IMO), but the other way around is not possible.


I'm definitely not expecting something runnable.

I think reverse engineering the policy from the code will be hard – this is why no one really understands these systems in full.

However, I think that given the policy, and assuming we trust that Twitter does in fact attempt to implement that policy, the code doesn't really matter. We wouldn't be able to run the code anyway, and bugs aren't really a problem compared to the intention, as Twitter would supposedly be constantly working to make the code match the policy.


> some idiot executive don't put it in full charge of mortgage underwriting

There are two scenarios that you are mixing:

Mortgage underwriting using GPT makes more money for the lender: I don't think it is the tech community's responsibility to give dishonest advice against GPT; that should be handled through legal means.

GPT fails at mortgage underwriting and using GPT could mean a loss for the lender: that would correctly push the market away from relying on LLMs, and I don't have any sympathy for those companies.


The issue is when the lender is too big to fail and gets bailed out at the taxpayer's expense.


Exactly my setup. I tried to use `conda install` a few times, but every time, after just a few globally installed packages, conda's SAT solver starts to struggle, and I now live with the assumption that if an incompatible package combination doesn't throw an error in the dev environment, it is likely fine.


FYI the libmamba solver released last year is way faster than the classic one at modifying environments.
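
To switch an existing install over (a minimal sketch, assuming conda >= 22.11, where the conda-libmamba-solver package and the "solver" config option are available):

  conda install -n base conda-libmamba-solver
  conda config --set solver libmamba

Newer conda releases have since made libmamba the default solver, so on a fresh install the second step may already be unnecessary.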


Long-form messaging like letters is close to dead due to the telephone and SMS.

The Internet eliminated the need to go to the library to find information.

Hell, even books/writing changed the way people acquired and stored information, and people no longer needed to be present at the university. Plato was against writing at a time when writing was just becoming widely accessible, and just 2000 years later we literally can't imagine a world without writing.


I think the biggest value would be in products tracking users, just as it currently is with Google, Facebook, etc. Assuming AI could learn from low-quality data the way humans can, these companies already have a huge dataset available to them, multiple GBs per living person, including (verified) human-written texts, search history, browsing history, video call logs/transcripts, translation transcripts, etc.

In the future, companies could even pay users for access to their keyboard and mic to get data that is verified to be human.


From the paper:

> We tested GPT-4 on a diverse set of benchmarks, including simulating exams that were originally designed for humans. We did no specific training for these exams. A minority of the problems in the exams were seen by the model during training; for each exam we run a variant with these questions removed and report the lower score of the two. We believe the results to be representative. For further details on contamination (methodology and per-exam statistics), see Appendix C.


They allow others to use it for noncommercial purposes. Other research groups won't have to use OpenAI APIs for some of the use cases, hence the model is competing.


I am surprised that they were able to launch this on Stanford's domain. They clearly broke the TOS of both Facebook and OpenAI, and even admitted to doing so. I would be happy if researchers decided to ignore OpenAI's and Facebook's useless restrictions.


What part of Facebook's TOS did they break?


Doing that would make it less likely that they would publish a model next time.


68/2, not 682


So, if I understand correctly, that's what you need to run the best model?

With GPU:

VRAM + RAM >= 68/2

Without GPU:

RAM >= 68/2


Not sure about the "=" part. You'd want some memory for the compositor and other OS graphics, and regular RAM for OS and programs, no?
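
If the 68/2 figure is in GB (an assumption here, roughly 34 GB), a quick way to see what a given box actually has, assuming Linux with an NVIDIA card:

  free -h                                              # total system RAM
  nvidia-smi --query-gpu=memory.total --format=csv     # VRAM per GPU

In practice you'd leave a few GB of headroom on top of that rather than treating ">=" as exact.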


> Thus, on one hand, I'm glad they're doing this, as it should help prevent wider bank runs

Could you expand? My first thought was that a bank that is on the verge of crisis could tip over with the additional burden.

