
I'm equally excited and terrified. Excited for the possibilities of a new technological revolution, but terrified of all the potential abuses of technology that revolution would bring. What is stopping our adversaries from developing malicious AI models and unleashing them on us?


> What is stopping our adversaries from developing malicious AI models and unleashing them on us?

That fear is a big part of OpenAI’s reasoning behind not open sourcing their models. So in the immediate term I’d say malicious uses are limited by the models' locked-down nature. Of course, that’ll eventually end. The key research that makes this possible is open, and access will eventually be democratized.

My personal take, which I know is controversial, is that by locking down these models but still making them available over a GUI/API, the world can better prepare itself for the eventual AI onslaught. Just raising awareness that the tech has reached this level is helpful. Still not sure how we’ll deal with it when the bad actors come, though.


Are you sure that access will be democratized? What if you need $100k worth of equipment to run it, partly because of the sheer number of weights, and partly because corporations drive spectacularly high demand for GPUs, pushing prices higher? Just having the algorithm is not enough to guarantee it, unfortunately.


I would be very surprised if not.

At least some state actors will invest the fairly negligible money to get to where GPT-4 is now. It does not need to be cost efficient to train or run.

Its total cost is not even near the scope of a space program, or even a major military research project.

With 10-100 million dollars you can probably get most of the way there once it gets prioritized.



