
Ah yes, the user who modifies requests then gets to use an unreleased plugin, which is a vulnerability because... The user could then ask to use a plugin that will return useless info?

Pray tell, what will a client-side-only tool that you have to modify requests to access do that is dangerous?
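For the record, “modifying requests” here means something like the mitmproxy sketch below. The endpoint path and the JSON field names are my guesses for illustration; the actual ChatGPT backend API may look nothing like this.

    # Hypothetical mitmproxy addon script, run with: mitmproxy -s unlock.py
    # The path and field names below are guesses, not the real backend API.
    import json
    from mitmproxy import http

    PLUGINS_PATH = "/backend-api/plugins"  # hypothetical plugins-list endpoint

    def response(flow: http.HTTPFlow) -> None:
        if PLUGINS_PATH not in flow.request.path:
            return
        body = json.loads(flow.response.get_text())
        # Flip a hypothetical approval flag so unreleased plugins show up in the UI.
        for plugin in body.get("items", []):
            plugin["status"] = "approved"
        flow.response.set_text(json.dumps(body))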



> The user could then ask to use a plugin that will return useless info?

Plugins don’t just “return data”; they perform arbitrary actions. [0] “Unreleased” presumably means, in some cases, “not adequately tested” with respect to the instructions the model gets about when and how to use the API that the plugin wraps. An unreleased plugin is effectively untested code wrapping an exposed, possibly read/write, API.

[0] Examples given in the OpenAI plugins docs include “booking a flight, ordering food, etc.”
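For concreteness, a plugin is declared by an ai-plugin.json manifest pointing at an OpenAPI spec. Below is an abridged example in the documented shape, written as a Python dict; all the values are made-up placeholders, not a real plugin.

    # Abridged ai-plugin.json manifest (shape per OpenAI's plugin docs),
    # written as a Python dict; all values are illustrative placeholders.
    food_ordering_manifest = {
        "schema_version": "v1",
        "name_for_human": "Food Ordering",
        "name_for_model": "food_orders",
        "description_for_human": "Order food from local restaurants.",
        # The model reads this text to decide when and how to call the API --
        # exactly the part an unreleased plugin may not have tested yet.
        "description_for_model": "Order food for delivery. Use for any food request.",
        "auth": {"type": "none"},
        "api": {
            "type": "openapi",
            # The spec behind this URL can expose write operations, e.g.
            # POST /orders -- real side effects, not just returned data.
            "url": "https://example.com/openapi.yaml",
        },
        "logo_url": "https://example.com/logo.png",
        "contact_email": "support@example.com",
        "legal_info_url": "https://example.com/legal",
    }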


So, let's get this straight.

You modify the request to display unverified plugins.

You turn one on (at random? You just see a plugin that says it will invest all of your money in an ETF and return 300%, and you just click it?)

You give that plugin everything it needs to interact with the things that matter. You let it log in to your bank account and your Robinhood account.

Then you get surprised when “untested” or harmful code gets executed by the hidden plugin?

Do you blame the person who gave you a knife and told you to only use it on vegetables when you cut yourself trying to cut a slab of metal?


If I’m the plugin supplier and OpenAI tells me access will be limited to a group of <15 testers that I’ve cleared (all of whom are either internal or under contract), and then I’ve got to deal with external users using the unreleased plugin and causing fallout, I’m going to blame OpenAI for lying to me.

(Of course, to be fair, the basic model OpenAI uses creates exposure that doesn't even go through OpenAI, which would worry me from the start, and really dedicated API users could just implement ReAct and their own actions against any API, but that's higher effort and not particularly facilitated by OpenAI.)
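(A minimal sketch of that DIY ReAct route, using the openai Python client of the era and a made-up get_balance tool standing in for “any API”:)

    # Minimal ReAct-style loop: the model emits "Action: <tool>: <input>"
    # lines, we run the tool and feed back an Observation until it answers.
    # get_balance() is a made-up stand-in for "their own actions against any API".
    import re
    import openai

    def get_balance(account: str) -> str:
        return f"Balance for {account}: $1,234.56"  # stub for a real API call

    TOOLS = {"get_balance": get_balance}

    SYSTEM = (
        "Think step by step. To use a tool, reply with exactly one line:\n"
        "Action: <tool_name>: <input>\n"
        "Available tools: get_balance. When finished, reply: Final: <answer>"
    )

    def react(question: str, max_steps: int = 5) -> str:
        messages = [{"role": "system", "content": SYSTEM},
                    {"role": "user", "content": question}]
        for _ in range(max_steps):
            reply = openai.ChatCompletion.create(
                model="gpt-3.5-turbo", messages=messages,
            )["choices"][0]["message"]["content"]
            messages.append({"role": "assistant", "content": reply})
            match = re.search(r"Action: (\w+): (.+)", reply)
            if not match:
                return reply  # "Final: ..." (or the model broke the format)
            tool, arg = match.groups()
            observation = TOOLS.get(tool, lambda a: "unknown tool")(arg.strip())
            messages.append({"role": "user", "content": f"Observation: {observation}"})
        return "step limit reached"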


So, without any information, you assume that _every_ plugin ever put out is there, rather than just the ones that were publicly published, and go on to build your very own strawman. No possibility of there being internal testing tracks, or of there being no testing tracks and OpenAI just telling you to handle testing on your own (by not publishing the plugin, for example)?


> So, without any information, you assume that _every_ plugin ever put out is there, rather than just the ones that were publicly published

No, we know that the unlisted ones that are accessible are the ones that weren’t supposed to be publicly published and are in limited testing.

> No possibility of there being internal testing tracks, or of there being no testing tracks and OpenAI just telling you to handle testing on your own (by not publishing the plugin, for example)

I mean, no, because OpenAI publishes what the testing setup is, so we know the case isn’t “no testing tracks and OpenAI just telling you to handle testing on your own”.

And it’s not clear how you could test the plugin without using OpenAI’s published approach, since we don’t know whether OpenAI tunes the model on the plugin’s information or references it in the hidden prompt somehow. Only in the latter case could an external party even in theory test before exposing the plugin, and that would require detailed information that OpenAI hasn’t provided in its developer documentation.
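(To illustrate the hidden-prompt case: the best an outside party could do is something like the sketch below, pasting the plugin’s model-facing description and OpenAPI spec into a system prompt themselves and checking which operation the model picks. That’s a guess at the mechanism, not how OpenAI actually wires it up.)

    # Rough self-test for the "referenced in the hidden prompt" hypothesis:
    # inject the plugin's model-facing text yourself and inspect the choice.
    # This is a guess at the mechanism; OpenAI hasn't documented the format.
    import openai

    def simulate_plugin_exposure(description_for_model: str,
                                 openapi_spec: str,
                                 user_message: str) -> str:
        system = (
            "You can call the following plugin.\n"
            f"Plugin description: {description_for_model}\n"
            f"OpenAPI spec:\n{openapi_spec}\n"
            "Reply with the operation you would call and its arguments."
        )
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": user_message}],
        )
        return resp["choices"][0]["message"]["content"]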



