I don't think _this_ is a security check. These are essentially unlisted plugins. They will be released/approved eventually, and this is a way to allow some users to test the plugins (with different client-side software).
How is this a security flaw? Recently I made a browser extension to change the theme of a website. I discovered from the HTML that there was already a (work-in-progress) dark mode that could be enabled by adding a CSS class to the root element.
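For a sense of how trivial that is, here's a minimal sketch of what the extension's content script does (the class name is hypothetical; it would be whatever the site's own stylesheet keys off of):

```typescript
// Content-script sketch: flip on a site's unreleased dark mode by adding the
// CSS class its stylesheet already checks for. "dark-mode" is a hypothetical
// class name, not any particular site's.
document.documentElement.classList.add("dark-mode");
```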
Did I hack the site by using this unreleased feature?
It's an indication of their ability to keep things secret they mean to keep secret. It's very reasonable for anyone but the absolute biggest sites to think "who cares if someone sees our WIP dark mode?". On the other hand, it really seems like OpenAI wouldn't have wanted this out yet and just half-assed hiding it.
> It's an indication of their ability to keep things secret they mean to keep secret.
That's a fairly large assumption: that the secret was important to them. Sometimes a curtain in front of the stage is all that's needed until things are ready.
It feels like you're trying to be just vague enough that you won't be called out on being wrong.
The plugin system is completely open. It's a manifest file, like a robots.txt. You can hook the API up to those endpoints yourself with minimal technical skill (see the sketch below).
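To make that concrete, here's a minimal sketch; the host is hypothetical, and the `/.well-known` path and field names follow the publicly documented ai-plugin.json format:

```typescript
// Sketch: discover a plugin's manifest and the OpenAPI spec it points to.
// "plugin.example.com" is a hypothetical host.
interface PluginManifest {
  schema_version: string;
  name_for_model: string;
  description_for_model: string;
  api: { type: string; url: string };
}

async function discoverPlugin(host: string): Promise<void> {
  const res = await fetch(`https://${host}/.well-known/ai-plugin.json`);
  const manifest: PluginManifest = await res.json();

  // The manifest points at an ordinary OpenAPI spec; any HTTP client can
  // read it and call the endpoints directly. ChatGPT is just one consumer.
  const spec = await (await fetch(manifest.api.url)).text();
  console.log(manifest.name_for_model, "-", manifest.description_for_model);
  console.log(spec.slice(0, 200));
}

discoverPlugin("plugin.example.com").catch(console.error);
```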
Many people had already integrated Wolfram, and that was before there was an open format specifically designed to be easily integrated into ChatGPT.
At the end of the day, for as overused as the term "FUD" is on this site, this is the first time I've actually seen it in action.
> It's an indication of their ability to keep things secret they mean to keep secret.
Leaving aside OpenAI’s intention, it’s an indication of OpenAI’s failure to restrict access to something that they represented to plugin suppliers would be restricted to only a small, supplier-identified group of testers.
They are saying that a client-side-only "ACL" is sloppy and that it could be an indication of even more internal slop (of which the title leak may be another symptom).
I suspect it was a deliberate decision not to ACL plugins.
They let anyone create and use one after all.
The only reason approval exists at all is so users aren't tricked into running low quality or spammy plugins.
You could consider this similar to someone revealing that lots of apps banned from app stores are available on other websites, and one could write the headline "banned app leaks onto apkmirror.com, is Google security compromised?"
> The only reason approval exists at all is so users aren't tricked into running low quality or spammy plugins.
Really? Isn’t letting the suppliers of plugins verify that ChatGPT understands their descriptions and uses them as expected (especially for ones which perform actions beyond data retrieval), before releasing them into the wild as intermediaries between users and the systems they expose, part of that?
No, you could consider this similar to someone revealing that lots of apps banned from app stores are available on the same app stores that they’re banned from, which, yeah, looks a bit dodgy.
Nope, banned is banned - this is exactly like “someone has found a way to distribute a certificate that lets you install an in-review app from the App Store; it might be rejected later, though.”
Whether something is a security problem or not requires a threat model and a notion of what the appropriate functioning of the system is. For all we know, OpenAI intended to release these plug-ins this way, sort of like those bars that require a "secret password" to create a sense of mystery.
As an external observer, all I can say is that controlling access to plug-ins via client-side validation was an unusual choice, and it makes me worried they made the same unusual choice elsewhere to protect data I care about.
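To illustrate the distinction, here's a hedged sketch of the general pattern (not OpenAI's actual code):

```typescript
interface Plugin { id: string; approved: boolean; testers: string[] }
interface User   { id: string; isTester: boolean }

// Client-side "ACL": the server hands back everything and the UI merely
// filters what it renders. Anyone who talks to the API directly sees it all.
function visiblePlugins(all: Plugin[], user: User): Plugin[] {
  return all.filter(p => p.approved || user.isTester);
}

// Server-side ACL: unapproved plugins never leave the server unless the
// requesting user is actually on that plugin's tester list.
function listPlugins(all: Plugin[], user: User): Plugin[] {
  return all.filter(p => p.approved || p.testers.includes(user.id));
}
```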
If you look at the list of plugins, some are non-prod versions and some are internal to other companies - e.g. the IAM server for Netflix’s Workforce tools.
I don't think the following plugins will be released to the public. Even the fact that these plugins exist on a production server somewhere, and can be actively used, probably tells you how seriously OpenAI takes "alignment".
> evil plugin
> DAN plugin
> froge
> evil status
And it looks like there are many people high up in governance who get access to OpenAI's products before the general public.