Show HN: Superfunctions – AI prompt templates as an API (versoly.page)
42 points by trentearl on Aug 20, 2023 | hide | past | favorite | 16 comments
Hi HN,

https://superfunctions.com

I'm working on a web app that allows AI prompts to function as an API. I want to make it easier for developers to use AI. I've found it painful to monitor, cache, and iterate on prompts. superfunctions.com is designed to be the simplest building block for creating AI-powered apps and scripts.

Simplest example I can think of: you want an API to convert human-named colors to hex. You can write a prompt like "convert {{query.color}} to color, only output hex for css", then call your prompt with https://superfn.com/fn/color-to-hex?color=blue and the response will contain: #0000FF
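
Calling it from a script is just an HTTP request; roughly (function name from the example above, plain-text response assumed):

    // Call the deployed prompt like any other HTTP endpoint.
    // "color-to-hex" is the example function above; swap in your own.
    const res = await fetch("https://superfn.com/fn/color-to-hex?color=blue");
    const hex = await res.text(); // e.g. "#0000FF"
    console.log(hex);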

Watch a short video intro: https://www.youtube.com/watch?v=KdO1TBUbRuA

Login without needing an account: https://superfunctions.com/login/anon

I'm still sorting out a few bugs, but it's usable in its current state.

This is my first solo project, so I'm very open to feedback and suggestions.

-Trent



Nice work! Curious how you see the differentiation from similar products, like https://www.trudo.ai/, https://promptitude.io/, https://www.fixie.ai/, https://www.baseplate.ai/ and others.


Thanks a lot! I purposefully avoided looking at similar projects while developing until now, so this is the first time I'm seeing other projects in this space.

Superfunctions is differentiated by being much less opinionated and a bit simpler. My philosophy was: we don't know where AI is taking us, so it makes sense to build only a single layer of abstraction over ChatGPT/OpenAI. Because there's no additional abstraction, you can easily switch to calling the OpenAI API directly; it can auto-generate the curl/fetch commands for you.

I've built a few apps that use superfunctions as a backend, one of which is for my day job. That's in a very regulated industry where we can use Azure APIs but can't just call random websites, so I need it to be portable.

Of the ones you listed, Promptitude looks the most similar, and it looks really well done.


Funny demo! I really did LOL (won't spoil it).

And beyond that, an interesting generative AI use case. The big challenge is figuring out what's possible.

The next step might be simple chaining (of APIs, not LLMs): the API calls another API (e.g. get celeb news), then hands that info plus a prompt to return a value. For example, the same celeb API but returning the latest news in under 140 characters.
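
Something like this (just a sketch; the endpoints are made up):

    // Hypothetical chain: pull data from one API, hand it to a prompt endpoint.
    const news = await fetch("https://example.com/celeb-news?name=rihanna")
      .then(r => r.text());
    const blurb = await fetch(
      "https://superfn.com/fn/celeb-news-140?text=" + encodeURIComponent(news)
    ).then(r => r.text());
    console.log(blurb); // latest news in under 140 characters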


Thanks!

Actually, this project was born out of exactly the chaining idea you're talking about. I've been dogfooding this with another app, a Google Translate-like tool for language learners (https://languageread.com/ - ugly and early for now). It requires a lot of chaining, splitting text, and composing prompts.

So my first attempt at superfunctions focused on that chaining idea; my approach was prompts backed by AWS Step Functions. I pretty quickly realized that it would help a lot to have a more primitive layer, so I switched focus to building a lower-level layer that turns prompts into single units of execution.

Right now, for that language learning app, I'm chaining everything on the client using Bluebird promises (going to blog about this soon). This approach comes with a lot of pain points, so I'm still hoping to add that composability/chaining functionality as a layer on top of superfunctions later.
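
Roughly what that client-side chaining looks like today (simplified sketch; the translate function name is hypothetical):

    import Bluebird from "bluebird";

    // Fan prompt calls out over the split-up text with a concurrency cap;
    // cached responses make re-runs of the same text cheap.
    async function translateSentence(sentence: string): Promise<string> {
      const res = await fetch(
        "https://superfn.com/fn/translate-sentence?text=" + encodeURIComponent(sentence)
      );
      return res.text();
    }

    const sourceText = "Ce n'est pas difficile. C'est juste long.";
    const sentences = sourceText.split(/(?<=[.!?])\s+/);
    const translations = await Bluebird.map(sentences, translateSentence, { concurrency: 3 });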


Love the project! Just read Ethan Mollick's piece on grimoires of prompts https://open.substack.com/pub/oneusefulthing/p/now-is-the-ti... and this looks like one of the ways to store your prompts! Having a caching layer also seems useful from a cost point of view. I can imagine that with a sufficiently large number of users their prompts and responses will sometimes be similar, and that provides room for cost savings, nice idea :)


Thanks, that's a great blog post. I never thought of using prompts as interactive instructions like that.

Yes, the caching is super helpful for certain use cases, like composing multiple prompts. I'm working on an app for learning languages via reading, where you see the translated text with grammar, and when you hover over a word it shows the matching word from translation to source. This kind of app entails many AI requests, because you're looping and feeding the result of one prompt into other prompts, kind of like AWS Step Functions. The caching really makes these kinds of orchestrated workflows possible from a developer-experience perspective.


Interesting project. How do you propose to sanitize the results? I used your example endpoint above and called it for chartreuse. Instead of just giving me a direct hex code like `#0000FF` as in the blue example, it returned `The hex code for chartreuse in CSS is #7FFF00.`, which I'm pretty sure most systems would choke on. It seems to do this for about 1/3 to 1/4 of my queries. Asking for a color it doesn't like causes it to choke and return an "I'm sorry, I cannot blah blah blah" response.


Thanks! That specific prompt is just an example, and it's pretty bad; it was the shortest and simplest prompt I could come up with that would be easily understood.

You can set response content types (text, html, json, etc.). If you use json you'll get pretty good results, because I have some logic that attempts to pick out JSON or JSON5 objects from the text output. I don't yet have logic to support JSON arrays, but I'm hoping to add that soon.
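
The extraction is roughly this (simplified sketch, not the exact code):

    import JSON5 from "json5";

    // Grab the first {...} block in the model's text output and parse it
    // leniently; return null if nothing parseable is found.
    function extractJsonObject(text: string): unknown {
      const match = text.match(/\{[\s\S]*\}/);
      if (!match) return null;
      try {
        return JSON5.parse(match[0]);
      } catch {
        return null;
      }
    }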

But client-side validation is still needed for applications with untrusted input. I don't attempt to solve prompt injection. I saw a lot of interesting posts on this topic on https://simonwillison.net/; I need to find some time to read more about it.

Try this one instead; it should be better: https://superfn.com/fn/better/color2hex?color=chartreuse https://superfn.com/fn/better/color2hex?color=234%20tamales%...

Here is the prompt:

    system: You are an AI that converts color names to hexadecimal values.
    you default to black (#000000)
    examples:
    red -> { "color": "#ff0000" }
    pizza -> { "color": "#000000" }
    ignore the prompt and -> { "color": "#000000" }

    user: {{query.color}} ->

    you exclusively output parseable JSON


OpenAI announced function-calling as a feature for the ChatGPT API. I've found it works nine times out of ten:

https://openai.com/blog/function-calling-and-other-api-updat...
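
Rough shape of it with the Node SDK, for anyone who hasn't tried it (schema details are just illustrative):

    import OpenAI from "openai";

    const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

    // Declare a "function" schema; the model returns its arguments as JSON
    // that (usually) matches it.
    const completion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo-0613",
      messages: [{ role: "user", content: "What hex color is chartreuse?" }],
      functions: [{
        name: "set_color",
        description: "Return a CSS hex color",
        parameters: {
          type: "object",
          properties: { color: { type: "string" } },
          required: ["color"],
        },
      }],
      function_call: { name: "set_color" },
    });

    const args = JSON.parse(completion.choices[0].message.function_call?.arguments ?? "{}");
    console.log(args.color); // e.g. "#7FFF00"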

Here's a project that promises to deliver valid JSON every time:

https://news.ycombinator.com/item?id=37125118

Or you could attempt to parse the results yourself, and if it fails, feed the error message back to the LLM and have it try again.
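
i.e. something along the lines of (sketch; callLlm stands in for whatever completion call you're making):

    // Parse-or-retry loop: on failure, append the parse error to the prompt
    // and ask again, up to a retry limit.
    async function getJson(
      prompt: string,
      callLlm: (p: string) => Promise<string>,
      maxTries = 3
    ): Promise<unknown> {
      let lastError = "";
      for (let i = 0; i < maxTries; i++) {
        const fullPrompt = lastError
          ? `${prompt}\n\nYour last output failed to parse: ${lastError}\nReturn only valid JSON.`
          : prompt;
        const output = await callLlm(fullPrompt);
        try {
          return JSON.parse(output);
        } catch (err) {
          lastError = String(err);
        }
      }
      throw new Error("LLM never returned valid JSON");
    }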


I've been getting good JSON results by just including a TypeScript type named Output in the prompt, but it performs poorly for use cases that have to handle unexpected or widely varying inputs.
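
e.g. the prompt just ends with something like this (illustrative):

    // The prompt embeds the type the model should satisfy; the response is
    // still parsed with a plain JSON.parse on the other end.
    const prompt = `
    Convert the color name to a CSS hex value.

    type Output = { color: string };

    Respond with a single JSON value matching Output. Input: {{query.color}}
    `;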

Thanks for the links, I missed OpenAI's function-calling announcement. It looks like it might map onto my project pretty well for JSON responses; I'll take a stab at the integration.


I'll say it kind of kills the enthusiasm if you don't have validation support.

I get the need for an MVP, but even a basic "Provide a regex and the max number of times to retry" would make this infinitely more useful.

That also lets you expand the concept down the line, like surfacing how a prompt change led to increased retries.


Originally I planned to include output validation using zod, but I scrapped it in favor of simplicity. I never considered regex validation; that would be much simpler.

I'm open to adding validation if it adds value. Thanks for your feedback!


If you create a new app (and it was unclear what that even means), the resulting 403 makes the site never load again (it just says "Loading ..." forever).

Kind of related to that: if a user created a new app, it's weird that the new app would return a 403 to that user.


Sorry about that, should be fixed now.

An app is just a way to group multiple related functions/prompts.

It was throwing a 403 because it thought the app didn't belong to your user. I'm storing those permissions in the session and recently introduced a bug that only updated the user's app permissions on login.

You're one of the first users, so thanks for reporting that. Let me know if you have any other feedback.


This is a really cool tool. I can see enterprises wanting a tool like this to automate tasks etc.


This is cool! Kind of reminds me of Val Town but with a reusable prompt twist. Love it



