
Congrats Tom!


Congrats! Cool project! I’d been curious about whether GPT would be good for this task. Looks like this answers it!

Why did you choose markdown? Did you try other output formats and see if you get better results?

Also, I wonder how HTML performs. It would be a way to handle tables with groupings/merged cells.


I think I'll add an optional configuration for HTML vs. Markdown, which at the end of the day will just prompt the model differently.

I've not seen a meaningful difference between the two, except when it comes to tables. HTML tends to outperform Markdown tables, especially when there's a lot of complexity (e.g. tables within tables, lots of subheaders).
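To make the "prompt the model differently" idea concrete, a minimal sketch of what such a format switch could look like (the function name and prompt wording are illustrative, not Dropbase's actual implementation):

```python
# Hypothetical sketch: choosing the output-format instruction per config.
# HTML can express merged cells (rowspan/colspan) and nested tables,
# which GitHub-flavored Markdown tables cannot represent.

def build_extraction_prompt(output_format: str = "markdown") -> str:
    """Return a system prompt steering the model toward one table format."""
    if output_format == "html":
        table_rule = (
            "Render all tables as HTML, using rowspan/colspan for merged "
            "cells and nested <table> elements where needed."
        )
    else:
        table_rule = "Render all tables as GitHub-flavored Markdown."
    return f"Convert the document to {output_format}. {table_rule}"

print(build_extraction_prompt("html"))
```

The point is that the format choice lives entirely in the prompt; the rest of the pipeline stays the same.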


No. At the moment all apps work on top of our internal framework. However, you can reuse your existing Python code/libraries. For instance, you could call existing Django models from Dropbase apps. You can also import any PyPI package.


Thank you! I read your blog post and checked out your project! If I understood it correctly, you’re trying to build a software engineering team in a box. Basically from first issue, to code, to live apps. Very interesting approach adding the collaboration angle! ASTs are neat but I’d imagine it could get hard to manage with more complex code.

In our case, we regenerate the `main.py` file each time. One of the hacks we did was to start with boilerplate code, which is why you see it modifying the code as opposed to generating from scratch the first time. We also feed the model with some context/rules on app building using our web framework, so the output is more bounded.

We haven’t tested it on really big files yet, though I'd imagine it could be a problem later. At the moment, we don’t generate HTML, JS/TS, or React code from scratch, so our files tend to be smaller than they would be if we did. Our UI is defined via the `properties.json` file, which abstracts much of the underlying code, therefore keeping the files small. It’s much easier for LLMs to generate JSON and map it to UI behavior than to generate all the client code needed to do the same.
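To make the declarative-UI idea concrete, such a `properties.json` file might look something like this (field names are purely illustrative, not Dropbase's actual schema):

```json
{
  "app": "customer_admin",
  "blocks": [
    {
      "type": "table",
      "name": "orders",
      "fetcher": "get_orders",
      "columns": ["id", "status", "total"]
    },
    {
      "type": "button",
      "label": "Refund",
      "on_click": "refund_order"
    }
  ]
}
```

A small JSON document like this is far easier for an LLM to generate reliably than the React/JS code that would otherwise be needed to render the same UI.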

We don’t have issues with the LLMs changing function/method code, but it occasionally implements one of the boilerplate methods we didn’t explicitly ask for. In those cases, a developer has to remove that code manually, which is why showing the code diff is critical.

Many other hacks come down to lots of prompt engineering! Something along the lines of "Only implement or modify a method/function corresponding to a user's prompt. Leave all others intact"

Happy to chat more!

Also you might find this blog post we wrote interesting: https://www.dropbase.io/post/an-internal-tools-builder-that-...


It seems to me the more killer product here is the "Writing two files to build a webapp", and you could comfortably rip out ChatGPT and market to a wider audience?


I like your take on that! I hadn't thought about it that way before but "Writing two files to build a webapp" indeed sounds quite intriguing. And we could extend that idea to "...and deploy it with 1 click" or some version of that.

I'm curious about what audience you have in mind and what kinds of apps you'd be interested in building this way. Would love to hear more of your thoughts!

Edit: I should add that our main motivation for integrating GPT is that we had to introduce some new concepts to make this experience work, which increased the app-building learning curve. We thought having GPT generate code and highlighting diffs would be a neat way to teach users how to develop apps without reading a lot of documentation.


Aha, thanks for that detailed answer! Really fascinating to hear others' approaches to this area of building simple but full apps with LLMs. I'll definitely be following your progress, curious to see where this goes. And I will read your blog post this afternoon!


Interesting approach to just do json instead of protobuf!


Can you provide examples of 2 services that interact with one another? Do the services call each other, or do clients call a service that then calls another service?


Something we often need to do is fetch metadata from a centralized service for some kind of data before sending the original data plus the metadata to an ML inference server. The call to fetch the metadata and the call to the ML model are both made over gRPC.
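The call pattern described above could be sketched like this; in the real system both calls would be gRPC stub invocations, so plain functions stand in for the stubs here and all names are illustrative:

```python
# Illustrative sketch of a two-hop call: enrich data with metadata from a
# centralized service, then send data + metadata to an inference server.

def fetch_metadata(record_id: str) -> dict:
    # Stand-in for something like metadata_stub.GetMetadata(request)
    return {"record_id": record_id, "label_set": "v2"}

def run_inference(payload: dict) -> dict:
    # Stand-in for something like inference_stub.Predict(request)
    return {"input": payload, "score": 0.97}

def handle_request(record_id: str, data: bytes) -> dict:
    # Step 1: fetch metadata for the record from the central service.
    metadata = fetch_metadata(record_id)
    # Step 2: forward the original data plus the metadata to the model.
    return run_inference({"data": data, "metadata": metadata})

result = handle_request("abc-123", b"raw payload")
print(result["score"])
```

One service calls another on the caller's behalf here, rather than the client making both calls itself; that keeps the metadata dependency hidden from clients.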

The calls originate from both inside the system and connected systems. The internal callers are batch jobs that handle small tasks. The external callers are often other (internal) systems that want to access the data our system produces, and rarely send in commands to do stuff.


This isn't difficult to research on your own.


Why not share knowledge if someone is asking earnestly?


Because there's literally nothing in it for me to help this guy unlock the stunning achievement of understanding why and in what manner computer systems communicate. This is intern level information, trivial to learn.


What was in it for you to post the message that there is nothing in it for you?

If you cannot post a message to help because "there is nothing in it for you"... please tell me what you got from posting this?

I really wanna know


You must be a joy to work with /s


Co-founder of Dropbase here. We're a better fit for you given your comments about Django and being in the same repo.

We are an internal tools builder that works with your existing Python codebase, in the same repo as your core app. You can call/use your existing Django models from Dropbase UI components, and build fullstack internal tools with just Python.


This is awesome!!! I'm really impressed with the demo, in particular with how fast it seems to work given the number of models you use, the client-server back and forth, and the required processing and text gen. How did you do that? And at what point do you start to see bigger latencies, e.g. writing an email, an essay, or a novel where you change the spelling of a character's name two chapters earlier?


Cofounder @ Dropbase (dropbase.io) here. If you just need to build internal tools with Python, or want to build internal tools on top of your existing Python codebase, give us a try. We have granular permissions built in so you can share internal tools with others easily, down to who can use or edit each internal tool you build.

For context, we're a more niche cousin to Reflex that's specifically designed for building internal tools. Reflex seems to be a more general framework for building anything you want; quite powerful, and possibly an easier-to-use Django replacement.

It's awesome to see frameworks like Reflex. I think the Python ecosystem needs this.


Dropbase looks neat. Couple questions based on my read of the site/github/docs:

- is the architecture exclusively hybrid SaaS? Looks like I host a worker node but a cloud connection is still required.

- how can I monitor the data flow between the local worker and hosted orchestrator? I assume it’s straightforward to turn on verbose logs on the worker’s requests out.

- is the IDE hosted locally or remotely?

- authentication to local services appears to be done primarily through credentials hardcoded in the ENV file. How can I use SSO and pass user authentication to the upstream data sources?


It depends on your use case or development preference. Check us out at Dropbase (https://dropbase.io)

