> Firstly, can there be an easier way to stop a microVM mid execution in this single executable bottlefire format and then rerun that and it would start mid execution. (something akin to how criu does it?)
Not yet - Firecracker supports snapshotting so this should be doable though!
> if something like microvm could be run in normal cloud infrastructure?
Some cloud providers - like GCP and DigitalOcean - do support nested virtualization, and they work pretty well with Firecracker. Using VM migration to run stable workloads on spot instances sounds very interesting :)
> Some cloud providers - like GCP and DigitalOcean - do support nested virtualization, and they work pretty well with Firecracker. Using VM migration to run stable workloads on spot instances sounds very interesting :)
not necessarily. you can build custom kernel with pvm[1] and do it on aws.
Yes, I also came across pvm. I feel like doing it on top of AWS instances could bring a really nice way of migrating between spot instances and paying smaller bills.
On the other hand, what are your thoughts on using criu with Docker and then deploying it on AWS spot instances? Is that possible?
> On the other hand, what are your thoughts on using criu with Docker and then deploying it on AWS spot instances? Is that possible?
I don't see why not really
But in the last few years, spot has been reclaimed way too often and the price discount is not as good as it used to be (circa 2016-2017), so I prefer to use savings plans now, although quite a big portion of our fleet still uses spot.
Hm, that's an interesting take. I had seen a YouTube video by codedamn [1] on how spot instances are really cheap and had always wondered why people weren't using them; now I understand that the incentives have changed. Thanks for telling me - I didn't know, or maybe the creator made that video fairly recently (10 months ago isn't that much time, unless things have changed).
Have things changed a lot in 10 months, or was the author maybe overhyping the use case?
I'm really wondering: is there any software stack that works best with a multi-cloud approach? I feel like TypeScript is really great for such purposes for the most part - I hope this doesn't count as too off-topic. I'm not a DevOps guy, but I like being frugal and checking different options, and I'm wondering what the best "just works" cloud is in 2025 without being too expensive like Vercel or Netlify.
I genuinely hope you can consider the first question - snapshotting and whether it's doable - for bottlefire's / bake's roadmap.
Regarding the second question, I feel like something can definitely be crafted to enable running stable workloads on GCP/DigitalOcean spot instances, and maybe what bake can do is make the automation side of spot instances / VM migration easier...
Please don't get me wrong - this project looks really cool - but I'd actually like a first-hand response on why this/bake project was created and when/why someone should use it.
I also have many more questions, and I feel like having a community place could be really helpful here.
Although my open-source-purist heart wishes for you to use Matrix, it's also understandable if you use Discord. Do note that there are bridges, so you could technically have both Matrix and Discord and bridge them.
A read-modify-write retry loop causes a high number of commit conflicts, e.g. for atomic increments on integers. Getting higher than 1/RTT per-key throughput requires the "backend" to understand the semantics of the operations - apply a function to the current value, instead of just checking whether timestamp(value) < timestamp(txn.start) and aborting commit if not.
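A toy sketch of this distinction (plain Python, not the actual FDB API - the class and method names here are made up for illustration): under optimistic concurrency, two read-modify-write increments that read the same version conflict, and one must retry; a server-side atomic add that the backend understands semantically never conflicts.

```python
class VersionedStore:
    """Toy optimistic-concurrency store: a commit aborts if the key
    changed after the transaction's read (the timestamp check above)."""

    def __init__(self):
        self.value = 0
        self.version = 0

    def read(self):
        return self.value, self.version

    def commit(self, new_value, read_version):
        # The backend only compares versions; it knows nothing about
        # what the client computed.
        if read_version != self.version:
            return False  # conflict: another commit landed first
        self.value, self.version = new_value, self.version + 1
        return True

    def atomic_add(self, delta):
        # The backend understands the operation, so concurrent
        # increments can always be applied - no conflict, no retry.
        self.value += delta
        self.version += 1


store = VersionedStore()

# Two clients both read, then both try to commit value + 1.
v1, ver1 = store.read()
v2, ver2 = store.read()
assert store.commit(v1 + 1, ver1) is True
assert store.commit(v2 + 1, ver2) is False  # lost the race, must retry

# Server-side increments succeed regardless of interleaving.
store.atomic_add(1)
store.atomic_add(1)
print(store.value)  # 3
```

FDB's real atomic mutations (e.g. `tr.add(key, param)` in its Python bindings) follow the same idea: the mutation is applied at the storage layer, so hot-key increments don't generate commit conflicts.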
A small tangent on the subject of databases on top of FDB -- mvsqlite is such an insanely cool project and it looks like it's gone quiet. Any plans to pursue that further?
Distributed consensus + fsync() determines the lower bound on commit latency. We have to wait for the data to be durable on a quorum of transaction logs before returning success for a transaction. That's usually 5-10ms, even within a single region.
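A back-of-the-envelope check of what that floor implies for the 1/RTT per-key bound mentioned earlier, using only the 5-10 ms commit latencies quoted above:

```python
# Quoted quorum-fsync commit latency bounds, in seconds.
for commit_latency in (0.005, 0.010):
    tps = 1 / commit_latency
    print(f"{commit_latency * 1000:.0f} ms commit -> "
          f"~{tps:.0f} serial commits/s on one contended key")
```

So a read-modify-write loop on a single hot key tops out around 100-200 commits per second, which is why pushing the operation's semantics into the backend matters.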
For user-based keys, that sounds nice - except in multiplayer cases; there one might look for an alternative to KV. I don't remember reading that any other service offers this speed.
I wrote a self-hostable control plane for Nebula (a Tailscale-like overlay networking tool), and have been using it for about a year: https://github.com/losfair/supernova
Built this because existing solutions like ZeroTier and Tailscale are trying to be too "smart" (auto-selecting relays, auto-allocating IPs, etc.) and do not work well for complex network topologies.
Data is always replicated to three of our "big" regions currently. Extending the list of storage regions and providing more flexible data distribution configuration is one of the next things we want to do.
> what about pricing?
During the closed beta it's free with a 1 GiB per project limit.
From the API docs (https://deno.land/api@v1.33.1?unstable&s=Deno.Kv): "Keys have a maximum length of 2048 bytes after serialization. Values have a maximum length of 64 KiB after serialization."
Engineer working on Deno KV here. Building on FDB is mostly a pleasant experience since it solves the hard part of the problem for us (concurrency control and persisting mutations).
We sometimes run into its limitations - the way we are using FDB is a bit beyond what it was originally designed for. But when it works, it works great.
Shameless plug of my mvSQLite [1] project here! It's basically another distributed SQLite (that is API-compatible with libsqlite3), but with support for everything expected from a proper distributed database: synchronous replication, strictly serializable transactions, + scalable reads and writes w/ multiple concurrent writers.
I really wanted to give this a try but the lack of WAL support has prevented me from using it. With the recent addition of WAL support[1] in litefs, would it be possible to add the same to mvsqlite too?
What are you trying to achieve with WAL mode? Is it some kind of application compatibility issue?
The entire SQLite journaling mechanism is not used by mvSQLite (you can set journal_mode=off safely - although SQLite won't be happy to do explicit rollback in this case)
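For reference, here is what disabling the journal looks like in plain SQLite (via Python's bundled sqlite3 module - this just illustrates the pragma, not mvSQLite itself, and doesn't show the explicit-rollback caveat):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)

# With mvSQLite, concurrency and durability live in the VFS layer,
# so SQLite's own journal can be turned off entirely.
mode = con.execute("PRAGMA journal_mode=off").fetchone()[0]
print(mode)  # off

con.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
con.execute("INSERT INTO kv VALUES ('a', '1')")
con.commit()
print(con.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()[0])  # 1
```

Normal reads and writes work fine without a journal; it's only explicit ROLLBACK that SQLite can't honor, since there's no journal to roll back from.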
Yeah, I'm trying to plug mvsqlite into a binary-only app that uses WAL mode. Changing to any other journaling mode just causes the app not to start at all.
> Distributed, MVCC SQLite that runs on top of FoundationDB.
FYI to anyone here, FoundationDB is fucking awesome for something like this.
Question @losfair: Did you find the Rust bindings for FDB to be very good? The Go bindings are OK, but are pretty out-of-date with some cool new features on the HEAD of the FDB source repo.
mvSQLite looks great, though I'm curious how you'd implement a schema migration given the locking properties of SQLite combined with the transaction limits of FDB.
I imagine you'd hit FDB's transaction time limit, preventing schema migrations on non-trivial amounts of data.
I've read it - there's still a time limit, so long schema migrations would still be an issue. Even without FoundationDB, long schema migrations are a problem.
Online DDL is now a WIP feature. This will allow converting the DB into read-only mode and running arbitrarily large schema migrations concurrently (first stage), and eventually fully concurrent DDL by replaying logs.
This makes SQLite transactions no longer serializable (in regard to the schema), and breaks the safety of any kind of external concurrency (e.g. mvSQLite and Litestream).