I tried the Arduino + RC low-pass filter example (using PWM as a DAC), and it was pretty impressive to see the output voltage actually smooth out instead of just flipping between 0 V and 5 V.
What I found interesting is that the ADC reading follows the filtered signal, so you can actually observe the analog behavior from the firmware side.
Feels like this could be really useful for teaching, especially to show how digital signals turn into analog in real circuits without needing a full lab setup.
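To see why the voltage smooths out, here's a standalone Python sketch (not the emulator's code) that numerically integrates an RC low-pass filter fed by a 5 V PWM square wave. The component values (4.7 kΩ, 10 µF) and 50% duty cycle are assumptions for illustration; Arduino PWM on most pins defaults to roughly 490 Hz.

```python
# Standalone sketch (not the emulator's code): simulate an RC low-pass
# filter smoothing a 5 V PWM signal, as in the PWM-as-DAC example.
# Assumed values: R = 4.7 kOhm, C = 10 uF, ~490 Hz PWM, 50% duty cycle.

R, C = 4.7e3, 10e-6      # filter components (assumed)
VCC = 5.0                # supply voltage
PWM_FREQ = 490.0         # default Arduino PWM frequency on most pins
DUTY = 0.5               # 50% duty cycle -> average should approach 2.5 V
DT = 1e-5                # simulation time step (10 us)

def simulate(duration):
    """Integrate dV/dt = (Vin - V) / (R*C) with a square-wave input."""
    v = 0.0
    t = 0.0
    period = 1.0 / PWM_FREQ
    while t < duration:
        vin = VCC if (t % period) < DUTY * period else 0.0
        v += (vin - v) / (R * C) * DT
        t += DT
    return v

v_settled = simulate(1.0)   # after 1 s the capacitor has long settled
print(round(v_settled, 2))  # sits near DUTY * VCC = 2.5 V, with small ripple
```

The ripple you'd read back on the ADC depends on the ratio of the PWM period to the RC time constant; with these values the time constant (47 ms) dwarfs the ~2 ms period, so the output hovers close to the duty-cycle average.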
A free, open-source emulator for 19 embedded boards: Arduino, ESP32, Raspberry Pi, RISC-V, running real compiled code in your browser.
The best part: it's fully local.
No cloud dependency. No student accounts. No data leaving your network. Self-hostable with a single Docker container.
Universities and bootcamps can deploy it on their own servers and give every student access to a complete embedded development environment, for free.
I've been working on this for over a year, and just shipped v2.0 with ESP32 emulation (via QEMU), a custom RISC-V core, and Raspberry Pi 3 support that runs real Python.
Yes, peripherals are fully emulated, not just the CPU. LEDs blink, buttons respond to clicks, the Serial Monitor works, servos rotate, displays render (ILI9341 TFT), and there are 48 or more components from the wokwi-elements library. The Blink example should show the built-in LED toggling on pin 13. If it didn't blink for you, it might be a compilation issue.
Try the Traffic Light example: it simulates a traffic light with red, yellow, and green LEDs.
Both, with a nuance. The AVR simulator syncs to wall-clock time and each frame calculates cycles from the real elapsed deltaMs, so delay(1000) takes 1 real second and timer-dependent code (PWM, millis()) runs at correct real-time rates. The RP2040 and ESP32-C3 simulators use a fixed cycles-per-frame budget (125MHz/60 and 160MHz/60 respectively), which targets real time but doesn't compensate for frame drops: if the browser stutters, emulated time stretches slightly. All three are cycle-accurate at the instruction level, though, so the logic and peripheral behavior is faithful to real hardware regardless of frame timing.
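The difference between the two strategies can be sketched in a few lines of Python (this is an illustration of the approach described above, not the simulator's actual source; the 16 MHz AVR clock is assumed for the sake of the example, the 160 MHz / 60 fps figures are the ones quoted):

```python
# Sketch of the two frame-timing strategies (not the simulator's code).
# AVR path: derive cycles from real elapsed time, so wall-clock sync holds.
# ESP32-C3 path: fixed per-frame budget, so dropped frames stretch time.

AVR_CLOCK = 16_000_000       # 16 MHz, classic Arduino Uno clock (assumed)
ESP32C3_CLOCK = 160_000_000  # figure quoted for the ESP32-C3 simulator
FPS = 60

def avr_cycles_for_frame(delta_ms):
    """Wall-clock sync: a slow frame runs proportionally more cycles,
    so delay(1000) still takes one real second."""
    return int(AVR_CLOCK * delta_ms / 1000)

def esp32c3_cycles_for_frame():
    """Fixed budget: clock / fps, regardless of real frame time."""
    return ESP32C3_CLOCK // FPS

print(avr_cycles_for_frame(16.67))  # a normal ~60 fps frame
print(avr_cycles_for_frame(33.3))   # a dropped frame: twice the cycles
print(esp32c3_cycles_for_frame())   # always 2,666,666 cycles per frame
```

In the fixed-budget model a stuttering browser delivers fewer frames per real second, so fewer emulated cycles run and emulated time lags; the wall-clock model catches up automatically by running more cycles on the next frame.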
The main advantage is accessibility and ease of use: with the browser, no setup is required on the user’s side, no toolchains need to be installed, and there’s no need to be familiar with SSH or terminal workflows
It also provides a more visual and interactive environment (editor, peripherals, simulation controls), which is especially useful for teaching and for beginners.
The Docker image is there so you can easily install it on your own machine if you want to run it locally or work on development
Thanks! The visual editor is actually a big part of the project
I used a little AI to create the graphical interface since I focused heavily on emulation, testing, and refining and optimizing the circuit editor. But now I have plans to improve the UI and make it faster and more intuitive
Still a lot to improve there, but glad it’s useful already
Is it easy to feed an elf or bin and run that (esp32c3)? I see compilation available, but I'm playing with asm and have my toolchain figured out already and would just like to emulate the firmware.
Another +1 for this one as this is what turns this tool from a toy environment with basic sketches into something that's actually useful for larger projects with a full toolchain, libraries, and so forth.
A lot of simulators stop at simple sketches, but the goal with Velxio is to support more realistic workflows: multiple boards interacting, real toolchains, and more complex setups.
Still early, but definitely moving in that direction
Creator here! Just saw this was posted. I've been working on the 2.0 release to move beyond simple AVR emulation. Integrating QEMU for the ESP32 and Pi 3 (Linux) was a massive challenge, especially maintaining sync between the different emulators in a single browser tab.
I recently spent some time going through MiniMind, and it’s a remarkably clean resource for understanding the modern LLM stack under the hood. It’s a minimal, end-to-end implementation of a ~25M-parameter GPT-style model in pure PyTorch, designed to be trained from scratch on a single GPU.
Instead of heavy abstractions, it uses straightforward PyTorch while still implementing modern architectural choices like RMSNorm, SwiGLU, RoPE, and even MoE variants. What makes it valuable is that it doesn't stop at the forward pass; the repo covers the entire training lifecycle. You can trace the data flow from tokenizer training and pretraining, right through to Supervised Fine-Tuning (SFT), LoRA, preference optimization (DPO/PPO), and distillation
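To give a flavor of how small these building blocks are, here's RMSNorm, one of the components mentioned, in dependency-free plain Python (this is my own sketch, not MiniMind's PyTorch code): normalize by the root-mean-square of the vector instead of subtracting a mean and dividing by a standard deviation, then apply a learned per-element gain.

```python
import math

# Plain-Python sketch of RMSNorm (not MiniMind's actual PyTorch code):
# scale the input to unit root-mean-square, then multiply by a learned
# gain. eps guards against division by zero for all-zero inputs.

def rms_norm(x, gain, eps=1e-6):
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [g * v / rms for g, v in zip(gain, x)]

out = rms_norm([1.0, 2.0, 3.0], gain=[1.0, 1.0, 1.0])
# With unit gain, the output's mean of squares is ~1 (up to eps)
print(sum(v * v for v in out) / len(out))
```

The appeal for a readable codebase is exactly this: RMSNorm drops LayerNorm's mean-centering and bias, so the whole layer is one rescale plus one gain, which makes the source easy to trace.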
It’s small enough to actually read the source code end-to-end, but realistic enough to serve as a baseline for architectural experiments rather than just a toy example.
Curious if anyone here has used this (or similar minimal codebases) to test custom architecture modifications or train highly specialized small-scale models
I'm currently testing the pipeline locally on a PC with an RTX 4060, and it's a great fit for this kind of hardware
I’ve been looking into this space recently while exploring ESP32 emulation.
I came across this repo: https://github.com/lcgamboa/qemu/
which is quite interesting, but it’s not really a JavaScript/browser-based simulation. It relies on QEMU and actual hardware emulation rather than running in a JS/WebAssembly environment.
From what I can tell, it handles the Xtensa architecture at a much lower level, which makes sense for accuracy, but also makes it harder to bring into the browser.
It made me wonder whether a higher-level approach (similar to how AVR emulators work in JS) could be viable for ESP32, or if the complexity of peripherals (WiFi, BLE, etc.) makes that impractical.
Curious if anyone has explored alternative approaches here.
Has anyone explored this or knows of existing approaches/tools?
Is there a reason for JavaScript specifically? My team has recently been using https://renode.io/ for this kind of task, and it's working exceptionally well. Though they don't yet support the ESP32, depending on your needs/goals, working with them to get ESP32 support may be the most effective path.
Thanks, I didn’t know about Renode. I’ll definitely take a look.
The reason I’m specifically exploring JavaScript is because of a project I’m working on (velxio.dev), where everything runs in the browser, so having a JS/WASM-based approach would make integration much simpler
Right now I’m experimenting with a QEMU-based setup and exposing it through WebSockets, but the performance isn’t great and the emulation tends to be unstable (I’ve been hitting crashes under certain workloads).
That’s partly why I’m looking into alternative approaches: either pushing more into the browser, or finding a more robust backend model.
One thing that’s been particularly frustrating is trying to find complete documentation for Xtensa. I’ve looked around quite a bit, and it feels like there isn’t a fully open, detailed spec available. Most of what exists is either partial, behind NDA, or spread across different sources.
I've donated about $100 to it. KidCAD is great software, because many engineering tools are too expensive for students. Another very interesting project that's gaining traction is the Arduino emulator https://velxio.dev
The Qwen 3.5 models are currently the best open-source models, but they still trail proprietary models in speed and accuracy. I'd say they're about 60% of the way to parity with OpenAI and Anthropic models.
Different constraint. DaveLovable looks more like an AI-assisted editor for ongoing development.
CapsuleWeb has no editor at all. One prompt, deployed permanently, done. It's for when you need a page live in the next two minutes, not when you're building something to iterate on.
What would be the general purpose of storing the history in a remote database? Is it for use by agents? It's not the same as agents cloning the project and running "git log".
I’ve been working on DaveLovable, an open-source experiment around AI agents for web development.
The core idea:
Instead of a single LLM generating code, I use a small multi-agent system (Planner + Coder) that can:
* generate React apps
* modify existing code
* run terminal commands
* commit changes to git automatically
Everything runs with a real Node.js environment in the browser using WebContainers, so the preview is not mocked.
I also added a visual editing mode (click on UI → modify styles) and multimodal input (you can upload mockups or PDFs and ask the agent to recreate them).
One interesting challenge was coordinating agents reliably without them getting stuck or over-planning.
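One simple guard against that failure mode is to bound the planner's output before the coder ever sees it. The sketch below is entirely hypothetical (none of these function names come from DaveLovable; the LLM calls are replaced by stubs), it just illustrates the Planner + Coder handoff with a hard step cap:

```python
# Hypothetical Planner + Coder loop (NOT DaveLovable's actual code; all
# names here are illustrative stubs). A hard step cap keeps the planner
# from over-planning, and the coder only ever sees bounded work.

MAX_STEPS = 5  # cap so a runaway planner can't stall the pipeline

def plan(request):
    """Stub planner: in a real system this would be an LLM call that
    decomposes the request into concrete steps."""
    return [f"step {i + 1}: {request}" for i in range(3)]

def code(step):
    """Stub coder: in a real system this would generate or edit files
    and run commands inside the WebContainer."""
    return f"done: {step}"

def run(request):
    steps = plan(request)[:MAX_STEPS]  # enforce the cap up front
    return [code(s) for s in steps]

results = run("add a dark-mode toggle")
print(len(results))  # 3 stub steps, all under the cap
```

In practice the cap alone isn't enough; you also need per-step timeouts and a way for the coder to report failure back to the planner, but bounding the plan first makes the rest tractable.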
Still early, but I’d love feedback, especially from people working on dev tools or AI agents.