> Motion is tracked using the laptop camera via optical flow and mapped to continuous control over dynamics, while the sound is generated in real-time.
Author here. We checked for APIs like this at the time, but since approximately every laptop has a webcam, the computer-vision approach is much more accessible. It would be a fun rewrite, though; I’m sure polling such an API would be a few orders of magnitude more efficient. There was definitely lag if you ran the app on a very underpowered machine, which did impact the “playability” of the velocity parameter.
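To make the mapping concrete, here is a minimal sketch of the idea of turning camera motion into a continuous dynamics parameter. It uses simple frame differencing as a crude stand-in for real optical flow (the app presumably uses something like OpenCV's `calcOpticalFlowFarneback`); the function name, gain parameter, and synthetic frames are all illustrative assumptions, not the author's actual pipeline.

```python
import numpy as np

def motion_velocity(prev_frame: np.ndarray, frame: np.ndarray, gain: float = 4.0) -> int:
    """Map inter-frame motion to a MIDI-style 0-127 velocity value.

    Frame differencing is a crude stand-in for dense optical flow,
    but the mapping idea is the same: more motion -> louder dynamics.
    """
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    motion = diff.mean() / 255.0               # normalised average pixel change
    return int(np.clip(motion * gain * 127, 0, 127))

# Two synthetic 8-bit grayscale "webcam" frames: still scene vs. moving scene.
rng = np.random.default_rng(0)
still = rng.integers(0, 256, (120, 160), dtype=np.uint8)
moving = np.roll(still, 5, axis=1)             # simulate horizontal motion

print(motion_velocity(still, still))           # no motion -> 0
print(motion_velocity(still, moving))          # motion -> higher velocity
```

On an underpowered machine the per-frame cost of the real flow computation is what introduces the lag mentioned above, since the velocity estimate can only update as fast as frames are processed.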
The TinyML niche, doing ML on microcontroller-grade hardware (usually for sensor data analysis), is close in terms of constraints: models measure in kilobytes of RAM/flash.
But it usually lacks the flair and showmanship of the demoscene, and the just-because-we-can attitude. So I agree: we need more demoscene style!
As the maintainer of an open source library in this space (emlearn), I would be interested in contributing to such an effort.
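To illustrate the kilobyte-scale budget, here is a toy sketch of a decision tree flattened into parallel arrays, the common trick for running tree models on microcontrollers (emlearn compiles scikit-learn trees into C arrays in a similar spirit, though the exact node layout here is an illustrative assumption, not emlearn's actual format).

```python
import numpy as np

# Node i: if x[feature[i]] < threshold[i], go to left[i], else right[i].
# A negative entry encodes a leaf holding class (-entry - 1).
feature   = np.array([0,   1,   0  ], dtype=np.int8)
threshold = np.array([0.5, 0.3, 0.8], dtype=np.float32)
left      = np.array([1,  -1,  -2  ], dtype=np.int8)   # -1 -> class 0, -2 -> class 1
right     = np.array([2,  -2,  -1  ], dtype=np.int8)

def predict(x):
    """Walk the flattened tree until a leaf (negative index) is reached."""
    i = 0
    while True:
        nxt = left[i] if x[feature[i]] < threshold[i] else right[i]
        if nxt < 0:
            return -int(nxt) - 1
        i = int(nxt)

# The whole "model" is just these four arrays: 21 bytes total.
size_bytes = sum(a.nbytes for a in (feature, threshold, left, right))
print(predict([0.2, 0.1]), size_bytes)
```

Keeping the model as fixed-size arrays means no heap allocation at inference time, which is exactly what microcontroller deployments need.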
Your video is so pedagogically beautiful. The subtext of what you’re doing in those two minutes hints deeply at the cyclical, iterative process practiced by most engineers and many other creatives. Concise, illustrative, memorable. I’ll be showing this to students regularly. Well done.