Generally speaking, yes, they are different. One pattern is that you do something to create a source of audio, and the layer below you starts handing you buffers to fill. With each buffer you send back you can indicate whether you want to keep getting called with fresh buffers — if not, the calls stop until you poke something to resume them.
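A toy sketch of that pull ("render callback") pattern — the API shape here is made up, but it mirrors the idea: the layer below owns the buffers and calls you to fill each one, and your return value says whether to keep calling:

```python
def make_render_callback(samples):
    """Returns a callback that plays `samples`, then asks the layer below to stop."""
    pos = 0

    def render(buffer):
        nonlocal pos
        chunk = samples[pos:pos + len(buffer)]
        buffer[:len(chunk)] = chunk
        # Zero-fill the tail of the final, partially used buffer.
        for i in range(len(chunk), len(buffer)):
            buffer[i] = 0
        pos += len(chunk)
        # False => stop handing me buffers; the hardware can idle.
        return pos < len(samples)

    return render

def drive(render, buffer_size=4):
    """Toy 'audio layer': hands out buffers until the callback says stop."""
    played = []
    keep_going = True
    while keep_going:
        buf = [0] * buffer_size
        keep_going = render(buf)
        played.extend(buf)
    return played

print(drive(make_render_callback([1, 2, 3, 4, 5, 6])))
```

Once the callback returns false, nothing downstream needs to tick until you "poke" the source again — which is exactly what lets the lower layers power down.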
Another pattern is that you push buffers to the layer below you and there’s backpressure to keep you sending at the same rate they’re being played out. In that case you can just stop sending buffers when you have nothing to play.
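The push pattern can be sketched with a bounded queue standing in for the backpressure — again a hypothetical shape, not any particular audio API. The queue's capacity throttles the producer to the playback rate, and "nothing to play" is just "stop pushing":

```python
import queue
import threading

def playback_thread(q, played, done):
    # Toy consumer: drains buffers at whatever rate it "plays" them.
    while True:
        buf = q.get()        # blocks until a buffer arrives
        if buf is None:      # sentinel: end of stream
            done.set()
            return
        played.extend(buf)

q = queue.Queue(maxsize=2)   # backpressure: put() blocks once 2 buffers are queued
played, done = [], threading.Event()
threading.Thread(target=playback_thread, args=(q, played, done), daemon=True).start()

for buf in ([1, 2], [3, 4], [5, 6]):
    q.put(buf)               # blocks if we get ahead of playback
q.put(None)                  # nothing left to play: just stop sending
done.wait()
print(played)
```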
Your reasons are correct. There are many layers between an app (or web page) and the physical sound hardware, and they all burn power; phones and earbuds owe quite a bit of their battery life to shutting down bits of hardware when unused.
EDIT: this reminds me of a WWDC many years ago — Apple got really excited about timer coalescing and added parameters to all the low-level timer APIs which let you indicate how much slop you want to allow for each individual timer. Ideally the OS can then keep the CPU asleep for longer and wake it up to do work in batches. Code that deals with real-time sound has tight timing requirements and can't be delayed as easily, so in a timer-coalesced world distinguishing between playing silence and playing nothing has an even bigger power impact.
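A much-simplified illustration of why per-timer slop enables coalescing (this is not any OS's actual algorithm): each timer declares how late it may fire, and the scheduler batches timers whose `[deadline, deadline + slop]` windows overlap into a single wakeup. A zero-slop real-time timer forces its own wakeup:

```python
def coalesce(timers):
    """timers: list of (deadline, slop), in arbitrary time units.
    Returns a list of wakeup times; each wakeup serves every timer
    whose tolerance window contains it. Greedy and simplified."""
    wakeups = []
    for deadline, slop in sorted(timers):
        # Covered by an earlier wakeup that lands inside our window?
        if wakeups and deadline <= wakeups[-1] <= deadline + slop:
            continue
        # Fire as late as allowed, to catch more timers in this batch.
        wakeups.append(deadline + slop)
    return wakeups

# Three sloppy timers collapse into one wakeup; a zero-slop
# "audio" timer gets its own wakeup at exactly its deadline.
print(coalesce([(10, 5), (12, 5), (14, 5), (30, 0)]))
```

Four timers, two CPU wakeups — and the zero-slop one is the reason real-time audio code can't ride along with the batched work.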