Hm, that is an interesting question. In fact, you should always be able to rewrite a generator as an iterator, either by using a class like the one you propose or by using a particular language construct (which is what Python does with the __iter__/__next__ methods).
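For instance, here is a minimal sketch of the two forms side by side (countdown/Countdown are made-up names, purely for illustration):

    # A generator function: the state lives in the suspended generator frame.
    def countdown(n):
        while n > 0:
            yield n
            n -= 1

    # The same thing as an explicit iterator class: we keep the state
    # ourselves and implement the iterator protocol by hand.
    class Countdown:
        def __init__(self, n):
            self.n = n

        def __iter__(self):
            return self

        def __next__(self):
            if self.n <= 0:
                raise StopIteration
            value = self.n
            self.n -= 1
            return value

    # Both behave the same in a for loop:
    # list(countdown(3)) == list(Countdown(3)) == [3, 2, 1]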
From a conceptual point of view, I think the main difference is where the perceived 'driver' of the action sits. When I use generators, I think of them as streaming building blocks for another, more complex step that is built (and driven) on top of the stream. In other words, I think from the bottom up. It lets me forget about keeping important state, because that is taken care of 'above me'. Let me give you a simple example.
Let's say you want to write an interpreter for a simple language. The base building block of the parser will be reading characters from a file ('input'). After that step comes breaking that stream into tokens (say, splitting at whitespace), and after that comes parsing the grammar by looking at which tokens the tokenizer returns. If the tokenizer returns 'exit', we should exit. You can think of the parser as a function composition like this:
exit_status = interpreter . tokenizer . char_reader (input)
The beauty of this is that the responsibility for each step is clearly defined by the level you are at, and 'input' could be infinite for all we care; the interpreter would still work (think of streaming, persistent-connection protocols like XMPP).
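To make that concrete, here is a minimal sketch of how that composition might look with generators (the names follow the pseudocode above; reading from sys.stdin stands in for 'input', and the 'interpreter' just prints tokens, so this only illustrates the shape):

    import sys

    def char_reader(stream):
        # Lowest level: yield one character at a time; the stream may never end.
        while True:
            ch = stream.read(1)
            if not ch:          # end of input
                return
            yield ch

    def tokenizer(chars):
        # Middle level: group the character stream into whitespace-separated tokens.
        token = ''
        for ch in chars:
            if ch.isspace():
                if token:
                    yield token
                    token = ''
            else:
                token += ch
        if token:
            yield token

    def interpreter(tokens):
        # Top level: the single loop that drives everything below it.
        for token in tokens:
            if token == 'exit':
                return 0
            print('got token:', token)
        return 0

    exit_status = interpreter(tokenizer(char_reader(sys.stdin)))

Notice that neither char_reader nor tokenizer holds on to 'input' or to the layers around it; each one just consumes whatever stream it is handed.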
If you go the other way around and write this as an Interpreter class encapsulating a Tokenizer, which in turn encapsulates a CharReader, then each class has to keep more state (the 'input' has to be passed down) and the classes have to know a lot more about each other: the Interpreter needs to tell the Tokenizer what to tokenize, and the Tokenizer needs to ask the CharReader to read.
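For contrast, a rough sketch of that class-based shape (again, made-up code, not a recommendation):

    class CharReader:
        def __init__(self, stream):
            self.stream = stream            # has to hold on to the input

        def read_char(self):
            return self.stream.read(1)      # '' at end of input

    class Tokenizer:
        def __init__(self, reader):
            self.reader = reader            # has to know about CharReader

        def next_token(self):
            token = ''
            while True:
                ch = self.reader.read_char()
                if not ch:
                    return token or None    # None signals end of input
                if ch.isspace():
                    if token:
                        return token
                else:
                    token += ch

    class Interpreter:
        def __init__(self, tokenizer):
            self.tokenizer = tokenizer      # has to know about Tokenizer

        def run(self):
            # The same driving loop, but now every layer below also carries state.
            while True:
                token = self.tokenizer.next_token()
                if token is None or token == 'exit':
                    return 0
                print('got token:', token)

    # exit_status = Interpreter(Tokenizer(CharReader(sys.stdin))).run()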
Well, at least in my mind. Obviously it is completely subjective at this point, but personally I like it when I can drive all the logic of my (possibly infinite) loop from a single loop statement :)