> 1) Raw inference speed matters [most] for dev UX—agree or disagree?
Or maybe incremental content-assist and full-file problem-solving are two significantly different uses, though they're both dev UX use cases.
Because they're confusingly similar, comparing them (and denigrating full-file solutions) wastes time and energy. You muddy your own message.
Just concentrate on showing the value of what you do where and when. To wit...
In the inference case, you're really using context to provide affordances -- next steps. In the full-file case, you're starting instead from a goal statement, with context providing constraints.
I think where you want to go is to show when the tool anticipates where you *should* go; i.e., the extent to which it can lead junior developers to the next step, and senior developers to the next constraint/issue they're ignoring.
I believe just as "attention is all you need" surprised people, this kind of bottom-up approach has more legs than people expect.
I understand the naked probability model is trained on the world's code corpus; what would interest me is whether you can also create a model that learns the developer's biases.
Then the work is to see the issues in the context, but address them in the order and manner that the developer would. Lock-in would occur because, well, the system understands me. And it would be particularly nice when Programmer A wants to code like Programmer B. If your assistant has a model of Programmer B, the assistant could guide Programmer A in that direction.
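To make that concrete, here's a rough sketch of what "address them in the order the developer would" could look like, assuming some per-developer preference model already exists. Every name here (DeveloperProfile, preference_score, the category weights) is hypothetical and purely illustrative, not anyone's actual API:

```python
# Hypothetical sketch: rank the issues found in the current context by a learned,
# per-developer preference over issue categories. All names are illustrative.
from dataclasses import dataclass

@dataclass
class DeveloperProfile:
    # Learned weights over issue categories, e.g. distilled from the developer's
    # past edits and review comments.
    category_weights: dict[str, float]

def preference_score(profile: DeveloperProfile, issue_category: str) -> float:
    # Neutral score for categories the profile has never seen.
    return profile.category_weights.get(issue_category, 0.5)

def order_issues(profile, issues):
    # issues: (category, description) pairs found in the current context.
    # Surface them in the order this particular developer tends to tackle them.
    return sorted(issues, key=lambda it: preference_score(profile, it[0]), reverse=True)

# Example: Programmer B's profile guiding the order in which issues are surfaced,
# which is also how Programmer A could be nudged toward B's habits.
profile_b = DeveloperProfile(category_weights={"naming": 0.9, "error-handling": 0.7, "perf": 0.2})
issues = [("perf", "tight loop allocates per iteration"),
          ("naming", "tmp2 is unclear"),
          ("error-handling", "missing timeout on HTTP call")]
for category, description in order_issues(profile_b, issues):
    print(category, "-", description)
```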
"Creating models that learn developer biases" has a great ring to it - maybe we should make that our mission statement. Thats exactly what we're doing with our models. The Next-Edit completion model especially resonates with this
Now if you meant one step further - the literal single developer - that's probably best served in context, albeit with a model that's learned developer biases.
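A minimal sketch of what "served in context" could mean here, assuming a base model that has already learned general developer biases: the individual's own recent edits get packed into the prompt as exemplars. No particular assistant API is implied; the function and its parameters are made up for illustration:

```python
# Hypothetical sketch: personalize to one developer by putting their recent
# before/after edits into the prompt, on top of a bias-aware base model.
def build_personalized_prompt(recent_edits: list[tuple[str, str]], current_code: str) -> str:
    parts = ["You are suggesting the next edit in this developer's personal style.",
             "Examples of how this developer edits code:"]
    for before, after in recent_edits:
        parts.append(f"BEFORE:\n{before}\nAFTER:\n{after}")
    parts.append("Now suggest the next edit for:\n" + current_code)
    return "\n\n".join(parts)

# Example usage with a single (made-up) prior edit showing a naming preference.
prompt = build_personalized_prompt(
    recent_edits=[("x = get()", "value = get()  # descriptive names preferred")],
    current_code="def f(a, b):\n    return a+b",
)
print(prompt)
```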