The thing is... you shouldn't be able to guarantee that, and the idea that you can is the root of a lot of academia's problems.
The results of your experiment should be determined by the nature of whatever you're studying, nothing more. Sometimes they'll be very clean, with obvious, high-impact applications. Sometimes they'll be muddled by unforeseeable complications. Obviously, skill can help you turn the latter into the former, but there's a huge amount of luck involved there too.
It would be far, far better if people were judged on their ability to find and rigorously test interesting questions, rather than whatever crapshoot nature spits out.
>> It would be far, far better if people were judged on their ability to find and rigorously test interesting questions, rather than whatever crapshoot nature spits out.
"Strong results" does not mean "positive results".
For example, two seminal works in ILP are the doctoral theses of Ehud Shapiro and Gordon Plotkin, both of which found strong negative results regarding the learning of first-order logic theories from examples. I would also point out the work of E. Mark Gold and others on inductive inference, which was mostly negative regarding the ability to learn anything above finite automata from examples.
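(For context: the central negative result I have in mind is Gold's theorem on identification in the limit, from his 1967 paper "Language Identification in the Limit". I'm paraphrasing from memory here, so treat the exact statement with caution:)

    % Gold (1967), paraphrased: a "superfinite" class (one containing
    % every finite language plus at least one infinite language) cannot
    % be identified in the limit from positive examples alone.
    \[
      \{\, L : L \ \text{is finite} \,\} \cup \{ L_{\infty} \} \subseteq \mathcal{L}
      \;\Longrightarrow\;
      \mathcal{L}\ \text{is not identifiable in the limit from positive data}
    \]

Since the regular languages form a superfinite class, even the languages of finite automata are out of reach from positive examples alone.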
It is true that in machine learning it is considered mandatory to show, experimentally, significant improvements in performance; personally, I think that's a big mistake and a severe impediment to actual progress. In any case, in my field you need theoretical results (theorems and proofs) for your work to pass muster.
This. What kind of prospectus defense or oral exam did he have if he didn’t come up with the question? Why even have a different committee or a prospectus at all if the advisor just hands you the question?