
I agree that professors can probably develop AI-resistant evaluations.

But there is minimal institutional support for this. Everybody is going it alone. And the AI tools are changing rapidly. Telling a non-tenure-track faculty member teaching a 4/4 load that they just need to stop being lazy and redo their entire evaluation method is not really workable.

Iteration is also slow. You write a new syllabus to try to discourage cheating. You run the course for a semester. You see the results with only a few weeks left to turn around your spring semester courses. At best you get one iteration cycle per six months. More likely it is one per year, since it is basically impossible to meaningfully digest the outcome of what you tried and then meaningfully try something different mere weeks later.



I've been urging this kind of adaptation for the past couple of years, with very little success. Switching to evaluation based solely on on-site work was simple to do, but creating new forms of assessed work has been largely a non-starter. Young professors are focused on their publication count, and most old codgers (like me) have their eyes on retirement, hoping to escape before accountability hits the fan. When I offer to help explore restructuring their teaching materials, the typical response is glazed eyes and a sudden awareness of the need to be elsewhere.



