We need to think carefully about which tasks are actually suitable for LLMs. Used poorly, they'll gut our ability to think deeply. The push, IMO, should be to use them for verification and clarification, not as a replacement for understanding and creativity.
Example: Do the problem sets yourself. If you're getting questions wrong, dig deeper with an AI assistant to find gaps in your knowledge. Do NOT let the AI do the problem sets first.
It's similar to how we used calculators in school, at least in the 2010s: we learned the principles behind the formulae and how to work them by hand before calculators were introduced to abstract away the manual steps.
I've let that core principle shape how we're designing our paper-reading assistant, though I'm still thinking through the UX patterns -- https://openpaper.ai/blog/manifesto.