Though I do remember now that we solved it with a separate mechanism for pages that required logging in or relied heavily on client-side rendering: the user recorded a macro, which was then played back in a headless browser. Within a few years, though, it was obvious that a crawler would need to handle client scripts automatically.
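The macro idea can be sketched roughly like this: the recording is just an ordered list of steps, and playback replays them against a browser session. This is a minimal, self-contained illustration, not the actual system; a real implementation would drive a headless browser (e.g. via Selenium or Playwright), and the driver, URL, and selectors below are all made up for the example.

```python
class StubDriver:
    """Stand-in for a headless-browser session; real code would wrap a browser."""
    def __init__(self):
        self.log = []  # record of actions performed, for inspection

    def goto(self, url):
        self.log.append(("goto", url))

    def fill(self, selector, value):
        self.log.append(("fill", selector, value))

    def click(self, selector):
        self.log.append(("click", selector))


def replay(macro, driver):
    """Replay recorded (action, *args) steps in order against the driver."""
    for action, *args in macro:
        getattr(driver, action)(*args)
    return driver.log


# A macro a user might have recorded for a login-protected page
# (the URL and selectors are hypothetical).
macro = [
    ("goto", "https://example.com/login"),
    ("fill", "#user", "alice"),
    ("fill", "#pass", "s3cret"),
    ("click", "#submit"),
]

log = replay(macro, StubDriver())
```

Once the macro has logged in, the crawler can fetch the now-accessible pages through the same session, which is what made this workable before crawlers could execute client scripts themselves.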