Link: https://www.nlpdemystified.org/

Hi HN:
After a year of work, I've published my free NLP course. The course helps anyone who knows Python and a bit of math go from the basics to today's mainstream models and frameworks.
I strive to balance theory and practice, so each module consists of detailed explanations and slides, and most modules also include a Colab notebook putting the ideas into practice.
The notebooks cover how to accomplish everyday NLP tasks, including extracting key information, searching documents, measuring text similarity, classifying text, finding topics in documents, summarizing, translating, generating text, and answering questions.
The course is divided into two parts. In part one, we cover text preprocessing, how to turn text into numbers, and multiple ways to classify and search text using "classical" approaches. And along the way, we'll pick up valuable bits on how to use tools such as spaCy and scikit-learn.
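To give a flavour of what "classical" means in part one, here's a minimal sketch of the kind of scikit-learn pipeline it builds up to: turn text into numbers, then feed a linear classifier. The toy reviews and labels below are made up purely for illustration.

    # Bag-of-words/TF-IDF features feeding a linear classifier:
    # the "turn text into numbers, then classify" recipe from part one.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "great food and friendly service",
        "cold food and a long wait",
        "loved the tacos",
        "the salsa was bland",
    ]
    train_labels = ["pos", "neg", "pos", "neg"]

    # TfidfVectorizer maps raw strings to sparse vectors;
    # LogisticRegression learns to separate the classes in that space.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(train_texts, train_labels)
    print(clf.predict(["the tacos were great"]))  # likely ['pos']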
In part two, we dive into deep learning for NLP. We start with neural network fundamentals and go through embeddings and sequence models until we arrive at transformers and the mainstream models of today.
No registration required: https://www.nlpdemystified.org/
Along the way, I've been struggling with a question, and I hope someone can help me understand how to go about this: how would you build a model that does more than one NLP task? For a simple classifier, like input: text (a tweet) and output: text (an emotion), you can fine-tune an existing classifier on such a data set. But how would you build a model that does both NER and sentiment analysis? E.g. input: text (a Yelp review of a restaurant) and output: a list of (entity, sentiment) tuples, e.g. [("tacos", "good"), ("margaritas", "good"), ("salsa", "bad")]. If you have a data set structured this way and want to fine-tune a model, how does that model know how to make use of a Python list of tuples?
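One way this kind of joint task is often framed is text-to-text: serialize each list of (entity, sentiment) tuples into a flat string target so the model only ever sees plain text on both sides, then fine-tune a seq2seq model on those (input text, target text) pairs. A rough sketch of that serialization, where the review sentence is made up to match the example labels and the "entity: sentiment; ..." format is just an illustrative choice, not any library's convention:

    # Serialize (entity, sentiment) tuples into a plain-text target and back,
    # so a generic seq2seq fine-tuning recipe only ever deals with strings.

    def tuples_to_target(pairs):
        # [("tacos", "good"), ...] -> "tacos: good; margaritas: good; ..."
        return "; ".join(f"{entity}: {sentiment}" for entity, sentiment in pairs)

    def target_to_tuples(text):
        # Parse the generated string back into (entity, sentiment) tuples.
        pairs = []
        for chunk in text.split(";"):
            if ":" in chunk:
                entity, sentiment = chunk.split(":", 1)
                pairs.append((entity.strip(), sentiment.strip()))
        return pairs

    review = "The tacos and margaritas were great, but the salsa was bland."
    labels = [("tacos", "good"), ("margaritas", "good"), ("salsa", "bad")]

    source, target = review, tuples_to_target(labels)
    print(target)                              # tacos: good; margaritas: good; salsa: bad
    print(target_to_tuples(target) == labels)  # True: the format round-trips

With that framing, the (source, target) pairs look like any other translation or summarization dataset, and the model never needs to know anything about Python lists; the list structure only exists before serialization and after parsing the generated text.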