T0 builds on T5 by fine-tuning on more natural prompts and testing the model’s generalization to held-out tasks.
Compare the training format diagrams for T5 (top) and T0 (bottom).
Intuitively, the T0 prompts are more likely to resemble the implicit and explicit prompting already present in the pretraining data. The authors created several prompts for each dataset.
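To make the contrast concrete, here's a minimal sketch of the two input formats for a single NLI example. The exact template wording is my own illustration of the style of each format, not copied from either paper:

```python
# Sketch contrasting the two training formats (template wording is
# illustrative, not taken from the T5 or T0 papers).

def t5_format(premise: str, hypothesis: str) -> str:
    # T5 marks the task with a fixed prefix and terse field tags.
    return f"mnli premise: {premise} hypothesis: {hypothesis}"

def t0_format(premise: str, hypothesis: str) -> str:
    # T0 phrases the same example as a natural-language question,
    # closer to prompts that occur organically in pretraining text.
    return (f'{premise} Based on the previous passage, is it true that '
            f'"{hypothesis}"? Yes, no, or maybe?')

premise = "The cat sat on the mat."
hypothesis = "An animal is resting."
print(t5_format(premise, hypothesis))
print(t0_format(premise, hypothesis))
```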
Update 2022-04-27: The beta is over, but the apk is still installable with the instructions below and any feedback sent from inside the app will be received by me. I’m going to be working on this more over the summer, and eventually publishing it on the app store. :)
Ever since I learned Spanish, it has been a dream of mine to create a vocabulary study app that meets my needs. Duolingo won’t cover advanced vocabulary, Anki requires manually generated decks, and other apps have expensive subscription plans.
This was a paper I presented in Bang Liu’s research group meeting on 2022-04-11. You can view the slides I used here.
We presented this paper as a mini-lecture in Bang Liu’s IFT6289 course in winter 2022. You can view the slides we used here.
This post was created as an assignment in Bang Liu’s IFT6289 course in winter 2022. The structure of the post follows the structure of the assignment: summarization followed by my own comments.
paper summarization

Word embeddings have gotten so good that state-of-the-art sentence classification can often be achieved with just a one-layer convolutional network on top of those embeddings. This paper dials in on the specifics of training that convolutional layer for this downstream sentence classification task.
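For a concrete picture of that model family, here is a minimal PyTorch sketch: pretrained word embeddings (stood in for here by a randomly initialized embedding layer, to keep the example self-contained), one convolutional layer with several filter widths, max-over-time pooling, and a linear classifier. The hyperparameters are illustrative defaults, not the paper's tuned values:

```python
import torch
import torch.nn as nn

class OneLayerTextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, num_classes=2,
                 filter_widths=(3, 4, 5), num_filters=100, dropout=0.5):
        super().__init__()
        # In the paper's setting this would be initialized from
        # pretrained word vectors; random init keeps the sketch runnable.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One 1-D convolution per filter width over the embedded sequence.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, w) for w in filter_widths
        )
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(num_filters * len(filter_widths), num_classes)

    def forward(self, token_ids):            # (batch, seq_len)
        x = self.embedding(token_ids)        # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                # Conv1d wants (batch, channels, seq_len)
        # Max-over-time pooling collapses each feature map to one value.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        features = torch.cat(pooled, dim=1)  # (batch, num_filters * num_widths)
        return self.fc(self.dropout(features))

model = OneLayerTextCNN(vocab_size=20000)
logits = model(torch.randint(0, 20000, (8, 40)))  # 8 sentences, 40 tokens each
print(logits.shape)  # torch.Size([8, 2])
```

The filter widths, number of filters, and dropout rate are exactly the kinds of knobs whose effects the paper investigates.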