Artificial Intelligence


Improving Language Model Performance with Smarter Vocabularies

Author: Brad Jascob

In the field of language modeling, neural network models have become popular due to their ability to reach low perplexity scores. A common approach to training these models is to use a large corpus, such as the Billion Word Corpus, and restrict the vocabulary to the top N most common words (tokens). The less common words are then replaced with an "unknown" token, which becomes a single representation for all low-occurrence words, even though those words may not be closely related semantically. Conversely, some closely related tokens, such as numbers, may be common enough to each receive a unique integer ID when we might prefer that they be combined under a single ID. In the following article, we'll explore using part-of-speech (POS) tagging to identify word types and then use this information to create a "smarter" vocabulary. We'll show that a model trained with this smarter vocabulary achieves a lower perplexity score, at a given epoch, than a similar model trained with a top-N vocabulary.
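To make the approach concrete, below is a minimal sketch of how such a POS-aware vocabulary might be built. This is an illustration rather than the paper's exact method: the class tokens (<number>, <proper_noun>, <unk>), the choice of which Penn Treebank tags to collapse, and the use of NLTK's tokenizer and tagger are all assumptions made for this example.

from collections import Counter

import nltk  # assumes the 'punkt' and 'averaged_perceptron_tagger' data are installed

def build_smart_vocab(sentences, top_n=10000):
    """Build a vocabulary that collapses POS-identified word classes.

    Cardinal numbers (tag CD) and proper nouns (NNP/NNPS) are folded into
    single class tokens instead of each consuming a vocabulary slot; the
    remaining words are ranked by frequency, with the tail mapped to <unk>.
    """
    counts = Counter()
    for sent in sentences:
        for word, tag in nltk.pos_tag(nltk.word_tokenize(sent)):
            if tag == 'CD':                  # all numbers share one ID
                counts['<number>'] += 1
            elif tag in ('NNP', 'NNPS'):     # all proper nouns share one ID
                counts['<proper_noun>'] += 1
            else:
                counts[word] += 1
    # Reserve IDs for the special tokens, then fill with the most frequent words.
    vocab = {'<unk>': 0, '<number>': 1, '<proper_noun>': 2}
    for word, _ in counts.most_common():
        if len(vocab) >= top_n:
            break
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def encode(sentence, vocab):
    """Map a sentence to integer IDs using the POS-aware vocabulary."""
    ids = []
    for word, tag in nltk.pos_tag(nltk.word_tokenize(sentence)):
        if tag == 'CD':
            ids.append(vocab['<number>'])
        elif tag in ('NNP', 'NNPS'):
            ids.append(vocab['<proper_noun>'])
        else:
            ids.append(vocab.get(word, vocab['<unk>']))
    return ids

Under this scheme, every numeral shares one embedding row and one softmax output, which frees vocabulary slots for genuinely distinct words; a plain top-N cutoff would instead spend many slots on common individual numbers while mapping rare but ordinary words to <unk>.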

Comments: 5 Pages.


Submission history

[v1] 2019-01-22 08:30:24


