Personalized language models improve social media user experience and engagement.
Language models personalized to individual social media users can outperform generic models. Combining n-gram and neural language models trained on large background corpora with small amounts of a user's own text yields better performance than either source alone. Models trained only on user-specific data perform poorly in isolation but improve markedly when combined with the larger background models. N-gram and neural models are complementary, and combining the two gives further gains. Finally, perplexity, the standard intrinsic measure for language models, does not accurately reflect performance on the next-word-prediction task found on smartphone keyboards.
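The combination described above is often realized as linear interpolation of the two models' word probabilities. The following is a minimal sketch of that idea using invented toy unigram distributions (the vocabularies, probabilities, and the mixing weight `lam` are all assumptions for illustration, not values from the work); it also shows how perplexity is computed, since the abstract contrasts perplexity with next-word-prediction performance.

```python
import math

# Toy unigram "models": a large background model and a small user model.
# All numbers here are invented for illustration only.
background = {"the": 0.4, "cat": 0.3, "sat": 0.2, "selfie": 0.1}
user = {"the": 0.2, "cat": 0.1, "sat": 0.1, "selfie": 0.6}

def interpolate(w, lam=0.7):
    # Linear interpolation: P(w) = lam * P_bg(w) + (1 - lam) * P_user(w)
    return lam * background[w] + (1 - lam) * user[w]

def perplexity(words, prob):
    # Perplexity = exp(-(1/N) * sum over words of log P(w))
    return math.exp(-sum(math.log(prob(w)) for w in words) / len(words))

# A user whose text leans toward "selfie" is modeled better by the mixture
# than by the background model alone (lower perplexity is better).
text = ["the", "selfie", "the", "selfie"]
ppl_bg = perplexity(text, lambda w: background[w])
ppl_mix = perplexity(text, interpolate)
print(ppl_bg > ppl_mix)  # the interpolated model fits the user's text better
```

In practice the mixing weight would be tuned per user on held-out text, and the component models would be full n-gram or neural models rather than unigram tables.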