Revolutionizing Answer Selection: BERT Model Boosts Accuracy by Up to 18.7%
The researchers fine-tuned the pre-trained language model BERT for answer selection, the task of ranking candidate answers to a given question. Across both standard Question Answering (QA) and Community Question Answering (CQA) benchmarks, the fine-tuned model delivered substantial gains over previous state-of-the-art methods: up to 13.1% on QA datasets and up to 18.7% on CQA datasets.
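To make the setup concrete, here is a minimal sketch of how BERT is commonly applied to answer selection: each (question, candidate answer) pair is fed to the model as a single sentence-pair input, and candidates are ranked by the predicted relevance score. The checkpoint name, label convention, and use of the Hugging Face `transformers` API are illustrative assumptions, not the authors' exact code or training configuration.

```python
# Sketch: ranking candidate answers with a BERT sentence-pair classifier.
# Assumes a checkpoint fine-tuned so that label 1 means "relevant answer";
# "bert-base-uncased" below is a placeholder, not the paper's model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # assumption: stand-in for a fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model.eval()

question = "What is the capital of France?"
candidates = [
    "Paris is the capital and largest city of France.",
    "The Eiffel Tower was completed in 1889.",
]

# Encode each pair as one BERT input: [CLS] question [SEP] candidate [SEP]
inputs = tokenizer(
    [question] * len(candidates),
    candidates,
    padding=True,
    truncation=True,
    return_tensors="pt",
)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (num_candidates, 2)

# Probability that each candidate answers the question; rank by this score.
scores = logits.softmax(dim=-1)[:, 1]
best = scores.argmax().item()
print(f"Best candidate: {candidates[best]} (score={scores[best]:.3f})")
```

In this cross-encoder formulation, fine-tuning only requires pairs of questions and candidate answers with relevance labels, which is why the same recipe transfers readily between QA and CQA datasets.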