This project conducts research on the "Performance Analysis of Different Word Embeddings and Transformers on Fake News Detection". We compare the word embeddings Word2Vec, GloVe, and ELMo, as well as the transformers BERT, ALBERT, and DistilBERT. The work proceeds in three phases, each focusing on a different aspect of fake news detection: Phase 1 compares the performance of the different embedding layers and transformers in a general setting, Phase 2 examines the capabilities of representative models under low-resource settings, and Phase 3 explores the possibility of transfer learning by pretraining these representative models.
This project was developed for COMP4211: Machine Learning.