dc.description.abstract |
Cybertext classification is crucial in areas such as sentiment analysis, spam detection, and subject classification, and the demand for accurate and efficient classification methods has grown dramatically with the exponential expansion of digital content. This research paper presents a thorough comparative analysis of cybertext categorization using transfer learning and four state-of-the-art models: DistilBERT, BERT, XLNet, and RoBERTa. By leveraging pre-trained models and extensive corpus-based knowledge, transfer learning has proven highly effective in natural language processing tasks. DistilBERT, BERT, XLNet, and RoBERTa are powerful transformer-based models with outstanding performance on numerous NLP benchmarks; their effectiveness and applicability in cybertext classification tasks, however, have not been fully investigated. The study begins by fine-tuning these models on a sizable cybertext dataset, drawing on techniques including tokenization, attention mechanisms, and contextual embeddings. The models' effectiveness at categorizing cybertext is measured using performance metrics such as accuracy, precision, recall, and F1-score. The research also examines the effect of transfer learning on model performance by comparing it with conventional training techniques: the models are first trained on a distinct but related task, and the learned knowledge is then transferred to the cybertext classification problem, yielding better results. |
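For reference, the evaluation metrics named in the abstract have their standard definitions in terms of true/false positives and negatives (TP, FP, TN, FN); these are the textbook formulas, not anything specific to this paper:

    Accuracy  = (TP + TN) / (TP + TN + FP + FN)
    Precision = TP / (TP + FP)
    Recall    = TP / (TP + FN)
    F1-score  = 2 * Precision * Recall / (Precision + Recall)

For multi-class cybertext categorization, precision, recall, and F1 are typically macro- or weighted-averaged across classes.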
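The following is a minimal sketch of the kind of fine-tuning and evaluation pipeline the abstract describes, using the Hugging Face Transformers library. The dataset ("imdb"), label count, subsample sizes, and hyperparameters are illustrative placeholders, not the paper's actual corpus or setup:

    import numpy as np
    from datasets import load_dataset
    from sklearn.metrics import accuracy_score, precision_recall_fscore_support
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    MODEL_NAME = "distilbert-base-uncased"  # swap in bert/xlnet/roberta checkpoints to compare
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    # Placeholder corpus; substitute the cybertext dataset used in the study.
    dataset = load_dataset("imdb")

    def tokenize(batch):
        # Subword tokenization with truncation/padding, as described in the abstract.
        return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

    dataset = dataset.map(tokenize, batched=True)
    # Small subsamples so the sketch runs quickly.
    train_ds = dataset["train"].shuffle(seed=0).select(range(2000))
    test_ds = dataset["test"].shuffle(seed=0).select(range(500))

    def compute_metrics(eval_pred):
        # Accuracy, precision, recall, and F1-score, the metrics named above.
        logits, labels = eval_pred
        preds = np.argmax(logits, axis=-1)
        p, r, f1, _ = precision_recall_fscore_support(labels, preds, average="binary")
        return {"accuracy": accuracy_score(labels, preds),
                "precision": p, "recall": r, "f1": f1}

    args = TrainingArguments(output_dir="out", num_train_epochs=2,
                             per_device_train_batch_size=16)
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_ds, eval_dataset=test_ds,
                      compute_metrics=compute_metrics)
    trainer.train()
    print(trainer.evaluate())

Because fine-tuning starts from pre-trained weights, this is the transfer-learning setting the paper contrasts with conventional training from scratch; rerunning the same script with each of the four checkpoints gives the comparative numbers the abstract refers to.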
en_US |