Transformer-based Mobile Traffic Prediction in Internet of Vehicular Networks
DOI: https://doi.org/10.59543/jidmis.v2i.14303

Keywords: Internet of Vehicular Things, Mobile Traffic Prediction, Deep Learning, Transformer

Abstract
This paper presents a Transformer-based mobile traffic prediction model for the Internet of Vehicular Things (IoVT), addressing the challenges of handling long-term dependencies and improving computational efficiency in mobile traffic forecasting. The proposed model integrates a gated residual attention unit (GRAU) and a channel embedding (CE) technique, leveraging the strengths of the Transformer architecture to enhance predictive accuracy and efficiency while preserving recurrent dynamics. In experiments on a real-world dataset, the model outperformed existing baselines in both root mean square error (RMSE) and mean absolute error (MAE), demonstrating its effectiveness at capturing complex temporal patterns in mobile traffic data. The study contributes to IoVT research by providing a robust prediction tool that can optimize network resource allocation and support the development of intelligent transport systems within smart city frameworks.
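The abstract does not give the GRAU's exact equations, but the idea of a gated residual attention unit can be illustrated as self-attention whose output is blended with its input through a learned sigmoid gate. The sketch below is a minimal, hypothetical interpretation in numpy; the weight matrices `Wq`, `Wk`, `Wv`, `Wg` and the gating formula are assumptions for illustration, not the paper's actual parameterization.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def gated_residual_attention(x, Wq, Wk, Wv, Wg, bg):
    """Hypothetical GRAU-style block: scaled dot-product self-attention
    whose output is mixed with the input x via a sigmoid gate g, so the
    unit can fall back to a plain residual path when attention is unhelpful."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])       # scaled dot-product scores
    attn_out = softmax(scores) @ v                # attention output per time step
    g = 1.0 / (1.0 + np.exp(-(x @ Wg + bg)))      # elementwise gate in (0, 1)
    return g * attn_out + (1.0 - g) * x           # gated residual blend

# Toy usage: 8 time steps of a d=4 traffic feature sequence
rng = np.random.default_rng(0)
T, d = 8, 4
x = rng.standard_normal((T, d))
Wq, Wk, Wv, Wg = (rng.standard_normal((d, d)) * 0.1 for _ in range(4))
bg = np.zeros(d)
y = gated_residual_attention(x, Wq, Wk, Wv, Wg, bg)
```

The gate lets each feature channel interpolate between the attention output and the raw input, which is one plausible way to keep recurrent-style dynamics inside a Transformer block.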
License

This work is licensed under a Creative Commons Attribution 4.0 International (CC-BY 4.0) License. JIDMIS publishes Open Access under this license; authors retain full copyright, with the first publication right granted to the journal.