Abstract

In recent years, spatio-temporal graph neural networks (GNNs) have been used successfully to improve traffic prediction by modeling intricate spatio-temporal dependencies in irregular traffic networks. However, these approaches may fail to capture the intrinsic properties of traffic data and can suffer from overfitting due to their local nature. This paper introduces the Implicit Sensing Self-Supervised learning model (ISSS), which leverages a multi-pretext-task framework for traffic flow prediction. By transforming data into an alternative feature space, ISSS effectively captures both specific and general representations through self-supervised tasks, including contrastive learning and spatial jigsaw puzzles. This promotes a deeper understanding of traffic features, stronger regularization, and more accurate representations. Comparative experiments on six datasets demonstrate the effectiveness of ISSS in learning general and discriminative features in both supervised and unsupervised modes. ISSS outperforms existing models, improving traffic flow prediction while mitigating the challenges associated with local operations and overfitting. Comprehensive evaluations across these traffic prediction datasets establish the validity of the proposed approach. In unsupervised learning scenarios, RMSE on the METR-LA and PEMS-BAY datasets improved by 0.39 and 0.35 for location-dependent and location-independent tasks, respectively. In supervised learning scenarios, on the same datasets, the improvements were 1.16 for location-dependent tasks and 0.55 for location-independent tasks.