Wireless sensor networks are deployed to monitor an area of interest. Even when the sensors are properly calibrated at the time of deployment, they develop drift in their readings, leading to erroneous network inferences. Based on the assumptions that neighbouring sensors have correlated measurements and that drifts in different sensors are uncorrelated, the authors present a novel algorithm for detecting and correcting sensor measurement errors. They use statistical modelling rather than physical relations to capture the spatio-temporal cross-correlations among sensors, which in principle makes the framework applicable to most sensing problems. In the training phase, each sensor in the network trains a support vector regression model on its neighbours' corrected readings to obtain predictions of its own future measurements. In the running phase, each node uses these predictions, in a recursive decentralised fashion, to assess its own measurements and to detect and correct its drift and random error with an unscented Kalman filter. No assumptions are made about the linearity of the drift or the density (closeness) of the sensor deployment. Using real data from the Intel Berkeley Research Laboratory, the authors demonstrate that the proposed algorithm successfully suppresses drift developed in the sensors and thereby prolongs the effective lifetime of the network.
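The sketch below illustrates, in a much simplified form, the two phases described above: a per-node regression trained on neighbours' corrected readings, followed by recursive estimation of the node's own drift from the residual between its raw reading and the neighbour-based prediction. It is not the authors' implementation; the paper uses an unscented Kalman filter, whereas here a scalar linear Kalman filter with a random-walk drift model stands in for brevity, and the class, method, and parameter names (DriftCorrectingNode, process_var, meas_var, etc.) are hypothetical.

```python
import numpy as np
from sklearn.svm import SVR


class DriftCorrectingNode:
    """One sensor node: predicts its own reading from its neighbours'
    corrected readings (training phase), then tracks its additive drift
    with a simple scalar Kalman filter (running phase)."""

    def __init__(self, process_var=1e-4, meas_var=0.5):
        self.svr = SVR(kernel="rbf", C=10.0, epsilon=0.01)
        self.drift = 0.0                 # current drift estimate
        self.drift_var = 1.0             # uncertainty of the drift estimate
        self.process_var = process_var   # how quickly drift may wander
        self.meas_var = meas_var         # noise of the (raw - predicted) residual

    def train(self, neighbour_history, own_history):
        # neighbour_history: (T, k) array of k neighbours' corrected readings
        # own_history:       (T,)  array of this node's drift-free readings
        self.svr.fit(neighbour_history, own_history)

    def correct(self, neighbour_readings, raw_reading):
        # Predicted "true" value inferred from neighbours' corrected readings.
        predicted = self.svr.predict(neighbour_readings.reshape(1, -1))[0]

        # Kalman predict step: drift modelled as a slow random walk.
        self.drift_var += self.process_var

        # Kalman update step: the residual (raw - predicted) observes the drift.
        residual = raw_reading - predicted
        gain = self.drift_var / (self.drift_var + self.meas_var)
        self.drift += gain * (residual - self.drift)
        self.drift_var *= (1.0 - gain)

        # Corrected reading, which neighbours would use at the next step.
        return raw_reading - self.drift
```

In the decentralised scheme summarised above, each node would run such a filter locally and broadcast only its corrected reading, so neighbours always regress against drift-compensated values rather than raw ones.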