In this paper, we employ graph embeddings for classification tasks. To do this, we explore the relationship between kernel matrices, inner product spaces and statistical inference by viewing the embedding vectors for the nodes of a graph as a field on a Riemannian manifold. This leads to a setting in which the inference process can be cast as maximum a posteriori (MAP) estimation over a Gibbs field, whereby the graph Laplacian is related to a Gram matrix of scalar products. This not only yields a better understanding of graph spectral techniques, but also provides a means of classifying nodes in the graph without computing the embedding explicitly, by using a Mercer kernel. We illustrate the utility of these developments for classification, employing the graph Laplacian as a kernel matrix. We present classification results on synthetic data and four UCI datasets. We also apply our method to real-world image labelling and compare our results to those yielded by alternatives elsewhere in the literature.
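As a minimal sketch of the central idea, consider classifying nodes directly from a Laplacian-derived kernel, with no explicit embedding. The sketch below assumes the regularized Laplacian kernel K = (I + beta*L)^{-1}, a standard Mercer kernel on graphs; the kernel choice, the SVM classifier, and the synthetic two-cluster graph are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data: a two-cluster graph with denser intra-cluster edges.
rng = np.random.default_rng(0)
n = 40
labels = np.array([0] * (n // 2) + [1] * (n // 2))
P = np.where(labels[:, None] == labels[None, :], 0.5, 0.05)
W = (rng.random((n, n)) < P).astype(float)
W = np.triu(W, 1)
W = W + W.T  # symmetric adjacency matrix, zero diagonal

# Graph Laplacian L = D - W, with D the diagonal degree matrix.
D = np.diag(W.sum(axis=1))
L = D - W

# Regularized Laplacian kernel K = (I + beta*L)^{-1}: positive definite,
# hence a valid Mercer kernel, so nodes can be classified without
# computing an explicit embedding.  beta is an assumed hyperparameter.
beta = 1.0
K = np.linalg.inv(np.eye(n) + beta * L)

# Transductive setting: train on half the labelled nodes, predict the rest,
# feeding the kernel submatrices directly to a precomputed-kernel SVM.
train = rng.choice(n, size=n // 2, replace=False)
test = np.setdiff1d(np.arange(n), train)
clf = SVC(kernel="precomputed").fit(K[np.ix_(train, train)], labels[train])
pred = clf.predict(K[np.ix_(test, train)])
print("test accuracy:", (pred == labels[test]).mean())
```

The precomputed-kernel interface is what makes the implicit-embedding view practical: only Gram-matrix entries between node pairs are ever needed, mirroring the role the Mercer kernel plays in the text above.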