Surprise is an emotion that emerges from the mismatch between expectations and what is actually experienced or observed. In this master's dissertation our concern is not with the emotion of surprise but with its quantification. We look at surprise in the context of deep learning, where being surprised means that a neural network is processing data it did not expect because it is too dissimilar from the data on which the network was trained. When a neural network is surprised, it makes inaccurate predictions, which can only be measured when the true labels of the inputs are known. In this work we define three surprise metrics that make it possible to measure surprise in deep learning without the need for true labels: reconstruction accuracy surprise, hidden activations surprise and reconstruction loss surprise. The proposed metrics make it possible to adapt deep learning models to unexpected changes in the input. This can be used to train sensor systems in an online fashion so that they adapt to surprising situations.
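To make the idea of label-free surprise concrete, the following is a minimal illustrative sketch of one such metric in the spirit of reconstruction loss surprise. It is not the dissertation's actual definition: the linear autoencoder, the z-score calibration, and all names (`reconstruction_loss`, `surprise`) are assumptions introduced here for illustration only.

```python
import numpy as np

# Hypothetical sketch: a "reconstruction loss surprise" style metric using
# a one-component linear autoencoder (fit via SVD, i.e. PCA). An input far
# from the training manifold reconstructs poorly, so its loss, and hence
# its surprise score, is high. No true labels are needed.

rng = np.random.default_rng(0)

# In-distribution training data: 2-D points near the line y = x.
train = rng.normal(size=(500, 1)) @ np.array([[1.0, 1.0]])
train += 0.05 * rng.normal(size=train.shape)

# Fit the principal direction, used as both encoder and decoder weights.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
component = vt[:1]


def reconstruction_loss(x):
    """Mean squared error between x and its autoencoder reconstruction."""
    z = (x - mean) @ component.T        # encode
    recon = z @ component + mean        # decode
    return np.mean((x - recon) ** 2, axis=-1)


# Calibrate on the training data so surprise reads as a z-score:
# values near 0 are expected, large values are surprising.
train_losses = reconstruction_loss(train)
mu, sigma = train_losses.mean(), train_losses.std()


def surprise(x):
    return (reconstruction_loss(x) - mu) / sigma


familiar = np.array([[1.0, 1.0]])     # on the training manifold
unexpected = np.array([[1.0, -1.0]])  # off the training manifold
print(surprise(familiar), surprise(unexpected))
```

In this sketch the surprise score for the off-manifold point is far larger than for the familiar one, which is the kind of signal that could trigger online adaptation without access to true labels.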