Much effort has been expended on deconstructing deep neural networks, that is, on trying to understand their internal representations of data. For example, understanding what convolutional neural networks compute layer by layer has been the focus of much research. I argue that this effort is largely misplaced. Of far greater importance, in my view, is understanding what functions these networks approximate and how well they approximate them. In this talk, I briefly review the so-called Bayesian interpretation of these highly non-linear functions and then explore how that interpretation might be exploited.