Why do convolutional networks work well for images? What happens in a neural network when it 'learns'? What is machine learning, actually? These are the kinds of questions we should all be asking if we use machine learning, and especially deep neural networks, on a daily basis. The field of deep learning is developing rapidly, with new architectures being invented to tackle ever more challenging problems, and this zoo of neural networks needs a taxonomy.
One way to bring order to the chaos is with a physicist's intuition. Bridges are being built that formalize the link between well-developed areas of physics and neural networks. They let us see the task of extracting information relevant at the macroscopic scale both as a machine learning problem and as a problem the physics community has studied for a long time: describing physical systems at different length scales.
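The physics side of this analogy, describing a system at different length scales, can be illustrated with a minimal block-spin coarse-graining sketch. This is a toy example of my own, not taken from the text: the lattice size, block size, and majority rule are all illustrative choices.

```python
import random

def coarse_grain(lattice, b=2):
    """Block-spin transformation: replace each b x b block of +/-1 spins
    with the sign of its sum (majority rule; ties default to +1).
    Each pass halves the resolution, keeping only large-scale structure."""
    n = len(lattice)
    coarse = []
    for i in range(0, n, b):
        row = []
        for j in range(0, n, b):
            s = sum(lattice[i + di][j + dj]
                    for di in range(b) for dj in range(b))
            row.append(1 if s >= 0 else -1)
        coarse.append(row)
    return coarse

random.seed(0)
n = 8
# A random 8x8 configuration of up/down spins (+1 / -1).
lattice = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(n)]
coarse = coarse_grain(lattice)
print(len(coarse), len(coarse[0]))  # 4 4: half the resolution per axis
```

Repeatedly applying such a transformation discards microscopic detail while preserving macroscopic features, which is the same spirit in which a deep network's successive layers are said to extract increasingly coarse, relevant information.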