In the field of causality, we want to understand how a system reacts under interventions. Such questions go beyond statistical dependences and therefore cannot be answered by standard regression or classification techniques. In this tutorial, you will be introduced to the problem of causal inference as well as recent developments in the field. We will introduce structural causal models, formalize interventional distributions, define causal effects, and show how to compute them. We will present three ideas that can be used to infer causal structure from data: (1) finding (conditional) independences in the data, (2) restricting structural equation models, and (3) exploiting the fact that causal models remain invariant across different environments. If time allows, we will also show how causal concepts can be used in more classical machine learning problems. No prior knowledge about causality is required. The material is also covered in a recently published book (open access).
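The distinction between observational and interventional distributions can be illustrated with a minimal sketch of a structural causal model. The two-variable SCM below (X := N_X, Y := 2X + N_Y) is a hypothetical example chosen for illustration, not taken from the course material:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # sample size for the simulation

def sample(do_x=None, do_y=None):
    """Sample from the SCM  X := N_X,  Y := 2*X + N_Y.

    Passing do_x or do_y replaces the corresponding structural
    assignment by a constant, i.e. performs an intervention do(.).
    """
    x = np.full(n, float(do_x)) if do_x is not None else rng.normal(size=n)
    y = np.full(n, float(do_y)) if do_y is not None else 2.0 * x + rng.normal(size=n)
    return x, y

# Observational distribution: X and Y are strongly correlated.
x, y = sample()

# Intervening on the cause changes the effect: under do(X = 1),
# the mean of Y shifts to roughly 2.
_, y_do = sample(do_x=1.0)

# Intervening on the effect leaves the cause untouched: under
# do(Y = 1), X still has mean roughly 0.
x_do, _ = sample(do_y=1.0)
```

This asymmetry under interventions is exactly the kind of information that statistical dependence alone cannot reveal, since the observational correlation between X and Y is symmetric.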
The course will offer an introduction to deep learning along with an extensive hands-on practical session in Python. We will cover deep feedforward models, convolutional networks (used mainly in image processing), recurrent neural networks (commonly used in text processing), autoencoders, and word2vec, and will introduce optimization for deep learning. During the hands-on workshop, we will apply deep learning techniques to images and natural-language text.
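The simplest of the models listed above, a deep feedforward network, can be sketched in plain NumPy. The XOR task, the network size, and the hyperparameters below are illustrative assumptions, not material from the course; the hands-on session may well use a dedicated deep learning framework instead:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR function, which no linear model can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer (tanh) and a sigmoid output unit.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass for the binary cross-entropy loss
    # (with a sigmoid output, the output-layer gradient is p - y).
    dp = (p - y) / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * (1.0 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    # Plain gradient-descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Threshold the output probabilities to get class predictions.
pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

The manual backward pass shown here is what automatic differentiation performs for you in modern deep learning libraries; writing it out once makes clear what those frameworks compute.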