Deep Learning Seminar, Summer 2017/18

In recent years, deep neural networks have been used to solve complex machine-learning problems, achieving state-of-the-art results in many areas. The field of deep learning is developing rapidly, with new methods and techniques emerging steadily.

The goal of the seminar is to follow the newest advancements in the deep learning field. The course takes the form of a reading group – at each lecture, one of the students presents a paper. The paper is announced in advance, so all participants can read it beforehand and take part in the discussion.

  • Deep Learning – A course introducing deep neural networks, from the basics to the latest advances, focusing on both theory and practical aspects.


In the summer semester of 2017/18, the Deep Learning Seminar takes place on Tuesdays at 14:00 in S1. The first meeting is on Tuesday, Feb 27.

If you want to receive announcements about the chosen papers, sign up for our mailing list. To add your name and paper to the table below, edit the source code on GitHub and send a pull request.

Date        | Who             | Paper(s)
27 Feb 2018 | Milan Straka    | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou: Word Translation Without Parallel Data
06 Mar 2018 | Martin Popel    | Michal Rolínek, Georg Martius: L4: Practical loss-based stepsize adaptation for deep learning
13 Mar 2018 | Jan Hajič       | Matthias Dorfer, Jan Schlüter, Andreu Vall, Filip Korzeniowski, Gerhard Widmer: End-to-End Cross-Modality Retrieval with CCA Projection and Pairwise Ranking Loss (accepted, not yet published – I cannot post the text publicly, so if you want to read it before the seminar, let me know)
20 Mar 2018 | Tomas Soucek    | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le: Learning Transferable Architectures for Scalable Image Recognition
27 Mar 2018 | Petr Bělohlávek | TBA
03 Apr 2018 | Petr Houška     |
10 Apr 2018 |                 |
17 Apr 2018 |                 |
24 Apr 2018 |                 |
01 May 2018 | No DL Seminar   | Holiday – May Day
08 May 2018 | No DL Seminar   | Holiday – Victory Day
15 May 2018 |                 |
22 May 2018 | Karel Ha        | CapsuleGAN: Generative Adversarial Capsule Network, or Thinking Fast and Slow with Deep Learning and Tree Search, or Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, or Emergent Complexity via Multi-Agent Competition, or Neural Architecture Search with Reinforcement Learning (in descending order of preference)

Papers for Inspiration

You can choose any paper you find interesting. If you would like some inspiration, a list of interesting papers will appear here soon.

Current Deep Learning Papers


Neural Machine Translation

Language Modelling


Natural Language Generation

Speech Synthesis

  • Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C. Cobo, Florian Stimberg, Norman Casagrande, Dominik Grewe, Seb Noury, Sander Dieleman, Erich Elsen, Nal Kalchbrenner, Heiga Zen, Alex Graves, Helen King, Tom Walters, Dan Belov, Demis Hassabis: Parallel WaveNet: Fast High-Fidelity Speech Synthesis
  • Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, Rif A. Saurous, Yannis Agiomyrgiannakis, Yonghui Wu: Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions

Image Classification

Image Recognition

Image Enhancement

Image 3D Reconstruction

Training Methods

Activation Functions


Network Architectures

Network Interpretation

Reinforcement Learning

Explicit Memory

Hyperparameter Optimization

Generative Adversarial Networks

Adversarial Images

  • Tom B. Brown, Dandelion Mané, Aurko Roy, Martín Abadi, Justin Gilmer: Adversarial Patch

Adversarial Text

Adversarial Speech

Artificial Intelligence