Deep Learning Seminar, Winter 2018/19

In recent years, deep neural networks have been used to solve complex machine-learning problems and have achieved state-of-the-art results in many areas. The whole field of deep learning has been developing rapidly, with new methods and techniques emerging steadily.

The goal of the seminar is to follow the newest advancements in the deep learning field. The course takes the form of a reading group – in each lecture, one of the students presents a paper. The paper is announced in advance, so that all participants can read it beforehand and take part in the discussion.

If you want to receive announcements about the chosen papers, sign up to our mailing list ufal-rg@googlegroups.com.

About

SIS code: NPFL117
Semester: winter + summer
E-credits: 3
Examination: 0/2 C
Guarantor: Milan Straka

Timespace Coordinates

The Deep Learning Seminar takes place on Tuesday at 14:00 in S8. We will first meet on Tuesday Oct 09.

Requirements

To pass the course, you need to present a research paper and sufficiently attend the presentations.


To add your name and paper to the table below, edit the source code on GitHub and send a PR.

Date Who Paper(s)
09 Oct 2018 Milan Straka Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, Armand Joulin: Advances in Pre-Training Distributed Word Representations
Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, Tomas Mikolov: Learning Word Vectors for 157 Languages
Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, Russell Power: Semi-supervised sequence tagging with bidirectional language models
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer: Deep contextualized word representations
Alan Akbik, Duncan Blythe, Roland Vollgraf: Contextual String Embeddings for Sequence Labeling
Samuel L. Smith, David H. P. Turban, Steven Hamblin, Nils Y. Hammerla: Offline bilingual word vectors, orthogonal transformations and the inverted softmax
Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou: Word Translation Without Parallel Data
Anders Søgaard, Sebastian Ruder, Ivan Vulić: On the Limitations of Unsupervised Bilingual Dictionary Induction
Mareike Hartmann, Yova Kementchedjhieva, Anders Søgaard: Why is unsupervised alignment of English embeddings from different algorithms so hard?
Mikel Artetxe, Gorka Labaka, Eneko Agirre: A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings
16 Oct 2018 Tomas Soucek Martin Arjovsky, Soumith Chintala, Léon Bottou: Wasserstein GAN
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron Courville: Improved Training of Wasserstein GANs
Zhiming Zhou, Yuxuan Song, Lantao Yu, Hongwei Wang, Zhihua Zhang, Weinan Zhang, Yong Yu: Understanding the Effectiveness of Lipschitz-Continuity in Generative Adversarial Nets
23 Oct 2018
30 Oct 2018
06 Nov 2018 Dean's Sport Day
13 Nov 2018
20 Nov 2018
27 Nov 2018 Ondřej Měkota Thomas Schlegl, Philipp Seeböck, Sebastian M. Waldstein, Ursula Schmidt-Erfurth, Georg Langs: Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery
04 Dec 2018
11 Dec 2018
18 Dec 2018
25 Dec 2018 Christmas Holiday
01 Jan 2019 New Year's Day
08 Jan 2019

You can choose any paper you find interesting, but if you would like some inspiration, you can look at the following list.

Current Deep Learning Papers

Parsing

Neural Machine Translation

Language Modelling

Natural Language Generation

Speech Synthesis

  • Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C. Cobo, Florian Stimberg, Norman Casagrande, Dominik Grewe, Seb Noury, Sander Dieleman, Erich Elsen, Nal Kalchbrenner, Heiga Zen, Alex Graves, Helen King, Tom Walters, Dan Belov, Demis Hassabis: Parallel WaveNet: Fast High-Fidelity Speech Synthesis
  • Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, Rif A. Saurous, Yannis Agiomyrgiannakis, Yonghui Wu: Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions

Image Recognition

Image Enhancement

Image 3D Reconstruction

Training Methods

Activation Functions

Regularization

Network Interpretation

Reinforcement Learning

Explicit Memory

Hyperparameter Optimization

Generative Adversarial Networks

Adversarial Text

Adversarial Speech

Artificial Intelligence