Deep Learning Seminar, Winter 2018/19

In recent years, deep neural networks have been used to solve complex machine-learning problems and have achieved state-of-the-art results in many areas. The whole field of deep learning has been developing rapidly, with new methods and techniques emerging steadily.

The goal of the seminar is to follow the newest advancements in the deep learning field. The course takes the form of a reading group – at each lecture, a paper is presented by one of the students. The paper is announced in advance, so all participants can read it beforehand and take part in the discussion.

If you want to receive announcements about the chosen papers, sign up to our mailing list ufal-rg@googlegroups.com.

About

SIS code: NPFL117
Semester: winter + summer
E-credits: 3
Examination: 0/2 C
Guarantor: Milan Straka

Timespace Coordinates

The Deep Learning Seminar takes place on Tuesday at 14:00 in S8. We will first meet on Tuesday Oct 09.

Requirements

To pass the course, you need to present a research paper and sufficiently attend the presentations.


To add your name and paper to the table below, edit the source code on GitHub and send a PR.

Date – Presenter – Paper(s)

09 Oct 2018 – Milan Straka
  • Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, Armand Joulin: Advances in Pre-Training Distributed Word Representations
  • Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, Tomas Mikolov: Learning Word Vectors for 157 Languages
  • Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, Russell Power: Semi-supervised sequence tagging with bidirectional language models
  • Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer: Deep contextualized word representations
  • Alan Akbik, Duncan Blythe, Roland Vollgraf: Contextual String Embeddings for Sequence Labeling
  • Samuel L. Smith, David H. P. Turban, Steven Hamblin, Nils Y. Hammerla: Offline bilingual word vectors, orthogonal transformations and the inverted softmax
  • Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou: Word Translation Without Parallel Data
  • Anders Søgaard, Sebastian Ruder, Ivan Vulić: On the Limitations of Unsupervised Bilingual Dictionary Induction
  • Mareike Hartmann, Yova Kementchedjhieva, Anders Søgaard: Why is unsupervised alignment of English embeddings from different algorithms so hard?
  • Mikel Artetxe, Gorka Labaka, Eneko Agirre: A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings

16 Oct 2018 – Tomas Soucek
  • Martin Arjovsky, Soumith Chintala, Léon Bottou: Wasserstein GAN
  • Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron Courville: Improved Training of Wasserstein GANs
  • Zhiming Zhou, Yuxuan Song, Lantao Yu, Hongwei Wang, Zhihua Zhang, Weinan Zhang, Yong Yu: Understanding the Effectiveness of Lipschitz-Continuity in Generative Adversarial Nets

23 Oct 2018 – Marek Černý
  • Nikolaus Mayer, Eddy Ilg, Philip Häusser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, Thomas Brox: A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation
  • Clément Godard, Oisin Mac Aodha, Gabriel J. Brostow: Unsupervised Monocular Depth Estimation with Left-Right Consistency
  • Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe: Unsupervised Learning of Depth and Ego-Motion from Video
  • Reza Mahjourian, Martin Wicke, Anelia Angelova: Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints
  • Andrea Pilzer, Dan Xu, Mihai Marian Puscas, Elisa Ricci, Nicu Sebe: Unsupervised Adversarial Depth Estimation using Cycled Generative Networks
  • Sudeep Pillai, Rares Ambrus, Adrien Gaidon: SuperDepth: Self-Supervised, Super-Resolved Monocular Depth Estimation
  • Richard Chen, Faisal Mahmood, Alan Yuille, Nicholas J. Durr: Rethinking Monocular Depth Estimation with Adversarial Training

30 Oct 2018 – No seminar

06 Nov 2018 – Dean's Sport Day

13 Nov 2018 – Petr Laitoch
  • Timothy Dozat, Christopher D. Manning: Deep Biaffine Attention for Neural Dependency Parsing
  • Michael Ringgaard, Rahul Gupta, Fernando C. N. Pereira: SLING: A framework for frame semantic parsing

20 Nov 2018 – Eric Lief
  • Dušan Variš, Natalia Klyueva: Improving a Neural-based Tagger for Multiword Expression Identification
  • Regina Stodden, Behrang QasemiZadeh, Laura Kallmeyer: TRAPACC and TRAPACC_S at PARSEME Shared Task 2018: Neural Transition Tagging of Verbal Multiword Expressions

27 Nov 2018 – Ondřej Měkota
  • Thomas Schlegl, Philipp Seeböck, Sebastian M. Waldstein, Ursula Schmidt-Erfurth, Georg Langs: Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery

04 Dec 2018 – Martin Víta
  • Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, Antoine Bordes: Supervised Learning of Universal Sentence Representations from Natural Language Inference Data
  • Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  • Qianlong Du, Chengqing Zong, Keh-Yih Su: Adopting the Word-Pair-Dependency-Triplets with Individual Comparison for Natural Language Inference

11 Dec 2018 – Miroslav Krabec
  • Survey of 3D classification methods

18 Dec 2018 – Karry Hořeňovská
  • Robin Jia, Percy Liang: Adversarial Examples for Evaluating Reading Comprehension Systems

25 Dec 2018 – Christmas Holiday

01 Jan 2019 – New Year's Day

08 Jan 2019 – Petr Houška

You can choose any paper you find interesting, but if you would like some inspiration, you can look at the following list.

Current Deep Learning Papers

Parsing

Neural Machine Translation

Language Modelling

Natural Language Generation

Speech Synthesis

  • Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C. Cobo, Florian Stimberg, Norman Casagrande, Dominik Grewe, Seb Noury, Sander Dieleman, Erich Elsen, Nal Kalchbrenner, Heiga Zen, Alex Graves, Helen King, Tom Walters, Dan Belov, Demis Hassabis: Parallel WaveNet: Fast High-Fidelity Speech Synthesis
  • Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, Rif A. Saurous, Yannis Agiomyrgiannakis, Yonghui Wu: Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions

Image Recognition

Image Enhancement

Image 3D Reconstruction

Training Methods

Activation Functions

Regularization

Network Interpretation

Reinforcement Learning

Explicit Memory

Hyperparameter Optimization

Generative Adversarial Networks

Adversarial Text

Adversarial Speech

Artificial Intelligence