In recent years, research into deep neural networks has led to significant advancements in many fields, ranging from NLP and computer vision to playing games such as Chess and Go. Even though deep neural networks were originally inspired by biological neurons, there are still many differences between deep nets and their biological counterparts.
In this talk, we focus on three potential weaknesses of neural network training: generalization, catastrophic forgetting, and knowledge composition. We demonstrate how deep neural networks struggle with these phenomena, even though the corresponding abilities are crucial to learning in their biological counterparts. We also discuss current approaches to addressing these issues.
***The talk will be streamed via Zoom. For details on how to join the Zoom meeting, please write to sevcikova et ufal.mff.cuni.cz***