Cross-lingual transfer of NLP tools is motivated by the fact that sufficient training data and high-performance supervised NLP tools are available for perhaps only 1% of the world's languages; the remaining 99% are under-resourced and therefore difficult to process automatically. Cross-lingual transfer methods seek effective ways of exploiting supervised training data from resource-rich languages, combined with various automatic transfer techniques, to process under-resourced languages (e.g., by translating the training data with a machine translation system).
In my talk, I will review my research on cross-lingual transfer of dependency parsers (and, to some extent, taggers), covering both positive and negative results. My work on this problem has involved several subproblems, which I will also address in the talk: annotation harmonization (Universal Dependencies and related efforts), subword units (including morphological segmentation), word alignment (Giza++ and other tools), and machine translation systems (Moses and others).