While the 1990s saw lively debate in the discourse community over the "right" set of coherence relations for describing text structure, these debates have since largely subsided. The reason, I argue in this talk, is not that the questions have been settled; rather, it probably lies in the availability of large annotated corpora since the early 2000s (RST-DT, PDTB), which shifted interest toward automatic discourse parsing via machine learning. These corpora are among several sources of empirical data now at hand for studying different aspects of coherence relations and their usage. I take this evidence as an incentive to revisit the old question of the "right" set of relations, which is largely, though not exclusively, a matter of deciding on an appropriate level of granularity. To restrict the scope of the study, the field of "Contrast" is chosen as an exemplary target area.