Keynotes

Language Understanding and Misunderstanding, Michael Roth, Tuesday 13.9., 9:00-10:00

Abstract: When we use language, we usually assume that the meaning of our statements is clear and that others can understand precisely this meaning. However, this is not always the case, as demonstrated, for example, by vague statements in politics and by humor based on wordplay. Statements can also be understood differently without anyone intending it. Such cases are commonly referred to as “ambiguities” and the result, when at least one understood meaning does not match the intended one, as a “misunderstanding”. The potential for ambiguities and misunderstandings raises the question of to what extent computational models of language should be capable of preventing, for example, users of voice assistants from being misunderstood or texts from being mistranslated.

In this talk, I will present a series of recent studies on the automatic detection of potential sources of misunderstanding in instructional texts. I will argue that instructional texts are, by virtue of their function, particularly well suited to this task, and I will show to what extent potential sources of misunderstanding can be found through the revision history of such texts. Finally, I will discuss current results and findings, which may provide an outlook on how to account for misunderstandings in future NLP models.

Bio: Michael Roth is an independent research group leader in the DFG Emmy Noether program. He studied computational linguistics at Saarland University and received his PhD from Heidelberg University in 2014. He then worked as a postdoc in Stuttgart, Edinburgh, Urbana-Champaign, and Saarbrücken, where he conducted research on models of lexical and role-based semantics, implicit meaning, and script knowledge. His current group is based at the University of Stuttgart and focuses on modeling sources of misunderstanding in complex instructional texts. Roth has co-organized a number of workshops on semantics and commonsense knowledge, is a regular area chair at *ACL conferences, and recently received two best paper awards for work in his current research group (at EACL-SRW 2021 and SemEval 2022).

Generation of Subjective Language: Chances and Risks, Henning Wachsmuth, Wednesday 14.9., 9:00-10:00

Abstract: Research on natural language generation has made tremendous advances in recent years, due to powerful neural language models such as BART, T5, and GPT-3. While generation technologies have been studied extensively for fact-oriented applications such as machine translation and customer service chatbots, they are increasingly also being employed to create and modify subjective language – from the encoding of human beliefs in newly produced text to the debiasing of corpora and the transfer of subjective style characteristics of human-written texts. This brings up the question of whether there are generation tasks that we should refrain from doing research on, due to the ethical issues they may entail. In this talk, I will give an overview of recent research on the generation of subjective language and present selected approaches in detail, covering the areas of computational argumentation, media framing, and social bias mitigation. On this basis, I will discuss both the chances for humans and society emerging from such generation technologies and the ethical risks that come with their application. The interaction of chances and risks defines a red line that, I argue, should not be crossed without important reasons.

Bio: Henning Wachsmuth leads the Computational Social Science Group at Paderborn University. After receiving his PhD from Paderborn University in 2015, he worked as a postdoc at Bauhaus-Universität Weimar before returning to Paderborn as a junior professor in 2018. His group studies how people’s intentions and views are reflected in language and how machines can understand and imitate this with natural language processing methods. Henning’s main research interests include computational argumentation, the mitigation of social bias and media bias, and the construction of human-like explanations for educational and explainable NLP.

In Other Words: Models and Evaluation for Text Style Transfer, Malvina Nissim, Thursday 15.9., 9:00-10:00

Abstract: Whenever we write about something, we make a choice (consciously or not) on how we do it. For example, I can write about a series I watched while I was COVID-bound at home like this: ‘I viewed it and I believe it is a high quality program.’ but also like this: ‘I’ve watched it and it is AWESOME!!!!’. The content is (approximately) the same, but the style I’ve used is different: informal in the second formulation, much more formal in the first one. In the larger field of Natural Language Generation, text style transfer is, broadly put, the task of converting a text of one style (for example informal) into another (for example formal) while preserving its content. How can models be best trained for this task? What can be expected of a system performing text style transfer? And what does it mean to do it well, especially given the broad range of rewriting possibilities? In this talk I will present various strategies to model the task of style transfer under different conditions, and I will discuss insights from both human and automatic evaluations. Chiefly, through the analysis of both modelling and evaluation and through engagement with the audience, I will also reflect on the nature, the definition, and the future of the task itself.

Bio: Malvina Nissim holds a Chair in Computational Linguistics and Society at the University of Groningen, The Netherlands. Her research interests span several aspects of automatic text analysis and generation, and she has recently been focusing on writing style, more specifically on how the same content can be (re)written in different ways. She is the author of 100+ publications in international venues, is a member of the main associations in the field, annually reviews for the major conferences and journals, organises and/or (co-)chairs large-scale scientific events, and is a member of the recently established Ethics Committee of ACL. She is also interested in science dissemination, including the popularisation of NLP among younger students and the general public through events organised (mainly in Italy) at large science festivals. In this spirit, she recently co-authored (with Ludovica Pannitto) an introductory book on Computational Linguistics (“Che cos’è la linguistica computazionale”, Carocci Editore, 2022) for undergraduate programmes at Italian universities (and for anyone who’s curious about the topic).
She graduated in Linguistics from the University of Pisa, and obtained her PhD in Linguistics from the University of Pavia. Before joining the University of Groningen, she was a tenured researcher at the University of Bologna (2006-2014), and a post-doc at the Institute for Cognitive Science and Technology in Rome (2006) and at the University of Edinburgh (2001-2005). In 2017, she was elected as the 2016 University of Groningen Lecturer of the Year.