One minute summary
Say “I don’t know” and “it depends” with confidence
(Get to) know possible use cases of Automatic Speech Translation
(Make students) understand AI
Let’s do meaningful empirical research
Barely a month and a half into the new year, a lot has already been said about AI in conference interpreting, after last week’s Interpreting Europe conference and, of course, the AIIC Assembly in Lima in January. What I have taken home from Brussels are, admittedly, not a lot of hard facts, but all the more lovely memories as well as plans and directions for my future work.
My one big political takeaway: “I don’t know” and “it depends” are great statements.
Times and technologies are unpredictable, and I sometimes feel that I know less than ever, although, or precisely because, I have more knowledge than ever at my fingertips (not even to mention fake news, alternative facts and the like). That’s why I think saying “I don’t know” should become more fashionable these days, as an expression of a humble, nuanced, even scientific mindset. For as much as I enjoyed the Oxford-style debate at Interpreting Europe, I feel that we are wasting our energy on these dichotomies. I really don’t want to decide whether human interpreters/lawyers/journalists/nurses will be replaced by machines one day, which political party is the best to vote for, or whether eating meat is a sin. I am all for a positive I-don’t-know-ism. (And by the way, humans, as opposed to AI, let you know when they are not sure about what they are saying.) After all, the least that can be said about the relation between AI and interpreting is that “it’s complicated” and that there are no straightforward answers. And as complicated questions require sophisticated answers, “it depends” seems to be an equally valid thing to say. It is not for nothing that the AI landscape we have drawn up at AIIC’s AI workstream to get our heads around the topic can only be printed in tablecloth format.
In practice, Automatic Speech Translation (AST) is a reality. It can indeed be a “good enough” or “better than nothing” solution in some scenarios, especially if communication is not too interactive and spontaneous, the meeting is not especially “high-stakes” or “high-risk”, and there are no confidentiality or data protection issues. We as conference interpreters need to be able to make sound recommendations here, and I warmly recommend the AST decision tree we have created at AIIC’s AI workstream as user-friendly guidance! This decision aid might also help in the constructive dialogue between AST providers and potential users/clients that is urgently needed, as Claudio Fantinuoli put it in Brussels.
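Just to illustrate the kind of reasoning such a decision aid walks you through, here is a minimal sketch of my own: the criteria, their order and the function name are purely illustrative simplifications of the factors mentioned above, not the actual AIIC decision tree.

```python
# Minimal sketch of an AST suitability check - NOT the official AIIC decision tree.
# Criteria, order and wording are an illustrative simplification of the factors
# mentioned above (interactivity, stakes/risk, confidentiality).

def ast_recommendation(interactive: bool, high_stakes: bool,
                       confidential: bool, humans_available: bool) -> str:
    if confidential:
        return "Do not use AST until confidentiality and data protection are clarified."
    if high_stakes:
        return "High-stakes setting: use qualified human interpreters."
    if interactive:
        return "Spontaneous, interactive communication: prefer human interpreters."
    if not humans_available:
        return "AST may be a 'better than nothing' option - make its limits clear to users."
    return "Either may work: weigh quality needs, budget and user acceptance."


if __name__ == "__main__":
    # Example: a low-stakes, non-interactive briefing with no interpreters at hand
    print(ast_recommendation(interactive=False, high_stakes=False,
                             confidential=False, humans_available=False))
```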
Another question that gets a clear “it depends” from me is whether automatic speech translation will democratise multilingual communication. It might, because it is accessible to more people, in more languages and in more situations. But it could also increase inequality, because some situations/people/languages are deemed “high-stakes” and will therefore be awarded human interpretation, while others are left to their own devices, quite literally, with AST and no control over its reliability (or with just as much English as they happen to speak).
Final thought on the political side of things: I find it very reassuring to hear that the European Commission, European Parliament, Court of Justice and also the Council of Europe all have some kind of body or structure in place to discuss and study AI, and that interpreters are involved everywhere.
My academic takeaways (not new but still very important)
I would like to see an AI literacy in interpreting model, following the example of LT-Lider. Without some basic technical expertise, it will be difficult to judge the implications of AI for interpreting or to be actively involved in the development of tools that support interpreters in their workflow. Whether qualified interpreters will be needed to interpret themselves or to monitor the output of AST, we need to train them well!
Speaking of having AI output monitored or vetted by professionals (the famous human in the loop): what clearly distinguishes interpreters from translators or other professions like lawyers or journalists is real-time delivery. In simultaneous interpreting there is no such thing as post-editing. Real-time, simultaneous vetting of AST output is quite a challenge, to say the least. Quite apart from the additional layer of cognitive load, it is hard to imagine how you would correct words that have already been spoken. Unless, that is, you check the target text in writing and only have it spoken by the machine once a human in the loop has ticked the OK box for each sentence. But that would create an extreme time lag and be cognitively even more demanding than “simple” human simultaneous interpreting, so nothing would be gained in the first place.
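To make that workflow concrete, here is a hypothetical sketch of such a sentence-by-sentence vetting loop; all helper functions are placeholders I made up for illustration, not any real AST product’s API.

```python
# Hypothetical sketch of sentence-by-sentence vetting of AST output.
# All helpers are stand-ins; no real speech translation API is used here.

def translate(sentence: str) -> str:
    # Stand-in for the machine's speech translation of one sentence.
    return f"[machine rendering of: {sentence}]"

def human_vet(source: str, draft: str) -> str:
    # Stand-in for the human in the loop reading the draft and ticking the OK box
    # (or editing it). In reality, this is where the extreme time lag accumulates.
    return draft

def speak(text: str) -> None:
    # Stand-in for speech synthesis; the sentence is only voiced after approval.
    print(text)

def vetted_pipeline(sentences: list[str]) -> None:
    for sentence in sentences:
        draft = translate(sentence)
        approved = human_vet(sentence, draft)
        speak(approved)

if __name__ == "__main__":
    vetted_pipeline(["Good morning.", "Let us turn to the first item on the agenda."])
```

Even in this toy version, nothing is spoken until a human has signed off on it, which is exactly why the approach defeats the purpose of simultaneous delivery.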
We need empirical research. That’s what Sally Bailey-Ravet, Head Interpreter at the Council of Europe, pointed out when she shared her insights from their in-house testing of AST. Time lag, segmentation, sentence structure, logical coherence and errors in meaning are still problems. Output is sometimes incomprehensible, but what is even worse: it sounds all right if you don’t listen properly, even when the message is completely different or misleading.
Speaking of research: interestingly, if you listened carefully, both Franz Pöchhacker, guru of interpreting studies, and Jürgen Schmidhuber, playing the sci-fi adept during the Oxford-style debate at Interpreting Europe, were pretty much on the same page, acknowledging that interpreting is a situated and embodied activity. For AI to really reach human parity (if at all), it would take what Schmidhuber describes as humanoid agents equipped with sensors, which could learn to act and interact in the real world, like humans, through experience.
All in all, exchanges of expertise and experience such as the one in Brussels show how different the use cases are, be it for AST or for CAI support: different levels of interactivity, language combinations, topics, requirements for confidentiality, data and IP protection, risk of failure and so on. Empirical research thus needs to be equally nuanced to be meaningful in practice. There is a lot of research to be done, and I am very much looking forward to it!
————–
About the author:
Anja Rütten has specialised in tech, information and terminology management since the mid-1990s. She holds a professorship in interpreting studies and Computer-Aided Interpreting at the Cologne University of Applied Sciences.
——————-
Disclaimer:
Views or opinions expressed are solely my own and do not express the views or opinions of my employer.