Just in case you missed out on last week’s Innovation In Interpreting Summit by @adrechsel and @Goldsmith_Josh, aka Techforword, here comes my short and personal recap.
The good news is: If you want to watch all the panels and loads of useful video contributions around how technology can support you in the booth, setting up your own interpreting studio, sim-consec, digital notetaking, remote simultaneous interpreting, remote teaching and many more, it’s not too late! You can still buy the power pack (including access to all videos and lots of bonus material) until midnight on 3 March 2021 here.
This is my video contribution on How to be boothmates without sharing a booth (in German, with English subtitles by my dear colleague Leonie Wagener). It is about digitalising – instead of just digitising – collaboration between interpreters.
Most of what’s in this video has also been – at least briefly – mentioned in our collaborative Unfinished Handbook For Remote Simultaneous Interpreters. If you feel there is something missing, please drop me a line!
I also had the honour to moderate a panel on New Frontiers in Interpreting Technology. My four wonderful panellists were Bart Defrancq, Bianca Prandi, Jorn Rijckaert, and Oliver Pouliot. There was interpretation from spoken English into International Sign Language and vice versa provided by Helsa Borinstein and Romy O’Callaghan, and we even had live captions in English by Norma MacHaye. Even without the inspiring discussion, I could have just watched the sign language interpreting and live captioning for ages. But then the discussion as such wasn’t too bad either 🙂
Looking back on the last 25 years, it seems to me that around every five years some innovative technology comes along and changes our professional lives in a way that makes us ask “how could we ever …?”
1995 – … write translations on a typewriter?
2000 – … do translations without Google/electronic dictionaries/translation memories?
2005 – … travel and run a business with no mobile internet/ phone?
2010 – … live without linguee?
2015 – … survive without your laptop/tablet in the booth?
2020 – … prepare technical conferences on your own? Live without Zoom?
So I asked my panellist colleagues what they thought the next big thing would be in 2025. For Bart, and also for Bianca, it is definitely ASR (Automatic Speech Recognition) that is going to help create a new kind of artificial boothmate, displaying difficult elements like numbers, acronyms, and named entities in real time. Bianca also thinks that the majority of interpreter colleagues will finally embrace computers as a valuable support in the booth. Oliver made me a bit envious when he said that as a sign language interpreter, for a very long time he just brought his physical self to the booth, with no technical support whatsoever (not even pen and paper I suppose). He and Jorn mentioned sign language avatars as a new technology in sign language interpreting. Jorn also explained how ASR could be a good way for deaf sign language interpreters to be able to interpret from spoken language into sign language with automatic live captions being their intermediate language.
We then discussed skills. Are there any skills, like knowing how to read a map or remembering phone numbers for our kids, that will become obsolete for conference interpreters? Will we even still memorise key terminology before each conference in the future?
There was general agreement that interpreters shouldn’t become “lazy” and rely on a virtual boothmate to spit out any terminology needed in real time. We should rather develop strategies for how best to use CAI tools in the booth and in preparation, as Bianca put it. So predicting a “virtual boothmate’s” errors might be a decisive skill in the future. After all, the strengths and weaknesses of humans and machines are quite different and should be used so that they complement each other, as Bart explained. Jorn gave us a very interesting account of how sign language interpreters, due to COVID-19, started to do their own recordings and video editing at home instead of relying on a cameraman.
My final question was twofold: What do you wish had never been invented (like built-in laptop microphones), and which piece of hardware (e.g. a rollable 34-inch screen which I can bring to the booth) or software (for me: fully functional abstracting/automatic mind-mapping) is top on your wishlist?
Oliver explained how video auto-focus is a real nightmare for sign language interpreters, something I had never thought about before. It tends never to get the focus right, what with sign language interpreters constantly moving and gesturing. Just like Jorn, he saw perfect ASR as a real opportunity in sign language interpreting. Bart referred to the downsides of data sharing in the remote simultaneous interpreting industry. He saw speaker control as a promising feature of the future: instead of waving at the speaker to slow down, the system will simply slow down the speech electronically as soon as certain threshold values are reached – very promising indeed! Bianca’s wishes were the nicest ones: computers serving as a “second brain” in the booth, and – most importantly – being able to see our boothmates on RSI platforms. I couldn’t have thought of any better concluding remarks!
About the author
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C), and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.