The Unfinished Handbook for Remote Simultaneous Interpreters

Together with Angelika Eberhardt and Peter Sand, I have put together a collection of tips and tricks around remote simultaneous interpreting (be it from a hub or from home) that we have gathered ourselves or that colleagues have shared with us.

It is a work-in-progress document that we will try to constantly update – so feel free to share your best practices, workarounds, and tips in the comments or via email!


Natural Language Processing – What’s in it for interpreters?

„Natural Language Processing? Hey, that’s what I do for a living!“ That’s what I thought when I heard about the live talk „A Glimpse at the Future of NLP“ (big thank you to Julia Böhm for pointing this out to me). As I am always curious about what happens in AI and language processing, I registered right away. And I was not disappointed.

In this talk, Marco Turchi, Head of the Machine Translation group at Fondazione Bruno Kessler, presented recent developments in automatic speech translation. And just to make this clear: this was not about machine interpretation, but about spoken language translation (SLT): spoken language is translated into written language. This text can then be used, e.g., for subtitling. Theoretically, it could also be passed through TTS (text to speech) in order to deliver spoken interpretation, although this is not the purpose of SLT.

The classic approach to SLT, used over the past decades, is the cascade approach. It consists of two phases: first, the source speech is converted into written text by means of automatic speech recognition (ASR). This text is then passed through a machine translation (MT) system. The downside of this approach is that once the spoken language has been converted into written text, the MT system is ignorant of context information such as tone of voice, background sounds, or the speaker's age and gender.
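
For the technically curious, the cascade logic fits in a few lines of Python. This is only a sketch of the idea: asr_transcribe() and mt_translate() are hypothetical placeholders, not calls to any real ASR or MT library.

    # Sketch of the cascade approach: speech recognition first, then text-based MT.
    # asr_transcribe() and mt_translate() are hypothetical stand-ins for real engines.
    def cascade_slt(audio, source_lang="en", target_lang="de"):
        transcript = asr_transcribe(audio, lang=source_lang)      # speech -> text
        # From here on, tone of voice, background sounds and speaker age/gender are lost;
        # the MT system only ever sees the plain transcript.
        return mt_translate(transcript, src=source_lang, tgt=target_lang)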

Another, rather recent approach relies on a single neural network to directly translate the input audio signal in one language into text in a different language, without first transcribing it, i.e. converting it into written text. This end-to-end SLT translates directly from the spoken source and thus has more contextual information available than a transcript provides. The source speech is neither „normalised“ while being converted into written text, nor divided into segments that are treated separately from each other. Despite being very new, end-to-end SLT has this year already reached parity with the 30-year-old cascade approach in terms of quality. But it also has its peculiarities:

As the text is not segmented automatically (or naturally by punctuation, as in written text), the system must learn how to organise the text into meaningful units (similar to, but not necessarily, sentences). I was intrigued to hear that efforts are being made to find the right „ear-voice-span“, or décalage, as we human interpreters call it. While a computer does not have the human problem of limited working memory, it still has to decide when to start producing its output – a tradeoff between latency and quality. This was the point when I decided I wanted to know more about this whole SLT subject and had a video chat with Marco Turchi (thank you, Marco!) to ask him some questions that maybe only interpreters find interesting:

Question: Could an end-to-end NLP system learn from human interpreters what a good ear-voice-span is? Are there other strategies from conference interpreting that machine interpreting systems are taught in order to deal with difficult situations, like chunking, summarising, explaining/commenting, inferencing, changes of sentence order, or complete reshaping of longer passages (and guessing, haha)? But then I guess a machine won’t necessarily struggle with the same problems humans have, like excessive speed …

Marco Turchi: Human interpreting data could indeed be very helpful as a training base. But you need to bear in mind that neural systems can’t be taught rules. You don’t just tell them „wait until there is a meaningful chunk of information you can process before you start speaking“, like you do with students of conference interpreting. Neural networks, similar to human brains, learn by pattern recognition. This means that they need to be fed with human interpreting data so that they can listen to the work of enough human interpreters in order to „intuitively“ figure out what the right ear-voice-span is. These patterns, or strategies, are only implicit and difficult to interpret. So neural networks need to observe a huge number of examples in order to recognise a pattern, many more than the human brain needs to learn the same thing.
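
To make that contrast a bit more tangible for the technically minded: a hand-written waiting rule can be spelled out explicitly in code. The simultaneous MT literature knows, for example, the so-called wait-k policy: read k source words, then produce one target word for each further source word. Below is a minimal sketch of such a rule, in which translate_next_word() and translation_complete() are hypothetical stand-ins; an end-to-end neural system, by contrast, has to infer this kind of timing implicitly from its training data.

    # Explicit wait-k reading/writing policy (here k = 3), in contrast to a learned one.
    # translate_next_word() and translation_complete() are hypothetical stand-ins.
    def wait_k_interpreting(source_words, k=3):
        target = []
        for i in range(len(source_words)):
            if i >= k - 1:   # start speaking once k source words have been heard
                target.append(translate_next_word(source_words[: i + 1], target))
        while not translation_complete(target, source_words):
            target.append(translate_next_word(source_words, target))   # flush once the speaker stops
        return target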

Question: If human training data were used, could you give me an idea of whether and how the learning system would deal with all those human imperfections, like omissions, hesitations, and also mistakes?

Marco Turchi: Of course, human training data would include pauses, hesitations, and errors. But researchers are studying ways of weighting these „errors“ in a smart way, so it is a promising way forward.

Question: And what happens if the machine is translating a conference on mechanical engineering and someone makes a side remark about yesterday’s football match?

Marco Turchi: Machine translation tends to be literal, not creative. It produces different options, and the problem is to select from them. To a certain extent, machines can be forced to comply with rules: they can be fed preferred terminology or names of persons, or they can be told that a speech is about a certain subject matter, let’s say car engines. Automatic domain adaptation, however, is a topic still being worked on. So it might be a challenge for a computer to recognise an unforeseen change of subject. Although, of course, a machine does not forget its knowledge about football just because it is translating a speech about mechanical engineering. However, it lacks the situational awareness of a human interpreter to distinguish between the purposes of different elements of a spoken contribution.

Question: One problem that was mentioned in your online talk: real-life human training data is simply not available, mainly due to permission and confidentiality issues. How do you deal with this problem at the moment?

Marco Turchi: The current approach is to create datasets automatically. For our MuST-C corpus, we have TED talks transcribed and translated by humans. These translations with their spoken source texts are then fed into our neural network for it to learn from. There are other such initiatives, like Facebook’s CoVoST or Europarl-ST.

Question: So when will computers outperform humans? What’s the way forward?

Marco Turchi: Bringing machine interpreting to the same level as humans is not a goal that is practically relevant. It is just not realistic. Machine learning has its limitations. There is a steep learning curve at the beginning, which then flattens out at a certain level as tasks become more specific. Dialects or accents, for example, will always be difficult for a neural network to learn, as it is difficult to feed it with enough of such data for the system to recognise them as something „worth learning“ and not just noise, i.e. irrelevant deviations from normal speech.

The idea behind all this research is always to help humans where computers are better. Computers, unlike humans, have no creativity, which is an essential element of interpreting. But they can be better at many other things. The most obvious are recognising numbers and named entities or finding a missing word more quickly. But there will certainly be more tasks computers can fulfil to support interpreters, which we have yet to discover as the technology improves.

Thank you very much, Marco!

All in all, I think I prefer being supported by a machine to the other way around. The other day, in the booth, I had to read out pre-translated questions and answers provided by the customer. It was only halfway through the first round of questions that my colleague and I realised that we were reading out machine translations that had not been post-edited. While some parts were definitely not recognisable as machine translations, others were complete nonsense content-wise (although they still sounded good). So what we did was a new kind of simultaneous on-the-fly post-editing … Well, at least we won’t get bored too soon!


Further reading and testing:

beta.matesub.com (generates subtitles)

http://voicedocs.com/transcriber (transcribes audio and video files)

https://elitr.eu/technologies (European live translator – a current project to provide a solution to transcribe audio input for hearing-impaired listeners in multiple languages)

https://towardsdatascience.com/human-learning-vs-machine-learning-dfa8fe421560

http://iwslt.org/doku.php?id=offline_speech_translation

https://www.spektrum.de/news/kuenstliche-intelligenz-der-textgenerator-gpt-3-als-sprachtalent/1756796?utm_source=pocket-newtab-global-de-DE

https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3


About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Will 3D audio make remote simultaneous interpreting a pleasure?

Now THAT’S what I want: Virtual Reality for my ears!

Apparently, simultaneous interpreters are not the only ones suffering from „Zoom fatigue“: the confusion and strain of not being able to physically locate a speaker with our ears is one of the things said to contribute to this condition. But it looks like there is still reason to hope that, acoustically, video conferences will become more of a pleasure and less of an ordeal in the years to come. At least that’s what current discussions around this topic make me hope … This (four-year-old!) video by BBC Click illustrates quite nicely what 3D sound for headphones is about:

https://www.youtube.com/watch?v=51za5u3LtEc&feature=youtu.be

Binaural recording is a method of recording sound that aims to create for the listener the immersive 3D impression of actually being in the room with the performers. A dummy head, aka „Kunstkopf“, is used for recording; it has two artificial ears shaped like human ears, with microphones inside them, so as to simulate human auditory perception. Sound recorded this way is intended for playback over headphones. And the good thing is: what was originally made for the gaming and movie industry is now also bound to conquer the video conferencing market.

Mathias Johansson, the CEO of Dirac, is rather confident: in less than a decade, 3D audio over headsets with head-tracking capabilities will allow us to have remote meetings in which you can move about an actual room, having sidebar discussions with one colleague or another as you huddle close or step away. What is more, Johansson reckons that spatial audio could be made available to videoconference users early in 2021. Dirac wants to offer 3D audio technology to video chat platforms via an off-the-shelf software solution, so no expensive hardware would be required. Google, on the other hand, already advertises its Google Meet hardware for immersive sound. But given what we have learned six months into Covid-19 – that it is difficult even to persuade participants to wear a cabled headset (not to mention using an ethernet cable instead of a wifi connection) – I am personally not too optimistic that expensive hardware is the way forward to high-quality remote simultaneous interpretation.

So, will such a software-based solution possibly not only provide a more immersive meeting experience but also be able to deliver decent sound even without remote participants connected from their home offices having to use special equipment, i.e. headsets? I asked Prof. Jochen Steffens, who is a sound and video engineer, for his opinion. The answer was rather sobering as far as equipment is concerned: for 3D audio, a dry and clean recording is required, which at the moment is not possible using built-in microphones. Equipment made for binaural recording, however, would not really serve the purpose of simultaneous interpreting either, as the room sound would actually be more of a disturbance for interpreting. Binaural recording is rather meant to capture real three-dimensional sound impressions, as in concert halls and the like. For video conferencing, rather than headsets, Steffens recommends using unidirectional microphones; he suggests, for example, an inexpensive cardioid large-diaphragm microphone mounted on a table stand. And the good news: if you are too vain to wear a headset in video conferences, then with decent sound input being delivered by a good microphone, any odd wireless in-ear earphones can be used for listening, or even the built-in speakers of your laptop, as long as you turn them off while speaking.

But what about the spatial, immersive experience? And how will a spatial distribution of participants happen if, in fact, there is no real room to match the virtual distribution to? As Prof. Steffens explained to me, once you have good-quality sound input, people can indeed be mapped into a virtual space, e.g. around a table, rather easily. The next question would be whether, in contrast to the conference participants, we as interpreters would really appreciate such a being-in-the-room experience. While this immersion could indeed allow for more situational awareness, we might prefer to always be acoustically positioned right in front of the person who is speaking instead of having a „round table“ experience. After all, speakers are best understood when they are placed in front of you and both ears get an equally strong input (the so-called cocktail party effect of selectively hearing only one voice works best with binaural input). And this would, by the way, nicely match a close front view of the person speaking.
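
Just to illustrate the principle of such a mapping (and nothing more): once each participant delivers a clean mono signal, a very crude headphone placement can already be simulated with interaural time and level differences. The numpy sketch below is a toy model under those assumptions; real spatial audio engines use full HRTFs, head tracking and room modelling.

    import numpy as np

    def place_voice(mono, sample_rate, azimuth_deg):
        """Crudely place a mono voice at an azimuth (0 = straight ahead, + = right)
        using simple interaural time and level differences. Toy model only."""
        az = np.radians(azimuth_deg)
        itd = 0.09 / 343.0 * np.sin(az)                 # rough interaural time difference (s)
        delay = int(round(abs(itd) * sample_rate))      # delay for the far ear, in samples
        far_gain = 10 ** (-6 * abs(np.sin(az)) / 20)    # up to ~6 dB quieter on the far ear
        near = np.concatenate([mono, np.zeros(delay)])
        far = far_gain * np.concatenate([np.zeros(delay), mono])
        left, right = (far, near) if azimuth_deg >= 0 else (near, far)
        return np.stack([left, right], axis=1)          # stereo buffer for headphone playback

    # e.g. three participants virtually seated at -40, 0 and +40 degrees around a table:
    # stereo = place_voice(participant_signal, 48000, azimuth_deg=-40)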

And then, if video conferencing can ever offer us a useful immersive experience, couldn’t it even end up being more convenient than a „normal“ on-site simultaneous interpreting setting? More often than not, we are rather isolated in our booths with no more than a poor view of the meeting room from a faraway/high above/hidden-next-door position. So much so that I am starting to wonder whether 3D audio (and video, for that matter) could also be used in on-site conference rooms. According to Prof. Steffens, this would be perfectly feasible by „simply“ using sound engineering software.

But then the next question arises: while simultaneous interpreters used to be „the voice in your ear“, they might now be assigned a position in the meeting space … the voice from above, from behind (like in chuchotage), or our voices could even come from where the speaker who is currently being interpreted is sitting. For this to happen, though, the speaker’s voice would have to be muted completely – two voices coming from the same position would be hard for the brain to process – and that might not be what we want. So the interpreter’s voice would need to find its own „place in space“ after all – suggestions are welcome!


About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.


Conference Interpreters and Their Listening Behaviour—a Guest Article by Lisa Woytowicz

Listening is an integral part of every conference interpreter’s job. It might therefore be surprising that there is hardly any research on conference interpreters’ listening behaviour. Since I did not find too much on the issue, I conducted my own study.

About Listening

Studies on listening behaviour exist. But generally, they are conducted by scholars in the field of psychology or communication studies. According to these experts, listening is a multidimensional construct which consists of behavioural, cognitive, and affective processes.

Every time we listen, we—or rather our brains—process information on several levels: When somebody speaks, we receive (verbal and non-verbal) signals. We identify sounds and put them together. We recognise words, sentences and what they mean. During this process, our short-term memory continuously verifies whether the incoming information corresponds to the information stored in our long-term memory. Besides, it adds new information and establishes new links.

There is evidence that the more we already know about an issue, the faster our short-term memory processes the information. This is not only fascinating; it is also one of the reasons why preparing an interpreting assignment is key.

Listening as a Skill

However, there is a tiny but important step in the listening process which is often ignored or at least underestimated: every listener has an intention, a goal she pursues. Selecting a listening goal is the very first step of the listening process which commonly happens subconsciously. Nevertheless, it is a decision every listener makes. And it determines which of the incoming signals are considered relevant and which will be ignored.

When interpreting simultaneously, conference interpreters are special listeners because they are “double listeners”. They need to listen to the speaker and—at the same time—to themselves. They listen to the information they interpret while also making sure that their rendition makes sense and is grammatically and semantically correct. This kind of listening behaviour might be part of the job description. Nevertheless, it is quite unnatural.

Experts agree that listening is “an identifiable set of skills, attitudes, and abilities [that] can be formulated and taught to improve individual performance” (Worthington & Bodie, 2017, p. 8). And this is brilliant! It means that interpreters can learn to make conscious listening decisions to become better listeners and thus (even) better interpreters.

Different Listening Styles

The Listening Styles Profile (LSP) is a concept to describe listening behaviour. According to the latest version of the LSP, listening styles are listening goals which are triggered by individual predispositions (i.e., they are partially stable) and elements of the listening situation (i.e., they are partially not stable).

There are four different listening styles:

  • Relational listening: a concern with and awareness of the speakers’ feelings and emotions,
  • Analytical listening: focussing on the full message before forming an opinion,
  • Task-oriented listening: a concern with the amount of time spent listening and a desire to interact with focused speakers,
  • Critical listening: a tendency to evaluate and critically assess messages for accuracy and consistency. (Bodie & Worthington, 2017, p. 403)

Data on listening behaviour is collected using self-assessment questionnaires. For my research project, I used the LSP-R8 (Rinke, 2016).

Assessing the Listening Behaviour of Different Professions

I asked representatives of three different professions as well as students enrolled in the respective university courses about their listening behaviour. Using an online questionnaire, I was able to gather data on the listening behaviour of 242 (future) psychologists, teachers, and conference interpreters.

Several t-tests were performed to determine statistically significant differences between the groups mentioned above. If you are into statistics, let me know and I will be happy to give you the details. But for now, let us skip the statistical part and get straight to the results. So, here is what I found:

  • Conference interpreters have a stronger tendency toward Critical listening than the other professionals.
  • Conference interpreters have a weaker tendency toward Relational listening than the other professionals.

To my surprise, there were no statistically significant differences among the student groups. Apparently, future conference interpreters’ listening behaviour does not differ very much from the way future psychologists or future teachers listen.
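
(A side note for those who are into statistics after all: comparisons like these are typically run as independent-samples t-tests on the questionnaire scale scores. Below is a minimal sketch with invented numbers, purely for illustration; it is not the actual study data.)

    from scipy import stats

    # Invented Critical-listening scores, for illustration only
    interpreters = [4.2, 4.6, 3.9, 4.8, 4.4, 4.1, 4.7]
    psychologists = [3.1, 3.8, 3.4, 3.0, 3.6, 3.3, 3.5]

    t, p = stats.ttest_ind(interpreters, psychologists, equal_var=False)  # Welch's t-test
    print(f"t = {t:.2f}, p = {p:.3f}")  # a p-value below .05 would count as significant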

Therefore, I concluded that the frequent use of a certain listening style on the job might result in applying it in other contexts as well. If you think about it, this is not very far-fetched. The more we use a certain skill, the more we train it and the better we get at it. And when we are good at something, we tend to do it more often. In the end, this cycle might lead to partially automatising a certain listening behaviour.

Remember, interpreters are double listeners who always make sure that their rendition is correct. So, they often apply Critical listening when sitting in the booth. Psychologists and teachers—in their professional contexts—surely use a lot more Relational listening. In the end, psychologists are paid to know how people feel; and teachers regularly need to put themselves into the shoes of their students to meet their needs.

Conclusions

What are these findings good for? Well, competent listeners can flexibly switch between different listening styles, always adapting to new listening contexts. Irrespective of one’s profession, this might be a goal everybody could strive for. At the end of the day, being a good listener is a great asset.

It looks as though conference interpreters should train to use Relational listening more often. They could start thinking about situations in which this listening style (or the others) could come in handy, particularly if Critical listening is more of a hindrance than a help. These might be situations which involve talking to clients, colleagues, family, and friends.

Furthermore, conference interpreters could try to consciously apply different listening styles in the booth. Depending on the speaker, they might grasp more of the relevant information by focussing on her emotions (Relational listening) or on the full message (Analytical listening).

Interpreting trainers could consider establishing listening behaviour as part of the curriculum. Besides, the LSP might help explain certain flaws, such as omissions, contresens, etc., which could be relevant for giving (better) feedback.

Since listening plays such an important role in every conference interpreter’s (professional) life, there are plenty of other conclusions to be drawn. Are you interested in discussing your suggestions? Just send me an e-mail: info@lw-dolmetschen.de

 

References

Bodie, G. D. & Worthington, D. L. (2017). Profile 36 listening styles profile-revised (LSP-R). In D. L. Worthington & G. D. Bodie (Eds.), The sourcebook of listening research. Methodology and measures (pp. 402–409). Wiley-Blackwell.

Imhof, M. (2010). Zuhören lernen und lehren. Psychologische Grundlagen zur Beschreibung und Förderung von Zuhörkompetenzen in Schule und Unterricht. In M. Imhof & V. Bernius (Eds.), Zuhörkompetenz in Unterricht und Schule. Beiträge aus Wissenschaft und Praxis (pp. 15–30). Vandenhoeck & Ruprecht.

Rinke, E. M. (2016, May 14). A general survey measure of individual listening styles: Short form of the listening styles profile-revised (LSP-R8) [AAPOR Poster Session 3]. Annual Conference of the American Association for Public Opinion Research, Hilton Austin, Austin, TX, United States.

Worthington, D. & Bodie, G. D. (2017). Defining listening. A historical, theoretical, and pragmatic assessment. In D. L. Worthington & G. D. Bodie (Eds.), The sourcebook of listening research. Methodology and measures (pp. 3–17). Wiley-Blackwell.

Woytowicz, L. (2019). Persönlichkeitseigenschaften und Listening Styles von Konferenzdolmetschern im Vergleich zu anderen Berufsgruppen [unpublished master’s thesis]. Johannes Gutenberg University Mainz.


About the author

Lisa Woytowicz is a professional conference interpreter for German, English, and Portuguese, based in Essen (Germany).

www.lw-dolmetschen.de

Ein Hoch auf den guten Ton beim hybriden #DfD2020 | Good sound and vibes at Interpreters for Interpreters Workshop

+++ for English version see below +++

In einer Mischung aus ESC („Hello from Berlin“) und Abiprüfung (getrennte Tische) hat am heutigen 18. Juli 2020 der bislang teilnehmerstärkste Dolmetscher-für-Dolmetscher-Workshop als Hybridveranstaltung in Bonn stattgefunden.

169 Dolmetscher*innen waren angemeldet, davon 80 Dolmetscher corona-konform persönlich vor Ort. Dies alles organisiert vom Fortbildungs-Dreamteam der AIIC Deutschland, Inés de Chavarría, Ulla Schneider und Fernanda Vila Kalbermatten, technisch möglich gemacht durch das bewährte Team von PCS.

Lisa Woytowicz hat über ihre Masterarbeit zum Thema Listening Styles referiert. Relational Listening scheint ein unter Dolmetschern unterschätztes Thema zu sein (dazu mehr später auf diesem Kanal).

Monika Ott gab uns zum Umgang mit Kunden noch einmal als Hausaufgabe auf, uns als Allround-Dienstleister (Conference Interpreting Service Provider laut der noch in Entwicklung befindlichen ISO 23155) zu verstehen und unser Netzwerk zu nutzen, um auf Kompetenzen von Kolleg*innen zugreifen zu können. Denn nur gemeinsam können wir die eierlegende Wollmilchsau sein: RSI-Plattformen, Hubs, Hardware, Terminologiemanagement, Finanzen, Datenschutz.

Dr. Karin Reithofer hat uns das Thema Englisch als Lingua Franca sehr anschaulich nahegebracht. In ihrer Forschungsarbeit hat sie herausgefunden, dass das Verständnis eines gedolmetschten Fachvortrags (monologisches Setting) signifikant besser ist als das Verständnis bei der Kommunikation in nicht-muttersprachlichem Englisch. In Dialogsituationen hingegen kann nicht-muttersprachliches Englisch durchaus funktionieren. Auch interessant: Wenn ein Nicht-Muttersprachler Englisch spricht, fällt es uns leichter, ihn zu verstehen bzw. zu dolmetschen, wenn wir die Muttersprache dieses Redners kennen.

Gemeinsam mit Alex Drechsel aus dem „Studio Brüssel“ durfte ich dann im Hybridformat unseren subjektiven Saisonrückblick präsentieren und diskutieren:

Das Thema Hub oder Heimdolmetschen haben wir weitestgehend ausgeklammert. Meine Gedanken aus Prä-Corona-Zeiten nach dem RSI-Seminar im Mai 2019 finden sich im Blogbeitrag zu Hub or Home.

Was mich dabei neben dem Thema Geld aktuell am meisten umtreibt, ist die Frage, was wir tun, wenn der Ton der Wortbeiträge zu schlecht ist, um simultan zu dolmetschen. Hier wünsche ich mir von uns allen ein souveränes und nuanciertes Vorgehen, das über „das kann nicht gedolmetscht werden“ hinausgeht.

Ein Vorschlag zum Thema „Was tun bei schlechtem Ton“:

  1. Ein technischer Co-Host oder „Event-Koordinator“, wie PCS es nennt, überwacht die Tonqualität und greift moderierend ein, wenn nicht mehr simultan gedolmetscht wird. Diese Entscheidung sollte zentral durch eine Person für das ganze Dolmetschteam getroffen werden.
  2. Dann bei einigermaßen brauchbarem Ton: auf Konsekutivdolmetschen umstellen.
  3. Wenn keine Tonübertragung mehr möglich: Beiträge aus dem Chat dolmetschen.

Grundsätzlich gut: Während der virtuellen Sitzung einen Hinweis für die Teilnehmer einblenden, dass remote gedolmetscht wird.

Abschließend wurde unser AIIC-Präsident Uroš Peterc zugeschaltet, der mit seiner Bewertung der Lage den DfD würdig abgerundet hat. Er erwartet, dass das sehr diverse Angebot an RSI-Plattformen und Hubs sich sortieren wird. Trotz der Planungsunsicherheit sieht er die AIIC in der Pflicht, nicht nur zu reagieren, sondern zu agieren. Ein besseres Saisonfazit könnte ich nicht formulieren.

————————
Über die Autorin:
Anja Rütten ist freiberufliche Konferenzdolmetscherin für Deutsch (A), Spanisch (B), Englisch (C) und Französisch (C) in Düsseldorf. Sie widmet sich seit Mitte der 1990er dem Wissensmanagement.


+++ English version +++

Disclaimer: This blog post was written by a non-native ELF speaker ;-).

What seemed like a mixture of the Eurovision Song Contest („Hello from Berlin“) and end-of-term exams (with tables set up far apart) was in fact the first ever hybrid edition of the annual Interpreters for Interpreters Workshop.

On 18 July 2020, 169 interpreters (a record high!) had registered for this IfI workshop in Bonn. 80 of us were there on-site, with strict hygiene measures in place, while the others were connected via Zoom. All this had been organised by a fantastic org team consisting of Inés de Chavarría, Ulla Schneider and Fernanda Vila Kalbermatten, while the technical setup was in the experienced hands of PCS.

The first speaker was Lisa Woytowicz, who presented her master’s thesis about Listening Styles. It looks like Relational Listening isn’t exactly what interpreters are best at … but we will read more about it on this blog soon.

Monika Ott reminded us to be all-round service providers to our customers (CISPs, Conference Interpreting Service Providers according to ISO 23155, which is still under development) and to use our network to draw upon the expertise of our colleagues. For if we join forces, we can cover the whole range of services and advice needed: RSI platforms, hubs, hardware, terminology management, pricing, data protection etc.

Dr Karin Reithofer told us what we need to know about English as a Lingua Franca (ELF). Her research has shown that in monologue settings, technical presentations are significantly better understood when people speak their mother tongues and are interpreted than when they communicate in non-native English. In dialogue situations, however, non-native („ELF“) English may work quite well. What’s also interesting: when interpreting non-native English speakers, it is easier for us to understand them if we know their mother tongues.

Alex Drechsel and I then gave our „hybrid“ review of the past – and first ever – virtual interpreting season, me on-site and Alex from „Studio Brussels“:

The hub vs. home discussion was left aside in our review. It has already been discussed at length (see my pre-Covid blog article Hub or Home). The main points that keep my mind busy after this virtual interpreting season are money and sound quality.

As to the latter, I would like to see a nuanced way of dealing with situations where sound quality is not sufficient for simultaneous interpreting. I would like us to be more constructive and go beyond the usual black and white approach, i.e. either keep interpreting or stop interpreting.

My suggestion for a professional way of handling poor sound situations:

  1. Have a technical co-host or „event coordinator“, as PCS calls it, monitor sound quality and intervene as a moderator when the sound is too poor for simultaneous interpretation. The decision of when to stop interpreting should be in the hands of one person for the whole team.
  2. If sound quality allows for it: switch to consecutive interpreting.
  3. If not: Have participants type in the chat and do sight translation.

I also like the idea of displaying a disclaimer to the meeting participants in videoconferences, stating that the meeting is being interpreted remotely.

Finally, our AIIC president, Uroš Peterc, joined us via Zoom. His view of the current situation perfectly rounded off the day. He expects the vast offer of RSI platforms and hubs to consolidate over time. In these times of uncertainty, he wants AIIC not only to react to market developments but to be a proactive player. I couldn’t have put it better myself.


About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

You can never have too many screens, can you?

I don’t know about you, but when using my laptop in the booth, I sometimes struggle to squeeze the agenda, list of participants, glossary, dictionary, web browser and meeting documents/presentations onto one screen. Not to mention email, messenger or a shared notepad when working in separate booths in times of COVID-19 … Or even the soft console of your RSI provider?

Well, I have more than once found myself wondering if anybody would mind me bringing my 24-inch desktop monitor to the booth to add some screen space to this tiny 12-inch laptop screen – until, finally, I came across a very useful little freeware application called spacedesk. It lets you use your Android, iOS or Windows tablet as an external monitor to complement your Windows computer quite easily (to all tablet aficionados: unfortunately, it does not work the other way around). You simply install it on your main Windows device, the „server“ or „Primary Machine“, and on the tablet as a „client“ or „secondary device“. You can then connect the two devices via USB, ethernet or WiFi and use your tablet to either extend or duplicate your computer screen, just like you do with any external monitor on your desk.

There is just a tiny delay when moving the mouse (if that’s not due to my low-end tablet’s poor performance), so it might be better to move the more static elements, like the agenda, to the tablet rather than your terminology database, which you might want to handle very swiftly.

So if ever you feel like going back to printing your documents for lack of screen space, bringing your tablet as a screen extension might be a good alternative.


About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

 

What does remote interpreting cost, and why?

When does it make sense to meet with interpreters on-site, and when is interpreting over the internet the better choice? After our AIIC Germany web meeting last Friday (22 May 2020), refreshingly entitled „TACHELES – RSI auf dem deutschen Markt“, I am happy to share my thoughts on the subject, cast into a self-calculating spreadsheet:

Preisvergleich Remotedolmetschen und Präsenzdolmetschen (price comparison of remote and on-site interpreting)

To complement these financial considerations, you will find AIIC’s technical recommendations on distance interpreting here.

Questions and suggestions are most welcome, of course!


About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

 

Videos on fee calculation for conference interpreters | Videos sobre como calcular honorarios para intérpretes de conferencias

My tutorials on how to use the Time&money calculator, an Excel spreadsheet developed by AIIC Germany’s former profitability working group, finally have English and Spanish subtitles! Comments and questions welcome 🙂
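
For those who prefer reading logic to watching spreadsheets: at its core, this kind of calculation adds up all the hours a project really costs you (interpreting, preparation, travel) and relates them to the total fee. The sketch below is a heavily simplified illustration with invented figures and parameter names; the actual Time&money calculator is more fine-grained.

    # Heavily simplified illustration of a project calculation; all figures are invented.
    def project_summary(interpreting_days, day_rate, prep_hours, travel_hours,
                        hours_per_conference_day=10, travel_weight=0.5):
        fee = interpreting_days * day_rate
        hours_invested = (interpreting_days * hours_per_conference_day
                          + prep_hours
                          + travel_hours * travel_weight)   # travel often counted at a reduced weight
        return fee, fee / hours_invested                    # total fee and effective hourly rate

    fee, hourly = project_summary(interpreting_days=2, day_rate=850,
                                  prep_hours=6, travel_hours=5)
    print(f"Total fee: {fee:.0f} EUR, effective hourly rate: {hourly:.0f} EUR/h")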


Video on calculating working hours and fees for conference interpreters | Video sobre como calcular horas de trabajo y honorarios para intérpretes de conferencias (11 min):


Video on how to calculate interpreting projects | Video sobre el cálculo de proyectos de interpretación (6 min):


About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Simultaneous interpreting in the time of coronavirus – Boothmates behind glass walls

Yesterday was one of the rare occasions when conference interpreters were still brought to the client’s premises for a multilingual meeting. Participants from abroad were connected via a web meeting platform, while the few people who were on-site anyway sat at tables two metres apart from each other. But what about the interpreters, who usually share a booth of hardly 2 x 2 m, and who are not exactly known for their habit of social distancing in the first place? Well, PCS, the client’s conference technology provider of choice, came up with a simple yet effective solution: they just split up the teams and gave us one booth each. So there we were, my colleague Inés de Chavarría and I, spreading our stuff in our private booths, separated by no more than a window.

Separate booths

Now, apart from having to bring our own food (no catering available), by the time we met on the morning of the meeting, we had already figured out what would probably be the main challenges of being boothmates separated by a glass wall:

1. How do we agree on when to take turns?

2. How do we help each other by writing down numbers, names and difficult words?

3. How do we tell each other that we want coffee/are completely knackered/need to go to the loo, complain about the sound/accent/temperature/chairman’s haircut or ask how the kids are?

Luckily, after an exciting day, we felt that we had found great solutions to all our communicative needs:

1. Taking over: although the colleague who was not working couldn’t listen to the original and the interpretation at the same time, she could tell quite reliably from gestures and eye contact when to take over. So, no countdown or egg timer needed, as long as you can see each other.

2. Helping out – these were the options we tried:

Writing things down with pen and paper and showing them through the window: rather slow and hard to read due to reflections on the booth windows. The same goes for typing on the computer and looking at the screen through the window.

Scribbling in a shared file in Microsoft Whiteboard (great), OneNote (OK) or Google Drawings (a bit slow and imprecise): fine as long as all parties involved have a touchscreen and a decent pen. Sometimes hard to read, depending on the quality of pen, screen and handwriting.

Typing in a shared file like Google Sheets or Docs: this was our method of choice. The things we typed appeared on the other’s screen in real time, and it was perfectly legible, in contrast to some people’s handwriting. A perfect solution as long as there is a decent wifi or mobile data connection. And although I am usually of the opinion that there is no such thing as a decent spreadsheet, in this case a plain word-processing document has one clear advantage: when you type in Google Docs, each character appears on your colleague’s screen practically in real time, whereas when you type in a cell of a Google Sheet, your colleague won’t see anything until you „leave“ that cell and jump to the next one.

3. The usual chitchat:

WhatsApp, or rather WhatsApp Web, was the first thing we all spontaneously resorted to for staying in contact with a glass wall between us. But it quickly turned out to be rather distracting, with all sorts of private messages popping up.

Luckily, all Google documents come with a chat function included, so we had both our meeting-related information exchange and our personal logistics neatly displayed next to each other in the same browser window.

If we had worked with many different documents that needed to be managed while interpreting, I would have liked to try Microsoft Teams. With its chat function and shared documents, among other features, it seems very promising as a shared booth platform. But their registration service was down due to overload anyway, so that’s for next time.

So, all in all, a very special experience, and rather encouraging thanks to the many positive contributions from all people involved. And the bottom line, after having to accommodate on my laptop screen the booth chat and notes next to the usual glossary, online resources, agenda and meeting documents: My next panic purchase will be a portable touchscreen in order to double my screen space in the booth.


About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.


How to Make CAI Tools Work for You – a Guest Article by Bianca Prandi

After conducting research and providing training on Computer-Assisted Interpreting (CAI) for the past six years, I feel quite confident in affirming that there are three indisputable truths about CAI tools: they can potentially provide a lot of advantages, they can do more harm than good if not used strategically, and most interpreters know very little about them.

The surveys conducted so far on interpreters’ terminological strategies[1] have shown that only a small percentage of interpreters have integrated CAI tools into their workflow. Most interpreters still prefer “traditional” solutions such as Word or Excel tables to organize their terminology. There can be many reasons for this. Some may have already developed reliable systems and processes and don’t see the point in reinventing the wheel. Others believe the cons outweigh the pros when it comes to these tools and are yet to find a truly convincing alternative to their current solutions. Others may simply never have heard of Flashterm, InterpretBank or Interpreter’s Help before.

Even though a lot remains to be investigated empirically, the studies conducted so far have highlighted both advantages and disadvantages in the use of CAI tools. On the positive side, CAI tools can support effective preparation through automatic term extraction and in-built concordancers[2] (Xu 2015). They seem to contribute to higher terminological accuracy than paper glossaries[3] or even Excel tables[4] when used to look up terms in the booth. They help interpreters organize, reuse and share their resources, rationalize and speed up their preparation process, make the most of preparation documents, work efficiently on the go and go paperless if desired. On the negative side, they are often perceived as potentially distracting and less flexible than traditional solutions. When working with CAI tools, we might run the risk of relying too much on the tool, during both the preparation phase and interpretation proper[5].
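
To give a concrete idea of what “automatic term extraction” means in its simplest form: candidate terms can be found by comparing how often a word occurs in your meeting documents with how often it occurs in everyday language, and keeping the outliers. The toy sketch below illustrates only this basic idea; it is not how InterpretBank, Intragloss or any other specific CAI tool works internally.

    import re
    from collections import Counter

    def extract_candidate_terms(document_text, reference_text, top_n=20):
        """Very naive single-word term extraction: rank words that are
        disproportionately frequent in the document compared to a reference text."""
        tokenize = lambda text: re.findall(r"[a-zäöüß]+", text.lower())
        doc = Counter(tokenize(document_text))
        ref = Counter(tokenize(reference_text))
        doc_total = max(sum(doc.values()), 1)
        ref_total = max(sum(ref.values()), 1)
        # relative frequency in the document vs. (smoothed) relative frequency in the reference
        score = {w: (doc[w] / doc_total) / ((ref[w] + 1) / ref_total)
                 for w in doc if len(w) > 3}
        return sorted(score, key=score.get, reverse=True)[:top_n]

    # e.g.: extract_candidate_terms(open("agenda_and_presentations.txt").read(),
    #                               open("general_news_corpus.txt").read())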

I would argue that, if used strategically, the pros easily outweigh the cons. Just as with any tool and new technology, it all comes down to how you use them. Whether you are still sceptical, already CAI-curious, or a technology enthusiast, here are three tips on how to make CAI tools work for you.

  1. Take time to test your tools

Most tools offer a free demo to test out their functionalities. I know we are all busy, but you can use downtimes to work on improving your processes, just as you would (should!) do to work on your CPD and marketing strategy. I suggest you do the following:

  • Choose one of your recent assignments, something you had to do research on because the topic was unfamiliar to you.
  • Set aside 1-2 hours a day, or even just 30 minutes, to simulate preparing for the assignment again.
  • Set yourself a clear goal for each phase of your workflow (glossary creation, terminology extraction, memorization, debriefing).
  • Build your baseline: dedicate one session to assessing your current approach. Then dedicate each of the following sessions to testing out a different tool.
  • For a systematic comparison, keep track of the time needed for each activity, the pros and cons of each tool, your preferences and anything you found irritating.

You can conduct this analysis and selection process over a week or even a month if you are very busy. Once you have identified what might work for you, keep using those tools! Maybe test them out on a real assignment for a client you already know, where the risk of mishaps is lower.

  2. There is no perfect tool

Unless you can write code and develop your own tool, chances are there will always be something you don’t like about a tool, or some functions you deem essential might be missing. But given the advantages that come from working with these solutions, it is definitely worth trying to find out whether there is a tool that satisfies even just 50% of your interpreting needs. It may not seem much, but that’s already 50% of your workflow that you can optimize.

Once you get a feeling for what each tool can do for you, you might find out that there are some options you love that aren’t available in your tool of choice. My suggestion: mix and match. Most CAI tools are built modularly and allow users to only work with a specific function. For instance, I love Intragloss’ terminology extraction module, so I use that tool to work with documents, but I use InterpretBank for everything else. In a word: experiment and be creative!

  3. Tools can’t do the work for you

If you’re passionate about technology, you will agree that CAI tools are quite cool. However, we should never forget that they are tools and, as such, they only fulfil their function as long as we use them purposefully. Think before you use them and always make sure you follow a strategic course of action.

If you have the feeling you have never been as ill-prepared as when you worked with a CAI tool, here are some questions you can ask yourself:

  • Am I sure this is the right tool for me? Have I taken enough time to test it out?
  • Did I have a clear goal when I started preparing for my assignment? Or was I simply trying to cram together as many terms as possible?
  • Am I aware of my learning preferences? If I’m an auditory learner, does it make sense to use a flashcard method to study the terminology?
  • Did I include in my glossary just any term that came up in my documents? Or did I start from the relevant terminology I found to further explore the topic?

As with many things in life, reflection and a structured, strategic approach can really go a long way. For busy interpreters needing some guidance, InterpreMY is preparing a course series that will help you use CAI tools effectively to optimize all phases of your workflow and avoid potential pitfalls. Get in touch at info@interpremy.com!


[1] See for instance: Zielinski, Daniel and Yamile Ramírez-Safar (2006). Onlineumfrage zu Terminologieextraktions- und Terminologieverwaltungstools. Wunsch und Wirklichkeit noch weit auseinander. MDÜ; and Corpas Pastor, Gloria and Lily May Fern (2016). A Survey of Interpreters’ Needs and Practices Related to Language Technology.

[2] See Xu, Ran (2015). Terminology Preparation for Simultaneous Interpreters. University of Leeds.

[3] Biagini, Giulio (2015). Glossario cartaceo e glossario elettronico durante l’interpretazione simultanea: uno studio comparativo. Università degli studi di Trieste.

[4] Prandi, Bianca (2018). An exploratory study on CAI tools in simultaneous interpreting: Theoretical framework and stimulus validation. In Claudio Fantinuoli (ed.), Interpreting and technology, 29–59. Berlin: Language Science Press.

[5] Prandi, Bianca (2015). The Use of CAI Tools in Interpreters’ Training: A Pilot Study. 37th Conference Translating and the Computer, 48–57.


About the author:

Bianca Prandi

bianca@interpremy.com

  • Conference Interpreter IT-EN-DE, MA Interpreting (University of Bologna/Forlì), based in Mannheim (Germany), www.biancaprandi.com;
  • PhD candidate – University of Mainz/Germersheim. Research topic: impact of computer-assisted interpreting tools on terminological quality and cognitive processes in simultaneous interpreting;
  • CAI trainer and co-founder of InterpreMY – my interpreting academy: online academy for interpreters with goal-centered, research-based courses, www.interpremy.com (coming soon: July 2020).

Publications:

  • Prandi, B. (2015). L’uso di InterpretBank nella didattica dell’interpretazione: uno studio esplorativo. Università di Bologna/Forlì.
  • Prandi, B. (2015). The Use of CAI Tools in Interpreters’ Training: A Pilot Study. 37th Conference Translating and the Computer, 48–57. London.
  • Prandi, B. (2017). Designing a Multimethod Study on the Use of CAI Tools during Simultaneous Interpreting. 39th Conference Translating and the Computer, 76–88. London: AsLing.
  • Prandi, B. (2018). An exploratory study on CAI tools in Simultaneous Interpreting: theoretical framework and stimulus validation. In C. Fantinuoli (Ed.), Interpreting and technology, 28–59.
  • Fantinuoli, C., & Prandi, B. (2018). Teaching information and communication technologies: a proposal for the interpreting classroom. Trans-Kom, 11(2), 162–182.
  • Prandi, B. (forthcoming). CAI tools in interpreter training: where are we now and where do we go from here? InTRAlinea.