Save the Date – Innovation in Interpreting Summit – February 23-25, 2021

Looking forward to talking about “How to be boothmates without sharing a booth” at the Innovation in Interpreting Summit, hosted by our two favourite tech geeks, Josh Goldsmith & Alex Drechsel, aka @techforword.

Registration for free tickets will start soon!

Hope to see you there on 23-25 February 🙂



About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Wishing you Happy Holidays and plenty of time for coffee breaks

The Unfinished Handbook for Remote Simultaneous Interpreters

Together with Angelika Eberhardt and Peter Sand, I have compiled tips and tricks around remote simultaneous interpreting (be it from a hub or from home) that we have been collecting ourselves or that colleagues have shared with us. It is meant as an informal collection of personal experiences that work for some while others may find them completely pointless. The purpose of this document is not to discuss the pros and cons of remote vs. on-site interpreting, or recommend either of them.

This is a work-in-progress document that we will try to constantly update – so feel free to share your best practices, workarounds, and tips in the comments or via email! We will try to keep track of the credits as best we can.

For a fullscreen view, click here.

Natural Language Processing – What’s in it for interpreters?

“Natural Language Processing? Hey, that’s what I do for a living!” That’s what I thought when I heard about the live talk “A Glimpse at the Future of NLP” (big thank you to Julia Böhm for pointing this out to me). As I am always curious about what happens in AI and language processing, I registered right away. And I was not disappointed.

In this conference, Marco Turchi, Head of the Machine Translation group at Fondazione Bruno Kessler, presented recent developments in automatic speech translation. And just to make this clear: This was not about machine interpretation, but about spoken language translation (SLT): spoken language is translated into written language. This text can then be used, e.g., for subtitling. Theoretically, it could then also be passed through TTS (text to speech) in order to deliver spoken interpretation, although this is not the purpose of SLT.

The classic approach to SLT, which has been used over the past decades, is cascading. It consists of two phases: First, the source speech is converted into written text by means of automatic speech recognition (ASR). This text is then passed through a machine translation (MT) system. The downside of this approach is that once the spoken language has been converted into written text, the MT system is ignorant of, e.g., the tone of voice, background sounds (i.e. context information), or the age and gender of the speaker.
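In code, the cascade is simply two systems in sequence. The sketch below is purely illustrative – `asr_transcribe` and `mt_translate` are stand-ins I made up for real ASR and MT engines, not actual APIs:

```python
def asr_transcribe(audio_signal: bytes) -> str:
    """Stage 1: automatic speech recognition turns audio into written text.
    Tone of voice, background sounds and speaker identity are lost here."""
    return "this is a stubbed transcript"  # a real system would decode the audio


def mt_translate(source_text: str) -> str:
    """Stage 2: machine translation sees only the text produced by stage 1."""
    return f"<translation of: {source_text}>"  # a real system would translate


def cascade_slt(audio_signal: bytes) -> str:
    # The MT stage never has access to the original audio - exactly the
    # information loss described above.
    return mt_translate(asr_transcribe(audio_signal))
```

The end-to-end approach discussed next replaces both stages with a single network that consumes the audio directly.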

Now another, rather recent approach relies on using a single neural network to directly translate the input audio signal in one language into text in a different language, without first transcribing it, i.e. converting it into written text. This end-to-end SLT translates directly from the spoken source text and thus has more contextual information available than a transcript provides. The source speech is neither “normalised” while being converted into written text, nor divided into segments that are treated separately from each other. Despite being very new, end-to-end SLT this year already reached quality parity with the 30-year-old cascade approach. But it also has its peculiarities:

As the text is not segmented automatically (or naturally by punctuation, as in written text), the system must learn how to organise the text into meaningful units (similar to, but not necessarily, sentences). I was intrigued to hear that efforts are being made to find the right “ear-voice span”, or décalage, as we human interpreters call it. While a computer does not have this human problem of limited working memory, it still has to decide when to start producing its output – a trade-off between lag and quality. This was the point when I decided I wanted to ask some more questions about this whole SLT subject, and had a video chat with Marco Turchi (thank you, Marco!), just to ask him some more questions that maybe only interpreters find interesting:
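One way this start-speaking decision is formalised in the simultaneous translation literature is a so-called wait-k policy: read k source segments before emitting the first target segment, then alternate between reading and writing. The toy sketch below only shows the resulting read/write schedule – segment counts stand in for real audio and text, a simplification of my own:

```python
def wait_k_actions(n_source: int, k: int) -> list:
    """Action schedule of a toy wait-k policy: read k source segments
    before writing the first target segment, then alternate write/read,
    and flush the remaining writes once the source is exhausted."""
    actions = ["READ"] * min(k, n_source)   # initial ear-voice span of k segments
    remaining_reads = n_source - len(actions)
    writes_done = 0
    while writes_done < n_source:           # one target segment per source segment
        actions.append("WRITE")
        writes_done += 1
        if remaining_reads > 0:
            actions.append("READ")
            remaining_reads -= 1
    return actions


# With k=2, output starts only after two segments have been "heard":
schedule = wait_k_actions(4, 2)
```

A larger k means more context (better quality) but a longer lag – precisely the trade-off described above; real systems learn this decision from data rather than using a fixed k.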

Question: Could an end-to-end NLP system learn from human interpreters what a good ear-voice-span is? Are there other strategies from conference interpreting that machine interpreting systems are taught to deal with difficult situations, like for example chunking, summarising, explaining/commenting, inferencing, changes of sentence order, or complete reshaping of longer passages? (and guessing, haha)? But then I guess a machine won’t necessarily struggle with the same problems humans have, like excessive speed  …

Marco Turchi: Human interpreting data could indeed be very helpful as a training base. But you need to bear in mind that neural systems can’t be taught rules. You don’t just tell them “wait until there is a meaningful chunk of information you can process before you start speaking” like you do with students of conference interpreting. Neural networks, similar to human brains, learn by pattern recognition. This means that they need to be fed with human interpreting data so that they can listen to the work of enough human interpreters in order to “intuitively” figure out what the right ear-voice-span is. These patterns, or strategies, are only implicit and difficult to interpret. So neural networks need to observe a huge amount of examples in order to recognise a pattern, much more than the human brain needs to learn the same thing.

Question: If human training data was used, could you give me an idea of whether or how the learning system would deal with all those human imperfections, like omissions, hesitations, and also mistakes?

Marco Turchi: Of course, human training data would include pauses, hesitations, and errors. But researchers are studying ways of weighing these “errors” in a smart way, so it is a good way forward.

Question: And what happens if the machine is translating a conference on mechanical engineering and someone makes a side remark about yesterday’s football match?

Marco Turchi: Machine translation tends to be literal, not creative. It produces different options, and the problem is to select from them. To a certain extent, machines can be forced to comply with rules: They can be fed preferred terminology or names of persons, or they can be told that a speech is about a certain subject matter, let’s say car engines. Automatic domain adaptation, however, is a topic still being worked on. So it might be a challenge for a computer to recognise an unforeseen change of subject. Although of course, a machine does not forget its knowledge about football just because it is translating a speech about mechanical engineering. However, it lacks the situational awareness of a human interpreter to distinguish between the purposes of different elements of a spoken contribution.

Question: One problem that was mentioned in your online talk: real-life, human training data is simply not available, mainly due to permission and confidentiality issues. How do you go about this problem at the moment?

Marco Turchi: The current approach is to create datasets automatically. For our MuST-C corpus, we have TED talks transcribed and translated by humans. These translations with their spoken source texts are then fed into our neural network for it to learn from. There are other such initiatives, like Facebook’s CoVoST or Europarl-ST.

Question: So when will computers outperform humans? What’s the way forward?

Marco Turchi: Bringing machine interpreting to the same level as humans is not a goal that is practically relevant. It is just not realistic. Machine learning has its limitations. There is a steep learning curve at the beginning, which then flattens at a certain level with increasing specificity. Dialects or accents, for example, will always be difficult for a neural network to learn, as it is difficult to feed it with enough of such data for the system to recognise it as something “worth learning” and not just noise, i.e. irrelevant deviations from normal speech.

The idea of all this research is always to help humans where computers are better. Computers, unlike humans, have no creativity, which is an essential element of interpreting. But they can be better at many other things. The most obvious are recognising numbers and named entities or finding a missing word more quickly. But there will certainly be more tasks computers can fulfil to support interpreters, which we have yet to discover as the technology improves.

Thank you very much, Marco!

After all, I think I prefer being supported by a machine rather than the other way around. The other day, in the booth, I had to read out pre-translated questions and answers provided by the customer. It was only halfway through the first round of questions that my colleague and I realised we were reading out machine translations that had not been post-edited. While some parts were definitely not recognisable as machine translations, others were complete nonsense content-wise (although they still sounded good). So what we did was a new kind of simultaneous on-the-fly post-editing … Well, at least we won’t get bored too soon!

Further reading and testing:

  • (generates subtitles)
  • (transcribes audio and video files)
  • European Live Translator – a current project to provide a solution to transcribe audio input for hearing-impaired listeners in multiple languages

About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Will 3D audio make remote simultaneous interpreting a pleasure?

Now THAT’S what I want: Virtual Reality for my ears!

Apparently, simultaneous interpreters are not the only ones suffering from “Zoom fatigue”: the confusion and strain of being unable to physically locate a speaker with our ears contribute to this condition. But it looks like there is still reason to hope that, acoustically, video conferences will become more of a pleasure and less of an ordeal in the years to come. At least that’s what current discussions around this topic make me hope … This (four-year-old!) video by BBC Click illustrates quite nicely what 3D sound for headphones is about:

Binaural recording is a method of recording sound that aims to give the listener the immersive 3D sensation of actually being in the room with the performers. A dummy head, aka “Kunstkopf”, is used for recording; it has two dummy ears shaped like human ears, with microphones inside them, so as to simulate the perception of the human ear. Sound recorded this way is intended for replay over headphones. And the good thing is: What was originally made for the gaming and movie industry is now also bound to conquer the video conferencing market.

Mathias Johannson, the CEO of Dirac, is rather confident: “In less than a decade, 3D audio over headsets with head-tracking capabilities will allow us to have remote meetings in which you can move about an actual room, having sidebar discussions with one colleague or another as you huddle close or step away.” What is more, Johannson reckons that spatial audio could be made available to videoconference users early in 2021. Dirac wants to offer 3D audio technology to video chat platforms via an off-the-shelf software solution, so no expensive hardware would be required. Google, on the other hand, already advertises its Google Meet hardware for immersive sound. But given what we have learned six months into Covid-19 – it is difficult even to persuade participants to wear a cabled headset (not to mention using an ethernet cable instead of a wifi connection) – I am personally not too optimistic that expensive hardware is the way forward to high-quality remote simultaneous interpretation.

So, will such a software-based solution possibly not only provide a more immersive meeting experience but also deliver decent sound even without remote participants having to use special equipment, i.e. headsets, in their home offices? I asked Prof. Jochen Steffens, who is a sound and video engineer, for his opinion. The answer was rather sobering regarding equipment: For 3D audio, a dry and clean recording is required, which at the moment is not possible using built-in microphones. Equipment made for binaural recording, however, would not really serve the purpose of simultaneous interpreting either, as the room sound would actually be more of a disturbance for interpreting. Binaural recording is rather made for capturing real three-dimensional sound impressions, in concert halls and the like. For video conferencing, rather than headsets, Steffens recommends using unidirectional microphones; he suggests, for example, an inexpensive cardioid large-diaphragm microphone mounted on a table stand. And the good news: If you are too vain to wear a headset in video conferences, with decent sound input being delivered by a good microphone, any odd wireless in-ear earphones can be used for listening, or even the built-in speakers of your laptop, as long as you turn them off while speaking.

But what about the spatial, immersive experience? And how will a spatial distribution of participants happen if, in fact, there is no real room to match the virtual distribution to? As Prof. Steffens explained to me, once you have good quality sound input, people can indeed be mapped into a virtual space, e.g. around a table, rather easily. The next question would be whether, in contrast to the conference participants, we as interpreters would really appreciate such a being-in-the-room experience. While this immersion could indeed allow for more situational awareness, we might prefer to always be acoustically positioned right in front of the person who is speaking instead of having a “round table” experience. After all, speakers are best understood when they are placed in front of you and both ears get an equally strong input (the so-called cocktail party effect of selectively hearing only one voice works best with binaural input). And this would, by the way, nicely match a close front view of the person speaking.
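For what it’s worth, placing a clean mono voice at a chosen position in the stereo field is mathematically simple. The sketch below uses plain constant-power panning purely as an illustration – real binaural rendering works with head-related transfer functions (HRTFs) and head tracking, not with this formula:

```python
import math


def constant_power_pan(sample: float, azimuth_deg: float) -> tuple:
    """Place a mono sample in the stereo field.
    azimuth_deg: -90 = hard left, 0 = centre (speaker right in front),
    +90 = hard right. Constant-power panning keeps perceived loudness
    stable as the source moves."""
    theta = (azimuth_deg + 90) / 180 * (math.pi / 2)  # map -90..+90 to 0..pi/2
    return sample * math.cos(theta), sample * math.sin(theta)


# A speaker placed "right in front" feeds both ears equally strongly:
left, right = constant_power_pan(1.0, 0)
```

Mapping each participant to a fixed azimuth around a virtual table would be one pan call per voice – which also makes clear why an interpreter might simply want every current speaker panned to 0 degrees.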

And then, if ever video conferencing can offer us a useful immersive experience, couldn’t it even end up being more convenient than a “normal” on-site simultaneous interpreting setting? More often than not, we are rather isolated in our booths with no more than a poor view of the meeting room from a faraway/high above/hidden-next-door position. So much so that I am starting to wonder if 3D audio (and video, for that matter) could also be used in on-site conference rooms. According to Prof. Steffens, this would be perfectly feasible by “simply” using sound engineering software.

But then the next question arises: While simultaneous interpreters used to be “the voice in your ear”, they might now be assigned a position in the meeting space … the voice from above, from behind (like in chuchotage), or our voices could even come from where the speaker is sitting who is being interpreted at the moment. Although for this to happen, the speaker’s voice would have to be muted completely, which might not be what we want. Two voices coming from the same position would be hard to process for the brain. So the interpreter’s voice would need to find its own “place in space” after all – suggestions are welcome!

About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.






Conference Interpreters and Their Listening Behaviour—a Guest Article by Lisa Woytowicz

Listening is an integral part of every conference interpreter’s job. It might therefore be surprising that there is hardly any research on conference interpreters’ listening behaviour. Since I did not find too much on the issue, I conducted my own study.

About Listening

Studies on listening behaviour exist. But generally, they are conducted by scholars in the field of psychology or communication studies. According to these experts, listening is a multidimensional construct which consists of behavioural, cognitive, and affective processes.

Every time we listen, we—or rather our brains—process information on several levels: When somebody speaks, we receive (verbal and non-verbal) signals. We identify sounds and put them together. We recognise words, sentences and what they mean. During this process, our short-term memory continuously verifies whether the incoming information corresponds to the information stored in our long-term memory. Besides, it adds new information and establishes new links.

There is evidence that the more we already know about an issue, the faster our short-term memory processes the information. This is not only fascinating; it also is one of the reasons why preparing an interpreting assignment is key.

Listening as a Skill

However, there is a tiny but important step in the listening process which is often ignored or at least underestimated: every listener has an intention, a goal she pursues. Selecting a listening goal is the very first step of the listening process which commonly happens subconsciously. Nevertheless, it is a decision every listener makes. And it determines which of the incoming signals are considered relevant and which will be ignored.

When interpreting simultaneously, conference interpreters are special listeners because they are “double listeners”. They need to listen to the speaker and—at the same time—to themselves. They listen to the information they interpret while also making sure that their rendition makes sense and is grammatically and semantically correct. This kind of listening behaviour might be part of the job description. Nevertheless, it is quite unnatural.

Experts agree that listening is “an identifiable set of skills, attitudes, and abilities [that] can be formulated and taught to improve individual performance” (Worthington & Bodie, 2017, p. 8). And this is brilliant! It means that interpreters can learn to make conscious listening decisions to become better listeners and thus (even) better interpreters.

Different Listening Styles

The Listening Styles Profile (LSP) is a concept to describe listening behaviour. According to the latest version of the LSP, listening styles are listening goals which are triggered by individual predispositions (i.e., they are partially stable) and elements of the listening situation (i.e., they are partially not stable).

There are four different listening styles:

  • Relational listening: a concern with and awareness of the speakers’ feelings and emotions,
  • Analytical listening: focussing on the full message before forming an opinion,
  • Task-oriented listening: a concern with the amount of time spent listening and a desire to interact with focused speakers,
  • Critical listening: a tendency to evaluate and critically assess messages for accuracy and consistency. (Bodie & Worthington, 2017, p. 403)

Data on listening behaviour is collected using self-assessment questionnaires. For my research project, I used the LSP-R8 (Rinke, 2016).

Assessing the Listening Behaviour of Different Professions

I asked representatives of three different professions as well as students enrolled in the respective university courses about their listening behaviour. Using an online questionnaire, I was able to gather data on the listening behaviour of 242 (future) psychologists, teachers, and conference interpreters.

Several t-tests were performed to determine statistically significant differences between the groups mentioned above. If you are into statistics, let me know and I will be happy to give you the details. But for now, let us skip the statistical part and get straight to the results. So, here is what I found:

  • Conference interpreters have a stronger tendency toward Critical listening than the other professionals.
  • Conference interpreters have a weaker tendency toward Relational listening than the other professionals.

To my surprise, there were no statistically significant differences among the student groups. Apparently, future conference interpreters’ listening behaviour does not differ very much from the way future psychologists or future teachers listen.
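For the statistically curious, group comparisons like these are typically independent-samples t-tests on mean style scores. A minimal sketch with invented Likert scores follows – the real study data is not reproduced here, and the use of Welch’s variant is my assumption, not necessarily the test used in the thesis:

```python
from math import sqrt
from statistics import mean, variance


def welch_t(group_a, group_b):
    """Welch's t statistic for two independent samples with possibly
    unequal variances (degrees of freedom and p-value omitted here)."""
    na, nb = len(group_a), len(group_b)
    return (mean(group_a) - mean(group_b)) / sqrt(
        variance(group_a) / na + variance(group_b) / nb)


# Invented Critical-listening scores on a 1-7 scale, for illustration only:
interpreters = [5.8, 6.1, 5.5, 6.3, 5.9, 6.0]
teachers = [4.9, 5.2, 4.7, 5.4, 5.0, 5.1]
t = welch_t(interpreters, teachers)  # positive: interpreters score higher
```

In practice one would of course use a statistics package that also reports the p-value; the point here is only the shape of the comparison.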

Therefore, I concluded that frequently using a certain listening style on the job might lead to applying it in other contexts as well. If you think about it, this is not very far-fetched. The more we use a certain skill, the more we train it and the better we get at it. And when we are good at something, we tend to do it more often. In the end, this cycle might lead to the partial automatisation of a certain listening behaviour.

Remember, interpreters are double listeners who always make sure that their rendition is correct. So, they often apply Critical listening when sitting in the booth. Psychologists and teachers—in their professional contexts—surely use a lot more Relational listening. In the end, psychologists are paid to know how people feel; and teachers regularly need to put themselves into the shoes of their students to meet their needs.


What are these findings good for? Well, competent listeners can flexibly switch between different listening styles, always adapting to new listening contexts. Irrespective of one’s profession, this might be a goal everybody could strive for. At the end of the day, being a good listener is a great asset.

It looks as though conference interpreters should train to use Relational listening more often. They could start thinking about situations in which this listening style (or the others) could come in handy, particularly if Critical listening is more of a hindrance than a help. These might be situations which involve talking to clients, colleagues, family, and friends.

Furthermore, conference interpreters could try to consciously apply different listening styles in the booth. Depending on the speaker, they might grasp more of the relevant information by focussing on her emotions (Relational listening) or on the full message (Analytical listening).

Interpreting trainers could consider establishing listening behaviour as part of the curriculum. Besides, the LSP might help explain certain flaws, such as omissions, contresens, etc., which could be relevant for giving (better) feedback.

Since listening plays such an important role in every conference interpreter’s (professional) life, there are plenty of other conclusions to be drawn. Are you interested in discussing your suggestions? Just send me an e-mail:



Bodie, G. D. & Worthington, D. L. (2017). Profile 36 listening styles profile-revised (LSP-R). In D. L. Worthington & G. D. Bodie (Eds.), The sourcebook of listening research. Methodology and measures (pp. 402–409). Wiley-Blackwell.

Imhof, M. (2010). Zuhören lernen und lehren. Psychologische Grundlagen zur Beschreibung und Förderung von Zuhörkompetenzen in Schule und Unterricht. In M. Imhof & V. Bernius (Eds.), Zuhörkompetenz in Unterricht und Schule. Beiträge aus Wissenschaft und Praxis (pp. 15–30). Vandenhoeck & Ruprecht.

Rinke, E. M. (2016, May 14). A general survey measure of individual listening styles: Short form of the listening styles profile-revised (LSP-R8) [AAPOR Poster Session 3]. Annual Conference of the American Association for Public Opinion Research, Hilton Austin, Austin, TX, United States.

Worthington, D. & Bodie, G. D. (2017). Defining listening. A historical, theoretical, and pragmatic assessment. In D. L. Worthington & G. D. Bodie (Eds.), The sourcebook of listening research. Methodology and measures (pp. 3–17). Wiley-Blackwell.

Woytowicz, L. (2019). Persönlichkeitseigenschaften und Listening Styles von Konferenzdolmetschern im Vergleich zu anderen Berufsgruppen [unpublished master’s thesis]. Johannes Gutenberg University Mainz.

About the author

Lisa Woytowicz is a professional conference interpreter for German, English, and Portuguese, based in Essen (Germany).

Ein Hoch auf den guten Ton beim hybriden #DfD2020 | Good sound and vibes at Interpreters for Interpreters Workshop

+++ for English version see below +++

In einer Mischung aus ESC (“Hello from Berlin”) und Abiprüfung (getrennte Tische) hat am heutigen 18. Juli 2020 der bislang teilnehmerstärkste Dolmetscher-für-Dolmetscher-Workshop als Hybridveranstaltung in Bonn stattgefunden.

169 Dolmetscher*innen waren angemeldet, davon 80 Dolmetscher corona-konform persönlich vor Ort. Dies alles organisiert vom Fortbildungs-Dreamteam der AIIC Deutschland, Inés de Chavarría, Ulla Schneider und Fernanda Vila Kalbermatten, technisch möglich gemacht durch das bewährte Team von PCS.

Lisa Woytowicz hat über ihre Masterarbeit zum Thema Listening Styles referiert. Relational Listening scheint ein unter Dolmetschern unterschätztes Thema zu sein (dazu mehr später auf diesem Kanal).

Monika Ott gibt uns zum Umgang mit Kunden noch einmal als Hausaufgabe auf, uns als Allround-Dienstleister (Conference Interpreting Service Provider laut der noch in Entwicklung befindlichen ISO 23155) zu verstehen und unser Netzwerk zu nutzen, um auf Kompetenzen von Kolleg*innen zugreifen zu können. Denn nur gemeinsam können wir die eierlegende Wollmilchsau sein: RSI-Plattformen, Hubs, Hardware, Terminologiemanagement, Finanzen, Datenschutz.

Dr. Karin Reithofer hat uns das Thema Englisch als Lingua Franca sehr anschaulich nahegebracht. In ihrer Forschungsarbeit hat sie herausgefunden, dass das Verständnis eines gedolmetschten Fachvortrags (monologisches Setting) signifikant besser ist als das Verständnis bei der Kommunikation in nicht-muttersprachlichem Englisch. In Dialogsituationen hingegen kann nicht-muttersprachliches Englisch durchaus funktionieren. Auch interessant: Wenn ein Nicht-Muttersprachler Englisch spricht, fällt es uns leichter, ihn zu verstehen bzw. zu dolmetschen, wenn wir die Muttersprache dieses Redners kennen.

Gemeinsam mit Alex Drechsel aus dem “Studio Brüssel” durfte ich dann im Hybridformat unseren subjektiven Saisonrückblick präsentieren und diskutieren:

Das Thema Hub oder Heimdolmetschen haben wir weitestgehend ausgeklammert. Meine Gedanken aus Prä-Corona-Zeiten nach dem RSI-Seminar im Mai 2019 finden sich im Blogbeitrag zu Hub or Home.

Was mich dabei neben dem Thema Geld aktuell am meisten umtreibt, ist die Frage, was wir tun, wenn der Ton der Wortbeiträge zu schlecht ist, um simultan zu dolmetschen. Hier wünsche ich mir von uns allen ein souveränes und nuanciertes Vorgehen, das über “das kann nicht gedolmetscht werden” hinausgeht.

Ein Vorschlag zum Thema “Was tun bei schlechtem Ton”:

  1. Ein technischer Co-Host oder “Event-Koordinator”, wie PCS es nennt, überwacht die Tonqualität und greift moderierend ein, wenn nicht mehr simultan gedolmetscht wird. Diese Entscheidung sollte zentral durch eine Person für das ganze Dolmetschteam getroffen werden.
  2. Dann bei einigermaßen brauchbarem Ton: auf Konsekutivdolmetschen umstellen.
  3. Wenn keine Tonübertragung mehr möglich: Beiträge aus dem Chat dolmetschen.

Grundsätzlich gut: Während der virtuellen Sitzung einen Hinweis für die Teilnehmer einblenden, dass remote gedolmetscht wird.

Abschließend wurde unser AIIC-Präsident Uroš Peterc zugeschaltet, der mit seiner Bewertung der Lage den DfD würdig abgerundet hat. Er erwartet, dass das sehr diverse Angebot an RSI-Plattformen und Hubs sich sortieren wird. Trotz der Planungsunsicherheit sieht er die AIIC in der Pflicht, nicht nur zu reagieren, sondern zu agieren. Ein besseres Saisonfazit könnte ich nicht formulieren.

Über die Autorin:
Anja Rütten ist freiberufliche Konferenzdolmetscherin für Deutsch (A), Spanisch (B), Englisch (C) und Französisch (C) in Düsseldorf. Sie widmet sich seit Mitte der 1990er dem Wissensmanagement.

+++ English version +++

Disclaimer: This blog post was written by a non-native ELF speaker ;-).

What seemed like a mixture of the Eurovision Song Contest (“Hello from Berlin”) and end-of-term exams (with tables set up far apart) was in fact the first ever hybrid edition of the annual Interpreters for Interpreters Workshop.

On 18 July 2020, 169 interpreters (a record high!) had registered for this IfI workshop in Bonn. 80 of us were there on-site, with strict hygiene measures in place, while the others were connected via Zoom. All this had been organised by a fantastic org team consisting of Inés de Chavarría, Ulla Schneider and Fernanda Vila Kalbermatten, while the technical setup was in the experienced hands of PCS.

The first speaker was Lisa Woytowicz, who presented her master’s thesis on Listening Styles. It looks like Relational Listening isn’t exactly what interpreters are best at … but we will read more about that on this blog soon.

Monika Ott reminded us to be all-round service providers to our customers (CISP, Conference Interpreting Service Providers according to ISO 23155, which is still under development) and use our network to draw upon the expertise of our colleagues. For if we join forces, we can cover the whole range of service and advice needed: RSI platforms, hubs, hardware, terminology management, pricing, data protection etc.

Dr Karin Reithofer told us what we need to know about English as a Lingua Franca (ELF). Her research has shown that in monologue settings, technical presentations are significantly better understood when people speak their mother tongues and use interpretation than when they communicate in non-native English. In dialogues, however, non-native English may well work. What’s also interesting: When interpreting non-native English speakers, it is easier for us to understand them if we know their mother tongues.

Alex Drechsel and I then gave our “hybrid” review of the past – and first ever – virtual interpreting season, me on-site and Alex from “Studio Brussels”:

The hub vs. home discussion was left aside in our review. It has been largely discussed already (see my pre-Covid blog article Hub or Home). The main points that keep my mind busy after this virtual interpreting season are money and sound quality.

As to the latter, I would like to see a nuanced way of dealing with situations where sound quality is not sufficient for simultaneous interpreting. I would like us to be more constructive and go beyond the usual black and white approach, i.e. either keep interpreting or stop interpreting.

My suggestion for a professional way of handling poor sound situations:

  1. Have a technical co-host or “event coordinator”, as PCS puts it, monitor sound quality and intervene as a moderator when the sound is too poor for simultaneous interpretation. The decision of when to stop interpreting should be in the hands of one person for the whole team.
  2. If sound quality allows for it: switch to consecutive interpreting.
  3. If not: Have participants type in the chat and do sight translation.

I also like the idea of displaying a disclaimer to the participants in video conferences, stating that the meeting is being interpreted remotely.

Finally, our AIIC president, Uroš Peterc, joined us via Zoom. His view of the current situation perfectly rounded off the day. He expects the vast offer of RSI platforms and hubs to consolidate over time. In these times of uncertainty, he wants AIIC not only to react to market developments but to be a proactive player. I couldn’t have put it better myself.

About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

You can never have too many screens, can you?

I don’t know about you, but when using my laptop in the booth, I sometimes struggle to squeeze the agenda, list of participants, glossary, dictionary, web browser and meeting documents/presentations onto one screen. Not to mention email, messenger or a shared notepad when working in separate booths in times of COVID-19 … or even the soft console of your RSI provider.

Well, I have more than once found myself wondering if anybody would mind me bringing my 24-inch desktop monitor to the booth to add some screen space to this tiny 12-inch laptop screen – until I finally came across a very useful little freeware application called spacedesk. It lets you use your Android, iOS or Windows tablet as an external monitor to complement your Windows computer quite easily (to all tablet aficionados: unfortunately, it does not work the other way around). You simply install it on your main Windows device, the “server” or “Primary Machine”, and on the tablet as a “client” or “secondary device”. You can then connect both devices via USB, ethernet or WiFi and use your tablet to either extend or duplicate your computer screen, just like you do with any external monitor on your desk.

There is just a tiny delay when moving the mouse (if that’s not due to my low-end tablet’s poor performance), so it might be better to move the more static elements, like the agenda, to it rather than your terminology database, which you might want to handle very swiftly.

So if ever you feel like going back to printing your documents for lack of screen space, bringing your tablet as a screen extension might be a good alternative.

About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.


Was kostet Remote-Dolmetschen und warum? | What does remote interpreting cost, and why?

When does it make sense to meet with interpreters on-site, and when is interpreting over the internet the better choice? Following the AIIC Germany web meeting last Friday (22 May 2020) with the refreshingly frank title “TACHELES – RSI on the German market”, I am happy to share my thoughts on the subject, cast into a self-calculating spreadsheet:

Price comparison: remote vs. on-site interpreting
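The kind of side-by-side calculation such a spreadsheet performs can be sketched as follows. All figures and cost factors below are placeholder assumptions of mine for illustration only – not actual rates or AIIC recommendations:

```python
def compare_costs(fee_per_day, days, interpreters,
                  travel_per_interpreter=0.0,
                  hub_or_platform_per_day=0.0,
                  extra_interpreters_remote=0):
    """Toy comparison: on-site adds travel expenses per interpreter;
    remote may add hub/platform fees and extra interpreters for
    shorter turns. Returns (onsite_total, remote_total)."""
    onsite = interpreters * (fee_per_day * days + travel_per_interpreter)
    remote = ((interpreters + extra_interpreters_remote) * fee_per_day * days
              + hub_or_platform_per_day * days)
    return onsite, remote


# Hypothetical one-day meeting with a team of two (invented numbers):
onsite, remote = compare_costs(fee_per_day=900, days=1, interpreters=2,
                               travel_per_interpreter=400,
                               hub_or_platform_per_day=1500)
```

Which option comes out cheaper depends entirely on the cost factors you plug in – which is exactly why a calculating spreadsheet is useful here.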

To complement the financial considerations, you will find AIIC’s technical recommendations on distance interpreting here.

Questions and suggestions are most welcome, of course!

About the author
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.


Videos on fee calculation for conference interpreters | Videos sobre como calcular honorarios para intérpretes de conferencias

My tutorials on how to use the Time&money calculator, an Excel spreadsheet developed by AIIC Germany’s former profitability working group, finally have English and Spanish subtitles! Comments and questions welcome 🙂

Video on calculating working hours and fees for conference interpreters | Video sobre como calcular horas de trabajo y honorarios para intérpretes de conferencias (11 min):

Video on how to calculate interpreting projects | Video sobre el cálculo de proyectos de interpretación (6 min):

About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.