How to be boothmates without sharing a booth – My impressions from the #Innov1nt Summit 2021

Just in case you missed out on last week’s Innovation In Interpreting Summit by @adrechsel and @Goldsmith_Josh, aka Techforword, here comes my short and personal recap.

The good news is: if you want to watch all the panels and loads of useful video contributions – on how technology can support you in the booth, setting up your own interpreting studio, sim-consec, digital notetaking, remote simultaneous interpreting, remote teaching and many more – it’s not too late! You can still buy the power pack (including access to all videos and lots of bonus material) until midnight on 3 March 2021 here.

This is my video contribution on How to be boothmates without sharing a booth (in German, with English subtitles by my dear colleague Leonie Wagener). It is about digitalising – instead of just digitising – collaboration between interpreters.

Most of what’s in this video has also been – at least briefly – mentioned in our collaborative Unfinished Handbook For Remote Simultaneous Interpreters. If you feel there is something missing, please drop me a line!

I also had the honour to moderate a panel on New Frontiers in Interpreting Technology. My four wonderful panellists were Bart Defrancq, Bianca Prandi, Jorn Rijckaert, and Oliver Pouliot. There was interpretation from spoken English into International Sign Language and vice versa provided by Helsa Borinstein and Romy O’Callaghan, and we even had live captions in English by Norma MacHaye. Even without the inspiring discussion, I could have just watched the sign language interpreting and live captioning for ages. But then the discussion as such wasn’t too bad either 🙂

Looking back on the last 25 years, it seems to me that roughly every five years some innovative technology comes along and changes our professional lives in a way that leaves us asking “how could we ever …?”

1995 – … write translations on a typewriter?
2000 – … do translations without Google/electronic dictionaries/translation memories?
2005 – … travel and run a business with no mobile internet/ phone?
2010 – … live without Linguee?
2015 – … survive without your laptop/tablet in the booth?
2020 – … prepare technical conferences on your own? Live without Zoom?

So I asked my panellist colleagues what they thought the next big thing would be in 2025. For Bart, and also for Bianca, it is definitely ASR (Automatic Speech Recognition) that is going to help create a new kind of artificial boothmate, displaying difficult elements like numbers, acronyms, and named entities in real time. Bianca also thinks that the majority of interpreter colleagues will finally embrace computers as a valuable support in the booth. Oliver made me a bit envious when he said that as a sign language interpreter, for a very long time he just brought his physical self to the booth, with no technical support whatsoever (not even pen and paper I suppose). He and Jorn mentioned sign language avatars as a new technology in sign language interpreting. Jorn also explained how ASR could be a good way for deaf sign language interpreters to be able to interpret from spoken language into sign language with automatic live captions being their intermediate language.

We then discussed skills. Are there any skills – like knowing how to read a map or remembering our kids’ phone numbers – that will become obsolete for conference interpreters? Will we even still memorise key terminology before each conference in the future?

There was general agreement that interpreters shouldn’t become “lazy” and rely on a virtual boothmate to spit out any terminology needed in real time. Rather, as Bianca put it, we should develop strategies for how best to use CAI tools in the booth and in preparation. So predicting a “virtual boothmate’s” errors might be a decisive skill in the future. After all, the strengths and weaknesses of humans and machines are quite different and should be used so that they complement each other, as Bart explained. Jorn gave us a very interesting account of how sign language interpreters, because of COVID-19, started to do their own recordings and video editing at home instead of relying on a cameraman.

My final question was twofold: What do you wish had never been invented (like built-in laptop microphones), and which piece of hardware (e.g. a rollable 34-inch screen which I can bring to the booth) or software (for me: fully functional abstracting/automatic mind-mapping) is top on your wishlist?

Oliver explained how video auto-focus was a real nightmare for sign language interpreters, something I had never thought about before. It tends to never get the focus right, what with sign language interpreters constantly moving and gesturing. Just like Jorn, he saw perfect ASR as a real opportunity in sign language interpreting. Bart referred to the downsides of data sharing in the remote simultaneous interpreting industry. He saw speaker control as a promising feature of the future so that instead of waving at the speaker to slow down, the system will simply slow down the speech electronically as soon as certain threshold values are reached – very promising indeed! Bianca’s wishes were the nicest ones: computers serving as a “second brain” in the booth, and – most importantly – being able to see our boothmates on RSI platforms. I couldn’t have thought of any better concluding remarks!
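Bart’s speaker-control idea – electronically slowing the speech down once certain threshold values are reached – could, in its very simplest form, look something like this. This is a purely hypothetical rule for illustration, not a feature of any existing RSI platform; the function name and threshold values are made up:

```python
def playback_rate(words_per_minute, threshold_wpm=160, min_rate=0.85):
    """Hypothetical speaker-control rule: once the measured speech
    rate exceeds a threshold, slow playback proportionally, but never
    below min_rate (so the audio still sounds natural)."""
    if words_per_minute <= threshold_wpm:
        return 1.0  # speaker is within limits, play back unchanged
    return max(threshold_wpm / words_per_minute, min_rate)

print(playback_rate(150))  # speaker is fine -> 1.0
print(playback_rate(250))  # far too fast -> capped at 0.85
```

No more waving at the speaker: the system itself would decide when to intervene, based on a measured rate rather than the interpreter’s pain threshold.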

About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C), and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.


Save the Date – Innovation in Interpreting Summit – February 23-25, 2021

Looking forward to talking about How to be boothmates without sharing a booth at the Innovation in Interpreting Summit, hosted by our two favourite tech geeks, Josh Goldsmith & Alex Drechsel, aka @techforword.

Registration for free tickets will start soon!

Hope to see you there on 23-25 February 🙂




Natural Language Processing – What’s in it for interpreters?

“Natural Language Processing? Hey, that’s what I do for a living!” That’s what I thought when I heard about the live talk “A Glimpse at the Future of NLP” (big thank you to Julia Böhm for pointing this out to me). As I am always curious about what happens in AI and language processing, I registered right away. And I was not disappointed.

In this talk, Marco Turchi, Head of the Machine Translation group at Fondazione Bruno Kessler, presented recent developments in automatic speech translation. And just to make this clear: this was not about machine interpretation, but about spoken language translation (SLT): spoken language is translated into written text. This text can then be used, e.g., for subtitling. Theoretically, it could also be passed through TTS (text-to-speech) to deliver spoken interpretation, although this is not the purpose of SLT.

The classic approach to SLT, which has been used in the past decades, is cascading. It consists of two phases: first, the source speech is converted into written text by means of automatic speech recognition (ASR). This text is then passed through a machine translation (MT) system. The downside of this approach is that once the spoken language has been converted into written text, the MT system is ignorant of, e.g., the tone of voice, background sounds (i.e. context information), and the age or gender of the speaker.
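The cascade described above can be sketched in a few lines. The toy ASR and MT classes below are stand-ins for real systems and exist only for illustration – the point is the lossy hand-over between the two stages:

```python
# Toy stand-ins for real ASR and MT systems -- illustration only.
class ToyASR:
    def transcribe(self, audio):
        # A real ASR system returns bare text: tone of voice,
        # background sounds and speaker identity are lost here.
        return "guten morgen"

class ToyMT:
    def translate(self, text):
        # Word-by-word lookup standing in for a full MT engine.
        lexicon = {"guten": "good", "morgen": "morning"}
        return " ".join(lexicon.get(word, word) for word in text.split())

def cascade_slt(audio, asr, mt):
    """Classic cascade: speech -> text (ASR), then text -> text (MT)."""
    transcript = asr.transcribe(audio)  # stage 1: the lossy hand-over point
    return mt.translate(transcript)     # stage 2: sees only the transcript

print(cascade_slt(b"<audio bytes>", ToyASR(), ToyMT()))  # -> good morning
```

Whatever the MT stage produces, it can only ever work from the transcript – exactly the limitation the cascade is criticised for.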

Another, rather recent approach relies on a single neural network that translates the input audio signal in one language directly into text in a different language, without first transcribing it, i.e. converting it into written text. This end-to-end SLT translates directly from the spoken source text and thus has more contextual information available than a transcript provides. The source speech is neither “normalised” while being converted into written text, nor divided into segments that are treated separately from each other. Although very new, end-to-end SLT has this year already reached quality parity with the 30-year-old cascade approach. But it also has its peculiarities:

As the text is not segmented automatically (or naturally by punctuation, as in written text), the system must learn how to organise the text into meaningful units (similar to, but not necessarily, sentences). I was intrigued to hear that efforts are being made to find the right “ear-voice span”, or décalage, as we human interpreters call it. While a computer does not have the human problem of limited working memory, it still has to decide when to start producing its output – a tradeoff between lag and quality. This was the point when I decided I wanted to know more about this whole SLT subject, and had a video chat with Marco Turchi (thank you, Marco!) to ask him some questions that maybe only interpreters find interesting:
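This lag-versus-quality decision is studied in simultaneous MT research under the name “wait-k” policy: read k source tokens before emitting the first output token, then alternate reading and writing. A minimal sketch of the resulting read schedule (the function is illustrative, not part of any particular system):

```python
def wait_k_read_schedule(num_source_tokens, num_target_tokens, k):
    """Return, for each target token (1-based), how many source tokens
    have been read before it is emitted: wait for k tokens up front,
    then read and write alternately until the source is exhausted."""
    return [min(num_source_tokens, k + t - 1)
            for t in range(1, num_target_tokens + 1)]

# A larger k means more context (better quality) but more lag:
print(wait_k_read_schedule(6, 6, k=1))  # -> [1, 2, 3, 4, 5, 6]
print(wait_k_read_schedule(6, 6, k=3))  # -> [3, 4, 5, 6, 6, 6]
```

In interpreter terms, k is the machine’s décalage: with k=1 it speaks almost word-for-word behind the speaker; with a larger k it “waits for the verb”.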

Question: Could an end-to-end NLP system learn from human interpreters what a good ear-voice span is? Are there other strategies from conference interpreting that machine interpreting systems are taught in order to deal with difficult situations – like chunking, summarising, explaining/commenting, inferencing, changing the sentence order, or completely reshaping longer passages (and guessing, haha)? But then I guess a machine won’t necessarily struggle with the same problems humans have, like excessive speed …

Marco Turchi: Human interpreting data could indeed be very helpful as a training base. But you need to bear in mind that neural systems can’t be taught rules. You don’t just tell them “wait until there is a meaningful chunk of information you can process before you start speaking” like you do with students of conference interpreting. Neural networks, similar to human brains, learn by pattern recognition. This means that they need to be fed with human interpreting data so that they can listen to the work of enough human interpreters in order to “intuitively” figure out what the right ear-voice-span is. These patterns, or strategies, are only implicit and difficult to interpret. So neural networks need to observe a huge amount of examples in order to recognise a pattern, much more than the human brain needs to learn the same thing.

Question: If human training data was used, could you give me an idea of whether and how the learning system would deal with all those human imperfections – omissions, hesitations, and also mistakes?

Marco Turchi: Of course, human training data would include pauses, hesitations, and errors. But researchers are studying smart ways of weighting these “errors”, so it is a good way forward.

Question: And what happens if the machine is translating a conference on mechanical engineering and someone makes a side remark about yesterday’s football match?

Marco Turchi: Machine translation tends to be literal, not creative. It produces different options, and the problem is to select from them. To a certain extent, machines can be forced to comply with rules: they can be fed preferred terminology or names of persons, or they can be told that a speech is about a certain subject matter, let’s say car engines. Automatic domain adaptation, however, is a topic still being worked on. So it might be a challenge for a computer to recognise an unforeseen change of subject. Although, of course, a machine does not forget its knowledge about football just because it is translating a speech about mechanical engineering, it lacks the situational awareness of a human interpreter to distinguish between the purposes of different elements of a spoken contribution.

Question: One problem mentioned in your online talk: real-life human training data is simply not available, mainly due to permission and confidentiality issues. How do you deal with this problem at the moment?

Marco Turchi: The current approach is to create datasets automatically. For our MuST-C corpus, we have TED talks transcribed and translated by humans. These translations, together with their spoken source texts, are then fed into our neural network for it to learn from. There are other such initiatives, like Facebook’s CoVoST or Europarl-ST.

Question: So when will computers outperform humans? What’s the way forward?

Marco Turchi: Bringing machine interpreting to the same level as humans is not a goal that is practically relevant. It is just not realistic. Machine learning has its limitations. There is a steep learning curve at the beginning, which then flattens out at a certain level. Dialects or accents, for example, will always be difficult for a neural network to learn, as it is difficult to feed it enough such data for the system to recognise them as something “worth learning” and not just noise, i.e. irrelevant deviations from normal speech.

The idea behind all this research is always to help humans where computers are better. Computers, unlike humans, have no creativity, which is an essential element of interpreting. But they can be better at many other things. The most obvious are recognising numbers and named entities or finding a missing word more quickly. But there will certainly be more tasks computers can fulfil to support interpreters, which we have yet to discover as the technology improves.

Thank you very much, Marco!

After all, I think I prefer being supported by a machine to the other way around. The other day in the booth, I had to read out pre-translated questions and answers provided by the customer. It was only halfway through the first round of questions that my colleague and I realised we were reading out machine translations that had not been post-edited. While some parts were definitely not recognisable as machine translations, others were complete nonsense content-wise (although they still sounded good). So what we did was a new kind of simultaneous, on-the-fly post-editing … Well, at least we won’t get bored too soon!

Further reading and testing: (generates subtitles) (transcribes audio and video files) (European live translator – a current project to provide a solution to transcribe audio input for hearing-impaired listeners in multiple languages)


Ein Hoch auf den guten Ton beim hybriden #DfD2020 | Good sound and vibes at Interpreters for Interpreters Workshop

+++ for English version see below +++

In einer Mischung aus ESC (“Hello from Berlin”) und Abiprüfung (getrennte Tische) hat am heutigen 18. Juli 2020 der bislang teilnehmerstärkste Dolmetscher-für-Dolmetscher-Workshop als Hybridveranstaltung in Bonn stattgefunden.

169 Dolmetscher*innen waren angemeldet, davon 80 Dolmetscher corona-konform persönlich vor Ort. Dies alles organisiert vom Fortbildungs-Dreamteam der AIIC Deutschland, Inés de Chavarría, Ulla Schneider und Fernanda Vila Kalbermatten, technisch möglich gemacht durch das bewährte Team von PCS.

Lisa Woytowicz hat über ihre Masterarbeit zum Thema Listening Styles referiert. Relational Listening scheint ein unter Dolmetschern unterschätztes Thema zu sein (dazu mehr später auf diesem Kanal).

Monika Ott über den Umgang mit Kunden gibt uns noch einmal als Hausaufgabe auf, uns als Allround-Dienstleister (Conference Interpreting Service Provider laut der noch in Entwicklung befindlichen ISO 23155) zu verstehen und unser Netzwerk zu nutzen, um auf Kompetenzen von Kolleg*innen zugreifen zu können. Denn nur gemeinsam können wir die eierlegende Wollmilchsau sein: RSI-Plattformen, Hubs, Hardware, Terminologiemanagement, Finanzen, Datenschutz.

Dr. Karin Reithofer hat uns das Thema Englisch als Lingua Franca sehr anschaulich nahegebracht. In ihrer Forschungsarbeit hat sie herausgefunden, dass das Verständnis eines gedolmetschten Fachvortrags (monologisches Setting) signifikant besser ist als das Verständnis bei der Kommunikation in nicht-muttersprachlichem Englisch. In Dialogsituationen hingegen kann nicht-muttersprachliches Englisch durchaus funktionieren. Auch interessant: Wenn ein Nicht-Muttersprachler Englisch spricht, fällt es uns leichter, ihn zu verstehen bzw. zu dolmetschen, wenn wir die Muttersprache dieses Redners kennen.

Gemeinsam mit Alex Drechsel aus dem “Studio Brüssel” durfte ich dann im Hybridformat unseren subjektiven Saisonrückblick präsentieren und diskutieren:

Das Thema Hub oder Heimdolmetschen haben wir weitestgehend ausgeklammert. Meine Gedanken aus Prä-Corona-Zeiten nach dem RSI-Seminar im Mai 2019 finden sich im Blogbeitrag zu Hub or Home.

Was mich dabei neben dem Thema Geld aktuell am meisten umtreibt, ist die Frage, was wir tun, wenn der Ton der Wortbeiträge zu schlecht ist, um simultan zu dolmetschen. Hier wünsche ich mir von uns allen ein souveränes und nuanciertes Vorgehen, das über “das kann nicht gedolmetscht werden” hinausgeht.

Ein Vorschlag zum Thema “Was tun bei schlechtem Ton”:

  1. Ein technischer Co-Host oder “Event-Koordinator”, wie PCS es nennt, überwacht die Tonqualität und greift moderierend ein, wenn nicht mehr simultan gedolmetscht wird. Diese Entscheidung sollte zentral durch eine Person für das ganze Dolmetschteam getroffen werden.
  2. Dann bei einigermaßen brauchbarem Ton: auf Konsekutivdolmetschen umstellen.
  3. Wenn keine Tonübertragung mehr möglich: Beiträge aus dem Chat dolmetschen.

Grundsätzlich gut: Während der virtuellen Sitzung einen Hinweis für die Teilnehmer einblenden, dass remote gedolmetscht wird.

Abschließend wurde unser AIIC-Präsident Uroš Peterc zugeschaltet, der mit seiner Bewertung der Lage den DfD würdig abgerundet hat. Er erwartet, dass das sehr diverse Angebot an RSI-Plattformen und Hubs sich sortieren wird. Trotz der Planungsunsicherheit sieht er die AIIC in der Pflicht, nicht nur zu reagieren, sondern zu agieren. Ein besseres Saisonfazit könnte ich nicht formulieren.


+++ English version +++

Disclaimer: This blog post was written by a non-native ELF speaker ;-).

What seemed like a mixture of the European Song Contest (“Hello from Berlin”) and end-of-term exams (with tables set up far apart) was in fact the first ever hybrid edition of the annual Interpreters for Interpreters Workshop.

On 18 July 2020, 169 interpreters (a record high!) had registered for this IfI workshop in Bonn. 80 of us were there on-site, with strict hygiene measures in place, while the others were connected via Zoom. All this had been organised by a fantastic org team consisting of Inés de Chavarría, Ulla Schneider and Fernanda Vila Kalbermatten, while the technical setup was in the experienced hands of PCS.

The first speaker was Lisa Woytowicz, who presented her master’s thesis on Listening Styles. It looks like Relational Listening isn’t exactly what interpreters are best at … but you will read more about it on this blog soon.

Monika Ott reminded us to be all-round service providers to our customers (CISP, Conference Interpreting Service Providers according to ISO 23155, which is still under development) and use our network to draw upon the expertise of our colleagues. For if we join forces, we can cover the whole range of service and advice needed: RSI platforms, hubs, hardware, terminology management, pricing, data protection etc.

Dr Karin Reithofer told us what we need to know about English as a Lingua Franca (ELF).  Her research has shown that in monologue settings, technical presentations are significantly better understood when people speak their mother tongues and use interpretation than when they communicate in non-native English. In dialogues, however, non-native (“ELF“) English may work as well. What’s also interesting: When interpreting non-native English speakers, it is easier for us to understand them if we know their mother tongues.

Alex Drechsel and I then gave our “hybrid“ review of the past – and first ever – virtual  interpreting season, me on-site, and Alex from “Studio Brussels”:

The hub vs. home discussion was left aside in our review. It has been largely discussed already (see my pre-Covid blog article Hub or Home). The main points that keep my mind busy after this virtual interpreting season are money and sound quality.

As to the latter, I would like to see a nuanced way of dealing with situations where sound quality is not sufficient for simultaneous interpreting. I would like us to be more constructive and go beyond the usual black and white approach, i.e. either keep interpreting or stop interpreting.

My suggestion for a professional way of handling poor sound situations:

  1. Have a technical co-host or “event coordinator”, as PCS puts it, monitor sound quality and intervene as a moderator when the sound is too poor for simultaneous interpretation. The decision of when to stop interpreting should be in the hands of one person for the whole team.
  2. If sound quality allows for it: switch to consecutive interpreting.
  3. If not: Have participants type in the chat and do sight translation.

I also like the idea of displaying a disclaimer to the meeting participants in videoconferences, stating that the meeting is being interpreted remotely.

Finally, our AIIC president, Uroš Peterc, joined us via Zoom. His view of the current situation perfectly rounded off the day. He expects the vast offer of RSI platforms and hubs to consolidate over time. In these times of uncertainty, he wants AIIC not only to react to market developments but to be a proactive player. I couldn’t have put it better myself.


#multitalkingfähig – Impressions from the 2019 BDÜ Conference in Bonn

No single person could possibly do justice to the potpourri of more than 100 talks, discussions, seminars and workshops that the BDÜ conjured up at last weekend’s BDÜ Conference in Bonn. So my little report is no more than a very personal snapshot. All the abstracts and articles can be read much more thoroughly in the conference proceedings.

My first thought on the way home after the first day of the conference: the way DeepL is being studied like some kind of natural phenomenon, with a mixture of fascination and horror, strongly reminds me of how, some 15 years ago, everyone stared at Google in amazement and tried to fathom its “behaviour”. I actually felt a little sorry for poor DeepL – for a machine, it really does a remarkable job, and yet it earns nothing but scorn among language professionals for the (admittedly often funny) errors it produces. Fortunately, it didn’t stop there: at least for me, several highly interesting talks offered genuinely new and interesting things on machine translation.

Patrick Mustu on DeepL and the translation of legal texts in the English-German language pair

Patrick Mustu delivered an account, as well-founded as it was entertaining, of what DeepL does with legal texts. What stuck in my memory most:

  • DeepL produces different translations of the same text at different points in time. One day, the parties hereto were Die Parteien hieran, the next day die Beteiligten.
  • DeepL does not look at the official translations of EU legal texts. Abbreviations that are perfectly familiar to lawyers and mere mortals alike, such as DSGVO, were never translated.
  • So-called doublets (any and all, null and void, terms and conditions) and triplets (right, title and interest) are not recognised as units of meaning but translated literally, word by word. They nicely illustrate the translator’s balancing act between freedom and accuracy, which Patrick Mustu summed up beautifully in the memorable sentence “Translators, too, are interpreters.”

Short seminar by Daniel Zielinski and Jennifer Vardaro on the terminology problem in machine translation

This seminar, which I found utterly fascinating, showed how the algorithms of commercially available MT systems can be fed with – i.e. trained on – customer-specific terminology. Thanks to this engine customization, the system can be helped to pick the “right” term from a range of possible candidates. In some texts, terminology carries most of the content, and it is also relevant for SEO. Improved terminology is therefore decisive for translation quality.

The speakers had tested a number of commercially available systems, including Amazon, Globalese, Google, Microsoft and SDL. For each, they demonstrated the user interface and pricing model and evaluated the results. The reduction in terminology errors achieved by feeding in one’s own terminology was presented weighted by criticality – in some cases it exceeded 90%.

Interesting: importing “bare” terminology did not produce results as good as feeding in whole sentences – in some cases the translations even got worse. Context, context, context apparently plays a major role in MT, too.
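The kind of evaluation reported here – terminology errors weighted by criticality – can be sketched as a toy metric. This is purely illustrative and not the speakers’ actual methodology; the term list and weights are invented:

```python
def weighted_term_error_rate(translation, required_terms):
    """Toy terminology check: required_terms maps each mandatory
    target term to a criticality weight. The score is the weight
    share of required terms missing from the MT output (0.0 = all
    terms present, 1.0 = all missing)."""
    total = sum(required_terms.values())
    if total == 0:
        return 0.0
    missed = sum(weight for term, weight in required_terms.items()
                 if term.lower() not in translation.lower())
    return missed / total

# Hypothetical customer glossary with criticality weights:
terms = {"torque": 3, "camshaft": 2}
print(weighted_term_error_rate("The torque is transmitted via the shaft.", terms))  # -> 0.4
```

Comparing such a score before and after engine customization is one simple way to express a “reduction in terminology errors weighted by criticality” as a single number.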

When I later asked terminology guru Klaus-Dirk Schmitz about the megatrends in terminology, one of the things he spontaneously mentioned was precisely the question of how to integrate terminology into MT in a meaningful way. The seminar by Daniel Zielinski and Jennifer Vardaro was evidently right on trend!

Nina Cisneros – Speech-to-Text Interpreting

Nina Cisneros’s talk on speech-to-text interpreting (in German: Schriftdolmetschen), also known as live transcription or live captioning, took me from writing to speaking. This real-time “transcription” is normally monolingual and serves as hearing support, mostly for people with hearing impairments. As in simultaneous interpreting, speech-to-text interpreters work in pairs so that, given the high cognitive load, they can take turns regularly and also correct and support each other.

Both methods were demonstrated: the conventional method of speech-to-text interpreting, i.e. with hands and keyboard, and the software-based method, i.e. respeaking what is said and having it transcribed by speech recognition software – in this case Dragon, with the help of a sound-damping mask called a Silencer.

The subsequent discussion about additional services that could be offered in multilingual conference interpreting in connection with speech-to-text interpreting was inspiring. In a kind of interlingual speech-to-text interpreting, or written simultaneous interpreting, the interpretation could be delivered in written form – like live subtitles on a big screen, or via an app on the listeners’ mobile devices – instead of via microphone and headphones. Apart from the language transfer itself, this would also provide welcome hearing support: some listeners find it easier to read a foreign language than to understand it by ear.
As an alternative to handing out countless headsets, such a solution would also be discreet and uncomplicated. Even today, receivers are sometimes dispensed with and the interpretation is simply played over the room loudspeakers for the whole audience (whether they understand the source language or not).

One question remained unanswered: whether, in simultaneous interpreting, one could speak in a Dragon-friendly way instead of for human listeners – not only clearly and at an even pace, but also dictating punctuation and speaking as “print-ready” as possible, i.e. without self-corrections, for example. I think comma it would be worth a try exclamation mark

Even if, for me, part of the charm of a BDÜ conference lies in seeing what technological innovations the translators are up to (i.e. what might spill over to the interpreters in a few years), there were of course also many exciting talks on conference interpreting – and not only on the trending topic of RSI. What I had already noticed at the DfD 2019 in the summer was confirmed here: it pays off to let graduates talk about their master’s theses. Nora Brüsewitz reported on her thesis on automatic speech recognition as support for simultaneous interpreters. She tested the systems of Google, Watson (IBM), Aonix and Speechmatix and evaluated them against the criteria of numbers, proper names, terminology, homophones and illogical statements. Overall, Google came first; in correctly transcribing numbers, Watson was clearly ahead at 97%; in proper names, Google led at 92% (all the others were well below 40% here!); and in correcting illogical statements, all systems did surprisingly well, at between 70 and over 90% (details in the conference proceedings – worth reading!). Altogether an approach whose development is worth keeping an eye on!
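A per-category ASR evaluation like the one described here can be sketched minimally: for each category (numbers, proper names, …), check what share of the expected reference items actually appears in the automatic transcript. This is an illustrative stand-in, not the thesis’s actual methodology:

```python
def category_accuracy(reference_items, transcript):
    """Share of expected reference items (e.g. the numbers or proper
    names occurring in a speech) found verbatim in the ASR transcript.
    A naive exact-substring check, purely for illustration."""
    if not reference_items:
        return 1.0  # nothing to find, nothing missed
    hits = sum(1 for item in reference_items if item in transcript)
    return hits / len(reference_items)

# Hypothetical mini-evaluation of one transcript against two categories:
numbers = ["97", "1,000"]
names = ["Bonn", "Brüsewitz"]
transcript = "97 percent of the 1,000 participants met in Bonn"
print(category_accuracy(numbers, transcript))  # -> 1.0
print(category_accuracy(names, transcript))    # -> 0.5
```

Scoring each category separately is what makes results like “97% on numbers but under 40% on proper names” visible at all – a single overall word error rate would hide exactly the items interpreters care about most.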

Sarah Fisher presented the results of her master’s thesis Voices from the booth – collective experiences of working with technology in conference interpreting. Here, too, the details are better taken from the thesis itself or the conference proceedings, but I would like to let one very telling slide speak for itself:

Sarah Fisher
We need tech to work

Last but not least, Claudio Fantinuoli’s talk The Technological Turn in Interpreting: The Challenges That Lie Ahead offered a welcome change of perspective, pointing out that without “new technologies” there would be no simultaneous interpreting in the first place. The big technological developments, according to Claudio, happen fast and independently of us. I could not have thought of a better closing remark.

And once again it was confirmed: the alpha and omega of any conference is the coffee breaks. True, in the two (of three) days I attended, I only managed to talk to an estimated 0.1% of the more than 1,000 participants (and to hear at most 10% of the presentations). But I could hardly have ploughed through enough books, journals, blogs, posts and websites in two days to gather, by reading alone, as much relevant information and as many impressions as I got here from person to person.


News from the Interpreters for Interpreters Workshop in Bonn #aiicDfD2019

The 2019 summer break is drawing to a close – time for the Interpreters for Interpreters workshop traditionally organised by AIIC Germany. On 31 August, at an outside temperature of at least 31 degrees, 71 interpreters spent the whole day in Bonn discussing topics around “conference interpreting 4.0”.

You won’t find a complete summary of the day here, of course, but a few morsels of information related, in the broadest sense, to knowledge management – hopefully whetting your appetite for more, because I will try to win over one or the other speaker for a guest article.

Lisa Pfisterer – Choosing One’s Professional Domicile

Lisa Pfisterer, a graduate of TH Köln, reported on her master’s thesis about the criteria for choosing one’s professional domicile. For this, she surveyed 62 conference interpreters.

In this survey, proximity to clients, proximity to colleagues and transport connections (high-speed rail, airport) were named as important criteria for choosing a professional domicile. In practice, however, the decision for a particular domicile is ultimately driven mainly by personal reasons. According to the survey, the volume of work depends more on networking with colleagues and clients than on travel costs.

Interesting: 14 of the 62 respondents have a de facto professional domicile that differs from their official one. Conference interpreters travel about 50 km on average for networking purposes, while their interpreting assignments take place within an average radius of 100 km.

Francesca Maria Frittella and Bianca Prandi – Interpreting numbers and CAI tools

Francesca Maria Frittella and Bianca Prandi had brought along a highly knowledge-relevant topic. They conduct research in Germersheim on the interpreting of numbers and the use of CAI tools.

On numbers as a "problem trigger" in interpreting, Francesca had some interesting figures on error rates for us: these lie at 45-50% for advanced students and range from 18 to 40% for experienced interpreters. Misinterpreted numbers often cause plausibility problems, which is why targeted preparation with "numerical information" is so important. "Numbers have meaning" – a lovely sentence that really should be motivation enough to attend a training session on the subject with Francesca and Bianca.

In the future, the helpful boothmate who notes down numbers might be replaced by a tool that uses automatic speech recognition to transcribe numbers from the source speech on screen. So explained Bianca Prandi, who is investigating the influence of visual input on the interpreting process in her PhD. Bianca asked who among those present actually worked with CAI (computer-aided interpreting) tools, and only very few of the 71 participants raised their hands. The ensuing discussion suggested that most programs do not quite match the needs of conference interpreters, that the effort of familiarisation often "eats up" the benefit, or that the preparation result is worse than without a tool. Hopefully an incentive for all CAI tool developers!

Sabine Seubert – Eye-tracking study on visual input

Fittingly, Sabine Seubert introduced us to the processing of visual information in simultaneous interpreting. In her eye-tracking study, she recreated a realistic conference setting with 13 conference interpreters and recorded and analysed their gaze behaviour. In her book Visuelle Informationen beim Simultandolmetschen: eine Eyetracking-Studie, you can read the study in all its splendour, along with fascinating findings from attention research. Here are just a few results of the study that have particularly stuck in my memory:

With the microphone switched off, the interpreters' gaze wanders around the room; the field of vision is wide. As soon as the microphone is on, the gaze narrows, mainly onto the speaker and the screens. The "fixed stare", a strong and prolonged focus, is observed above all under heavy cognitive load ("difficult passages") and also under perceptual load or in the presence of distracting events. Closing the eyes and looking at information-neutral areas in the middle of the room were also observed in those moments, with distractors (i.e. distracting elements such as people moving through the room) not being fixated.

In this context, the question naturally arises as to how much of a strain it is to select visual input on several (moreover split) screens, input which additionally has to be differentiated by relevance and function – for example in remote simultaneous interpreting.

The concept of perceptual blindness is also interesting: inexperienced interpreters apparently notice distractors less, because, unlike experienced interpreters, they have no spare processing capacity for them.

And last but not least, here are the slides for the contribution on remote simultaneous interpreting that I had the pleasure of giving together with Magdalena Lindner-Juhnke:

All in all, this 10th Interpreters-for-Interpreters workshop in Bonn was an inspiring forum full of diverse contributions and discussions with eager-minded colleagues from AIIC, VKD and no association at all. Just the right mix of research topics and strategic professional issues, sending you home with plenty of food for thought.


No me canso, ganso – my impressions from Foro Lenguas 2019 in Mexico City

In the middle of the petrol shortage, I arrived in Mexico City last week to attend Foro Lenguas 2019. At this conference, with 20 Amerindian and 7 foreign languages represented, I threw myself into the adventure of discussing grammar, the importance of professional associations and the role technologies play in our profession.

Among the countless special and interesting moments, these were the most inspiring for me:

24 January was plenary day.

Among many other fascinating talks, we heard Concepción Company speak on "effective communication and linguistic purity: antagonistic concepts?" I learned that we are all grammatical beings and that what we consider correct or incorrect is often merely a social judgement: "Tell me how you speak and I will tell you who you are." It reminds me a lot of the discussions we interpreters sometimes have. Should we educate the client and speak "properly", or speak "the way they speak"?

And: may one speak in dialect or use regionalisms? Concepción Company explained that the more elevated the register, the more neutral it becomes; the variation and richness of dialect and local flavour – the identity – are lost. It makes me wonder whether it is acceptable for interpreters to speak with an accent, or even in dialect. Or only if the speaker does so too? Can a dialect be rendered with another dialect in the target language?

Another very interesting fact: bilingualism is a natural state. Although the figures vary greatly depending on the source, it seems to cover more than half of the world's population.

I was deeply impressed by the talk on language services in crisis situations, with the perspectives of Julie Burns, interpreter at The Communication Bridge, Ian Newton, executive director of InZone, and Carlos Sánchez González of Rescate Internacional Topos, A.C. Beyond their vivid accounts of their experiences in crisis regions, they made very clear how different the role of an interpreter in such a situation is from that of a conference interpreter: there is no glass in between, and you become an active, responsible actor who cannot "hide" in the booth.

Moreover, most fittingly in this United Nations International Year of Indigenous Languages, we heard the Mapuche indigenous poet and activist (who does not want to be called Chilean) María Teresa Panchillo. She spoke to us about her culture and the interplay of language, modernity and tradition, and impressed me with her highly original and genuine account.

Friday 25 January was my workshop day, focused above all on technology. Both in my own workshop and in the two others I attended, the room was completely full.

First I had the honour of discussing efficient preparation and information and knowledge management with a group of extremely friendly and inspired colleagues. It was the first time I gave this workshop on American soil, and in Spanish at that, with participants from Brazil, Mexico, the USA and Canada, so I took away many very interesting reflections. We discussed the particularities of our preparation work, the "ya no" (not anymore) and the "ni modo" (nothing to be done about it), a bit of theory, and many tools for terminology management, search and extraction. In the end, my real motive for giving courses – learning about other people's opinions and ideas – was more than satisfied 🙂

Next came a workshop on "Disruptive technologies: blockchain and translation" with Dr Miguel Duro Moreno. He told us what blockchain technology will bring translators (and partly interpreters too):

  • Collaborative teamwork with full security guarantees
  • Assignment of intellectual property rights, with all the consequences this has for authorship, remuneration, reputation and liability
  • Tamper-proof translations, important above all in the legal domain
  • "If-then" smart contracts (when the translation is finished, payment is made automatically), negotiated between computers/machines, with automatic contract offer and acceptance
  • Secure, instant payments to your digital (mobile) wallet without intermediaries such as banks or financial companies like PayPal – the Uberisation of the economy
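The "if-then" logic of such a smart contract can be sketched as a toy model in plain Python – this is not real blockchain code, and all names (TranslationContract, fund, deliver) are purely illustrative:

```python
# Toy model of an "if then" smart contract for a translation job:
# the client's fee is locked in escrow and released automatically
# once the translation is marked as delivered. Illustrative only --
# a real smart contract would run on a blockchain platform.

class TranslationContract:
    def __init__(self, client, translator, fee):
        self.client = client
        self.translator = translator
        self.fee = fee
        self.escrow = 0          # funds locked by the client
        self.delivered = False
        self.paid = False

    def fund(self, amount):
        """Client locks the agreed fee in escrow (contract offer/acceptance)."""
        if amount != self.fee:
            raise ValueError("escrowed amount must equal the agreed fee")
        self.escrow = amount

    def deliver(self):
        """IF the translation is delivered THEN payment is released automatically."""
        self.delivered = True
        if self.escrow == self.fee:
            self.paid = True     # instant payment to the translator's wallet
            self.escrow = 0


contract = TranslationContract("ACME Corp", "Jane Translator", fee=500)
contract.fund(500)
contract.deliver()
print(contract.paid)   # True: payment released without a bank in between
```

The point of the sketch is that no human intermediary triggers the payment: delivery itself does.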

Miguel Duro plans to publish an article on this topic in the journal Comparative Legilinguistics, entitled Translation quality gained through the implementation of the ISO 17100:2015 and the usage of blockchain: The case of sworn translation in Spain.

And finally, I attended the "Terptech" workshop by Darinka Mangino and Maha El-Metwally. They gave us an entertaining overview of gadgets, programs and other technologies useful for interpreters – for digitalising notes, transcribing videos, remote interpreting and so on. I had to leave in the middle of the workshop to catch my flight home … and left Mexico feeling "No me canso, ganso. I'll be back soon!"

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English and French (C), based in Düsseldorf, Germany. She has been dedicated to knowledge management since the mid-1990s.

About Term Extraction, Guesswork and Backronyms – Impressions from JIAMCATT 2018 in Geneva

JIAMCATT is the International Annual Meeting on Computer-Assisted Translation and Terminology, an IAMLADP task force in which most international organizations, various national institutions and academic bodies exchange information and experience in the field of terminology and translation. For this year’s JIAMCATT edition in Geneva, I had the honour of running a workshop on Tools for Interpreters – an idea I found absolutely intriguing, as the audience would not necessarily be interpreters, but translators, terminologists and heads of language, conference and/or documentation services. So I chose a hands-on workshop setting called “an hour in the shoes of a conference interpreter”. Participants had to prepare a meeting using different tools and would then listen to a 10-minute sequence of this meeting and see how well they felt prepared.

The meeting to be prepared was a session of the EP Special Committee on the Union’s authorisation procedure for pesticides held on April 12, 2018. Participants could work in one of two scenarios:

Scenario 0: Interpreters haven’t received any documents and hardly any info about the conference. They have to guess and prioritise more than those working under Scenario 1.

Scenario 1: Interpreters have received all the documents one hour in advance (quite a realistic scenario, as Marcin Feder from the EP pointed out).

The participants were free to choose to work either alone or in a team. They were encouraged to test/evaluate one of the tools presented:

InterpretBank, a Computer-Aided Interpreting tool that covers many elements of an interpreter’s workflow, like glossary creation, multi-dictionary search, term extraction, document annotation, quick search in the booth and flashcard learning.

InterpretersHelp, a cloud-based Computer-Aided Interpreting tool that allows online shared glossary creation, glossary sharing with the community, manual term extraction and flashcard learning, as well as document and job management.

OneClickTerm, a browser-based term extraction tool

GT4T, a plugin for looking up words in several online dictionaries, encyclopaedias and machine translation sites from within any application

At the end of the exercise, the participants watched the 10-minute video sequence of the committee meeting of April 12, 2018. What followed was a lively and inspiring discussion, in which each group described their workflow and how efficient they thought it was.

Those who had the relevant documents and ran them through the OneClick term extraction found that most critical terms that came up in the speech were in the extracted list. Others found the relevant documents by way of internet research and did the same.

Quickly installing programs or creating test accounts didn’t work out as easily for everyone, so some participants reverted to creating glossaries – common practice in the “real world” – and felt well prepared with that. Ten terms from their glossary were mentioned in the 10-minute video sequence. Others spent so much time familiarising themselves with the new tools that they didn’t feel well prepared, but were very happy with what they had seen of InterpretersHelp and OneClickTerm.

When it comes to preparing for an EU meeting – at least when working from and into EU languages – there is an abundance of information available on the internet. It became clear once more that EU interpreters, in terms of meeting preparation, live in paradise. The EP legislative observatory, IATE and Eurlex were the main sources of information mentioned. I was happy to learn from Mariangeles Torrent (SCIC) that Prelex has not disappeared, but simply has turned into a tab within Eurlex named “legislative procedures“.

A short discussion about the pros and cons of Eurlex led to the conclusion that for interpreters it would be wonderful to have more than three languages displayed in parallel, and possibly a term extraction feature or technical terms highlighted in the text. Josh Goldsmith shared the news that by adding a hyphen plus a language code to the URL of the multilingual display, a fourth, fifth etc. language can indeed be added, although the page layout is then far from perfect. For the moment I have decided to stick to the method I have been using for over ten years, which consists of copying and pasting the columns into an Excel spreadsheet.
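The hyphen-plus-language-code trick can even be scripted. The sketch below appends an extra language code to the language segment of a multilingual-display URL; the exact URL layout (`/legal-content/EN-DE-FR/TXT/...`) and the CELEX number are assumptions for illustration only:

```python
# Sketch: add a fourth (or fifth ...) language to a EUR-Lex multilingual
# display by appending "-XX" to the hyphen-separated language segment of
# the URL. The URL structure below is assumed for illustration.

def add_language(url: str, lang: str) -> str:
    """Insert an extra language code into the language segment of the URL."""
    prefix = "/legal-content/"
    start = url.index(prefix) + len(prefix)
    end = url.index("/", start)          # end of the language segment
    langs = url[start:end].split("-")
    if lang.upper() not in langs:        # avoid duplicating a language
        langs.append(lang.upper())
    return url[:start] + "-".join(langs) + url[end:]

url = "https://eur-lex.europa.eu/legal-content/EN-DE-FR/TXT/?uri=CELEX:32009R1107"
print(add_language(url, "es"))
# https://eur-lex.europa.eu/legal-content/EN-DE-FR-ES/TXT/?uri=CELEX:32009R1107
```

Adding a language that is already displayed simply returns the URL unchanged.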

I was very glad to hear one participant mention the word “thinking” in the context of conference preparation. He looked at the agenda, and the first thing he did was think about what the meeting might be about. He then did some background research in Wikipedia and other sources and looked up product names, which actually were mentioned in the speech. He also checked who the members of the committee were – they didn’t appear in this part of the meeting, but the information would otherwise have been useful.

While terms and glossaries were clearly the topics most intensely discussed, it became clear that semantic and contextual knowledge is crucial for interpreters to get a grasp of the situation they are working in. For as much as I appreciate a list of terms extracted from a meeting document as last-minute preparation, there is no substitute for understanding the content people are referring to. Hence my enthusiasm about the fact that the different semiotic levels (terms, content, context) did come up in the discussion. And indeed the notes I took while listening to the speech reflect the same thing: sometimes my doubts or reflections were simply about terms (how do you say co-formulant or low-risk active substances in German?), some about the situation (Can beer and talc be on the list of basic substances? Is the non-native speaker sure that this is the right word?) and some about meaning (What exactly is a candidate for substitution?).

It was also very interesting to see how different ways of preparing a meeting turned out to be useful in the meeting. Obviously, there is not just one way to success in meeting preparation.

Among the software features participants would like to see to support the information and knowledge work in conference interpreting, there seemed to be a wide consensus that term extraction and markup of glossary terms in meeting documents – like InterpretBank and Intragloss offer – are extremely useful. Text summarisation was also mentioned. Several participants found InterpretBank’s speech-to-text integration (based on Dragon) very interesting, but unfortunately, due to practical constraints, we couldn’t test it.

When it comes to search functions, it is crucial that intuitive searching is possible in the relevant (!) documents and sources. Relevance seems to be an important factor in conference preparation. What with the abundance of information available nowadays, finding out what is really useful is key. However, many of the big international organisations like EU, UN and WTO do have very useful document management systems in place which help to find one’s way around.

From a freelancer’s perspective, I think that organizations should rather go for browser-based, i.e. device-independent, systems to support their interpreters. This lowers the entry barrier of having to install something on each computer, apart from facilitating mobile access and online collaboration. Although I must say that I do also fancy the idea of a small plugin that works in any software, like my most recent discovery, GT4T. At least as freelancers, we change settings so often (back and forth between personal computers, mobile devices, Excel sheets, shared Google docs, paper, institutional information management systems etc.) that a self-contained environment for conference interpreters is maybe too clumsy and unrealistic. After all, hotkeys seem to be back in fashion: I also heard from the WTO colleagues that they have developed a tool quite along the same lines, creating special hotkeys for translators.

And finally, my favourite newly learnt word: backronym

Backronyms are acronyms that used to be normal words and were re-interpreted later. While translators have a chance to think twice or recognise the word as a backronym because it is written in capitals, interpreters may struggle much more with this. It may take us a moment or two to figure out that the sentence “we need to do what PIGS do” refers to a “Professional Interpreters’ Gymnastics Society” rather than an animal.

Further reading:

Workshop presentation (pdf): JIAMCATT 2018 Tools for Interpreters

Teresa Ortego Antón (2015): Terminology management tools for conference interpreters: an overview. In: Eleftheria Dogoriti & Theodoros Vyzas (eds.): International Journal of Language, Translation and Intercultural Communication, Vol 5 (2016), Technological Educational Institute of Epirus, Greece. 107-115.

Hernani Costa, Gloria Corpas Pastor, Isabel Durán Muñoz (LEXYTRAD, University of Malaga, Spain): A comparative User Evaluation of Terminology Management Tools for Interpreters. In: Proceedings of the 4th International Workshop on Computational Terminology, 23 August 2014, Dublin, Ireland. 68-76
Anja Rütten (2017): Terminology Management Tools for Conference Interpreters – Current Tools and How They Address the Specific Needs of Interpreters. In: Translating and the Computer 39, Proceedings, 16-17 November 2017, AsLing, The International Association for Advancement in Language Technology, London, England. 98 ff.



Interpreting and the Computer – finally a happy couple?

This year’s 39th edition of the Translating and the Computer conference, which Barry Olsen quite rightly suggested renaming to Translating and Interpreting and the Computer :-), had a special focus on interpreting, so obviously I had to go to London again! And I was all the more looking forward to going there as – thanks to Alex Drechsel’s and Josh Goldsmith’s wonderful idea – I was going to be a panelist for the first time in my life (and you could tell just how excited we all were about our panel, and the whole conference for that matter).

The panelists were (from left to right):
Joshua Goldsmith (EU and UN accredited freelance interpreter, teacher and researcher, Geneva)
Anja Rütten (EU and UN accredited AIIC freelance interpreter, teacher and researcher, Düsseldorf)
Alexander Drechsel (AIIC staff interpreter at the European Commission, Brussels – don’t miss Alex’ report about Translating and the Computer 39 including lots of pictures of the event!)
Marcin Feder (Head of Interpreter​ Support and Training Unit at the European Parliament, Brussels)
Barry Slaughter Olsen (AIIC freelance interpreter, Associate Professor at the MIIS Monterey and Co-President of InterpretAmerica),
Danielle D’Hayer, our moderator (Associate Professor at London Metropolitan University)

If you have an hour to spare, here’s the complete audio recording (thanks for sharing, Alex!): “Live at TC39: New Frontiers in Interpreting Technology” on Spreaker.

If I had to summarise the conference in one single word, it would be convergence. It appears to me from all the inspiring contributions I heard that things are finally starting to fall into place, converging towards supporting humans in their creative tasks and decision-making by sparing them the mechanical, mindless work. This obviously does not only apply to the small world of interpreting, but to many other professions, too. “It is not human against machine, but human plus machine”, as Sarah Griffith Masson, Chair of the Institute of Translation and Interpreting (ITI) and Senior Lecturer in Translation Studies at the University of Portsmouth, put it in her speech.

OK, this is easy to say at a conference called “Translating and the Computer”, where the audience is bound to be a bit on the nerdy side. And the truth is, Gloria Corpas Pastor, Professor of Translation and Interpreting at the University of Malaga, presented some slightly sobering results from her survey on the use of computers among translators and interpreters. It looks like interpreters are less technology-prone than translators, a fact that made most of us in the audience nod knowingly. But there is no reason to be pessimistic, given the many interesting use cases presented at the conference, plus the efforts being made, for example, at the European Commission and Parliament to provide conference interpreters with the tools they need for their information and knowledge management, as Alexander Drechsel and Marcin Feder reported.

So while for everyone who is not an interpreter, interpreters rather seem to be the frontier to be overcome by technology, the conference was all about new frontiers in interpreting in the sense of how technology can best be used in order to support interpreters and turn the relation between interpreters and computers into a symbiotic one. Here are the key ideas I personally took home from the conference:

Whatsappify translators’ software

A question asked by several representatives of international organisations like WIPO (who, by the way, have this wonderful online term database called WIPO Pearl), the EU, WTO and UN was what our ideal software support for the booth would look like. Unfortunately, the infallible information butler described back in 2003 has not become reality yet, but many things like intuitive searching and filtering, parallel reading/scrolling of documents in two languages, and linking a term in its textual context to the corresponding entry in the term database have been around for twenty years in Translation Memory systems. Most international organisations have so many translation resources that could be tapped if only access to them were open and a bit more tailored to the needs of interpreters. Translators and interpreters could then benefit from each other’s work much more than they tend to do nowadays. Obviously, a lot could be gained by developing more interpreter-friendly user interfaces.

Which reminds me a bit of WhatsApp. People who wouldn’t go anywhere near a computer before, and could hardly manage to receive, let alone write, an email, seem to have become heavy WhatsApp users with the arrival of smartphones. While good old email has been offering pretty much the same functions AND doesn’t force you to always use the same device, it’s stupid WhatsApp that has finally turned electronic written communication into the normal thing to do, simply by being much more fashionable, intuitive and user-friendly. So maybe what we need is a “WhatsAppification” of Translation Memory systems in order to make them more attractive (not to say less ugly, to quote Josh Goldsmith) to interpreters?

Making the connection between glossaries and documents

Clearly in the world of glossary or terminology management for simultaneous interpreting, of the nine interpreter-specific solutions I am aware of and had the honour to present in a workshop (thanks to everyone for showing up at 9 am!), InterpretBank and InterpretersHelp are the most forward-moving, ambitious and innovative ones. InterpretersHelp has just released a term extraction feature (to be tested soon) similar to that of Intragloss, i.e. you can add terms from parallel reference texts to your glossary easily. InterpretBank has even integrated a real term extraction feature similar to that of SketchEngine (also to be tested soon). If interpreters cannot be bothered to use translation memories after all, maybe that’s the way forward.

Automatic speech recognition reducing interpreters’ listening effort

Claudio Fantinuoli from Germersheim presented InterpretBank’s latest beta function: it uses speech recognition to provide a live transcription of the speech, extracts numbers, names and technical terms and displays them – the latter together with their target-language equivalents from the glossary. This impressive demo video gives a glimpse of what is technically feasible.

Admittedly, it was made in a controlled environment, with the speaker pronouncing clearly and in British English. But still, there is reason to hope for more!

There was a nice coincidence that struck me in this context: I recently conducted a case study (to be published in 2018) in which I analysed interpreters’ booth notes. In this study, numbers, acronyms (mostly names of organs or organisations) and difficult technical terms (mainly nouns) were the items most frequently written down – and this is exactly what InterpretBank automatically highlights on the transcription screen.

What I have always liked about InterpretBank, by the way, is the fact that there is always science behind it. This time Bianca Prandi, a doctoral student in Germersheim, presented the research she plans to conduct on the cognitive load of using computer-assisted interpreting (CAI) tools. I am really looking forward to hearing more about her work in the future.

The second speaker to show a speech recognition function to support interpreters was keynote speaker Prof. Alexander Waibel – not a conference interpreter, for a change, but Professor of Computer Science at Carnegie Mellon University, Pittsburgh, and at the Karlsruhe Institute of Technology, Germany (who even has his own Wikipedia entry). During his extremely interesting and entertaining speech about deep learning, neural machine translation and speech recognition, he also presented a live transcript function to support interpreters in the booth.

Paper and electronic devices all becoming one thing

I very much enjoyed talking and listening to the two most tablet-savvy conference interpreters I am aware of (doubling as my co-panelists), Alexander Drechsel and Josh Goldsmith. I find the idea of using a tablet for note-taking very enticing, even more so after having seen Josh’s demo. And I don’t agree that the only reason to replace paper with a tablet is to look better or “just to try it out”. Alex and Josh could name many advantages (consulting a dictionary or glossary in parallel, adjusting the pen colour, not having to turn the pages of your notepad after every two sentences). The most obvious one to me is that you don’t have to be afraid of running out of paper. And luckily, Josh’s study now tells us which devices are best suited to interpreters’ needs:

When we discussed the use of computers among interpreters and interpreting students in the panel, it was interesting to hear about the different experiences. Everyone seemed to agree that young interpreters or interpreting students, despite being “digital natives” and supposedly computer-savvy (which most panelists agreed is a myth), cannot necessarily be expected to manage their information and knowledge professionally. Common practice, on the other hand, ranged from paper to laptops, tablets or even relying completely on smartphones for information management, as our wonderful panel moderator Danielle D’Hayer reported her students did. She seemed to me the perfect example of not “teaching” the use of technologies, but simply using them right from the beginning of the course.

Remote everything: cloud-based online collaboration and distance interpreting

Although in the panel discussion not everyone seemed to share my experience, I think that team glossaries, nowadays more often than not created in Google Sheets, are about to become common practice in conference preparation. Apart from being great fun, it saves time, boosts team spirit and improves the knowledge base of everyone involved. Not to mention the fact that it is device- and operating-system-neutral. There is, however, a confidentiality problem when handling sensitive customer data, but this could be solved by using encrypted solutions.

Now that we are all able to collaborate, prepare and get to know each other online, we seem to be perfectly prepared to work in simultaneous interpreting teams remotely, i.e. from different places. Luckily, the two most knowledgeable colleagues in remote interpreting I know of, Klaus Ziegler (AIIC freelance interpreter and chair of the AIIC Technical Committee) and Barry Olsen, were at the conference, too. There is so much to be said about this subject that it would fill several blog posts. My most important lessons were: Remote interpreting technologies don’t necessarily imply lower rates in interpreting. The sound quality of videoconferences via normal (private) phone lines is usually not sufficient for simultaneous interpreting. The use of videoconference interpreting seems to be much more widespread in the U.S. than it is in Europe. And it is a good idea for conference interpreters’ associations like AIIC to play an active role (as Klaus Ziegler is thankfully doing) in the development of technologies and standards.

Simultaneous and consecutive interpreting merging into simconsec

The last thing to be noted as converging thanks to modern technology is simultaneous and consecutive interpreting, i.e. using tablets and smartpens to record the original speech and replay it while rendering the consecutive interpretation. Unfortunately, there was no time to talk about this in detail, but here is a one-minute demo video to whet your appetite.

And last but not least: Thank you very much to Barry Olsen for the lovely live interview we had (not to be missed: the funny water moment)!

And of course: Spread the word about next year’s 40th anniversary of Translating and the Computer!

Impressions from Translating and the Computer 38

The 38th ‘Translating and the Computer’ conference in London has just finished, and, as always, I take home a lot of inspiration. Here are my personal highlights:

  • Sketch Engine, a language corpus management and query system, offers loads of useful functions for conference preparation, like web-based (actually Bing-based) corpus-building, term extraction (the extraction results come with links to the corresponding texts, and the lists are exportable to common, reusable formats) and thesaurus-building. The one thing I liked most was the fact that if, for example, your clients have their websites in several languages, you can enter the URLs of the different language versions and SketchEngine will download them, so that you can then use the texts as a corpus. You might hear more about SketchEngine from me soon …
  • XTM, a translation memory system, offers parallel text alignment (like many others do) with the option of exporting the aligned texts into XLS. This finally makes them reusable for the many interpreting colleagues who, for obvious reasons, do not have a translation memory system. And the best thing is, you can even re-export an amended version of this file back into the translation memory system for your translator colleagues to use. So if you interpret a meeting where a written agreement is being discussed in several language versions, you can provide the translators first-hand with the amendments made in the meeting.
  • SDL Trados now offers an API and has an App Store. New hope for an interpreter-friendly user interface!
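Sketch Engine’s web-based corpus building is a hosted feature, but the underlying idea — fetch the language versions of a client’s website, strip the markup, and count candidate terms — can be sketched in a few lines of standard-library Python. Everything below (the function names, the toy pages, the stopword list) is my own illustration, not Sketch Engine’s actual API:

```python
import re
from collections import Counter
from html.parser import HTMLParser


class _TextExtractor(HTMLParser):
    """Collects the visible text of an HTML page, skipping script/style."""

    def __init__(self):
        super().__init__()
        self._skip = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data)


def html_to_text(html):
    """Return the plain text of one downloaded page."""
    parser = _TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)


def top_terms(pages, n=10, stopwords=frozenset()):
    """Naive frequency-based term extraction over a list of HTML pages."""
    words = []
    for page in pages:
        words += re.findall(r"[a-zäöüß]+", html_to_text(page).lower())
    counts = Counter(w for w in words if w not in stopwords)
    return counts.most_common(n)


# Tiny stand-ins for two downloaded pages of a client's website:
pages = [
    "<html><body><p>Conference interpreting needs preparation.</p></body></html>",
    "<html><body><p>Preparation builds terminology for the conference.</p></body></html>",
]
print(top_terms(pages, n=3, stopwords={"the", "for"}))
# → [('conference', 2), ('preparation', 2), ('interpreting', 1)]
```

In practice you would of course fetch the real pages (e.g. with urllib), tokenise properly per language, and compare frequencies against a reference corpus rather than using raw counts — which is roughly what dedicated corpus tools do at scale.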

All in all, my theory that you just have to wait long enough for language technology companies to develop something that suits conference interpreters’ needs seems to be coming true. Scientists and software providers alike were keen to stress that they really want to work with translators and interpreters in order to find out what they really need. The difficulty with conference interpreters seems to be that we are a very heterogeneous community with very different needs and preferences.

And then I had the honour to run a workshop on interpreters’ workflows and fees in the digital era (for some background information, you may refer to The future of Interpreting & Translating – Professional Precariat or Digital Elite?). The idea was to go beyond the usual “digitalisation spoils prices and hampers continuous working relations” and instead find ways to use digitalisation to our benefit and to boost good working relationships, quality and profitability. I was very happy to get valuable input from practitioners as well as from several organisations’ language services and from scientists. What I took away were two main ideas: interface-building and quality rating.

Interface-building: By cooperating with the translation or documentation departments of companies and organisations, quality and efficiency could be improved on both sides (translators providing extremely valuable and well-structured input for conference preparation, and interpreters reporting back “from the field”). Which brings me back to the aforementioned positive outlook on the software side.

Quality rating: I noticed a contradiction which has never been so clear to me before. While we interpreters insist that clients should value the high level of service we provide and pay well for quality, quality rating and evaluation are still subjects that are largely avoided and that many of us feel uncomfortable with. On the other hand, some kind of quality rating is something clients are sometimes forced to rely on in order to justify paying for that (supposedly) expensive interpreter. I have no perfect solution for this, but I think it is worth some further thinking.

In general, there was a certain agreement that formalising interpreters’ preparation work has its limitations: it is always about filling the very personal knowledge gaps of an individual for a very particular conference setting. Still, technologies can be used to improve quality and keep up with the rapidly growing knowledge landscape around us.


About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.