Hard consoles – a quick guide to old-normal relay interpreting

This blog post is intended for all those students who started their studies of conference interpreting right after the outbreak of Covid-19. More than one year into the pandemic, many of them haven’t entered a physical booth or put their hands on a hard console yet.

In order not to leave them completely unprepared, I have assembled some very basic guidance. There are many different interpreting consoles out there; the ones you see here are just three random examples to give you an idea. Many thanks to Magdalena Lindner-Juhnke and Inés de Chavarría for helping with the pictures, and to Tefik Cevikel from Schneider Konferenz Systeme for letting me take a video in their hub in Düsseldorf.

And here is a short video on how to find your relay:



About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C), and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Will 3D audio make remote simultaneous interpreting a pleasure?

Now THAT’S what I want: Virtual Reality for my ears!

Apparently, simultaneous interpreters are not the only ones suffering from “Zoom fatigue”: the strain of being unable to physically locate a speaker using our ears is one of the things said to contribute to this condition. But it looks like there is still reason to hope that, acoustically, video conferences will become more of a pleasure and less of an ordeal in the years to come. At least that’s what current discussions around this topic make me hope … This (four-year-old!) video by BBC Click illustrates quite nicely what 3D sound for headphones is about:


Binaural recording is a method of recording sound with the intent of creating, for the listener, the immersive 3D sensation of actually being in the room with the performers. A dummy head, aka “Kunstkopf”, is used for recording; it has two dummy ears in the shape of human ears with microphones in them, so as to simulate the perception of the human ear. Sound recorded this way is intended for replay over headphones. And the good thing is: what was originally made for the gaming and movie industries is now also bound to conquer the video conferencing market.
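Just to illustrate the principle (this is my own toy sketch, not anything the BBC video or a real binaural renderer uses – production systems rely on measured head-related transfer functions), the two main cues a dummy head captures, the interaural time and level differences, can be approximated in a few lines of Python:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, at room temperature
HEAD_RADIUS = 0.0875     # m, a typical human head radius

def binaural_cues(azimuth_deg):
    """Return (itd, gain_left, gain_right) for a source at the given
    azimuth: 0 = straight ahead, +90 = hard right, -90 = hard left.

    itd is the interaural time difference in seconds (positive means
    the sound reaches the right ear first); the two gains are a crude
    stand-in for the interaural level difference.
    """
    az = math.radians(azimuth_deg)
    # Woodworth's spherical-head approximation of the ITD
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(az) + az)
    # Constant-power panning approximates the level difference
    pan = (azimuth_deg + 90) / 180        # 0 = full left, 1 = full right
    gain_left = math.cos(pan * math.pi / 2)
    gain_right = math.sin(pan * math.pi / 2)
    return itd, gain_left, gain_right
```

For a source at +90 degrees this gives an ITD of roughly 0.65 ms and an almost silent left channel; at 0 degrees the delay vanishes and both ears receive equally strong signals – which is exactly why a sound “feels” like it comes from a particular direction.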

Mathias Johannson, the CEO of Dirac, is rather confident: “In less than a decade, 3D audio over headsets with head-tracking capabilities will allow us to have remote meetings in which you can move about an actual room, having sidebar discussions with one colleague or another as you huddle close or step away.” What is more, Johannson reckons that spatial audio could be made available to videoconference users early in 2021. Dirac wants to offer 3D audio technology to video chat platforms as an off-the-shelf software solution, so no expensive hardware would be required. Google, on the other hand, already advertises its Google Meet hardware for immersive sound. But from what we have learned six months into Covid-19, when it proved difficult even to persuade participants to wear a cabled headset (not to mention using an ethernet cable instead of a wifi connection), I am personally not too optimistic that expensive hardware is the way forward to high-quality remote simultaneous interpretation.

So, will such a software-based solution possibly not only provide a more immersive meeting experience but also deliver decent sound even without remote participants connected from their home offices having to use special equipment, i.e. headsets? I asked Prof. Jochen Steffens, who is a sound and video engineer, for his opinion. The answer was rather sobering as far as equipment goes: 3D audio requires a dry and clean recording, which at the moment is not possible using built-in microphones. Equipment made for binaural recording, however, would not really serve the purpose of simultaneous interpreting either, as the room sound would actually be more of a disturbance for interpreting. Binaural recording is rather meant for capturing real three-dimensional sound impressions, in concert halls and the like. For video conferencing, rather than headsets, Steffens recommends using unidirectional microphones; he suggests, for example, an inexpensive cardioid large-diaphragm microphone mounted on a table stand. And the good news: if you are too vain to wear a headset in video conferences, then, with decent sound input being delivered by a good microphone, any odd wireless in-ear earphones can be used for listening, or even your laptop’s built-in speakers, as long as you turn them off while speaking.

But what about the spatial, immersive experience? And how will participants be distributed spatially if, in fact, there is no real room to match the virtual distribution to? As Prof. Steffens explained to me, once you have good quality sound input, people can indeed be mapped into a virtual space, e.g. around a table, rather easily. The next question would be whether, in contrast to the conference participants, we as interpreters would really appreciate such a being-in-the-room experience. While this immersion could indeed allow for more situational awareness, we might prefer to always be acoustically positioned right in front of the person who is speaking instead of having a “round table” experience. After all, speakers are best understood when they are placed in front of you and both ears get an equally strong input (the so-called cocktail party effect of selectively hearing one voice among many works best with binaural input). And this would, by the way, nicely match a close front view of the person speaking.
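As a rough sketch of what such a mapping might look like (hypothetical function names of my own invention, not any real conferencing API): participants get azimuths around a virtual table, and an “interpreter mode” rotates the whole scene so that the active speaker always ends up at 0 degrees, right in front of the listener:

```python
def seat_azimuths(n):
    """Spread n participants evenly across the frontal arc of a
    virtual round table, from -90 deg (far left) to +90 deg (far
    right). 0 deg is straight ahead of the listener."""
    if n == 1:
        return [0.0]
    step = 180 / (n - 1)
    return [-90 + i * step for i in range(n)]

def interpreter_view(azimuths, active_idx):
    """Rotate the whole scene so that the active speaker ends up at
    0 deg, right in front of the interpreter; everyone else keeps
    their relative position. Angles are normalised to [-180, 180)."""
    shift = azimuths[active_idx]
    return [((az - shift + 180) % 360) - 180 for az in azimuths]
```

With five participants spread across the frontal arc, activating the middle speaker leaves the scene unchanged, while activating the leftmost one rotates everybody to the right so that the person speaking sits dead ahead.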

And then, if video conferencing can ever offer us a useful immersive experience, couldn’t it even end up being more convenient than a “normal” on-site simultaneous interpreting setting? More often than not, we are rather isolated in our booths, with no more than a poor view of the meeting room from a faraway, high-above or hidden-next-door position. So much so that I am starting to wonder if 3D audio (and video, for that matter) could also be used in on-site conference rooms. According to Prof. Steffens, this would be perfectly feasible by “simply” using sound engineering software.

But then the next question arises: while simultaneous interpreters used to be “the voice in your ear”, they might now be assigned a position in the meeting space … the voice from above, from behind (as in chuchotage), or our voices could even come from wherever the speaker currently being interpreted is sitting. For this to happen, though, the speaker’s voice would have to be muted completely, which might not be what we want, since two voices coming from the same position would be hard for the brain to process. So the interpreter’s voice would need to find its own “place in space” after all – suggestions are welcome!

About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.






You can never have too many screens, can you?

I don’t know about you, but when using my laptop in the booth, I sometimes struggle to squeeze the agenda, list of participants, glossary, dictionary, web browser and meeting documents/presentations onto one screen. Not to mention email, messenger or a shared notepad when working in separate booths in times of COVID-19 … or even the soft console of your RSI provider?

Well, I have more than once found myself wondering if anybody would mind me bringing my 24-inch desktop monitor to the booth to add some screen space to this tiny 12-inch laptop screen – until, finally, I came across a very useful little freeware application called spacedesk. It lets you use your Android, iOS or Windows tablet quite easily as an external monitor for your Windows computer (to all tablet aficionados: unfortunately, it does not work the other way around). You simply install it both on your main Windows device, the “server” or “Primary Machine”, and on the tablet as a “client” or “secondary device”. You can then connect the two via USB, ethernet or WiFi and use your tablet to either extend or duplicate your computer screen, just like you would with any external monitor on your desk.

There is just a tiny delay when moving the mouse (if that’s not due to my low-end tablet’s poor performance), so it might be better to move the more static elements, like the agenda, to the tablet, rather than your terminology database, which you may want to handle very swiftly.

So if ever you feel like going back to printing your documents for lack of screen space, bringing your tablet as a screen extension might be a good alternative.

About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.


Simultaneous interpreting in the time of coronavirus – Boothmates behind glass walls

Yesterday was one of the rare occasions on which conference interpreters were still brought to the client’s premises for a multilingual meeting. Participants from abroad were connected via a web meeting platform, while the few people who were on-site anyway sat at tables 2 m apart from each other. But what about the interpreters, who usually share a booth of hardly 2 x 2 m, and who are not exactly known for their habit of social distancing in the first place? Well, PCS, the client’s conference technology provider of choice, came up with a simple yet effective solution: they just split up the teams and gave us one booth each. So there we were, my colleague Inés de Chavarría and I, spreading our stuff in our private booths, separated by no more than a window.

Separate booths

Now, apart from having to bring our own food (no catering available), by the time we met on the morning of the meeting, we had already figured out what would probably be the main challenges of being boothmates separated by a glass wall:

1 How do we agree on when to take turns?

2 How do we help each other by writing down numbers, names and difficult words?

3 How do we tell each other that we want coffee/are completely knackered/need to go to the loo, complain about the sound/accent/temperature/chairman’s haircut or ask how the kids are?

Luckily, after an exciting day, we felt that we had found great solutions to all our communicative needs:

1 Taking over: Although the colleague who was not working couldn’t listen to the original and the interpretation at the same time, she could tell quite reliably from gestures and eye contact when to take over. So, no countdown or egg timer needed as long as you can see each other.

2 Helping out – These were the options we tried:

Writing things down with pen and paper and showing them through the window: rather slow and hard to read due to reflections from the booth windows. The same goes for typing on the computer and looking at the screen through the window.

Scribbling in a shared file in Microsoft Whiteboard (great), OneNote (ok) or Google Drawings (a bit slow and imprecise): fine as long as all parties involved have a touchscreen and a decent pen. Sometimes hard to read, depending on the quality of the pen/screen and the handwriting.

Typing in a shared file like Google Sheets or Docs: this was our method of choice. The things we typed appeared on the other’s screen in real time, and it was perfectly legible, in contrast to some people’s handwriting. A perfect solution as long as there is a decent wifi or mobile data connection. And although I am usually of the opinion that nothing beats a decent spreadsheet, in this case a plain word-processing document has one clear advantage: when you type in Google Docs, each character appears on your colleague’s screen practically in real time, whereas when you type in a cell of a Google Sheet, your colleague won’t see anything until you “leave” that cell and jump to the next one.

3 The usual chitchat:

WhatsApp, or rather WhatsApp Web, was the first thing we spontaneously resorted to for staying in contact through the glass wall between us. But it quickly turned out to be rather distracting, with all sorts of private messages popping up.

Luckily, all Google documents come with a chat function included, so we had both our meeting-related information exchange and our personal logistics neatly displayed next to each other in the same browser window.

If we had worked with many different documents that needed to be managed while interpreting, I would have liked to try Microsoft Teams. With its chat function and shared documents, among other features, it seems very promising as a shared booth platform. But their registration service was down due to overload anyway, so that’s for next time.

So, all in all, a very special experience, and rather encouraging thanks to the many positive contributions from all people involved. And the bottom line, after having to accommodate on my laptop screen the booth chat and notes next to the usual glossary, online resources, agenda and meeting documents: My next panic purchase will be a portable touchscreen in order to double my screen space in the booth.

About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.



Remote Simultaneous Interpreting … is it really necessary? And does it even work?!

In a workshop jointly organised by AIIC and VKD and led by Klaus Ziegler, we had the opportunity in mid-May in Hamburg to explore these questions in depth. In a coworking space, a group of organising interpreters spent two days learning and discussing, and trying out remote simultaneous interpreting in an interpreting hub using the cloud-based simultaneous interpreting software Kudo.

To set the scene, we first reminded ourselves what exactly we are talking about, because ISO standard 20108 makes the following terminological distinction:

Distance interpreting (interpreting of a speaker in a different location from that of the interpreter, enabled by information and communications technology): this umbrella term covers all scenarios in which one of the communication participants is not in the same room as the others. It may simply mean that one meeting participant joins via video conference, or that you sit at the client’s premises interpreting a phone call.

Remote interpreting, which is what this post is about, is a subtype of distance interpreting. It means that the interpreters are remote, be it in the room next door or in a completely different geographical location. On top of that, the participants themselves may also be in different locations (full remote).

Tatiana Kaplun covered many of the technical questions around remote interpreting, such as the requirements for transmission quality, in her great November 2018 blog post “A fresh look at remote simultaneous interpreting” about the RSI seminar held by AIIC Netherlands. Well worth reading.

What preoccupied me and many other participants at the end of the first day, though, quite apart from technical feasibility (how good are the RSI software solutions? Where can we find the premium hub of our dreams?), was the question of which remote simultaneous solution is best for which customer in which situation. We had discussed countless critical aspects speaking either for remote interpreting from easily accessible interpreting hubs or for cloud-based RSI software on the interpreter’s own computer. And since, in my view, nothing beats a crisp decision matrix, I have tried to summarise in a table the advantages each solution offers:


Interpreting hub (interpreting booths permanently installed in a central location, with RSI software and a stable, secure internet connection) versus cloud-based RSI software (simultaneous interpreting via cloud-based software, using the interpreter’s own computer, headphones, microphone and internet connection):

Confidentiality
Hub: Where there are concerns about industrial espionage, or corresponding IT security standards apply, public cloud-based solutions are problematic for many companies; a hub with a secure connection and its own or an external protected server infrastructure makes sense.
Cloud: If no confidential information is exchanged, or if the participants work with open systems and uncontrolled access points anyway (private computers, home offices), a secure connection is less critical.

Reliability
Hub: For meetings where a technical failure would cause high costs or serious problems; hubs can maintain two or three internet connections and thus practically rule out a connection failure.
Cloud: For meetings where a technical failure would not cause major costs or problems, and where the participants themselves may be working with less reliable systems; hardly any interpreter keeps a redundant second internet connection in their office.

ISO-like transmission quality and working environment
Hub: Flawless transmission quality (frequency band, latency, lip synchronicity) is necessary where a smooth, “unnoticed” interpretation is required.
Cloud: Suboptimal transmission quality may lead to more frequent queries, lost content and additional breaks; acceptable mainly for assignments of short duration or low intensity.

Time
Hub: Saves time only if interpreters are available near the hub; advantage: the hub is always ready for use.
Cloud: Practical for short-notice assignments, especially if there is a pool of interpreters who can stand in for each other. Caution: technical checks, set-up and waiting times may still have to be factored in.

Cost
Hub: Advantageous if the interpreters live near the hub (or at least closer to the hub than to the client).
Cloud: Practical if the interpreters are based neither near a hub nor near the client.

Logistics
Hub: Practical if the meeting participants are at company sites offering hub-like conditions (transmission technology in place).
Cloud: Practical if the participants are scattered anywhere across the globe and/or any number of listeners are to listen in on their own devices (BYOD).

Teamwork
Hub: Teamwork in the booth is possible just as in on-site meetings.
Cloud: Microphone handover, mutual support (noting down numbers and names, terminology research) and coordination are more difficult; eye contact has to be simulated by software features.

New perspectives

Comparing the different aspects, it becomes clear that interpreting from a hub tends to reproduce, or is at least able to reproduce, the conditions of conventional conference interpreting, although this is by no means always the case. A purely virtual event held entirely without a conference room could quite sensibly be interpreted from a hub, and interpreters (like participants) could join a “normal” on-site event cloud-based from anywhere in the world. What does become clear is that events which currently are not interpreted simultaneously at all now become more likely candidates thanks to remote technology, such as webinars or informal meetings held between the larger sessions of international bodies.

The practical test

The most interesting and entertaining part was, of course, the practical test. All participants could slip into the roles of speaker, interpreter and listener and get to know all perspectives.

Remote simultaneous interpreting in the hub

The booth feeling was, at first, completely normal and familiar, and we all quickly got used to the user interface. Operating the on-screen “buttons” for microphone, cough switch and volume does take a bit of practice, though. Not that anyone is incapable of operating a slider with a mouse, but most of us felt, perhaps simply out of habit, that blindly pressing tactile, physical buttons, without having to take our eyes off the meeting or the speaker, demands less of our attention. It was also interesting to try out different headphones, microphones and operating systems, which led to sometimes considerable differences in the transmitted volume.

As we were sitting right next to each other in the booths of the hub, we did not try out how to hand over the microphone without eye contact. Kudo has no handover button, unlike some other RSI programs, which offer quite clever solutions for this.

Kudo’s listener interface (Android)

Listening on our own smartphones worked quite well, apart from the volume problems mentioned above. The as yet unanswered question was rather: what happens when 50, 100 or 1000 event participants bring their own devices and, after half a day of listening, all need to charge their batteries at the same time? First world problems, you might say, but that sums up quite well the impression this exciting and intense RSI seminar left me with: great opportunities and fascinating technology, but also the occasional critical technical detail that (still) trips up the enthusiasm.

About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Paperless Preparation at International Organisations – an Interview with Maha El-Metwally

Maha El-Metwally has recently written a master’s thesis at the University of Geneva on preparation for conferences of international organisations using tablets. She is a freelance conference interpreter for Arabic A, English B, French and Dutch C domiciled in Birmingham.

How come you know so much about the current preparation practice of conference interpreters at so many international institutions?

The answer is quite simple really: I freelance for all of them! I am also pro paperless environments, for obvious environmental and practical reasons. So even if some organisations offer a paper alternative (ILO, IMO, UNHQ, WFP), I go for the electronic version. Paperless portals of international organisations may differ in layout and in how the information is organised, but they essentially aim to achieve the same thing. Some organisations operate a dual document distribution system (paper and digital) with the aim of phasing out the former over time.

The European Parliament is already on its second paperless meeting and document portal: it used to be called Pericles, and now it is called MINA, the Meeting Information and Notes Application. The switch required a bit of practice to become familiar with the new features.

I recently heard someone working in one of these paperless environments complain about the paperless approach, saying that they often struggle to find their way through a 400-page document quickly. My first reaction was to say that hitting CTRL-F or CTRL-G is an efficient way to get to a certain part of a text quickly. But maybe there is more to it than just shortcuts. What is the reason, in your experience, that makes it difficult for colleagues to find their way around on a tablet or laptop computer?

I think that tablets represent a change, and people in general resist change. It could be that we are creatures of habit. We are used to a certain way of doing things, and some of us may have difficulty coping with all the changes coming our way in terms of technological developments. Take online payments as an example: they make life a lot easier, yet there are still people who resist them. And regarding the application of technology in the work of interpreters, some do not see the point of it, so they are not motivated to change something that works for them.

How is the acceptance of going paperless in general in the institutions you work for?

This depends on individual preferences. Many colleagues still prefer paper documents but I also see more and more tablets appearing in the booths. Some organisations try to accommodate both preferences. The ILO operates a dual distribution system as a step towards going completely paperless. Meeting documents are available on the organisation’s portal but are also printed and distributed to the booths. The same goes for the IMO where the interpreters are given the choice of paper or electronic versions of the documents or both.

Right, that’s what they do at SCIC, too. I take it that you wrote your master’s thesis about paperless preparation, is that right? Was the motivational aspect part of it? Or, speaking of motivation: what was your motivation to choose this subject in the first place?

Yes, this is correct. I am very much of a technophile and anything technological interests me. I was inspired by a paperless preparation workshop I attended at the European Parliament. It made sense to me as a lot of the time, I have to prepare on the go. It happens that I start the week with one meeting then end the week with another. Carrying wads of paper around is not practical. Having all meeting documents electronically in one place is handy. It happens a lot that I receive meeting documents last minute. There is no time to print them. So I learned to read and annotate the documents on apps on my tablet.

So while you personally basically did “learning by doing”, your researcher self tried to shed some more scientific light on the subject. Is that right? Would you like to describe a bit more in detail what your thesis was about and what you found was the most interesting outcome?

My thesis looked at training conference interpreting students to prepare for conferences of international organisations with the use of tablets. I noticed from my own experience and from anecdotes of older colleagues that meetings were getting more and more compressed. As a result, especially in peak seasons, interpreters may start the week with one conference and end it with another. Preparation on the go became a necessity. In addition, several international organisations are moving towards paperless environments. Therefore, I think it is important for students to be introduced to paperless preparation at an early stage in their training, so that it becomes second nature to them by the time they graduate. And what better tool to do that with than the tablet? I created a course to introduce students to exactly that.

So when you looked at the question, was your conclusion that tablets are better suited than laptop computers? Currently, it seems to me that on the private market almost everyone uses laptops and at the EU, most people use tablets. I personally prefer a tablet for consecutive, but a laptop in the booth, as I can look at my term database, the internet and room documents at the same time more conveniently. I also blind-type much faster on a “real” keyboard. I hope that the two devices will sooner or later merge into one (i.e. tablets with decent hard drives, processors and operating systems).

Now, from your experience, which of the two options would you recommend to whom? Or would you say it should always be tablets?

I prefer the tablet when travelling as:
– it is quieter in the booth (no tapping or fan noise),
– using an app like side by side, I can split the screen to display up to 4 apps/files/websites at the same time so the laptop has no advantage over the tablet here,
– it is lighter.

You have created a course for students. What is it you think students need to be taught? Don’t they come to the university well-prepared when it comes to handling computers or tablets?

The current generation of students is tech-savvy, so they are more likely to embrace tablets and go fully digital. The course I put together for teaching preparation with tablets relies on the fact that students already know how to use them. It introduces the students to the paperless environments of a number of international organisations and looks at apps for annotating different types of documents, glossary management and more efficient Google searching, among other things.

I also like to use the touchscreen of my laptop for typing when I want to avoid noise. But compared to blind-typing on a “normal” keyboard, I find typing on a touchscreen a real pain. My impression is that when I cannot feel the keys under my fingers, I will never be able to learn how to type, especially blind-type, REALLY quickly and intuitively … Do you know of any way (an app, a technique) of improving typing skills on touchscreens?

I’m afraid I don’t really have an answer to that question. I am moving more and more towards dictating my messages instead of typing them and I am often flabbergasted at how good the output is, even in Arabic.

Talking about Arabic, is there any difference when working with different programs in Arabic?

Most of the time, I can easily use Arabic in different apps. The biggest exception is Microsoft Office on Mac. Arabic goes berserk there! I have to resort to Pages or TextEdit then. Having said that, a colleague mentioned just yesterday that this issue has been dealt with, but I have yet to explore it.

As to glossary management, not all terminology management tools for interpreters run on tablets. Which one(s) do you recommend to your students or to colleagues?

I use and recommend Interplex. It has a very good iPad version. The feature I like most about it is that you can search across your glossaries. I can do that while working and it can be a life saver sometimes!

If I wanted to participate in your seminar, where could I do that? Do you also do webinars?

I offer a number of seminars on technology for interpreters to conference interpreting students at some UK universities. I will keep you posted. I also have an upcoming eCPD webinar on September 19th on a hybrid mode of interpreting that combines the consecutive and simultaneous modes.

That sounds like a great subject to talk about next time!










Can computers outperform human interpreters?

Unlike many people in the translation industry, I like to imagine that one day computers will be able to interpret simultaneously between two languages just as well as or better than human interpreters do, what with artificial neurons and neural networks’ pattern-based learning. After all, once hardware capacity allows for it, an artificial neural network will be able to hear and process many more instances of spoken language, and of the underlying content, than my tiny brain will in all its lifetime. So it may recognise and understand the weirdest accents and the most complicated subject matter simply because of the sheer amount of information it has processed before and the vast ontologies it can rely on. (And by that time, we will most probably not only be able to use digital interpreters, but also digital speakers.)

The more relevant question by then might rather be if, or when, people will even want digital interpretation (or digital speakers in the first place). How would I feel about being replaced by a machine interpreter, people often ask me over a cup of coffee during the break. Actually, the more I think about it, the more I realise that in some cases I would be happy to be replaced by a machine. And it is good old Friedemann Schulz von Thun whom I find particularly helpful when it comes to explaining when exactly machine interpreters might outperform (out-communicate, so to speak) us humans (or machine speakers outperform human ones).

As Friedemann Schulz von Thun already put it back in 1981 in his four sides model (https://en.wikipedia.org/wiki/Four-sides_model), communication happens on four levels:

The matter layer contains the factual information conveyed: the data and facts the message is about.

On the self-revealing or self-disclosure layer, the speaker, whether consciously or not, reveals something about himself: his motives, values, emotions etc.

The relationship layer expresses how the sender relates to the receiver and what he thinks of him, and how the receiver perceives this.

The appeal layer contains the desire, advice, instruction or effect that the speaker is seeking.

We both listen and speak on those four layers, be it on purpose or inadvertently. But what does that mean for interpretation?
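For readers who like things structured, the four layers can be sketched as a tiny data structure, here in Python (the class and field names are my own, not part of the model; the sentence is the traffic-light example often used to illustrate it):

```python
from dataclasses import dataclass

# Hypothetical sketch: in the four-sides model, one and the same
# utterance carries information on all four layers at once.
@dataclass
class Utterance:
    matter: str           # factual information: data and facts
    self_revelation: str  # what the speaker reveals about himself
    relationship: str     # what the speaker thinks of the listener
    appeal: str           # what the speaker wants to achieve

msg = Utterance(
    matter="The traffic light is green.",
    self_revelation="I am in a hurry.",
    relationship="You need my help to drive.",
    appeal="Please drive on!",
)
```

The point of the sketch: a machine that only decodes the `matter` field still misses three quarters of the message.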

In terms of technical subject matter, machine interpretation may well be superior to humans, whose knowledge base, despite their best efforts, will always cover only a relatively small part of the world's knowledge. Some highly technical conferences consist of long series of one-way speeches given just for the sake of it, at breakneck pace and with no personal interaction whatsoever. When the original offers few "personal" elements of communication (i.e. layers 2 to 4) in the first place, rendering a vivid and communicative interpretation into the target language can be beyond what human interpretation is able to provide. In these cases, publishing the manuscript or a video might serve the purpose just as well, even more so in the future with increasing acceptance of remote communication. And if a purely "mechanical" translation is all that is actually needed and no human element is required, machine interpreting might do the job just as well or even better. The same goes, for example, for discussions of logistics ("At what time are you arriving at the airport?") or other practical arrangements.

But what about the three other, more personal and emotional layers? What about meetings where speakers reveal something about themselves, where listeners want to find out about the other person's motives, emotions and values or about what one thinks of the other, and where it is crucial to read the message between the lines, the gestures and the facial expressions? Where the point of the meeting is to build trust and understanding and, consequently, create a relationship? Face-to-face meetings are held instead of phone calls or video conferences precisely to facilitate personal connections and a collective experience to build upon in future cooperation (which then may work perfectly well via remote communication on more practical or factual subjects). There are also meetings where the most important function is the appeal: the intention of sales or incentive events generally is to have a positive effect on the audience, to motivate or inspire them.

Would these three layers of communication, which very much involve the human nature of both speakers and listeners, work better with a human or a machine interpreter in between? Is a human interpreter better suited to read and convey personality and feelings, and will human interaction between persons work better with a human intermediary, i.e. a person? Customers might find a non-human interpreter more convenient, as the interpreter's personality does not interfere with the personal relation between speaker and listener (though it obviously does not provide any facilitation either). This "neutral" interpreting solution could be all the more charming if it didn't happen orally but translation was provided in writing, just like subtitles, allowing the voice of the original speaker to set the tone. However, when it comes to the "unspoken" messages, the added value of interpreters is in their name: they interpret what is being said and constantly ask the question "What does the speaker mean or want?" Mood, mockery, irony, references to the current situation or to the persons present etc. will most probably not be understood (or translated) by machines for a long time to come, if ever. But I would rather leave that question to philosophers or sci-fi people.

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Simultaneous interpreting with VR headset | Dolmetschen unter der Virtual-Reality-Brille | Interpretación simultanea con gafas VR

+++ for English see below +++ para español, aún más abajo +++
2016 wurde mir zum Jahresausklang eine Dolmetscherfahrung der ganz besonderen Art zuteil: Dolmetschen mit Virtual-Reality-Brille. Sebastiano Gigliobianco hat im Rahmen seiner Masterarbeit am SDI München drei Remote-Interpreting-Szenarien in Fallstudien durchgetestet und ich durfte in den Räumen von PCS in Düsseldorf als Probandin dabei sein. Mehr zu den Erkenntnissen seiner Untersuchungen dürfen wir hoffentlich im Laufe des Jahres von Sebastiano selbst erfahren. Hier deshalb nur meine persönlichen Impressionen:
  • Relativ offensichtlich, aber doch eine Herausforderung: Mit VR-Brille auf dem Kopf ist man in der realen Welt blind. Das ist für das Bedienen des Dolmetschpultes nicht ganz trivial, so dass ich mich während der ersten fünf Minuten panisch am Lautstärkeknopf festgekrallt habe, um diesen bloß nicht aus dem Griff zu verlieren. Erstaunlicherweise hatte ich aber schon nach kurzer Zeit die geographische Lage des Lautstärkereglers so verinnerlicht, dass ich auch freihändig dolmetschen konnte und meine Hand den Regler trotzdem bei Bedarf problemlos wiederfand. Mir stellt sich spontan die Frage, ob es dafür im Gehirn ein separat zuständiges Areal gibt (so wie den Extra-Magen für Nachtisch).
  • Ich hätte nicht gedacht, dass es unter der VR-Brille so nett ist. Bisher kannte ich aus der VR-Welt nur abgefahrene Spielszenarien wie Spukschlösser oder Unterwasserwelten. Nun befand ich mich aber in einer ganz normalen Arbeitsumgebung, einer Art Werksführung, und stand in Gestalt einer Kamera auf einem Stativ mitten zwischen den Rednern und konnte durch Kopfdrehung intuitiv genau dorthin blicken, wohin ich wollte. Dass ich dabei dem jeweiligen Redner immer ungeniert mitten ins Gesicht glotzen konnte, ohne diesen zu irritieren, war lustig und auch fürs Dolmetschen ziemlich nützlich. Jedenfalls musste ich zu keinem Zeitpunkt mit Blick auf das Genick des Redners dolmetschen.
  • 360-Grad-Drehungen auf dem Bürostuhl in der herkömmlichen Dolmetschkabine sind etwas beschwerlich.
  • Natürlich kann man nicht ewig mit so einem Klotz auf dem Kopf arbeiten, frisurenschädlich ist er obendrein. Aber als Motorola 1983 das erste (800 g schwere!) Handy auf den Markt brachte (1987 zu sehen in "Wall Street" an der Seite von Michael Douglas), hatte auch noch niemand unsere heutigen schnuckeligen Smartphones im Sinn.

Über die Autorin:
Anja Rütten ist freiberufliche Konferenzdolmetscherin für Deutsch (A), Spanisch (B), Englisch (C) und Französisch (C) in Düsseldorf. Sie widmet sich seit Mitte der 1990er dem Wissensmanagement.

+++ EN +++ EN +++ EN +++ EN +++

In 2016, I had a real end-of-year special: interpreting with a virtual reality headset on (or rather: around) my head. Sebastiano Gigliobianco, student at the SDI in Munich, wanted to test three remote interpreting scenarios for his master’s thesis – and I was one of the lucky test persons who were invited to the premises of PCS in Düsseldorf. We will hopefully hear more about Sebastiano’s findings in the course of the year. For the moment, I would like to share with you my personal impressions:
  • Obviously, with VR goggles on, you are blind to the real world – a fact not to be neglected when it comes to handling the knobs and buttons of the interpreter console in the booth. So for the first five minutes, I did not dare to let go of the volume control. But after some time, to my own surprise, I managed to locate the knob quite easily, so that I could speak and use both my hands freely and still control the volume intuitively. This makes me wonder whether there is a separate brain area for the geographic localisation of things that does not interfere with interpreting. (Just like that extra stomach the dessert seems to go to.)
  • I hadn’t expected to be that much at ease wearing a VR headset. So far, my VR experience had been all haunted castles and underwater scenery. But now I found myself in the middle of a very normal working environment, i.e. a kind of factory tour, standing (in the form of a camera mounted on a tripod) right between the participants. By simply turning my head, the camera would swivel around and let me look wherever I wanted. Basically, I could stare right into the speaker’s face without causing irritation, which was both fun and very useful. I really liked the fact that at no time did I have to interpret looking at the speaker’s back.
  • 180 degree rotations are not the most comfortable thing to do sitting on an office chair in a traditional interpreting booth.
  • You obviously cannot spend a whole day interpreting with this clumsy apparatus mounted on your head (not to mention what it does to your hair). But then, when Motorola launched the first (800 g!) mobile phone back in 1983 (starring in “Wall Street” alongside Michael Douglas in 1987), none of us had in mind those neat little smartphones we have nowadays.

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

+++ ES +++ ES +++ ES +++ ES +++

Para finalizar el año 2016, tuve una experiencia muy especial: interpretar en un entorno de realidad virtual, o sea con unas gafas VR puestas. Sebastiano Gigliobianco, estudiante del SDI en Múnich, estudió tres escenarios de interpretación remota en el marco de su tesis de máster, y a mí me tocó participar como voluntaria en las instalaciones de PCS en Düsseldorf. Espero que muy pronto podamos aprender más sobre los hallazgos de Sebastiano, así que, por el momento, me limito a compartir mis impresiones personales:
  • Con las gafas VR puestas, uno está ciego en el mundo real, un hecho nada desdeñable cuando se trata de manejar los botones de una consola de interpretación. De modo que, durante los primeros cinco minutos de interpretación, no solté ni un segundo el control de volumen. Pero luego me fui acostumbrando, y al final tenía memorizada la ubicación de este botón y con mucha facilidad podía controlar el volumen de forma intuitiva y hablar usando mis manos libremente. Casi pareciera que hay un área cerebral para localizar cosas que funciona independientemente de la interpretación simultánea. (Igual que el famoso estómago separado para postres en donde siempre cabe un postrecito.)
  • No me hubiera imaginado que estaría tan a gusto trabajando con las gafas VR. Hasta la fecha, mi experiencia VR se había limitado a los castillos embrujados y los paisajes acuáticos. Pero en este caso, me encontré en medio de un entorno de interpretación muy normal, o sea un tipo de visita de planta, ubicada (en forma de una cámara que está montada sobre un trípode) justamente entre los interlocutores. Nada más moviendo la cabeza podía hacer girar la cámara y con eso dirigir mi mirada en la dirección que quisiera. Esto me permitía mirar al orador directamente a la cara, cosa que en la vida normal sería un poco indiscreta, pero así no causó ninguna irritación. Era divertido y muy útil a la vez. Y me encantó que en ningún momento tuve que interpretar a una persona que me daba la espalda.
  • Las rotaciones de 180 grados no se realizan con facilidad en una silla giratoria dentro de una cabina de interpretación común y corriente.
  • Está claro que no se puede pasar un día entero interpretando con esas gafotas monstruosas en la cabeza (ni hablar de cómo te dejan despeinada).  Pero en fin, cuando Motorola lanzó el primer celular en 1983 (y fue una especie de protagonista en 1987 al lado de Michael Douglas en “Wall Street”), tampoco nos hubiéramos imaginado los smartphones chiquititos y chulos que tenemos hoy en día.



La autora:
Anja Rütten es intérprete de conferencias autónoma para alemán (A), español (B), inglés y francés (C), domiciliada en Düsseldorf/Alemania. Se dedica al tema de la gestión de los conocimientos desde mediados de los años 1990.

Datensicherung für Mutter und Kinder | family-friendly data backup

+++ for English, see below +++

Was ich selbst in über 20 Jahren nicht fertiggebracht habe, schafft mein Kind schon vor dem Erreichen der digitalen Volljährigkeit (gleich Inhaberschaft eines eigenen Whatsapp-Kontos) – den totalen Datenverlust. Zwar in diesem Fall nur in Form aller kostbaren Fotos auf meinem ausrangierten Handy, aber immerhin. So langsam wird klar: Ein umfassendes Datensicherungskonzept muss her.

Im Vorteil ist, wer in seinem Bekanntenkreis ausreichend Testberichtleser hat: So konnte ich glücklicherweise jüngst bei einem Gin Tonic im Hause meiner lieben Freundin Julia eine Empfehlung entgegennehmen, die alles vereint, was ich mir von einem Backupsystem wünsche: idrive. Es gibt ja viele unterschiedliche Datensicherungssysteme, deshalb hier in Kürze, was mir an diesem System gefällt:

Es funktioniert für alle gängigen Betriebssysteme von Rechnern (Mac, Windows, Linux) und mobilen Geräten (Android, iOS, Windows), bis zu sechs Geräte und 1TB können in der Private-Version über ein Konto laufen, für regulär 69.50 $ (wobei es immer Angebote gibt – nachdem ich mit einem kostenlosen Basis-Konto die App installiert hatte, wurde mir die Private-Version prompt für 15 $ im ersten Jahr angeboten). Man kann über dieses Konto kreuz und quer auf die Daten-Backups aller Geräte zugreifen – was mir zunächst etwas unheimlich war, aber man kann den Zugriff an jedem Endgerät mit einem Passwortschutz belegen. Über das browserbasierte Dashboard kann man dann bspw. die Daten (etwa Fotos) aus dem Handy-Backup direkt auf den Rechner kopieren.

Das Synchronisieren erfolgt entweder in Echtzeit, nach festgelegtem Zeitplan oder auf Knopfdruck. Es rödelt also nicht ständig im Hintergrund, wenn man das nicht möchte, man wird regelmäßig an die Datensicherung erinnert, wenn man das will. Wenn man einmal den kompletten Datenbestand hochgeladen hat, geschieht die Aktualisierung nur noch inkrementell, sprich nur noch Dateien, die auf dem lokalen Rechner geändert wurden, werden im Online-Backup auf den neusten Stand gebracht.

Auf dem Handy kann man auswählen aus der Sicherung von Kontakten, Anruflisten, Kalender, Dateien, Apps, SMS, Fotos (auch die von WhatsApp), Videos und Musik – entweder permanent, zu bestimmten Zeitpunkten oder auf Knopfdruck. Schön für Reisen.

Gelöschte Dateien werden im Online-Backup nur auf Knopfdruck gelöscht (“Archive Cleanup“). Solange man diesen Knopf nicht drückt, sind alle auf dem lokalen Rechner – womöglich versehentlich – gelöschten Dateien im Online-Backup noch da. Und wenn man umgekehrt nicht tausende von Privatfotos auf dem Handy herumschleppen möchte, kann man sie dort löschen und im Cloud-Backup aufbewahren.

In der idrive-Business-Version (aktuell 74,62 $ im Jahr) kann man sogar Unterkonten anlegen, so dass man die Datensicherung von Kollegen, Mitarbeitern oder Kindern als Administrator zentral im Griff hat (ohne dass diese auf die eigenen Daten zugreifen).  Ich habe für den Anfang eine preisgünstigere Lösung gewählt und meinen Kindern jeweils separate Konten (Basis-Konto kostenlos bis 5 GB) eingerichtet. (Letztendlich habe ich selbst für mein eigenes Handy nun ein anderes Konto als für meinen PC, um nicht in ständiger Sorge zu leben, dass sich doch einmal jemand über mein Handy Zugang zu meinem gesamten PC-Backup verschafft). Wenn ich die auf meinem Rechner gespeicherte Musik meinen Kindern auf dem Handy zur Verfügung stellen möchte, teile ich mit ihnen den entsprechenden Ordner aus dem PC-Online-Backup, wenn ich deren Handyfotos sichern möchte, teilen sie den entsprechenden Online-Backup-Ordner über idrive mit mir. Etwas gewöhnungsbedürftig ist, dass man dieses Verwalten und Teilen nur über das browserbasierte Dashboard erledigen kann, während die Backups über eine App oder Desktop-Software erfolgen.

Wenn man nicht den Nerv hat, seine gesamte Datensammlung durch die Telefonleitung zu quetschen, kann man sich auch einen physischen Datenträger schicken lassen und die Daten einmalig per Post nach Kalifornien schicken, wo sie in die Cloud befördert werden. Aktuell befinde ich mich noch in der Versuchsphase, 103 GB über die Leitung in die Cloud zu befördern (nach zwei Tagen bin ich bei 25 %). Wenn man mit der Bandbreitendrosselung ein bisschen spielt und nachts nicht vergisst, den Standby-Modus des Computers zu deaktivieren, könnte es was werden.

Die Daten sind bei der Übertragung und Speicherung mittels 256-bit-AES-Verschlüsselung gesichert, wobei der Schlüssel entweder vom System oder vom Nutzer selbst vorgegeben wird.

Wenn man zusätzlich noch ein lokales Backup auf einem externen Datenträger haben möchte, bietet idrive zusätzlich für 99,99 $ einen lokalen, über WLAN verbundenen Datenträger an (Network attached storage device NAS, “wifi device”), der ebenfalls über die idrive-Software verwaltet wird.

Alles in allem wirklich ein Rundum-Datensicherungskonzept. Aber wie immer bin ich natürlich auch neugierig zu erfahren, wie Ihr Eure Daten sichert!

Über die Autorin:
Dr. Anja Rütten ist freiberufliche Konferenzdolmetscherin für Deutsch (A), Spanisch (B), Englisch (C) und Französisch (C) in Düsseldorf und Mitglied von VKD, BDÜ NRW und AIIC. Sie widmet sich seit Mitte der 1990er dem Wissensmanagement.

+++ English version +++

It is hard to believe that a child, before even coming of digital age (i.e. having a proper WhatsApp account), managed to provoke the disaster I had avoided for over 20 years: total loss of data. Luckily, it was “only” a bunch of photos taken with my old smartphone, but still: all of a sudden it became crystal clear to me that my family and I are in desperate need of a comprehensive data backup plan.

I am lucky enough to know some passionate test report readers, and so it happened that at my dear friend Julia’s house, over some gin and tonics, I was recommended idrive – an online backup package that offers exactly what I had been looking for.

It runs on all the usual operating systems for desktop and laptop computers (Mac, Windows, Linux) and mobile devices (Android, iOS, Windows); up to six devices and 1 TB can be backed up under one “Private” account for – theoretically – $ 69.50 per year (watch out for special offers: after I had installed the app on my smartphone using the free basic version, I was offered the upgrade to the “Private” plan for as little as $ 15 for the first year). And from any of these devices, you can access the online backups of all the others. I found this a bit spooky at the beginning, but there is optional password protection when opening the idrive software/app. The browser-based idrive dashboard can then be used, for example, to save the pictures taken with your smartphone to your desktop computer.

Backups can be run continuously, scheduled or ad hoc by clicking a button, so your computer does not necessarily have to be rattling through synchronising data in the background all the time. But the system will still remind you of your regular backup task if you ask it to. Once all your data is uploaded, files will only be updated incrementally, i.e. only those archives which have been changed locally will be uploaded to your online backup.
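Just to illustrate what incremental updating means, here is a minimal sketch in Python – my own simplification, not idrive’s actual logic: after the initial full upload, only files modified since the last completed backup run are selected for re-upload.

```python
import os
import tempfile
import time

def files_to_upload(paths, last_backup_time):
    # Select only the files changed since the last completed backup run;
    # everything else is already up to date in the online backup.
    return [p for p in paths if os.path.getmtime(p) > last_backup_time]

# Demo: one file untouched since the last backup, one freshly changed.
folder = tempfile.mkdtemp()
old_file = os.path.join(folder, "old.txt")
new_file = os.path.join(folder, "new.txt")
for f in (old_file, new_file):
    with open(f, "w") as fh:
        fh.write("data")

last_backup = time.time() - 60                        # backup ran a minute ago
os.utime(old_file, (last_backup - 60, last_backup - 60))  # old file predates it

print(files_to_upload([old_file, new_file], last_backup))  # only new.txt
```

Real backup tools typically also compare file sizes or checksums, but the modification-time comparison captures the basic idea.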

On your mobile phone, you can choose from saving your contacts, call logs, calendar, files, apps, SMS, photos (also from WhatsApp), videos and audio files, be it continuously, according to a schedule or ad hoc at the click of a button. Very nice for frequent travellers.

Files deleted from your PC will not be deleted automatically from your online backup. Unless you “clean up” your backup (by clicking the “Archive Cleanup” button), it will keep all your files – including those you may have deleted accidentally from your local hard disk – virtually forever. And if, the other way around, you don’t want to carry around tons of private photos on your mobile device, you can delete them there and keep them in the cloud backup.

The idrive Business version (currently $ 74.62 per year) even lets you create sub-accounts in order to manage data backups centrally for colleagues, staff or children (without them getting access to your personal data). My personal solution for the moment is to create separate accounts for each child’s mobile device (a basic account is free of charge for up to 5 GB) and make extensive use of the share functions. For example, I simply share the online backup folder of my local mp3 collection via idrive and the children access it from their accounts. Equally, they share their online photo backup folders with me if they want me to save their photos on my PC. All this sharing back and forth must be done in the browser dashboard, whereas the backups themselves run via programs/apps that need to be installed locally.

If you don’t feel like squeezing your 100 GB of data through the landline, idrive even offers to send you a hard drive by ordinary mail so that you can ship your data to California and let them take care of the uploading. My personal 100-GB-upload experiment is still running (I am at 25 % after two days), but when you play around with the bandwidth throttle a bit and don’t forget to deactivate the standby mode overnight, chances are that you will finally get there.
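Out of curiosity, the figures above allow for a quick back-of-the-envelope calculation (my own arithmetic, assuming a constant transfer rate):

```python
# From the figures in the text: 25 % of 103 GB uploaded in two days.
total_gb = 103
done = 0.25
elapsed_days = 2

# Implied sustained upload rate (decimal units: 1 GB = 8000 Mbit).
rate_mbit_s = total_gb * done * 8000 / (elapsed_days * 86400)

# At a constant rate, the remaining 75 % take three times as long again.
remaining_days = elapsed_days * (1 - done) / done

print(f"{rate_mbit_s:.2f} Mbit/s, about {remaining_days:.0f} more days to go")
```

So the experiment should finish in roughly a week – unless the bandwidth throttle or the standby mode gets in the way.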

Your data is encrypted during transfer and storage using 256-bit AES encryption, either on the basis of a default key or using your own private one.

If you wish to have an additional local backup on an external hard disk, idrive offers a so-called “wifi device” (a network-attached storage device, NAS, for $ 99.99), which is also managed by the idrive software.

The bottom line is that this is an all-round hybrid backup system I am quite happy with – although, as always, I would love to know how you handle your data backups!


About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Dictation Software instead of Term Extraction? | Diktiersoftware als Termextraktion für Dolmetscher?

+++ for English see below +++

Als neulich mein Arzt bei unserem Beratungsgespräch munter seine Gedanken dem Computer diktierte, anstatt zu tippen, kam mir die Frage in den Sinn: “Warum mache ich das eigentlich nicht?” Es folgte eine kurze Fachsimpelei zum Thema Diktierprogramme, und kaum zu Hause, musste ich das natürlich auch gleich ausprobieren. Das High-End-Produkt Dragon Naturally Speaking, von dem mein Arzt schwärmte, wollte ich mir dann aber doch nicht gleich gönnen. Das muss doch auch mit Windows gehen und mit dem im Notebook eingebauten Raummikrofon, dachte ich mir (haha) … Eingerichtet war auch alles in Nullkommanix (unter Windows 10 auf Start klicken, den Menüpunkt “Erleichterte Bedienung” suchen, “Windows-Spracherkennung” auswählen) und los ging’s. Beim ersten Start durchläuft man zunächst ein kurzes Lernprogramm, das die Stimme kennenlernt.

Und dann konnte es auch schon losgehen mit dem eingebauten Diktiergerät, zunächst testhalber in Microsoft Word. Von den ersten zwei Spracheingaben war ich auch noch einigermaßen beeindruckt, aber schon bei “Desoxyribonukleinsäure” zerplatzten alle meine Träume. Hier meine ersten Diktierproben mit ein paar gängigen Ausdrücken aus dem Dolmetschalltag:

– 12345
– Automobilzulieferer
– Besserungszeremonien Kline sollte es auch viel wie Wohnen Nucleinsäuren für das (Desoxyribonukleinsäure)
– Beste Rock Siri Wohnung Klee ihnen sollte noch in Welle (Desoxyribonukleinsäure)
– Verlustvortrag
– Rechnungsabgrenzungsposten
– Vorrats Datenspeicherung
– Noch Händewellenlänge (Nockenwelle)
– Keilriemen
– Brennstoffzellen Fahrzeuge

Gar nicht schlecht. Aber so ganz das Spracherkennungswunder war das nun noch nicht. In meiner Phantasie hatte ich mich nämlich in der Dolmetschvorbereitung Texte und Präsentationen entspannt lesen und dabei alle Termini und Zusammenhänge, die ich im Nachgang recherchieren wollte, in eine hübsche Tabelle diktieren sehen.  Aber dazu musste dann wohl etwas “Richtiges” her, wahrscheinlich zunächst einmal ein gescheites Mikrofon.

Also setzte ich mich dann doch mit der allseits gepriesenen Diktiersoftware Dragon Naturally Speaking auseinander, chattete mit dem Support und prüfte alle Optionen. Für 99 EUR unterstützt die Home-Edition nur die gewählte Sprache. Die Premium-Version für 169 EUR unterstützt die gewählte Sprache und auch Englisch. Ist die gewählte Sprache Englisch, gibt es nur Englisch. Möchte ich mit Deutsch, Spanisch, Englisch und womöglich noch meiner zweiten C-Sprache Französisch arbeiten, wird es also erstens kompliziert und zweitens teuer. Also verwarf ich das ganze Thema erst einmal, bis wenige Tage später in einem völlig anderen Zusammenhang unsere liebe Kollegin Fee Engemann erwähnte, dass sie mit Dragon arbeite. Da wurde ich natürlich hellhörig und habe es mir dann doch nicht nehmen lassen, sie für mich und Euch ein bisschen nach ihrer Erfahrung mit Spracherkennungssoftware auszuhorchen:

Fee Engemann im Interview am 19. Februar 2016

Wie ist die Qualität der Spracherkennung bei Dragon Naturally Speaking?

Erstaunlich gut. Das Programm lernt die Stimme und Sprechweise kennen und man kann ihm auch neue Wörter “beibringen”, oder es liest über sein “Lerncenter” ganze Dateien aus. Man kann auch Wörter buchstabieren, wenn das System gar nichts mehr versteht.

Wozu benutzt Du Dragon?

Ich benutze es manchmal als OCR-Ersatz, wenn eine Übersetzungsvorlage nicht maschinenlesbar ist. Das hat den Vorteil, dass man gleich den Text einmal komplett gelesen hat.

In der Dolmetschvorbereitung diktiere ich meine Terminologie in eine Liste, die ich dann nachher durch die Begriffe in der anderen Sprache ergänze. Das funktioniert in Word und auch in Excel. Falls es Schwierigkeiten gibt, liegt das evtl. daran, dass sich die Kompatibilitätsmodule für ein bestimmtes Programm deaktiviert haben. Ein Besuch auf der Website des technischen Supports schafft hier Abhilfe. Für Zeilenumbrüche und viele andere Befehle gibt es entsprechende Sprachkommandos. Wenn man das Programm per Post bestellt und nicht als Download, ist sogar eine Übersicht mit den wichtigsten Befehlen dabei – so wie auch ein Headset, das für meine Zwecke völlig ausreichend ist. Die Hotline ist im Übrigen auch super.

Gibt es Nachteile?

Wenn ich einen Tag lang gedolmetscht habe, habe ich danach manchmal keine Lust mehr, mit meinem Computer auch noch zu sprechen. Dann arbeite ich auf herkömmliche Art.

Wenn man in unterschiedlichen Sprachen arbeitet, muss man für jede Sprache ein neues Profil anlegen und zwischen diesen Profilen wechseln. Je nach Sprachenvielfalt in der Kombination könnte das lästig werden.

Mein Fazit: Das hört sich alles wirklich sehr vielversprechend an. Das größte Problem für uns Dolmetscher scheint – ähnlich wie bei der Generierung von Audiodateien, also dem umgekehrten Weg – das Hin und Her zwischen den Sprachen zu sein. Wenn jemand von Euch dazu Tipps und Erfahrungen hat, freue ich mich sehr über Kommentare – vielleicht wird es ja doch noch was mit der Terminologieextraktion per Stimme!

Über die Autorin:
Anja Rütten ist freiberufliche Konferenzdolmetscherin für Deutsch (A), Spanisch (B), Englisch (C) und Französisch (C) in Düsseldorf. Sie widmet sich seit Mitte der 1990er dem Wissensmanagement.

+++ English version +++

The other day, when I was talking to my GP and saw him dictate his thoughts to his computer instead of typing them in, I suddenly wondered why I was not using such a tool myself when preparing for an interpreting assignment. So I asked him about the system and, back home, tried it myself straight away. What I was not planning to do, though, was buy the high-end dictation program Dragon Naturally Speaking he had recommended; instead, I went for the built-in Windows speech recognition function and the equally built-in microphone of my laptop computer (bad idea) … The speech recognition module under Windows 10 was activated in no time (go to the Start menu, select “Ease of Access > Speech Recognition“) and off I went.

When the voice recognition function is first started, it takes you through a short learning routine in order to familiarise itself with your voice. After that, my Windows built-in dictation device was ready. For a start, I tried it in Microsoft Word. I found the first results rather impressive, but when it came to “Desoxyribonukleinsäure” (deoxyribonucleic acid), I was completely disillusioned. See for yourselves the results of my first voice recognition test with some of the usual expressions from the daily life of any conference interpreter:

– 12345
– Automobilzulieferer
– Besserungszeremonien Kline sollte es auch viel wie Wohnen Nucleinsäuren für das (Desoxyribonukleinsäure)
– Beste Rock Siri Wohnung Klee ihnen sollte noch in Welle (Desoxyribonukleinsäure)
– Verlustvortrag
– Rechnungsabgrenzungsposten
– Vorrats Datenspeicherung
– Noch Händewellenlänge (Nockenwelle)
– Keilriemen
– Brennstoffzellen Fahrzeuge

Not bad for a start – but not quite the miracle of voice recognition I would need in order to live this dream of dictating terminology into a list on my computer while reading documents to prepare for an interpreting assignment. Something decent was what I needed, probably a decent microphone, for a start.

So I enquired about the famous dictation software Dragon Naturally Speaking, chatted with one of the support people and checked the options. For 99 EUR, Dragon’s Home Edition only supports one language. The Premium Edition for 169 EUR supports one selected language plus English (if you choose English when buying the software, it is English-only). If I want German, Spanish, English and possibly also my second C language, French, it gets both complicated and expensive. So I discarded the whole idea until, only a few days later, our dear colleague Fee Engemann happened to mention to me – in a completely different context – that she actually worked with Dragon! I was all ears and spontaneously asked her if she would like to share some of her experience with us in an interview. Luckily, she accepted!

Interview with Fee Engemann February 19th, 2016

What is the voice recognition quality of Dragon Naturally Speaking like?

Surprisingly good. The program familiarises itself with your voice and speech patterns, and you can also “teach” it new words, or let it read loads of new words from entire files. You can also spell words in case the system does not understand you at all.

What do you use Dragon for?

I use it as an OCR substitute when I get a text to translate which is not machine-readable. The big advantage is that once you have done that, you know the entire text.

When preparing for an interpreting assignment, I dictate my terminology into a list and add the equivalent terms in the other language once I have finished reading the texts. That works in MS-Word and MS-Excel. If there are problems, this may be due to the compatibility module for a certain program being deactivated. The technical support website can help in this case. There are special commands for line breaks and the like. And if you order the software on a CD (instead of simply downloading it), your parcel will not only include a list with the most important commands, but also a headset, which is absolutely sufficient for my purpose. And by the way … the hotline is great, too.

Are there any downsides?

After a whole day of interpreting, I sometimes don’t feel like talking to my computer. In this case, I simply work the traditional way.

When working with several languages, you must create one profile per language and switch between them when switching languages. This may be quite cumbersome if you work with many different languages.

My personal conclusion is that this all sounds very promising. As always, our problem as conference interpreters with these technologies (just like when creating multilingual audio files, i.e. the other way around) seems to be the constant changing back and forth between languages. If any of my readers has experience or good advice to share, I will be happy to read about it in the comments – maybe voice-based term extraction is not that far away after all!
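The terminology workflow Fee describes – dictating the source-language terms first and adding the equivalents later – can be sketched as a tiny script. The CSV format, column names and function below are my own assumptions for illustration; Dragon itself dictates straight into Word or Excel:

```python
import csv
import io

def glossary_from_dictation(dictated_terms, source_lang="DE", target_lang="EN"):
    # Build a two-column glossary skeleton: the dictated source terms go
    # into the first column, the second stays blank to be completed later
    # with the target-language equivalents.
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow([source_lang, target_lang])
    for term in dictated_terms:
        writer.writerow([term.strip(), ""])
    return out.getvalue()

print(glossary_from_dictation(["Nockenwelle", "Keilriemen", "Brennstoffzellenfahrzeuge"]))
```

The resulting CSV can be opened in Excel or imported into any terminology tool, ready for the second pass through the texts.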


About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.