How to measure the efficiency of your conference preparation

Half of the time we dedicate to a specific interpreting assignment is often spent on preparation. But while many a thought is given to the actual interpreting performance and the different ways to evaluate it, I hardly ever hear anyone discuss their (or others') preparation performance. However, if we want to be good information and knowledge managers rather than mere information and knowledge workers, we need to close the management cycle and put extra effort into checking whether our work serves its purpose and making adjustments to optimise it.

Efficiency being the ratio between input and output (how much do you spend to make a dollar?), the question now is what to measure in the first place. Admittedly, the efficiency of information and knowledge work is not the easiest thing to measure. Apart from the fact that whilst interpreting we have other things to worry about, it is hard to tell the difference between the way we actually interpret and the way we would have interpreted without the most essential part of our information work, i.e. preparation. Strictly speaking, previous work experience and knowledge acquired outside the interpreter's professional life also count as "preparation" and can even be more helpful than preparation in the stricter sense.

To put the concept of efficiency of information and knowledge work in conference interpreting into measurable terms, it could be reduced to the following question:

How much time do you spend to make a useful information unit?

As it happens, back in 2006 I conducted a case study to check exactly this: a conference interpreter’s preparation effort in relation to its usefulness. As a baseline, I decided to use the terminology prepared for a technical meeting, assuming that this is what comes closest to a quantifiable amount of information. Even if preparation is not all about terminology (or glossaries), it is an important part, and if it is well done, it covers semantics and context information as well.

So in order to get a number representing the output, I simply counted all the terminological units prepared for one meeting (376) and afterwards had the interpreter count those units that actually came up in the meeting (197), so that the terms prepared "in vain" could be deducted. I then calculated the percentage of used terms in relation to the total number of terms elaborated, the so-called usage rate. In the case study, the overall usage rate at the conference at hand was 52%. The usage rate of terminology from a previous conference of the same client on the same subject was 48% (81 out of 168 terminological units). This has of course no statistical significance whatsoever, but it can surely be a useful indicator for the individual interpreter. And interestingly, when repeating this exercise with my students every now and then, the results are usually of a similar order of magnitude.

Once the output (terms used) has been determined, it can be related to the input. Assuming that the input is mainly the time spent on preparing the terminological units that came up in the conference, this time is divided by the terms used in order to obtain the relative or average time spent per terminological unit. This value can be considered an approximation to the efficiency of the interpreter’s information work. In the case study the average time spent per term used was 5 minutes (9.5 hours for 113 terms). When repeating this exercise with students, this value usually ranges roughly from 1 to 10 minutes.
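For anyone who wants to run the same check on their own glossaries, the two indicators boil down to a few lines of arithmetic. Here is a minimal sketch (the function names are mine; the figures are those from the case study):

```python
def usage_rate(terms_used: int, terms_prepared: int) -> float:
    """Share of prepared terminological units that actually came up."""
    return terms_used / terms_prepared

def minutes_per_term(prep_hours: float, terms_used: int) -> float:
    """Average preparation time invested per terminological unit used."""
    return prep_hours * 60 / terms_used

# Figures from the 2006 case study:
print(round(usage_rate(197, 376) * 100))  # 52 (% overall usage rate)
print(round(usage_rate(81, 168) * 100))   # 48 (% previous conference, same client)
print(round(minutes_per_term(9.5, 113)))  # 5 (minutes per term used)
```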

Such numbers of course merely serve to quantify the information work we do. In order to really complete the management cycle and find out to what extent preparation could be optimised, a closer look needs to be taken at the quality of the information and knowledge gaps that occur during the interpreting assignment at hand and how they are, or could be, handled – which is a different story altogether.


Informations- und Wissensmanagement im Konferenzdolmetschen. Sabest 15. Frankfurt: Peter Lang. [dissertation]

About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

InterpretBank 4 review

InterpretBank by Claudio Fantinuoli, one of the pioneers of terminology tools for conference interpreters (or CAI tools), was full to the brim with useful functions and settings that hardly any other tool offers even before the new release. It was presented in one of the first articles of this blog, back in 2014. So I was all the more curious to find out about the 4th generation, and I am happy to share my impressions in the following article.

Getting started

It took me two minutes to download and install the new InterpretBank and set my working languages (one mother tongue plus four languages). My first impression was that the user interface looked quite familiar: language columns (still empty) and the familiar buttons to switch between edit, memorize and conference mode. The options menu lets you set display colours, row height and many other things. You can select the online sources for looking up terminology (Linguee, IATE, LEO, DICT, Wordreference and Reverso) and definitions (Wikipedia, Collins), as well as set automatic translation services (searching IATE/old glossaries, using different online translation memories like Glosbe and others).

Xlsx, docx and ibex (proprietary InterpretBank format) files can be imported easily, and unlike the former InterpretBank, I don’t have to change the display settings any more in order to have all my five languages displayed. Great improvement! Apart from the terms in five languages, you can import an additional “info” field and a link related to each language as well as a “bloc note”, which refers to the complete entry.

Data storage and sharing

All glossaries are saved in a single database on your Windows or Mac computer. I haven't tested the synchronization between desktop and laptop, which is done via Dropbox or any other shared folder. The online sharing function using a simple link worked perfectly fine for me. You just open a glossary, upload it to the secure InterpretBank server, get the link and send it to whomever you like, including yourself. On my Android phone, the plain two-language interface opened smoothly in Firefox. And although I always insist on having more than two languages in my term database, I would say that for mobile access two languages are perfect, as consecutive interpreting usually happens back and forth between two languages, and squeezing more than two languages onto a tiny smartphone screen might not be the easiest thing to do either.

I don't quite get why I should share this link with colleagues, though. Usually you either have a shared glossary in the first place, with all members of the team editing it and making contributions, or everyone has separate glossaries and there is hardly any need for sharing. If I wanted to share my InterpretBank glossary at all, I would export it and send it via email, or copy it into a cloud-based team glossary so that my colleagues can use it at their convenience.

The terminology in InterpretBank is divided into glossaries and subglossaries. Technically, everything is stored in one single database, “glossary” and “subglossary” just being data fields containing a topic classification and sub-classification. Importing only works glossary by glossary, i.e. I can’t import my own (quite big) database as a whole, converting the topic classification data fields into glossaries and sub-glossaries.
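The actual file format is proprietary, but the design described above, one single database in which "glossary" and "subglossary" are mere classification fields, can be pictured roughly as follows. This is a hypothetical sketch, not InterpretBank's real schema:

```python
import sqlite3

# Hypothetical single-database design: "glossary" and "subglossary"
# are plain data fields holding a topic classification, not separate
# files or tables.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE terms (
    glossary TEXT, subglossary TEXT,
    de TEXT, en TEXT, es TEXT, fr TEXT, nl TEXT)""")
con.execute("INSERT INTO terms VALUES ('cars', 'bodywork', "
            "'Kotflügel', 'mudguard', 'guardabarros', NULL, NULL)")

# A "glossary" is then simply a filtered view on the one big table:
rows = con.execute(
    "SELECT de, en FROM terms WHERE glossary = 'cars'").fetchall()
print(rows)
```

Under such a design, importing a whole external database would indeed require mapping its own classification fields onto the glossary/subglossary columns, which is exactly the conversion that is currently not offered.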

Glossary creation

After having imported an existing glossary, I now create a new one from scratch (about cars). In edit mode, with the display set to two languages only, InterpretBank will look up translations in online translation memories for you. All you have to do is press F1 or use the right mouse button; or, if you prefer, the search is done automatically upon pressing the tab key, i.e. when jumping from one language field to the next (empty) one. When I tried "Pleuelstange" (German for connecting rod), no Spanish translation could be found. But on my second try, "Kotflügel" (German for mudguard), the Spanish "guardabarros" was found in MEDIAWIKI.

By pressing F2, or right-clicking on the term you want a translation for, you can also search your pre-selected online resources for translations and definitions. If, however, all your language fields are filled and you only want to double-check, or think that what is in your glossary isn't correct, the program will tell you that nothing is missing and therefore no online search can be made. Looking up terminology in several online sources in one go is something many a tool has tried to make possible, and I must say I quite like the way InterpretBank displays the online search results: it opens one browser tab (not ten or twenty) where you can select the different sources to see the search results.

The functions for collecting reference texts on specific topics and extracting relevant terminology haven’t yet been integrated into InterpretBank (but, as Claudio assured me, will be in the autumn). However, the functions are already available in a separate tool named TranslatorBank (so far for German, English, French and Italian).

Quick lookup function for the booth

While searching in "normal" edit mode is accent- and case-sensitive, in conference mode (headset icon) it is intuitive and hardly demands any attention. The incremental search function narrows down the hit list with every additional letter you type. And there are many options to customize the behaviour of the search function. Actually, the "search parameters panel" says it all: Would you like to search in all languages or just your main language? Hit enter or not to start your query? If not, how many seconds would you like the system to wait until it starts a fresh query? Ignore accents or not? Correct typos? Search in all glossaries if nothing can be found in the current one? Most probably very useful in the booth.
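Out of curiosity, here is how such an incremental, accent-insensitive lookup might work under the hood. This is a toy sketch of the general technique, not InterpretBank's actual code:

```python
import unicodedata

def normalize(s: str) -> str:
    """Strip accents and case, as a booth-mode search would."""
    nfkd = unicodedata.normalize("NFKD", s)
    return "".join(c for c in nfkd if not unicodedata.combining(c)).lower()

def incremental_search(typed: str, glossary: list[str]) -> list[str]:
    """Narrow down the hit list with every additional letter typed."""
    q = normalize(typed)
    return [t for t in glossary if q in normalize(t)]

terms = ["Gefälle", "gradient", "Kotflügel", "connecting rod"]
print(incremental_search("gefa", terms))  # accent-insensitive: finds "Gefälle"
```

In a real tool, this filter would be re-run on every keystroke (or after the configured waiting time), which is what makes the hit list shrink as you type.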

When toying around with the search function, I didn't find my typos corrected, at least not that I was aware of. When typing "gardient", I would have thought the system would correct it to "gradient", which it didn't. However, when I typed "blok", the system deleted the last letter and returned all the terms containing "block". Very helpful indeed.

In order to figure out how the system automatically refers to IATE when no results are found in my own database, I entered "Bruttoinlandsprodukt" (German for gross domestic product). At first the system froze (in shock?), but then the IATE search result appeared in four of my five languages in the list, as Dutch isn't supported and would have to be bought separately. At least I suppose it was the IATE result, as the source wasn't indicated anywhere and it just looked like a normal glossary entry.

Querying different web sources by hitting F2 also works in booth mode, just as described above for edit mode. The automatic translation (F1) only works in a two-language display, which in turn can only be set in edit mode.

Memorize new terms

The memorizing function, in my view, hasn’t changed too much, which is good because I like it the way it was before. The only change I have noticed is that it will now let you memorize terms in all your languages and doesn’t only work with language pairs. I like it!


All in all, in my view InterpretBank remains number one in sophistication among the terminology tools made for (and mostly by) conference interpreters. None of the other tools I am aware of covers such a wide range of an interpreter’s workflow. I would actually not call it a terminology management tool, but a conference preparation tool.

The changes aren’t as drastic as I would have expected after reading the announcement, which isn’t necessarily a bad thing, the old InterpretBank not having been completely user-unfriendly in the first place. But the user interface has indeed become more intuitive and I found my way around more easily.

The new online look-up elements are very relevant, and they work swiftly. Handling more than two languages has become easier, so as long as you don't want to work with more than five languages in total, you should be fine. If it weren't for the flexibility of a generic database like MS Access and the many additional data fields I have grown very fond of, like client, date and name of the conference, or degree of importance, I would seriously consider becoming an InterpretBank user. But even if one prefers keeping one's master terminology database in a different format, thanks to the export function InterpretBank could still be used for conference preparation and booth work "only".

Finally, what with online team glossaries becoming common practice, I hope to see a browser-based InterpretBank 5 in the future!

PS: One detail worth mentioning is the log file InterpretBank saves for you if you tell it to. Here you can see all the changes and queries made, which I find a nice thing not only for research purposes, but also to do a personal follow-up after a conference (or before the next conference of the same kind) and see which were the terms that kept my mind busy. Used properly, this log file could serve to close the circle of knowledge management.
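The exact layout of that log file is not documented here, so purely as a hypothetical illustration, assuming one query per line, the follow-up described above amounts to little more than counting entries:

```python
from collections import Counter

# Hypothetical log format: one timestamped query per line.
log_lines = [
    "2017-08-01 10:32 QUERY Bruttoinlandsprodukt",
    "2017-08-01 10:48 QUERY Kotflügel",
    "2017-08-01 11:05 QUERY Bruttoinlandsprodukt",
]
queries = Counter(line.split("QUERY ")[1]
                  for line in log_lines if "QUERY" in line)
print(queries.most_common(1))  # the term that kept my mind busiest
```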


Can computers outperform human interpreters?

Unlike many people in the translation industry, I like to imagine that one day computers will be able to interpret simultaneously between two languages just as well as or better than human interpreters do, what with artificial neural networks' pattern-based learning. After all, once hardware capacity allows for it, an artificial neural network will be able to hear and process many more instances of spoken language and the underlying content than my tiny brain will in all its lifetime. So it may recognise and understand the weirdest accents and the most complicated matter just because of the sheer amount of information it has processed before and the vast ontologies it can rely on. (And by that time, we will most probably not only be able to use digital interpreters, but also digital speakers.)

The more relevant question by then might rather be if, or when, people will want digital interpretation (or digital speakers in the first place). How would I feel about being replaced by a machine interpreter, people often ask me over a cup of coffee during the break. Actually, the more I think about it, the more I realise that in some cases I would be happy to be replaced by a machine. And it is good old Friedemann Schulz von Thun I find particularly helpful when it comes to explaining when exactly machine interpreters might outperform (out-communicate, so to speak) us humans (or machine speakers outperform human ones).

As Friedemann Schulz von Thun already put it back in 1981 in his four sides model, communication happens on four levels:

The matter layer contains factual statements, the data and facts that make up the message itself.

In the self-revealing or self-disclosure layer, the speaker, consciously or not, reveals something about himself, his motives, values, emotions etc.

In the relationship layer, the sender expresses (and the receiver picks up) how he relates to the receiver and what he thinks of him.

The appeal layer contains the desires, advice, instructions and effects that the speaker is seeking.

We both listen and speak on those four layers, be it on purpose or inadvertently. But what does that mean for interpretation?

In terms of technical subject matter, machine interpretation may well be superior to humans, whose knowledge base, despite best efforts, will always cover a relatively small part of the world's knowledge. Some highly technical conferences consist of long series of mono-directional speeches given just for the sake of it, at breakneck pace and with no personal interaction whatsoever. When the original offers few "personal" elements of communication (i.e. layers 2 to 4) in the first place, rendering a vivid and communicative interpretation into the target language can be beyond what human interpretation is able to provide. In these cases, publishing the manuscript or a video might serve the purpose just as well, even more so in the future with the increasing acceptance of remote communication. And if a purely "mechanical" translation is what is actually needed and no human element is required, machine interpreting might do the job just as well or even better. The same goes, for example, for discussions of logistics ("At what time are you arriving at the airport?") or other practical arrangements.

But what about the three other, more personal and emotional layers? When speakers reveal something about themselves, and listeners want to find out about the other person's motives, emotions and values, or about what one thinks of the other, and it is crucial to read the message between the lines, in gestures and facial expressions? When the point of a meeting is to build trust and understanding and, consequently, create a relationship? Face-to-face meetings are held instead of phone calls or video conferences in order to facilitate personal connections and a collective experience to build upon in future cooperation (which may then work perfectly well via remote communication on more practical or factual subjects). There are also meetings where the most important function is the appeal: the intention of sales or incentive events is generally to have a positive effect on the audience, to motivate or inspire them.

Would these three layers of communication, which very much involve the human nature of both speakers and listeners, work better with a human or a machine interpreter in between? Is a human interpreter better suited to read and convey personality and feelings, and will human interaction between persons work better with a human intermediary, i.e. a person? Customers might find a non-human interpreter more convenient, as the interpreter's personality does not interfere with the personal relation between speaker and listener (but obviously does not provide any facilitation either). This "neutral" interpreting solution could be all the more charming if it didn't happen orally but translation were provided in writing, just like subtitles, allowing the voice of the original speaker to set the tone. However, when it comes to the "unspoken" messages, the added value of interpreters is in their name: they interpret what is being said and constantly ask the question "What does the speaker mean or want?" Mood, mockery, irony, references to the current situation or to persons present etc. will most probably not be understood (or translated) by machines for a long time, if ever. But I would rather leave it to philosophers or sci-fi people to answer this question.


Hello from the other side – Chinese and Terminology Tools. A guest article by Felix Brender 王哲謙

As Mandarin Chinese interpreters, we understand that we are somewhat rare beings. After all, we work with a language which, despite being a UN language, is not one you’d encounter regularly. We wouldn’t expect colleagues working with other, more frequently used languages to know about the peculiarities of Mandarin.

This applies not least to terminology tools. Many of the tools available to interpreters do now support Chinese-script entries. And indeed: Interpreting from English into Chinese, terminology software works as well for Chinese as it would for any other language – next to good old Excel, I myself have used InterPlex, Interpret Bank and flashterm. It’s rather when working from Chinese into English that things get tricky – and that’s not necessarily a software issue.

Until recently, many interpreters were convinced, and rather adamant, that simultaneous interpreting with Chinese is downright impossible, and I am sometimes tempted to agree. Compared to English – let alone German – Chinese is incredibly dense. Many words consist of only one syllable, and only very few of more than two. Owing to the way Chinese works grammatically, the very same idea can be expressed a lot more concisely in Chinese than in English. To make matters worse, in formal Mandarin we replace modern expressions with written, Classical Chinese equivalents. As a rule of thumb, the more formal the Chinese used, the more succinct it will be as well – rather different from English or German. This is also the case with proper names and terminology, which will usually have abbreviated forms that are a lot shorter syllable-wise than their English equivalents.

Adding to that, Chinese natives are incredibly fond of their language and make ample use of its full range of options: Using rare and at times byzantine expressions and words is appreciated and applauded as a sign of good education; it is never perceived as pedantic or conceited. This includes idioms (chengyu 成語), which usually refer to a story from the Chinese Classics in a highly condensed fashion: They generally contain a mere four syllables and usually function as adjectives, in contrast to English or German. In English, we will need at least a full sentence to explain what is being said, even if the same or a similar idiom exists. Chinese also frequently uses xiehouyu 歇後語: proverbs consisting of two parts, the first presenting a scenario, the second outlining the rationale of the story. Usually, the second part will be left out because Chinese natives will be able to deduce it from the first – similar to speak of the devil (and he will appear) in English. Needless to say, there hardly ever is an English equivalent, and seeing that we are operating in entirely different cultural contexts, ironing out cultural differences when explaining xiehouyu will take additional time.

It will be no surprise to hear that Chinese discursive and grammatical peculiarities make it a difficult language to interpret from: relative clauses tend to be lengthy – and are always placed in front of the noun they describe; Chinese doesn’t mark tenses as such but rather uses particles to outline how different events and actions relate to each other, in contrast to linear notions of time and tenses in European languages, so we are often left guessing; he and she are homonyms in Mandarin; to name but three examples.

Considering all of this, we see that more often than not, simultaneous interpreting from Chinese is a race against the clock and an exercise in humility – and there isn’t much time to look up words in the first place.

In Modern Mandarin, there are only around 1,200 possible syllables, with each syllable being a morpheme, i.e. a component bearing meaning; in English, we have a far greater range of possible syllables, and they only make sense in context, as not every syllable carries meaning: /mea/ and /ning/ do not mean anything per se, but meaning does. For Chinese, this implies that homophones are a common occurrence. And while we aim for perfect clarity and lucidity in English, Chinese rather Daoistically indulges in ambivalence. Clever plays on words, being elusive and vague and giving listeners space to interpret what you might actually mean: not bad style, but an art to be honed. Apart from having to spend more capacity and time on identifying the terms and words used in the original, this adds another layer of difficulty when looking up terminology in the booth: the fastest way to type Chinese characters is pinyin romanisation, but owing to the huge number of homophones, any syllable in romanised transliteration will give you a huge range of options. This means we would have to spend at least another second or so simply selecting the correct character from a drop-down list – and we will not enjoy the pleasure of word prediction that works for other languages.
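The homophone problem can be made concrete with a toy lookup table. This is a deliberately tiny, illustrative subset; a real input method lists dozens of candidates per syllable:

```python
# Toy subset: one romanised syllable maps to many characters (homophones,
# ignoring tone). A real pinyin input method offers far more candidates.
pinyin_candidates = {
    "shi": ["是", "十", "時", "事", "市", "師", "石", "食"],
    "ma": ["馬", "媽", "嗎", "麻", "罵"],
}

def lookup(syllable: str) -> list[str]:
    """Every syllable typed forces a choice from a drop-down list."""
    return pinyin_candidates.get(syllable, [])

print(len(lookup("shi")))  # even this toy table offers 8 candidates
```

Each of those drop-down choices costs the second or so of booth time mentioned above, which is exactly why word prediction gains so little traction here.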

In practice, this means that besides very intensive preparation before the event, we rely on what might be the oldest terminology tool in the world: our booth buddy. They are particularly important because in Chinese we obviously don't have any cognates – something that might get us off the hook when working with European languages. We also heavily rely on our colleagues sitting next to us for figures: Chinese has ten thousand (萬) and one hundred million (億) as units in their own right, so rather than talking about one million and one billion, the Chinese will talk about a hundred times ten thousand and ten times one hundred million, respectively. This means we have to do arithmetic while interpreting: a feat hard to accomplish if you are out there on your own.
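The unit mismatch is easy to make concrete: 萬 is 10^4 and 億 is 10^8, so converting on the fly means regrouping powers of ten. A small sketch, assuming the figure has already been parsed into a count and a unit:

```python
# Chinese large-number units: wan 萬 = 10**4, yi 億 = 10**8
UNITS = {"萬": 10**4, "億": 10**8}

def to_western(count: int, unit: str) -> int:
    """E.g. what Chinese expresses as '100 萬' is English 'one million'."""
    return count * UNITS[unit]

print(to_western(100, "萬"))  # 1,000,000     -> "one million"
print(to_western(10, "億"))   # 1,000,000,000 -> "one billion"
```

This regrouping is precisely the mental arithmetic the booth buddy helps with while the other interpreter keeps speaking.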

While I started out thinking that not being able to use terminology software to the same extent I would use it for German-English would be quite a nuisance, I have found that this is rather an instance of the old man living at the border whose horse runs away1, as you’d say in Chinese. Interpreting is teamwork after all, and working with Chinese, we are acutely aware that we rely on our booth buddy as much as they rely on us and that we can only provide the excellent service we do with somebody else in the booth. With that in mind, professional Chinese interpreters always make for great partners in crime in the booth.

About the author:

Felix Brender 王哲謙 is a freelance conference interpreter for English, Chinese and German based in Düsseldorf/Germany. He also teaches DE>EN at the University of Heidelberg, and ZH>EN interpreting as a guest lecturer in Leeds, UK, and Taipei, Taiwan.

1 (which, as the story goes, then returns, bringing a fine stallion with it, which is then ridden by his son, who falls off the horse and breaks his leg, which is why he is not drafted and sent to war, ultimately saving his life; meaning that any setback may indeed be a blessing in disguise, similar but not entirely identical to every cloud has a silver lining. One of the most frequently used Chinese sayings, eight syllables of which the latter four are generally left out: 塞翁失馬,焉知非福, which literally translates as 'When the old man from the frontier lost his horse, how could one have known that it would not be fortuitous?'. I rest my case.)

Digital dementia and conference interpreters – article published in Multilingual July/August 2015


To read this article on the effects of digitalisation on conference interpreters' memory and learning habits, please follow the link to (English only).






Booth-friendly terminology management: Intragloss – the missing link between texts and glossaries



If you are fed up with constantly jumping back and forth between your glossary and speech manuscripts or PowerPoint presentations, here's a tool that will make your day: Intragloss, developed by Dan Kenig and Daniel Pohoryles from Paris, allows you to transfer terms from a text directly into your glossary, checks new texts against existing glossaries, and highlights glossary terms found in the text, adding their translations in small comments between the lines. Furthermore, Intragloss includes a parallel display of original text and translation and lets you search for terms in internet resources like Linguee, IATE, Wikipedia and the like. In short: Intragloss offers three killer functions, each of which on its own makes this program worth trying.

– Mac-only; a Windows version is currently being developed (you can register as a beta tester!).

– Price: Special offer valid until July 10, 2015: 49 $ for the first year, then renewal 219 $/1 year, 309 $/2 years, 359 $/3 years (regular price: 49 $/month, 99 $/3 months, 269 $/1 year)

For more information about terminology management for interpreters, see this Summary table terminology tools for interpreters.


Handwriting vs. typing – what does research tell us?

+++ for English see below +++

laptop Egal, in welchem Kreis von Dolmetschern man darüber redet, geschätzte 10 % schwören immer Stein und Bein, dass sie sich die Dinge besser merken können, wenn sie sie mit Stift und Papier festhalten. Sie pfeifen auch auf die Vorteile des papierlosen Büros, darauf, ihre Unterlagen durchsuchen und Glossare sortieren zu können, mit Kollegen Unterlagen auszutauschen oder einfach nur kein Papier zu verschwenden und immer alles digital verfügbar zu haben. Die Argumentation lautet für gewöhnlich in etwa so: „Die Wörter wandern dann durch den Stift den Arm hinauf und direknotizent in mein Gehirn.“ In meinen Ohren eine etwas grobe Herleitung … Umgekehrt begegne ich allerdings auch der eher zufällig gemachten Erkenntnis mit Misstrauen, dass ich beim Konsekutivdolmetschen mit der Tastatur um ein Vielfaches detaillierter notiere als mit Stift und Block … Aber deshalb die hohe Kunst der Notizentechnik über Bord werfen? Meine Notizentechnik war zu Beginn meiner Karriere mein ganzer Stolz, ausgefeilt und systematisch. Mittlerweile ähnelt sie zwar eher einem Stück Papier, das man versehentlich im Hühnerstall vergessen hat – dies könnte mir aber egaler nicht sein, denn meine Merkfähigkeit im Konsekutivdolmetschen entwickelt sich glücklicherweise umgekehrt proportional zur Lesbarkeit und Schönheit meiner Notizen. Eigentlich einleuchtend: Die Notizentechnik ist darauf angelegt, Sinnstrukturen abzubilden, und abbilden bedeutet zunächst einmal erkennen. Je mehr man das Erkennen von Sinnstrukturen trainiert, desto überflüssiger macht sich die Notizentechnik entsprechend selbst. Also bin ich nach wie vor ein großer Fan von handschriftlichen Notizen beim Konsekutivdolmetschen, weil es einfach das Mitdenken und Verstehen fördert und die lebendige und glaubhafte Verdolmetschung einer Rede und keine Wort-für-Wort-Übersetzung vom Notizblatt ermöglicht. So weit, so gut. 
Aber muss ich deshalb wirklich auch alles andere handschriftlich festhalten, damit mein Geist sich bequemt, die Information in Wissen umzuwandeln?

Zu meiner großen Begeisterung hat sich kürzlich ein Forscherteam in den USA (Pam A. Mueller von der Princeton University und Daniel M. Oppenheimer von der University of California, Los Angeles) genau diese Frage gestellt: Hat das Notier-Instrument (Stift vs. Tastatur) einen Einfluss auf das Lernen? Dabei kamen ein paar interessante Erkenntnisse zutage:

Studenten, die bei einer Vorlesung mit Papier und Stift Notizen gemacht hatten, erzielten nachher beim Beantworten konzeptioneller Fragen zu Sinnzusammenhängen bessere Ergebnisse als die Testpersonen, die mit dem Laptop notiert hatten. Ein Vorteil des Tippens liegt zwar darin, dass man mengenmäßig mehr Informationen festhalten kann, was grundsätzlich dem Lernen zuträglich ist. Ein großer Nachteil – der diesen Vorteil zunichte machen kann – besteht jedoch darin, dass man beim Tippen eher dazu neigt, wortwörtlich mitzuschreiben, statt Sachverhalte zu synthetisieren (selbst wenn im Versuch darum gebeten wird, dies nicht zu tun). Dies wiederum beeinträchtigt die Verarbeitungstiefe und das inhaltliche Lernen. Was für konzeptionelles Wissen gilt, gilt jedoch nicht zwingend für andere Wissensarten: Beim Abfragen von Faktenwissen war es in den Versuchen interessanterweise so, dass die „Stiftnotierer“ den „Tastaturnotierern“ nur dann überlegen waren, wenn zwischen dem Notieren und dem Beantworten der Fragen eine Woche Zeit vergangen war; beim Abfragen unmittelbar nach dem Notieren waren hier die Leistungen gleich.

Als Konferenzdolmetscherin geht mir bei dieser Studie das Herz auf. Nicht ohne einen flüchtigen Gedanken an unser aller Lieblings-Kundenausspruch („Sie sollen das nicht verstehen, Sie sollen das nur dolmetschen.“) kommt mir sofort die wunderbare Kunst der Notizentechnik in den Sinn. Wenn schon das handschriftliche und damit zusammenfassende Notieren von Inhalten dem wortwörtlichen Mittippen überlegen ist, wie spitzenmäßig muss das Aufnehmen von Inhalten dann erst mit der Dolmetsch-Notizentechnik funktionieren, die ja genau das Abbilden von Sinnzusammenhängen zum Gegenstand hat? Wäre es nicht interessant, diese Versuche mit Dolmetschern zu wiederholen? Müsste nicht eigentlich die ganze Welt unsere Notizentechnik lernen?

Andererseits bin ich als papierlose Dolmetscherin auch beruhigt, dass das reine Faktenwissen ebenso gut tastaturbasiert verarbeitet werden kann. Sprich: Wenn ich mir – oder Kollegen – beim Dolmetschen (sei es simultan oder konsekutiv) ohnehin nur ein paar Zahlen, Namen oder Termini notiere, weil ich den Rest aus dem Kopf mache, kann ich das auch gleich am PC machen. Wenn es aber um „richtiges“ Konsekutivdolmetschen geht, sind Stift und Block (oder Touchscreen) plus gute Notizentechnik die Methode der Wahl. Aber auch in der Vorbereitungsphase – wenn es um Zusammenhänge geht, um Abläufe oder Unternehmensstrukturen – ist es durchaus eine gute Idee, sich dies mit Stift und Papier buchstäblich vor Augen zu führen – am besten auch dann unter Verwendung der Notizentechnik, frei nach Matyssek oder Rozan.

PS: Und wer noch darüber nachdenkt, wie man diese Spontanvisualisierungen dann am geschicktesten in den Computer bekommt: Vielleicht taugt ja das neue Mikrowellen-Notizbuch etwas?

Über die Autorin:
Anja Rütten ist freiberufliche Konferenzdolmetscherin für Deutsch (A), Spanisch (B), Englisch (C) und Französisch (C) in Düsseldorf. Sie widmet sich seit Mitte der 1990er dem Wissensmanagement.

+++ English version +++

Whenever you get to talk about note-taking with a bunch of conference interpreters, at least one in ten will most probably tell you that they just remember things better if they write them down using pen and paper. They don’t care about paperless offices, searching their documents and sorting glossaries alphabetically, sharing information with colleagues or simply avoiding excessive use of paper and having access to their documents any place, any time. The explanation goes roughly like this: “The words just travel through the pen up my arm, entering directly into my brain.” Slightly too simple a reasoning to convince me … On the other hand, I am nonetheless suspicious about a phenomenon I found out about rather by coincidence: interpreting consecutively, I can take many more notes, and more detailed ones, using my laptop computer than with pen and paper. Fair enough … but reason enough to scrap my good old-fashioned note-taking technique, which I was so proud of back when I graduated? Once very nuanced and systematic, nowadays my notes rather resemble a piece of paper someone forgot in the hen house. But I couldn’t care less, as my powers of memory seem to increase in inverse proportion to the beauty and legibility of my notes. Quite logical, actually: Interpreters’ note-taking technique is made to visualise semantic structures, and visualising means understanding in the first place. So the more you practice note-taking, the less you will end up needing your notes, which is why I am still quite fond of pen and paper after all. What we want to deliver is the lively and credible interpretation of a speech and not a sight translation of our notes. Now that’s for interpreting. But do I really have to write down every piece of information with pen and paper for my mind to take it in as knowledge?

To my utmost delight, a team of researchers in the USA (Pam A. Mueller from Princeton University and Daniel M. Oppenheimer from University of California, Los Angeles) have asked themselves exactly the same question: Does the instrument of note-taking (pen vs. keyboard) influence learning? The results are quite interesting:

Students who had taken notes of a lecture using pen and paper performed better in answering conceptual questions than those who had taken notes using a laptop computer. Even though typing has the advantage that more information can be captured, which in itself is beneficial to learning, the downside – potentially outweighing this advantage – is that when fast-typing, people tend to transcribe verbatim instead of synthesising the content (and the test participants did so even when told to avoid verbatim transcription). This in turn leads to shallower processing and impairs conceptual learning. But, interestingly, what goes for conceptual learning does not necessarily apply to other types of knowledge: For factual knowledge, the advantage pen users showed over keyboard users only occurred when a week had elapsed between the lecture and the test. In immediate testing, there was no difference in performance between pen and keyboard users.

As a conference interpreter, I am quite thrilled by this study. Not without a fleeting thought of our all-time favourite client comment (“You are supposed to translate, not to understand.”), our wonderful note-taking technique immediately springs to mind. If hand-written, i.e. summarising, content-based notes are superior to verbatim typing, then just how much more efficient must interpreters’ notes be, which are designed to do exactly that: encode conceptual relations? Wouldn’t it be interesting to repeat the same study with interpreters? And, by the way, shouldn’t the whole world learn how to take notes like interpreters do?

Then again, as a paperless interpreter I am in a way reassured to know that mere factual knowledge lends itself to keyboard-based processing after all. So I can still use my laptop computer while interpreting (simultaneously or consecutively) in order to note down some numbers, names or terms – be it for myself or for my colleague – while I rely on my brain for the rest of the job. But for “real” consecutive interpreting, pen and paper (or touchscreen) plus strong note-taking skills are the method of choice. And in the preparation phase, too – when it comes to understanding the structure of a company or a particular workflow – it is a very good idea to literally put things before your eyes with pen and paper, ideally doing it the Matyssek or Rozan way.

PS: And if you are wondering how to bring the results of your spontaneous visualisations to your hard disc … What about the new microwaveable notebook?

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Pam A. Mueller (Princeton University) and Daniel M. Oppenheimer (University of California, Los Angeles): The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking. In: Psychological Science 2014, Vol. 25(6), 1159–1168.

Booth-friendly terminology management – Flashterm

Flashterm Suchergebnis

+++ for English see below +++

Wer – womöglich auch fürs Übersetzen – eigentlich lieber eine „richtige“ Terminologieverwaltung hätte, die obendrein aber noch kabinenfreundlich ist, der sollte einmal bei Flashterm von der Eisenrieth Dokumentations GmbH vorbeischauen. Flashterm basiert auf FileMaker und bietet die Klassiker der Eintragsmöglichkeiten wie Sachgebiete, Synonyme, Kontext usw., aber auch das Anlegen von bis zu 10 persönlichen Merklisten, integrierte Wikipedia-Abfrage und vieles mehr. Die Abfrage ist sehr komfortabel (ignoriert natürlich Akzente und dergleichen) und wird fürs Simultandolmetschen noch besser, wenn man das Interpreter-Modul dazukauft. Bei der Darstellung der Suchergebnisse werden (nebst Definition in der Ausgangssprache) die Benennungen in allen Zielsprachen angezeigt (zehn passen mindestens in die Anzeige). Insgesamt deutlich mehr als nur eine zweidimensionale Tabelle (auch „Glossar“ genannt). Funktioniert für 195 Sprachen – da müsste eigentlich für jeden was dabei sein!

– verfügbar für PC und Mac, iPad/iPhone und browserbasiert
– Solo-Edition derzeit kostenlos (Jubiläumsangebot)
– zusätzliches Interpreter-Modul für 299 EUR

Mehr zu dolmetschfreundlichen Terminologieprogrammen findet Ihr in den beiden Vorgängerbeiträgen:
Booth-friendly terminology management programs for interpreters – a market snapshot (LookUp, Interplex, InterpretBank)
Booth-friendly terminology management revisited – 2 newcomers (Interpreters‘ Help, Glossary Assistant)


+++ English version +++

If what you are after is “real” terminology management – e.g. because you double as a translator – but the program should still be booth-friendly, then you should definitely take a look at Flashterm by Eisenrieth Dokumentations GmbH. Flashterm is based on FileMaker and offers the typical entry options like subject area, synonyms, context etc., but it also lets you create up to 10 personal memory lists, comes with an integrated Wikipedia query and much more. The query function is very handy (it ignores accents and the like) and gets even better if you buy the supplementary “Interpreter” module, which offers an optimised search interface for the booth. Search results are displayed with the definition in the source language plus the equivalent terms in all your target languages (at least ten fit on the screen). All in all, this is much more than a two-dimensional table (aka “glossary”). It works with 195 languages, so I suppose it suits pretty much every taste.
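Flashterm’s internals are of course not public, but the accent-ignoring lookup mentioned above is easy to illustrate. Here is a minimal Python sketch of the idea (the glossary data and function names are my own invention, not Flashterm’s API): diacritics are stripped via Unicode decomposition before matching, so a query typed without accents or umlauts still finds the entry.

```python
import unicodedata

def fold(text: str) -> str:
    """Lower-case and strip diacritics, e.g. 'Wärme' -> 'warme'."""
    decomposed = unicodedata.normalize("NFD", text.casefold())
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

def search(glossary: list[dict], query: str) -> list[dict]:
    """Return all entries where any term contains the accent-folded query."""
    q = fold(query)
    return [entry for entry in glossary
            if any(q in fold(term) for term in entry.values())]

glossary = [
    {"de": "Wärmetauscher", "en": "heat exchanger", "es": "intercambiador de calor"},
    {"de": "Kühlmittel", "en": "coolant", "es": "refrigerante"},
]

# Typing "warme" in the booth still finds "Wärmetauscher"
print(search(glossary, "warme"))
```

This is exactly the kind of tolerance that matters under time pressure in the booth, where nobody wants to hunt for the right accent key mid-speech.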

– available for PC and Mac, iPad/iPhone and browser-based
– Solo Edition free of charge at the moment (special anniversary promotion)
– supplementary “Interpreter” module for EUR 299

For more information about terminology management for interpreters, see previous articles on the subject:
Booth-friendly terminology management programs for interpreters – a market snapshot (LookUp, Interplex, InterpretBank)
Booth-friendly terminology management revisited – 2 newcomers (Interpreters‘ Help, Glossary Assistant)


Mein Gehirn beim Simultandolmetschen | My brain interpreting simultaneously

Gehirn einer Simultandolmetscherin
conference interpreter’s brain

+++ for English, see below +++

Für gewöhnlich fragen wir uns ja eher, was um Himmels willen gerade im Kopf des Redners vorgeht, den wir dolmetschen. Unsere Kollegin Eliza Kalderon jedoch stellt in ihrer Doktorarbeit die umgekehrte Frage: Was geht eigentlich in den Dolmetscherköpfen beim Simultandolmetschen vor? Zu diesem Zweck steckt sie Probanden in die fMRT-Röhre (funktionale Magnetresonanztomographie), um zu sehen, was im Gehirn beim Dolmetschen, Zuhören und Shadowing passiert. Und so habe auch ich mich im November 2014 aufgemacht an die Uniklinik des Saarlandes in Homburg. Nachdem uns dort Herr Dr. Christoph Krick zunächst alles ausführlich erklärt und ein paar Zaubertricks mit dem Magnetfeld beigebracht hat (schwebende Metallplatten und dergleichen), ging es in die Röhre.


Dort war es ganz bequem, der Kopf musste still liegen, aber die Beine hatten zu meiner großen Erleichterung viel Platz. Dann habe ich zwei Videos im Wechsel gedolmetscht, geshadowt und gehört, ein spanisches ins Deutsche und ein deutsches ins Spanische. Neben dem Hämmern der Maschine, das natürlich ein bisschen störte, bestand für mich die größte Herausforderung eigentlich darin, beim Dolmetschen die Hände stillzuhalten. Mir wurde zum ersten Mal richtig klar, wie wichtig das Gestikulieren beim Formulieren des Zieltextes ist. Nach gut anderthalb Stunden (mit Unterbrechungen) war ich dann einigermaßen k.o., bekam aber zur Belohnung nicht nur sofort Schokolade, sondern auch direkten Blick auf mein Schädelinneres am Computer von Herrn Dr. Krick.

Natürlich lassen sich bei einer solchen Untersuchung viele interessante Dinge beobachten. Beispielhaft möchte ich zum Thema Sprachrichtungen Herrn Dr. Krick gerne wörtlich zitieren, da er mir das Gehirngeschehen einfach zu schön erläutert hat: „Da Sie muttersprachlich deutsch aufgewachsen sind, ergeben sich – trotz Ihrer hohen Sprachkompetenz – leichte Unterschiede dieser ähnlichen Aufgaben bezüglich sensorischer und motorischer Leistungen im Gehirn. Allerdings möchte ich nicht ausschließen, dass der Unterschied durchaus auch an der jeweiligen rhetorischen Kompetenz von Herrn Gauck und Herrn Rajoy gelegen haben mag … Wenn Sie den Herrn Gauck ins Spanische übersetzt hatten, fiel es Ihnen vergleichsweise leichter, die Sprache zu verstehen, wohingegen Ihr Kleinhirn im Hinterhaupt vergleichsweise mehr leisten musste, um die Feinmotorik der spanischen Sprechweise umzusetzen.“

simultaneous interpreting German Spanish
Simultandolmetschen Deutsch (A) – Spanisch (B)

„Wenn Sie aber den Herrn Rajoy ins Deutsche übersetzt hatten, verbrauchte Ihr Kleinhirn vergleichsweise weniger Energie, um Ihre Aussprache zu steuern. Allerdings musste Ihre sekundäre Hörrinde im Schläfenlappen mehr sensorische Leistung aufbringen, um den Ausführungen zu folgen. Dies sind allerdings nur ganz subtile Unterschiede, die in der geringen Magnitude den Hinweis ergeben, dass Sie nahezu gleich gut in beide Richtungen dolmetschen können.“

simultaneous interpreting Spanish German
Simultandolmetschen Spanisch (B) – Deutsch (A)

Dies ist nur einer von vielen interessanten Aspekten. So war beispielsweise auch mein Hippocampus relativ groß – ähnlich wie bei Labyrinth-Ratten oder den berühmten Londoner Taxifahrern … Welche wissenschaftlichen Erkenntnisse sich aus der Gesamtauswertung der Studienreihe ergeben, dürfen wir dann hoffentlich demnächst von Eliza Kalderon selbst erfahren!

PS: Und wer auch mal sein Gehirn näher kennenlernen möchte: Eliza sucht noch weitere professionelle Konferenzdolmetscher/innen mit Berufserfahrung, A-Sprache Deutsch, B-Sprache Spanisch (kein Doppel-A!), ca. 30-55 Jahre alt und möglichst rechtshändig. Einfach melden unter

PPS: Auch ein interessanter Artikel zum Thema:


Normally, we rather wonder what on earth is going on in the mind of the speaker we are interpreting. Our colleague Eliza Kalderon, however, puts it the other way around: In her PhD thesis, she looks into what exactly happens in the brains of simultaneous interpreters. To find out, she puts human test subjects into an fMRI machine (functional Magnetic Resonance Imaging) and monitors their brains while interpreting, listening and shadowing. I was one of those volunteers and made my way to the Saarland University Hospital in Homburg/Germany in November 2014. First of all, Dr. Christoph Krick gave us a detailed technical introduction, including a demo of how to do magic with the help of the magnetic field (flying metal plates and the like). And then off I went into the tube.


To my delight, it was quite comfortable. My head wasn’t supposed to move, ok, but luckily my legs had plenty of room. Then Eliza made me listen to, interpret and shadow two videos: one from German into Spanish and one from Spanish into German. The machine hammering away around my head was a bit of a nuisance, obviously, but apart from that the biggest challenge for me was keeping my hands still while interpreting. I hadn’t realised until then how important gesturing is when formulating speech. After a good hour and a half’s work (with little breaks), I was rather knocked out, but I was rewarded promptly: Not only was I given chocolate right after the exercise, I was even allowed a glance at the inside of my skull on Dr. Krick’s computer.

There are of course a great many interesting phenomena to observe in such a study. To describe one of them, I would like to quote Dr. Krick’s nice explanation literally: “As you have grown up speaking German as a mother tongue, we can see – despite your high level of linguistic competence – slight differences between the two similar tasks in terms of sensory and motor performance in your brain. However, it cannot be ruled out that these differences might also be attributable to the respective rhetorical skills of Mr. Gauck and Mr. Rajoy. When translating Mr. Gauck into Spanish, understanding involved comparably less effort, while the cerebellum in the back of your head had to work comparably harder in order to articulate the Spanish language.”

simultaneous interpreting German Spanish
Simultaneous interpreting German (A) – Spanish (B)

“When, on the other hand, translating Mr. Rajoy into German, your cerebellum needed comparably less energy to control your pronunciation. Your secondary auditory cortex, located in the temporal lobe, had to make a greater sensory effort in order to follow what was being said. These differences are, however, very subtle; in fact, their low magnitude suggests that you interpret practically equally well in both directions.”

simultaneous interpreting Spanish German
Simultaneous interpreting Spanish (B) – German (A)

This is only one of many interesting aspects. Another one worth mentioning might be the fact that my hippocampus was slightly on the big side – just like in maze rats or London cab drivers … I am really looking forward to getting the whole picture and reading the conclusions Eliza draws once the whole series of tests has been evaluated!

PS: If you, too, would like to get a glimpse inside your head: Eliza is still looking for volunteers! If you are a professional, experienced conference interpreter with German A and Spanish B as working languages (no double A!), about 30-55 years old and preferably right-handed, feel free to get in touch:

PPS: Some further reading:

PPPS: A Portuguese version of this article can be found on Marina Borges‘ blog:

Not-To-Do Lists and Not-To-Learn Words

not to learn - nicht lernen

+++ For English see below +++

To-Do-Listen sind eine feine Sache: Einmal aufgeschrieben, kann man lästige Aufgaben zumindest zeitweilig des Gedächtnisses verweisen, und überhaupt ist man viel organisierter und effizienter. Ich zumindest gieße seit Ewigkeiten alles, was nicht bei drei auf den Bäumen ist, in eine Excel-Tabelle – seien es nun Geld, Arbeit, Adressbuch oder die Weihnachtsgeschenke der gesamten Familie.

Die ultimative Glückseligmachung des Selbstmanagements aber sind nicht die Listen, die uns sagen, was zu tun ist, sondern die Not-To-Do-Listen. Sie entschlacken unseren Alltag und schaffen erst die Freiräume für die wichtigen Dinge des Lebens. Die wichtigen Dinge des Lebens und Not-To-Do-Listen sollen aber mein Thema gar nicht sein, denn damit haben sich andere schon viel qualifizierter auseinandergesetzt, so etwa Pat Brans im Blog des Forbes-Magazins oder Timothy Ferriss, Autor des Buchs „Die 4-Stunden-Woche“.

Mich als Dolmetscherin erinnert dieser Ansatz nur unweigerlich an ein Prinzip, auf dem ich in Seminaren für Konferenzdolmetscher seit Jahren gerne herumreite, nämlich der Kosten-Nutzen-Abwägung bzw. Selektion beim Vokabellernen – zeitgeist-adäquat nun auch „not-to-learn words“ genannt. Das Prinzip bleibt das Gleiche: Regelmäßig ziehen wir bis an die Zähne vorbereitet in den Dolmetscheinsatz, im Gepäck Glossare (hoffentlich eher Datenbanken) mit lässig an die 100 bis 400 Termini. Aber alle auswendig lernen, womöglich so stark automatisiert, dass sie uns auch unter kognitiver Belastung mühelos einfallen? Manchmal eher nicht wirklich. Zumal nicht selten nur die Hälfte der vorbereiteten Terminologie überhaupt zum Einsatz kommt. Folglich ist es eine gute Idee, einen Teil der Terminologie von vorneherein bewusst auf die Nicht-Lernen-Liste zu setzen, statt die Entscheidung dem Zufall zu überlassen, an welche Termini wir uns nun im entscheidenden Augenblick erinnern. Um Kandidaten für diese Liste zu erkennen, kann man sich an drei Kriterien orientieren (oder in Huffington-Post-Deutsch: „Drei Wege, wie Sie Ihre persönlichen Not-To-Learn Words erkennen und damit Ihr ganzes Leben revolutionieren“):

– Man kann sie sich beim besten Willen nicht merken. Wenn es aber sein muss und die „zirkulierende Wirbelschichtverbrennung“/“circulating fluidized bed combustion“/“combustión en lecho fluidizado circulante“ nunmal ein zentraler Begriff ist, besser aufschreiben und immer gut sichtbar platzieren. Gut abgelesen ist allemal besser als schlecht gelernt.

– Kein Mensch weiß, wie wahrscheinlich es ist, dass dieser Begriff in der heutigen Konferenz und im restlichen Leben überhaupt je eine Rolle spielen wird. Besser abrufbar schriftlich/digital mitführen. Dabei macht es natürlich einen Unterschied, ob der Kontext für das „Schleierkraut“ im Blumenstrauß für die Vorsitzende zu suchen ist („lovely flowers!“) oder in einem Botanikerkongress („babies‘ breath, Gypsophila paniculata“).

– Lässt sich zur Not improvisieren. Ein Negativbeispiel hierfür wären militärische Ränge. Wenn aus Oberstleutnant Meyer „el Señor Meyer“ wird, macht das nicht unbedingt einen schlanken Fuß. Unersetzbares also besser lernen.

Und das Lustigste daran: Bestimmt sind es genau die Begriffe, die auf der Not-To-Learn-Liste stehen, die man unweigerlich doch im Kopf hat. Denn am Ende gilt eben immer: Dolmetscher wissen alles.

+++ English version +++

To-do lists are just lovely: Once you have put all nasty tasks down there, you can forget about them for a while and, what’s more, be just perfectly organised and highly efficient. I have been putting anything into spreadsheets for ages, be it money, jobs, my address book or the whole family’s Christmas gifts.

If, however, you want to go the extra mile on your way to happiness and self-management, you will most probably not want lists to tell you what you have to do, but rather what not to do – the famous “not-to-do lists”. They help to prune your daily life and leave room for what really matters. Now I am not the right person to tell you what really matters in life or how to handle not-to-do lists, as other more qualified people have done so before, such as Pat Brans on the Forbes magazine’s blog or Timothy Ferriss, author of the book „The 4-Hour Workweek“.

But still … these not-to-do lists remind me of something I have been dwelling upon for years in conference interpreting seminars: the principle of cost-benefit ratios and selection when it comes to learning vocab for a job – now probably to be called “not-to-learn words”. The principle remains the same: We often go to work prepared to the teeth, easily carrying with us some 100 to 400 terms in our glossaries (or, hopefully, rather databases). But have we learnt them all by heart, ideally to a degree that makes us produce them automatically in the right moment, even under cognitive load? Well, rather not, sometimes. And quite understandably so, as often enough only half of the terminology prepared ever comes up at all. Accordingly, it is a good idea to deliberately discard some of the terminology-to-be-learnt in advance and put it on your not-to-learn word list, instead of leaving it to chance which terms you will retain at the decisive moment. There are some criteria to identify the right candidates for your not-to-learn words (or, to put it in Huffington Post style: Three life-changing ways of creating your personal not-to-learn word list):

– You just cannot manage to memorise this word (or expression), however hard you try. Now, if they really insist on talking about „zirkulierende Wirbelschichtverbrennung“/“circulating fluidized bed combustion“/“combustión en lecho fluidizado circulante“ all day, make sure to write it down and place it somewhere visible at all times. Better well read than badly remembered.

– You have no idea how likely this term is to ever pop up in your conference or your life at all. Better have it with you on paper or computer. Of course, it makes a difference whether the “babies‘ breath“ or Gypsophila paniculata turns up in the context of a botanical congress („Schleierkraut“) or in the bunch of flowers the president receives for her birthday (“tolle Blumen!”).

– If necessary, you might get away with an impromptu solution. A negative example of this is military ranks, which just can’t be invented. And if Lieutenant Colonel Meyer becomes „el Señor Meyer“, this will not exactly make you shine as an interpreter. You better learn the irreplaceable.
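For the spreadsheet-minded among us, the three criteria above can be read as a simple filter over a glossary. Here is a hypothetical Python sketch (the field names are made up for illustration, this is not an actual tool): a term goes on the not-to-learn list if any one of the criteria applies, and everything else stays on the learn-by-heart pile.

```python
def triage(entries):
    """Split a glossary into terms to learn by heart and a not-to-learn list.
    An entry lands on the not-to-learn list if it is too hard to memorise
    (write it down instead), unlikely to come up (keep it searchable), or
    easy to improvise on the spot."""
    learn, not_to_learn = [], []
    for entry in entries:
        if entry["hard_to_memorise"] or entry["unlikely"] or entry["improvisable"]:
            not_to_learn.append(entry["term"])
        else:
            learn.append(entry["term"])
    return learn, not_to_learn

entries = [
    {"term": "circulating fluidized bed combustion",
     "hard_to_memorise": True, "unlikely": False, "improvisable": False},
    {"term": "lieutenant colonel",
     "hard_to_memorise": False, "unlikely": False, "improvisable": False},
]
learn, skip = triage(entries)
# the tongue-twister goes on the not-to-learn list, the irreplaceable rank
# stays on the learn-by-heart list
```

Whether a given term is “improvisable” is of course a judgment call no script can make for you, but going through a 400-term glossary with these three questions in mind is exactly the kind of deliberate selection the list is about.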

And the funny thing is that, in the end, you are bound to remember exactly the terms you put on the not-to-learn list. After all, interpreters just know it all.