
Terminology Assistance Coming to a Simultaneous Interpreter Near You

What do conference interpreters' booth notes tell us about their information management?

First of all, a big thank you to all of you who followed my call and provided copies of your booth notes for my little study – I finally managed to collect booth notes from 25 colleagues! Now, what was this study all about? The purpose was to see what interpreters write down intuitively on a blank sheet of paper, i.e. with no given structure like a terminology database, assuming that what you find on these notes is what is really relevant in the booth. What I was interested in was:

1. to see if these notes possibly confirmed what research says about knowledge management, or terminology management in particular,

2. to check if this information can be mapped to the structures of booth-friendly terminology management systems.

I was also hoping to get some inspiration about the more general question of how (or if) computers could best support conference interpreters in their work.

As the information on the notes might be confidential, the first thing I decided to do was create a mock set of notes reflecting the statistics of my sample notes:

– Average number of terminological records per set of notes: 20 (10 nouns, 4 phrases, 6 acronyms)
– Of all terminological records, 99.6% were technical or specialised terminology.
– 14 records were in one language only (2 in the source language, 12 in the target language), 5 records were in two languages, and 1 record was in three or more languages.
– Non-terminological records: 6 numbers; 1 piece of context information (names of legal acts, persons, positions); graphic illustrations (1 drawing, 1 underline)

My self-made model notes (pictured in the original blog post) reflect exactly these statistics.

Of all the things I observed in the notes, I was more surprised by what I did not see than by what I saw:

– Hardly any verbs or adjectives
– Not many drawings illustrating conceptual relations
– 72% of all "terminological records" were made in one language only, and each interpreter wrote down terminology in one language only at least once.

Overall, it looks like the "deeper" information about content and semantic relations is rather dealt with during preparation, while information work in the booth is more about having crucial context information and the right technical term in the target language (almost all terminological records were of a technical nature). In short, this filling of personal knowledge gaps in the booth is the tip of the iceberg of a conference interpreter's information and knowledge work. This confirms what research says, but makes me wonder whether a terminology tool that – in booth mode – displays key terms in the current target language only (possibly in word clouds) might be more efficient as a word-finding trigger than bilingual, glossary-style lists. Or is cognitive overload the only reason why simultaneous interpreters would note down their terms in one language only in the booth?

Luckily I was even able to collect one team sample, i.e. the notes of 5 interpreters working at the same conference. It was interesting to see that there was indeed some overlap in the terms noted down and that these "shared" terms were mainly written at the top of the respective sheets. In particular, 2 acronyms were written down by all 5 interpreters, another 2 acronyms by 4 of the 5 interpreters, and one technical term by 3 of them. Just like the complete study, this is by no means representative, but at least it indicates that it might be possible to provide key terms for certain meetings which are useful to all interpreters.

Beyond statistics and hard data, this study made me think a lot about the possible reasons that put interpreters off going paperless in the booth. It also inspired me to discuss this question with colleagues. It appears that several things simply tend to work better on paper than on a computer:

– Screen space: There is only so much information you can display on a computer screen. With agendas, meeting documents, glossaries and online resources, it is hard to squeeze everything onto a display not much bigger than a regular sheet of paper.

– Exchange platform: Simultaneous interpreters in the booth like to use a sheet of paper as a kind of exchange platform to ask for coffee, note when to change turns and write down difficult terms, numbers etc. to support each other.

– Permanent visibility: Once written down on paper, information doesn’t usually disappear from our view easily, something that may well happen on a computer.

– Document handling: When working with several documents (originals and translations of speeches, draft agreements, legislative texts), they can be arranged on a desk (if it is not too small) in a way that helps you find your way through them and/or share them with the colleague who is busy interpreting, in order to find the right page or line for her or him.

– Input: The input function of pen and paper is just very intuitive.

These were my main conclusions from this lovely little study. If you want to know all the details, I encourage you to read the full article, which was published in the Proceedings of the 40th Conference Translating and the Computer, London, UK, November 15-16, 2018, pp. 132-144. All the slides are also available for download.


About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Neurophysiology of simultaneous interpreting – by Eliza Kalderon


About one and a half years after completing my doctorate, I am particularly pleased to present one of the most fascinating results of my doctoral research on the neurophysiology of simultaneous interpreting on the blog of my colleague, who supported the project "Neurophysiologie des Simultandolmetschens: eine fMRI-Studie mit Konferenzdolmetschern" with enthusiasm and commitment from the outset.

The three images below are what are referred to as "render images": the 3D images of all individual subjects were combined into a single synthesised image, as this research focussed primarily on recurring values and on identifying shared neuronal patterns in the performances analysed.

Masks covering so-called regions of interest (ROI) were applied to the conditions in which simultaneous interpreting was contrasted with a second task – shadowing in our case. With the help of these masks, the calculation of brain activation can be limited to a defined anatomical region, so that an activation can be attributed and localised precisely. It was thus possible to determine the activity of certain activation clusters attributed to Broca's or Wernicke's area.
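For readers who want to picture the method, a subtraction contrast restricted to a region of interest can be sketched in a few lines using the nilearn library. This is only a rough illustration of the general technique, not the study's actual analysis pipeline, and all file names are placeholders:

```python
from nilearn import image, masking

# Group-level activation maps for the two conditions (placeholder files)
de_es = image.load_img("simultaneous_DE_ES.nii.gz")
es_de = image.load_img("simultaneous_ES_DE.nii.gz")

# Subtraction contrast: activation specific to interpreting into Spanish
contrast = image.math_img("a - b", a=de_es, b=es_de)

# Restrict the calculation to an anatomical ROI, e.g. a binary
# mask of Broca's area (placeholder file as well)
roi_values = masking.apply_mask(contrast, "broca_mask.nii.gz")
print("Mean contrast value within the ROI:", roi_values.mean())
```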

Figure 1 – DE>ES

Figure 1 contrasts the activation during simultaneous interpreting from German into Spanish with that during simultaneous interpreting from Spanish into German, without masking; in other words, activation is mapped across the whole brain.

For a comparison of interpreting directions, the brain activation caused by simultaneous interpreting from Spanish into German was subtracted from that caused by simultaneous interpreting from German into Spanish. In the two tested settings, the subjects were asked to interpret a speech from their mother tongue (German) into their active working language ("B language", Spanish) and, for the reverse setting, from Spanish into German. The image shows bi-hemispheric activation in the primary motor somatosensory cortex of the German native-speaker conference interpreters' brains when interpreting into Spanish.

This implies that a German native conference interpreter requires stronger activation of mouth motor functions when articulating in Spanish than when simultaneously interpreting into their mother tongue. This in turn implies that they need stronger control of the muscles in their vocal tract for a performance in Spanish. Furthermore, activation in the medial superior frontal lobe was observed. This is the area where strategic thinking (prospective memory; BURGESS et al. 2011) is located.

Figure 2 – ES>DE

Figure 2 shows the reverse contrast, namely the stronger activation linked to simultaneous interpreting from Spanish into German. This is the result of subtracting the neuronal activity related to German into Spanish from the neuronal activity caused by interpreting from Spanish into German. It illustrates an activation of the inferior temporal lobe which is where visual input is processed.

We can observe an activation of the medial prefrontal cortex, which is associated with prospective memory (BURGESS et al. 2011). This area is responsible for developing strategies for action. These strategies are of motor-linguistic nature when interpreting into Spanish (it is the motor working memory which is responsible; TOMMOLA et al. 2000:162 draw similar conclusions).

The final image summarises the two previous images. It contrasts the process of simultaneous interpretation in both language directions. Areas marked in red represent stronger activation during simultaneous interpretation from German into Spanish. Areas in blue mark simultaneous interpreting from Spanish into German.

Figure 3 – Comparison

It is evident that simultaneous interpreting into Spanish particularly engages the motor brain areas. The dominant activation areas in the reverse language direction are in the right inferior temporal lobe and a cluster in the medial prefrontal cortex.

These images provided a surprising and unexpected finding: even the trained brain of a conference interpreter requires an immense amount of capacity for articulating in the foreign language.

If you are interested in reading the complete research design and all other findings, you are welcome to follow this link (free access).

Last but certainly not least I would like to thank Anja Rütten and all other colleagues for taking the long journey to Homburg (Saar) to participate in this experiment and making these impressive results possible.

References

BURGESS, P.W.; GONEN-YAACOVI, G.; VOLLE, E. (2011): "Functional neuroimaging studies of prospective memory: What have we learnt so far?". Neuropsychologia 49, 2246-2257.
TOMMOLA, J.; LAINE, M.; SUNNARI, M.; RINNE, J. (2000): "The translating brain: cerebral activation patterns during simultaneous interpreting". Neuroscience Letters 294(2), 85-88.

Microsoft Office Translator – Can it be of any help in the booth?

When it comes to Computer-Aided Interpreting (CAI), a question widely discussed in the interpreting community is whether information provided automatically by a computer in the booth could be helpful for simultaneous interpreters or whether it would rather be a distraction. Or, to put it differently: would the cognitive load of simultaneous interpreting be increased by the additional input, or would it be decreased by providing helpful information that the interpreters would otherwise have to retrieve from their long-term memory?

Of course, interpreting is not about translating single words, but about ideas being understood in one language and then expressed in another. But on the other hand, we all (conference interpreters or not) know the occasional tip-of-the-tongue moment when we just can't think of the German word for, say, nitric acid, and might appreciate a little trigger to remember a particular word or expression.

One scenario of CAI often discussed is that the source speech is analysed by speech recognition software, critical terminology is extracted and, based on the interpreter's glossary, a dictionary or other sources, the equivalent in the target language is displayed on the screen. This technology still has many limitations, especially the speed and quality/reliability of the speech recognition function. But while we are waiting for this solution to become market-ready, I have recently come to like a tool which is altogether quite different in its original aim but can be used for a similar purpose: the Microsoft Translator.
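Just to make the idea tangible: the glossary-matching step of such a pipeline might look like the following minimal sketch, which assumes the speech recognition output is already available as text (glossary and transcript are made up):

```python
# Minimal sketch of the glossary-matching step in a CAI pipeline.
# A speech recognition engine is assumed to deliver the source speech
# as chunks of text; glossary and transcript are invented examples.
glossary = {
    "nitric acid": "Salpetersäure",
    "sulphuric acid": "Schwefelsäure",
    "forklift truck": "Gabelstapler",
}

def suggest_terms(transcript_chunk: str) -> list:
    """Return target-language suggestions for glossary terms found
    in the latest chunk of recognised speech."""
    chunk = transcript_chunk.lower()
    return [(src, tgt) for src, tgt in glossary.items() if src in chunk]

print(suggest_terms("The plant produces nitric acid and ammonia."))
# -> [('nitric acid', 'Salpetersäure')]
```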

In PowerPoint, for example, just clicking on a text element opens the Translator window next to the slide, where it remains open. It translates complete texts or single words, and it has turned out to be quite useful for me in some situations, especially when interpreting presentations based on PowerPoint files I had not had time to read before the meeting.

But would I say that the Microsoft Translator is a tool I consider a valuable support in the booth? The answer clearly is: it depends.

Quality varies considerably between language pairs. While English-Spanish seems to be one of the well-developed "premium" combinations with sometimes impressive results, French-German did not really convince me.

You can never rely on the system to understand the message. And running a mental plausibility check in parallel to the normal interpreting job plus reading the translation on the screen is not an option.

But: If you manage to use the translator simply to prompt your brain when you are searching for a particular word, preferably one that leaves no room for mistranslations (like sodium, elderflower or forklift truck), it may make your life easier.

The nice thing is that this translator, which can also be used as a dictionary, runs within Powerpoint, so you can read your presentation and pre-translate texts very easily. It does not involve any typing or skipping between different windows.
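For those who like to script things: the same Microsoft Translator service can also be called directly via its REST API – a separate Azure offering, not the built-in Office feature. A minimal sketch, assuming you have an Azure subscription key (key and region below are placeholders):

```python
import requests

ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "<your-azure-key>",   # placeholder
    "Ocp-Apim-Subscription-Region": "westeurope",      # placeholder
    "Content-Type": "application/json",
}

def translate(text: str, source: str = "en", target: str = "de") -> str:
    """Translate a single string via the Microsoft Translator v3 API."""
    params = {"api-version": "3.0", "from": source, "to": target}
    response = requests.post(ENDPOINT, params=params,
                             headers=HEADERS, json=[{"Text": text}])
    response.raise_for_status()
    return response.json()[0]["translations"][0]["text"]

print(translate("nitric acid"))  # e.g. 'Salpetersäure'
```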

After all, we are still at the beginning of what CAI has to offer. The Microsoft Translator is an easily accessible tool, nice enough to play around with to get a flavour of what language technology has in store for conference interpreters. And I am really curious to hear what your experience is!


How to measure the efficiency of your conference preparation

Half of the time we dedicate to a specific interpreting assignment is often spent on preparation. But while many a thought is given to the actual interpreting performance and the different ways to evaluate it, I hardly ever hear anyone discuss their (or others') preparation performance. However, if we want to be good information and knowledge managers rather than mere information and knowledge workers, we need to close the management cycle and put extra effort into checking whether our work serves its purpose and making adjustments to optimise it.

Efficiency being the ratio between input and output (how much do you spend to make a dollar?), the question now is what to measure in the first place. Admittedly, the efficiency of information and knowledge work is not the easiest thing to measure. Apart from the fact that whilst interpreting we have other things to worry about, it is hard to tell the difference between the way we actually interpret and the way we would have done without the most essential part of our information work, i.e. preparation. Strictly speaking, previous work experience and knowledge acquired outside the interpreter's professional life also count as "preparation" and can even be more helpful than preparation in the stricter sense.

To put the concept of efficiency of information and knowledge work in conference interpreting into measurable terms, it could be reduced to the following question:

How much time do you spend to make a useful information unit?

As it happens, back in 2006 I conducted a case study to check exactly this: a conference interpreter’s preparation effort in relation to its usefulness. As a baseline, I decided to use the terminology prepared for a technical meeting, assuming that this is what comes closest to a quantifiable amount of information. Even if preparation is not all about terminology (or glossaries), it is an important part, and if it is well done, it covers semantics and context information as well.

So in order to get a number representing the output, I simply counted all the terminological units prepared for one meeting (376) and afterwards had the interpreter count those units that actually came up in the meeting (197), so that the terms prepared "in vain" could be deducted. I then calculated the percentage of used terms in relation to the total number of elaborated terms, the so-called usage rate. In the case study, the overall usage rate at the conference at hand was 52%. The usage rate of terminology from a previous conference of the same client about the same subject was 48% (81 out of 168 terminological units). This has of course no statistical significance whatsoever, but it can surely be a useful indicator for the individual interpreter. And interestingly, when repeating this exercise with my students every now and then, the results are usually of a similar order of magnitude.

Once the output (terms used) has been determined, it can be related to the input. Assuming that the input is mainly the time spent on preparing the terminological units that came up in the conference, this time is divided by the terms used in order to obtain the relative or average time spent per terminological unit. This value can be considered an approximation to the efficiency of the interpreter’s information work. In the case study the average time spent per term used was 5 minutes (9.5 hours for 113 terms). When repeating this exercise with students, this value usually ranges roughly from 1 to 10 minutes.
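For those who like to see the arithmetic spelled out, both indicators are simple ratios. A quick sketch using the case study's figures as given above:

```python
# Usage rate and average preparation time per used term,
# with the figures from the case study described above.
terms_prepared = 376
terms_used = 197
preparation_hours = 9.5  # preparation time attributed to used terms
terms_timed = 113        # number of terms the time figure refers to

usage_rate = terms_used / terms_prepared
minutes_per_term = preparation_hours * 60 / terms_timed

print(f"Usage rate: {usage_rate:.0%}")                    # Usage rate: 52%
print(f"Time per used term: {minutes_per_term:.1f} min")  # 5.0 min
```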

Such numbers of course merely serve to quantify the information work we do. In order to really complete the management cycle and find out to what extent preparation could be optimised, a closer look needs to be taken at the nature of the information and knowledge gaps that occur during the interpreting assignment at hand and at how they are or could be handled – which is a different story altogether.

References

RÜTTEN, A.: Informations- und Wissensmanagement im Konferenzdolmetschen. Sabest 15. Frankfurt: Peter Lang [dissertation]. www.peterlang.net


InterpretersHelp’s new Practice Module – Great Peer-Reviewing Tool for Students and Grownup Interpreters alike

I have been wondering for quite a while now why peer feedback plays such a small role in the professional lives of conference interpreters. What with AIIC relying on peer review as its only admission criterion, why not follow that logic and have some kind of routine in place to reflect upon our performance every now and then? After receiving our university degrees, we are not immune to developing bad habits or dropping in performance over the decades to come.

In light of this I was all the more pleased to learn at Translating and the Computer 40 that InterpretersHelp had implemented a brand new practice and feedback module.

I decided to test it straight away with my students at the University of Applied Sciences in Cologne. So instead of meeting in the classroom, I had my students interpret a video at home in the InterpretersHelp practice module and send their interpretation to me for feedback. I listened to eleven complete interpretations and gave detailed feedback using the evaluation criteria and the comment function InterpretersHelp offers. (First lesson learned: Listening to eleven interpreted versions of the same speech from start to finish is a safe way to drive anyone insane.) Later we met face-to-face at the university to discuss recurring issues and general patterns in interpreting particular parts of the speech.

Here are my lessons learned from a trainer’s perspective:

– Listening to all your students' recordings is extremely time-consuming. Make sure you plan accordingly.
– Giving structured feedback trains your sense of analysis and helps to discover similarities and differences between your students' skills, strengths and weaknesses.
– The discussions in the group were much more focussed on patterns, strategies and best practice than on individual mistakes.

Feedback from my students was:

– The tool was great and intuitive to use. They also used it after our first test session to practice on their own and prepare for their exams.

Some minor technical hiccups were reported, some of which were fixed immediately by the IH team. For example, a "Pause" button and synchronous playing/rewinding/fast-forwarding were implemented immediately after we reported that we desperately needed them. There still seems to be an imprecision in the alignment of the original and interpretation tracks, which makes it a bit difficult to measure décalage, but chances are this will be improved in the near future. A downloadable recording file in two-track format was implemented at short notice. Loading the recording can be a bit slow depending on your hardware and internet connection, but this is being worked on. So far, the practice module only works in Google Chrome, and it cannot be used on a mobile device. For technical reasons, the choice of source videos is currently limited to YouTube. I personally would love to see speechpool.net integrated as a source of video material, which seems to be an option InterpretersHelp is not averse to either.

But back to the question of peer review among grown-up interpreters: why don't we make a habit of completing one practice interpretation per language combination once a year, just like our medical check-up, and sending it to several colleagues for review? If you are shy about exposing yourself to your peers' criticism, you can start by choosing a good speaker and an easy subject, and once you feel a bit braver, go for the super-fast-speaking, mumbling techie.

I would be very interested in hearing your thoughts on this, so feel free to leave comments here or on Twitter or Facebook 🙂



InterpretBank 4 review

Even before the new release, InterpretBank by Claudio Fantinuoli, one of the pioneers of terminology tools for conference interpreters (or CAI tools), was full to the brim with useful functions and settings that hardly any other tool offers. It was presented in one of the first articles of this blog, back in 2014. So I was all the more curious to find out about the 4th generation, and I am happy to share my impressions in the following article.

Getting started

It took me 2 minutes to download and install the new InterpretBank and set my working languages (1 mother tongue plus 4 languages). My first impression was that the user interface looked quite familiar: language columns (still empty) and the familiar buttons to switch between edit, memorize and conference mode. The options menu lets you set display colours, row height and many other things. You can select the online sources for looking up terminology (Linguee, IATE, LEO, DICT, Wordreference and Reverso) and definitions (Wikipedia, Collins, Dictionary.com) as well as set automatic translation services (search IATE/old glossaries, use different online translation memories like Glosbe and others).

Xlsx, docx and ibex (proprietary InterpretBank format) files can be imported easily, and unlike in the former InterpretBank, I no longer have to change the display settings in order to have all my five languages displayed. A great improvement! Apart from the terms in five languages, you can import an additional "info" field and a link for each language, as well as a "bloc note" that refers to the complete entry.

Data storage and sharing

All glossaries are saved on your Windows or Mac computer in a unique database. I haven’t tested the synchronization between desktop and laptop, which is done via Dropbox or any other shared folder. The online sharing function using a simple link worked perfectly fine for me. You just open a glossary, upload it to the secure InterpretBank server, get the link and send it to whomever you like, including yourself. On my Android phone, the plain two-language interface opened smoothly in Firefox. And although I always insist on having more than two languages in my term database, I would say that for mobile access, two languages are perfect, as consecutive interpreting usually happens between two languages back and forth and squeezing more than two languages onto a tiny smartphone screen might not be the easiest thing to do either.

I don’t quite get the idea why I should share this link with colleagues, though. Usually you either have a shared glossary in the first place, with all members of the team editing it and making contributions, or everyone has separate glossaries and there is hardly any need of sharing. If I wanted to share my InterpretBank glossary at all, I would export it and send it via email or copy it into a cloud-based team glossary, so that my colleagues can use it at their convenience.

The terminology in InterpretBank is divided into glossaries and subglossaries. Technically, everything is stored in one single database, “glossary” and “subglossary” just being data fields containing a topic classification and sub-classification. Importing only works glossary by glossary, i.e. I can’t import my own (quite big) database as a whole, converting the topic classification data fields into glossaries and sub-glossaries.
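Until such a bulk import exists, a small script can do the splitting outside InterpretBank. A rough sketch, assuming the master database is exported as a CSV with "topic" and "subtopic" classification columns (all file and column names here are made up):

```python
import csv
from collections import defaultdict

# Split a master terminology CSV into one import file per glossary,
# using the topic/subtopic columns as glossary/subglossary names.
with open("master_terminology.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

by_glossary = defaultdict(list)
for row in rows:
    by_glossary[(row["topic"], row["subtopic"])].append(row)

for (topic, subtopic), terms in by_glossary.items():
    with open(f"{topic}_{subtopic}.csv", "w", newline="",
              encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=terms[0].keys())
        writer.writeheader()
        writer.writerows(terms)
```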

Glossary creation

After having imported an existing glossary, I now create a new one from scratch (about cars). In edit mode, with the display set to two languages only, InterpretBank will look up translations in online translation memories for you. All you have to do is press F1 or use the right mouse button or, if you prefer, the search is done automatically upon pressing the tab key, i.e. when jumping from one language field to the next – empty – one. When I tried "Pleuelstange" (German for connecting rod), no Spanish translation could be found. But upon my second try, "Kotflügel" (German for mudguard), the Spanish "guardabarros" was found in MEDIAWIKI.

By pressing F2, or right-clicking on the term you want a translation for, you can also search your pre-selected online resources for translations and definitions. If, however, all your language fields are filled and you only want to double-check, or you think that what is in your glossary isn't correct, the program will tell you that nothing is missing and therefore no online search can be made. Looking up terminology in several online sources in one go is something many a tool has tried to make possible. My favourite so far being http://sb.qtrans.de, I must say that I quite like the way InterpretBank displays the online search results. It opens a single browser tab (not ten or twenty) in which you can select the different sources to see the search results.

The functions for collecting reference texts on specific topics and extracting relevant terminology haven’t yet been integrated into InterpretBank (but, as Claudio assured me, will be in the autumn). However, the functions are already available in a separate tool named TranslatorBank (so far for German, English, French and Italian).

Quick lookup function for the booth

While searching in "normal" edit mode is accent- and case-sensitive, in conference mode (headset icon) it is intuitive and hardly demands any attention. The incremental search function will narrow down the hit list with every additional letter you type. And there are many options to customize the behaviour of the search function. Actually, the "search parameters panel" says it all: Would you like to search in all languages or just your main language? Hit enter or not to start your query? If not, how many seconds would you like the system to wait until it starts a fresh query? Ignore accents or not? Correct typos? Search in all glossaries if nothing can be found in the current one? Most probably very useful in the booth.

When toying around with the search function, I didn't find my typos corrected, at least not that I was aware of. When typing "gardient", I would have expected the system to correct it to "gradient", which it didn't. However, when I typed "blok", the system deleted the last letter and returned all the terms containing "block". Very helpful indeed.
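Just to illustrate the kind of logic at work here – my own sketch, not InterpretBank's actual code – incremental, accent-insensitive matching with a simple fuzzy fallback for typos could look roughly like this:

```python
import difflib
import unicodedata

GLOSSARY = ["block", "blockchain", "gradient", "guardabarros", "Kotflügel"]

def fold(text: str) -> str:
    """Lower-case and strip accents for accent-insensitive matching."""
    normalised = unicodedata.normalize("NFKD", text.lower())
    return "".join(c for c in normalised if not unicodedata.combining(c))

def search(query: str, terms=GLOSSARY) -> list:
    hits = [t for t in terms if fold(query) in fold(t)]
    if not hits:  # fallback: tolerate small typos such as 'blok'
        hits = difflib.get_close_matches(query, terms, n=5, cutoff=0.8)
    return hits

print(search("blok"))   # ['block'] via the fuzzy fallback
print(search("kotfl"))  # ['Kotflügel'] despite the umlaut
```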

In order to figure out how the system automatically referred to IATE when no results were found in my own database, I entered "Bruttoinlandsprodukt" (gross domestic product in German). At first, the system froze (in shock?), but then the IATE search result appeared in four of my five languages in the list, as Dutch isn't supported and would have to be bought separately. At least I suppose it was the IATE result, as the source wasn't indicated anywhere and it just looked like a normal glossary entry.

Queries in different web sources by hitting F2 also work in booth mode, just as described above for edit mode. The automatic translation (F1) only works in a two-language display, which in turn can only be set in edit mode.

Memorize new terms

The memorizing function, in my view, hasn’t changed too much, which is good because I like it the way it was before. The only change I have noticed is that it will now let you memorize terms in all your languages and doesn’t only work with language pairs. I like it!

Summary

All in all, in my view InterpretBank remains number one in sophistication among the terminology tools made for (and mostly by) conference interpreters. None of the other tools I am aware of covers such a wide range of an interpreter’s workflow. I would actually not call it a terminology management tool, but a conference preparation tool.

The changes aren’t as drastic as I would have expected after reading the announcement, which isn’t necessarily a bad thing, the old InterpretBank not having been completely user-unfriendly in the first place. But the user interface has indeed become more intuitive and I found my way around more easily.

The new online look-up elements are very relevant, and they work swiftly. Handling more than two languages has become easier, so as long as you don't want to work with more than five languages in total, you should be fine. If it weren't for the flexibility of a generic database like MS Access and the many additional data fields I have grown very fond of, like client, date and name of the conference, or degree of importance, I would seriously consider becoming an InterpretBank user. But even if one prefers keeping one's master terminology database in a different format, thanks to the export function InterpretBank could still be used for conference preparation and booth work "only".

Finally, what with online team glossaries becoming common practice, I hope to see a browser-based InterpretBank 5 in the future!

PS: One detail worth mentioning is the log file InterpretBank saves for you if you tell it to. Here you can see all the changes and queries made, which I find a nice thing not only for research purposes, but also for a personal follow-up after a conference (or before the next conference of the same kind) to see which terms kept my mind busy. Used properly, this log file could serve to close the circle of knowledge management.
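As a sketch of what such a follow-up might look like – the log format below is invented, InterpretBank's actual format may differ – counting the most-queried terms takes only a few lines:

```python
from collections import Counter

# Count the most frequent queries in a (hypothetical) log file
# with lines such as "QUERY\tgradient".
queries = []
with open("interpretbank_log.txt", encoding="utf-8") as f:
    for line in f:
        if line.startswith("QUERY"):
            queries.append(line.split("\t")[1].strip())

print("Most-queried terms:", Counter(queries).most_common(10))
```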


Can computers outperform human interpreters?

Unlike many people in the translation industry, I like to imagine that one day computers will be able to interpret simultaneously between two languages just as well as or better than human interpreters do, what with artificial neurons and neural networks' pattern-based learning. After all, once hardware capacity allows for it, an artificial neural network will be able to hear and process many more instances of spoken language and the underlying content than my tiny brain will in all its lifetime. So it may recognise and understand the weirdest accents and the most complicated matter just because of the sheer amount of information it has processed before and the vast ontologies it can rely on. (And by that time, we will most probably not only be able to use digital interpreters, but also digital speakers.)

The more relevant question by then might rather be if or when people will want to have digital interpretation (or digital speakers in the first place). How would I feel about being replaced by a machine interpreter, people often ask me over a cup of coffee during the break. Actually, the more I think about it, the more I realise that in some cases I would be happy to be replaced by a machine. And it is good old Friedemann Schulz von Thun whom I find particularly helpful when it comes to explaining when exactly machine interpreters might outperform (out-communicate, so to speak) us humans (or machine speakers outperform human ones).

As Friedemann Schulz von Thun already put it back in 1981 in his four sides model (https://en.wikipedia.org/wiki/Four-sides_model), communication happens on four levels:

The matter layer contains factual statements – the data and facts that make up the news.

On the self-revealing or self-disclosure layer, the speaker – consciously or not – reveals something about himself, his motives, values, emotions etc.

On the relationship layer, the speaker expresses – and the listener perceives – how the sender relates to the receiver and what he thinks of him.

The appeal layer contains the desires, advice, instructions and effects that the speaker is seeking.

We both listen and speak on those four layers, be it on purpose or inadvertently. But what does that mean for interpretation?

In terms of technical subject matter, machine interpretation may well be superior to humans, whose knowledge base, despite best efforts, will always cover a relatively small part of the world's knowledge. Some highly technical conferences consist of long series of mono-directional speeches given just for the sake of it, at a neck-breaking pace and with no personal interaction whatsoever. When the original offers few "personal" elements of communication (i.e. layers 2 to 4) in the first place, rendering a vivid and communicative interpretation into the target language can be beyond what human interpretation is able to provide. In these cases, publishing the manuscript or a video might serve the purpose just as well, even more so in the future with the increasing acceptance of remote communication. And if a purely "mechanical" translation is what is actually needed and no human element is required, machine interpreting might do the job just as well or even better. The same goes, e.g., for discussions of logistics ("At what time are you arriving at the airport?") or other practical arrangements.

But what about the three other, more personal and emotional layers? When speakers reveal something about themselves and listeners want to find out about the other person’s motives, emotions and values or about what one thinks of the other, and it is crucial to read the message between the lines, gestures and facial expressions? When the point of a meeting is to build trust and understanding and, consequently, create a relationship? Face to face meetings are held instead of phone calls or video conferences in order to facilitate personal connections and a collective experience to build upon in future cooperation (which then may work perfectly well via remote communication on more practical or factual subjects). There are also meetings where the most important function is the appeal. The intention of sales or incentive events generally is to have a positive effect on the audience, to motivate or inspire them.

Would these three layers of communication, which very much involve the human nature of both speakers and listeners, work better with a human or a machine interpreter in between? Is a human interpreter better suited to read and convey personality and feelings, and will human interaction between persons work better with a human intermediary, i.e. a person? Customers might find a non-human interpreter more convenient, as the interpreter's personality does not interfere with the personal relation of speaker and listener (but obviously does not provide any facilitation either). This "neutral" interpreting solution could be all the more charming if it didn't happen orally, but translation were provided in writing, just like subtitles. This would allow the voice of the original speaker to set the tone. However, when it comes to the "unspoken" messages, the added value of interpreters is in their name: they interpret what is being said and constantly ask the question "What does the speaker mean or want?" Mood, mocking, irony, references to the current situation or persons present etc. will most probably not be understood (or translated) by machines for a long time, or maybe never at all. But I would rather leave it to philosophers or sci-fi people to answer this question.

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Hello from the other side – Chinese and Terminology Tools. A guest article by Felix Brender 王哲謙

As Mandarin Chinese interpreters, we understand that we are somewhat rare beings. After all, we work with a language which, despite being a UN language, is not one you’d encounter regularly. We wouldn’t expect colleagues working with other, more frequently used languages to know about the peculiarities of Mandarin.

This applies not least to terminology tools. Many of the tools available to interpreters do now support Chinese-script entries. And indeed: Interpreting from English into Chinese, terminology software works as well for Chinese as it would for any other language – next to good old Excel, I myself have used InterPlex, Interpret Bank and flashterm. It’s rather when working from Chinese into English that things get tricky – and that’s not necessarily a software issue.

Until recently, many interpreters were convinced and rather adamant that simultaneous interpreting with Chinese is downright impossible, and I am sometimes tempted to agree. Compared to English – let alone German – Chinese is incredibly dense. Many words consist of only one syllable, and only very few of more than two. Owing to the way Chinese works grammatically, the very same idea can be expressed a lot more concisely in Chinese than in English. To make matters worse, in formal Mandarin we replace modern expressions with written, Classical Chinese equivalents. As a rule of thumb, the more formal the Chinese used, the more succinct it will be as well – rather different from English or German. This is also the case with proper names and terminology, which will usually have abbreviated forms that are a lot shorter syllable-wise than their English equivalents.

Adding to that, Chinese natives are incredibly fond of their language and make ample use of its full range of options: Using rare and at times byzantine expressions and words is appreciated and applauded as a sign of good education; it is never perceived as pedantic or conceited. This includes idioms (chengyu 成語), which usually refer to a story from the Chinese Classics in a highly condensed fashion: They generally contain a mere four syllables and usually function as adjectives, in contrast to English or German. In English, we will need at least a full sentence to explain what is being said, even if the same or a similar idiom exists. Chinese also frequently uses xiehouyu 歇後語: proverbs consisting of two parts, the first presenting a scenario, the second outlining the rationale of the story. Usually, the second part will be left out because Chinese natives will be able to deduce it from the first – similar to speak of the devil (and he will appear) in English. Needless to say, there hardly ever is an English equivalent, and seeing that we are operating in entirely different cultural contexts, ironing out cultural differences when explaining xiehouyu will take additional time.

It will be no surprise to hear that Chinese discursive and grammatical peculiarities make it a difficult language to interpret from: relative clauses tend to be lengthy – and are always placed in front of the noun they describe; Chinese doesn’t mark tenses as such but rather uses particles to outline how different events and actions relate to each other, in contrast to linear notions of time and tenses in European languages, so we are often left guessing; he and she are homonyms in Mandarin; to name but three examples.

Considering all of this, we see that more often than not, simultaneous interpreting from Chinese is a race against the clock and an exercise in humility – and there isn’t much time to look up words in the first place.

In Modern Mandarin, there are only around 1,200 possible syllables, with each syllable being a morpheme, i.e. a component bearing meaning; in English, we have a far greater range of possible syllables, and they only make sense in context, as not every syllable carries meaning: /mea/ and /ning/ do not mean anything per se, but meaning does. For Chinese, this implies that homophones are a common occurrence. And while we aim for perfect clarity and lucidity in English, Chinese rather daoistically indulges in ambivalence. Clever plays on words, being elusive and vague and giving listeners space to interpret what you might actually mean: not bad style, but an art to be honed. Apart from having to spend more capacity and time on identifying terms and words used in the original, this adds another layer of difficulty with regard to looking up terminology in the booth: The fastest way to type Chinese characters is by using pinyin romanisation, but owing to the huge number of homophones, any syllable in romanised transliteration will give you a huge range of options. This means that we would have to spend at least another second or so simply to select the correct character from a drop-down list – and we will not enjoy the pleasure of word prediction that works for other languages.
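A toy example makes the problem tangible (a tiny invented sample, not a real input method): one toneless pinyin syllable fans out into a whole drop-down list of characters:

```python
# Why pinyin input slows terminology lookup in the booth: one
# romanised syllable (tones dropped, as when typing fast) maps to
# many characters. Illustrative sample only.
HOMOPHONES = {
    "shi": ["是", "十", "时", "事", "市", "师", "石", "识"],
    "yi":  ["一", "意", "易", "医", "亿"],
}

def candidates(pinyin: str) -> list:
    """Return the characters a drop-down list would offer."""
    return HOMOPHONES.get(pinyin, [])

print(candidates("shi"))  # eight options for a single syllable -
                          # every lookup needs an extra manual pick
```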

In practice, this means that besides very intensive preparation before the event, we rely on what might be the oldest terminology tool in the world: our booth buddy. They are particularly important because in Chinese, we obviously don’t have any cognates – something than might get us off the hook working with European languages. We also heavily rely on our colleagues sitting next to us for figures: Chinese has ten thousand (萬) and one hundred million (億) as units in their own right, so rather than talking about one million and one billion, the Chinese will talk about a hundred times ten thousand and ten times one hundred million, respectively. This means we will have to be calculating while interpreting: a feat hard to accomplish if you are out there on your own.
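The arithmetic involved can be made explicit. A minimal sketch of the unit conversion (the function and variable names are mine):

```python
# Chinese counts in wan (萬, 10**4) and yi (億, 10**8), so a figure
# like "3 yi 2000 wan" is 3*10**8 + 2000*10**4 = 320 million -
# a conversion the interpreter has to do on the fly.
WAN = 10**4
YI = 10**8

def to_western(yi_part: float = 0, wan_part: float = 0, rest: float = 0) -> float:
    """Convert a number expressed in Chinese units to a plain value."""
    return yi_part * YI + wan_part * WAN + rest

value = to_western(yi_part=3, wan_part=2000)
print(f"{value:,.0f}")                 # 320,000,000
print(f"{value / 10**6:.0f} million")  # 320 million
```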

While I started out thinking that not being able to use terminology software to the same extent I would use it for German-English would be quite a nuisance, I have found that this is rather an instance of the old man living at the border whose horse runs away1, as you’d say in Chinese. Interpreting is teamwork after all, and working with Chinese, we are acutely aware that we rely on our booth buddy as much as they rely on us and that we can only provide the excellent service we do with somebody else in the booth. With that in mind, professional Chinese interpreters always make for great partners in crime in the booth.

About the author:

Felix Brender 王哲謙 is a freelance conference interpreter for English, Chinese and German based in Düsseldorf/Germany. He also teaches DE>EN at the University of Heidelberg, and ZH>EN interpreting as a guest lecturer in Leeds, UK, and Taipei, Taiwan.

1 (which, as the story goes, then returns, bringing a fine stallion with it, which is then ridden by his son, who falls off the horse and breaks his leg, which is why he is not drafted and sent to war, ultimately saving his life; meaning that any setback may indeed be a blessing in disguise, similar but not entirely identical to every cloud has a silver lining. One of the most frequently used Chinese sayings, eight syllables of which the latter four are generally left out: 塞翁失馬,焉知非福, which literally translates as ‘When the old man from the frontier lost his horse, how could one have known that it would not be fortuitous?’. I rest my case.)

Digital dementia and conference interpreters – article published in Multilingual July/August 2015


To read this article on the effects of digitalisation on conference interpreters' memory and learning habits, please follow the link to multilingual.com (English only).

 
