Conference Interpreters and Their Listening Behaviour—a Guest Article by Lisa Woytowicz

Listening is an integral part of every conference interpreter’s job. It might therefore be surprising that there is hardly any research on conference interpreters’ listening behaviour. Since I found little on the topic, I conducted my own study.

About Listening

Studies on listening behaviour do exist, but they are generally conducted by scholars in psychology or communication studies. According to these experts, listening is a multidimensional construct consisting of behavioural, cognitive, and affective processes.

Every time we listen, we—or rather our brains—process information on several levels: when somebody speaks, we receive (verbal and non-verbal) signals. We identify sounds and put them together. We recognise words, sentences and what they mean. During this process, our short-term memory continuously checks whether the incoming information corresponds to what is stored in our long-term memory. It also adds new information and establishes new links.

There is evidence that the more we already know about a subject, the faster our short-term memory processes the information. This is not only fascinating; it is also one of the reasons why preparing for an interpreting assignment is key.

Listening as a Skill

However, there is a tiny but important step in the listening process which is often ignored or at least underestimated: every listener has an intention, a goal she pursues. Selecting a listening goal is the very first step of the listening process which commonly happens subconsciously. Nevertheless, it is a decision every listener makes. And it determines which of the incoming signals are considered relevant and which will be ignored.

When interpreting simultaneously, conference interpreters are special listeners because they are “double listeners”. They need to listen to the speaker and—at the same time—to themselves. They listen to the information they interpret while also making sure that their rendition makes sense and is grammatically and semantically correct. This kind of listening behaviour might be part of the job description. Nevertheless, it is quite unnatural.

Experts agree that listening is “an identifiable set of skills, attitudes, and abilities [that] can be formulated and taught to improve individual performance” (Worthington & Bodie, 2017, p. 8). And this is brilliant! It means that interpreters can learn to make conscious listening decisions to become better listeners and thus (even) better interpreters.

Different Listening Styles

The Listening Styles Profile (LSP) is a concept used to describe listening behaviour. According to the latest version of the LSP, listening styles are listening goals triggered by individual predispositions (i.e., they are partially stable) and by elements of the listening situation (i.e., they are partially unstable).

There are four different listening styles:

  • Relational listening: a concern with and awareness of the speakers’ feelings and emotions,
  • Analytical listening: focussing on the full message before forming an opinion,
  • Task-oriented listening: a concern with the amount of time spent listening and a desire to interact with focused speakers,
  • Critical listening: a tendency to evaluate and critically assess messages for accuracy and consistency. (Bodie & Worthington, 2017, p. 403)

Data on listening behaviour is collected using self-assessment questionnaires. For my research project, I used the LSP-R8 (Rinke, 2016).
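Scoring such a questionnaire typically comes down to averaging each respondent’s ratings per listening style. The following is a minimal sketch with hypothetical items and a 7-point scale; the actual LSP-R8 items and scoring rules differ:

```python
# Averaging Likert ratings per listening style.
# Items and ratings are hypothetical, not the actual LSP-R8 instrument.
responses = {  # item id -> rating on a 1-7 scale
    "relational_1": 6, "relational_2": 5,
    "analytical_1": 4, "analytical_2": 5,
    "task_1": 3, "task_2": 2,
    "critical_1": 6, "critical_2": 7,
}

def style_scores(responses):
    """Return the mean rating per style, grouping items by their prefix."""
    sums, counts = {}, {}
    for item, rating in responses.items():
        style = item.rsplit("_", 1)[0]
        sums[style] = sums.get(style, 0) + rating
        counts[style] = counts.get(style, 0) + 1
    return {style: sums[style] / counts[style] for style in sums}

print(style_scores(responses))
# {'relational': 5.5, 'analytical': 4.5, 'task': 2.5, 'critical': 6.5}
```

Each respondent thus ends up with one score per listening style, which can then be compared across groups.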

Assessing the Listening Behaviour of Different Professions

I asked representatives of three different professions, as well as students enrolled in the corresponding degree programmes, about their listening behaviour. Using an online questionnaire, I gathered data on the listening behaviour of 242 (future) psychologists, teachers, and conference interpreters.

Several t-tests were performed to determine statistically significant differences between the groups mentioned above. If you are into statistics, let me know and I will be happy to give you the details. But for now, let us skip the statistical part and get straight to the results. So, here is what I found:

  • Conference interpreters have a stronger tendency toward Critical listening than the other professionals.
  • Conference interpreters have a weaker tendency toward Relational listening than the other professionals.

To my surprise, there were no statistically significant differences among the student groups. Apparently, future conference interpreters’ listening behaviour does not differ much from the way future psychologists or future teachers listen.
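For readers who do want a glimpse of the statistics: a between-groups comparison of mean listening-style scores can be run as a two-sample (Welch’s) t-test. The sketch below uses invented scores, not the study’s data:

```python
# Welch's two-sample t statistic, computed with the standard library.
# All scores below are invented for illustration.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Hypothetical Critical-listening scores on a 7-point scale
interpreters = [5.8, 6.1, 5.5, 6.4, 5.9, 6.0]
teachers = [4.9, 5.2, 4.7, 5.4, 5.0, 4.8]

t = welch_t(interpreters, teachers)
print(round(t, 2))  # 5.84 -- a large |t| points to a genuine group difference
```

In practice one would also compute the degrees of freedom and the p-value (e.g. with scipy.stats.ttest_ind and equal_var=False) and correct for multiple comparisons when running several t-tests.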

Therefore, I concluded that frequently using a certain listening style on the job might lead to applying it in other contexts as well. If you think about it, this is not far-fetched: the more we use a certain skill, the more we train it and the better we get at it. And when we are good at something, we tend to do it more often. In the end, this cycle might lead to the partial automatisation of a certain listening behaviour.

Remember, interpreters are double listeners who always make sure that their rendition is correct. So, they often apply Critical listening when sitting in the booth. Psychologists and teachers—in their professional contexts—surely use a lot more Relational listening. In the end, psychologists are paid to know how people feel; and teachers regularly need to put themselves into the shoes of their students to meet their needs.

Conclusions

What are these findings good for? Well, competent listeners can flexibly switch between different listening styles, always adapting to new listening contexts. Irrespective of one’s profession, this might be a goal everybody could strive for. At the end of the day, being a good listener is a great asset.

It looks as though conference interpreters should train themselves to use Relational listening more often. They could start by thinking about situations in which this listening style (or one of the others) could come in handy, particularly when Critical listening is more of a hindrance than a help, such as conversations with clients, colleagues, family, and friends.

Furthermore, conference interpreters could try to consciously apply different listening styles in the booth. Depending on the speaker, they might grasp more of the relevant information by focussing on her emotions (Relational listening) or on the full message (Analytical listening).

Interpreting trainers could consider making listening behaviour part of the curriculum. Besides, the LSP might help explain certain errors, such as omissions or contresens, which could be relevant for giving (better) feedback.

Since listening plays such an important role in every conference interpreter’s (professional) life, there are plenty of other conclusions to be drawn. Are you interested in discussing your suggestions? Just send me an e-mail: info@lw-dolmetschen.de

 

References

Bodie, G. D. & Worthington, D. L. (2017). Profile 36 listening styles profile-revised (LSP-R). In D. L. Worthington & G. D. Bodie (Eds.), The sourcebook of listening research. Methodology and measures (pp. 402–409). Wiley-Blackwell.

Imhof, M. (2010). Zuhören lernen und lehren. Psychologische Grundlagen zur Beschreibung und Förderung von Zuhörkompetenzen in Schule und Unterricht. In M. Imhof & V. Bernius (Eds.), Zuhörkompetenz in Unterricht und Schule. Beiträge aus Wissenschaft und Praxis (pp. 15–30). Vandenhoeck & Ruprecht.

Rinke, E. M. (2016, May 14). A general survey measure of individual listening styles: Short form of the listening styles profile-revised (LSP-R8) [AAPOR Poster Session 3]. Annual Conference of the American Association for Public Opinion Research, Hilton Austin, Austin, TX, United States.

Worthington, D. L. & Bodie, G. D. (2017). Defining listening. A historical, theoretical, and pragmatic assessment. In D. L. Worthington & G. D. Bodie (Eds.), The sourcebook of listening research. Methodology and measures (pp. 3–17). Wiley-Blackwell.

Woytowicz, L. (2019). Persönlichkeitseigenschaften und Listening Styles von Konferenzdolmetschern im Vergleich zu anderen Berufsgruppen [unpublished master’s thesis]. Johannes Gutenberg University Mainz.


About the author

Lisa Woytowicz is a professional conference interpreter for German, English, and Portuguese, based in Essen (Germany).

www.lw-dolmetschen.de

Ein Hoch auf den guten Ton beim hybriden #DfD2020 | Good sound and vibes at Interpreters for Interpreters Workshop

+++ for English version see below +++

In einer Mischung aus ESC („Hello from Berlin“) und Abiprüfung (getrennte Tische) hat am heutigen 18. Juli 2020 der bislang teilnehmerstärkste Dolmetscher-für-Dolmetscher-Workshop als Hybridveranstaltung in Bonn stattgefunden.

169 Dolmetscher*innen waren angemeldet, davon 80 Dolmetscher corona-konform persönlich vor Ort. Dies alles organisiert vom Fortbildungs-Dreamteam der AIIC Deutschland, Inés de Chavarría, Ulla Schneider und Fernanda Vila Kalbermatten, technisch möglich gemacht durch das bewährte Team von PCS.

Lisa Woytowicz hat über ihre Masterarbeit zum Thema Listening Styles referiert. Relational Listening scheint ein unter Dolmetschern unterschätztes Thema zu sein (dazu mehr später auf diesem Kanal).

Monika Ott gibt uns zum Umgang mit Kunden noch einmal als Hausaufgabe auf, uns als Allround-Dienstleister (Conference Interpreting Service Provider laut der noch in Entwicklung befindlichen ISO 23155) zu verstehen und unser Netzwerk zu nutzen, um auf Kompetenzen von Kolleg*innen zugreifen zu können. Denn nur gemeinsam können wir die eierlegende Wollmilchsau sein: RSI-Plattformen, Hubs, Hardware, Terminologiemanagement, Finanzen, Datenschutz.

Dr. Karin Reithofer hat uns das Thema Englisch als Lingua Franca sehr anschaulich nahegebracht. In ihrer Forschungsarbeit hat sie herausgefunden, dass das Verständnis eines gedolmetschten Fachvortrags (monologisches Setting) signifikant besser ist als das Verständnis bei der Kommunikation in nicht-muttersprachlichem Englisch. In Dialogsituationen hingegen kann nicht-muttersprachliches Englisch durchaus funktionieren. Auch interessant: Wenn ein Nicht-Muttersprachler Englisch spricht, fällt es uns leichter, ihn zu verstehen bzw. zu dolmetschen, wenn wir die Muttersprache dieses Redners kennen.

Gemeinsam mit Alex Drechsel aus dem „Studio Brüssel“ durfte ich dann im Hybridformat unseren subjektiven Saisonrückblick präsentieren und diskutieren:

Das Thema Hub oder Heimdolmetschen haben wir weitestgehend ausgeklammert. Meine Gedanken aus Prä-Corona-Zeiten nach dem RSI-Seminar im Mai 2019 finden sich im Blogbeitrag zu Hub or Home.

Was mich dabei neben dem Thema Geld aktuell am meisten umtreibt, ist die Frage, was wir tun, wenn der Ton der Wortbeiträge zu schlecht ist, um simultan zu dolmetschen. Hier wünsche ich mir von uns allen ein souveränes und nuanciertes Vorgehen, das über „das kann nicht gedolmetscht werden“ hinausgeht.

Ein Vorschlag zum Thema „Was tun bei schlechtem Ton“:

  1. Ein technischer Co-Host oder „Event-Koordinator“, wie PCS es nennt, überwacht die Tonqualität und greift moderierend ein, wenn nicht mehr simultan gedolmetscht wird. Diese Entscheidung sollte zentral durch eine Person für das ganze Dolmetschteam getroffen werden.
  2. Dann bei einigermaßen brauchbarem Ton: auf Konsekutivdolmetschen umstellen.
  3. Wenn keine Tonübertragung mehr möglich: Beiträge aus dem Chat dolmetschen.

Grundsätzlich gut: Während der virtuellen Sitzung einen Hinweis für die Teilnehmer einblenden, dass remote gedolmetscht wird.

Abschließend wurde unser AIIC-Präsident Uroš Peterc zugeschaltet, der mit seiner Bewertung der Lage den DfD würdig abgerundet hat. Er erwartet, dass das sehr diverse Angebot an RSI-Plattformen und Hubs sich sortieren wird. Trotz der Planungsunsicherheit sieht er die AIIC in der Pflicht, nicht nur zu reagieren, sondern zu agieren. Ein besseres Saisonfazit könnte ich nicht formulieren.

————————
Über die Autorin:
Anja Rütten ist freiberufliche Konferenzdolmetscherin für Deutsch (A), Spanisch (B), Englisch (C) und Französisch (C) in Düsseldorf. Sie widmet sich seit Mitte der 1990er dem Wissensmanagement.


+++ English version +++

Disclaimer: This blog post was written by a non-native ELF speaker ;-).

What seemed like a mixture of the Eurovision Song Contest (“Hello from Berlin”) and end-of-term exams (with tables set up far apart) was in fact the first ever hybrid edition of the annual Interpreters for Interpreters Workshop.

On 18 July 2020, 169 interpreters (a record high!) had registered for this IfI workshop in Bonn. 80 of us were there on-site, with strict hygiene measures in place, while the others were connected via Zoom. All this had been organised by a fantastic org team consisting of Inés de Chavarría, Ulla Schneider and Fernanda Vila Kalbermatten, while the technical setup was in the experienced hands of PCS.

The first speaker was Lisa Woytowicz, who presented her master’s thesis on Listening Styles. It looks like Relational Listening isn’t exactly what interpreters are best at … but we will read more about that on this blog soon.

Monika Ott reminded us to be all-round service providers to our customers (CISPs, or Conference Interpreting Service Providers, according to ISO 23155, which is still under development) and to use our network to draw on the expertise of our colleagues. For if we join forces, we can cover the whole range of services and advice needed: RSI platforms, hubs, hardware, terminology management, pricing, data protection etc.

Dr Karin Reithofer told us what we need to know about English as a Lingua Franca (ELF). Her research has shown that in monologue settings, technical presentations are significantly better understood when speakers use their mother tongues and are interpreted than when they communicate in non-native English. In dialogue situations, however, non-native English may work quite well. What’s also interesting: when interpreting non-native English speakers, it is easier for us to understand them if we know their mother tongue.

Alex Drechsel and I then gave our “hybrid” review of the past – and first ever – virtual interpreting season, me on-site and Alex from “Studio Brussels”:

The hub vs. home discussion was left aside in our review, as it has already been discussed at length (see my pre-Covid blog article Hub or Home). The main points that keep my mind busy after this virtual interpreting season are money and sound quality.

As to the latter, I would like to see a nuanced way of dealing with situations where sound quality is not sufficient for simultaneous interpreting. I would like us to be more constructive and go beyond the usual black and white approach, i.e. either keep interpreting or stop interpreting.

My suggestion for a professional way of handling poor sound situations:

  1. Have a technical co-host or “event coordinator”, as PCS puts it, monitor sound quality and intervene as a moderator when the sound is too poor for simultaneous interpretation. The decision on when to stop interpreting should be made centrally by one person for the whole team.
  2. If sound quality allows for it: switch to consecutive interpreting.
  3. If not: Have participants type in the chat and do sight translation.

I also like the idea of displaying a disclaimer to the meeting participants in videoconferences, stating that the meeting is being interpreted remotely.

Finally, our AIIC president, Uroš Peterc, joined us via Zoom. His view of the current situation perfectly rounded off the day. He expects the vast offer of RSI platforms and hubs to consolidate over time. In these times of uncertainty, he wants AIIC not only to react to market developments but to be a proactive player. I couldn’t have put it better myself.


About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

You can never have too many screens, can you?

I don’t know about you, but when using my laptop in the booth, I sometimes struggle to squeeze the agenda, list of participants, glossary, dictionary, web browser and meeting documents/presentations onto one screen – not to mention email, messenger or a shared notepad when working in separate booths in times of COVID-19, or even the soft console of your RSI provider.

Well, I have more than once found myself wondering if anybody would mind me bringing my 24-inch desktop monitor to the booth to add some screen space to this tiny 12-inch laptop screen – until, finally, I came across a very useful little freeware application called spacedesk. It lets you use your Android, iOS or Windows tablet as an external monitor for your Windows computer quite easily (to all tablet aficionados: unfortunately, it does not work the other way around). You simply install it on your main Windows device, the “server” or “Primary Machine”, and on the tablet as a “client” or “secondary device”. You can then connect the two devices via USB, Ethernet or WiFi and use your tablet to either extend or duplicate your computer screen, just like any external monitor on your desk.

There is just a tiny delay when moving the mouse (if that’s not due to my low-end tablet’s poor performance), so it might be better to move the more static elements, like the agenda, to it rather than your terminology database, which you might want to handle very swiftly.

So if ever you feel like going back to printing your documents for lack of screen space, bringing your tablet as a screen extension might be a good alternative.


About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

 

What does remote interpreting cost, and why?

When does it make sense to meet with interpreters on-site, and when is interpreting over the internet the better option? Following last Friday’s AIIC Germany web meeting (22 May 2020) with the refreshingly frank title “TACHELES – RSI auf dem deutschen Markt” (straight talk about RSI on the German market), I am happy to share my thoughts on the subject, poured into a self-calculating spreadsheet:

Price comparison: remote vs. on-site interpreting

To complement these financial considerations, you will also find AIIC’s technical recommendations on distance interpreting here.

Questions and suggestions are, of course, most welcome!


About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

 

Videos on fee calculation for conference interpreters | Videos sobre cómo calcular honorarios para intérpretes de conferencias

My tutorials on how to use the Time&money calculator, an Excel spreadsheet developed by AIIC Germany’s former profitability working group, finally have English and Spanish subtitles! Comments and questions welcome 🙂
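To give a rough idea of the kind of arithmetic involved, here is a deliberately simplified, hypothetical fee calculation in Python. The figures and the formula are invented for illustration; they are not the actual logic of the Time&money calculator:

```python
# Simplified assignment calculation with invented example figures --
# not the actual Time&money spreadsheet logic.

def assignment_fee(conference_days, prep_hours, travel_hours,
                   day_rate=850.0, hourly_rate=85.0):
    """Total fee: a day rate per conference day plus hourly
    compensation for preparation and travel time."""
    return (conference_days * day_rate
            + (prep_hours + travel_hours) * hourly_rate)

# Two conference days, eight hours of preparation, five hours of travel
total = assignment_fee(conference_days=2, prep_hours=8, travel_hours=5)
print(total)  # 2805.0
```

A real calculation would, of course, also account for items such as travel expenses, cancellation terms and non-billable office time.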


Video on calculating working hours and fees for conference interpreters | Video sobre cómo calcular horas de trabajo y honorarios para intérpretes de conferencias (11 min):


Video on how to calculate interpreting projects | Video sobre el cálculo de proyectos de interpretación (6 min):


About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Simultaneous interpreting in the time of coronavirus – Boothmates behind glass walls

Yesterday was one of the rare occasions where conference interpreters were still brought to the client’s premises for a multilingual meeting. Participants from abroad were connected via a web meeting platform, while the few people who were on-site anyway sat at tables two metres apart. But what about the interpreters, who usually share a booth of barely 2 x 2 m, and who are not exactly known for their habit of social distancing in the first place? Well, PCS, the client’s conference technology provider of choice, came up with a simple yet effective solution: they split up the teams and gave us one booth each. So there we were, my colleague Inés de Chavarría and I, spreading our stuff in our private booths, separated by no more than a window.

Separate booths

Now, apart from having to bring our own food (no catering available), by the morning of the meeting we had already figured out what would probably be the main challenges of being boothmates separated by a glass wall:

1. How do we agree on when to take turns?

2. How do we help each other by writing down numbers, names and difficult words?

3. How do we tell each other that we want coffee/are completely knackered/need to go to the loo, complain about the sound/accent/temperature/chairman’s haircut or ask how the kids are?

Luckily, after an exciting day, we felt that we had found great solutions to all our communicative needs:

1. Taking over: Although the colleague who was not working couldn’t listen to the original and the interpretation at the same time, she could tell quite reliably from gestures and eye contact when to take over. So no countdown or egg timer is needed, as long as you can see each other.

2. Helping out – these were the options we tried:

Writing things down with pen and paper and showing them through the window: rather slow and hard to read due to reflections on the booth windows. The same goes for typing on the computer and looking at the screen through the window.

Scribbling in a shared file in Microsoft Whiteboard (great), OneNote (OK) or Google Drawings (a bit slow and imprecise): fine as long as all parties involved have a touchscreen and a decent pen. Sometimes hard to read, depending on the quality of the pen/screen and the handwriting.

Typing in a shared file like Google Sheets or Docs: this was our method of choice. The things we typed appeared on the other’s screen in real time, and they were perfectly legible, in contrast to some people’s handwriting. A perfect solution as long as there is a decent WiFi or mobile data connection. And although I am usually of the opinion that there is hardly anything a decent spreadsheet can’t do, in this case a plain word-processing document has one clear advantage: when you type in Google Docs, each character appears on your colleague’s screen practically in real time, whereas when typing in a cell of a Google Sheet, your colleague won’t see anything until you “leave” that cell and jump to the next one.

3. The usual chitchat:

WhatsApp, or rather the WhatsApp Web App, was the first thing we all spontaneously resorted to for staying in contact with a glass wall between us. But it quickly turned out to be rather distracting, with all sorts of private messages popping up.

Luckily, all Google documents come with a chat function included, so we had both our meeting-related information exchange and our personal logistics neatly displayed next to each other in the same browser window.

If we had worked with many different documents that needed to be managed while interpreting, I would have liked to try Microsoft Teams. With its chat function and shared documents, among other features, it seems very promising as a shared booth platform. But their registration service was down due to overload anyway, so that’s for next time.

So, all in all, a very special experience, and rather encouraging thanks to the many positive contributions from all people involved. And the bottom line, after having to accommodate on my laptop screen the booth chat and notes next to the usual glossary, online resources, agenda and meeting documents: My next panic purchase will be a portable touchscreen in order to double my screen space in the booth.


About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

 

 

How to Make CAI Tools Work for You – a Guest Article by Bianca Prandi

After conducting research and providing training on computer-assisted interpreting (CAI) for the past six years, I feel quite confident in affirming that there are three indisputable truths about CAI tools: they can provide a lot of advantages, they can do more harm than good if not used strategically, and most interpreters know very little about them.

The surveys conducted so far on interpreters’ terminological strategies[1] have shown that only a small percentage of interpreters have integrated CAI tools into their workflow. Most interpreters still prefer “traditional” solutions such as Word or Excel tables to organize their terminology. There can be many reasons for this. Some may have already developed reliable systems and processes and don’t see a point in reinventing the wheel. Others believe the cons outweigh the pros when it comes to these tools and have yet to find a truly convincing alternative to their current solutions. Others may simply never have heard of Flashterm, InterpretBank or Interpreter’s Help.

Even though a lot remains to be investigated and demonstrated empirically, the studies conducted so far have highlighted both advantages and disadvantages in the use of CAI tools. On the positive side, CAI tools can support effective preparation through automatic term extraction and built-in concordancers[2] (Xu 2015). They seem to contribute to higher terminological accuracy than paper glossaries[3] or even Excel tables[4] when used to look up terms in the booth. They help interpreters organize, reuse and share their resources, rationalize and speed up their preparation process, make the most of preparation documents, work efficiently on the go and go paperless if desired. On the negative side, they are often perceived as potentially distracting and less flexible than traditional solutions. When working with CAI tools, we might also run the risk of relying too much on the tool, both during the preparation phase and during interpretation proper[5].

I would argue that, used strategically, the pros easily outweigh the cons. As with any tool and new technology, it all comes down to how you use them. Whether you are still sceptical, already CAI-curious, or a technology enthusiast, here are three tips on how to make CAI tools work for you.

  1. Take time to test your tools

Most tools offer a free demo so you can test their functionalities. I know we are all busy, but you can use downtime to improve your processes, just as you would (should!) for your CPD and marketing strategy. I suggest you do the following:

  • Choose one of your recent assignments, something you had to do research on because the topic was unfamiliar to you.
  • Set aside 1-2 hours a day, or even just 30 minutes, to simulate preparing for the assignment again.
  • Set yourself a clear goal for each phase of your workflow (glossary creation, terminology extraction, memorization, debriefing).
  • Build your baseline: dedicate one session to assessing your current approach. Then dedicate each of the following sessions to testing a different tool.
  • For a systematic comparison, keep track of the time needed for each activity, the pros and cons of each tool, your preferences and anything you found irritating.

You can conduct this analysis and selection process over a week or even a month if you are very busy. Once you have identified what might work for you, keep using those tools! Maybe test them out on a real assignment for a client you already know, where the risk of mishaps is lower.

  2. There is no perfect tool

Unless you can write code and develop your own tool, chances are there will always be something you don’t like about a tool, or some functions you deem essential will be missing. But given the advantages of working with these solutions, it is definitely worth trying to find a tool that satisfies even just 50% of your interpreting needs. That may not sound like much, but it is already 50% of your workflow that you can optimize.

Once you get a feeling for what each tool can do for you, you might find out that there are some options you love that aren’t available in your tool of choice. My suggestion: mix and match. Most CAI tools are built modularly and allow users to only work with a specific function. For instance, I love Intragloss’ terminology extraction module, so I use that tool to work with documents, but I use InterpretBank for everything else. In a word: experiment and be creative!

  3. Tools can’t do the work for you

If you’re passionate about technology, you will agree that CAI tools are quite cool. However, we should never forget that they are tools and, as such, only fulfil their function if we use them purposefully. Think before you use them, and always make sure you follow a strategic course of action.

If you feel you have never been as ill-prepared as when you worked with a CAI tool, here are some questions you can ask yourself:

  • Am I sure this is the right tool for me? Have I taken enough time to test it out?
  • Did I have a clear goal when I started preparing for my assignment? Or was I simply trying to cram together as many terms as possible?
  • Am I aware of my learning preferences? If I’m an auditory learner, does it make sense to use a flashcard method to study the terminology?
  • Did I include in my glossary just any term that came up in my documents? Or did I start from the relevant terminology I found to further explore the topic?

As with many things in life, reflection and a structured, strategic approach can really go a long way. For busy interpreters needing some guidance, InterpreMY is preparing a course series that will help you effectively use CAI tools to optimize all phases of your workflow and avoid potential pitfalls. Get in touch at info@interpremy.com!


[1] See for instance: Zielinski, Daniel & Yamile Ramírez-Safar (2006). “Onlineumfrage zu Terminologieextraktions- und Terminologieverwaltungstools. Wunsch und Wirklichkeit noch weit auseinander.” MDÜ; and Corpas Pastor, Gloria & Lily May Fern (2016). A Survey of Interpreters’ Needs and Practices Related to Language Technology.

[2] See Xu, Ran (2015). Terminology Preparation for Simultaneous Interpreters. University of Leeds.

[3] Biagini, Giulio (2015). Glossario cartaceo e glossario elettronico durante l’interpretazione simultanea: uno studio comparativo. Università degli studi di Trieste.

[4] Prandi, Bianca (2018). An exploratory study on CAI tools in simultaneous interpreting: Theoretical framework and stimulus validation. In Claudio Fantinuoli (ed.), Interpreting and technology, 29–59. Berlin: Language Science Press.

[5] Prandi, Bianca (2015). The Use of CAI Tools in Interpreters’ Training: A Pilot Study. 37th Conference Translating and the Computer, 48–57.


About the author:

Bianca Prandi

bianca@interpremy.com

  • Conference Interpreter IT-EN-DE, MA Interpreting (University of Bologna/Forlì), based in Mannheim (Germany), www.biancaprandi.com;
  • PhD candidate – University of Mainz/Germersheim. Research topic: impact of computer-assisted interpreting tools on terminological quality and cognitive processes in simultaneous interpreting;
  • CAI trainer and co-founder of InterpreMY – my interpreting academy: online academy for interpreters with goal-centered, research-based courses, www.interpremy.com (coming soon: July 2020).

Publications:

  • Prandi, B. (2015). L’uso di InterpretBank nella didattica dell’interpretazione: uno studio esplorativo. Università di Bologna/Forlì.
  • Prandi, B. (2015). The Use of CAI Tools in Interpreters’ Training: A Pilot Study. 37th Conference Translating and the Computer, 48–57. London.
  • Prandi, B. (2017). Designing a Multimethod Study on the Use of CAI Tools during Simultaneous Interpreting. 39th Conference Translating and the Computer, 76–88. London: AsLing.
  • Prandi, B. (2018). An exploratory study on CAI tools in Simultaneous Interpreting: theoretical framework and stimulus validation. In C. Fantinuoli (Ed.), Interpreting and technology, 29–59. Berlin: Language Science Press.
  • Fantinuoli, C., & Prandi, B. (2018). Teaching information and communication technologies: a proposal for the interpreting classroom. Trans-Kom, 11(2), 162–182.
  • Prandi, B. (forthcoming). CAI tools in interpreter training: where are we now and where do we go from here? InTRAlinea.

 

Preparing on numbers: Yes, we CAN (and SHOULD)! – A Guest Article by Francesca Maria Frittella

A new client has asked you to interpret at his company's annual press conference. It's quite a big assignment and the event takes place in only five days, but you are not worried: the best practices you have developed through training and professional experience allow you to prepare efficiently and effectively. So you get started. You find out more about the company's departments and positions and prepare a multilingual list of speakers and participants. You learn about the company's products and create a glossary using your favourite computer-assisted interpreting tools. Of course, you also prepare strategically on numerical facts and organise key data for easy access in the booth.

The following statement will therefore strike you as puzzling:

"Preparation, as a means to overcome number problems, is not very efficient. Most numbers that arise in speeches do not form part of interpreters' general knowledge."

I was surprised when I read this comment on one of my articles from a peer reviewer at one of the most influential scientific journals in interpreting research. After all, the conviction that preparation is key to interpreting numbers successfully does not seem to be shared by all experts in our field. Time and again, this observation has been confirmed in my research on the topic, as well as in my experience teaching my course on the simultaneous interpretation of numbers, speaking at conferences and exchanging views with colleagues. Many students, trainers and professionals are unclear about why, when and how preparation helps with interpreting numbers. In this blog post, I will try to clarify these points and share some general principles to guide your numerical preparation for assignments.

Of course, numerical preparation, just like general preparation, can only be efficient and effective if it is goal-directed and systematic. In other words, if we do not want to waste our time, and if we want to make sure that our numerical preparation actually improves the quality of our interpretation, we need a technique for numerical preparation: a set of procedures and methods that allow us to achieve the desired effect. To develop and refine such a technique, it is first and foremost important to define its purpose.

1. The function of preparation in interpreting

The function of numerical preparation can be compared to the function of preparation in general. When we prepare for our assignments, we conduct two distinct types of preparation: terminological and knowledge-based. Each type has its own function, and both are therefore necessary.
Terminological preparation is aimed at finding target-language equivalents for source-language terms. To be useful, this type of preparation should guarantee that we have identified all the terms most relevant to our assignment.
Knowledge-based preparation is aimed at acquiring encyclopaedic knowledge about the topic of our assignment. Such background knowledge is fundamental because it allows us to understand the meaning of the information in the source speech, summarise, reformulate, clarify concepts, and check our delivery for plausibility.

2. The function of numerical preparation: why and when is it important?

The same principles apply to numerical preparation. Preparation on numerical facts, too, can be divided into two types, each with its own function.
The first type of numerical preparation aims at identifying the key components of the numerical information unit. Numbers (the bare arithmetical value) are always accompanied by other elements that make up the information unit, such as the referent (the thing that the number quantifies or defines) and the unit of measurement (the accepted standard for measuring a quantity), as in the example below:

a 19[arithmetical value]-inch[unit of measurement] tablet [referent]

A number without a referent is like a sentence without a subject. All elements of the numerical information unit must be interpreted accurately to convey the information. For instance, if I didn't know the equivalent term for the unit of measurement 'inch' in the target language, I wouldn't be able to provide my audience with an accurate rendition of the source-language numerical information.
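The structure of the numerical information unit described above can be made concrete with a small data model. This is a hypothetical sketch for illustration only; the class and field names are mine, not established terminology in any interpreting tool.

```python
from dataclasses import dataclass

@dataclass
class NumericalInformationUnit:
    """Illustrative model of the three components discussed above."""
    value: float    # the bare arithmetical value, e.g. 19
    unit: str       # the unit of measurement, e.g. "inch"
    referent: str   # the thing being quantified, e.g. "tablet"

    def render(self) -> str:
        # Recompose the unit into a natural-language phrase.
        return f"a {self.value:g}-{self.unit} {self.referent}"

print(NumericalInformationUnit(19, "inch", "tablet").render())  # → a 19-inch tablet
```

If any one field were missing, the phrase could not be reconstructed completely, which mirrors the point that every component must be interpreted accurately.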

The second type of numerical preparation is aimed at acquiring encyclopaedic numerical knowledge about the topic of our assignment. Just as general encyclopaedic knowledge supports comprehension, knowledge of some reference numerical facts allows us to understand the numbers in the source speech. This enables us to apply interpreting strategies when needed or desirable. For instance, if I didn't know the correct translation of the word 'inch' but could convert this unit of measurement into centimetres, I would still be able to provide my audience with equivalent information. This type of preparation also allows us to perform a plausibility check on our delivery, which can save us from painful plausibility errors. For instance, knowing the rough real-world length of an inch, I would not confuse '19 inches' with '90 inches' when talking about the width of a laptop screen. Even though similar-sounding numerals are a frequent problem trigger, my background numerical knowledge would let me immediately judge the second option as implausible.
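The two fallback strategies just described, unit conversion and a rough plausibility check, can be sketched in a few lines of code. This is purely an illustration of the reasoning, not part of any interpreting tool; the plausible range for screen sizes is an assumption chosen for this example.

```python
INCH_TO_CM = 2.54  # exact by definition

def inches_to_cm(inches: float) -> float:
    """Fallback strategy 1: convert when the target-language term is unknown."""
    return inches * INCH_TO_CM

def plausible_screen_inches(value: float, low: float = 10.0, high: float = 40.0) -> bool:
    """Fallback strategy 2: gauge a heard value against an assumed
    real-world range for screen sizes (the bounds are illustrative)."""
    return low <= value <= high

print(inches_to_cm(19))             # 48.26
print(plausible_screen_inches(19))  # True
print(plausible_screen_inches(90))  # False: a 90-inch laptop screen is implausible
```

The same pattern applies to any benchmark value memorised during preparation: a quick comparison against a known range is enough to catch a misheard '90' for '19'.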
3. Numerical preparation: how to do it efficiently and effectively?

To summarise, we may distinguish two types of numerical preparation:
1) Preparation on the components of the numerical information unit, which, like terminological preparation, increases the accuracy of our delivery by ensuring that we can deliver the information rapidly and precisely in the target language;
2) Preparation on encyclopaedic numerical knowledge, which makes it possible for us to understand the meaning of the numerical information, select adequate strategies to solve interpreting-related problems (summarising, reformulating, clarifying etc.), and check our delivery for plausibility.
A common problem with numerical preparation is choosing the right elements to focus on. When preparing on numerical facts, remember a general rule of thumb: less is more, but may not be enough. You want to make sure that you do not waste your time trying to memorise endless lists of data, but you still need to find the fundamental information that will help you achieve the objective of a more accurate and effortless delivery. To make sure that you are preparing on numerical facts both efficiently and effectively, try asking yourself the following questions:

• What are the top 5 most important numerical facts about this event/topic?―You can certainly do more if you have time, but starting with the 5 most relevant numbers will help you focus your preparation and decrease the likelihood of overlooking fundamental facts that you really should know.
• What are the elements that accompany those 5 key numerical information units?―For each numerical fact, make sure to learn the fundamental elements of the numerical information unit in both the source and the target language. This will ensure that you can interpret the information completely and accurately.
• What are the benchmark values for these 5 key numerical facts?―Knowing some reference values in advance (for instance, the highest and lowest values) will allow you to gauge the plausibility of the information in the source speech and in your delivery.

If this all sounds too abstract, don’t worry! A course on the topic will soon be available on www.interpremy.com. We will also be holding a seminar on the topic at AIIC Germany’s PRIMS conference in July 2020. Get in touch to be kept posted!


About the author:

Francesca Maria Frittella

      • Conference Interpreter IT-EN-DE-CN, MA Germersheim, based in Beijing (China), fmfinterpreting.com, contact: francesca@interpremy.com
      • Researcher in interpreting pedagogy and course design
      • Co-founder of InterpreMY – my interpreting academy: online academy for interpreters with goal-centred, research-based courses, interpremy.com (coming soon: July 2020)
      • Publications:

    Frittella F. M. (2017) Numeri in interpretazione simultanea. Difficoltà oggettive e soggettive: un contributo sperimentale (in English, Numbers in Simultaneous Interpreting. Objective and Subjective Difficulties: An experimental study), Rome, Europa Edizioni.
    Frittella F. M. (2019) "70.6 billion world citizens: Investigating the difficulty of interpreting numbers", Translation & Interpreting 11/1.

Vögelchen füttern nicht vergessen! | Don’t forget to feed the birds! | ¡No se olviden de dar de comer a los pajaritos!

Frohe Festtage! * ¡Felices Fiestas! * Happy Holidays! * Joyeuses Fêtes!

Die drei Spatzen (The Three Sparrows)

 

 

#multitalkingfähig – Impressions from the 2019 BDÜ Conference in Bonn

No single person could possibly do justice to the potpourri of over 100 talks, discussions, seminars and workshops that the BDÜ conjured up last weekend at the BDÜ Conference in Bonn. My little report is therefore only a very personal slice of the experience. All the abstracts and articles can be read in much greater detail in the conference proceedings.

My first thought on the way home after the first day of the conference was: the way DeepL is studied like a kind of natural phenomenon, with a mixture of fascination and horror, strongly reminds me of how, some 15 years ago, everyone gazed at Google in amazement and tried to fathom its "behaviour". I felt a little sorry for poor DeepL: for a machine, it really achieves something remarkable, and yet among language professionals it reaps scorn everywhere for the (admittedly often funny) mistakes it produces. Fortunately, it did not stop there: for me at least, several highly interesting talks offered genuinely new and interesting insights into machine translation.

Patrick Mustu on DeepL and the translation of legal texts in the English-German language pair

Patrick Mustu delivered an account of what DeepL does with legal texts that was as well-founded as it was entertaining. What stuck with me most:

  • DeepL delivers different translations of the same text at different times. The parties hereto were "Die Parteien hieran" one day and "die Beteiligten" the next.
  • DeepL does not consult the official translations of EU legal texts. Abbreviations such as DSGVO (GDPR), perfectly familiar to lawyers and ordinary mortals alike, were never translated.
  • So-called doublets (any and all, null and void, terms and conditions) and triplets (right, title and interest) are not recognised as units of meaning but are translated individually and literally. They nicely illustrate what Patrick Mustu, referring to the translator's balancing act between freedom and accuracy, put so memorably: "Auch Übersetzer sind Interpreter." (Translators, too, are interpreters.)

Short seminar by Daniel Zielinski and Jennifer Vardaro on the terminology problem in machine translation

This seminar, which I found utterly fascinating, showed how the algorithms of commercially available online MT systems can be fed, i.e. trained, with customer-specific terminology. Thanks to this engine customization, one can help the system choose the "right" term from a range of possible candidates. In some texts, terminology carries most of the content, and it is also relevant for SEO. Improved terminology is therefore crucial to translation quality.

The presenters had tested a series of commercially available systems, including Amazon, Globalese, Google, Microsoft and SDL. For each system, they demonstrated the user interface and pricing model and evaluated the results. The reduction in terminology errors achieved by feeding in one's own terminology was presented, weighted by criticality; in some cases it exceeded 90%.

Interestingly, importing "bare" terminology did not produce results as good as feeding in whole sentences; in some cases the translations even got worse. Context, context, context evidently plays a major role in MT as well.

Incidentally, when I later asked the "terminology pope" Klaus-Dirk Schmitz about the megatrends in terminology, one of the first things he spontaneously mentioned was the question of how to integrate terminology sensibly into MT. The seminar by Daniel Zielinski and Jennifer Vardaro was evidently right on trend!

Nina Cisneros – Schriftdolmetschen (speech-to-text interpreting)

The talk by Nina Cisneros on Schriftdolmetschen, also known as live transcription or live captioning, took me from writing to speaking. This real-time "transcription" is normally monolingual and serves as hearing support, mostly for people with hearing impairments. As in simultaneous interpreting, you work in pairs, taking turns regularly because of the high cognitive load, and also correcting and supporting each other.

Both the conventional method of speech-to-text interpreting (hands and keyboard) and the software-based method (respeaking what is said and transcribing it with speech recognition software, in this case Dragon, with the help of a sound-dampening mask called a Silencer) were demonstrated.

The subsequent discussion about additional services that could be offered in multilingual conference interpreting in connection with speech-to-text interpreting was inspiring. In a kind of interlingual speech-to-text interpreting, or written simultaneous interpreting, the interpretation could be delivered not via microphone and headphones but in written form on a large screen, like live subtitling, or via an app on the audience's mobile devices. Beyond the language transfer itself, this can also provide welcome hearing support: some listeners find it easier to read a foreign language than to understand it by ear.
As an alternative to handing out countless headsets, such a solution would also be discreet and uncomplicated. Even today, receivers are sometimes dispensed with altogether and the interpretation is broadcast over the room's loudspeakers to the entire audience (whether or not they understand the source language).

One question remained unanswered: whether, in simultaneous interpreting, one could speak in a Dragon-friendly way as well as for the human audience; that is, not just clearly and at an even pace, but also dictating punctuation and speaking as "print-ready" as possible, for example without self-corrections. I think comma it would be worth a try exclamation mark

Even if, for me, part of the charm of a BDÜ conference lies in seeing which technological innovations are emerging among translators (in other words: what might spill over to interpreters in a few years), there were of course also many exciting talks on conference interpreting, and not only on the trending topic of RSI. What I had already noticed at DfD 2019 in the summer was confirmed here: it is worth letting graduates talk about their master's theses. Nora Brüsewitz reported on her thesis on automatic speech recognition as support for simultaneous interpreters. She tested the systems of Google, Watson (IBM), Aonix and Speechmatix, evaluating them against the criteria of numbers, proper names, terminology, homophones and illogical statements. Google came first overall; in the correct transcription of numbers, Watson was clearly ahead with 97%; for proper names it was Google with 92% (all the others were well below 40% here!); and in correcting illogical statements, all systems were astonishingly good, at between 70 and over 90% (details in the conference proceedings, well worth reading!). Altogether an approach whose development is worth keeping an eye on!

Sarah Fisher presented the results of her master's thesis Voices from the booth – collective experiences of working with technology in conference interpreting. Here too, the details are better taken from the thesis or the proceedings, but I would like to let one very telling slide speak for itself:

Sarah Fisher
We need tech to work

Last but not least, Claudio Fantinuoli offered a welcome change of perspective in his talk The Technological Turn in Interpreting: The Challenges That Lie Ahead, pointing out that without "new technologies" there would be no simultaneous interpreting at all. The big technological developments, Claudio said, happen quickly and independently of us. I could not have thought of a better closing word.

And once again it was confirmed: the be-all and end-all of every conference is the coffee breaks. In the two days (of three in total) I admittedly managed to talk to an estimated 0.1% of the more than 1,000 participants (and hear at most 10% of the presentations). But I could hardly have ploughed through enough books, journals, blogs, posts and websites in two days to gain, by reading alone, as much relevant information and as many impressions as I picked up here from person to person.

————————
About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf. She has been dedicated to knowledge management since the mid-1990s.