Mein Gehirn beim Simultandolmetschen | My brain interpreting simultaneously

Gehirn einer Simultandolmetscherin
conference interpreter’s brain

+++ for English, see below +++

Für gewöhnlich fragen wir uns ja eher, was gerade um Himmels willen im Kopf des Redners vorgeht, den wir dolmetschen. Unsere Kollegin Eliza Kalderon jedoch stellt in ihrer Doktorarbeit die umgekehrte Frage: Was geht eigentlich in den Dolmetscherköpfen beim Simultandolmetschen vor? Zu diesem Zweck steckt sie Probanden in die fMRT-Röhre (funktionelle Magnetresonanztomographie), um zu sehen, was im Gehirn beim Dolmetschen, Zuhören und Shadowing passiert. Und so habe auch ich mich im November 2014 aufgemacht an die Uniklinik des Saarlandes in Homburg. Nachdem uns dort Herr Dr. Christoph Krick zunächst alles ausführlich erklärt und ein paar Zaubertricks mit dem Magnetfeld beigebracht hatte (schwebende Metallplatten und dergleichen), ging es in die Röhre.

fMRT

Dort war es ganz bequem, der Kopf musste still liegen, aber die Beine hatten zu meiner großen Erleichterung viel Platz. Dann habe ich zwei Videos im Wechsel gedolmetscht, geshadowt und gehört, ein spanisches ins Deutsche und ein deutsches ins Spanische. Neben dem Hämmern der Maschine, das natürlich ein bisschen störte, bestand für mich die größte Herausforderung eigentlich darin, beim Dolmetschen die Hände stillzuhalten. Mir wurde zum ersten Mal richtig klar, wie wichtig das Gestikulieren beim Formulieren des Zieltextes ist. Nach gut anderthalb Stunden (mit Unterbrechungen) war ich dann einigermaßen k.o., bekam aber zur Belohnung nicht nur sofort Schokolade, sondern auch direkten Blick auf mein Schädelinneres am Computer von Herrn Dr. Krick.

Natürlich lassen sich bei einer solchen Untersuchung viele interessante Dinge beobachten. Beispielhaft möchte ich zum Thema Sprachrichtungen Herrn Dr. Krick gerne wörtlich zitieren, da er mir das Gehirngeschehen einfach zu schön erläutert hat: “Da Sie muttersprachlich deutsch aufgewachsen sind, ergeben sich – trotz Ihrer hohen Sprachkompetenz – leichte Unterschiede dieser ähnlichen Aufgaben bezüglich sensorischer und motorischer Leistungen im Gehirn. Allerdings möchte ich nicht ausschließen, dass der Unterschied durchaus auch an der jeweiligen rhetorischen Kompetenz von Herrn Gauck und Herrn Rajoy gelegen haben mag … Wenn Sie den Herrn Gauck ins Spanische übersetzt hatten, fiel es Ihnen vergleichsweise leichter, die Sprache zu verstehen, wohingegen Ihr Kleinhirn im Hinterhaupt vergleichsweise mehr leisten musste, um die Feinmotorik der spanischen Sprechweise umzusetzen.”

simultaneous interpreting German Spanish
Simultandolmetschen Deutsch (A) – Spanisch (B)

“Wenn Sie aber den Herrn Rajoy ins Deutsche übersetzt hatten, verbrauchte Ihr Kleinhirn vergleichsweise weniger Energie, um Ihre Aussprache zu steuern. Allerdings musste Ihre sekundäre Hörrinde im Schläfenlappen mehr sensorische Leistung aufbringen, um den Ausführungen zu folgen. Dies sind allerdings nur ganz subtile Unterschiede, die in der geringen Magnitude den Hinweis ergeben, dass Sie nahezu gleich gut in beide Richtungen dolmetschen können.”

simultaneous interpreting Spanish German
Simultandolmetschen Spanisch (B) – Deutsch (A)

Dies ist nur einer von vielen interessanten Aspekten. So war beispielsweise auch mein Hippocampus relativ groß – ähnlich wie bei Labyrinth-Ratten oder den berühmten Londoner Taxifahrern … Welche wissenschaftlichen Erkenntnisse sich aus der Gesamtauswertung der Studienreihe ergeben, dürfen wir dann hoffentlich demnächst von Eliza Kalderon selbst erfahren!

PS: Und wer auch mal sein Gehirn näher kennenlernen möchte: Eliza sucht noch weitere professionelle Konferenzdolmetscher/innen mit Berufserfahrung, A-Sprache Deutsch, B-Sprache Spanisch (kein Doppel-A!), ca. 30-55 Jahre alt und möglichst rechtshändig. Einfach melden unter kontakt@ek-translat.com

PPS: Auch ein interessanter Artikel zum Thema: http://mosaicscience.com/story/other-words-inside-lives-and-minds-real-time-translators

+++

Normally, we tend to wonder what on earth is going on in the mind of the speaker we are interpreting. Our colleague Eliza Kalderon, however, puts it the other way around: in her PhD thesis, she looks into what exactly happens in the brains of simultaneous interpreters. To find out, she puts test subjects into an fMRI scanner (functional magnetic resonance imaging) and monitors their brains while they interpret, listen and shadow. I was one of those volunteers and made my way to the Saarland University Hospital in Homburg, Germany, in November 2014. First of all, Dr. Christoph Krick gave us a detailed technical introduction, including a demonstration of how to do magic with the help of the magnetic field (floating metal plates and the like). And then off I went into the tube.

fMRT

To my delight, it was quite comfortable. My head wasn’t supposed to move, admittedly, but luckily my legs had plenty of room. Then Eliza made me listen to, interpret and shadow two videos: one from German into Spanish and one from Spanish into German. The machine hammering away around my head was a bit of a nuisance, obviously, but apart from that, the biggest challenge for me was keeping my hands still while interpreting. I hadn’t realised until then how important gesturing is when formulating the target text. After a good hour and a half’s work (with short breaks), I was rather knocked out, but I was rewarded promptly: not only was I given chocolate right after the exercise, I was even allowed a glance at the inside of my skull on Dr. Krick’s computer.

There are of course a great many interesting phenomena to observe in such a study. To describe one of them, I would like to quote Dr. Krick’s explanation verbatim, simply because he put the goings-on in my brain so nicely: “As you have grown up with German as your mother tongue, there are, despite your high level of linguistic competence, slight differences between these two similar tasks in terms of sensory and motor performance in your brain. However, it cannot be ruled out that these differences might also be attributable to the respective rhetorical skills of Mr. Gauck and Mr. Rajoy. When interpreting Mr. Gauck into Spanish, understanding took comparatively less effort, while the cerebellum in the back of your head had to work comparatively harder in order to produce the fine motor movements of spoken Spanish.”

simultaneous interpreting German Spanish
Simultaneous interpreting German (A) – Spanish (B)

“When, on the other hand, interpreting Mr. Rajoy into German, your cerebellum needed comparatively less energy to control your pronunciation. Instead, your secondary auditory cortex, located in the temporal lobe, had to make a greater sensory effort in order to follow what was being said. These differences are, however, very subtle; their low magnitude actually suggests that you interpret almost equally well in both directions.”

simultaneous interpreting Spanish German
Simultaneous interpreting Spanish (B) – German (A)

This is only one of many interesting aspects. Another one worth mentioning might be the fact that my hippocampus was slightly on the big side – just like in maze rats or the famous London cab drivers … I am really looking forward to getting the whole picture and reading about the scientific conclusions Eliza draws once the entire series of studies has been evaluated!

PS: If you, too, would like to get a glimpse inside your head: Eliza is still looking for volunteers! If you are a professional, experienced conference interpreter with German A and Spanish B as working languages (no double A!), aged about 30-55 and preferably right-handed, feel free to get in touch: kontakt@ek-translat.com

PPS: Some further reading: http://mosaicscience.com/story/other-words-inside-lives-and-minds-real-time-translators

PPPS: A Portuguese version of this article can be found on Marina Borges’ blog: http://www.falecommarina.com.br/blog/?p=712

Comments

7 responses to “Mein Gehirn beim Simultandolmetschen| My brain interpreting simultaneously”

  1. Rafal Rogowski

    Congratulations on your very interesting and fun article. I’d love to have the same done on me. A pity that my combination is not what Eliza is looking for 🙂
    Regarding the fMRI study: do you think the images differ depending on the interpreter’s technique: one ear vs. two ears? And WHICH one ear? I imagine the sensory patterns could vary even if the auditory tracts partly cross to the contralateral hemisphere and partly stay ipsilateral. I work with the speaker in my left ear (it has to be the left) and my own voice in the right one. What is your technique?

    1. AnjaRuetten

      Dear Rafal, in fact I work the same way you do (left ear covered). In the study, both ears were completely covered for practical reasons. I would indeed be curious to know what Eliza or Dr. Krick think about your question!

      1. Rafal Rogowski

        Dear Anja, I admit it was silly of me to ask if you had one ear open in the MRI machine, which emits a loud hammering noise. Having given this some thought, I came up with an experimental setup last night. To create a sort of “open ear” situation, we need a skilled acoustician with a soldering iron 🙂 and a processing unit supporting a menu of bandpass filters. Then we get a pair of well-padded, strongly attenuating headphones with a mike, direct the speaker’s voice to one ear and the interpreter’s voice to the other, while applying a set of filters to additionally cut out the hammering of the MRI machine, which would of course be picked up by the microphone. Do you think that’s feasible?

  2. Dr. Krick

    Dear Rafal, astonishingly, a right-ear advantage has previously been found in several dichotic listening studies. Hence, left-ear listening in interpreters looks like a nice thing to investigate… Do you feel like thinking about a new neuroscience project with me…? 😉

    1. Rafal Rogowski

      Dear Dr. Krick, could the right-ear advantage be related to the location of the motor and sensory speech centres in the left temporal lobe? Is the concept of locating the speech centres in that region still current (my neuro-otology has gotten rusty)? If so, could it be that most of the speech signal from the right ear is switched to the left side early (with less synaptic delay and less complex switching), while most of the left-ear signal has to play around (possibly dissipating) in the right-side cortex before it gets switched to the left via the corpus callosum?

      1. Dr. Krick

        Dear Rafal, here you may find recent observations about cerebral connections and language lateralization:
        http://www.sciencedirect.com/science/article/pii/S0304394014006259

        The German-Norwegian team concluded: “tract volume and fractional anisotropy of the left ARCUATE fasciculus were positively correlated to the strength of functional language lateralization, as was the volume of the right UNCINATE fasciculus. In conclusion, the results of the present study suggest that both micro- and macro-structural properties of language-relevant intrahemispheric white matter tracts modulate the behavioral correlates of language lateralization”.

        The left ARCUATE fasciculus connects language-related sites (Broca’s and Wernicke’s areas), whereas the UNCINATE fasciculus seems not to be directly involved in language processing. However – as you wrote – the neural pathway splits the acoustic information from one ear over several centres in the brain stem towards both hemispheres. Normally, the crossed pathway is preferred in healthy subjects. Possibly you naturally prefer to monitor your own words via the crossed pathway… However, you are also able to control your attention top-down towards each of the two ears when you have to follow two acoustic streams simultaneously. Perhaps the conscious control of attentional switching plays a role in your left-ear preference… It’s an exciting field of research…!

  3. Rafal Rogowski

    Looks cool! By the way, I’m right-handed, -eyed and -legged 🙂
