Interpreting and the Computer – finally a happy couple?

This year’s 39th edition of the Translating and the Computer conference – which Barry Olsen quite rightly suggested renaming to Translating and Interpreting and the Computer :-) – had a special focus on interpreting, so obviously I had to go to London again! And I was all the more looking forward to it because, thanks to Alex Drechsel’s and Josh Goldsmith’s wonderful idea, I was going to be a panelist for the first time in my life (and you could tell just how excited we all were about our panel, and the whole conference for that matter).

The panelists were (from left to right):
Joshua Goldsmith (EU and UN accredited freelance interpreter, teacher and researcher, Geneva)
Anja Rütten (EU and UN accredited AIIC freelance interpreter, teacher and researcher, Düsseldorf)
Alexander Drechsel (AIIC staff interpreter at the European Commission, Brussels – don’t miss Alex’s report about Translating and the Computer 39, including lots of pictures of the event!)
Marcin Feder (Head of Interpreter​ Support and Training Unit at the European Parliament, Brussels)
Barry Slaughter Olsen (AIIC freelance interpreter, Associate Professor at MIIS Monterey and Co-President of InterpretAmerica)
Danielle D’Hayer, our moderator (Associate Professor at London Metropolitan University)

If you have an hour to spare, here’s the complete audio recording (thanks for sharing, Alex!): "Live at TC39: New Frontiers in Interpreting Technology" on Spreaker.

If I had to summarise the conference in one single word, it would be convergence. It appears to me from all the inspiring contributions I heard that things are finally starting to fall into place, converging towards supporting humans in their creative tasks and decision-making by sparing them the mechanical, stupid work. This obviously does not only apply to the small world of interpreting, but to many other professions, too. "It is not human against machine, but human plus machine", as Sarah Griffith Masson, Chair of the Institute of Translation and Interpreting (ITI) and Senior Lecturer in Translation Studies at the University of Portsmouth, put it in her speech.

OK, this is easy to say at a conference called "Translating and the Computer", where the audience is bound to be a bit on the nerdy side. And the truth is, Gloria Corpas Pastor, Professor of Translation and Interpreting at the University of Malaga, presented some slightly sobering results of her survey on the use of computers among translators and interpreters. It looks like interpreters are less technology-prone than translators, a fact that made most of us in the audience nod knowingly. But there is no reason to be pessimistic, given the many interesting use cases presented at the conference, plus the efforts being made, for example, at the European Commission and Parliament to provide conference interpreters with the tools they need for their information and knowledge management, as Alexander Drechsel and Marcin Feder reported.

So while to everyone who is not an interpreter, interpreters rather seem to be the frontier to be overcome by technology, the conference was all about new frontiers in interpreting in the sense of how technology can best be used to support interpreters and turn the relationship between interpreters and computers into a symbiotic one. Here are the key ideas I personally took home from the conference:

Whatsappify translators’ software

A question asked by several representatives from international organisations like WIPO (who, by the way, have this wonderful online term database called WIPO Pearl), the EU, the WTO and the UN was what our ideal software support for the booth would look like. Unfortunately, the infallible information butler described back in 2003 has not become reality yet, but many things like intuitive searching and filtering, parallel reading/scrolling of documents in two languages, and linking a term in its textual context to the corresponding entry in the term database have been around for twenty years in Translation Memory systems. Most international organisations have so many translation resources that could be tapped if only access to them were open and a bit more tailored to the needs of interpreters. Translators and interpreters could then benefit from each other’s work much more than they tend to do nowadays. Obviously, a lot could be gained by developing more interpreter-friendly user interfaces.

Which reminds me a bit of WhatsApp. People who wouldn’t go anywhere near a computer before, and could hardly manage to receive, let alone write, an email, seem to have become heavy WhatsApp users with the arrival of smartphones. While good old email has been offering pretty much the same functions AND doesn’t force you to always use the same device, it is stupid WhatsApp that has finally turned electronic written communication into the normal thing to do, simply by being much more fashionable, intuitive and user-friendly. So maybe what we need is a "WhatsAppification" of Translation Memory systems in order to make them more attractive (not to say less ugly, to quote Josh Goldsmith) to interpreters?

Making the connection between glossaries and documents

Clearly, in the world of glossary or terminology management for simultaneous interpreting, of the nine interpreter-specific solutions I am aware of and had the honour to present in a workshop (thanks to everyone for showing up at 9 am!), InterpretBank and InterpretersHelp are the most forward-moving, ambitious and innovative. InterpretersHelp has just released a term extraction feature (to be tested soon) similar to that of Intragloss, i.e. you can easily add terms from parallel reference texts to your glossary. InterpretBank has even integrated a real term extraction feature similar to that of SketchEngine (also to be tested soon). If interpreters cannot be bothered to use translation memories after all, maybe that’s the way forward.

Automatic speech recognition reducing interpreters’ listening effort

Claudio Fantinuoli from Germersheim presented InterpretBank’s latest beta function: it uses speech recognition to provide a live transcription of the speech, extracts numbers, names and technical terms and displays them, the latter together with their target-language equivalents from the glossary. There is an impressive demo video giving a glimpse of what is technically feasible.

Admittedly, the demo was made in a controlled environment, with the speaker pronouncing clearly and speaking British English. But still, there is reason to hope for more!

There was a nice coincidence that struck me in this context: I recently conducted a case study (to be published in 2018) in which I analysed interpreters’ booth notes. In this study, numbers, acronyms (mostly names of organs or organisations) and difficult technical terms (mainly nouns) were the items most frequently written down – and this is exactly what InterpretBank automatically highlights on the transcription screen.
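Purely to illustrate the principle, here is a toy sketch of my own (this is emphatically not InterpretBank’s actual implementation; the function and the sample glossary are made up): picking numbers and known glossary terms out of a transcript and pairing the terms with their target-language equivalents.

```python
import re

def highlight(transcript, glossary):
    """Toy sketch: pick out numbers and known glossary terms from a transcript.
    `glossary` maps source-language terms to target-language equivalents."""
    # Numbers, including thousands separators and decimals.
    numbers = re.findall(r"\b\d[\d.,]*\b", transcript)
    lowered = transcript.lower()
    # Glossary terms that actually occur in the transcript.
    terms = {term: target for term, target in glossary.items()
             if term.lower() in lowered}
    return numbers, terms

glossary = {"connecting rod": "Pleuelstange", "mudguard": "Kotflügel"}
nums, terms = highlight(
    "The mudguard order of 12,500 units was confirmed in 2017.", glossary)
# nums -> ['12,500', '2017'], terms -> {'mudguard': 'Kotflügel'}
```

A real system would of course work on a live speech recognition stream and use proper tokenisation and named-entity recognition rather than plain substring matching.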

What I have always liked about InterpretBank, by the way, is the fact that there is always science behind it. This time, Bianca Prandi, a doctoral student at the University of Germersheim, presented the research she plans to carry out on the cognitive load of using computer-assisted interpreting (CAI) tools. I am really looking forward to hearing more about her work in the future.

The second speaker who showed a speech recognition function to support interpreters was keynote speaker Prof. Alexander Waibel – not a conference interpreter, for a change, but Professor of Computer Science at Carnegie Mellon University, Pittsburgh, and at the Karlsruhe Institute of Technology, Germany (who even has his own Wikipedia entry). During his extremely interesting and entertaining speech about deep learning, neural machine translation and speech recognition, he also presented a live transcript function to support interpreters in the booth.

Paper and electronic devices all becoming one thing

I very much enjoyed talking and listening to the two most tablet-savvy conference interpreters I am aware of (doubling as my co-panelists), Alexander Drechsel and Josh Goldsmith. I find the idea of using a tablet for note-taking very enticing, even more so after having seen Josh’s demo. And I don’t agree that the only reason to replace paper with a tablet is to look better or "just to try it out". Alex and Josh could name many advantages (consulting a dictionary or glossary in parallel, adjusting the pen colour, not having to turn the pages of your pad after every two sentences). The most obvious one to me, by the way, is that you don’t have to be afraid of running out of paper. And luckily, Josh’s study now tells us which devices are best suited to interpreters’ needs:

When we discussed the use of computers among interpreters and interpreting students in the panel, it was interesting to hear about the different experiences. Everyone seemed to agree that young interpreters or interpreting students, despite being "digital natives" and supposedly computer-savvy (which most panelists agreed is a myth), cannot necessarily be expected to be able to manage their information and knowledge professionally. On the other hand, common practice seemed to range from using paper, laptop computers or tablets to relying completely on smartphones for information management, as our wonderful panel moderator Danielle D’Hayer reported her students did. She seemed to me the perfect example of not "teaching" the use of technologies, but simply using them right from the beginning of the courses.

Remote everything: cloud-based online collaboration and distance interpreting

Although in the panel discussion not everyone seemed to share my experience, I think that team glossaries, nowadays more often than not created in Google Sheets, are about to become common practice in conference preparation. Apart from being great fun, it saves time, boosts team spirit, and improves the knowledge base of everyone involved. Not to mention the fact that it is device- and operating-system-neutral. There is, however, a confidentiality problem when handling sensitive customer data, but this could be solved by using encrypted solutions.

Now once we are all able to collaborate, prepare and get to know each other online, we seem to be perfectly prepared to work in simultaneous interpreting teams remotely, i.e. from different places. Luckily, the two most knowledgeable colleagues in remote interpreting I know of, Klaus Ziegler (AIIC freelance interpreter and chair of the AIIC technical committee) and Barry Olsen, were at the conference, too. There is so much to be said about this subject that it would fill several blog posts. My most important lessons were:

  • Remote interpreting technologies don’t necessarily imply lower rates in interpreting.
  • The sound quality of videoconferences via normal (private) phone lines is usually not sufficient for simultaneous interpreting.
  • The use of videoconference interpreting seems to be much more widespread in the U.S. than in Europe.
  • It is a good idea for conference interpreters’ associations like AIIC to play an active role (as Klaus Ziegler is thankfully doing) in the development of technologies and standards.

Simultaneous and consecutive interpreting merging into simconsec

The last things to be noted as converging thanks to modern technology are simultaneous and consecutive interpreting, i.e. using tablets and smartpens to record the original speech and replay it while rendering the consecutive interpretation. Unfortunately, there was no time to talk about this in detail, but here is a one-minute demo video to whet your appetite.

And last but not least: Thank you very much to Barry Olsen for the lovely live interview we had (not to be missed: the funny water moment)!

And of course: Spread the word about next year’s 40th anniversary of Translating and the Computer!

Extract Terminology in No Time | OneClick Terms | Ruckzuck Terminologie extrahieren

What do you do when you receive 100 pages to read five minutes before the conference starts? Right, you throw the text into a machine and get out a list of technical terms that give you a rough overview of what it’s all about. Now, finally, it looks like this dream has come true.

OneClick Terms by SketchEngine is a browser-based (a big like) terminology extraction tool which works really swiftly. It has all it takes and nothing more (another big like): Upload – Settings – Results.

Once you are logged in for your free trial, OneClick Terms accepts the formats tmx, xliff (2.x), pdf, doc(x), html and txt. The languages supported are Czech, German, English, Spanish, French, Italian, Japanese, Korean, Dutch, Polish, Portuguese, Russian, Slovak, Slovenian, Chinese Simplified and Chinese Traditional.

The settings, in my opinion, don’t really need to be touched. They include:

  • how rare or common the extracted terms should be
  • whether you would like to see the word form as it appears in the text or the base form
  • how often a term candidate must occur in the text in order to make it onto the list of results
  • whether you want numbers to appear in your results
  • how many terms your list of results should contain
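Just to make the role of these parameters concrete, here is a naive frequency-based extraction sketch of my own (OneClick Terms relies on SketchEngine’s far more sophisticated corpus statistics; the function, stopword list and sample text are all made up for illustration):

```python
from collections import Counter
import re

# A tiny stopword list; a real extractor would compare against a
# reference corpus instead of a hand-made list.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "for", "on", "is"}

def extract_terms(text, min_freq=2, include_numbers=False, max_terms=10):
    """Naive term extraction: count word frequencies, drop stopwords,
    optionally drop numbers, and keep the top candidates."""
    tokens = re.findall(r"[\w-]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    candidates = [(t, n) for t, n in counts.most_common()
                  if n >= min_freq                      # minimum occurrences
                  and (include_numbers or not t[0].isdigit())]
    return candidates[:max_terms]                       # list length cap

sample = ("The free flow of non-personal data supports data portability. "
          "Data portability rules apply to non-personal data.")
terms = extract_terms(sample, min_freq=2)
# -> [('data', 4), ('non-personal', 2), ('portability', 2)]
```

Each keyword argument corresponds to one of the settings above: minimum frequency, number filtering and result list length; the "rare vs. common" slider has no equivalent in this toy version, as that requires comparing against a general-language corpus.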

When I tried OneClick Terms, it delivered absolutely relevant results at the first go. I uploaded an EU text on the free flow of non-personal data (a pdf of about 100 pages) at about 8:55 am, and the result I got at 8:57, displayed right on the same website, looked like this (and yes, the small W icons behind the words are links to related Wikipedia articles!):

It actually required four clicks rather than one click, but the result was worth the effort. There isn’t a lot of "noise" (irrelevant terms) in the term candidate list, one of the things that often put me off in the past when I tried to use term extraction tools to prepare for an interpreting assignment. In the meeting where I tested OneClick Terms, the only term I missed in the results in the end was the regulatory scrutiny board. Interestingly, it was also missing from the list I had obtained from a German text on the same subject (Ausschuss für Regulierungskontrolle). But all the other relevant terms that popped up during the meeting were there. And what is more, by quickly scanning the extraction list in my target language, German, I could activate a lot of terminology I would otherwise definitely have had to think about twice while interpreting. So to me it definitely is a very efficient way of reducing the cognitive load in simultaneous interpreting.

The results list can be downloaded as a txt file, but copy & paste into MS Excel, for example, works just as well, and it puts both single- and multi-word terms into the same column. After unmerging all cells, the terms can easily be sorted by frequency, which makes your five-minute emergency preparation almost perfect (as perfect as a five-minute preparation can get, that is).
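If you prefer to skip the spreadsheet detour, the sort-by-frequency step can also be scripted. The tab-separated term/frequency layout assumed here is hypothetical, not the documented format of the downloaded file:

```python
import csv

def sort_by_frequency(path):
    """Read a results file assumed to contain term<TAB>frequency lines
    and return the terms sorted by descending frequency."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = [(term, int(freq))
                for term, freq in csv.reader(f, delimiter="\t")]
    return sorted(rows, key=lambda r: r[1], reverse=True)
```

With a file containing `data portability 12`, `scrutiny board 3` and `free flow 7` (tab-separated), this returns the three terms with `data portability` first and `scrutiny board` last.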

Furthermore, even if you do have enough time for preparation, extracting and scanning the terminology as a first step may help you to focus on the substance when reading the text afterwards.

There is a free one-month trial; after that, the service can be subscribed to from 100 EUR/year (or 12.32 EUR/month) plus VAT. It includes many other features, like bilingual corpus building – but that’s a different story.

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.


Team Glossaries | Tips & Tricks from Magda and Anja | DfD 2017

Here are a few screenshots and short notes from our quick demo on "Team glossaries in Google Sheets" at "Dolmetscher für Dolmetscher" (Interpreters for Interpreters) in Bonn on 15 September 2017. You will find more detailed thoughts on collaborative glossary work in the cloud in this blog post.

Magdalena Lindner-Juhnke und Anja Rütten

Basics of working together

  • Inviting the team to the online glossary
  • Expectations on both sides
  • "Share economy" vs. copyright protection / collaboration based on trust
  • Organising storage: personal and shared folders
  • Glossary structure – several tabs?
  • Advantages/concerns
  • Participants’ comments
  • "Live teamwork" before and during the conference
  • "practical to prepare […] TOGETHER", consistent terminology, time savings
  • Sorting the glossary in the order of the speech -> agree on it first
  • "I can handle Word better than Excel …"
  • "Sharing with other booths possibly too complicated?"
  • "Data protection?"


demo glossary

Word works, too

Structure (e.g. freezing rows/columns), or making a copy

copy document

The most important tricks to keep you from despairing

  • Text wrapping or overflow, text alignment within a cell
  • Line breaks within a cell (Alt+Enter) – careful, this may break the table structure when exporting
  • Searching (Ctrl+F), searching all sheets
  • Copying cells
  • Adding rows/tabs
  • Freezing columns/rows
  • Giving everyone a "private column" for their own priorities, notes etc.
  • Uploading a spreadsheet file (awkward to edit) vs. creating the sheet directly in Google – ask colleagues or attend a webinar!

Comments, chat, comment column

comment in table

Sorting, chronology

(don’t re-sort without agreeing on it first, unless the original order can be restored)

sort chronologically

Data backup (what if someone else destroys the complete glossary?)

Private filter views (for large amounts of data)

Downloading, printing, or copy & paste

"Aligning" parallel texts

Alternative: Airtable (encrypted)

Paperless Preparation at International Organisations – an Interview with Maha El-Metwally

Maha El-Metwally has recently written a master’s thesis at the University of Geneva on preparing for conferences of international organisations using tablets. She is a freelance conference interpreter with Arabic A, English B, and French and Dutch C, domiciled in Birmingham.

How come you know so much about the current preparation practice of conference interpreters at so many international institutions?

The answer is quite simple really: I freelance for all of them! I am also pro paperless environments for obvious environmental and practical reasons. So even if some organisations offer a paper alternative (ILO, IMO, UNHQ, WFP) I go for the electronic version. Paperless portals of international organisations may differ in layout and how the information is organised but they essentially aim to achieve the same thing. Some organisations operate a dual document distribution system (paper and digital) with the aim of phasing out the former over time.

The European Parliament is already on its second paperless meeting and document portal: it used to be called Pericles, and now it is called MINA, the Meeting Information and Notes Application. The switch required a bit of practice to become familiar with the new features.

I recently heard someone working in one of these paperless environments complain about the paperless approach, saying that they often struggle to find their way through a 400-page document quickly. My first reaction was to say that hitting Ctrl+F or Ctrl+G is an efficient way to get to a certain part of a text quickly. But maybe there is more to it than just shortcuts. What is it, in your experience, that makes it difficult for colleagues to find their way around on a tablet or laptop computer?

I think that tablets represent a change, and people in general resist change. It could be that we are creatures of habit. We are used to a certain way of doing things, and some of us may have difficulty coping with all the changes coming our way in terms of technological developments. It could also be that some interpreters do not see the point of technology, so they are not motivated to change something that works for them.

How is the acceptance of going paperless in general in the institutions you work for?

This depends on individual preferences. Many colleagues still prefer paper documents but I also see more and more tablets appearing in the booths. Some organisations try to accommodate both preferences. The ILO operates a dual distribution system as a step towards going completely paperless. Meeting documents are available on the organisation’s portal but are also printed and distributed to the booths. The same goes for the IMO where the interpreters are given the choice of paper or electronic versions of the documents or both.

Right, that’s what they do at SCIC, too. I take it you wrote your master’s thesis about paperless preparation, is that right? Was the motivational aspect part of it? Or, speaking of motivation: what made you choose this subject in the first place?

Yes, this is correct. I am very much of a technophile, and anything technological interests me. I was inspired by a paperless preparation workshop I attended at the European Parliament. It made sense to me, as a lot of the time I have to prepare on the go. It happens that I start the week with one meeting and end it with another. Carrying wads of paper around is not practical; having all meeting documents electronically in one place is handy. It also happens a lot that I receive meeting documents at the last minute, when there is no time to print them. So I learned to read and annotate the documents in apps on my tablet.

So while you personally basically did „learning by doing“, your researcher self tried to shed some more scientific light on the subject. Is that right? Would you like to describe a bit more in detail what your thesis was about and what you found was the most interesting outcome?

My thesis looked at training conference interpreting students to prepare for conferences of international organisations with the use of tablets. I noticed from my own experience and from anecdotes of older colleagues that meetings were getting more and more compressed. As a result, especially in peak seasons, interpreters may start the week with one conference and end it with another. Preparation on the go became a necessity. In addition, there are several international organisations that are moving towards paperless environments. Therefore, I think it is important for students to be introduced to paperless preparation at an early stage in their training for it to become second nature to them by the time they graduate. And what better tool to do that than the tablet? I created a course to introduce students to exactly that.

So when you looked at the question, was your conclusion that tablets are better suited than laptop computers? Currently, it seems to me that on the private market almost everyone uses laptops and at the EU, most people use tablets. I personally prefer a tablet for consecutive, but a laptop in the booth, as I can look at my term database, the internet and room documents at the same time more conveniently. I also blind-type much faster on a „real“ keyboard. I hope that the two devices will sooner or later merge into one (i.e. tablets with decent hard drives, processors and operating systems).

Now, from your experience, which of the two options would you recommend to whom? Or would you say it should always be tablets?

I prefer the tablet when travelling as:
– it is quieter in the booth (no tapping or fan noise),
– using an app like Side by Side, I can split the screen to display up to four apps/files/websites at the same time, so the laptop has no advantage over the tablet here,
– it is lighter.

You have created a course for students. What is it you think students need to be taught? Don’t they come to the university well-prepared when it comes to handling computers or tablets?

The current generation of students is tech-savvy, so they are more likely to embrace tablets and go fully digital. The course I put together for teaching preparation with tablets relies on the fact that students already know how to use tablets. The course introduces the students to the paperless environments of a number of international organisations, and it looks at apps for annotating different types of documents, glossary management, and more efficient Google searching, among other things.

I also like to use the touchscreen of my laptop for typing when I want to avoid noise. But compared to blind-typing on a „normal“ keyboard, I find typing on a touchscreen a real pain. My impression is that when I cannot feel the keys under my fingers, I will never be able to learn how to type, especially blind-type, REALLY quickly and intuitively … Do you know of any way (an app, a technique) of improving typing skills on touchscreens?

I’m afraid I don’t really have an answer to that question. I am moving more and more towards dictating my messages instead of typing them and I am often flabbergasted at how good the output is, even in Arabic!

Talking about Arabic, is there any difference when working with different programs in Arabic?

Most of the time, I can easily use Arabic in different apps. The biggest exception is Microsoft Office on Mac. Arabic goes berserk there! I have to resort to Pages or TextEdit then. Having said that, a colleague just mentioned yesterday that this issue has been dealt with. But I have to explore it.

As to glossary management, not all terminology management tools for interpreters run on tablets. Which one(s) do you recommend to your students or to colleagues?

I use and recommend Interplex. It has a very good iPad version. The feature I like most about it is that you can search across your glossaries. I can do that while working and it can be a life saver sometimes!

If I wanted to participate in your seminar, where could I do that? Do you also do webinars?

I offer a number of seminars on technology for interpreters to conference interpreting students at some UK universities. I will keep you posted. I also have an upcoming eCPD webinar on September 19th on a hybrid mode of interpreting that combines the consecutive and simultaneous modes.

That sounds like a great subject to talk about next time!

InterpretBank 4 review

InterpretBank by Claudio Fantinuoli, one of the pioneers of terminology tools for conference interpreters (or CAI tools), was full to the brim with useful functions and settings that hardly any other tool offers even before the new release. It was presented in one of the first articles of this blog, back in 2014. So now I was all the more curious to find out about the fourth generation, and I am happy to share my impressions in the following article.

Getting started

It took me two minutes to download and install the new InterpretBank and set my working languages (one mother tongue plus four languages). My first impression is that the user interface looks quite familiar: language columns (still empty) and the familiar buttons to switch between edit, memorize and conference mode. The options menu lets you set display colours, row height and many other things. You can select the online sources for looking up terminology (Linguee, IATE, LEO, DICT, WordReference and Reverso) and definitions (Wikipedia, Collins), and set up automatic translation services (searching IATE and old glossaries, using different online translation memories like Glosbe and others).

Xlsx, docx and ibex (the proprietary InterpretBank format) files can be imported easily, and unlike in the former InterpretBank, I no longer have to change the display settings in order to have all five of my languages displayed. A great improvement! Apart from the terms in five languages, you can import an additional "info" field and a link related to each language, as well as a "bloc note" that refers to the complete entry.

Data storage and sharing

All glossaries are saved on your Windows or Mac computer in a single database. I haven’t tested the synchronisation between desktop and laptop, which is done via Dropbox or any other shared folder. The online sharing function using a simple link worked perfectly fine for me. You just open a glossary, upload it to the secure InterpretBank server, get the link and send it to whomever you like, including yourself. On my Android phone, the plain two-language interface opened smoothly in Firefox. And although I always insist on having more than two languages in my term database, I would say that for mobile access, two languages are perfect: consecutive interpreting usually happens back and forth between two languages, and squeezing more than two languages onto a tiny smartphone screen might not be the easiest thing to do either.

I don’t quite get the idea why I should share this link with colleagues, though. Usually you either have a shared glossary in the first place, with all members of the team editing it and making contributions, or everyone has separate glossaries and there is hardly any need for sharing. If I wanted to share my InterpretBank glossary at all, I would export it and send it via email or copy it into a cloud-based team glossary, so that my colleagues could use it at their convenience.

The terminology in InterpretBank is divided into glossaries and subglossaries. Technically, everything is stored in one single database, “glossary” and “subglossary” just being data fields containing a topic classification and sub-classification. Importing only works glossary by glossary, i.e. I can’t import my own (quite big) database as a whole, converting the topic classification data fields into glossaries and sub-glossaries.
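Conceptually, such a single-database design is easy to picture. The sketch below is my own guess at the idea (table and field names are hypothetical, not InterpretBank’s actual schema): „glossary“ and „subglossary“ are plain classification columns, and a glossary is nothing more than a filtered view of one big table.

```python
import sqlite3

# Hypothetical sketch of a single-database glossary store where
# "glossary" and "subglossary" are mere classification fields.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE terms (
        glossary    TEXT,   -- topic classification
        subglossary TEXT,   -- sub-classification
        de TEXT, en TEXT, es TEXT, fr TEXT, nl TEXT,
        info TEXT, bloc_note TEXT
    )
""")
conn.execute("INSERT INTO terms (glossary, subglossary, de, en) VALUES (?, ?, ?, ?)",
             ("cars", "engine", "Pleuelstange", "connecting rod"))
conn.execute("INSERT INTO terms (glossary, subglossary, de, en) VALUES (?, ?, ?, ?)",
             ("cars", "body", "Kotflügel", "mudguard"))

# All terms live in one table; a "glossary" is just a filtered query.
rows = conn.execute(
    "SELECT de, en FROM terms WHERE glossary = 'cars' ORDER BY rowid"
).fetchall()
print(rows)  # [('Pleuelstange', 'connecting rod'), ('Kotflügel', 'mudguard')]
```

Seen this way, the one-table design explains why everything can be searched at once while still being presented as separate glossaries.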

Glossary creation

After having imported an existing glossary, I now create a new one from scratch (about cars). In edit mode, with the display set to two languages only, InterpretBank will look up translations in online translation memories for you. All you have to do is press F1 or use the right mouse button; or, if you prefer, the search is done automatically when you press the tab key, i.e. jump from one language field to the next – empty – one. When I tried “Pleuelstange” (German for connecting rod), no Spanish translation could be found. But on my second try, “Kotflügel” (German for mudguard), the Spanish “guardabarros” was found in MediaWiki.

By pressing F2, or right-clicking the term you want a translation for, you can also search your pre-selected online resources for translations and definitions. If, however, all your language fields are filled and you only want to double-check, or suspect that what is in your glossary isn’t correct, the program will tell you that nothing is missing and therefore no online search can be made. Looking up terminology in several online sources in one go is something many a tool has tried to make possible, and I must say that I quite like the way InterpretBank displays the online search results. It opens one (not ten or twenty) browser tab, where you can select the different sources to see the search results.

The functions for collecting reference texts on specific topics and extracting relevant terminology haven’t yet been integrated into InterpretBank (but, as Claudio assured me, will be in the autumn). However, the functions are already available in a separate tool named TranslatorBank (so far for German, English, French and Italian).

Quick lookup function for the booth

While searching in „normal“ edit mode is accent and case sensitive, in conference mode (headset icon) it is intuitive and hardly demands any attention. The incremental search function will narrow down the hit list with every additional letter you type. And there are many options to customize the behaviour of the search function. Actually, the „search parameters panel“ says it all: Would you like to search in all languages or just your main language? Hit enter or not to start your query? If not, how many seconds would you like the system to wait until it starts a fresh query? Ignore accents or not? Correct typos? Search in all glossaries if nothing can be found in the current one? Most probably very useful in the booth.

When toying around with the search function, I didn’t find my typos corrected – at least not that I was aware of. When typing „gardient“, I would have expected the system to correct it to „gradient“, which it didn’t. However, when I typed „blok“, the system deleted the last letter and returned all the terms containing „block“. Very helpful indeed.
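The behaviour described above – narrowing with every letter, and trimming the final letter when nothing matches – can be sketched in a few lines (my own reconstruction of the observed behaviour, not InterpretBank’s actual code):

```python
def incremental_search(query, terms):
    """Narrow down the hit list with every letter typed; if nothing
    matches, drop trailing letters until something does (the observed
    'blok' -> 'block' behaviour)."""
    q = query.lower()
    while q:
        hits = [t for t in terms if q in t.lower()]
        if hits:
            return hits
        q = q[:-1]  # fallback: delete the last letter and retry
    return []

terms = ["engine block", "blocking valve", "gradient", "mudguard"]
print(incremental_search("blok", terms))  # ['engine block', 'blocking valve']
```

Note that this kind of fallback handles trailing typos only; „gardient“ would still find nothing useful, which matches what I saw.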

In order to figure out how the system automatically referred to IATE when no results were found in my own database, I entered „Bruttoinlandsprodukt“ (gross domestic product in German). At first, the system froze (in shock?), but then the IATE search result appeared in four of my five languages in the list, as Dutch isn’t supported and would have to be bought separately. At least I suppose it was the IATE result, as the source wasn’t indicated anywhere and it just looked like a normal glossary entry.

Querying different web sources by hitting F2 also works in booth mode, just as described above for edit mode. The automatic translation (F1) only works in a two-language display, which in turn can only be set in edit mode.

Memorize new terms

The memorizing function, in my view, hasn’t changed too much, which is good because I like it the way it was before. The only change I have noticed is that it will now let you memorize terms in all your languages and doesn’t only work with language pairs. I like it!


All in all, in my view InterpretBank remains number one in sophistication among the terminology tools made for (and mostly by) conference interpreters. None of the other tools I am aware of covers such a wide range of an interpreter’s workflow. I would actually not call it a terminology management tool, but a conference preparation tool.

The changes aren’t as drastic as I would have expected after reading the announcement, which isn’t necessarily a bad thing, the old InterpretBank not having been completely user-unfriendly in the first place. But the user interface has indeed become more intuitive and I found my way around more easily.

The new online look-up elements are very relevant, and they work swiftly. Handling more than two languages has become easier, so as long as you don’t want to work with more than five languages in total, you should be fine. If it weren’t for the flexibility of a generic database like MS Access and the many additional data fields I have grown very fond of, like client, date and name of the conference, or degree of importance, I would seriously consider becoming an InterpretBank user. But then, even if one prefers keeping one’s master terminology database in a different format, thanks to the export function InterpretBank could still be used for conference preparation and booth work „only“.

Finally, what with online team glossaries becoming common practice, I hope to see a browser-based InterpretBank 5 in the future!

PS: One detail worth mentioning is the log file InterpretBank saves for you if you tell it to. Here you can see all the changes and queries made, which I find a nice thing not only for research purposes, but also to do a personal follow-up after a conference (or before the next conference of the same kind) and see which were the terms that kept my mind busy. Used properly, this log file could serve to close the circle of knowledge management.
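Assuming a simple line-based log format (the format below is made up for illustration; InterpretBank’s real log may differ), such a personal follow-up of the most-queried terms takes only a few lines:

```python
from collections import Counter

# Hypothetical log excerpt: one query per line, "timestamp<TAB>term".
log = """\
2017-11-16 09:02\tgross domestic product
2017-11-16 09:15\tconnecting rod
2017-11-16 10:40\tgross domestic product
2017-11-16 11:05\tmudguard
2017-11-16 14:22\tgross domestic product
"""

# Count which terms kept my mind busy during the conference.
queries = [line.split("\t", 1)[1] for line in log.splitlines()]
top = Counter(queries).most_common(2)
print(top)  # [('gross domestic product', 3), ('connecting rod', 1)]
```

A ranking like this could feed directly into the preparation for the next conference of the same kind.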

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Zeit sparen bei der Videovorbereitung | How to save time when preparing video speeches

Videos are a great source of preparation material for a conference – with one drawback: they eat up your time. Unlike with a written text, you cannot skim the whole thing, go deeper into the important passages, highlight them and scribble notes all over them. Luckily, Alex Drechsel has given the problem some thought in a blog post and dug up a few tools that put an end to the misery. Thanks, Alex 🙂

Please enjoy Alex Drechsel’s blog post on how to make preparing video speeches less of a hassle.

Booth notes wanted for a study | Kabinenzettel für Studienzwecke gesucht

Dear fellow conference interpreters! For a study on information management in the booth, I am currently collecting sample booth notes (those papers you scribble terminology, names, numbers, acronyms or whatever on). So if you would like to make your personal contribution to this study, it would be great if you could email or WhatsApp me a scan or photo of your booth notes to or +49 178 2835981. Your notes will of course be treated confidentially. Thanks a lot in advance!

Liebe DolmetschkollegInnen! Für eine informationswissenschaftliche Studie sammle ich derzeit Kabinenzettel, also die Blätter, auf denen Ihr Eure Notizen jeglicher Art verewigt, sei es Terminologie, Namen, Abkürzungen oder was auch immer. Wenn Ihr mir also einen Eurer Schriebe zur Verfügung stellen möchtet, würde ich mich sehr freuen. Gerne als Scan oder Foto emailen oder appen an oder 0178 2835981. Natürlich wird alles vertraulich behandelt. Schon jetzt mein herzliches Dankeschön!

Can computers outperform human interpreters?

Unlike many people in the translation industry, I like to imagine that one day computers will be able to interpret simultaneously between two languages just as well as or better than human interpreters do, what with artificial neural networks and their pattern-based learning. After all, once hardware capacity allows for it, an artificial neural network will be able to hear and process many more instances of spoken language and the underlying content than my tiny brain will in all its lifetime. So it may recognise and understand the weirdest accents and the most complicated subject matter just because of the sheer amount of information it has processed before and the vast ontologies it can rely on. (And by that time, we will most probably not only be able to use digital interpreters, but also digital speakers.)

The more relevant question by then might rather be if or when people will want to have digital interpretation (or digital speakers in the first place). How would I feel about being replaced by a machine interpreter, people often ask me over a cup of coffee during the break. Actually, the more I think about it, the more I realise that in some cases I would be happy to be replaced by a machine. And it is good old Friedemann Schulz von Thun I find particularly helpful when it comes to explaining when exactly I find that machine interpreters might outperform (out-communicate, so to say) us humans (or machine speakers outperform humans).

As Friedemann Schulz von Thun already put it back in 1981 in his four sides model, communication happens on four levels:

The matter layer contains matter-of-fact statements – data and facts – which form the factual news conveyed.

In the self-revealing or self-disclosure layer, the speaker – consciously or not – reveals something about himself, his motives, values, emotions etc.

The relationship layer expresses, and is received as, how the sender gets along with the receiver and what he thinks of him.

The appeal layer contains the desire, advice, instruction and effects that the speaker is seeking.

We both listen and speak on those four layers, be it on purpose or inadvertently. But what does that mean for interpretation?

In terms of technical subject matter, machine interpretation may well be superior to humans, whose knowledge base, despite their best efforts, will always cover a relatively small part of the world’s knowledge. Some highly technical conferences consist of long series of monodirectional speeches given just for the sake of it, at breakneck pace and with no personal interaction whatsoever. When the original offers few „personal“ elements of communication (i.e. layers 2 to 4) in the first place, rendering a vivid and communicative interpretation into the target language can be beyond what human interpretation is able to provide. In these cases, publishing the manuscript or a video might serve the purpose just as well, even more so in the future with increasing acceptance of remote communication. And if a purely „mechanical“ translation is what is actually needed and no human element is required, machine interpreting might do the job just as well or even better. The same goes for discussions of logistics (“At what time are you arriving at the airport?”) or other practical arrangements.

But what about the three other, more personal and emotional layers? When speakers reveal something about themselves and listeners want to find out about the other person’s motives, emotions and values or about what one thinks of the other, and it is crucial to read the message between the lines, gestures and facial expressions? When the point of a meeting is to build trust and understanding and, consequently, create a relationship? Face to face meetings are held instead of phone calls or video conferences in order to facilitate personal connections and a collective experience to build upon in future cooperation (which then may work perfectly well via remote communication on more practical or factual subjects). There are also meetings where the most important function is the appeal. The intention of sales or incentive events generally is to have a positive effect on the audience, to motivate or inspire them.

Would these three layers of communication, which very much involve the human nature of both speakers and listeners, work better with a human or a machine interpreter in between? Is a human interpreter better suited to read and convey personality and feelings, and will human interaction between persons work better with a human intermediary, i.e. a person? Customers might find a non-human interpreter more convenient, as the interpreter’s personality does not interfere with the personal relation between speaker and listener (though obviously it does not provide any facilitation either). This “neutral” interpreting solution could be all the more charming if it didn’t happen orally, but the translation was provided in writing, just like subtitles. This would allow the voice of the original speaker to set the tone. However, when it comes to the „unspoken“ messages, the added value of interpreters is in their name: they interpret what is being said and constantly ask the question „What does the speaker mean or want?“ Mood, mocking, irony, references to the current situation or persons present etc. will most probably not be understood (or translated) by machines for a long time, or never at all. But I would rather leave it to philosophers or sci-fi people to answer this question.


Knowledge management in conference interpreting – a bit of theory

What does knowledge management actually mean for conference interpreters?

What you need to know for an interpreting assignment is quickly described: pretty much everything, in all your working languages. At a washing machine manufacturer’s dealer conference, the talk may suddenly turn to football tactics; the CFO at an annual results press conference may have a weakness for Bible quotes. So as a matter of principle, preparation is never finished. But since time is money and even interpreters cannot know everything, the management aspect is decisive in our information and knowledge work: striving for the best possible result in view of the goal, instead of mindlessly working through everything.

The strategic element is the qualified decision between keeping information at hand (on paper or in the computer) and acquiring knowledge (in your head). This interplay between data, information and knowledge is typical of the different – interlocking – processing levels of interpreting preparation:

compiling data and researching relevant information (agenda, old minutes, presentations, manuscripts, websites of the parties involved, glossaries),

researching missing information such as factual context and terminological gaps, evaluating and processing the information,

using the processed information, i.e. comparing it against your own knowledge base, retrieving and integrating it (deliberately learning the most important terminology and facts, or making them intuitively findable during the assignment).

The semiotics of interpreting

Much like a system of signs, the interpreting situation comprises three dimensions of information and knowledge: the linguistic, perceivable form; the conceptual world (content); and the extralinguistic reality (situation). These three „cardinal directions“ can serve as an inner compass during interpreting preparation.

Unlike translators or terminologists, interpreters work for the moment, for a one-off situation with a defined group of participants. Accordingly, this situation carries more weight, but also grants more freedom. A „correct“ interpretation is whatever is accepted or preferred here and now, even if nobody outside this group would understand it. Conversely, what is correctly documented – by a standard, a reference work or specialist literature – will not necessarily be understood by the audience, be it because of their educational background or a different regional origin.

It is, of course, quite important to know the other-language terms for, say, Aktiengesellschaft, Vorstand, Aufsichtsrat and Verwaltungsrat (form). In an emergency, however, such terms can be looked up or (often more efficiently) obtained from your colleague with a push of the cough button. If the semantic relation between these concepts, or the differences between the systems of different countries, is not clear (content), filling this gap in understanding ad hoc becomes difficult, and the content-related interpreting errors caused by a lack of understanding often weigh more heavily than „the wrong word“ that nevertheless conveys the content. It can be even more irritating when situational understanding is missing, i.e. when the role of the chairman of the supervisory board is confused with that of the CEO. So however deeply you immerse yourself in language and subject matter, never forget to picture the concrete communication situation.

If information and knowledge management for interpreters thus means asking yourself after every assignment whether you have worked optimally as a knowledge worker, then where are the levers you can pull? It is generally worthwhile to think through the different optimisation approaches for yourself.


The high art of efficient interpreting preparation – and of information management in general – often lies in weighing things up: Which information or knowledge do you need (most urgently), and what can you do without? See also Not-To-Learn Words.


Faced with a thousand pages of text, two thousand PowerPoint slides or a 400-entry glossary, it is a good idea to extract/filter/pick out the most important items (see above). For extracting terminology from texts, there are tools that work more or less well. On the content side, it can also pay off to look for „human“ summaries of certain topics; for many well-known books, editorially produced summaries are available at little cost, and the EU has even set up a special website with summaries of legal acts.
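As a toy illustration of such terminology extraction – a naive frequency count over capitalised words, which in German texts are mostly nouns; none of the real tools work this simply:

```python
import re
from collections import Counter

# Naive term-extraction sketch: count capitalised word forms, which in
# German texts are often noun terms worth a second look. Sample text
# is invented for illustration.
text = ("Der Aufsichtsrat prüft die Bilanz. Der Vorstand legt dem "
        "Aufsichtsrat die Bilanz vor. Die Bilanz wird veröffentlicht.")

candidates = re.findall(r"\b[A-ZÄÖÜ][a-zäöüß]+\b", text)
stopwords = {"Der", "Die", "Das", "Dem", "Den"}
terms = Counter(w for w in candidates if w not in stopwords)
print(terms.most_common(2))  # [('Bilanz', 3), ('Aufsichtsrat', 2)]
```

Even a crude count like this hints at which candidates deserve a glossary entry first.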

Narrowing down and expanding

Most of the time, you have either too much or too little information.

On the internet, the „too much or useless“ problem can often be dealt with in a few simple steps: using exclusion words (putting a minus sign or NOT before the word), limiting the search to a specific site (site:), finding related sites (related:), looking up definitions (define:), including synonyms in the search (~), country of origin (loc:) and so on. If you want to search the numerous online reference works, search tools that query several sources at once can make your work easier and save your nerves – such as
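These operators can also be combined programmatically, e.g. to generate prepared search links for an assignment. The helper below is purely illustrative (the function name and parameters are my own invention):

```python
from urllib.parse import quote_plus

def build_query(term, exclude=(), site=None, define=False):
    """Compose a Google-style query string using the operators
    described above (-, site:, define:)."""
    parts = ["define:" + term if define else term]
    parts += ["-" + w for w in exclude]
    if site:
        parts.append("site:" + site)
    return " ".join(parts)

q = build_query("Kotflügel", exclude=("Auto",), site="europa.eu")
print(q)  # Kotflügel -Auto site:europa.eu

# URL-encode the query to turn it into a clickable search link.
print("https://www.google.com/search?q=" + quote_plus(q))
```

A small set of such prepared links per client or conference topic can be reused from one assignment to the next.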

With your own terminology, it is important to be able to narrow down the search by subject area („balance sheets“), client, language, occasion („annual results press conference of company XY in Z“) or type of event (e.g. EBR), as needed.


If you (feel you) spend more time searching than finding, it can make sense to take a mental step back and consider exactly where the friction arises. For example, you can collect all the information for an assignment in one Excel file (agenda, glossary etc.) so that you can search it all in one go. With a great number of manuscripts or meeting documents up for discussion, notebook programs like MS OneNote or Evernote can also be excellent for keeping things clear and tidy. In principle, you will find your terms faster if you store them in one single database rather than in different documents – sooner or later, there will always be overlaps between different fields of work or clients. If, in the booth, your eyes and attention constantly have to switch between computer and paper documents, it can be worth making a conscious decision against this media discontinuity and working consistently either on the computer or on paper.

When a meeting works with parallel texts, for instance when a contract text is being discussed, it is worth thinking about a proper parallel text display. Copying the texts into a table often works wonders here, but importing them into a translation memory system – provided the alignment works reasonably well – is also a very comfortable booth solution. Even PDFs in several languages can be placed side by side with a handy tool.
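The copy-into-a-table trick can even be scripted. The sketch below simply pairs paragraphs one to one, with no real alignment logic (the sample sentences are invented):

```python
# Two language versions of the same document, one paragraph per entry.
de = ["Artikel 1: Gegenstand des Vertrags.", "Artikel 2: Laufzeit."]
en = ["Article 1: Subject of the contract.", "Article 2: Term."]

# Pair the paragraphs one to one and print them as a two-column table.
for left, right in zip(de, en):
    print(f"{left:<40}| {right}")
```

For real contracts with inserted or split paragraphs, this naive pairing breaks down quickly – which is exactly where a translation memory system’s alignment earns its keep.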


Tidiness is half the battle … but why, actually? Quite simply: you don’t spend so much of your lifetime wondering where to put this or that, or where you did put it. That is why, for example, a proper file storage system and a terminology database contribute to quickly filling knowledge gaps during the assignment, and to the interpreter’s general peace of mind. What is more, a good data structure is the best route to zero follow-up work. Researching knowledge gaps right in the situation where they arise – usually easy to do, at least in simultaneous interpreting – is more useful, more motivating and easier to remember. If you then file the information you have found properly (terminology assigned to client/subject/occasion), there is no need for follow-up work at your own desk.

Systematising and accelerating

No matter how well structured and organised you are, sometimes you are simply not fast enough, or you forget the most important thing. This is where standards help. Actions that have become a habit require less attention and energy: always the same filing system for emails and documents, a checklist for drawing up quotations, standard questions („Who does What with Whom and What for?“) for oral client briefings, rituals for activating passive knowledge. Some things can also be delegated to the computer, such as reminders for the tax return or subscriptions to newspapers and newsletters.

Software can often help, for instance with searching countless files at once using a desktop search engine, with checking texts against glossaries, or with aligning different language versions by means of a translation memory system. But purely mental, non-computer-based skills can be accelerated too, for example by learning more efficient reading techniques such as Improved Reading.
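Checking a text against a glossary, as mentioned above, can be illustrated with a simple substring match (the glossary and manuscript below are invented examples; real tools use more robust matching):

```python
# Simplistic glossary check: which glossary terms actually occur in
# the manuscript, so you know what to prioritise before the meeting?
glossary = {"Aufsichtsrat": "supervisory board",
            "Vorstand": "management board",
            "Hauptversammlung": "annual general meeting"}

manuscript = ("Der Vorstand berichtet dem Aufsichtsrat "
              "über das abgelaufene Geschäftsjahr.")

found = {de: en for de, en in glossary.items() if de in manuscript}
print(sorted(found))  # ['Aufsichtsrat', 'Vorstand']
```

Terms that do occur in the manuscript are the ones worth activating first; the rest can stay passive.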

Cooperation and division of labour

Still relatively new, but already hard to imagine many colleagues’ everyday interpreting work without them, are cloud-based platforms for team collaboration. Especially at extremely specialised and dense conferences, where the interpreting team shares the preparation work, they offer a way to develop terminology in particular, and to file documents, jointly and yet independently of each other. The advantages are obvious: the team is always up to date, can discuss points of detail, and everyone can fill the gaps in everyone else’s knowledge. If such a simple database is well structured, everyone can also generate their own personal view without being „disturbed“ by data they subjectively consider irrelevant.

And with this multitude of options, the same applies as in the booth: just don’t lose track 😉


Informations- und Wissensmanagement im Konferenzdolmetschen. Sabest 15. Frankfurt: Peter Lang. [Dissertation]

Text-based Personality Prediction: Looks like I am male in Spanish and female in German and English

Eine kleine Jeckigkeit zum Karneval: Das Psychometrics Centre der Universität Cambridge hat einen Online-Persönlichkeitstest entwickelt (vielen Dank an Note To Self für die Empfehlung!), der anhand von Textproben die Persönlichkeit des Verfassers ermittelt.

Die psycholinguistische Analyse ergab in meinem Fall anhand meines Lebenslaufs in Deutsch, Englisch und Spanisch relativ einheitlich, dass ich männlich und Mitte bis Ende 20 bin, meine weibliche Seite aber zumindest in meiner englischen und deutschen Identität nicht unterdrücke. Alles in allem wird mir eine ausgeglichene Persönlichkeit beschieden, von ein wenig mangelnder Teamfähigkeit abgesehen.

Interessanter ist da der Sprachenvergleich meines letzten Blog-Artikels (Dolmetschen mit VR-Brille). In meiner Muttersprache Deutsch macht mich dieser Text 23 Jahre jung und zu 80 % weiblich. Englisch ließ mich prompt deutlich älter (31), aber immerhin nach wie vor weiblich (85 %) erscheinen, außerdem deutlich liberaler und introvertierter (vermutlich Altersweisheit). Auf Spanisch schließlich blieb zwar das Alter mit 25 Jahre relativ ähnlich, mein Geschlecht kehrte sich jedoch glatt ins Gegenteil: ich war mit 89 %-iger Wahrscheinlichkeit männlich. Immerhin bei ansonsten nach wie vor ausgeglichener Persönlichkeit.

Relativ konstant über alle Texte und Sprachen hinweg zeigten sich mein Führungspotential (zw. 40 und 50 %) und mein Jungscher Persönlichkeitstyp (Introvertiert – Empfindend – Denkend – Wahrnehmend). Das deckt sich immerhin mit den Ergebnissen eines gleichartigen Tests, den ich vor Jahren einmal im Rahmen eines Dolmetscheinsatzes live durchführen durfte.

Und sehr vertrauenerweckend: Mein Facebook-Profil entzog sich offensichtlich den analytischen Möglichkeiten: „Sorry, we are unable to generate a prediction. An insufficient number of your Likes match with those in our database, and we don’t believe in guesswork.“

Mein Fazit: Alter und Geschlecht scheinen anhand eines Textes deutlich schwieriger vorherzusagen zu sein als Persönlichkeitsmerkmale. Auf jeden Fall ist dieser Personality Predictor deutlich mehr als nur ein lustiges Spielzeug.


+++ EN +++ EN +++ EN +++ EN +++

If you have ever wondered what your writing says about you: The Psychometrics Centre at the University of Cambridge has developed an online personality predictor (thanks to Note To Self for recommending!). On the basis of sample texts, it analyses the personality of the author.

In my case, my CVs in German, English and Spanish unanimously said that I am male and around 25-29 years old, but luckily, at least my English and German selves do not repress their feminine sides. All in all, I seem to have quite a balanced personality, apart from a slight deficit in team working.

More interesting still: When comparing the different language versions of my most recent blog article (Simultaneous interpreting with VR headset), it turned out that my mother tongue, German, made me 23 years young and an 80 % female, while in English, I was much older (31), but still an 80 % female, albeit much more liberal and introverted (wisdom obviously comes with age). The same text in Spanish then made me look about as young as in German (25 years), but turned around my gender prediction completely, making me an 89 % male (however still of a very balanced personality).

What seems to be more consistent across all languages and texts submitted is my leadership potential (40-50 %) and Jungian Personality Type (Introverted – Sensing – Thinking – Perceiving). The latter coincides with the results of the same test I did some time ago when interpreting an event about exactly this type of personality test.

Finally, my Facebook profile did not lend itself to personality prediction. The answer I received was: „Sorry, we are unable to generate a prediction. An insufficient number of your Likes match with those in our database, and we don’t believe in guesswork.“

Bottom line: Age and gender seem to be more difficult to predict on the basis of text samples than personality traits. But still, the Personality Predictor is definitely worth playing around with.
