Interpreting and the Computer – finally a happy couple?

This year’s 39th edition of the Translating and the Computer conference, which Barry Olsen quite rightly suggested renaming to Translating and Interpreting and the Computer :-), had a special focus on interpreting, so obviously I had to go to London again! And I was all the more looking forward to going there as – thanks to Alex Drechsel’s and Josh Goldsmith’s wonderful idea – I was going to be a panelist for the first time in my life (and you could tell just how excited we all were about our panel, and about the whole conference for that matter).

The panelists were (from left to right):
Joshua Goldsmith (EU and UN accredited freelance interpreter, teacher and researcher, Geneva)
Anja Rütten (EU and UN accredited AIIC freelance interpreter, teacher and researcher, Düsseldorf)
Alexander Drechsel (AIIC staff interpreter at the European Commission, Brussels – don’t miss Alex’ report about Translating and the Computer 39 including lots of pictures of the event!)
Marcin Feder (Head of Interpreter​ Support and Training Unit at the European Parliament, Brussels)
Barry Slaughter Olsen (AIIC freelance interpreter, Associate Professor at the Middlebury Institute of International Studies (MIIS) in Monterey and Co-President of InterpretAmerica)
Danielle D’Hayer, our moderator (Associate Professor at London Metropolitan University)

If you have an hour to spare, here’s the complete audio recording (thanks for sharing, Alex!): “Live at TC39: New Frontiers in Interpreting Technology” on Spreaker.

If I had to summarise the conference in one single word, it would be convergence. It appears to me from all the inspiring contributions I heard that finally things are starting to fall into place, converging towards supporting humans in their creative tasks and decision-making by sparing them the mechanical, stupid work. This obviously does not only apply to the small world of interpreting, but to many other professions, too. “It is not human against machine, but human plus machine”, as Sarah Griffith Masson, Chair of the Institute of Translation and Interpreting (ITI) and Senior Lecturer in Translation Studies at the University of Portsmouth, put it in her speech.

OK, this is easy to say at a conference called “Translating and the Computer”, where the audience is bound to be a bit on the nerdy side. And the truth is, Gloria Corpas Pastor, Professor of Translation and Interpreting at the University of Malaga, presented some slightly sobering results of her survey about the use of computers among translators and interpreters. It looks like interpreters are less technology-prone than translators, a fact that made most of us in the audience nod knowingly. But there is no reason to be pessimistic, given the many interesting use cases presented at the conference, plus the efforts being made, for example, at the European Commission and Parliament to provide conference interpreters with the tools they need for their information and knowledge management, as Alexander Drechsel and Marcin Feder reported.

So while for everyone who is not an interpreter, interpreters rather seem to be the frontier to be overcome by technology, the conference was all about new frontiers in interpreting in the sense of how technology can best be used in order to support interpreters and turn the relation between interpreters and computers into a symbiotic one. Here are the key ideas I personally took home from the conference:

Whatsappify translators’ software

A question asked by several representatives of international organisations like WIPO (who by the way have this wonderful online term database called WIPO Pearl), the EU, WTO and UN was what our ideal software support for the booth would look like. Unfortunately, the infallible information butler described back in 2003 has not become reality yet, but many things like intuitive searching and filtering, parallel reading/scrolling of documents in two languages, and linking a term in its textual context to the corresponding entry in the term database have been around for twenty years in Translation Memory systems. Most international organisations have so many translation resources that could be tapped if only access to them were open and a bit more tailored to the needs of interpreters. Translators and interpreters could then benefit from each other’s work much more than they tend to do nowadays. Obviously, a lot could be gained by developing more interpreter-friendly user interfaces.

Which reminds me a bit of WhatsApp. People who wouldn’t go anywhere near a computer before, and could hardly manage to receive, let alone write, an email, seem to have become heavy WhatsApp users with the arrival of smartphones. While good old email has been offering pretty much the same functions AND doesn’t force you to always use the same device, it’s stupid WhatsApp that has finally turned electronic written communication into the normal thing to do, simply by being much more fashionable, intuitive and user-friendly. So maybe what we need is a “WhatsAppification” of Translation Memory systems in order to make them more attractive (not to say less ugly, to quote Josh Goldsmith) to interpreters?
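
Just to illustrate what “linking the term in its textual context to the entry in the term database” boils down to, here is a minimal sketch in Python; the glossary entries, the sample sentence and the matching logic are purely made up and far simpler than what any real Translation Memory system does:

```python
import re

# Purely illustrative mini glossary: source term -> target term
glossary = {
    "patent family": "Patentfamilie",
    "prior art": "Stand der Technik",
    "examiner": "Prüfer",
}

document = (
    "The examiner cited prior art from the same patent family "
    "in the first office action."
)

def link_terms_to_context(text, glossary):
    """Link each glossary term to the sentence(s) in which it occurs."""
    links = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for source, target in glossary.items():
            if re.search(rf"\b{re.escape(source)}\b", sentence, re.IGNORECASE):
                links.append((source, target, sentence))
    return links

for source, target, context in link_terms_to_context(document, glossary):
    print(f"{source} -> {target}  |  {context}")
```

The point is not the code itself, but that showing a glossary entry together with the sentence it comes from is technically trivial – it just needs to be packaged in an interface interpreters actually want to use.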

Making the connection between glossaries and documents

Clearly, in the world of glossary and terminology management for simultaneous interpreting, InterpretBank and InterpretersHelp are the most forward-moving, ambitious and innovative of the nine interpreter-specific solutions I am aware of and had the honour to present in a workshop (thanks to everyone for showing up at 9 am!). InterpretersHelp has just released a term extraction feature (to be tested soon) similar to that of Intragloss, i.e. you can easily add terms from parallel reference texts to your glossary. InterpretBank has even integrated a real term extraction feature similar to that of SketchEngine (also to be tested soon). If interpreters cannot be bothered to use translation memories after all, maybe that’s the way forward.
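
To give a rough idea of what such a term extraction feature does conceptually, here is a toy sketch in Python based on nothing more than frequency counts; the stopword list and the reference text are invented, and real tools like InterpretBank, Intragloss or SketchEngine rely on far more sophisticated linguistic processing:

```python
from collections import Counter
import re

# Toy stoplist and reference text; this only illustrates the idea of
# frequency-based term candidates, nothing more.
STOPWORDS = {"the", "of", "and", "a", "in", "to", "is", "if", "for", "with"}

reference_text = """
The patent examiner reviews the patent application and compares the claims
with the prior art. If the claims overlap with the prior art, the examiner
rejects the patent application.
"""

def candidate_terms(text, min_freq=2):
    words = re.findall(r"[a-zA-Z]+", text.lower())
    # single-word candidates: frequent content words
    unigrams = Counter(w for w in words if w not in STOPWORDS)
    # two-word candidates: frequent adjacent content-word pairs
    bigrams = Counter(
        f"{a} {b}" for a, b in zip(words, words[1:])
        if a not in STOPWORDS and b not in STOPWORDS
    )
    return [(term, n) for term, n in (unigrams + bigrams).most_common() if n >= min_freq]

for term, freq in candidate_terms(reference_text):
    print(f"{freq} x {term}")
```

Run on a real reference document, even something this crude surfaces candidates like “patent application” or “prior art” – exactly the kind of terms you would want to confirm and add to your glossary before a meeting.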

Automatic speech recognition reducing interpreters’ listening effort

Claudio Fantinuoli from Germersheim presented InterpretBank’s latest beta function: It uses speech recognition to provide a live transcription of the speech, extracts numbers, names and technical terms and displays them, the latter together with their target-language equivalents from the glossary. Here is the impressive demo video giving a glimpse of what is technically feasible.


It has to be admitted, though, that it was made in a controlled environment, with the speaker pronouncing clearly and in British English. But still, there is reason to hope for more!
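
Conceptually, the post-processing of such a live transcript could look a bit like the following Python sketch; the glossary entries, the example segment and the pattern for catching numbers are of course made up for illustration and are not taken from InterpretBank:

```python
import re

# Made-up glossary entries (source -> target); in real life these would come
# from the interpreter's prepared glossary.
glossary = {
    "greenhouse gas emissions": "Treibhausgasemissionen",
    "emissions trading system": "Emissionshandelssystem",
}

def highlight(segment, glossary):
    """Pull numbers and known glossary terms out of one transcribed segment."""
    numbers = re.findall(r"\d[\d.,]*(?:\s*(?:%|percent|million|billion))?", segment)
    terms = [(s, t) for s, t in glossary.items() if s in segment.lower()]
    return numbers, terms

# simulated output of the speech recognition engine
segment = "By 2030 the Union must cut greenhouse gas emissions by 55 percent."

numbers, terms = highlight(segment, glossary)
print("Numbers:", numbers)                      # ['2030', '55 percent']
print("Terms:", [f"{s} -> {t}" for s, t in terms])
```

The hard part, of course, is not this last step but getting a reliable transcript out of spontaneous, accented conference speech in the first place.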

There was a nice coincidence that struck me in this context: Recently, I conducted a case study (to be published in 2018) where I analysed interpreters’ booth notes. In this study, numbers, acronyms (mostly names of organs or organisations) and difficult technical terms (mainly nouns) were the items most frequently written down – and this is exactly what InterpretBank automatically highlights on the transcription screen.

What I have always liked about InterpretBank, by the way, is the fact that there is always science behind it. This time Bianca Prandi, doctoral student at the University of Germersheim, presented the research she plans to conduct on the cognitive load of using computer-assisted interpreting (CAI) tools. I am really looking forward to hearing more about her work in the future.

The second speaker who showed a speech recognition function to support interpreters was keynote speaker Prof. Alexander Waibel – not a conference interpreter, for a change, but Professor of Computer Science at Carnegie Mellon University, Pittsburgh, and at the Karlsruhe Institute of Technology, Germany (who even has his own Wikipedia entry). During his extremely interesting and entertaining speech about deep learning, neural machine translation and speech recognition, he also presented a live transcription function to support interpreters in the booth.

Paper and electronic devices all becoming one thing

I very much enjoyed talking and listening to the two most tablet-savvy conference interpreters I am aware of (doubling as my co-panelists), Alexander Drechsel and Josh Goldsmith. I find the idea of using a tablet for note-taking very enticing, even more so after having seen Josh’s demo. And I don’t agree that the only reason to replace paper with a tablet is to look better or “just to try it out”. Alex and Josh could name so many advantages (consulting a dictionary or glossary in parallel, adjusting the pen colour, not having to turn the pages of your notepad after every two sentences). The most obvious one to me, by the way, is that you don’t have to be afraid of running out of paper. And luckily, Josh’s study now tells us which devices are best suited to interpreters’ needs.

When we discussed the use of computers among interpreters and interpreting students in the panel, it was interesting to hear about the different experiences. Everyone seemed to agree that young interpreters or interpreting students, despite being “digital natives” and supposedly computer-savvy (which most panelists agreed is a myth), cannot necessarily be expected to be able to manage their information and knowledge professionally. On the other hand, common practice seemed to range from using paper, laptop computers or tablets to relying completely on smartphones for information management, as our wonderful panel moderator Danielle D’Hayer reported her students did. She seemed to me the perfect example of not “teaching” the use of technologies, but just using them right from the beginning of the courses.

Remote everything: cloud-based online collaboration and distance interpreting

Although in the panel discussion not everyone seemed to share my experience, I think that team glossaries, nowadays more often than not created in Google Sheets, are about to become common practice in conference preparation. Apart from being great fun, working on a shared glossary saves time, boosts team spirit, and improves the knowledge base of everyone involved. Not to mention the fact that it is device and operating system neutral. There is, however, a confidentiality problem when dealing with sensitive customer data, but this could be solved by using encrypted solutions like interpretershelp.com or airtable.com.
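
For the technically curious: a shared Google Sheet can even be scripted against, for example to keep a local copy of the team glossary. Here is a minimal sketch using the third-party gspread Python library; the sheet name and the credentials file are hypothetical, and the whole thing assumes a Google service account that has been given access to the sheet:

```python
import gspread  # third-party client library for the Google Sheets API

# Assumes a Google service account whose key is stored in credentials.json
# and that the spreadsheet "Team glossary" has been shared with that account.
gc = gspread.service_account(filename="credentials.json")
worksheet = gc.open("Team glossary").sheet1

# add a new entry so the whole team sees it immediately
worksheet.append_row(["subsidiarity principle", "Subsidiaritätsprinzip", "EP plenary"])

# pull the current state of the shared glossary, e.g. to keep a local copy
for row in worksheet.get_all_records():
    print(row)
```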

Now once we are all able to collaborate, prepare and get to know each other online, we seem to be perfectly prepared to work in simultaneous interpreting teams remotely, i.e. from different places. Luckily, the two most knowledgeable colleagues in remote interpreting I know of, Klaus Ziegler (AIIC freelance interpreter and chair of the AIIC technical committee) and Barry Olsen, were at the conference, too. There is so much to be said about this subject that it would fill several blog posts. My most important lessons were: Remote interpreting technologies don’t necessarily imply lower rates in interpreting. The sound quality of videoconferences via normal (private) phone lines is usually not sufficient for simultaneous interpreting. The use of videoconference interpreting seems to be much more widespread in the U.S. than it is in Europe. And it is a good idea for conference interpreters’ associations like AIIC to play an active role (as Klaus Ziegler is thankfully doing) in the development of technologies and standards.

Simultaneous and consecutive interpreting merging into simconsec

The last thing to be noted as converging thanks to modern technology is simultaneous and consecutive interpreting, i.e. using tablets and smartpens to record the original speech and replay it while rendering the consecutive interpretation. Unfortunately, there was no time to talk about this in detail, but here is a one-minute demo video to whet your appetite.

And last but not least: Thank you very much to Barry Olsen for the lovely live interview we had (not to be missed: the funny water moment)!

And of course: Spread the word about next year’s 40th anniversary of Translating and the Computer!

