My hands-, eyes- and ears-on experience with SmarTerp, including a short interview with the UI designer

Last December, I was among the lucky ones who could beta-test SmarTerp, a CAI (Computer-Aided Interpreting) tool in development that offers speech-to-text support to simultaneous interpreters in the booth.

For those who haven’t heard of SmarTerp before, this is what it is all about:

SmarTerp comprises two components:
1. An RSI (Remote Simultaneous Interpreting) platform, aiming to create the ideal conditions for a high-quality RSI service through (a) ISO-compliant audio and video quality, (b) communication options with all actors involved in the assignment (technician, operator, conference moderator, other booths—via chat—and the boothmate— via chat and a direct audio and video channel), (c) an RSI console allowing interpreters to perform all key actions required by this interpreting mode (change input and output channel, control their microphone, listen to their boothmate, pick up relay).
2. An ASR (Automatic Speech Recognition)- and AI-powered CAI tool, supporting interpreters in the interpretation of common ‘problem triggers’ (named entities, acronyms and specialised terms, and numbers). The Beta version of SmarTerp is the outcome of a close collaboration with practitioners, who have taken an active part in shaping all phases of the design and development of the solution.

For the beta-testing, Andrea Wilming and I were assigned to staff the German booth and asked to prepare a glossary of about 70 to 100 entries on the rights of LGBTIQ citizens in the EU. To do our reputation justice, we started this endeavour with meticulous preparation: we created a team glossary in Google Sheets and started to cram all sorts of vocab into it (which is basically what we had been asked to do). But halfway through the task, we started to wonder how to make our glossary really “SmarTerp-proof”.

Questions came up like: If I put LGBTIQ into my glossary and someone says LGBT, will the system recognise it as a partial match? No, it won’t. Susana Rodríguez explained to us that short entries increase the probability of their being recognised by the system (which runs counter to our tendency to have longer entries with a bit of context). So we ended up creating four alternative entries for the different variants, and found it reassuring that we would be able to read from the screen which of the acronyms was being used.

LGBT → LGBT
LGBTQ → LGBTQ
LGBTIQ → LGBTIQ
LGBTIQ+ → LGBTIQ+

The next question that came up during preparation: If there are two terms in German, e.g. Gleichbehandlung and Diskriminierungsverbot, and only one equivalent – non-discrimination – in English, will SmarTerp show me both German terms when the English term is used by the speaker, so that I can choose the one I like best? Yes, it will. But again, if you want it to recognise both Gleichbehandlung and Diskriminierungsverbot in a German speech, you will need to create a separate entry for each synonym of the same term.

Gleichbehandlung → non-discrimination
Diskriminierungsverbot → non-discrimination
nichtdiskriminierend → non-discriminatory
diskriminierungsfrei → non-discriminatory
inklusiv → non-discriminatory
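Since each entry seemed to be matched more or less literally, we found it easiest to keep the synonym sets in one place and generate the one-entry-per-variant rows from them. Here is a minimal sketch of that idea in Python – note that the two-column CSV layout is my own assumption for illustration, not SmarTerp’s documented import format:

```python
import csv

def expand_glossary(synonym_sets):
    """Turn {target term: [source-language variants]} into one (source, target) row per variant."""
    rows = []
    for target, sources in synonym_sets.items():
        for source in sources:
            rows.append((source, target))
    return rows

entries = {
    "non-discrimination": ["Gleichbehandlung", "Diskriminierungsverbot"],
    "non-discriminatory": ["nichtdiskriminierend", "diskriminierungsfrei", "inklusiv"],
}

rows = expand_glossary(entries)
with open("glossary.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)  # one glossary entry per synonym
```

This way, the “human-readable” synonym sets stay maintainable while the tool gets the short, literal entries it prefers.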

So my lesson learned number 1 is: Beware how you prepare! Building a glossary for yourself is different from building it for (a boothmate or) a machine.

But the best was still to come: after completing a self-paced, very efficient and well-structured online training module, we were sent to the virtual booth along with our colleagues from the French, Spanish and Italian booths.

We were given enough time in advance to have a look around and play with the different functions of the user interface. My first impression: it was well organised in functional groups, so it was very easy to find our way around. I personally would have preferred white or a light colour for the background, which I find easier to read – or even the option to assign different background colours to different functional areas for ease of orientation. But maybe “dark mode” has its advantages in terms of visual perception? Indeed it has, as Icíar Villamayor (see interview below) explained to me: it is thought to cause less eye fatigue than looking at a white or bright screen for hours, which may even cause headaches. It is also easier to highlight important elements on a dark background, so searching for them is less distracting.

SmarTerp offers all sorts of communication channels among interpreters: you can chat with your boothmate, the whole team, the technician or the operator (interestingly, many of us spontaneously missed emojis :-)), and there is a dedicated video and audio connection between boothmates. These are functions that had been widely asked for in the interpreting community. Now that we had the whole range of options at our fingertips, we were all absolutely delighted, although some found the plethora of options rather overwhelming. But then individual preferences as to which elements were crucial were far from homogeneous either. I personally, for example, can do perfectly well without a booth video connection and a handover function, as long as there is a good chat function and I can listen to my boothmate working. Others, however, found the different chat options too many, or wanted to be able to monitor the different chat channels at the same time. It was also discussed whether the video window was too big or too small, whether the support area should be rearranged, and whether it might be possible at all to rearrange the different functional areas according to personal preferences … All of which made me wonder: who is the brain behind this thoughtfully designed user interface? Luckily, I was introduced to Icíar Villamayor, front-end developer and graphic designer from Madrid, and had the chance to ask her some questions.

Question: What did you find special about designing the user interface of SmarTerp?

Icíar: Everything was special! Until the project started I had never thought about simultaneous interpreting; I barely even knew what it was about or what the interpreter’s workflow was. You’d expect anything involving languages to be at least a bit intricate, especially on the interpreter’s side, but the number of people that take part in the “backstage” of the interpreting process was something I never expected.
It was an ambitious project from the beginning. The main goal was to simulate an in-person interpreting setting, and this was something that hadn’t been done before in any of the other simultaneous interpreting apps available.
For me, one of the most special parts of SmarTerp was the information-architecture challenge it presented. We had a lot of information on the screen – features, text, tiny icons and interactive buttons – and a very focused individual who couldn’t be distracted from their task. Defining a hierarchy for those items and organizing them in blocks to ease the interpreter’s job was a challenge but also a learning experience.

Question: What did you find difficult to implement?

Icíar: The difficulty wasn’t really in the implementation; it is rather the kind of app that poses the challenge. Usually, when you start to create a new platform, the first thing you do is define the target population. Then you can check which other applications or platforms are out there and find out about usability patterns – the ways existing solutions solve the problems we will most probably encounter ourselves in the course of development. For example, if we wanted to create an app to share videos, we would have a clear idea that our target users would be between 13 and 35 years old and would have some experience with social networks. We know that these users know that by double-clicking on a post they can “like” it, that by swiping down they can scroll, etc. We could build on these usability patterns that our users know so well.
At SmarTerp we are dealing with a population segment that is not defined by age. Thus we have a user group with a huge spectrum in terms of how they handle technology. Add to this the fact that the only real usability reference we have is “traditional” hardware consoles, and there you are with a rather challenging project.
It is true that we have taken references from videoconferencing applications for the video module, as well as references from instant messaging applications for the communication module, but our users are not like any other user. The level of concentration required by simultaneous interpreting forces us to rethink the use of every little feature we want to provide.

Question: Was there anything you – as a non-interpreter – liked/did not like about SmarTerp?

Icíar: I really liked the whole concept from the very beginning of the project. It’s interesting to discover new professions and to be in direct contact with professionals who constantly give you feedback. I wouldn’t think of it in terms of liking or not liking but more of “We really thought this was perfect, but it turns out it isn’t THAT usable?” That’s where we currently are with the communication module (the chat), which is also one of the most important features. From the beginning, it was difficult to establish who each user could write to via chat and how we could make the messages easier to manage for active interpreters, and this is still an open issue. We’ve received feedback from user testing carried out by Francesca Maria Frittella and Susana Rodríguez and, even though we can’t follow every interpreter’s suggestion word for word, the results show we’ve still got some work to do.

Question: Do you think it will be possible to accommodate yet another section on the screen (provided it is a big one of, say, 34 inches) to display meeting documents and key terminology?

Icíar: Yeah! We’ve never ruled out adding more features to the interpreter’s interface. If there’s enough space it is definitely something that can be done. I guess we would need to interview interpreters to find out what the preferred feature would be. I can’t make any promises though.

My lesson number 2: Interpreters are hard to please. And big screens are a good idea.

The interpreting experience itself – a roughly 90-minute debate of an EP committee on LGBTIQ rights – was rather exhausting: many speeches were read out at full speed and were almost un-interpretable. But this was the whole point of the exercise, after all: to see whether, under real-life, worst-case, maximum-cognitive-load conditions, we were able to make use of the almost real-time transcripts of numbers and named entities and the display of terminology.

And indeed, this AI-based live support, the innovative centrepiece of SmarTerp, really did do its job! The numbers and names came just in time to be helpful, to my mind – I don’t think I would have been faster scribbling them down or typing them into the chat for my colleague. There were several moments where I checked the support section for numbers and two or three named entities (like OLAF and SECA), mainly because I wasn’t sure I had heard them correctly. It feels absolutely reassuring to know that there is a permanent flow of potentially useful information coming in, just in case you might need it. On the other hand, of course, this avalanche of information can be overwhelming, as you are bound to need only a tiny fraction of it. But I think I would be able to simply ignore the constant flow of incoming words and numbers as long as I don’t need it, and resort to it only when in need – just like you ignore your human colleague scribbling away next to you in the booth as long as you don’t need their help (and your digital companion will not resent you for not even looking at it). One thing I am not sure about yet is the reliability of the information offered by the AI. You can never be sure that the numbers and names are correct, and I noted the occasional mistranslation in the terms provided. A great plus is that SmarTerp marks reliable terms (i.e. those coming from the glossaries provided by the interpreters themselves) with a green symbol, so I suppose it is a question of internalising that distinction.

Lesson number 3: SmarTerp – think of it as an extremely helpful and untiring boothmate who is never ever offended if you don’t accept the support offered.

So what’s to come? I do, of course, have my personal wishlist for SmarTerp: I would love to have a section for sharing meeting documents and displaying them permanently, along with crucial information like key terminology and names. And at the end of a meeting, I would like to be able to download a sort of log file for follow-up purposes, e.g. for extracting important terminology for future meetings on the same subject. I imagine this might also come in handy during the meeting, in case I wanted to scroll back and check a term I had just noticed from the corner of my eye, or hadn’t had the capacity to check while interpreting but wanted to look at once my boothmate had taken over. Here again, the cognitive load (writing down stuff for later reference) could be eased by the system.

Because this is what CAI is all about – reducing the cognitive load of simultaneous interpreting and leaving capacities to use our professional sense of judgment instead of scribbling numbers and searching terminology. On this note, I am rather optimistic that SmarTerp will help us rethink our role and work as interpreters for the better.


About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C), and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Here comes my first AI-written blog post

I was feeling a bit lazy today, so I thought I might give Artificial Intelligence a try. And here we go, this is my first blog post written by an AI writing assistant:


How to Use the Best Interpreting Technology

Translation and interpreting services are crucial for business success. If you’re a business owner, it is important to know the importance of these services and how to use them. Interpreters can help bridge language barriers and make sure your company's message comes through loud and clear. But how do you find the best interpreter for your needs? How does interpreting technology work? Where can I learn more about these topics?

This blog post will answer all of your questions about translation, interpreting, and using the latest technology to do it well. We’ll also provide some handy resources to help you on your journey of finding the right interpreter for your needs.

Translating vs. interpreting

Before we delve into how to find the best interpreter for your needs, let's take a  moment to explore the difference between translation and interpreting.

Translation is a process by which you convert information from one language to another. This conversion may include converting words, sentences, or even whole texts. Translation of a text can be done by a human translator or a computer programme.

Interpreting is a process that involves translating spoken messages from one language to another simultaneously, usually in real time. Most often, interpreters work in medicine, law enforcement, education and business dealings with people from other cultures.

What is Interpreting?

Interpreting is the act of facilitating communication between people who speak different languages. Interpreters translate spoken language, written texts, and sign language into another language to help bridge the communication barrier.

There are many different types of interpreting services that can be utilized in various scenarios. For example, there are conference interpreters who work to facilitate communication at business conferences or other events. There are also medical interpreters who provide an important service in hospitals and doctors' offices.

Types of interpreting 

When it comes to interpreting, there are two main types: simultaneous and consecutive.

Simultaneous interpreting is a type of interpretation where the interpreter listens to a speaker and then speaks in a language that someone else can understand. The interpreters repeat everything that is being said by the speaker which allows the other person to follow along with what is being said. This type of interpreting needs to be done quickly as well as accurately so as not to lose the message from one party

Consecutive interpreting is different because it involves an interpreter listening to what someone says and then paraphrasing it back in their own words. With this method, the interpreter does not have to do as much work as simultaneous interpreters since they are not struggling to keep up with words being spoken at a high speed. However, they must be able to listen carefully and express themselves clearly when translating for the person who does not understand that language.

The importance of interpreting in business 

Translation and interpreting are crucial for business success. Interpreters do more than just translate the spoken word. They help bridge language barriers, so your company's message can come through clearly.

This is critical for any company that does business internationally. Businesses need to have a clear understanding of what they want to say before they get on board with an interpreter service. It can be easy to overlook this detail when you're in a hurry or feel like it's not necessary. But taking the time up front to plan out what you want to say will save you time and frustration later on.

Before choosing your interpreter, make sure you know the following:

-What languages are they fluent in?

-Do they specialize in certain fields? -Do they have experience interpreting business meetings?

-Do they have the background or certification necessary for your industry?

How Does Technology Impact Interpreting?

Technology has made it easier to find interpreters for your business needs. Gone are the days of having to call all over town to find a professional interpreter. You can do so right from the comfort of your office with just a few clicks on your computer.

But, how does technology impact interpreting? As said before, technology has made it easier to find qualified interpreters without having to leave your desk or cell phone. You can even use Skype or FaceTime with video calling for interactive interpreting services.

This is perfect for attendees who prefer not to stand in front of a large group and speak in front of them, but would rather express themselves more through body language and facial expressions. It's also perfect for situations where the interpreter will be speaking in front of a group via microphone or speakerphone while video conferencing with the original speaker at the same time.

Where Can I Get the Right Resources to Keep Learning About These Topics?

If you want to learn more about the power of language and how interpreting can help your company, we recommend these sources:

Myths About Translation – https://www.linguisticbee.com/myths-about-translation

Language Barrier – https://www.linguisticbee.com/language-barrier

The Importance of Translation Services – https://www.linguisticbee.com/importance-of-translation-services

The above text is what Writesonic’s AI Article Writer 2.0 has generated based on the following keywords: simultaneous interpreting, glossaries, terminology, machine translation, voice recognition, speech-to-text, video conferences, collaboration

Once you have provided about 200 characters’ worth of keywords, the system takes you through different steps (“come up with ideas”, “generate an intro”, “get an outline”, “generate your draft article”). All I did along the way was delete references to translators/translation in the intro and outline, to narrow the content down to interpreting rather than translation. Other than that, the article above is the unmodified work of the AI writer.

At first sight, I was rather impressed by the result, both in terms of language and content (if “content” is the right word for what an AI writes?). Although, from an insider’s point of view, the content is a bit on the shallow side… But then why not try the “sentence expander” function to see if we get more out of it?

So I copied the following paragraph into the “sentence expander” (selecting the tone of voice “excited”) just to see what happens:

But, how does technology impact interpreting? As said before, technology has made it easier to find qualified interpreters without having to leave your desk or cell phone. You can even use Skype or FaceTime with video calling for interactive interpreting services.

And this is what the sentence expander made of it:

An interpreter is a professional trained in a foreign language who facilitates communication between two or more people who don't share a native language. The interpreter has to have an understanding of the cultures and languages being spoken in order to effectively mediate between them. They also have to be on the same page as the speakers, so they must keep up with everything being said.

There are various benefits to using technology for interpreting. Not only can you find an interpreter instantly, but you can use the video call to ensure that the interpretation is consistent with what is being said. You can also find interpreters that speak your language or dialect, which is not a problem with a video call.

Yet, even with all of these benefits, technology cannot replace the human element of interpreting. A key part of this human element is an interpreter's background and culture. The more you know about your native language and your native culture, the better interpreter you will be.

While the “witty” and “funny” tones of voice didn’t bring much of a change, the “dramatic” version did indeed add a bit of drama:

As technology advances, it will likely change the way interpreters work. Who knows what the future holds? One thing is for sure, though: interpreting is currently experiencing a digital revolution.

Technology has changed the way we interpret. With a few taps or clicks, you can find perspective interpreters, have them come to you, and even see them through a video link. Technology is also making it easier to find interpreters for difficult languages. Computers can now translate Spanish into English with a speaker-to-speaker interpreter who only speaks Spanish! So, the answer is yes, technology has impacted interpreting! [...]

But still, my impression is that the system avoids “diving deeper” content-wise. The subject of terminology or glossaries, for example, was omitted altogether. Maybe quite wisely so – after all, it is always better not to engage in conversations about topics one has no understanding of (although I didn’t expect this kind of thoughtfulness from an Artificial Intelligence). And I am really not sure whether I would naturally say something like Computers can now translate Spanish into English with a speaker-to-speaker interpreter who only speaks Spanish!

When I tried the system in German, the result was a mixture of German and English text blocks. Some general content on what interpreters and translators are was there in German, whilst all the rest came out in English, and the conclusion was all about the importance of Search Engine Optimisation!

The Spanish version was much better – delivering a text completely in Spanish, for a start – though it focused a lot on automatic translation rather than interpreting.


La Nueva Inteligencia Artificial traduce voz, texto y vídeo para ayudar a las empresas a conectar con sus clientes

Una de las características más útiles de la Inteligencia Artificial para los negocios es su capacidad para traducir vídeo, texto y voz. Este proceso se conoce como "traducción automática" y permite a las empresas interactuar con los clientes en una amplia variedad de idiomas. Como esta herramienta se convierte en cada vez más popular, sin duda ayudará a las empresas a acercarse más a sus clientes. Aquí hay algunas ideas sobre cómo la traducción automática podría beneficiarles.

¿Qué es la traducción automática?

La traducción automática es un proceso de traducción que utiliza la Inteligencia Artificial (AI) para convertir el texto y el vídeo de un idioma a otro, así como la voz. Por lo general, esta herramienta permite que las empresas interactúen con los clientes en una amplia variedad de idiomas sin tener que contratar a personas para hacerlo.

Cómo aumenta el alcance de las empresas

¿Qué significa traducir automáticamente? Cuando se traduce automáticamente, los usuarios pueden interactuar con su marca en su idioma nativo. Por ejemplo, si tu empresa tiene un sitio web en inglés y sus clientes están principalmente en España, pero también en México, traduciendo automáticamente es un buen paso.

La traducción automática es una ventaja para las compañías que requieren servicios internacionales ya que no hay ningún costo adicional para competir internacionalmente. Sin importar lo que estés vendiendo o cuanto estés cobrando por él, la traducción automática ayuda a ampliar tu alcance mundial.

Para qué sirve traducir vídeo, texto y voz

La Inteligencia Artificial es una herramienta muy útil para las empresas, que podrían traducir los vídeos, textos y voces de sus clientes. Esto se conoce como "traducción automática" y permite a las empresas interactuar con los clientes en casi todos los idiomas.

Como esta herramienta se convierte cada vez más popular, sin duda ayudará a las empresas a acercarse a sus clientes. Aquí hay algunas ideas sobre cómo la traducción automática podría beneficiarlas:

1- La Inteligencia Artificial les permitiría entender mejor lo que sus clientes quieren y necesitan. Los procesadores neuronales permiten la traducción automática y la comprensión natural del lenguaje humano. Sin embargo, no pueden simular el pensamiento complejo ni el arte dialéctico, sino que sol

Cuáles son las ventajas para las empresas de utilizar traducción automática

La capacidad de traducir vídeo, texto y voz es una característica útil para las empresas. Hoy en día, muchas personas están buscando productos y servicios en lenguajes que no son el inglés. Ya sea que usted necesita entrar en el mercado internacional o simplemente ser más accesible para sus clientes nacionales, la traducción automática es una herramienta que no puede faltar para su negocio.

La traducción automática tiene varias ventajas para las empresas, incluyendo:

- Aumento del número de seguidores. La mayoría de la gente no se siente cómoda usando Google Translate porque los resultados pueden ser inciertos. Con la inteligencia artificial, todavía podemos ofrecer a nuestros clientes con idiomas diferentes

Cómo saber si necesitas traducciones automáticas

Las compañías que se encuentran en el mercado internacional o que tienen una presencia global pueden experimentar con las traducciones automáticas para aumentar sus ventas.

Este proceso toma mucho tiempo y es costoso traducir texto, vídeo y voz manualmente. Sin embargo, los servicios de traducción automática ofrecen una manera más rápida, económica y efectiva de llegar a los clientes potenciales en otros idiomas.

Qué pasaría si tuvieras traducciones automáticas para todos tus productos?

El beneficio más obvio de la traducción automática es que podrías tener tus productos traducidos para cualquier idioma. Los clientes potenciales tendrían acceso a los mismos productos y servicios, independientemente del idioma en el que residen.

La traducción automática también sería útil si te encuentras con una nueva categoría o regiones de mercado. Por ejemplo, si quieres expandir tu negocio a la India, significaría que puedes hacerlo sin necesidad de contratar a un traductor profesional o existe la posibilidad de que proporcionarías los productos directamente en español.

Fortalecerás tu marca a nivel mundial 

La traducción automática es una excelente opción para ampliar la presencia de tu marca a nivel mundial. Tanto las empresas como los consumidores pueden beneficiarse de esta herramienta.

Al traducir tu material de marketing, por ejemplo, podrás ofrecer información más fiable y accesible a los consumidores en otros países, lo que les permitirá mantenerse al día con toda la información sobre ti y tus productos. Además, las personas que no hablan inglés o cualquier otro idioma podrán explorar con mayor facilidad el contenido que hay en tu sitio web.

La traducción automática también tiene ventajas para las empresas: puede ayudar a comprender mejor a los clientes y comprender qué está pasando en sus vidas y necesidades. Por ejemplo, puede ser ú

Ubicación de la empresa

En la actualidad, las empresas están tratando de expandirse para llegar a una audiencia más amplia. Esto se traduce en que muchas compañías necesitan establecerse en nuevos mercados.

Sin embargo, las empresas tradicionales se encuentran con el problema de llegar a los clientes potenciales. A veces es difícil entender lo que están preguntando los clientes, lo cual los hace inaccesibles para las compañías.

Pero ahora, la inteligencia artificial está permitiendo que las compañías traduzcan vídeo y texto para atraer y satisfacer a sus clientes internacionales. La tecnología puede ayudar a los negocios a expandirse sin importar su ubicación geográfica o el idioma del mercado en el que operan.

So all in all, the technology is impressive, and it can certainly be helpful in moments of writer’s block, just like DeepL helps us in moments of “translator’s block”.

Just make sure you don’t take everything at face value only because it is written in good English – but then the same goes for texts written by humans. Personally, I find the AI writing assistant reminds me a bit of those people with whom you can talk about almost anything on the basis of their half-knowledge, but who then subtly steer the conversation to their favourite subject.


Resources

https://writesonic.com/

https://headlime.com/ (not tested properly – has anyone?)

https://www.jarvis.ai/ (not tested yet – has anyone?)



Synchronous Scrolling in Microsoft Word

Whilst it usually is fancy new apps designed for the benefit of interpreters (or everybody) that get my attention, this time it was boring old Microsoft Word I got excited about. Maybe I am the last one to discover the synchronous scrolling function, but I liked it so much that I wanted to share it.

When interpreting on the basis of a pre-translated manuscript, I like to copy and paste the original and the translation into a spreadsheet (which then also serves as a great terminology research corpus). But as a quick way to simply read the two language versions in parallel, MS Word’s synchronous scrolling is a great alternative.
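For anyone who prefers scripting, the spreadsheet trick can be sketched in a few lines of Python – assuming, rather optimistically, that both versions split into the same number of paragraphs (the file names below are made up for the demo):

```python
import csv

def build_parallel_csv(source_path, target_path, out_path):
    """Pair the non-empty paragraphs of two text files into two-column CSV rows."""
    def paragraphs(path):
        with open(path, encoding="utf-8") as f:
            return [p.strip() for p in f.read().split("\n\n") if p.strip()]
    rows = list(zip(paragraphs(source_path), paragraphs(target_path)))
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows(rows)
    return rows

# tiny stand-ins for the original manuscript and its translation
with open("speech_en.txt", "w", encoding="utf-8") as f:
    f.write("Good morning.\n\nThank you for your attention.")
with open("speech_de.txt", "w", encoding="utf-8") as f:
    f.write("Guten Morgen.\n\nVielen Dank für Ihre Aufmerksamkeit.")

rows = build_parallel_csv("speech_en.txt", "speech_de.txt", "parallel.csv")
```

In practice, manuscripts rarely align this neatly paragraph by paragraph, so some manual re-alignment in the spreadsheet is usually still needed.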

You simply open both documents in Word, click “View Side by Side” (nebeneinander anzeigen, ver en paralelo) in one of them, choose the second document, and activate “Synchronous Scrolling” (synchrones Scrollen, desplazamiento síncrono).

Very handy: if the two language versions are of different lengths (as usually happens with English and German) and the texts get out of sync, you just deactivate synchronous scrolling, re-adjust the text positions, and activate synchronous scrolling again.

Unfortunately, this function is not available in the Mac version. Thanks for pointing this out, Michaela Haller @sprachlicht!


How to Make CAI Tools Work for You – a Guest Article by Bianca Prandi

After conducting research and providing training on Computer-Assisted Interpreting (CAI) for the past six years, I feel quite confident in affirming that there are three indisputable truths about CAI tools: they can provide a lot of advantages, they can do more harm than good if not used strategically, and most interpreters know very little about them.

The surveys conducted so far on interpreters’ terminological strategies[1] have shown that only a small percentage has integrated CAI tools in their workflow. Most interpreters still prefer “traditional” solutions such as Word or Excel tables to organize their terminology. There can be many reasons for this. Some may have already developed reliable systems and processes, and don’t see a point in reinventing the wheel. Others believe the cons outweigh the pros when it comes to these tools and are yet to find a truly convincing alternative to their current solutions. Others may simply never have heard about Flashterm, InterpretBank or Interpreter’s Help before.

Even though a lot still remains to be investigated empirically, the studies conducted so far have highlighted both advantages and disadvantages in the use of CAI tools.  On the positive side, CAI tools can provide effective preparation through automatic term extraction and in-built concordancers[2] (Xu 2015). They seem to contribute to higher terminological accuracy than paper glossaries[3] or even Excel tables[4] when used to look up terms in the booth. They help interpreters organize, reuse and share their resources, rationalize and speed up their preparation process, make the most of preparation documents, work efficiently on the go and go paperless if desired. On the negative side, they are often perceived as potentially distracting and less flexible than traditional solutions. When working with CAI tools, we might run the risk of relying too much on the tool, both during the preparation phase and interpretation proper[5].

I would argue that, if used strategically, the pros easily outweigh the cons. Just as with any tool and new technology, it all comes down to how you use them. Whether you are still sceptical, already CAI-curious, or a technology enthusiast, here are three tips on how to make CAI tools work for you.

  1. Take time to test your tools

Most tools offer a free demo to test out their functionalities. I know we are all busy, but you can use downtimes to work on improving your processes, just as you would (should!) do to work on your CPD and marketing strategy. I suggest you do the following:

  • Choose one of your recent assignments, something you had to do research on because the topic was unfamiliar to you.
  • Set aside 1-2 hours a day, or even just 30 minutes, to simulate preparing for the assignment again.
  • Set yourself a clear goal for each phase of your workflow (glossary creation, terminology extraction, memorization, debriefing).
  • Build your baseline: dedicate one session to assessing your current approach. Then dedicate each of the following sessions to testing out a different tool.
  • For a systematic comparison, keep track of the time needed for each activity, the pros and cons for each tool, your preferences and things that you found irritating.

You can conduct this analysis and selection process over a week or even a month if you are very busy. Once you have identified what might work for you, keep using those tools! Maybe test them out on a real assignment for a client you already know, where the risk of mishaps is lower.

  2. There is no perfect tool

Unless you can write code and develop your own tool, chances are there will always be something you don’t like about a tool, or that some functions you deem essential might be missing. But given the advantages that come from working with these solutions, it is definitely worth trying to see whether you can find a tool that satisfies even just 50% of your interpreting needs. It may not seem much, but that’s already 50% of your workflow that you can optimize.

Once you get a feeling for what each tool can do for you, you might find out that there are some options you love that aren’t available in your tool of choice. My suggestion: mix and match. Most CAI tools are built modularly and allow users to only work with a specific function. For instance, I love Intragloss’ terminology extraction module, so I use that tool to work with documents, but I use InterpretBank for everything else. In a word: experiment and be creative!

  3. Tools can’t do the work for you

If you’re passionate about technology, you will agree that CAI tools are quite cool. However, we should never forget that they are tools and, as such, they only fulfil their function as long as we use them purposefully. Think before you use them, and always make sure you follow a strategic course of action.

If you have the feeling you have never been as ill-prepared as when you worked with a CAI tool, here are some questions you can ask yourself:

  • Am I sure this is the right tool for me? Have I taken enough time to test it out?
  • Did I have a clear goal when I started preparing for my assignment? Or was I simply trying to cram together as many terms as possible?
  • Am I aware of my learning preferences? If I’m an auditory learner, does it make sense to use a flashcard method to study the terminology?
  • Did I include in my glossary just any term that came up in my documents? Or did I start from the relevant terminology I found to further explore the topic?

As with many things in life, reflection and a structured, strategic approach can really go a long way. For busy interpreters needing some guidance, InterpreMY is preparing a course series that will help you effectively use CAI tools to optimize all phases of your workflow and avoid potential pitfalls. Get in touch at info@interpremy.com!


[1] See for instance: Zielinski, Daniel and Yamile Ramírez-Safar (2006). “Onlineumfrage zu Terminologieextraktions- und Terminologieverwaltungstools. Wunsch und Wirklichkeit noch weit auseinander.” MDÜ; and Corpas Pastor, Gloria and Lily May Fern (2016). A Survey of Interpreters’ Needs and Practices Related to Language Technology.

[2] See Xu, Ran (2015). Terminology Preparation for Simultaneous Interpreters. University of Leeds.

[3] Biagini, Giulio (2015). Glossario cartaceo e glossario elettronico durante l’interpretazione simultanea: uno studio comparativo. Università degli studi di Trieste.

[4] Prandi, Bianca (2018). An exploratory study on CAI tools in simultaneous interpreting: Theoretical framework and stimulus validation. In Claudio Fantinuoli (ed.), Interpreting and technology, 29–59. Berlin: Language Science Press.

[5] Prandi, Bianca (2015). The Use of CAI Tools in Interpreters’ Training: A Pilot Study. 37th Conference Translating and the Computer, 48–57.


About the author:

Bianca Prandi

bianca@interpremy.com

  • Conference Interpreter IT-EN-DE, MA Interpreting (University of Bologna/Forlì), based in Mannheim (Germany), www.biancaprandi.com;
  • PhD candidate – University of Mainz/Germersheim. Research topic: impact of computer-assisted interpreting tools on terminological quality and cognitive processes in simultaneous interpreting;
  • CAI trainer and co-founder of InterpreMY – my interpreting academy: online academy for interpreters with goal-centered, research-based courses, www.interpremy.com (coming soon: July 2020).

Publications:

  • Prandi, B. (2015). L’uso di InterpretBank nella didattica dell’interpretazione: uno studio esplorativo. Università di Bologna/Forlì.
  • Prandi, B. (2015). The Use of CAI Tools in Interpreters’ Training: A Pilot Study. 37th Conference Translating and the Computer, 48–57. London.
  • Prandi, B. (2017). Designing a Multimethod Study on the Use of CAI Tools during Simultaneous Interpreting. 39th Conference Translating and the Computer, 76–88. London: AsLing.
  • Prandi, B. (2018). An exploratory study on CAI tools in Simultaneous Interpreting: theoretical framework and stimulus validation. In C. Fantinuoli (Ed.), Interpreting and technology, 29–59.
  • Fantinuoli, C., & Prandi, B. (2018). Teaching information and communication technologies: a proposal for the interpreting classroom. Trans-Kom, 11(2), 162–182.
  • Prandi, B. (forthcoming). CAI tools in interpreter training: where are we now and where do we go from here? InTRAlinea.

 

Remote Simultaneous Interpreting … Do We Really Need It – and Does It Even Work?!

In a workshop jointly organised by AIIC and VKD and led by Klaus Ziegler, we had the opportunity in mid-May in Hamburg to explore these questions in depth. In a coworking space, a group of organising interpreters spent two days learning, discussing and trying out remote simultaneous interpreting in an interpreting hub, using the cloud-based simultaneous interpreting software Kudo.

To set the scene, we first clarified what exactly we are talking about, as ISO standard 20108 makes the following terminological distinction:

Distance Interpreting (interpreting of a speaker in a different location from that of the interpreter, enabled by information and communications technology): This umbrella term covers all scenarios in which one of the participants in the communication is not in the same room as the others. Among other things, this can simply mean that one meeting participant joins via video conference, or that you sit at the client’s premises and interpret a phone call.

Remote interpreting, which is what this post is about, is a “subtype” of distance interpreting. It means that the interpreters are remote, be it in the next room or in a completely different geographical location. On top of that, the participants themselves may well be in different places too (full remote).

Tatiana Kaplun covered many technical questions around remote interpreting, such as the requirements for transmission quality, in her great November 2018 blog post “A fresh look at remote simultaneous interpreting” about the RSI seminar held by AIIC Netherlands – well worth reading.

But what kept me and many other participants busy at the end of the first day, apart from technical feasibility (how good are the RSI software solutions? Where is the premium hub we are hoping for?), was the question of which remote simultaneous solution is best for our clients in which situation. We had discussed countless critical aspects that speak either for remote interpreting from easily accessible interpreting hubs or for cloud-based RSI software on the interpreter’s own computer. And since I believe there is nothing like a crisp decision matrix, I have tried to summarise in a table the advantages each solution offers:

Hub
(interpreting booths permanently installed in a central location, with RSI software and a stable, secure internet connection)

Home
(simultaneous interpreting via cloud-based software, using the interpreter’s own computer, headset, microphone and internet connection)

Confidentiality
Hub: Where industrial espionage is a concern or strict IT security standards apply, public cloud-based solutions are problematic for many companies; a hub with a secure connection and its own or an external protected server infrastructure makes sense.
Home: If no confidential information is exchanged, or the participants work with open systems and uncontrolled access points anyway (private computers, home offices), a secure connection is less critical.

Reliability
Hub: Sensible when a technical failure would cause high costs or major problems; hubs can more easily maintain two or three internet connections and thus practically rule out a connection failure.
Home: Sufficient when a technical failure would not cause major costs or problems and the participants may themselves be working with less reliable systems; very few interpreters keep a redundant second internet connection in their office.

ISO-like transmission quality and working environment
Hub: Flawless transmission quality (frequency band, latency, lip synchronicity) is necessary when a smooth, “unnoticed” interpretation is required.
Home: Suboptimal transmission quality can mean more frequent queries, lost content and necessary breaks; acceptable above all for assignments of short duration or low intensity.

Time
Hub: Saves time only if interpreters are available near the hub; advantage: the hub is always ready for use.
Home: Practical for short-notice assignments, especially if there is a pool of interpreters who can stand in for one another; note that technical checks, set-up and waiting times may still have to be factored in.

Cost
Hub: Advantageous if the interpreters live close to the hub (or at least closer to the hub than to the client).
Home: Practical if the interpreters live neither close to a hub nor close to the client.

Logistics
Hub: Practical when the meeting participants are at company sites offering hub-like conditions (transmission technology available).
Home: Practical when the participants are scattered across the globe and/or any number of listeners are to follow on their own devices (BYOD).

Teamwork
Hub: Teamwork in the booth is possible, just as in on-site events.
Home: Handing over the microphone, supporting each other (noting down numbers and names, terminology research) and coordinating are more difficult; eye contact has to be simulated by software functions.

New perspectives

Comparing the different aspects, it becomes clear that interpreting from a hub tends to reproduce – or is able to reproduce – the conditions of conventional conference interpreting, although this is by no means always the case. A purely virtual event held entirely without a conference room could quite sensibly be interpreted from a hub, while interpreters (like participants) could join a “normal” on-site event cloud-based from anywhere in the world. What does become clear is that events which are currently not interpreted simultaneously at all become more likely candidates thanks to remote technology – webinars, for example, or informal meetings between the larger sessions of international bodies.

The practical test

The most interesting and entertaining part was, of course, the practical test. Taking turns, all participants could slip into the roles of speaker, interpreter and listener and get to know every perspective.

Remote simultaneous interpreting in the hub

The booth feeling was completely normal and familiar at first, and we all quickly got used to the user interface. Operating the on-screen “buttons” for microphone, cough button and volume does take a bit of practice, though. Not that you couldn’t handle a slider with the mouse – but most of us (perhaps also out of habit) found blindly pressing tactile, physical buttons, without having to take our eyes off the meeting or the speaker, less attention-consuming. It was also interesting to try out different headphones, microphones and operating systems, which led to sometimes considerable differences in the transmitted volume.

Since we were sitting together in the booths in the hub, we did not try out how to hand over the microphone without eye contact. Kudo does not have a handover button, unlike other RSI programs, some of which offer quite clever solutions for this.

Kudo’s listener interface (Android)

Listening on our own smartphones worked quite well, apart from the volume problems mentioned above. The still unresolved question was rather: what happens when 50, 100 or 1,000 event participants bring their own devices and, after half a day of listening, all need to charge their batteries at the same time? First world problems, you might say – but that sums up quite well the impression this fascinating and intense RSI seminar left me with: great opportunities, fascinating technology, but also some critical technical details that (still) trip up the enthusiasm.


How to Build Your Self-Translating Glossary in Google Sheets

I am certainly not saying that Google can create your glossaries for you when preparing for a technical conference on African wildlife or nanotubes. But if you know your languages well enough to tell a bad translation from a good one, it may still be a time-saver. Especially for those words you don’t use every day, like Salpetersäure or grey crowned crane, and that you only put into your glossary to trigger your memory.

To integrate the automatic translation function into a glossary in Google Sheets, you can either enter the translation formula directly into the respective cells, or use the nice little add-on Translate My Sheet.

This is what the formula looks like:

=GOOGLETRANSLATE(A2,"en","es")

with A2 being the cell containing the original text, "en" the source language and "es" the target language.
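If you fill the formula down a whole column, empty or untranslatable cells produce error values. A small variation – assuming, as above, that your source terms sit in column A – wraps the call in IFERROR so the glossary column stays clean:

```
=IFERROR(GOOGLETRANSLATE(A2,"en","es"),"")
```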

Here is a short demo video on how to use the GoogleTranslate formula in a glossary:

If you want to use Translate My Sheet, you first need to go to the Add-ons menu and add it to your add-ons (“get add-ons”, search for the add-on, add it). The user interface looks like this:

This video shows an example of Translate My Sheet translating from English into Spanish:

And this one shows English into German:

I found the quality similar to that of the Microsoft Translator – not too bad as long as your language combination includes English.

And don’t forget: Never translate (or even handle) your client’s sensitive data in Google without their permission!



Word Clouds – much nicer than Word Lists

I have been wondering for quite some time if word lists are the best thing I can come up with as a visual support in the booth. They are not exactly appealing to the eye, after all …

So I started to play around with word cloud generators a bit to see if they are of any use. Here comes a short summary of my conclusions:

The tool I liked most was WordItOut by Enideo from the UK. You can copy and paste text or tables easily and create nice word clouds in no time.

I tested it with three kinds of documents:

  1. My personal glossary
  2. Plain text
  3. Term extraction results from SketchEngine

Personal short glossary

I like to create a shortlist of my most-important-to-remember terms and have it on display permanently in the booth. Usually, there are no more than 10 to 20 terms on this list. So I copied in a short sample glossary, with numbers from 1 to 10 added after the terms (technically indicating frequency, though in my case meaning importance), and the result was this:

OK, it’s monolingual, but why not add some colour to the booth and print a second one?

Of course it does not help if you don’t know the equivalents. But especially when working mainly into one target language, some colleagues tend to write down terms in their target language anyway (more insight about this subject to be published in autumn!).

And if you really like a fancy booth decoration, you can always do some manual work and create a table with the equivalents in your working languages in one field

and get your bilingual word cloud:

By the way, you can choose the font and colour or simply press the “regenerate” button again and again until you like what you get.

My conclusion: I love it! Easy enough to use from time to time as a nice booth decoration – or use it as a desktop wallpaper, for that matter.

Plain text

When using plain text, words are displayed in varying sizes depending on their frequency in the text. While this is not as useful as term extraction, where terms are extracted based on much more complicated algorithms, it still gives you an idea of what the most frequent words in the text are. This can be useful, for example, for target language vocabulary activation (or when learning a new language?).

One downside, however, is that multi-word terms like “circular economy” are torn apart, so you would need to post-edit the text, adding a ~ between the words you wish to keep together.

Another problem is that when using any language other than English, no stop word list is pre-determined (you can add one, though). This means that, for example in German, you end up getting a cloud of der, die, das, und, er, sie, es, aber, weil, doch.
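Both issues can be handled with a bit of pre-processing before pasting the text into the generator. Here is a minimal sketch in Python; the stop-word sample and the multi-word term list are illustrative only, not complete:

```python
from collections import Counter

# Tiny illustrative German stop-word sample; a real list would be much longer.
STOPWORDS = {"der", "die", "das", "und", "er", "sie", "es", "aber", "weil", "doch"}

# Multi-word terms to keep together, joined with ~ as the generator expects.
MULTIWORD = ["circular economy"]

def prepare_for_cloud(text):
    """Protect multi-word terms with ~, then drop stop words and count the rest."""
    for term in MULTIWORD:
        text = text.replace(term, term.replace(" ", "~"))
    words = [w for w in text.lower().split() if w not in STOPWORDS]
    return Counter(words)

freqs = prepare_for_cloud("die circular economy und die Kreislaufwirtschaft")
print(freqs.most_common())
```

The resulting frequency counts can then be pasted into the generator instead of the raw text.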

My conclusion: A lot of potential, but few real use cases.

Term extraction results

The nicest thing is of course to have an extraction tool with a built-in word cloud generator, like SDL Trados Studio has.

But if you use other term extraction tools, you can still copy the extraction results into the word cloud generator. I used a term list extracted by SketchEngine, copied in the extracted terms plus their scores, and the result was this:

Multi-word terms are no problem at all, and the size of the terms varies according to the scores calculated by SketchEngine for each term. Much more relevant than frequency in most cases …
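Converting extraction results into weighted cloud input can be done mechanically. A short sketch, assuming the results come as (term, score) pairs and that the generator accepts “term:weight” lines (the exact input format is an assumption – check what your generator expects):

```python
def to_cloud_input(extracted):
    """Turn (term, score) pairs into 'term:weight' lines,
    protecting multi-word terms with ~ so they are not torn apart."""
    return [f"{term.replace(' ', '~')}:{round(score)}" for term, score in extracted]

lines = to_cloud_input([("circular economy", 42.7), ("waste stream", 17.2)])
print("\n".join(lines))
```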

My conclusion: Very nice!

PS: If you are interested in terminology extraction for interpreters, Josh Goldsmith is conducting an interesting study on this subject. First results are expected to be presented in November at the 2nd Cologne Conference on Translation, Interpreting and Technical Documentation (CGN18).

 


Microsoft Office Translator – Can it be of any help in the booth?

When it comes to Computer-Aided Interpreting (CAI), a question widely discussed in the interpreting community is whether information provided automatically by a computer in the booth can be helpful for simultaneous interpreters, or whether it is rather a distraction. Or, to put it differently: would the cognitive load of simultaneous interpreting be increased by the additional input, or decreased by providing helpful information that the interpreters would otherwise have to retrieve from their long-term memory?

Of course, interpreting is not about translating single words, but about ideas being understood in one language and then expressed in another. But on the other hand, we all (conference interpreters or not) know the occasional tip of the tongue, when we just can’t think of the German word for, say, nitric acid, and might appreciate a little trigger to remember a particular word or expression.

One scenario of CAI often discussed is that the source speech is analysed by a speech recognition software, critical terminology is extracted and, based on the interpreter’s glossary, a dictionary or other sources, the equivalent in the target language is displayed on the screen. This technology still has many limitations, especially the speed and quality/reliability of the speech recognition function. But while we are waiting for this solution to become market-ready, I have recently come to like a tool which is altogether quite different in its original aim but can be used for a similar purpose: The Microsoft Translator.
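The matching step of that scenario can be pictured in a few lines of Python. This is a deliberately naive sketch: the glossary entries are invented examples, and real CAI tools use far more sophisticated matching than a plain substring test:

```python
# Hypothetical interpreter's glossary: source term -> target term.
GLOSSARY = {
    "nitric acid": "Salpetersäure",
    "sodium": "Natrium",
    "forklift truck": "Gabelstapler",
}

def prompt_terms(transcript):
    """Return (source, target) pairs for glossary terms found in the
    recognised speech, ready to be displayed on the interpreter's screen."""
    text = transcript.lower()
    return [(src, tgt) for src, tgt in GLOSSARY.items() if src in text]

print(prompt_terms("The tank is filled with diluted nitric acid."))
# → [('nitric acid', 'Salpetersäure')]
```

In a real system, the transcript would arrive incrementally from the speech recognition engine, and the hits would be pushed to the interpreter's display as the speaker talks.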

In Powerpoint, for example, by just clicking on a text element the translator window opens next to the slide, and remains open. It translates complete texts or single words, and it has turned out to be quite useful for me in some situations, especially when interpreting presentations based on Powerpoint files I had not had the time to read before the meeting.

But would I say that the Microsoft Translator is a tool I consider a valuable support in the booth? The answer clearly is: it depends.

Quality varies considerably between language pairs. While English-Spanish seems to be one of the well-developed “premium” combinations with sometimes impressive results, French-German did not really convince me.

You can never rely on the system to understand the message. And running a mental plausibility check in parallel to the normal interpreting job plus reading the translation on the screen is not an option.

But: If you manage to use the translator simply to prompt your brain when you are searching for a particular word, preferably one that leaves no room for mistranslations (like sodium, elderflower or forklift truck), it may make your life easier.

The nice thing is that this translator, which can also be used as a dictionary, runs within Powerpoint, so you can read your presentation and pre-translate texts very easily. It does not involve any typing or skipping between different windows.

After all, we are still at the beginning of what CAI will bring. The Microsoft Translator is an easily accessible tool, nice enough to play around with and get a flavour of what language technology holds for conference interpreters. And I am really curious to hear what your experience is!


You love keyboard shortcuts? Meet GT4T!

GT4T – key shortcuts made for translators and interpreters


If you asked me, everyone should learn key shortcuts at school together with their ABC. Once memorised, they are so convenient to use … unlike buttons on the screen, you just feeeeel them without having to look. It seems like this need for haptic feedback is quite human, by the way, as researchers are working on virtually emulating haptic feedback, and not only on touchscreens, for that matter.

Now, at least for translators and interpreters working with Windows, a new dimension is brought to the world of keyboard shortcuts: With GT4T, Cao Shou Guang (aka Dallas) from China has created a tool which is both simple and brilliant. By pressing CTRL+D, GT4T looks up words in different online dictionaries like Linguee, Glosbe, Microsoft Terminology, Wordreference and others, plus your personal glossary, and shows the results in one small popup window. If you want to see the search results right on the respective website (which is quite nice especially for Linguee with its valuable context display), you simply hit the O key.

Pressing CTRL+Win+D lets you look up the selected word in your GT4T glossary only, or open and edit this glossary – a very simple table indeed – in Excel. You can easily copy and paste your existing glossaries into this file, which comes in really handy when preparing for a conference, and translations found with CTRL+D (or otherwise) can be added to this personal glossary by pressing the A key – and of course it all works the other way around, too.
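Since the personal glossary is just a two-column table, the lookup behind CTRL+Win+D can be pictured as a simple case-insensitive match. The file layout and matching below are my own illustrative assumptions, not GT4T's actual implementation:

```python
import csv
import io

# A two-column glossary table as it might be copied from Excel (CSV here).
GLOSSARY_CSV = """nitric acid,Salpetersäure
elderflower,Holunderblüte
forklift truck,Gabelstapler
"""

def lookup(term):
    """Return the stored translation for a selected term, or None if absent."""
    for source, target in csv.reader(io.StringIO(GLOSSARY_CSV)):
        if source.lower() == term.lower():
            return target
    return None

print(lookup("Elderflower"))
# → Holunderblüte
```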

What is more, pressing CTRL+J replaces the selected text by a machine translation from a preset source of your choice (e.g. Google Translate, Microsoft Translator, DeepL and others). Or you press CTRL+Win+J to get a list of translations from the different systems. Maybe that’s not exactly the most important feature in the life of a conference interpreter, but still I find it extremely interesting to check and compare the translations of the different machine translation systems from time to time.

GT4T works in any program, be it MS Word, Excel, Access, your browser, Google Docs or Sheets, or a Translation Memory system. This is great for conference interpreters, who have to switch between programs all the time as documentation may come in any possible format the digital world has to offer.

Unfortunately, this tool only works with language pairs, so if you interpret from or into more than one language (or you work back and forth between German and Spanish, but the documents are in English), you have to change the settings all the time. It is not much hassle at the desk, but I will probably find it distracting when working in the booth. I sorely missed IATE as an online dictionary option when I first tested GT4T, but when I emailed Dallas about it, it took him only a few days to implement my suggestion. I would also like to be able to look up phrases consisting of more than one word (e.g. “Best Available Techniques” or “long-chain polyunsaturated fatty acid”). Theoretically, GT4T can look these up; the problem seems rather to be that not all online dictionaries contain such complex entries. And that is about all I have to criticise after my first round of testing. After all, CTRL+D comes completely naturally to me – what more could one ask for?

More information and download:

Tutorial


GT4T – Tastenkombinationen für Übersetzer und Dolmetscher

Tastenkombinationen sollten meiner Meinung nach in der Schule gleich zusammen mit dem ABC gelernt werden – einmal verinnerlicht, kann man sie im Unterschied zu einer Schaltfläche auf dem Bildschirm ohne viel Nachdenken blind spüren erspüüüüüüren. Offensichtlich zutiefst menschlich, dieses Bedürfnis nach haptischem Feedback, an dessen virtueller Nachbildung man nicht nur für Touchscreens eifrig forscht (http://www.zeit.de/2018/09/haptik-digitalisierung-forschung-sinneseindruecke).

Nun tun sich es zumindest für die Windows-Nutzer unter den Sprachmittlern in Sachen noch einmal ganz neue Welten auf: Cao Shou Guang (auch Dallas genannt) aus China hat mit GT4T ein so einfaches wie geniales Tools entwickelt – entdeckt im MDÜ 1/2018 –, welches das Nachschlagen in verschiedenen Online-Wörterbüchern und maschinellen Übersetzungssystemen per Tastenkombination in jeder beliebigen Programmumgebung ermöglicht. Egal, ob in Word, Excel, Access, Google Sheets, Airtable, Powerpoint, Browser, oder in einer Translation-Memory-Umgebung:

Mit STRG+D erhält man für das markierte Wort Nachschlageergebnisse aus verschiedenen Online-Wörterbüchern (Linguee, Glosbe, Microsoft Terminology, Wordreference u.a.) oder dem eigenen GT4T-Glossar in einem kleinen Popup-Fenster angezeigt. Möchte man die Ergebnisse direkt auf der entsprechenden Webseite sehen (was bei Linguee mit den Kontextinformationen nicht zu vernachlässigen ist), drückt man einfach die O-Taste.

Mit STRG+Win-D kann man gezielt im GT4T-Glossar nachschlagen. Dieses GT4T-eigene Glossar ist eine einfache Tabelle und in der Dolmetschvorbereitung sehr praktisch, denn dort kann man mit STRG-D (oder anderweitig) gefundene Übersetzungen durch Drücken der Taste A hinzufügen. ­

Richtig nett wird es, wenn ich schon ein Glossar bspw. für einen bestimmten Kunden besitze, das zuvor in das GT4T-Glossar einkopiere und ganz bequem nachschlagen kann. Oder mein mit GT4T erstelltes Glossar den Kollegen schicken oder mit STRG+C und STRG+V in das Google-Teamglossar (link) oder eine andere tabellenartige Datenbank einfügen kann.

Mit STRG+J wird der ausgewählte Text sofort durch eine maschinelle Übersetzung aus einer vorher ausgewählten Quelle ersetzt.

Mit STRG+Win+J erhält man eine Auswahl verschiedener MT-Vorschläge, etwa von Google Translate, Microsoft Translator, DeepL oder anderen. Und wenn dies im Alltag eines Dolmetschers vielleicht nicht das wichtigste aller Features ist, so finde ich es doch spannend, die Varianten der verschiedenen Systeme zu vergleichen.

Leider funktioniert das Tool ähnlich wie ein TM nur mit Sprachenpaaren, d.h. wenn man mit mehreren Ausgangs- oder Zielsprachen dolmetscht oder auch nur das Vorbereitungsmaterial in unterschiedlichen Sprachen vorliegt, muss man die Sprachen umstellen Für den Schreibtisch ein bisschen lästig, aber ok – nur beim Simultandolmetschen wahrscheinlich etwas störend. Ich würde auch gerne nach Ausdrücken suchen, die aus mehr als einem Wort bestehen, so etwa „langkettige mehrfach ungesättigte Fettsäuren“ oder „beste verfügbare Technik“. Aber da liegt das Problem wohl weniger bei GT4T, sondern bei den Wörterbüchern, die nicht immer solche komplexen Mehrwortausdrücke im Angebot haben. Bei der Auswahl der Online-Wörterbücher habe ich beim Testen IATE vermisst. Aber als ich das Dallas schrieb, hatte er die Änderung binnen weniger Tage umgesetzt! Und viel mehr finde ich spontan nicht zu meckern. STRG+D ist mir jedenfalls schon jetzt in Fleisch und Blut übergegangen – und was will man schon mehr?

More info and download:

Tutorial

 

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf. She has been dedicated to knowledge management since the mid-1990s.

Keyboard shortcuts made for translators and interpreters

If you ask me, keyboard shortcuts should be taught together with the alphabet. Once memorised, they are super useful ... you can simply feel your way blindly, without having to think about it or look for anything on the screen. Apparently this desire for haptic feedback is very human, as scientists are working on emulating it virtually in many situations, and not only on touch screens.

Now, at least for the Windows users among translators and interpreters, there is good news: Cao Shou Guang (alias Dallas) from China has created a tool as simple as it is brilliant: just by pressing CTRL+D, the GT4T program looks up the selected word, querying several online dictionaries at once, among them Linguee, Glosbe, Microsoft Terminology and WordReference, and displays the different translations in a small pop-up window. To open the corresponding web page with the search results (very useful for seeing the contexts on Linguee.com), you simply press the O key.
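The CTRL+D idea, querying several online dictionaries at once for the selected term, can be sketched roughly as follows. The URL patterns and the choice of dictionaries below are illustrative assumptions of mine, not taken from GT4T.

```python
import webbrowser
from urllib.parse import quote

# Illustrative URL patterns only; the real services may use different ones.
DICTIONARIES = {
    "Linguee": "https://www.linguee.com/english-spanish/search?query={q}",
    "Glosbe": "https://glosbe.com/en/es/{q}",
}

def lookup_urls(term):
    """Build one search URL per configured online dictionary."""
    q = quote(term)  # percent-encode spaces and special characters
    return {name: pattern.format(q=q) for name, pattern in DICTIONARIES.items()}

# Pressing O in GT4T opens the result page; the rough equivalent here would be:
# webbrowser.open(lookup_urls("fatty acid")["Linguee"])
```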

With CTRL+Win+D you can look up the selected word in GT4T's own glossary only, or open that glossary and edit it in Excel. It is a very simple table, and it is very easy to copy and paste existing glossaries into it, which is extremely useful when preparing texts for a conference. Terms found via the CTRL+D search (or in any other way) can also be added very easily by pressing the A key. And of course you can also copy the GT4T glossary the other way round into another table, such as a shared glossary. GT4T works with any program, be it MS Word, Excel, Access, the browser, Google Docs or Sheets, or a translation memory program.

What's more: with CTRL+J, the selected text is directly replaced by a machine translation from a preselected system (Google Translate, Microsoft Translator, DeepL and others). Or you press CTRL+Win+J to see a list of possible translations from different machine translation systems. And although this is not exactly the most important function in the life of a conference interpreter, from time to time I find it very interesting to see and compare what the different engines suggest as a translation.

Unfortunately, this tool works with language pairs, so when you work with more than two languages at a conference (or work between two languages and the documentation comes in a third language, such as English), you have to switch the languages in the settings. Although this is not at all complicated, in the booth it is indeed a little annoying.

What I missed most in this program was IATE as an online dictionary. But when I suggested it to Dallas, he had it implemented just a few days later! Another thing I noticed was that it is difficult to find multi-word expressions, such as "long-chain polyunsaturated fatty acids" or "Best Available Technique". But apparently the problem is that dictionaries often simply do not contain many of these rather complex expressions. And for now I cannot think of anything else to criticise after my first tests. In the end, CTRL+D has already become fully automatic for me – what more could you ask for?

More info and download:

Tutorial


 


Selection of GT4T shortcuts

CTRL+J – replace text by MT

CTRL+Win+J – check several MT suggestions

CTRL+D – check several online dictionaries

CTRL+Win+D – SimpleGlossary feature

Useful key shortcuts for anyone

CTRL+S – Save

CTRL+F – Find

CTRL+Z – Undo

CTRL+A – Select all

CTRL+C – Copy

CTRL+X – Cut

CTRL+V – Paste

Alt+Tab – switch to the next open program window

Speechpool, InterpretimeBank & InterpretersHelp – the Perfect Trio for Deliberate Practice in Conference Interpreting

After testing the practice module of InterpretersHelp last month, the whole practice thing got me hooked. Whilst InterpretersHelp gives us the technical means to record our interpretation together with the original and receive feedback from peers, there are two more platforms out there which cover further aspects of the practice workflow: InterpretimeBank and Speechpool.

To start with, you need source material to interpret – which is where Sophie Llewellyn-Smith's fabulous Speechpool comes into play. Just like InterpretimeBank and InterpretersHelp, it is a platform for practicing conference interpreters. It serves for exchanging speeches suitable for this purpose, i.e. you can upload speeches you recorded yourself and listen to those others recorded. Of course, there are zillions of speeches out there on the internet. But to practice purposefully, e.g. on everyday subjects like, say, sourdough bread, or to dive into more technical subjects like ballet or fracking, it is sometimes difficult to find suitable speeches in the required source language. And this is where Sophie's great idea of pooling speeches made by interpreters for interpreters kicks in. It currently has over 200 German, Spanish and French speeches and more than 900 in English. Obviously, the whole platform only works if many people participate actively and make contributions. Technically, you upload your video on youtube.com and add it to Speechpool using the YouTube URL. This makes it a perfect match for InterpretersHelp, which also uses YouTube to handle the source speech videos.
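Since speeches are added to Speechpool via their YouTube URL, a platform like it presumably only needs the video ID behind that URL. Extracting the ID from the common URL shapes can be sketched like this (my own illustration, not Speechpool's code):

```python
import re

def youtube_video_id(url):
    """Extract the 11-character video ID from common YouTube URL shapes
    (watch?v=..., youtu.be/..., embed/...); returns None if none is found."""
    match = re.search(r"(?:v=|youtu\.be/|embed/)([A-Za-z0-9_-]{11})", url)
    return match.group(1) if match else None
```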

Once you have found your speech to practice with, there's InterpretimeBank, a platform to find peer conference interpreters from all over the world to pair up with and exchange feedback. My colleague (and former student) Fernanda Vila Kalbermatten drew my attention to InterpretimeBank recently:

It is a highly recommendable platform for continuing to practice after finishing your studies. The biggest advantage is that it connects us with interpreters in other countries, who act as an audience for consecutives and can give us feedback on our retours. The community needs to grow, so I invite all practicing interpreters to take part.

InterpretimeBank is still in its initial phase, so the community needs to grow. Once you have found your perfect practice buddy, you can start interpreting, be it doing consecutive interpretation over the web, or exchanging your simultaneous interpretations using the fabulous practice module of InterpretersHelp, giving and receiving feedback through its integrated feedback function.

How much more could you ask for to keep practicing after university, brush up a language combination or just keep your skills sharp? For, as Karl Anders Ericsson puts it, an automated level of acceptable performance is not improved by just going on for years. In fact, it may even deteriorate. But that’s for next time …


About the author

Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.