Extract Terminology in No Time | OneClick Terms | Ruckzuck Terminologie extrahieren

[for German scroll down] What do you do when you receive 100 pages to read five minutes before the conference starts? Right, you throw the text into a machine and get out a list of technical terms that give you a rough overview of what it’s all about. Now finally, it looks like this dream has come true.

OneClick Terms by SketchEngine is a browser-based (a big like) terminology extraction tool which works really swiftly. It has all it takes and nothing more (another big like): Upload – Settings – Results.

Once you are logged in for your free trial, OneClick Terms accepts the formats tmx, xliff (2.x), pdf, doc(x), html and txt. The languages supported are Czech, German, English, Spanish, French, Italian, Japanese, Korean, Dutch, Polish, Portuguese, Russian, Slovak, Slovenian, Chinese Simplified and Chinese Traditional.

The settings in my opinion don’t really need to be touched. They include:

  • how rare or common should the extracted terms be
  • would you like to see the word form as it appears in the text or the base form
  • how often should a term candidate occur in the text in order to make it to the list of results
  • do you want numbers to appear in your results
  • how many terms should your list of results contain

When I tried OneClick Terms, it delivered absolutely relevant results at the first go. I uploaded an EU text on the free flow of non-personal data (pdf of about 100 pages) at about 8:55 am and the result I got at 8:57, displayed right on the same website, looked like this (and yes, the small W icons behind the words are links to related Wikipedia articles!):

It actually required four clicks rather than one click, but the result was worth the effort. There isn’t a lot of „noise“ (irrelevant terms) in the term candidate list – exactly the kind of noise that often put me off in the past when I tried to use term extraction tools to prepare for an interpreting assignment. In the meeting where I tested OneClick Terms, the only term I ended up missing in the results was the regulatory scrutiny board. Interestingly, it was also missing from the list I had obtained from a German text on the same subject (Ausschuss für Regulierungskontrolle). But all the other relevant terms that popped up during the meeting were there. What is more, by quickly scanning the extraction list in my target language, German, I could activate a lot of terminology I would otherwise definitely have had to think twice about while interpreting. So to me it definitely is a very efficient way of reducing the cognitive load in simultaneous interpreting.

The results list can be downloaded as a txt file, but copy & paste into MS Excel, for example, works just as well, and it puts both single-word and multi-word terms into the same column. After unmerging all cells, the terms can easily be sorted by frequency, which makes your five-minute emergency preparation almost perfect (as perfect as a five-minute preparation can get, that is).
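If you prefer to skip the spreadsheet gymnastics, a few lines of script will do the sorting for you. The snippet below is only a minimal sketch and assumes the exported txt file is tab-separated with one term candidate and its frequency per line – the actual export layout may well differ, so adjust the parsing accordingly:

```python
# Sort a term extraction export by frequency (highest first).
# Assumption: each line holds "term<TAB>frequency"; adjust the delimiter/columns
# to whatever the actual export looks like.
import csv

def sort_terms_by_frequency(infile="oneclick_terms.txt", outfile="terms_sorted.txt"):
    terms = []
    with open(infile, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) >= 2 and row[1].strip().isdigit():
                terms.append((row[0].strip(), int(row[1])))
    terms.sort(key=lambda t: t[1], reverse=True)   # most frequent terms first
    with open(outfile, "w", encoding="utf-8") as f:
        for term, freq in terms:
            f.write(f"{term}\t{freq}\n")
    return terms

if __name__ == "__main__":
    for term, freq in sort_terms_by_frequency()[:20]:
        print(freq, term)
```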

Furthermore, even if you do have enough time for preparation, extracting and scanning the terminology as a first step may help you to focus on the substance when reading the text afterwards.

There is a free one-month trial; after that, the service can be subscribed to from 100 EUR/year (or 12.32 EUR/month) plus VAT. It includes many other features, like bilingual corpus building – but that’s a different story.

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.


Noch fünf Minuten bis zum Konferenzbeginn und ein hundertseitiges pdf zur Vorbereitung schneit (hoffentlich elektronisch) in die Kabine. Was macht man? Klar: Text in eine Maschine werfen, Knopf drücken, Terminologieliste wird ausgespuckt. Damit kann man sich dann zumindest einen groben Überblick verschaffen … Nun, es sieht so aus, als sei dieser Traum tatsächlich wahr geworden!

OneClick Terms von SketchEngine ist ein browser-basiertes (super!) Terminologieextraktionstool, das extrem einfach in der Handhabung ist. Es hat alles, was es braucht, und mehr auch nicht (ebenfalls super!). Upload – Einstellungen – Ergebnisse. Fertig.

Wenn man sich mit seinem kostenlosen Testaccount eingewählt hat, kann man eine Datei im folgenden Format hochladen: tmx, xliff (2.x), pdf, doc(x), html, txt. Die unterstützten Sprachen sind Tschechisch, Deutsch, Englisch, Spanisch, Französisch, Italienisch, Japanisch, Koreanisch, Niederländisch, Polnisch, Portugiesisch, Russisch, Slowakisch, Slowenisch, Chinesisch vereinfacht und Chinesisch traditionell.

Die Einstellungen muss man zunächst einmal gar nicht anfassen. Möchte man es doch, kann man folgende Parameter verändern:

  • wie häufig oder selten sollte der extrahierte Terminus sein
  • soll das Wort in der (deklinierten oder konjugierten) Form angezeigt werden, in der es im Text vorkommt, oder in seiner Grundform
  • wie oft muss ein Termkandidat im Text vorkommen, um es auf die Ergebnisliste zu schaffen
  • sollen Zahlen bzw. Zahl-/Buchstabenkombinationen in der Ergebnisliste erscheinen
  • wie lang soll die Ergebnisliste sein

Als ich OneClick Terms getestet habe, bekam ich auf Anhieb äußerst relevante Ergebnisse. Ich habe um 8:55 Uhr einen EU-Text über den freien Verkehr nicht-personenbezogener Daten hochgeladen (pdf, etwa 100 Seiten) und bekam um 8:57 Uhr gleich im Browser das folgende Ergebnis angezeigt (und ja, die kleinen Ws hinter den Wörtern sind Links zu passenden Wikipedia-Artikeln!):

Es waren zwar eher vier Klicks als EinKlick, aber das Ergebnis war die Mühe wert. Es gab wenig Rauschen (irrelevante Termini) in der Termkandidatenliste – einer der Gründe, die mich bislang davon abgehalten haben, Terminologieextraktion beim Dolmetschen zu nutzen. In der Sitzung, bei der ich OneClick Terms getestet habe, fehlte mir am Ende in der Ergebnisliste nur ein einziger wichtiger Begriff aus der Sitzung, regulatory scrutiny board. Dieser Ausschuss für Regulierungskontrolle fehlte interessanterweise auch in der Extraktionsliste, die ich zum gleichen Thema anhand eines deutschen Textes erstellt hatte. Alle anderen relevanten Termini, die während der Sitzung verwendet wurden, fanden sich aber tatsächlich in der Liste. Und noch dazu hatte ich den Vorteil, dass ich nach kurzem Scannen der Liste auf Deutsch, meiner Zielsprache, sehr viele Termini schon aktiviert hatte, nach denen ich ansonsten während des Dolmetschens sicher länger in meinem Gedächtnis hätte kramen müssen. Für mich definitiv ein Beitrag zur kognitiven Entlastung beim Simultandolmetschen.

Die Ergebnisliste kann man als txt-Datei herunterladen, aber Copy & Paste etwa in MS-Excel hinein funktioniert genauso gut. Man hat dann auch gleich die Einwort- und Mehrwort-Termini zusammen in einer Spalte. Wenn man den Zellenverbund aufhebt, kann man danach auch noch die Einträge bequem nach Häufigkeit sortieren. Damit ist die Fünf-Minuten-Notvorbereitung quasi perfekt (so perfekt, wie eine fünfminütige Vorbereitung eben sein kann).

Aber selbst wenn man jede Menge Zeit für die Vorbereitung hat, kann es ganz hilfreich sein, bevor man einen Text liest, die vorkommende Terminologie einmal auf einen Blick gehabt zu haben. Mir zumindest hilft das dabei, mich beim Lesen stärker auf den Inhalt als auf bestimmte Wörter zu konzentrieren.

Man kann OneClick Terms einen Monat lang kostenlos testen, danach gibt es das Abonnement ab 100,00 EUR/Jahr (oder 12,32 EUR/Monat) plus MWSt. Es umfasst noch eine ganze Reihe anderer Funktionen, etwa auch den Aufbau zweisprachiger Korpora – aber das ist dann wieder eine andere Geschichte.

Über die Autorin:
Anja Rütten ist freiberufliche Konferenzdolmetscherin für Deutsch (A), Spanisch (B), Englisch (C) und Französisch (C) in Düsseldorf. Sie widmet sich seit Mitte der 1990er dem Wissensmanagement.

Paperless Preparation at International Organisations – an Interview with Maha El-Metwally

Maha El-Metwally has recently written a master’s thesis at the University of Geneva on preparation for conferences of international organisations using tablets. She is a freelance conference interpreter for Arabic (A), English (B), French and Dutch (C), based in Birmingham.

How come you know so much about the current preparation practice of conference interpreters at so many international institutions?

The answer is quite simple really: I freelance for all of them! I am also pro paperless environments for obvious environmental and practical reasons. So even if some organisations offer a paper alternative (ILO, IMO, UNHQ, WFP) I go for the electronic version. Paperless portals of international organisations may differ in layout and how the information is organised but they essentially aim to achieve the same thing. Some organisations operate a dual document distribution system (paper and digital) with the aim of phasing out the former over time.

The European Parliament is already on its second paperless meeting and document portal. It used to be called Pericles and now it is called MINA, the Meeting Information and Notes Application. This required a bit of practice to become familiar with the new features.

I recently heard someone working in one of these paperless environments complain about the paperless approach, saying that they often struggle to find their way through a 400-page document quickly. My first reaction was to say that hitting CTRL-F or CTRL-G is an efficient way to get to a certain part of a text quickly. But maybe there is more to it than just shortcuts. What is it, in your experience, that makes it difficult for colleagues to find their way around on a tablet or laptop computer?

I think that tablets represent a change and people in general resist change. It could be that we are creatures of habit. We are used to a certain way of doing things and some of us may have difficulty coping with all the changes coming our way in terms of technology developments. It could also be that some interpreters do not see the point of technology, so they are not motivated to change something that works for them.

How is the acceptance of going paperless in general in the institutions you work for?

This depends on individual preferences. Many colleagues still prefer paper documents but I also see more and more tablets appearing in the booths. Some organisations try to accommodate both preferences. The ILO operates a dual distribution system as a step towards going completely paperless. Meeting documents are available on the organisation’s portal but are also printed and distributed to the booths. The same goes for the IMO where the interpreters are given the choice of paper or electronic versions of the documents or both.

Right, that’s what they do at SCIC, too. I take it that you wrote your master’s thesis about paperless preparation, is that right? Was the motivational aspect part of it? Or, speaking about motivation: What was your motivation at all to choose this subject?

Yes, this is correct. I am very much of a technophile and anything technological interests me. I was inspired by a paperless preparation workshop I attended at the European Parliament. It made sense to me as a lot of the time, I have to prepare on the go. It happens that I start the week with one meeting then end the week with another. Carrying wads of paper around is not practical. Having all meeting documents electronically in one place is handy. It happens a lot that I receive meeting documents last minute. There is no time to print them. So I learned to read and annotate the documents on apps on my tablet.

So while you personally basically did „learning by doing“, your researcher self tried to shed some more scientific light on the subject. Is that right? Would you like to describe a bit more in detail what your thesis was about and what you found was the most interesting outcome?

My thesis looked at training conference interpreting students to prepare for conferences of international organisations with the use of tablets. I noticed from my own experience and from anecdotes of older colleagues that meetings were getting more and more compressed. As a result, especially in peak seasons, interpreters may start the week with one conference and end it with another. Preparation on the go became a necessity. In addition, there are several international organisations that are moving towards paperless environments. Therefore, I think it is important for students to be introduced to paperless preparation at an early stage in their training for it to become second nature to them by the time they graduate. And what better tool to do that than the tablet? I created a course to introduce students to exactly that.

So when you looked at the question, was your conclusion that tablets are better suited than laptop computers? Currently, it seems to me that on the private market almost everyone uses laptops and at the EU, most people use tablets. I personally prefer a tablet for consecutive, but a laptop in the booth, as I can look at my term database, the internet and room documents at the same time more conveniently. I also blind-type much faster on a „real“ keyboard. I hope that the two devices will sooner or later merge into one (i.e. tablets with decent hard drives, processors and operating systems).

Now, from your experience, which of the two options would you recommend to whom? Or would you say it should always be tablets?

I prefer the tablet when travelling as:
– it is quieter in the booth (no tapping or fan noise),
– using an app like side by side, I can split the screen to display up to 4 apps/files/websites at the same time so the laptop has no advantage over the tablet here,
– it is lighter.

You have created a course for students. What is it you think students need to be taught? Don’t they come to the university well-prepared when it comes to handling computers or tablets?

The current generation of students is tech-savvy, so they are more likely to embrace tablets and go fully digital. The course I put together for teaching preparation with tablets relies on the fact that students already know how to use tablets. The course introduces the students to the paperless environments of a number of international organisations; it looks at apps for the annotation of different types of documents, glossary management and more efficient Google searching, among other things.

I also like to use the touchscreen of my laptop for typing when I want to avoid noise. But compared to blind-typing on a „normal“ keyboard, I find typing on a touchscreen a real pain. My impression is that when I cannot feel the keys under my fingers, I will never be able to learn how to type, especially blind-type, REALLY quickly and intuitively … Do you know of any way (an app, a technique) of improving typing skills on touchscreens?

I’m afraid I don’t really have an answer to that question. I am moving more and more towards dictating my messages instead of typing them and I am often flabbergasted at how good the output is, even in Arabic!

Talking about Arabic, is there any difference when working with different programs in Arabic?

Most of the time, I can easily use Arabic in different apps. The biggest exception is Microsoft Office on Mac. Arabic goes berserk there! I have to resort to Pages or TextEdit then. Having said that, a colleague just mentioned yesterday that this issue has been dealt with. But I have to explore it.

As to glossary management, not all terminology management tools for interpreters run on tablets. Which one(s) do you recommend to your students or to colleagues?

I use and recommend Interplex. It has a very good iPad version. The feature I like most about it is that you can search across your glossaries. I can do that while working and it can be a life saver sometimes!

If I wanted to participate in your seminar, where could I do that? Do you also do webinars?

I offer a number of seminars on technology for interpreters to conference interpreting students at some UK universities. I will keep you posted. I also have an upcoming eCPD webinar on September 19th on a hybrid mode of interpreting that combines the consecutive and simultaneous modes.

That sounds like a great subject to talk about next time!

InterpretBank 4 review

InterpretBank by Claudio Fantinuoli, one of the pioneers of terminology tools for conference interpreters (or CAI tools), was already full to the brim with useful functions and settings that hardly any other tool offers, even before the new release. It was presented in one of the first articles of this blog, back in 2014. So now I was all the more curious to find out about the 4th generation, and I am happy to share my impressions in the following article.

Getting started

It took me 2 minutes to download and install the new InterpretBank and set my working languages (1 mother tongue plus 4 languages). My first impression was that the user interface looked quite familiar: language columns (still empty) and the familiar buttons to switch between edit, memorize and conference mode. The options menu lets you set display colours, row height and many other things. You can select the online sources for looking up terminology (linguee, IATE, LEO, DICT, Wordreference and Reverso) and definitions (Wikipedia, Collins, Dictionary.com) as well as set automatic translation services (search IATE/old glossaries, use different online translation memories like glosbe and others).

Xlsx, docx and ibex (proprietary InterpretBank format) files can be imported easily, and unlike the former InterpretBank, I don’t have to change the display settings any more in order to have all my five languages displayed. Great improvement! Apart from the terms in five languages, you can import an additional “info” field and a link related to each language as well as a “bloc note”, which refers to the complete entry.

Data storage and sharing

All glossaries are saved on your Windows or Mac computer in a unique database. I haven’t tested the synchronization between desktop and laptop, which is done via Dropbox or any other shared folder. The online sharing function using a simple link worked perfectly fine for me. You just open a glossary, upload it to the secure InterpretBank server, get the link and send it to whomever you like, including yourself. On my Android phone, the plain two-language interface opened smoothly in Firefox. And although I always insist on having more than two languages in my term database, I would say that for mobile access, two languages are perfect, as consecutive interpreting usually happens between two languages back and forth and squeezing more than two languages onto a tiny smartphone screen might not be the easiest thing to do either.

I don’t quite get the idea why I should share this link with colleagues, though. Usually you either have a shared glossary in the first place, with all members of the team editing it and making contributions, or everyone has separate glossaries and there is hardly any need of sharing. If I wanted to share my InterpretBank glossary at all, I would export it and send it via email or copy it into a cloud-based team glossary, so that my colleagues can use it at their convenience.

The terminology in InterpretBank is divided into glossaries and subglossaries. Technically, everything is stored in one single database, “glossary” and “subglossary” just being data fields containing a topic classification and sub-classification. Importing only works glossary by glossary, i.e. I can’t import my own (quite big) database as a whole, converting the topic classification data fields into glossaries and sub-glossaries.
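One possible workaround is to split the master database into one file per topic before importing, so that each file can then be brought in as a separate glossary. The sketch below only illustrates that idea; it assumes the master database has been exported to a CSV file with a column called "topic" (the real column names will of course depend on your own database), and the resulting per-topic files can then be saved as xlsx for import:

```python
# Split a master terminology export into one CSV per topic, so that each file
# can be imported as a separate glossary. Assumption: the export is a CSV with
# a "topic" column - adapt the field name to your own database.
import csv
from collections import defaultdict
from pathlib import Path

def split_by_topic(master_csv="master_terminology.csv", outdir="glossaries", topic_field="topic"):
    rows_by_topic = defaultdict(list)
    with open(master_csv, encoding="utf-8", newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        for row in reader:
            rows_by_topic[row.get(topic_field) or "misc"].append(row)
    Path(outdir).mkdir(exist_ok=True)
    for topic, rows in rows_by_topic.items():
        safe_name = "".join(c if c.isalnum() else "_" for c in topic)
        with open(Path(outdir) / f"{safe_name}.csv", "w", encoding="utf-8", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)

if __name__ == "__main__":
    split_by_topic()
```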

Glossary creation

After having imported an existing glossary, I now create a new one from scratch (about cars). In edit mode, with the display set to two languages only, InterpretBank will look up translations in online translation memories for you. All you have to do is press F1 or use the right mouse button; or, if you prefer, the search is done automatically when you press the tab key to jump from one language field to the next – empty – one. When I tried “Pleuelstange” (German for connecting rod), no Spanish translation could be found. But upon my second try, “Kotflügel” (German for mudguard), the Spanish “guardabarros” was found in MEDIAWIKI.

By pressing F2, or right-clicking on the term you want a translation for, you can also search your pre-selected online resources for translations and definitions. If, however, all your language fields are filled and you only want to double-check, or think that what is in your glossary isn’t correct, the program will tell you that nothing is missing and therefore no online search can be made. Looking up terminology in several online sources in one go is something many a tool has tried to make possible. My favourite so far being http://sb.qtrans.de, I must say that I quite like the way InterpretBank displays the online search results. It will open one single browser tab (not ten or twenty) where you can select the different sources to see the search results.

The functions for collecting reference texts on specific topics and extracting relevant terminology haven’t yet been integrated into InterpretBank (but, as Claudio assured me, will be in the autumn). However, the functions are already available in a separate tool named TranslatorBank (so far for German, English, French and Italian).

Quick lookup function for the booth

While searching in „normal“ edit mode is accent and case sensitive, in conference mode (headset icon) it is intuitive and hardly demands any attention. The incremental search function will narrow down the hit list with every additional letter you type. And there are many options to customize the behaviour of the search function. Actually, the „search parameters panel“ says it all: Would you like to search in all languages or just your main language? Hit enter or not to start your query? If not, how many seconds would you like the system to wait until it starts a fresh query? Ignore accents or not? Correct typos? Search in all glossaries if nothing can be found in the current one? Most probably very useful in the booth.

When toying around with the search function, I didn’t find my typos corrected, at least not that I was aware of. When typing „gardient“ I would have thought that the system corrected it into „gradient“, which it didn’t. However, when I typed „blok“, the system deleted the last letter and returned all the terms containing „block“. Very helpful indeed.
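Just to illustrate how such a fallback could work under the hood – this is not InterpretBank’s actual algorithm, merely a minimal sketch of the behaviour described above: if the plain substring search comes up empty, the query is trimmed from the right („blok“ → „blo“), and as a last resort it is fuzzy-matched against the glossary:

```python
# Minimal illustration of a typo-tolerant glossary lookup (not InterpretBank's actual code):
# 1) plain substring search, 2) trim the query from the right ("blok" -> "blo"),
# 3) fall back to fuzzy matching on whole terms.
import difflib

def lookup(query, terms):
    q = query.lower()
    hits = [t for t in terms if q in t.lower()]
    while not hits and len(q) > 2:                      # fallback 1: shorten the query
        q = q[:-1]
        hits = [t for t in terms if q in t.lower()]
    if not hits:                                        # fallback 2: fuzzy-match the full query
        hits = difflib.get_close_matches(query.lower(), terms, n=5, cutoff=0.6)
    return hits

glossary = ["cylinder block", "engine block", "gradient", "crankshaft"]
print(lookup("blok", glossary))      # -> ['cylinder block', 'engine block']
print(lookup("gardient", glossary))  # -> ['gradient']
```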

In order to figure out how the system automatically referred to IATE when no results were found in my own database, I entered „Bruttoinlandsprodukt“ (gross domestic product in German). At first, the system froze (in shock?), but then the IATE search result appeared in four of my five languages in the list, as Dutch isn’t supported and would have to be bought separately. At least I suppose it was the IATE result, as the source wasn’t indicated anywhere and it just looked like a normal glossary entry.

Queries in different web sources by hitting F2 also work in booth mode, just as described above for edit mode. The automatic translation (F1) only works in a two-language display, which in turn can only be set in edit mode.

Memorize new terms

The memorizing function, in my view, hasn’t changed too much, which is good because I like it the way it was before. The only change I have noticed is that it will now let you memorize terms in all your languages and doesn’t only work with language pairs. I like it!

Summary

All in all, in my view InterpretBank remains number one in sophistication among the terminology tools made for (and mostly by) conference interpreters. None of the other tools I am aware of covers such a wide range of an interpreter’s workflow. I would actually not call it a terminology management tool, but a conference preparation tool.

The changes aren’t as drastic as I would have expected after reading the announcement, which isn’t necessarily a bad thing, the old InterpretBank not having been completely user-unfriendly in the first place. But the user interface has indeed become more intuitive and I found my way around more easily.

The new online look-up elements are very relevant, and they work swiftly. Handling more than two languages has become easier, so as long as you don’t want to work with more than five languages in total, you should be fine. If it weren’t for the flexibility of a generic database like MS Access and the many additional data fields I have grown very fond of, like client, date and name of the conference, or degree of importance, I would seriously consider becoming an InterpretBank user. But then, even if one prefers keeping one’s master terminology database in a different format, thanks to the export function InterpretBank could still be used for conference preparation and booth work „only“.

Finally, what with online team glossaries becoming common practice, I hope to see a browser-based InterpretBank 5 in the future!

PS: One detail worth mentioning is the log file InterpretBank saves for you if you tell it to. Here you can see all the changes and queries made, which I find a nice thing not only for research purposes, but also to do a personal follow-up after a conference (or before the next conference of the same kind) and see which were the terms that kept my mind busy. Used properly, this log file could serve to close the circle of knowledge management.
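If you do switch the log on, a short script is all it takes for such a follow-up. The sketch below is purely hypothetical as far as the file format is concerned – it simply assumes a plain text log with one query per line and counts which terms were looked up most often; the parsing would have to be adapted to whatever InterpretBank actually writes:

```python
# Count the most frequent queries in a (hypothetical) plain-text log file.
# Assumption: one query per line - adapt the parsing to the actual log format.
from collections import Counter

def most_queried(logfile="interpretbank_log.txt", top=15):
    with open(logfile, encoding="utf-8") as f:
        queries = [line.strip().lower() for line in f if line.strip()]
    return Counter(queries).most_common(top)

if __name__ == "__main__":
    for term, count in most_queried():
        print(f"{count:>3}  {term}")
```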

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Zeit sparen bei der Videovorbereitung | How to save time when preparing video speeches

Videos als Vorbereitungsmaterial für eine Konferenz haben unzählige Vorteile. Der einzige Nachteil: Sie sind ein Zeitfresser. Man kann nicht wie bei einem schriftlichen Text das Ganze überfliegen und wichtige Stellen vertiefen, markieren und hineinkritzeln. Zum Glück hat Alex Drechsel sich in einem Blogbeitrag dazu Gedanken gemacht und ein paar Tools ausgegraben, die dem Elend ein Ende bereiten. Danke, Alex 🙂

Please enjoy Alex Drechsel’s blog post on how to make preparing video speeches less of a hassle.

Can computers outperform human interpreters?

Unlike many people in the translation industry, I like to imagine that one day computers will be able to interpret simultaneously between two languages just as well as or better than human interpreters do, what with artificial neural networks and their pattern-based learning. After all, once hardware capacity allows for it, an artificial neural network will be able to hear and process many more instances of spoken language and the underlying content than my tiny brain will in all its lifetime. So it may recognise and understand the weirdest accents and the most complicated matter just because of the sheer amount of information it has processed before and the vast ontologies it can rely on (and by that time, we will most probably not only be able to use digital interpreters, but also digital speakers).

The more relevant question by then might rather be if or when people will want to have digital interpretation (or digital speakers in the first place). How would I feel about being replaced by a machine interpreter, people often ask me over a cup of coffee during the break. Actually, the more I think about it, the more I realise that in some cases I would be happy to be replaced by a machine. And it is good old Friedemann Schulz von Thun whom I find particularly helpful when it comes to explaining when exactly machine interpreters might outperform (out-communicate, so to speak) us humans (or machine speakers outperform human speakers).

As Friedemann Schulz von Thun already put it back in 1981 in his four sides model (https://en.wikipedia.org/wiki/Four-sides_model), communication happens on four levels:

The matter layer contains the factual content of the message: data, facts and statements.

In the self-revealing or self-disclosure layer, the speaker – consciously or unintentionally – reveals something about himself, his motives, values, emotions etc.

The relationship layer expresses – and is read for – how the sender relates to the receiver and what he thinks of him.

The appeal layer contains the desire, advice, instruction and effect that the speaker is seeking.

We both listen and speak on those four layers, be it on purpose or inadvertently. But what does that mean for interpretation?

In terms of technical subject matter, machine interpretation may well be superior to humans, whose knowledge base, despite best efforts, will always cover a relatively small part of the world’s knowledge. Some highly technical conferences consist of long series of one-directional speeches given just for the sake of it, at breakneck pace and with no personal interaction whatsoever. When the original offers few „personal“ elements of communication (i.e. layers 2 to 4) in the first place, rendering a vivid and communicative interpretation into the target language can be beyond what human interpretation is able to provide. In these cases, publishing the manuscript or a video might serve the purpose just as well, even more so in the future with increasing acceptance of remote communication. And if a purely „mechanical“ translation is what is actually needed and no human element is required, machine interpreting might do the job just as well or even better. The same goes e.g. for discussions of logistics (“At what time are you arriving at the airport?”) or other practical arrangements.

But what about the three other, more personal and emotional layers? When speakers reveal something about themselves and listeners want to find out about the other person’s motives, emotions and values or about what one thinks of the other, and it is crucial to read the message between the lines, gestures and facial expressions? When the point of a meeting is to build trust and understanding and, consequently, create a relationship? Face to face meetings are held instead of phone calls or video conferences in order to facilitate personal connections and a collective experience to build upon in future cooperation (which then may work perfectly well via remote communication on more practical or factual subjects). There are also meetings where the most important function is the appeal. The intention of sales or incentive events generally is to have a positive effect on the audience, to motivate or inspire them.

Would these three layers of communication, which very much involve the human nature of both speakers and listeners, work better with a human or a machine interpreter in between? Is a human interpreter better suited to read and convey personality and feelings, and will human interaction between persons work better with a human intermediary, i.e. a person? Customers might find a non-human interpreter more convenient, as the interpreter’s personality does not interfere with the personal relation of speaker and listener (but obviously does not provide any facilitation either). This “neutral” interpreting solution could be all the more charming if it didn’t happen orally, but translation was provided in writing, just like subtitles. This would allow the voice of the original speaker to set the tone. However, when it comes to the „unspoken“ messages, the added value of interpreters is in their name: they interpret what is being said and constantly ask the question „What does the speaker mean or want?“ Mood, mocking, irony, references to the current situation or persons present etc. will most probably not be understood (or translated) by machines for a long time, or maybe never at all. But I would rather leave it to philosophers or sci-fi people to answer this question.

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.

Impressions from Translating and the Computer 38

The 38th ‚Translating and the Computer‘ conference in London has just finished, and, as always, I take home a lot of inspiration. Here are my personal highlights:

  • Sketch Engine, a language corpus management and query system, offers loads of useful functions for conference preparation, like web-based (actually Bing-based) corpus-building, term extraction (the extraction results come with links to the corresponding text, and the lists are exportable to common, reusable formats) and thesaurus-building. The one thing I liked most was the fact that if, for example, your clients have their websites in several languages, you can enter the URLs of the different language versions and SketchEngine will download them, so that you can then use the texts as a corpus (see the small download sketch after this list). You might hear more about SketchEngine from me soon …
  • XTM, a translation memory system, offers parallel text alignment (like many others do) with the option of exporting the aligned texts into xls. This finally makes them reusable for those many interpreting colleagues who, for obvious reasons, do not have any translation memory system. And the best thing is, you can even re-export an amended version of this file back into the translation memory system for your translator colleagues to use. So if you interpret a meeting where a written agreement is being discussed in several language versions, you can provide the translators first hand with the amendments made in the meeting.
  • SDL Trados now offers an API and has an App Store. New hope for an interpreter-friendly user interface!
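On the corpus-building point from the first bullet above: if you want to prepare the raw material yourself before loading it into a corpus tool, a small script can fetch the different language versions of a client’s website and boil them down to plain text. This is only a rough sketch with made-up example URLs, using the third-party requests and beautifulsoup4 packages:

```python
# Download the language versions of a (fictitious) client website and save them
# as plain text files that can later be loaded into a corpus tool.
# Requires third-party packages: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

URLS = {  # example URLs only - replace with the client's real language versions
    "en": "https://www.example.com/en/",
    "de": "https://www.example.com/de/",
    "es": "https://www.example.com/es/",
}

def save_as_text(lang, url):
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()                      # drop code and styling, keep visible text
    text = "\n".join(line.strip() for line in soup.get_text().splitlines() if line.strip())
    with open(f"corpus_{lang}.txt", "w", encoding="utf-8") as f:
        f.write(text)

for lang, url in URLS.items():
    save_as_text(lang, url)
```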

All in all, my theory that you just have to wait long enough for the language technology companies to develop something that suits conference interpreters‘ needs seems to be materialising eventually. Also, scientists and software providers alike were keen to stress that they really want to work with translators and interpreters in order to find out what they really need. The difficulty with conference interpreters seems to be that we are a very heterogeneous community with very different needs and preferences.

And then I had the honour to run a workshop on interpreters‘ workflows and fees in the digital era (for some background information you may refer to The future of Interpreting & Translating – Professional Precariat or Digital Elite?). The idea was to go beyond the usual „digitalisation spoils prices and hampers continuous working relations“ and rather find ways to use digitalisation to our benefit and to boost good working relationships, quality and profitability. I was very happy to get some valuable input from practitioners as well as from several organisations‘ language services and from scientists. What I took away were two main ideas: interface-building and quality rating.

Interface-building: By cooperating with the translation or documentation departments of companies and organisations, quality and efficiency could be improved on both sides (translators providing extremely valuable and well-structured input for conference preparation and interpreters reporting back „from the field“). Which brings me back to the aforementioned positive outlook on the software side.

Quality rating: I noticed a contradiction which has never been so clear to me before. While we interpreters go on about clients having to value the high level of service we provide and about wanting to be paid well for quality, quality rating and evaluation is still a subject that is largely avoided and that many of us feel uncomfortable with. On the other hand, some kind of quality rating is something clients are sometimes forced to rely on in order to justify paying for that (supposedly) expensive interpreter. I have no perfect solution for this, but I think it is worth some further thinking.

In general, there was a certain agreement that formalising interpreters‘ preparation work has its limitations – it is always about filling the very personal knowledge gaps of the individual (for a very particular conference setting) – but that technologies can still be used to improve quality and keep up with the rapidly growing knowledge landscape around us.

————————–

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.


Booth-friendly terminology management: Glossarmanager.de

Believe it or not, only a few weeks ago I came across just another booth-friendly terminology management program (or rather it was kindly brought to my attention by a student when I talked about the subject at Heidelberg University). It has been around since 2008 and completely escaped my attention. So I am all the happier to present today yet another player on the scene of interpreter-friendly terminology management tools:

Glossarmanager by Glossarmanager GbR/Frank Brempel (Bonn, Germany)

As the name suggests, in Glossarmanager terms are organised in different glossaries, each glossary including the data fields language 1 („Sprache 1“), language 2 („Sprache 2“), synonym, antonym, picture and comment. The number of working languages in each glossary is limited to two (or three if you decide to use the synonyms column for a third language). Each glossary can also be subdivided into chapters („Kapitel“).

Glossarmanager GlossarEdit

You may import and export rtf, csv and txt files, so basically anything that formerly was a text or table/spreadsheet document, and the import function is very user-friendly (it lets you insert the new data into an existing glossary and checks new entries against existing ones, or create a new glossary).

The vocab training module requires you to type in the terms and is very unforgiving, so each typo or other deviation from what is written in the database counts as a mistake. But if you are not put off by the nasty comments („That was rubbish“, „Please concentrate!“) or the even nastier learning record, you may well use this trainer as a mental memorising tool without typing the required terms.

The search module comes as a small window which, if you want it to, always stays in the foreground. Entering search terms is intuitive and mouse-free, and the results can be filtered by language pairs, glossaries and authors. Ignoring special characters like ü, è, ß etc. as well as case-sensitive search can be activated. Right under the hit list, Glossarmanager provides (customisable) links to online resources for further searching.

Glossarmanager Suche

Available for Windows

Cost: Free of charge (download here and use the free licence key)

————————–

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.


Why not listen to football commentary in several languages? | Fußballspiele mehrsprachig verfolgen

+++ for English, see below +++ para español, aún más abajo +++

Ich weiß nicht, wie es Euch ergeht, aber wenn ich im Fernsehen ein Fußballspiel verfolge, frage ich mich mitunter, was jetzt wohl der Kommentator der anderen Mannschaft bzw. Nationalität dazu gerade sagt. Die Idee, (alternative) Fußballkommentare über das Internet zu streamen, hat sich zumindest im deutschsprachigen Raum bislang noch nicht so recht durchgesetzt, www.marcel-ist-reif.de hat den Dienst zumindest wieder eingestellt. In Spanien ist man da mit Tiempo de Juego besser dran, und dann gibt es noch talksport.com, was ich noch nicht ausprobiert habe. Aber eine andere durchaus praktikable Alternative, um simultan neben der eigentlichen TV-Übertragung andere Kommentare aus nahezu aller Herren Länder zu hören, bietet ja das Smartphone (wer auch sonst) mit einer entsprechenden Radio-App (mein Favorit für Android ist Radio.fm). So habe ich dann kürzlich beim EM-Gruppenspiel Schweden gegen Belgien einfach über den Kopfhörer parallel zur deutschen TV-Übertragung im belgischen Radio mitgehört – und das war durchaus amüsant! Zeitweise war ich mir nicht sicher, ob der deutsche und der belgische Kommentator das gleiche Spiel sahen, aber die Hintergrundgeräusche aus dem Stadion (im Radio immer minimal verzögert) waren beruhigenderweise identisch. Also eine echte Empfehlung nicht nur für die, die sich nicht entscheiden können, zu welcher Mannschaft sie halten sollen. Und nun hoffe ich natürlich umso mehr auf ein Viertelfinale Deutschland – Spanien!

+++ English version +++

I don’t know about you, but when I watch football on TV, I often wonder what the other team’s/country’s commentator might be saying right now. If you want to listen to Spanish commentary in parallel, you are lucky, as there is Tiempo de Juego streaming football commentary via browser, Android and iOS app. There is http://talksport.com in English, which I have not tried yet. In Germany, however, the idea of streaming (alternative) football commentary over the internet has not quite made it so far (www.marcel-ist-reif.de have given up apparently), but another way of listening in to other commentaries of almost any country in the world is (guess what!) the good old smartphone, with those many radio streaming apps available. My favourite one for Android is Radio.fm, and I have just lately tuned into the Belgian radio while watching the European Championship match Sweden vs. Belgium on German TV. It was both fun and interesting, really. Sometimes I was not sure whether they were talking about the same match, but the background noise from the stadium (slightly delayed over the radio) told me they were indeed. So it is really worth a try, especially for those who cannot decide which team to support. And I’m now hoping all the more for a quarter final between Germany and Spain!

+++ versión española +++

No sé ustedes, pero yo, cuando veo un partido de fútbol en la tele, a veces me pregunto qué estará diciendo el comentarista del otro bando en ese momento. Esta idea de facilitar comentarios audio por medio de internet, casi no se usa en Alemania (en www.marcel-ist-reif.de ya dejaron de ofrecerlo). En España, ya van bastante mejor, con Tiempo de Juego se pueden escuchar los comentarios a través del navegador o usando una app (Android y iOS). En inglés existe talksport.com, todavía me falta probarlo. Pero también existe otra posibilidad muy práctica para escuchar comentarios de fútbol de casi todas partes del mundo y es (adivinen qué) el smartphone que con sus tantas apps transmite por internet un sinfín de radioemisoras del mundo entero (mi app favorita es Radio.fm). De este modo, hace poco, viendo el partido de la Eurocopa entre Suecia y Bélgica en la televisión, por medio de mi celular y los auriculares escuché en paralelo a los comentaristas de una radioemisora belga y me resultó súper divertido. A veces me surgían dudas de si realmente los comentaristas belgas y alemanes estaban viendo el mismo partido, pero el ruido del estadio era idéntico (con un pequeño desfase en la transmisión por radio), así que… todo bien. Realmente lo recomiendo, no sólo para aquellos que no saben a qué selección apoyar. ¡Y por ahora espero aún más los cuartos de final entre España y Alemania!

Airtable.com – a great replacement for Google Sheets | tolle Alternative zu Google Sheets

+++ for English see below +++

Mit der Terminologieverwaltung meiner Träume muss man alles können: Daten teilen, auf allen Geräten nutzen und online wie offline darauf zugreifen (wie mit Interpreters’ Help/Boothmate für Mac oder auch Google Sheets), möglichst unbedenklich Firmenterminologie und Hintergrundinfos des Kunden dort speichern (wie bei Interpreters’ Help), sortieren und filtern (wie in MS Access, MS Excel, Lookup, InterpretBank, Termbase und anderen), individuelle Voreinstellungen wie Abfragen und Standardwerte festlegen (wie in MS Access) und, ganz wichtig: den Terminologiebestand so durchsuchen, dass es kaum Aufmerksamkeit kostet, also blind tippend und ohne Maus, eine inkrementelle Suche, die sich nicht darum schert, ob ich “rinon” oder “riñón” eingebe, und mir so oder so sagt, dass das Ding auf Deutsch Niere heißt, möglichst in Form einer gut lesbaren Trefferliste (wie Interplex und InterpretBank es tun).

Airtable, eine gelungene Mischung aus Tabellenkalkulation und Datenbank, kommt der Sache ziemlich nah. Es ist sehr intuitiv in der Handhabung und sieht einfach gut aus. Das Sortieren und Filtern geht sehr leicht von der Hand, man kann jedem Datensatz Bilder, Dateianhänge und Links hinzufügen und unterschiedliche Abfragen („Views“) von Teilbeständen der Terminologie (etwa für einen bestimmten Kunden, ein Thema, eine bestimmte Veranstaltungsart oder eine Kombination aus allem) definieren und auch Standardwerte für bestimmte Felder festlegen, damit man z. B. den Kundennamen, die Konferenzbezeichnung und das Thema nicht jedes Mal neu eingeben muss. Die Detailansicht, die aufpoppt, wenn man auf eine Zeile klickt, ist auch super. Eigene Tabellen lassen sich in Nullkommanix per Drag & Drop einfügen oder importieren. Und im Übrigen gibt es eine Menge nützlicher Tastenkombinationen.

Teamglossare (oder was auch immer) können von verschiedenen Personen über die iPad-, iPhone- oder Android-(Beta)-App oder die Browseroberfläche bearbeitet werden. Allerdings können bei Zugriff über den Browser die Daten nicht offline bearbeitet und später online synchronisiert werden. Das funktioniert nur über die mobile App. Die Daten werden bei der Übermittlung und Speicherung verschlüsselt.

Nur eine Sache vermisse ich bei Airtable schmerzlich, nämlich die oben beschriebene intuitive, akzent-ignorierende Suchfunktion, die ihre Fundstücke in einer Trefferliste präsentiert, statt mich von Suchergebnis zu Suchergebnis hüpfen zu lassen. Ansonsten aber eine wahrhaft schnuckelige Datenbankanwendung, nicht nur für Terminologie!

Airtable ist kostenlos, solange jede Tabelle nicht mehr als 1500 Zeilen umfasst. Für bis zu 5000 Zeilen bezahlt man 12 $ monatlich und für bis zu 50 000 Zeilen 24 $.

Übrigens: Eine Übersicht von am Markt verfügbaren Terminologieverwaltungsprogrammen für Dolmetscher findet sich hier.

————————–

Über die Autorin:
Anja Rütten ist freiberufliche Konferenzdolmetscherin für Deutsch (A), Spanisch (B), Englisch (C) und Französisch (C) in Düsseldorf. Sie widmet sich seit Mitte der 1990er dem Wissensmanagement.

+++ English version +++

My perfect terminology database must be shareable, portable and accessible both on and off line (like Interpreters’ Help/Boothmate for Mac and also Google Sheets) but at the same time trustworthy to the point that companies feel comfortable having their terminology stored there (like Interpreters’ Help), sortable and filterable (like MS Access, MS Excel, Lookup, InterpretBank, Termbase and others), customisable with pre-defined views and default values (like in MS Access) and, very importantly, searchable in a way that requires almost no attention – meaning a mouse-free, incremental search function that does not care whether I type “rinon” or “riñón” and tells me that it is kidney in English either way (like Interplex and InterpretBank do), if possible in an easy-to-read hit list.

Airtable, a mix of spreadsheet and database, seems to get very close to it. It is very intuitive to handle and, even more so, it looks just nice and friendly. It has very comfortable sorting and filtering, you can add pictures, links and files, define different views of subsets of your data (like for a specific customer, particular subject area, type of conference or a combination thereof) and set default values so that, while working at a given conference, you don’t need to type the conference name, customer and subject area time and again when entering new terms. And the detailed view of each data set popping up at one click or tap is just lovely. You can import or drag and drop your tables in no time. And Airtable has loads of useful keyboard shortcuts, by the way.

Team glossaries (or anything else) can be worked on by several people and accessed via an iPad, iPhone and Android (beta) app or the browser-based interface, although, when using the browser interface, there is no way to edit your data offline and update the online version later. This works on the mobile apps only. Data being transferred back and forth as well as stored data are encrypted.

The one thing I miss most on Airtable is an intuitive, accent-ignoring search function as described above, which displays hit lists instead of jumping from one search hit to the next. But apart from that, Airtable is just great for data management, not only in terms of „terms“.
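Just to illustrate what such an accent-ignoring search would do: it strips the diacritics from both the query and the stored terms before comparing them, so that “rinon” finds “riñón” (and its equivalents) all the same. Below is a minimal sketch of the idea in Python – it has nothing to do with Airtable’s own search, it merely shows the kind of behaviour I am missing:

```python
# Minimal sketch of an accent-ignoring search over a small term list
# (illustration only - this is not how Airtable's built-in search works).
import unicodedata

def fold(s):
    """Lower-case and strip diacritics: 'Riñón' -> 'rinon'."""
    decomposed = unicodedata.normalize("NFKD", s.lower())
    return "".join(c for c in decomposed if not unicodedata.combining(c))

def search(query, entries):
    q = fold(query)
    # return every entry in which any language field contains the folded query
    return [e for e in entries if any(q in fold(value) for value in e.values())]

glossary = [
    {"es": "riñón", "en": "kidney", "de": "Niere"},
    {"es": "hígado", "en": "liver", "de": "Leber"},
]
print(search("rinon", glossary))  # -> [{'es': 'riñón', 'en': 'kidney', 'de': 'Niere'}]
```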

It is free of charge as long as your tables don’t have more than 1500 lines, costs 12 $ per month for up to 5000 lines per database and 24 $ for up to 50 000 lines per database.

If you need an overview of available terminology management tools for conference interpreters, click here.

————————–

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.


Dictation Software instead of Term Extraction? | Diktiersoftware als Termextraktion für Dolmetscher?

+++ for English see below +++

Als neulich mein Arzt bei unserem Beratungsgespräch munter seine Gedanken dem Computer diktierte, anstatt zu tippen, kam mir die Frage in den Sinn: „Warum mache ich das eigentlich nicht?“ Es folgte eine kurze Fachsimpelei zum Thema Diktierprogramme, und kaum zu Hause, musste ich das natürlich auch gleich ausprobieren. Das High-End-Produkt Dragon Naturally Speaking, von dem mein Arzt schwärmte, wollte ich mir dann aber doch nicht gleich gönnen. Das muss doch auch mit Windows gehen und mit dem im Notebook eingebauten Raummikrofon, dachte ich mir (haha) … Eingerichtet war auch alles in Nullkommanix (unter Windows 10 auf Start klicken, den Menüpunkt „Erleichterte Bedienung“ suchen, „Windowsspracherkennung“ auswählen) und los ging’s. Beim ersten Start durchläuft man zunächst ein kurzes Lernprogramm, das die Stimme kennenlernt.

Und dann konnte es auch schon losgehen mit dem eingebauten Diktiergerät, zunächst testhalber in Microsoft Word. Von den ersten zwei Spracheingaben war ich auch noch einigermaßen beeindruckt, aber schon bei „Desoxyribonukleinsäure“ zerplatzten alle meine Träume. Hier meine ersten Diktierproben mit ein paar gängigen Ausdrücken aus dem Dolmetschalltag:

– 12345
– Automobilzulieferer
– Besserungszeremonien Kline sollte es auch viel wie Wohnen Nucleinsäuren für das (Desoxyribonukleinsäure)
– Beste Rock Siri Wohnung Klee ihnen sollte noch in Welle (Desoxyribonukleinsäure)
– Verlustvortrag
– Rechnungsabgrenzungsposten
– Vorrats Datenspeicherung
– Noch Händewellenlänge (Nockenwelle)
– Keilriemen
– Brennstoffzellen Fahrzeuge

Gar nicht schlecht. Aber so ganz das Spracherkennungswunder war das nun noch nicht. In meiner Phantasie hatte ich mich nämlich in der Dolmetschvorbereitung Texte und Präsentationen entspannt lesen und dabei alle Termini und Zusammenhänge, die ich im Nachgang recherchieren wollte, in eine hübsche Tabelle diktieren sehen.  Aber dazu musste dann wohl etwas „Richtiges“ her, wahrscheinlich zunächst einmal ein gescheites Mikrofon.

Also setzte ich mich dann doch mit der allseits gepriesenen Diktiersoftware Dragon Naturally Speaking auseinander, chattete mit dem Support und prüfte alle Optionen. Für 99 EUR unterstützt die Home-Edition nur die gewählte Sprache. Die Premium-Version für 169 EUR unterstützt die gewählte Sprache und auch Englisch. Ist die gewählte Sprache Englisch, gibt es nur Englisch. Möchte ich mit Deutsch, Spanisch, Englisch und womöglich noch meiner zweiten C-Sprache Französisch arbeiten, wird es also erstens kompliziert und zweitens teuer. Also verwarf ich das ganze Thema erst einmal, bis wenige Tage später in einem völlig anderen Zusammenhang unsere liebe Kollegin Fee Engemann erwähnte, dass sie mit Dragon arbeite. Da wurde ich natürlich hellhörig und habe es mir dann doch nicht nehmen lassen, sie für mich und Euch ein bisschen nach ihrer Erfahrung mit Spracherkennungssoftware auszuhorchen:


Fee Engemann im Interview am 19. Februar 2016

Wie ist die Qualität der Spracherkennung bei Dragon Naturally Speaking?

Erstaunlich gut. Das Programm lernt die Stimme und Sprechweise kennen und man kann ihm auch neue Wörter „beibringen“, oder es liest über sein „Lerncenter“ ganze Dateien aus. Man kann auch Wörter buchstabieren, wenn das System gar nichts mehr versteht.

Wozu benutzt Du Dragon?

Ich benutze es manchmal als OCR-Ersatz, wenn eine Übersetzungsvorlage nicht maschinenlesbar ist. Das hat den Vorteil, dass man gleich den Text einmal komplett gelesen hat.

In der Dolmetschvorbereitung diktiere ich meine Terminologie in eine Liste, die ich dann nachher durch die Begriffe in der anderen Sprache ergänze. Das funktioniert in Word und auch in Excel. Falls es Schwierigkeiten gibt, liegt das evtl. daran, dass sich die Kompatibilitätsmodule für ein bestimmtes Programm deaktiviert haben. Ein Besuch auf der Website des technischen Supports schafft hier Abhilfe. Für Zeilenumbrüche und viele andere Befehle gibt es entsprechende Sprachkommandos. Wenn man das Programm per Post bestellt und nicht als Download, ist sogar eine Übersicht mit den wichtigsten Befehlen dabei – so wie auch ein Headset, das für meine Zwecke völlig ausreichend ist. Die Hotline ist im Übrigen auch super.

Gibt es Nachteile?

Wenn ich einen Tag lang gedolmetscht habe, habe ich danach manchmal keine Lust mehr, mit meinem Computer auch noch zu sprechen. Dann arbeite ich auf herkömmliche Art.

Wenn man in unterschiedlichen Sprachen arbeitet, muss man für jede Sprache ein neues Profil anlegen und zwischen diesen Profilen wechseln. Je nach Sprachenvielfalt in der Kombination könnte das lästig werden.


Mein Fazit: Das hört sich alles wirklich sehr vielversprechend an. Das größte Problem für uns Dolmetscher scheint – ähnlich wie bei der Generierung von Audiodateien, also dem umgekehrten Weg – das Hin und Her zwischen den Sprachen zu sein. Wenn jemand von Euch dazu Tipps und Erfahrungen hat, freue ich mich sehr über Kommentare – vielleicht wird es ja doch noch was mit der Terminologieextraktion per Stimme!

Über die Autorin:
Anja Rütten ist freiberufliche Konferenzdolmetscherin für Deutsch (A), Spanisch (B), Englisch (C) und Französisch (C) in Düsseldorf. Sie widmet sich seit Mitte der 1990er dem Wissensmanagement.

+++ English version +++

The other day, when I was talking to my GP and saw him dictate his thoughts to his computer instead of typing them in, I suddenly wondered why I was not using such a tool myself when preparing for an interpreting assignment. So I asked him about the system and, back home, went to try it myself straight away – although what I was planning was not to buy the high-end dictation program Dragon Naturally Speaking he had recommended, but to go for the built-in Windows speech recognition function and the equally built-in microphone of my laptop computer (bad idea) … The speech recognition module under Windows 10 was activated in no time (go to the Start menu, select „Ease of Access > Speech Recognition“) and off I went.

When the voice recognition function is first started, it takes you through a short learning routine in order to familiarise itself with your voice. After that, my Windows built-in dictation device was ready. For a start, I tried it in Microsoft Word. I found the first results rather impressive, but when it came to „Desoxyribonukleinsäure“ (deoxyribonucleic acid), I was completely disillusioned. See for yourselves the results of my first voice recognition test with some of the usual expressions from the daily life of any conference interpreter:

– 12345
– Automobilzulieferer
– Besserungszeremonien Kline sollte es auch viel wie Wohnen Nucleinsäuren für das (Desoxyribonukleinsäure)
– Beste Rock Siri Wohnung Klee ihnen sollte noch in Welle (Desoxyribonukleinsäure)
– Verlustvortrag
– Rechnungsabgrenzungsposten
– Vorrats Datenspeicherung
– Noch Händewellenlänge (Nockenwelle)
– Keilriemen
– Brennstoffzellen Fahrzeuge

Not bad for a start – but not quite the miracle of voice recognition I would need in order to live this dream of dictating terminology into a list on my computer while reading documents to prepare for an interpreting assignment. Something decent was what I needed, probably a decent microphone, for a start.

So I enquired about the famous dictation software Dragon Naturally Speaking, chatted with one of the support people and checked the options. For 99 EUR, Dragon’s Home Edition only supports one language. The Premium Edition for 169 EUR supports one selected language plus English (If you choose English when buying the software, it is English-only.)  If I want German, Spanish, English and possibly also my second C-language, French, it gets both complicated and expensive. So I discarded the whole idea until, only a few days later, our dear colleague Fee Engemann happened to mention to me – in a completely different context – that she actually worked with Dragon! I was all ears and spontaneously asked her if she would like to share some of her experience with us in an interview. Luckily, she accepted!


Interview with Fee Engemann February 19th, 2016

What is the voice recognition quality of Dragon Naturally Speaking like?

Surprisingly good. The program familiarises itself with your voice and speech patterns, and you can also „teach“ it new words, or let it read loads of new words from entire files. You can also spell words in case the system does not understand you at all.

What do you use Dragon for?

I use it as an OCR substitute when I get a text to translate which is not machine-readable. The big advantage is that once you have done that, you know the entire text.

When preparing for an interpreting assignment, I dictate my terminology into a list and add the equivalent terms in the other language once I have finished reading the texts. That works in MS-Word and MS-Excel. If there are problems, this may be due to the compatibility module for a certain program being deactivated. The technical support website can help in this case. There are special commands for line breaks and the like. And if you order the software on a CD (instead of simply downloading it), your parcel will not only include a list with the most important commands, but also a headset, which is absolutely sufficient for my purpose. And by the way … the hotline is great, too.

Are there any downsides?

After a whole day of interpreting, I sometimes don’t feel like talking to my computer. In this case, I simply work the traditional way.

When working with several languages, you must create one profile per language and switch between them when switching languages. This may be quite cumbersome if you work with many different languages.


My personal conclusion is that this all sounds very promising. As always, our problem as conference interpreters with these technologies (just like when creating multilingual audio files, i.e. the other way around) seems to be the constant changing back and forth between languages. If any of my readers has experience or good advice to share, I will be happy to read about it in the comments – maybe voice-based term extraction is not that far away after all!
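For anyone who would like to experiment with voice-based term collection without buying dedicated dictation software, here is a rough sketch in Python using the third-party SpeechRecognition package and Google’s free web recogniser – my own assumption of how one could try this out, not anything the tools discussed above actually do:

```python
# Experimental sketch: dictate term candidates one by one and append them to a text file.
# Requires third-party packages: pip install SpeechRecognition pyaudio
import speech_recognition as sr

def dictate_terms(outfile="term_candidates.txt", language="de-DE"):
    recognizer = sr.Recognizer()
    with sr.Microphone() as source, open(outfile, "a", encoding="utf-8") as f:
        recognizer.adjust_for_ambient_noise(source)
        print("Dictate one term at a time (Ctrl+C to stop).")
        while True:
            audio = recognizer.listen(source)
            try:
                term = recognizer.recognize_google(audio, language=language)
                f.write(term + "\n")
                f.flush()
                print("Saved:", term)
            except sr.UnknownValueError:
                print("Sorry, could not understand that - please repeat.")

if __name__ == "__main__":
    dictate_terms()
```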

————————–

About the author:
Anja Rütten is a freelance conference interpreter for German (A), Spanish (B), English (C) and French (C) based in Düsseldorf, Germany. She has specialised in knowledge management since the mid-1990s.