Knowledge Centre on Translation and Interpretation

AI and New Language Technologies

A forum for exchanging knowledge and ideas on the use of technology in interpreting, translation, and training. This is a space to explore initiatives, research, and practices at the intersection of language and technology, bringing together professionals, researchers, and trainers to discuss the benefits and challenges of ongoing developments.
Welcome to the community. We can’t wait to read your thoughts!

Blog

Language
English

I came across this video and thought it might be interesting for this community. It's from 8 months ago, so I guess some details may have evolved since then, but most points still seem relevant. What do you think about the points being put forward? Does anyone have feedback on automated interpretation systems or computer-assisted interpreting in IRL and in training?

All Andrea's points are valid, but the people developing AI interpreting and buying it as a service have other priorities, I fear. Those processes are something we have very little influence on. Computer-assisted interpreting means live transcription and boothmate tools that use the transcription to highlight terminology and numbers. It has potential IRL but its use is limited by the confidentiality issue. CymoNote is Chinese, Microsoft's ASR sends data to the US... either way it's going out of the building, which many clients would not allow.

InterpretBank and TErpMate claim to have solved this now with an offline version of their boothmate tools. Now we should test them to see if they work.

When you have a tool prompting with terminology, it should probably be from a specific glossary prepared in advance for that session. Allowing the computer to look for terminology wherever it likes will lead to mistakes. On the other hand, if you have already prepared a glossary, the terms will arguably already be activated and you won't need a CAI prompter!
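To make the glossary-first approach concrete, here is a minimal sketch of such a prompter (all names hypothetical, not taken from any real CAI tool): it only ever surfaces terms from a glossary prepared in advance for the session, so it cannot hunt for terminology anywhere else.

```python
# Sketch of glossary-restricted term prompting: only terms prepared
# in advance are ever surfaced, never terminology found "wherever
# the computer likes". All names here are illustrative.

def load_glossary(pairs):
    """Map lowercase source terms to their prepared renditions."""
    return {src.lower(): tgt for src, tgt in pairs}

def prompt_terms(transcript_segment, glossary):
    """Return (term, rendition) pairs for glossary hits in one ASR segment."""
    segment = transcript_segment.lower()
    return [(src, tgt) for src, tgt in glossary.items() if src in segment]

glossary = load_glossary([
    ("common agricultural policy", "politique agricole commune"),
    ("quantitative easing", "assouplissement quantitatif"),
])

print(prompt_terms("The Common Agricultural Policy budget was debated.", glossary))
# → [('common agricultural policy', 'politique agricole commune')]
```

A real boothmate tool would of course match on word boundaries, handle inflection and numbers, and update as the transcript streams in; the point here is only that restricting lookup to the prepared glossary keeps the prompts predictable.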

I have tested CymoNote with mixed results. Anyone interested can see the short tests here.

https://www.youtube.com/watch?v=ZIQLi-fZNy4

https://www.youtube.com/watch?v=oS30FrHVSY0

TRAINING: I don't think we should be teaching something that is NOT widely used in the profession yet. Students need to be able to interpret WITHOUT CAI (because CAI will not always be available) and then add CAI to their process after that. Checking a CAI screen is an extra cognitive load which will lead students (and indeed professional interpreters) to interpret less well before it helps them interpret better.

Hi everyone,

As AI tools (from speech recognition to real-time transcription) become more common, we’re interested in exploring how to promote AI literacy among conference interpreters, students, and professionals. Have you:

  • Run workshops or training on AI tools for interpreters?
  • Integrated AI-related modules into interpreting courses?
  • Faced challenges or successes in helping colleagues adapt to these technologies?
  • Used specific resources, platforms, or case studies that worked well?

We’d love to hear about your experiences, tips, and research. Feel free to share knowledge and best practices so we can learn from each other 😊

I think it's valid to promote AI literacy amongst professionals, but far less important to do it with students. 99% of an interpreting course still needs to be about how to actually interpret, because without that no AI tool will help a student.

I would also say that tech literacy could begin "before" getting as far as AI. Many interpreters do not know how to get the most out of PDF readers (annotation, bookmarks, navigation between documents, exporting terminology), nor about the principles underlying glossary-building and the tools available for keeping glossaries. These topics are less sexy and fashionable than AI, but they are easily as useful for interpreters.

The real (and possibly only!) expert amongst interpreters in this field is Josh Goldsmith. He teaches AI and other tech-related stuff at TechforWord.

To answer your questions... "AI-related modules into interpreting courses?" Yes. A brief look at how to make summaries without AI and a comparison of the results with AI-generated summaries... and a couple of other functionalities of NotebookLM and Readwise. My conclusions were 1) there's not that much to "teach". AI is very simple to use. And 2) that it is still essential that students be able to make summaries themselves, even if it takes longer. Knowing how to distill the essence out of something is an important skill. The biggest challenge for the moment is confidentiality. We are simply not allowed to put many of our documents through an AI, and efficiency means having a similar workflow for all meetings, rather than having a different one (involving AI) only for public meetings/documents.

Another big issue with AI is that if we let it do the analysis, thinking, summarising, terminology work for us, then we are not using the parts of our brains that we need most to be interpreters. I've addressed this in this lecture "AI or DIY" https://www.youtube.com/watch?v=o7VkMC0lGWA&t=1s

I've written about AI and glossary-building here https://interpretersoapbox.com/reasons-not-to-automate-glossary-building/

I ran a survey of AI use by interpreters and confidentiality issues here https://interpretersoapbox.com/ai-tools-confidentiality-conference-interpreters-survey/

Hi Andy,

Thank you for your thorough reply and the links. You raise quite a few important points. I agree that before working on AI skills students need to already have a strong foundation in interpretation. I also agree that we shouldn’t focus only on AI. In the end, what we are trying to achieve is that tricky balance of using technology to our advantage, without losing our voices and our abilities in the process.

You mention that the biggest challenge at the moment is confidentiality. Just a suggestion: The EC offers a secure, in-house AI tool that is free to use once you’ve registered. I hope you'll find it useful for such cases :)

Hi Andrea, I'm aware of the tool, yes, thanks. My initial experiments with it were not very promising. I found the results slow and poor. But that is not really the solution for all those students and interpreters who don't have access to it or who work elsewhere, outside the EU institutions! Non-EU clients would be very unlikely to accept that the EU tool was any different to NotebookLM or Claude when it comes to their data... so interpreters would not be able to use it either.

Hi Andy, You’re absolutely right, I was mainly thinking of either EU-specific contexts, or other contexts where clients might be more accepting towards a tool if they knew it had stricter firewalls, data centers located within Europe so data doesn't leave Europe, and a guarantee that data is not stored, reused, or fed into training models. So none of the data will end up in an LLM's training corpus. It's a tool that's available for use outside EU institutions, so it can be used in certain non-EU contexts (when working for public administrations, academia, SMEs, NGOs etc...). That said, of course it's not a perfect solution and private clients are well within their rights to not accept it. I am well aware that no one tool can be fit for all scenarios.

Hi Andrea,

thanks for launching the discussion. As a translator I do not have any relevant experience with AI tools for interpreters, but I just came across an interesting event AI ThoughtCon 2026 by the AI Localization Think Tank. On day 2 there is a session on The New Role of Linguists in the Age of AI Speech that might be interesting for the interpreter community.

Thank you very much for the recommendation, I will look into it :)


Hi all!

Those of you who missed the Research Corner on Teaching Translation in the Age of Generative AI last Thursday now have the chance to view the recording.

It is available here.

Cheers!

Hi! We are all aware that most AI models are biased towards English (usually American English) which has a negative impact on language diversity. Low-resource languages in particular are often overlooked. The European Commission is tackling this challenge by using the multilingual data generated by EU institutions to contribute to European LLMs with the goal of promoting greater language equality. You can find out more here.

Thank you for sharing this project! I would be interested in finding out more about projects working on increasing the multilingualism and multiculturalism of LLMs. Does anyone know of any interesting sources I can explore?

Hello, I am aware of eurollm.io which the University of Lisbon is working on, among others. It's an open-source, multilingual LLM.

Hello everyone!

I thought I would give you a heads-up about the upcoming KCTI Research Corner. On 5 March, a new KCTI Research Corner titled "Teaching Translation in the Age of Generative AI" will take place.

The event will feature speakers Joss Moorkens, JC Penet, and Masaru Yamada, who authored the recently published book "Teaching Translation in the Age of Generative AI: New Paradigm, New Learning?".

The event will be held in English, and there's no need to register. You can learn more about the event, topics, and speakers on the event page.

We also invite you to check out the speakers' book, which is available on KCTI Publications.

Hello everyone,

If you have not come across it yet, here is a recent reflection paper by the Special Interest Group on AI in Translation and Interpreting of the European Language Council:

Peeters, K. et al. (2025). AI for Translation and Interpreting. A Roadmap for Users and Policy Makers.

https://repository.uantwerpen.be/docman/irua/ecef05motoM49

Authored by 25 scholars from 19 universities across 14 countries, the paper calls for a more informed and human-centred use of Generative AI and Large Language Models (LLMs) in translation and interpreting (T&I).

While recognising AI’s productivity potential, the authors highlight key concerns:

  • LLMs generate fluent text by predicting words, not by understanding meaning — which can result in deceptive fluency, inaccuracies, bias and disparities between high- and low-resource languages.

  • Survey data (n=600 professionals, 55 countries) show persistent quality concerns, mixed productivity gains, economic anxiety and a strong demand for proper AI training.

  • Important ethical and legal issues arise around data protection, accountability, bias, environmental cost and human deskilling.

The central message is clear: GenAI offers powerful tools, but not substitute solutions for human expertise. In high-stakes or sensitive contexts, professional translators and interpreters remain essential to ensure quality, trustworthiness and responsible communication.

I hope you find it an interesting and worthwhile read.

Dear community,

We have news to share! eReporting, the European Commission’s new AI-based multilingual tool, is now available to support academia, public administrations, SMEs, and NGOs with complex reporting obligations. Built on the same secure infrastructure as eTranslation, eReporting helps you quickly map all the steps required under EU-funded projects and generate structured activity reports from your existing documents (such as meeting minutes, progress reports and deliverables) in over 30 languages. With a simple drag-and-drop interface and no prompting skills needed, it turns time-consuming, multilingual reporting into a streamlined process, giving you more time to focus on the content that matters.

Test it and let us know what you think. And why not test all the other Digital Europe AI-based Multilingual Services?

Dear Saule,

You can register here: Digital Europe AI-based services. All the necessary steps are explained on the registration page. I hope you find the tools useful!

On Tuesday, 24 February 2026, from 10:30 to 12:00 (CET), colleagues from DG Translation will share key insights from the 2025 Translating Europe Forum (TEF) and reveal the topic for TEF 2026.

The theme of the TEF 2025 was 'quality matters', in a sector where AI is being used more and more. If this sounds interesting to you, you can sign up here: Echoes of the 2025 Translating Europe Forum.

We're looking forward to seeing you!

I recently came across this TEDx talk and I think her topic of research is very interesting! We often hear about how AI may replace human translators and interpreters, but we don't often hear about how being exposed to machine translation is affecting our own use of language right now; maybe even making it less varied and spontaneous. I also tend to agree with Lise Volkart's assertion that 'AI hype often outstrips performance'. From what I could find out online, her research is still in progress. Do you have any recommendations about other research on the topic? Or maybe other presentations that go into a little more detail, general observations, etc...? Thanks!

Hello Andrea,

I would recommend watching the presentation called 'Code and prejudice' by Marina Pantcheva from Translating Europe Forum 2025. Here's the link: #2025TEF - TEF–TALK: CODE AND PREJUDICE

The presentation touches upon how AI bias emerges in large language models, and illustrates the presence of AI bias through examples. Marina Pantcheva also explains why we should avoid bias in AI, which could be interesting to see in light of how AI shapes or affects our language right now.

Hello,

I’m sharing below some upcoming conferences on new technologies and how they are shaping interpretation and translation. I hope you find them interesting. Feel free to share any other events you're aware of :)

  1. Traduction & Qualité 2026: How to Stay Creative in the Era of Machines? - Lille, 30 January 2026
  2. Artificial Intelligence and Audiovisual Translation: Challenges and New Horizons - Palermo, 23-24 April 2026
  3. New Research in Translation and Interpreting Studies 2026 - Tarragona, 26-27 June 2026
  4. NeTTIT 2026: International Conference ‘New Trends in Translation and Interpreting Technology’ - Dubrovnik, 24-27 June 2026

Thank you! I would like to add another conference on this topic: Convergence 2026: Human-AI Integration for Multilingual and Accessible Communication - Guildford, UK, 17-19 June 2026

The second edition of the Convergence conference will create an opportunity to bring together innovative research on the evolving landscape of AI in the context of multilingual and accessible communication, reflecting on the complexity and effects of using AI-driven technologies in these fields. The conference will foster a multidisciplinary dialogue that will generate new theoretical perspectives and practical research, focusing on themes such as the ethical aspects of AI in translation and interpreting, AI-enabled digital accessibility and societal inclusion, and the impact of Generative AI on language mediation. We will also examine the evolving role of language professionals, the power of Large Language Models (LLMs) in supporting multilingual communication, and the crucial need for responsible use of language AI in the public sector. The conference will publish full papers in open access proceedings with assigned ISBN and DOI.

This looks great, thank you! We can add it to the Events section of the KCTI to give it more visibility. Would you be interested in that?

The KCTI Research Corners are events where you can present research and studies to an active and engaged audience. We have already had two very successful editions this year, both about AI, with over 300 participants joining the streaming from all around the world:

  1. AI tools in the booth and cognitive load with Bart Defrancq and Ana-Maria Pleșca in April
  2. Introducing the LT-LiDER Project with Alina Secară, Dragoș Ciobanu and Miguel Rios Gaona in October

We are always looking for more academic projects to cover, so if you would like to present your research, get in touch with us!

Dear Ivano,

At TH Köln (Cologne University of Applied Sciences) we are currently building a self-learning platform on AI in interpreting, co-funded by the EU. Might this be a project you would like to present?

Best wishes

Anja


Dear Anja,

Thanks for your message! Indeed, it would be interesting and relevant. I'm sending you a follow-up email so we can discuss it in more detail.

Best,

Ivano

On February 29, 2020, AIIC Switzerland hosted a seminar on remote interpreting featuring Klaus Ziegler, Kilian Seeber, Amy Brady and Monica Varela Garcia.

The summary I wrote with Jéssica Ayala and Sébastien Longhurst is now online: https://aiic.ch/press/remote-interpreting-ws2/

Thanks for sharing this here and in our thread on conference reports!


I recently recorded a wide-ranging episode on interpreting technologies for the Troublesome Terps podcast. Feel free to check it out!

https://www.troubleterps.com/45


I really enjoyed the discussion. It amazes me that so much was covered, and yet so much wasn't even mentioned! And the feedback shows that interpreters are really interested in the topic.


I'm currently looking into upgrading our facilities and wondered if colleagues had experiences they wished to share on technologies, or ones they felt warrant praise, in classroom settings.

I'm specifically interested in hearing about classroom management solutions. I've found that interpreting hardware is ubiquitous, but managing the recording of booths, playback through the system, and dual playback, all managed at a trainer's desk, seems to baffle the AV providers. Have any of you managed to clear the fog and find an ideal solution?

Hi Susan!

I've experienced similar problems with our AV technicians. According to them, the problem lies in how cables can feed into computers. But I think Barbara Ahrens in Cologne found a solution to that when they installed their system a couple of years ago.

In Stockholm we have resorted to having students plug mp3 players into their own consoles and thereby record themselves on two channels, though this only works with a live speaker. As soon as we want to do it with a recording from e.g. the Speech Repository, the sound ends up mixed on the two channels.

That's all I have,

Elisabet

Thanks for your response Elisabet. Not being a technician myself I had no idea that what I was asking to do was so difficult to express to AV and to have them understand.

The current crisis leaves me wondering how to do the same when conducting a SIM exam online. Assessment of two different recordings of the original and the interpreter is not the same as assessing the overlaid tracks. Recording both when both are live, simultaneous and in remote locations (their self-isolated homes) is also difficult (to my limited knowledge)!

Surprising that this is so difficult. I would have expected that language-learning tech had all these bases covered. Maybe a bring-your-own-device approach can help? Recording yourself as an interpreter is easy, of course. Those recordings (and the speech, either pre-recorded video or self-recorded by the speaker) could then be uploaded quickly to a central location (LMS? Cloud storage?). But my understanding is you want both source and target aligned for comparison? Ideally on two separate channels, or at least left and right in the stereo spectrum? Practice platforms like the SCIC Speech Repository or Interpreters' Help already do this, but do not currently allow a direct upload of your own source and target material - I could imagine a spin-off tool that does exactly this.

Isn’t it so surprising? When I saw that AV were agog at my “simple request” I wondered if I had lost my grip on reality. I think it was my grip on technology however!

Everyone brings their own device, but a centralised system to record both in real time is the angle I'm after. Dual-track recording and playback, so that you can skip back to a specific timestamp, see what was said by the speaker and what was said by the student, and then assess the why and the when of matters. Then, listening to another attempt at the same passage to observe approaches, styles, techniques, successes and errors is useful for peer learning, especially in the latter stages of training. To do this from the comfort of a screen in front of you, rather than plugging cables in and out, is top of the list.

Left and right in stereo spectrum would work for a single student but limits the multiple student approach.

A spin off of SCIC Rec is exactly the type of tool required!

It is indeed complex and requires some planning when designing your solution. Bart Defrancq worked closely with Televic for the UGent redesign. We can now record both the source (live speaker or video source) and the student booths (all 10) with just one click. Afterwards, we can listen to both tracks for evaluation and feedback (source on the right, student rendition on the left). I don't think we can compare two student versions as easily. The recordings are stored on the local server, not in the cloud; I understand this is due to large file sizes. The drawback is that recordings are not accessible remotely, so if we want students to listen back to them, we need to upload them manually to WeTransfer, for example. Feasible, but time-consuming.

BYOD is what we used to do before we had our new lab. Students would record their own performances on smartphones, tablets or laptops. It's a workaround, but it clearly has its limits. Now that we have the new lab, I'm blown away by how useful it is to listen to both versions simultaneously. The extra insights gained by students and trainers lead to immensely improved feedback: being able to pick up on increasing décalage, omissions, use of intonation, etc. It really makes it a lot easier to work out what happened, identify the cause and suggest fixes and strategies for the future.
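For those still on a BYOD workaround, the two-track playback described here (source on one channel, student rendition on the other) can be approximated after the fact. The sketch below is purely illustrative, assuming both recordings are 16-bit mono WAV files at the same sample rate and already trimmed to start at the same moment; the function name and file layout are invented for this example:

```python
import wave

def merge_to_stereo(student_path, source_path, out_path):
    """Combine two mono WAV recordings into one stereo file:
    student rendition on the left channel, source speech on the right.
    Assumes both inputs are 16-bit mono at the same sample rate."""
    with wave.open(student_path, "rb") as left, wave.open(source_path, "rb") as right:
        assert left.getframerate() == right.getframerate(), "sample rates differ"
        assert left.getsampwidth() == right.getsampwidth() == 2, "expect 16-bit PCM"
        width = 2
        n = min(left.getnframes(), right.getnframes())
        l_data = left.readframes(n)
        r_data = right.readframes(n)
        # Interleave samples: left (student), then right (source), per frame.
        frames = bytearray()
        for i in range(0, n * width, width):
            frames += l_data[i:i + width] + r_data[i:i + width]
        with wave.open(out_path, "wb") as out:
            out.setnchannels(2)
            out.setsampwidth(width)
            out.setframerate(left.getframerate())
            out.writeframes(bytes(frames))
```

In practice, aligning the start points of the two recordings is the hard part; an audio editor such as Audacity can do the same merge interactively.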


Amen, Tom! Giving live context to feedback, with replay of the efforts and suggestions for improvement! I had assumed this was already a thing, but discovered it's still an idea, albeit one we can build upon if we convey the right message to the right designers.


Rereading Alexander's comment, yours and my reply, I feel I need to clarify: we have ten booths and can record 11 versions: the source plus one per booth. While we cannot listen to two students simultaneously, we can listen to source + student 1 OR source + student 2. Using timestamps, it is indeed easy to switch quickly from one student to the other and compare different solutions. Extremely useful feature! Quite the eye-opener for many students (and trainers, I must admit).

Fri, 14/12/2018 - 15:19


I'm intrigued by applications like auXala and whether they could be used for Virtual Classes. At the moment the hardware infrastructure seems cumbersome - often it doesn't work for simultaneous, and when it does, only one student can be listened to at a time. I know the VC department is considering an overhaul of its systems, but maybe something like this could be the way forward instead. Would this system enable me to get a number of trainers together in Brussels to listen in to a class of students in Shanghai, with each trainer able to listen to a different student by selecting the appropriate channel on their mobile phone? Or am I being wildly unrealistic? I speak as a total tech-ignoramus, so any pointers would be very welcome.


Hi Andy,

I have used GoReact before (https://get.goreact.com/) and it works very well as an online platform to upload activities, listen to students' recordings and give feedback. It is not perfect, but it is a great platform for creating different types of assignments (webinars, video or audio interpreting assignments, recordings uploaded by yourself, students' presentations, etc.), and its best feature is that it is very interactive. You can provide feedback in different ways (written comments, tags you create yourself to flag up different aspects/criteria, audio feedback, links to videos, etc.). Feedback comments are time-coded: students can see when they made that "minor omission" or came up with a beautiful solution that you tagged as "great solution"; they get statistics of the different types of "errors" or good solutions for each recording and course, and you can create templates to enter marks. You can set up activities for self-assessment, peer assessment and tests.
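For anyone curious what time-coded, tagged feedback looks like under the hood, here is a minimal sketch of the idea in Python. This is purely illustrative of the concept, not GoReact's actual data model; all names are invented:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class FeedbackComment:
    """One trainer comment anchored to a point in the student's recording."""
    seconds: float   # timestamp within the recording
    tag: str         # e.g. "minor omission", "great solution"
    note: str = ""   # optional free-text remark

def tag_statistics(comments):
    """Tally how often each tag occurs, so a student can see the
    distribution of error types and good solutions per recording."""
    return Counter(c.tag for c in comments)

# Example: three time-coded comments on one recording.
comments = [
    FeedbackComment(42.5, "minor omission"),
    FeedbackComment(118.0, "great solution", "elegant reformulation"),
    FeedbackComment(240.3, "minor omission"),
]
stats = tag_statistics(comments)
# stats["minor omission"] == 2, stats["great solution"] == 1
```

The timestamp is what makes the feedback actionable: the student can jump straight to second 118 of their own recording and hear the passage the comment refers to.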

From a more technical point of view, students need a webcam in order to upload recordings, even if they only want to upload audio; most laptops have one and, alternatively, students can use the app on their phones to access the platform and record themselves.

Each student has to pay around $30, if I remember correctly, unless you have an institutional license. That is a downside, but it's still cheaper than most textbooks in other disciplines!

I tested it and led its implementation with my former colleague Aída Martínez-Gómez at John Jay a couple of years ago, so happy to answer any questions.

Eloísa


Andy, you might want to look into one of the remote interpreting platforms (e.g. Voiceboxer), where you can actually switch between interpreting channels. It's audio-only, I think, because video simply requires too much bandwidth.


When you say audio only, you mean that the trainer only gets the audio feed, but students get the video feed of the original? Working from an audio-only source would take us back a few decades.


Yes, what I mean is that one can hear the interpreters, but not see them. Interpreters on the platform get both video and audio as input.


Hi Andrew. If a platform can offer a number of languages, i.e. channels, then I am sure it can do the same for a number of interpreting students. So rather than switch between FR, DE, ES, EN, PL etc you'd switch between student 1, 2, 3, 4 etc. I'm not an expert, but that would be my guess anyway. The best thing to do would be to contact providers with that query.


Here's a very brief summary of what Alex Drechsel, Susana Rodríguez and I experienced when we tried out auXala online recently: https://translationcommons.org/wp-content/uploads/2018/11/Auxala-trial-repo…

Monday, 1 October 2018


Podcasts about interpreting and translation

A big list of podcasts about language in general, linguistics etc. can be found here.

Thursday, 23 January 2020
  • Fantinuoli, Claudio: "Interpreting and technology" (Berlin: Language Science Press 2019) - More info
  • O'Hagan, Minako: "The Routledge Handbook of Translation and Technology" - More info
  • Corpas Pastor, Gloria; Durán-Muñoz, Isabel: "Trends in E-Tools and Resources for Translators and Interpreters" - More info
  • "Here or There: Research on Interpreting via Video Link" - More info
  • Interpreters vs Machines: Can Interpreters Survive in an AI-Dominated World? - More info
  • Heidi Salaets & Geert Brône: "Linking up with Video. Perspectives on interpreting practice and research" - More info
  • Suzanne Ehrlich & Jemina Napier (eds.): "Interpreter Education in the Digital Age: Innovation, Access, and Change" - More info
Wednesday, 29 January 2020

Content coming this week.

Friday, 31 January 2020

Content coming this week.
