Chartered Institute of Linguists

CIOL AI Voices


Introduction


Since the advent of Computer-Assisted Translation (CAT) tools in the 1980s and 1990s, translators have been adapting to new technology in their work. This began with Translation Memory (TM) software and evolved into Machine Translation (MT), with third-generation Neural Machine Translation (NMT) becoming an increasingly common part of translation workflows.

Therefore, for many in our profession, the hype surrounding the latest developments in Large Language Models (LLMs) and Generative AI (GenAI) such as ChatGPT, and their potentially transformational effects, may feel familiar.

The future impact of these tools on the translation industry remains uncertain. The latest survey from the Association of Translation Companies (ATC) indicated that 25% of its members are using LLMs and other GenAI tools, compared to 82% using TM and 71% using MT. Some of the primary use cases for GenAI tools currently appear to be content creation, data cleaning, and post-editing. We are still in the early stages of adapting to these new technologies, but it seems inevitable that their use will become more widespread and their impacts clearer.

As I explore Artificial Intelligence (AI) governance and regulation, I find myself particularly drawn to the EU AI Act, the US Executive Order on the safe use of AI, and the discussions held at the UK AI Safety Summit (see references). These initiatives represent significant strides in establishing comprehensive frameworks for the responsible development and use of AI. The EU AI Act, for instance, proposes a risk-based approach to AI regulation, focusing on data quality, transparency, human oversight, and accountability. Similarly, the US Executive Order establishes new standards for AI safety and security, protects privacy, advances equity and civil rights, and promotes innovation and competition. The recent UK AI Safety Summit brought together governments, leading AI companies, civil society groups, research experts and thought leaders to discuss the risks of frontier AI and how they can be mitigated through internationally coordinated action.

These developments are highly relevant to professional bodies like the Chartered Institute of Linguists (CIOL), as they provide valuable insights into how we can shape our position on AI and the advice we consequently give to the profession, our members and stakeholders. As Chair of CIOL Council, I am keen to support the Institute as it navigates this complex landscape. I believe that our position should not only reassure linguists, especially professional translators, of their role in a world influenced by GenAI, but also clearly flag the issues and concerns, offering practical guidance on how best to work with this latest transformative technology. CIOL will soon be conducting a survey among its members to gain a deeper understanding of the impact of these technological shifts; the insights gained will be pivotal in gauging their real-world effects.

As a Royal Charter body, CIOL is here to represent and act in the best interests of the entire translation profession and industry, and indeed the wider profession of linguists. It is clear that transparency, safety, and ethical considerations should be at the forefront of our approach. We will advocate for these principles in our thinking and advice on the use of AI in translation and language services, and we will encourage our members and language professionals to do the same.

I believe that language practitioners have a proven record of resilience and successful adaptation to technological advancements. We have learned to adjust to the introduction of previous technologies and can apply those lessons as we navigate the world of AI. By embracing this resilience and leveraging our collective knowledge and experience, we can ensure that linguists remain at the forefront of AI integration in our field – the essential expert ‘humans in the loop’.

Steve Doswell
Linguist, consultant and Chair of CIOL Council

 


CIOL Council’s initial reflections on AI in translation and language services


1. We can’t stop AI technology but there are aspects of it we may need to challenge 

Embracing AI technologies is inevitable, and they can make linguists’ work more efficient and enjoyable, but we do need to highlight and confront the downsides.

2. New models will mean new opportunities 

As AI technologies become ubiquitous, they will also become more accessible, both lowering the costs of using them and enabling new business models for individual translators and for translation companies; there should be opportunities for all, so existing linguists need not be losers.

3. In the future much more content will be translated

The vast majority of content produced worldwide is not translated. New technology will enable much more content to be translated globally, with commensurate opportunities for linguists who can ensure it is fit for purpose.

4. The human is – and will continue to be – essential 

The skills of linguists in post-editing, interpretation of nuance and cross-cultural navigation will become all the more important and valuable as AI-based translation tools become ever more widespread.

5. The most skilled linguists will be the most in demand 

As it becomes increasingly hard to distinguish machine from human translation – at least at a superficial level – creative and skilled linguists will become ever more valuable, as they make all the difference to the end product.

 


CIOL Council’s major concerns regarding AI’s use in today’s translation and language services


1. Machines being left to their own devices

If machines seem to produce good (enough) quality content, companies, public services and governments may be tempted to cut corners and take risks.

2. Data runs out of control

Society, organisations and individuals need to be alert to the data we ‘give away’ when we input it into translation tools and generative AI. We need to be more aware of where that data goes, what use is made of it and who profits from it.

3. AI gives a whole new meaning to ‘fatal errors’

The errors that translation tools and AI make are more common in less widely used languages. Over time, errors may also become harder and harder to spot (especially in English) as AI improves. This makes it all the more important that AI is not used without human oversight in high-stakes interpreting or translating – and this needs attention from governments, public services and regulators.

4. The hype will undermine the understanding of the added value an expert human linguist brings to the process

How the language industry works is already poorly understood in the wider world. The impact of these changes risks persuading the most sophisticated and the least sophisticated alike that algorithms can reliably do things they can’t.

5. An even deeper digital divide for less well-resourced languages

Given the overwhelming advantages English has in the amount of digital content and the focus of commercial digital players, there is a major risk that only the three to five biggest world languages will continue to thrive in an AI-led languages paradigm.

 


Download the 'CIOL AI Voices' White Paper


 

 


 


Useful References


  1. The EU's Artificial Intelligence Act, explained | World Economic Forum. https://www.weforum.org/agenda/2023/06/european-union-ai-act-explained/.
  2. EUR-Lex - 52021PC0206 - EN - EUR-Lex. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.
  3. Artificial Intelligence Act - Wikipedia. https://en.wikipedia.org/wiki/Artificial_Intelligence_Act.
  4. How to use AI in Translation. https://slator.com/resources/how-to-use-ai-in-translation/.
  5. OpenAI Gains Traction as New Default for Machine Translation. https://slator.com/openai-gains-traction-as-new-default-for-machine-tran...
  6. US announces 'strongest global action yet' on AI safety. https://www.bbc.co.uk/news/technology-67261284.
  7. Biden hails ‘bold action’ of US government with order on safe use of AI .... https://www.theguardian.com/technology/2023/oct/30/biden-orders-tech-fir....
  8. FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and .... https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/....
  9. FACT SHEET: Biden-Harris Administration Announces New Actions to .... https://www.whitehouse.gov/ostp/news-updates/2023/05/04/fact-sheet-biden....
  10. UK PM Sunak to lead AI summit talks before Musk meeting. https://www.msn.com/en-gb/news/world/uk-pm-sunak-to-lead-ai-summit-talks....
  11. Hunt for fraudsters in London ramped up using artificial intelligence, says minister as UK hosts AI summit. https://www.standard.co.uk/news/uk/ai-artificial-intelligence-summit-elo....
  12. Harris and Sunak due to discuss cutting-edge AI risks at UK summit. https://www.msn.com/en-gb/news/uknews/harris-and-sunak-due-to-discuss-cu....
  13. AI Safety Summit | AISS 2023. https://www.aisafetysummit.gov.uk/.
  14. About the AI Safety Summit 2023 - GOV.UK. https://www.gov.uk/government/topical-events/ai-safety-summit-2023/about.
  15. What is the AI Act — and what does it mean for the UK?. https://www.standard.co.uk/news/tech/ai-act-eu-regulation-uk-government-....
  16. The EU AI Act: A Primer - Center for Security and Emerging Technology. https://cset.georgetown.edu/article/the-eu-ai-act-a-primer/.
  17. About | The AI Summit London. https://london.theaisummit.com/ai-summit-london.
  18. Generative AI: Friend Or Foe For The Translation Industry? - Forbes. https://www.forbes.com/sites/forbestechcouncil/2023/08/11/generative-ai-....