Chartered Institute of Linguists

Uses of AI Translation in UK Public Service Contexts

Published by the Chartered Institute of Linguists

Author: Lucas Nunes Vieira, University of Bristol

Reviewers: Lynne Bowker (Université Laval) and Mary Nurminen (Tampere University)


Introduction


Machine or artificial intelligence (AI) translation tools are used as communication aids in a range of contexts. These tools can provide helpful assistance in the face of a language barrier. Their benefits may include greater linguistic diversity and increased access to information, but machine translation also carries risks.

Translation errors are common and may be difficult to identify for users who do not speak both the source language and the target language. Existing research has shown that machine translation is used in contexts where miscommunication can be highly consequential, such as healthcare and policing. This type of machine translation use has so far tended to go under the radar, with little public discussion and, importantly, little evidence of the extent and nature of the reliance on machine translation tools in these contexts. This report therefore presents preliminary results of a survey of machine translation use in health and social care, legal and emergency services, and the police.

The focus of the survey is on uses of unedited machine translations. A sample of 2,520 UK professionals submitted valid responses to this survey. A total of 33% of them had used machine translation at work, most often in contexts involving direct communication with others in a shared physical space. The professionals were highly satisfied with the tools they used. They were also confident in their ability to use the tools successfully, even though it was uncommon for machine translation to be mentioned in workplace training. Google Translate was by far the tool used most often. The use of generative AI tools such as ChatGPT also ranked highly. The tools were often accessed on personal devices using an openly available browser interface, practices which pose significant risks to privacy and information security.

While the report does not present the full data collected in the study and is not intended to provide detailed guidance or a best practice model, it offers three basic recommendations aimed at greater transparency and awareness-raising:

1. At a minimum, organisations need to recognise (in training, staff communication, and the organisation’s literature) that AI/machine translation exists, and that staff and members of the public may be instinctively inclined to use it. The potential presence of AI/machine translation in the contexts covered by this report cannot be institutionally ignored.

2. The use of AI to overcome language barriers needs to be addressed in policy. Institutional policies need to be sufficiently flexible to keep up with technological developments while also protecting the community from the risks posed by machine translation. Policies should ideally involve dedicated language access teams, a mechanism for assessing needs and reviewing the policy, protected budgets for professional language services, and information on where these services should be prioritised.

3. Organisations need to place more emphasis on education and staff training. AI and machine translation literacy need to be embedded in the workplace culture to equip workers with the skills necessary to make decisions in what are increasingly challenging and technologised working environments.


Highlights


Some key highlights of the report include:

1. Prevalence of AI translation use:

  • At least 33% of 2,520 UK public service professionals surveyed had used machine translation in their work.
  • Over 80% of respondents worked in health and social care settings, but machine translation was used in all sectors targeted for the study, including legal and emergency services, and the police.

2. Context and purpose of use:

  • AI was most often used for public-facing communication, particularly for in-person interactions.
  • AI was most commonly used for translations between English and Polish, with Romanian and Arabic the next most frequently translated languages.

3. Technology and access:

  • Google Translate was by far the most commonly used tool, followed by ChatGPT and other AI chatbots.
  • Most users accessed translation tools via openly available browser interfaces.
  • Over half reported using personal devices for work-related translation tasks.

4. Institutional awareness and training:

  • For 72% of respondents, machine translation had never been mentioned in workplace training.
  • 15% reported that machine translation use was recommended by their employer.

5. User confidence and satisfaction:

  • Most users were at least somewhat confident in their ability to use machine translation successfully.
  • Satisfaction levels with the technology were extremely high across various measures.

6. Key risks identified:

  • The very real potential for miscommunication in critical public services.
  • Privacy and information security concerns due to use of personal devices and open web interfaces.
  • The risk of machine translation becoming a de facto 'standard practice' without proper policy consideration.

The report highlights a significant gap between official policies and on-the-ground practices in AI translation use in UK public services, emphasising the need for more transparent discussions and clearer guidelines in this area.


Key Recommendations


  • Organisations should formally acknowledge the existence and potential use of AI translation.

  • Use of AI for language barriers should be addressed in institutional policies.

  • Greater emphasis on staff education and training regarding AI and machine translation is needed.


CIOL's View


Dom Hebblethwaite, Head of Membership at CIOL, said:

The Chartered Institute of Linguists (CIOL) is pleased to have partnered with the University of Bristol on the report Uses of AI Translation in UK Public Service Contexts. This groundbreaking research, authored by Dr Lucas Nunes Vieira of the University of Bristol, examines a previously unstudied aspect of our public services: the use of machine translation tools by frontline workers.

The findings presented here are both informative and concerning. They reveal significant use of AI-powered translation tools, including Google Translate and ChatGPT, in healthcare, legal, emergency, and police services, a practice that has largely gone unnoticed and unregulated.

The data, from over 2,500 UK professionals, shows that a third of respondents have used machine translation in their work, often in public-facing situations where miscommunication could have serious consequences. Of particular concern is the lack of institutional awareness and acknowledgement of this practice and the absence of appropriate policy frameworks to protect the public and public service workers themselves. The majority of respondents reported that machine translation had never been mentioned in their workplace training, despite its frequent use. This institutional silence means frontline workers are navigating complex linguistic situations with public service users and the public in ad hoc ways without guidance or support.

We must also address the potential for AI to create a false sense of linguistic competence. The language industry’s complexities are already poorly understood by the general public and by frontline workers, and the advent of seemingly capable AI translation tools risks further obscuring the vital importance of human linguistic expertise. This misconception could lead to a further devaluing of language skills, ultimately impoverishing the UK’s linguistic capabilities.

We wholly endorse the recommendations put forth in this report. The calls for organisations to acknowledge the existence and potential use of AI/machine translation, to address that use in policies, and to place much more emphasis on staff education and training on AI and machine translation are all crucial steps. However, we believe these recommendations should be seen as a starting point rather than the end state. They should be implemented alongside robust safeguards and a commitment to maintaining human oversight by professional translators and experienced linguists in critical translation tasks.

The risks of getting translation wrong in public service contexts, whether through mistranslation, cultural insensitivity, or loss of nuance, are simply too high not to use appropriately qualified language professionals. Another concern is the potential for AI to perpetuate or even amplify biases present in its ‘training data’, leading to systemic discrimination in translated content.

In light of these concerns, we strongly advocate for maintaining and, where possible, increasing public service budgets for professional translation services. While we recognise that it may not be realistic for human translation to be used in every circumstance, it is crucial that funding for skilled linguists is protected, especially in high-stakes situations where accuracy and cultural sensitivity are paramount.

If facts are misrepresented or key messages are mangled, public services quite simply fail the publics they serve. It is clear that the current situation of unacknowledged and unmanaged use of AI for translation in public services cannot continue.


Download the full report here: https://www.ciol.org.uk/ai-translation-uk-public-services


To cite this report:

Vieira, LN, 2024. Uses of AI Translation in UK Public Service Contexts: A Preliminary Report. Chartered Institute of Linguists. https://www.ciol.org.uk/ai-translation-uk-public-services


This report is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

This means you are free to share (copy and redistribute) and adapt (remix, transform, and build upon) this material for any purpose, even commercially, as long as you give appropriate credit to the University of Bristol and CIOL, provide a link to the license, and indicate if changes were made.