
Uses of AI Translation in UK Public Service Contexts

AI Translation in UK Public Services: Navigating the 'Sea of Chaos'


Key points from the expert panellists at the APPG Modern Languages Meeting held on 23 June 2025.

Key Findings from Frontline Research


Dr Lucas Nunes Vieira from the University of Bristol presented research commissioned by CIOL, revealing the widespread but risky use of AI translation tools across UK public services. His comprehensive study involved 2,500 frontline public service professionals, with detailed follow-up research among 18 social workers in child protection, adult services, and mental health.

The research showed that one-third of frontline workers are already using AI translation tools in their daily work, often on personal devices through openly accessible web interfaces. This practice poses significant data privacy risks.

The research highlighted several high-risk uses of AI translation in critical contexts:

  • Medical consent forms: NHS staff using basic machine translation for written consent documents that shape treatment decisions and legal liability
  • Prescriptions and medication dosing information: Particularly risky given AI's poor handling of numerical data
  • Complex medical terminology: A vicious circle where staff use AI translation specifically for the most complex terminology - precisely when accuracy is most crucial and mistakes most common
  • Health record coding: Using AI to guide input information into permanent NHS health records, potentially "cementing" incorrect information that could affect future care

 

The Healthcare Perspective: Real-World Consequences


Dr Stephen Hibbs from Queen Mary University of London shared a sobering personal experience of communication failures in healthcare, describing a patient with a bone marrow transplant who died partly due to misunderstanding medication instructions, complicated by a language barrier. His conversations with other specialist healthcare workers in his centre revealed that most had encountered situations where patients with limited English proficiency came to harm, including undertaking treatments without adequate understanding.

Working with Prof Federico Federici at UCL, Dr Hibbs has been developing linguistically and culturally appropriate communications for Bengali and other communities in a cancer setting.

Their work on translating a nationally approved chemotherapy consent form - currently only available in English and Welsh - demonstrates the urgent need and strong cost/benefit case for professional translation of critical standard documents which are in widespread use.

 

The 'Sea of Chaos' and 'Anchors of Stability'


Dr Hibbs described providing healthcare across language barriers as a "sea of chaos" where the inherently unpredictable, urgent and real-time realities mean frontline workers have to use whatever translation tools are available immediately.

However, he advocated for the power of "anchors of stability": high-quality, professionally translated resources that both workers and patients can trust and reference repeatedly.

The chaos may be inevitable, but preparing high-quality, professionally checked translations of routinely used patient information and other standard documentation, such as consent forms, can help healthcare professionals, patients and families by giving them these vital ‘anchors of stability’.

Well-translated documents can:

  • Increase patient trust and respect
  • Reduce obvious translation errors which erode confidence, damage credibility and reduce patient engagement
  • Enable patients to explain details of their treatment - in their language - to family members
  • Provide reliable resources for interpreters supporting patients, who are themselves dealing with complex medical terminology

 

Professional Translation vs Machine Translation


Prof Federico Federici made a crucial distinction between ‘immediate response translation’ and ‘preventative translation’.

While AI may be the only solution for some urgent communication needs, there is no excuse for not having high-quality multi-language professional translation for predictable, high-volume documents such as standard patient information and consent forms.

 

The Communication Challenge


Prof Federici emphasised that machine translation focuses on probabilistic language translation rather than understanding actual communication. Patients can often identify when ‘raw’ machine translation has been used, responding with dismay, confusion and a feeling of being disrespected. This sends the signal that the health service doesn't care about patients and their understanding.

He stressed that professional translators, with their cultural knowledge and expertise in using AI and machine translation tools appropriately, achieve the best results. The goal should always be human oversight of important AI output, not risking raw output with no human linguistic expertise.

 

A Balanced Approach


John Worne, CEO of the Chartered Institute of Linguists, emphasised the need to balance risks and benefits while addressing two key challenges:

  1. Creating ‘anchors of stability’: Developing high-quality, professionally translated resources for frequent use cases
  2. Building confidence: Providing simple checklists and better guidance on appropriate tools, risk mitigation and safer AI translation use for frontline workers

 

Common AI Translation Pitfalls


John flagged three crucial areas for health and public service contexts where AI/machine translation commonly fails:

  1. Proper names: AI tends to mix, merge and blend elements of names (affecting the names of people, drugs, places etc.)
  2. Numbers: Different numerical conventions across languages lead to significant AI errors with numbers
  3. Negatives: AI struggles with double negative structures common in many languages, sometimes producing opposite meanings – translating ‘do’ as ‘do not’

A striking example from Prof Federici's research showed the risks of unchecked machine translation of cancer information, with "Don't stay at home if you develop a fever" being translated as "Do stay at home" - a potentially life-threatening error.

 

Recommendations


Dom Hebblethwaite from CIOL summarised the dilemma: advances in AI are inevitable and increasingly rapid; however, significant risks arise from both inconsistent AI use and failure to use AI appropriately.

The key recommendations emerging from the meeting include:

For Policy Makers

  • Recognise the dynamic nature of AI risks and benefits
  • Create 'enabling' guidance that highlights highest-risk uses
  • Support professional translation for regularly used complex documents
  • Avoid undermining frontline workers while providing better, more secure tools and greater awareness of AI risks

For Healthcare and Public Services

  • Focus on "preventative translation" for predictable situations
  • Invest in advance in professional translation of high-frequency, high-value, critical documents
  • Implement proper governance around the use of AI translation tools
  • Provide training on when and how to use AI translation more appropriately

 

The Path Forward


While AI translation tools can offer significant benefits, their deployment without proper oversight and professional guidance poses serious risks to public safety and service quality.

The research demonstrates that frontline workers are already using these tools extensively, making it crucial to provide proper guidance and training and to invest in some ‘anchors’ in the form of professionally translated resources; attempting to prohibit AI use entirely would be both impractical and undesirable.

Prof Federici's closing observation remains highly relevant: "When everybody is responsible, nobody is."

Clear ownership and accountability for language policy implementation within public services is essential for addressing these challenges effectively.


Dr Lucas Nunes Vieira's slides from the APPGML event are available here


     

    The Research - Uses of AI Translation in UK Public Service Contexts

     


    Published by the Chartered Institute of Linguists

    Author: Lucas Nunes Vieira, University of Bristol

    Reviewers: Lynne Bowker (Université Laval) and Mary Nurminen (Tampere University)

     

     


    Introduction


    Machine or artificial intelligence (AI) translation tools are used in a range of contexts as a communication aid. These tools can provide helpful assistance in the face of a language barrier. Their benefits may include greater linguistic diversity and increased access to information, but machine translation is also risky.

    Translation errors are common and may be difficult to identify for users who do not speak both the starting language and the language translated into. Existing research has shown that machine translation is used in contexts where miscommunication can be highly consequential, such as in healthcare and policing. This type of machine translation use has so far tended to go under the radar, with little public discussion and, importantly, little evidence of the extent and nature of the reliance on machine translation tools in these contexts. This report therefore presents preliminary results of a survey of machine translation use in health and social care, legal and emergency services, and the police.

    The focus of the survey is on uses of unedited machine translations. A sample of 2,520 UK professionals submitted valid responses to this survey. A total of 33% of them had used machine translation at work, most often in contexts involving direct communication with others in a shared physical space. The professionals were highly satisfied with the tools they used. They were also confident in their ability to use the tools successfully, even though it was uncommon for machine translation to be mentioned in workplace training. Google Translate was by far the tool used most often. The use of generative AI tools such as ChatGPT also ranked highly. The tools were often accessed on personal devices using an openly available browser interface, practices which pose significant risks to privacy and information security.

    While the report does not present the full data collected in the study and is not intended to provide detailed guidance or a best practice model, it offers three basic recommendations aimed at greater transparency and awareness-raising:

    1. At a minimum, organisations need to recognise (in training, staff communication, the organisation’s literature) that AI/machine translation exists, and that staff and members of the public may be instinctively inclined to use it. The potential presence of AI/machine translation in the contexts covered by this report cannot be institutionally ignored.

    2. The use of AI to overcome language barriers needs to be addressed in policy. Institutional policies need to be sufficiently flexible to keep up with technological developments while also protecting the community from the risks posed by machine translation. Policies ideally need to involve dedicated language access teams, a mechanism for assessing needs and reviewing the policy, as well as protected budgets for professional language services and information on where these services should be prioritised.

    3. Organisations need to place more emphasis on education and staff training. AI and machine translation literacy need to be embedded in the workplace culture to equip workers with the skills necessary to make decisions in what are increasingly challenging and technologised working environments.


    Highlights


    Some key highlights of the report include:

    1. Prevalence of AI translation use:

    • At least 33% of 2,520 UK public service professionals surveyed had used machine translation in their work.
    • Over 80% of respondents worked in health and social care settings, but machine translation was used in all sectors targeted for the study, including legal and emergency services, and the police.

    2. Context and purpose of use:

    • AI was most often used for public-facing communication, particularly for in-person interactions.
    • AI was most commonly used for translations between English and Polish, with Romanian and Arabic the next most frequently translated languages.

    3. Technology and access:

    • Google Translate was by far the most commonly used tool, followed by ChatGPT and other AI chatbots.
    • Most users accessed translation tools via openly available browser interfaces.
    • Over half reported using personal devices for work-related translation tasks.

    4. Institutional awareness and training:

    • For 72% of respondents, machine translation had never been mentioned in workplace training.
    • 15% reported that machine translation use was recommended by their employer.

    5. User confidence and satisfaction:

    • Most users were at least somewhat confident in their ability to use machine translation successfully.
    • Satisfaction levels with the technology were extremely high across various measures.

    6. Key risks identified:

    • The very real potential for miscommunication in critical public services.
    • Privacy and information security concerns due to use of personal devices and open web interfaces.
    • The risk of machine translation becoming a de facto 'standard practice' without proper policy consideration.

    The report highlights a significant gap between official policies and on-the-ground practices in AI translation use in UK public services, emphasising the need for more transparent discussions and clearer guidelines in this area.


    The Key Recommendations


     

    • Organisations should formally acknowledge the existence and potential use of AI translation.

     

    • Use of AI for language barriers should be addressed in institutional policies.

     

    • Greater emphasis on staff education and training regarding AI and machine translation is needed.

     


    CIOL's View


    Dom Hebblethwaite, Head of Membership at CIOL, said:

    The Chartered Institute of Linguists (CIOL) is pleased to have partnered with the University of Bristol on the Uses of AI Translation in UK Public Service Contexts. This groundbreaking research, authored by Dr Lucas Nunes Vieira of the University of Bristol, examines a previously unstudied aspect of our public services: the use of machine translation tools by frontline workers.

    The findings presented here are both informative and concerning. They reveal significant use of AI-powered translation tools, including Google Translate and ChatGPT, in healthcare, legal, emergency, and police services - a practice that has largely gone unnoticed and unregulated.

    The data, from over 2,500 UK professionals, shows that a third of respondents have used machine translation in their work, often in public-facing situations where miscommunication could have serious consequences. Of particular concern is the lack of institutional awareness and acknowledgement of this practice and the absence of appropriate policy frameworks to protect the public and public service workers themselves. The majority of respondents reported that machine translation had never been mentioned in their workplace training, despite its frequent use. This institutional silence means frontline workers are navigating complex linguistic situations with public service users and the public in ad hoc ways without guidance or support.

    We must also address the potential for AI to create a false sense of linguistic competence. The language industry’s complexities are already poorly understood by the general public and by frontline workers, and the advent of seemingly capable AI translation tools risks further obscuring the vital importance of human linguistic expertise. This misconception could lead to a further devaluing of language skills, ultimately impoverishing the UK’s linguistic capabilities.

    We wholly endorse the recommendations put forth in this report. The calls for organisations to acknowledge the existence and potential use of AI/machine translation, to address that use in policies, and to place much more emphasis on staff education and training on AI and machine translation are all crucial steps. However, we believe these recommendations should be seen as a starting point rather than the end state. They should be implemented alongside robust safeguards and a commitment to maintaining human oversight by professional translators and experienced linguists in critical translation tasks.

    The risks of getting translation wrong in public service contexts, through mistranslation, cultural insensitivity or loss of nuance, are simply too high not to use appropriately qualified language professionals. Another concern is the potential for AI to perpetuate or even amplify biases present in its ‘training data’, leading to systemic discrimination in translated content.

    In light of these concerns, we strongly advocate for maintaining and, where possible, increasing public service budgets for professional translation services. While we recognise that it may not be realistic for human translation to be used in every circumstance, it is crucial that funding for skilled linguists is protected, especially in high-stakes situations where accuracy and cultural sensitivity are paramount.

    If facts are misrepresented or key messages are mangled, public services quite simply fail the publics they serve. It is clear that the current situation of unacknowledged and unmanaged use of AI for translation in public services cannot continue.

     

    For more, you can watch the CIOL Roundtable on Machine Translation in Public Services here.


    Download the full report here


    To cite this report:

    Vieira, LN, 2024. Uses of AI Translation in UK Public Service Contexts: A Preliminary Report. Chartered Institute of Linguists. https://www.ciol.org.uk/ai-translation-uk-public-services


    This report is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    This means you are free to share (copy and redistribute) and adapt (remix, transform, and build upon) this material for any purpose, even commercially, as long as you give appropriate credit to the University of Bristol and CIOL, provide a link to the license, and indicate if changes were made.