AI-detection tools are now common in schools, universities, and training organisations. The aim is often well-intentioned: protect academic integrity and maintain trust in assessment results. But these systems can introduce a problem that’s becoming hard to ignore: AI bias that affects multilingual learners.
For students who use English as an additional language, the patterns in their writing may be incorrectly flagged as 'AI-generated', even when the work is entirely their own. The result? Frustration, mistrust and, in some cases, unfair academic penalties.
Many AI-detection tools are trained on vast collections of first-language English writing. As highlighted in the British Council’s Artificial intelligence and English language teaching: Preparing for the future, such limited datasets can lead to algorithmic bias in AI, where systems fail to recognise legitimate variation in language use.
This is one of several types of AI bias that can appear in educational technology. Another example is when models are inadvertently tuned to expect certain sentence structures or vocabulary choices, making them more likely to misinterpret authentic writing from second language speakers as machine-generated.
Fluency and authenticity aren’t the same thing. A second-language student might use simpler vocabulary or repeat certain structures, not because they’re cheating, but because they’re still developing proficiency. Yet the system may treat those features as suspicious, reinforcing patterns of AI bias and discrimination.
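These statistical proxies can be sketched concretely. The toy "detector" below is not any real product's algorithm; the lexical-diversity measure and threshold are invented purely to illustrate how repetitive but entirely authentic learner prose can trip a naive signal:

```python
# Illustrative sketch only: many detectors lean on statistical signals
# such as low lexical diversity and repeated structures -- the very
# features common in developing second-language writing. The scoring
# rule and threshold here are invented for demonstration.

def type_token_ratio(text: str) -> float:
    """Share of distinct words in the text: a crude lexical-diversity signal."""
    words = text.lower().split()
    return len(set(words)) / len(words)

def naive_ai_score(text: str, threshold: float = 0.65) -> bool:
    """Flag text as 'AI-like' when lexical diversity falls below a threshold.
    This is exactly the kind of proxy that penalises learners."""
    return type_token_ratio(text) < threshold

learner = ("I think the topic is important. I think the author is right. "
           "I think we should study the topic more.")
native = ("The essay raises a compelling point, though its evidence "
          "feels thin and its conclusion somewhat overstated.")

print(naive_ai_score(learner))  # True: repetitive learner prose is flagged
print(naive_ai_score(native))   # False: varied vocabulary passes
```

Both sentences are human-written; only the simpler, repeated phrasing separates them, which is why a proxy like this produces false positives for learners.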
Over time, this deepens inequity. Students from certain linguistic backgrounds face higher rates of false positives, adding an unnecessary barrier in environments where they may already be working harder to succeed.
When students feel unfairly targeted, trust between them and their institution begins to erode. They may become more guarded in their interactions with staff, or more reluctant to take creative risks in their work for fear of being flagged.
There’s also the matter of reputation. As discussed in the British Council’s Human-centred AI: lessons for English learning and assessment, institutions that deploy biased tools without safeguards risk public criticism, student dissatisfaction and even legal challenges.
These risks aren’t theoretical. There are already AI bias examples in other sectors, from hiring systems that disadvantage certain applicants to facial recognition tools that misidentify people from specific ethnic backgrounds. In education, the stakes are just as high, especially when a student’s academic record or progression is on the line.
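A short arithmetic sketch (all numbers invented for illustration) shows why those stakes are so high: when genuine misconduct is rare, even a detector that is usually right can mean that most flagged students are innocent.

```python
# Worked base-rate example with invented figures: out of 1,000
# submissions, suppose only 50 involve misuse, the detector wrongly
# flags 10% of honest work, and catches 60% of actual misuse.

honest_students = 950
cheating_students = 50
false_positive_rate = 0.10  # honest work wrongly flagged
true_positive_rate = 0.60   # actual misuse correctly caught

wrongly_flagged = honest_students * false_positive_rate   # ~95 students
rightly_flagged = cheating_students * true_positive_rate  # ~30 students

share_innocent = wrongly_flagged / (wrongly_flagged + rightly_flagged)
print(round(share_innocent, 2))  # 0.76: most flagged students did nothing wrong
```

Under these assumed rates, roughly three out of four flagged students would be falsely accused, which is why a flag alone should never be treated as evidence.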
Bias in AI detection isn’t inevitable. There are ways to make these systems fairer, more transparent and more effective.
Detection tools should be one step in a broader review process. Educators who know their students’ work bring essential context that software alone can’t provide.
Clear communication about how detection tools are used, what data they rely on, and how results are interpreted helps build trust. This aligns with the British Council’s multilingual reality research, which calls for approaches that value linguistic diversity.
Regular independent assessments can reveal whether certain student groups are disproportionately flagged. This ensures that tools evolve alongside the diversity of English in use today.
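One way such an audit might work, sketched below with hypothetical log data and field names: compare false-positive rates (authentic work wrongly flagged) across language backgrounds, so a disparity becomes visible rather than anecdotal.

```python
# Sketch of a simple bias audit, assuming an institution keeps a log of
# detector verdicts for work later confirmed by human review to be
# student-written. The log entries and group labels are hypothetical.

from collections import defaultdict

# (language_background, flagged_by_detector) for confirmed-human work
review_log = [
    ("L1 English", False), ("L1 English", False), ("L1 English", True),
    ("L1 English", False), ("EAL", True), ("EAL", True),
    ("EAL", False), ("EAL", True),
]

def false_positive_rates(log):
    """False-positive rate per group: flagged human work / all human work."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in log:
        total[group] += 1
        flagged[group] += was_flagged
    return {group: flagged[group] / total[group] for group in total}

rates = false_positive_rates(review_log)
print(rates)  # a large gap between groups signals disproportionate flagging
```

In this invented log, EAL students' authentic work is flagged three times as often as first-language students' work; a real audit would use far larger samples and appropriate significance checks before drawing conclusions.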
Assignments that encourage personal reflection, in-class work, or original analysis make it harder to misuse AI tools in the first place. This design-based approach is discussed in the British Council’s How AI is reshaping communication in the workplace, which stresses the value of tasks that mirror authentic communication needs.
AI detection will likely remain part of academic life, but it shouldn't be the sole safeguard against misconduct. By prioritising pedagogy, valuing linguistic diversity, ensuring that all learners are treated fairly and rethinking assessment (relying more on formats such as face-to-face assessment, practicals, group assignments and portfolios developed over time), institutions can protect academic integrity without perpetuating bias.
As the British Council’s work on AI in language education shows, technology works best when it supports effective teaching and learning, not when it acts as a silent judge.
What is bias in AI detection?
Bias occurs when detection systems flag certain groups' work as suspicious more often than others, often because the training data reflects only first-language English writing styles.

Why are multilingual students flagged more often?
Their writing may include sentence structures, word choices or stylistic features influenced by another language, which detection models can misinterpret as machine-generated.

How can institutions reduce bias in AI detection?
Use human review alongside software, run regular bias audits, design fair assessments, be transparent about how detection tools are applied, and understand what current AI detectors can and cannot do so that judgements are well informed.

Are there real-world examples of this kind of bias?
In education, false positives for multilingual students are a key example, with research showing detection accuracy as low as 35-60% for AI-edited content. In other sectors, hiring algorithms and facial recognition systems have also shown measurable bias.

What should happen when a student is wrongly flagged?
Establish clear due process procedures for students who believe they've been falsely accused, create transparent appeal mechanisms and provide support resources to safeguard students' wellbeing throughout the process.