The rapid adoption of generative AI in higher education has introduced a new and evolving risk: the intersection of AI-assisted writing and increasingly sophisticated detection systems.

While AI tools are widely used to support drafting and editing, universities are simultaneously expanding the use of AI-detection technologies to identify undeclared machine-generated content.

Recent reporting indicates thousands of confirmed AI-related misconduct cases across higher education, prompting institutions to strengthen enforcement and oversight (The Guardian, 2025).

In response, a new category of tools has emerged: AI “humanisers”, designed to rewrite AI-generated text to make it appear more natural and less detectable.

However, current evidence suggests that this approach is unreliable.

AI-detection systems themselves remain imperfect, with documented rates of false positives and false negatives. Importantly, “humanised” AI text is not consistently able to evade detection and may introduce unintended changes in meaning, tone, or emphasis.

This creates a complex and potentially high-risk environment for academics, because the consequences of being flagged extend well beyond any technical assessment of the text itself.

As a result, the issue is shifting from capability to accountability.

It is no longer sufficient that AI can produce fluent academic text.

What matters is whether authors retain sufficient intellectual ownership and oversight to defend that text if challenged.

Attempts to obscure AI involvement do not eliminate risk.

They increase it.


#AI #academicwriting #academicintegrity #highereducation #AIethics #editing #researchcommunication #languagematters #publishing
