Does AI mean more uni students are plagiarising their work?
A new study shows most university students who copy from AI are also plagiarising in other ways.
Using other people’s ideas, words and creations without acknowledgement is a widespread problem. Plagiarism occurs everywhere from restaurant menus to political speeches and music.
Within academia, plagiarism is seen as a serious breach of integrity for scholars and students.
It’s easy to find media articles claiming plagiarism is increasing among university students. These claims have intensified with the rise of generative AI – which can quickly produce large amounts of text that students can copy and paste into their assignments.
But while AI certainly poses a range of challenges for academic integrity, is plagiarism increasing as much as we think it is?
My team’s new research, which has tracked students at one university over 20 years, suggests it may even be falling.
What are we comparing?
Precise rates of plagiarism can be difficult to determine. Pre-AI, many claims about increasing plagiarism among students came from cherry-picking results of different surveys from different student groups. So they were not comparing apples with apples.
Since AI, we have a lot of anecdotal reporting of cheating. But we do not have a lot of robust evidence of whether cheating has increased over time.
In a new journal article, my colleagues and I have used a rare longitudinal study of plagiarism to overcome this problem.
My research
Every five years since 2004, our study carried out the same survey on plagiarism with students at Western Sydney University (WSU). This means we have been able to track the same phenomena in the same environment over time.
In our survey, students are presented with scenarios representing different forms of plagiarism. For example, a student copying text from a book without citing the book. Students were asked whether the behaviour is plagiarism, to test their understanding of it, and how often, if ever, they have done a similar thing. In 2024, we also asked students if they used text generated by AI in their university work, without acknowledging it.
We conducted an anonymous survey of mostly undergraduate students, studying in a range of disciplines. The survey started in 2004 on paper and has been fully online since 2014.
The survey was done in the second half of the academic year to ensure students had the opportunity to both learn about and engage in plagiarism.
In 2024, as well as WSU, we included students from five other Australian universities for additional comparison. This gave us a sample of more than 2,100 students in total for the latest round.
Plagiarism isn’t increasing
Over 20 years, the survey has found the percentage of students who engage in any form of plagiarism at least once has fallen every five years, from more than 80% in 2004 to 57% in 2024.
This decline corresponds with various measures, such as the use of text-matching software, which can help detect plagiarism. There has also been more training in referencing and citation rules – this reduces unintentional plagiarism.
AI is not turning all students into plagiarists
Although 14% of students in 2024 indicated they had copied from AI without acknowledgement, most of them also engaged in at least one other form of plagiarism. For example, copying from another student’s assignment.
Copying from AI was the sole form of plagiarism for only 2% of students.
Most students don’t plagiarise accidentally
Combining students’ answers on whether they understand plagiarism with whether they engaged in it showed most did so knowingly. For example, when it came to verbatim copying from AI, 88% of WSU students who engaged in this knew it was plagiarism.
Interestingly, most plagiarism was accidental 20 years ago when education about academic integrity was less thorough. However, the recent results show students have a better understanding of plagiarism and still do it anyway.
AI detectors don’t stop copying
In the survey, two universities used AI detectors (which aim to assess, with mixed results, whether a piece of written work has used AI) and four did not.
Rates of plagiarism from AI were similar between the universities with and without detectors.
What does this mean?
Our survey largely looked at only one Australian university. But despite this limitation, we can interpret the results in optimistic and pessimistic ways.
Optimistically, plagiarism has fallen over 20 years. This suggests measures to detect plagiarism and teach students about proper referencing can help.
On top of this, AI has not turned all students into plagiarists – at least not yet. What our study suggests is students who have plagiarised in some other way may now plagiarise from AI as well.
Pessimistically, over half of all students still plagiarise at some time in their university studies. And, because these surveys rely on self-reports, it is likely these figures represent the minimum number of students who plagiarise. Even when surveys, like ours, are anonymous and online, students may still be hesitant to admit to breaking rules.
This means educating students and policing academic conduct remains an ongoing battle.
The Conversation
https://theconversation.com/does-ai-mean-more-uni-students-are-plagiarising-their-work-279565