Deepfaking emerges as “growing reality” in student interviews

Enroly, a leading platform that streamlines the onboarding and arrival process for universities, students, and agents, has reported growing instances of ‘deepfaking’ being used in credibility interviews.

The company has assured its partners that it remains ahead of the deepfake threat – manipulated content designed to replicate a person’s appearance or voice – and is continually adapting to counter any risk.

Out of approximately 20,000 interviews in the January 2025 intake, 1.3% of cases involved "outright deception", according to Phoebe O'Donnell, head of services at Enroly.

“Our student interviews have revealed instances of advanced technological manipulation, including lip-syncing, impersonation, and even the use of deepfake technology. Challenges that were once the realm of science fiction but are now a growing reality.”

Enroly broke down the cases of deception in the January 2025 intake, noting that 0.1% of cases involved "third party support", 0.6% involved "lip-syncing" and 0.15% were "deepfake attempts".

Speaking to The PIE News, a UK Home Office spokesperson highlighted the department’s measures to prevent such fraud in its own processes, and the penalties for those who are caught.

"We have stringent systems in place to identify and prevent fraudulent student visa applications," the spokesperson said.

“Any individual attempting to cheat or use deception will not succeed and may face a ban from applying for UK visas for 10 years. We will also continue to take tough action against any unscrupulous companies and agents who are seeking to abuse, exploit or defraud international students,” they continued.

In 2023, The PIE reported on the threat of deepfakes to English test security. While key stakeholders described the risk as "minuscule", citing the high technical barrier to entry, providers such as the Duolingo English Test were taking proactive measures to safeguard their tests, appointing a lead engineer specialising in deepfake technology and leveraging both AI and human proctors to detect signs of external interference.
