Reports of widespread cheating are not surprising. When the outcome of an assessment carries real academic, professional or financial consequences, there will always be an incentive to find shortcuts. Institutions are now grappling with how easily plausible answers can be generated, even under time pressure.
While it may be tempting to default to blanket surveillance or a return to paper-based exams, both are over-corrections that introduce their own problems at scale – from accessibility barriers and privacy concerns to operational complexity.
Assessment integrity is best approached as a practical delivery challenge rather than a purely theoretical concern. It requires shared responsibility across academic teams, compliance, operations and technology, and must be shaped by how exams are conducted, the controls applied, and whether those conditions are consistent, robust and defensible.
Digital assessment is no longer the sole concern of IT teams – it now cuts across academic delivery, compliance, operations and student experience. A recent study of UK institutions found that 78% use remote or online exams, confirming that digital assessment delivery has become a mainstream operational requirement.
At the same time, wider technological change is accelerating: reports suggest that 88% of UK undergraduates use generative AI in assessments. This reinforces the need for robust digital platforms as universities expand hybrid learning and modernise their technology infrastructure.
Classifying assessment risk in an AI-enabled environment
Not all assessments carry the same level of risk, but many institutions still apply the same controls across all of them. A timed, high-stakes exam that leads to a professional qualification or regulatory outcome carries a very different risk profile to a piece of formative coursework or an open-book assignment. The incentives to misuse the system are higher, the consequences are greater, and the tolerance for ambiguity is lower.
Institutions need to define, in practical terms, which assessments require controlled conditions and which do not. For high-stakes exams, this typically means identity verification, restricted environments, enforced timing and clear audit trails. For lower-risk assessments, it may be more appropriate to rely on assessment design, academic judgement and post-hoc review.
However, risk is not determined by assessment type alone. It is also shaped by the environment in which the exam is delivered.
Running a tightly controlled, proctored exam in a region with stable infrastructure and high levels of digital access is a very different proposition to delivering the same exam in environments where connectivity is inconsistent or power interruptions are common. Similarly, candidates taking exams on managed corporate networks may encounter restrictive firewalls that block or degrade proctoring software, while large cohorts sitting an exam simultaneously create very different technical demands compared to candidates accessing assessments on demand.
These factors do not remove the need for control, but they do affect how those controls can be applied in practice. The goal of this kind of risk stratification is not to eliminate flexibility, but to apply controls deliberately and proportionately. Applying this approach requires making trade-offs explicit: what level of assurance is needed, what level of friction is acceptable, and what level of risk the institution is willing to carry.
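To make this concrete, the stratification logic can be sketched in code. The tiers, thresholds and field names below are illustrative assumptions, not any institution's actual policy; a real scheme would be set jointly by academic, compliance and operations teams.

```python
from dataclasses import dataclass
from enum import Enum


class ControlTier(Enum):
    """Hypothetical control tiers, lightest to strictest."""
    DESIGN_ONLY = 1   # rely on assessment design and academic judgement
    POST_HOC = 2      # review submissions after the fact
    CONTROLLED = 3    # identity checks, restricted environment, audit trail


@dataclass
class Assessment:
    high_stakes: bool   # leads to a professional / regulatory outcome?
    open_book: bool


@dataclass
class DeliveryEnvironment:
    stable_connectivity: bool  # are power or network interruptions common?
    managed_network: bool      # corporate firewalls can degrade proctoring
    concurrent_cohort: int     # candidates sitting simultaneously


def classify(a: Assessment, env: DeliveryEnvironment) -> tuple[ControlTier, list[str]]:
    """Map an assessment and its delivery context to a control tier.

    The environment never removes the need for control on high-stakes
    exams; it changes how controls must be applied, surfaced here as
    delivery notes.
    """
    if not a.high_stakes:
        tier = ControlTier.DESIGN_ONLY if a.open_book else ControlTier.POST_HOC
        return tier, []

    notes = []
    if not env.stable_connectivity:
        notes.append("plan for degraded connectivity (offline capture, retries)")
    if env.managed_network:
        notes.append("verify proctoring traffic against firewall policy in advance")
    if env.concurrent_cohort > 1000:
        notes.append("load-test the platform for peak concurrency")
    return ControlTier.CONTROLLED, notes


tier, notes = classify(
    Assessment(high_stakes=True, open_book=False),
    DeliveryEnvironment(stable_connectivity=False, managed_network=True,
                        concurrent_cohort=2500),
)
print(tier)
print(notes)
```

The point of the sketch is the shape of the decision, not the specific rules: the assessment type sets the tier, while the environment determines how that tier is delivered in practice.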
What proportionate security looks like in higher education
Proportionate security is not about maximising control; it is about applying enough control to achieve a defensible outcome.
Overly restrictive systems introduce their own problems. They can be invasive, create accessibility barriers and increase the likelihood of technical failure. At the other extreme, insufficient safeguards leave institutions exposed to appeals, complaints and regulatory scrutiny. The challenge is finding a balance that holds up in practice, not just in policy.
In reality, this means moving away from single-solution thinking. No individual tool is sufficient on its own, but combining multiple layers indiscriminately is equally ineffective. The question is not how many controls are applied, but whether they meaningfully improve confidence in the outcome.
For higher-risk assessments, this typically involves establishing a consistent set of conditions: confirming who the candidate is, limiting access to external resources, enforcing time constraints and capturing enough evidence to review the session if required. How this is achieved will vary, but the objective remains the same – consistency and defensibility, rather than maximum restriction.
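One way to make those conditions consistent and reviewable is to declare them explicitly rather than configure them ad hoc. A minimal sketch, assuming hypothetical field names rather than any vendor's schema:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ExamConditions:
    """Illustrative record of the conditions a high-risk exam runs under.

    The fields are assumptions for illustration; the principle is that
    conditions are declared once, applied uniformly across the cohort,
    and retained alongside results for later review.
    """
    identity_verification: str          # e.g. "photo_id", "sso", "none"
    allowed_resources: tuple[str, ...]  # explicitly permitted materials
    time_limit_minutes: int
    evidence_captured: tuple[str, ...]  # e.g. ("event_log", "screen_recording")
    policy_version: str                 # ties the sitting to a documented policy


def is_defensible(c: ExamConditions) -> bool:
    """Minimal check: can the institution evidence who, what and when?"""
    return (
        c.identity_verification != "none"
        and c.time_limit_minutes > 0
        and len(c.evidence_captured) > 0
        and bool(c.policy_version)
    )
```

Declaring conditions as a versioned record, rather than scattered settings, is what makes "the same exam under the same conditions" something an institution can actually demonstrate.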
It also means recognising when additional controls add little value. Surveillance-heavy approaches can generate large volumes of data without producing clear evidence, while increasing friction for candidates. In many cases, assessment design – such as open-book formats, applied questions or tighter time constraints – can reduce misuse more effectively than additional monitoring.
Proportionate security, therefore, is not about applying more technology, but about making deliberate choices. Controls should reinforce fairness and integrity, not replace thoughtful assessment design.
Moving beyond the learning management system
The growing complexity of digital assessment is prompting institutions to reconsider the role of the learning management system (LMS). These platforms remain central to teaching and learning, and are unlikely to be replaced. However, they were not designed to handle the demands of high-stakes, synchronous exams or the evolving security requirements that come with them.
As online assessment becomes more widespread and more complex, many institutions are finding that extending the LMS to cover these use cases introduces limitations. Performance under high concurrency, support for controlled exam environments and the need for consistent, auditable conditions all place demands on systems that were originally built for content delivery and coursework management.
There is now a growing shift toward dedicated assessment infrastructure. Instead of relying on a single platform to do everything, institutions are adopting specialised exam systems that address the demands of secure digital assessment, while integrating with their existing LMS and wider technology stack. This allows each system to do what it is designed for, without compromising on performance or flexibility.
This approach reflects a broader trend beyond education. Organisations are increasingly moving away from monolithic platforms in favour of interoperable, best-in-class tools. Recent findings from the 2025 Gartner CIO and Technology Executive Survey suggest that over a third of higher-education CIOs plan to reduce investment in legacy infrastructure, redirecting resources toward more flexible, integrated systems.
In this context, digital assessment is no longer just a feature of the learning environment. It is a distinct operational capability, with its own infrastructure requirements, performance considerations and risk profile.
Audit trails and institutional oversight
In an AI-enabled environment, the ability to demonstrate oversight is becoming as important as preventing misuse.
It is no longer enough to rely on controls alone. Institutions need to be able to evidence how an assessment was delivered: who took it, under what conditions and what occurred during the session. This requires systems that produce clear, structured records, not just raw data, but information that can be reviewed, understood and acted upon.
This distinction matters. Large volumes of logs, recordings and flags are only useful if they support confident decision-making. In the context of appeals or regulatory scrutiny, institutions need evidence that is coherent and defensible, not ambiguous or open to interpretation.
Well-designed audit trails provide this. They allow institutions to investigate irregularities, respond to challenges and demonstrate that assessments were conducted under consistent conditions.
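What such a record might look like, as a minimal sketch (the event types and fields are illustrative assumptions, not a real platform's schema):

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class AuditEvent:
    """One reviewable fact about a sitting, rather than a raw log line.

    The key property is that each event is timestamped, attributable and
    typed, so a reviewer can reconstruct the session without having to
    interpret unstructured data.
    """
    session_id: str
    candidate_id: str
    event_type: str   # e.g. "identity_verified", "exam_started", "flag_raised"
    detail: str
    timestamp: str    # ISO 8601, UTC

    @classmethod
    def now(cls, session_id: str, candidate_id: str,
            event_type: str, detail: str) -> "AuditEvent":
        return cls(session_id, candidate_id, event_type, detail,
                   datetime.now(timezone.utc).isoformat())


# A coherent trail reads as a narrative of the sitting, not a data dump:
trail = [
    AuditEvent.now("S-1042", "C-7781", "identity_verified", "photo ID matched"),
    AuditEvent.now("S-1042", "C-7781", "exam_started", "conditions policy v2.3 applied"),
    AuditEvent.now("S-1042", "C-7781", "exam_submitted", "within time limit"),
]
print(json.dumps([asdict(e) for e in trail], indent=2))
```

The contrast with hours of unreviewed recordings is the point: a short, typed sequence of events can answer an appeal directly, where raw footage only raises further questions.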
More broadly, this reflects a shift in responsibility. Assessment integrity is no longer confined to academic design; it is an institutional concern that spans technology, operations and compliance. Systems, processes and policies must align to ensure that decisions are deliberate, documented and capable of withstanding scrutiny.
Students are already using AI. The question is whether assessment governance, infrastructure and operational practices are evolving quickly enough to maintain integrity and relevance.