From scarcity to continuous feedback
AI makes feedback abundant and immediate, which changes the economics and psychology of learning. The institution no longer controls when feedback happens, or even who provides it. That redistribution has consequences for curriculum design, assurance, and student expectations.
Early signals from real learning moments
Consider a postgraduate learner drafting a risk memo for a live workplace problem. In the past, formative feedback might arrive days later, often after the learner had mentally moved on. Now a generative AI tool can suggest alternative structures, flag missing assumptions, and prompt the learner to justify a claim in seconds. A similar shift is occurring in quantitative subjects where AI can diagnose misconceptions step-by-step, not just mark answers as right or wrong.
The practical result is a new baseline. When learners experience high-frequency feedback in one module, low-frequency feedback elsewhere can feel less like an academic choice and more like institutional neglect.
Why this matters now
Higher education is simultaneously facing pressure on staff time, rising expectations for personalised support, and increased scrutiny of outcomes and value. AI does not resolve these tensions, but it changes the option set. Once feedback becomes cheap to generate, the hard questions shift towards trust, educational intent, and whether the feedback loop improves judgement rather than merely polishing outputs.
Assumptions that no longer hold
- Feedback must be delayed to be thoughtful.
- Personalisation requires proportional staff effort.
- Formative assessment is separate from evidence used for progression decisions.
- Students only practise when tasks are graded or scheduled.
These assumptions were never universally true, but they shaped policy and workload models. AI makes their limitations more visible.
Evidence of learning in an AI world
The core question is shifting from “did the student produce this?” to “what can the student reliably do?”. That is a subtle but profound change, because formative assessment becomes part of an institutional story about capability, not just support.
What is being assessed when AI is present?
In many disciplines, the desired outcome is not a text artefact. It is the ability to diagnose, reason, decide, and communicate under constraints. If AI can improve the artefact, formative assessment needs to make the underlying capability visible. That can be done, but it requires intentional design.
Concrete examples that make the shift tangible
Example: Case-based reasoning in business and policy. A learner uses an AI assistant to draft an analysis of a supply chain disruption. Formative assessment can focus on the learner’s ability to articulate assumptions, evaluate trade-offs, and revise their position when counter-evidence is introduced, rather than on whether the prose is elegant.
Example: Coding and data work. AI can generate working code quickly. Formative assessment can concentrate on whether the learner can explain why a model fails, identify data leakage, interpret results for non-technical stakeholders, and select tests that expose edge cases.
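To make the coding and data example concrete, the minimal sketch below (in Python, with hypothetical names such as overlapping_ids and toy identifiers) shows the kind of leakage check and edge-case test a learner might write and then defend in a formative conversation. The evidence of capability is the explanation of why the check matters, not the code itself.

```python
# Illustrative sketch only: the sort of check a learner might be asked to
# write and explain during formative review of a data project.
# Names and identifiers here are hypothetical.

def overlapping_ids(train_ids, test_ids):
    """Return identifiers that appear in both splits.

    A non-empty result is one simple signal of data leakage: the model
    would be evaluated on records it has already seen during training.
    """
    return set(train_ids) & set(test_ids)


def test_overlap_is_detected():
    # Edge case: a duplicated record ends up on both sides of the split.
    train_ids = [101, 102, 103, 104]
    test_ids = [104, 205, 206]
    assert overlapping_ids(train_ids, test_ids) == {104}


if __name__ == "__main__":
    test_overlap_is_detected()
    print("Leakage check behaves as expected on the toy example.")
```

Even a fragment this small gives a tutor something observable to probe: why identifier overlap signals leakage, and which edge cases the test does not yet cover.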
Example: Clinical or professional judgement. In simulation-based settings, AI-enabled role-play can probe decision-making in ambiguous scenarios and track how a learner updates decisions as new information emerges.
What counts as “good” evidence?
As AI enters the learning process, formative evidence may increasingly come from process signals: revision histories, rationale logs, oral defences, simulated decisions, and reflective commentary that is anchored to observable work. This does not remove the need for written work, but it reduces the burden placed on the written artefact to prove everything.
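As an illustration only, the sketch below shows one possible shape for a rationale-log entry. The field names are assumptions rather than any standard, but they indicate how reflective commentary can be anchored to a specific piece of observable work.

```python
# Illustrative sketch only: one possible shape for a "rationale log" entry,
# a process signal that ties a learner's reasoning to observable work.
# Field names are assumptions, not a standard.

from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class RationaleEntry:
    task_id: str        # the piece of work the entry refers to
    decision: str       # what the learner chose to do
    reasoning: str      # why, in the learner's own words
    ai_assistance: str  # how (if at all) an AI tool contributed
    confidence: float   # learner's self-estimate, 0.0 to 1.0
    timestamp: datetime = field(default_factory=datetime.now)


entry = RationaleEntry(
    task_id="risk-memo-draft-2",
    decision="Rejected the AI-suggested structure for the mitigation section",
    reasoning="It buried the highest-impact risk below routine items",
    ai_assistance="Used the assistant to list candidate structures only",
    confidence=0.7,
)
```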
Counter-arguments worth taking seriously
Some argue that moving away from artefacts risks lowering standards or making assessment subjective. Others worry that continuous data capture creates surveillance and chills intellectual risk-taking. These concerns do not invalidate the shift, but they suggest the need for careful boundary-setting around what is collected, why it is collected, and how it is interpreted.
The new formative assessment stack
AI is not a single capability. It is a bundle of functions that can be arranged in different ways. Decisions about which functions to institutionalise, and which to leave to student choice, are now becoming part of academic governance.
Common AI functions in formative assessment
- Feedback generation: suggestions on clarity, structure, argumentation, and completeness.
- Diagnostic tutoring: identifying misconceptions and proposing targeted practice.
- Adaptive pathways: sequencing content based on demonstrated mastery rather than time served (a minimal sketch follows this list).
- Simulation and role-play: practising complex interpersonal and decision contexts with repeatability.
- Metacognitive prompts: asking learners to justify choices and estimate confidence.
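As a minimal sketch of the adaptive-pathways idea, the Python fragment below gates progression on demonstrated mastery rather than elapsed time. The threshold, skill names, and activities are hypothetical and exist only to make the sequencing logic visible.

```python
# Illustrative sketch only: a minimal mastery gate of the kind an adaptive
# pathway might use. The threshold and pathway contents are hypothetical.

MASTERY_THRESHOLD = 0.8  # assumed cut-off for "demonstrated mastery"


def next_activity(skill_scores, pathway):
    """Return the first activity whose prerequisite skill is not yet mastered.

    skill_scores: mapping of skill -> most recent formative score (0.0-1.0)
    pathway: ordered list of (skill, activity) pairs
    """
    for skill, activity in pathway:
        if skill_scores.get(skill, 0.0) < MASTERY_THRESHOLD:
            return activity
    return "extension work"  # everything mastered: move beyond the core path


pathway = [
    ("framing assumptions", "assumption-mapping exercise"),
    ("evaluating trade-offs", "counter-evidence drill"),
    ("communicating decisions", "stakeholder briefing simulation"),
]
scores = {"framing assumptions": 0.9, "evaluating trade-offs": 0.6}
print(next_activity(scores, pathway))  # -> "counter-evidence drill"
```

The educational choice hidden in even this toy logic is where the threshold sits, who sets it, and what happens to learners who stall below it.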
Institutional choices hidden inside “AI feedback”
It is tempting to treat AI feedback as a generic service. In reality, the educational impact depends on design decisions: whether feedback is aligned to programme learning outcomes, whether it trains learners to seek disconfirming evidence, and whether it reinforces disciplinary epistemology rather than generic writing norms.
There is also a procurement and assurance dimension. Consumer tools change quickly, may train on user inputs, and may not meet institutional expectations for data protection. Private, course-aligned AI systems can mitigate some risks, but they create responsibilities around model behaviour, auditability, and accessibility. At the London School of Innovation, the working assumption has been that AI tutoring is most valuable when it is private, aligned to curriculum intent, and complemented by human dialogue, because the point is not automated correctness but strengthened judgement.
Where governance often lags
Policies frequently focus on academic misconduct and permitted tool use. That matters, but it is only part of the picture. The more strategic governance question is whether the institution can explain, in plain language, how AI-supported formative activity leads to credible progression decisions and graduate capabilities.
Quality, equity, and trust
Formative assessment shapes who feels they belong and who progresses. AI can reduce barriers, but it can also amplify inequities through uneven access, variable feedback quality, and opaque decision-making. Trust becomes a design variable, not an assumption.
Equity effects that could cut both ways
On the positive side, always-available feedback can support students who are balancing work, caring responsibilities, and study, and it can help those who hesitate to ask questions in class. Designed responsibly, it can also provide language support for international students.
On the risk side, unequal access to high-quality AI tools can widen attainment gaps. Even when access is equal, the ability to use AI well may correlate with prior educational advantage. If formative assessment quietly becomes “AI literacy rewarded”, then existing inequalities can be reproduced under a new name.
Trust and transparency in feedback loops
AI feedback can be persuasive even when wrong. This introduces a subtle hazard: learners may outsource epistemic responsibility. Quality assurance needs to consider not only error rates, but also the behavioural effect of feedback on learner confidence and risk-taking.
Policy-aware constraints in UK and international contexts
Data protection and privacy expectations, shaped by GDPR and regulator guidance, matter more when formative assessment becomes continuous and data-rich. Accessibility obligations also tighten when AI becomes a core learning pathway rather than an optional add-on. Internationally, emerging AI governance frameworks, including the EU AI Act and national regulator positions, are likely to influence how institutions evidence due diligence, especially where automated profiling could be inferred.
Interesting empirical work that could clarify the debate
A sector-wide, multi-institution study could test not only whether AI feedback improves grades, but whether it improves transfer: performance on novel tasks weeks later, under time pressure, without AI assistance. A complementary angle would examine equity: do learning gains differ by prior attainment, language background, or disability status when AI feedback is introduced? This kind of evidence would move the conversation from anecdote towards decision-grade clarity.
Institutional redesign choices
The most important decisions are not about which model to adopt, but about what to protect and what to evolve. Formative assessment sits at the intersection of pedagogy, workload, data governance, and the credibility of awards.
Where the system-level implications land
As AI shifts formative assessment, several institutional design questions come into focus. None has a single correct answer, but each requires an explicit stance.
Academic standards and award credibility
If formative activity becomes deeply AI-mediated, then assessment strategies may need to lean more on demonstrations of capability: oral examinations, simulations, supervised problem-solving, and evidence portfolios that show reasoning. This is not a return to older models, but a reframing of what valid evidence looks like when drafting support is ubiquitous.
Workload and the human role
AI can reduce repetitive feedback labour, but it may also raise expectations for responsiveness and pastoral support. The human role could shift towards coaching, calibration of judgement, and designing assessments that elicit thinking. That shift has implications for staff development, role definitions, and recognition systems.
Data governance and vendor dependency
Continuous formative assessment can create a rich learner data exhaust. The governance question is whether the institution can articulate the boundaries: what is collected, retention periods, who can access it, and how it will not be used. Vendor dependency also becomes strategic when the formative layer is integral to programme outcomes.
A decision test for the next 12 months
Rather than asking whether AI should be used in formative assessment, a more revealing test is whether the institution can describe the learning journey without referring to tools: what capabilities are being built, what evidence shows progress, and what failure looks like early enough to intervene. If those answers are unclear, AI will amplify the confusion rather than resolve it.
Difficult questions to sit with
- What forms of evidence will remain credible when most learners can access high-quality drafting, coding, and analysis support?
- Where should the boundary sit between private practice and institution-visible formative data, and who decides?
- How will equity be monitored when “feedback at scale” is introduced, and what would count as unacceptable differential impact?
- What institutional capability is needed to audit AI feedback quality and bias, without turning education into a compliance exercise?
- Which elements of judgement, integrity, and professional identity should be developed with AI present, rather than in artificial AI-free zones?