Credential as a social contract
The worth of a degree has never been only about learning. It is also about trust: a compact between institutions, employers, and the public that the credential predicts something meaningful about capability and behaviour.
Degree value beyond content knowledge
For much of the modern era, a university degree has acted as a proxy for several hard-to-observe attributes: persistence, baseline literacy, reasoning under pressure, and exposure to disciplinary standards. The curriculum mattered, but so did the institutional promise that someone had been assessed and found capable within recognised norms.
How does the signalling function work?
Most employers cannot directly test all applicants at scale. They therefore rely on signals, and the degree has been among the most portable. In regulated settings, the degree can be a gate into further professional formation. In global labour markets, it can also serve as a conversion tool, translating local talent into internationally legible terms.
Assumptions under pressure
AI challenges the degree’s signalling function in at least two ways. First, the correlation between “having studied” and “being able to produce outputs” can weaken if AI boosts output quality for novices. Second, if assessment is perceived as easily mediated by AI, confidence in the credential’s verification role can decline even when learning is genuine.
Who bears the risk of mistrust?
When the signal weakens, the costs fall unevenly. Students carry debt and opportunity cost. Employers incur hiring and performance risk. Institutions face scrutiny from regulators and the public. The question becomes less “Is the degree obsolete?” and more “What must the degree evidence to remain trusted?”
AI and the new baseline
A more useful starting point than speculation is observing where AI is already changing work. The examples below show that AI is not only automating tasks but also reshaping what counts as competent performance.
Work examples that make the shift tangible
Contract review in commercial law
Firms increasingly use drafting and review tools to surface clauses, propose alternatives, and flag risk. Junior staff can produce a passable first cut faster than before. The differentiator moves towards interpretation of risk appetite, client communication, and accountability for advice given under uncertainty.
Policy analysis in government and regulation
Teams can now generate consultation summaries, scenario sketches, and impact narratives quickly. The bottleneck becomes framing: defining the question, selecting evidence, stress-testing assumptions, and defending trade-offs in public. Poorly framed prompts can produce polished but misleading output.
Operations in healthcare and large services
AI-supported triage, scheduling, and documentation reduce administrative burden. Yet errors can carry safety implications. Competence shifts towards recognising edge cases, escalation judgement, and knowing when not to rely on automated assistance.
What assumptions no longer hold?
The older assumption that capability accumulates mainly through time and exposure is being questioned. When a tool can compress early-stage proficiency, time served becomes a weaker proxy. Another assumption is that output quality alone demonstrates understanding. AI can mimic competence, so the evidence must increasingly include process, reasoning, and responsible use.
What becomes visible to employers?
In AI-rich environments, employers notice different things: how people ask questions, how they validate, how they handle exceptions, and whether they can explain decisions. A degree that only certifies familiarity with content may be less legible than one that certifies reliable judgement under realistic constraints.
Scarcity shifts to judgement
If AI makes many forms of output easier, scarcity moves to what AI does not reliably guarantee: epistemic discipline, responsibility, and the ability to operate within human systems of trust and consequence.
Why output is a weaker proof
AI can generate essays, code, presentations, and analyses that look convincing. The challenge is that “looks convincing” is not the same as “is correct”, “is safe”, or “is appropriate”. This is not a moral argument about cheating; it is an economic and governance argument about evidence.
Emerging capability categories
- Judgement under uncertainty: deciding when evidence is sufficient, when to seek more, and how to act when data is incomplete.
- Verification habits: triangulating sources, checking calculations, testing outputs against constraints, and documenting reasoning.
- Accountability and professional ethics: understanding consequences, ownership of decisions, and the limits of delegation to tools.
- Problem framing: turning ambiguous goals into tractable questions, selecting methods, and defining success.
- Collaborative sensemaking: aligning stakeholders, negotiating trade-offs, and communicating decisions in ways others can trust.
Where universities still hold an advantage
Universities can create controlled environments where these capabilities are practised, evidenced, and challenged. That requires assessment designs that capture reasoning, not just results. It also requires an institutional stance on how AI is used: not banning it by default, and not treating it as a shortcut, but treating it as a context within which competence must be demonstrated.
Credential worth as “trusted performance”
If the degree evolves from “completion of a syllabus” towards “trusted performance across contexts”, its worth may increase in an AI era. The tension is that trusted performance is harder to measure, more expensive to assure, and easier to contest if standards are unclear.
Possible futures for degree value
Rather than a single trajectory, several plausible futures coexist. Each depends on institutional choices, employer behaviour, regulation, and public trust in assessment. The degree’s worth could erode, polarise, or be re-founded on clearer evidence.
Scenario logic, not prediction
AI does not determine outcomes on its own. The value of degrees will depend on how institutions redesign learning and assessment, how employers update hiring practices, and how regulators interpret quality and integrity in AI-mediated education.
Pathway: erosion through signal dilution
If assessment integrity is perceived as weak, degrees can lose signalling power even when learning occurs. Employers may shift towards proprietary tests, extended probation, or in-house academies. This can disadvantage those without networks or brand-name institutions, with knock-on effects on social mobility.
Pathway: bifurcation of credential markets
Some institutions may double down on highly selective entry, in-person experiences, and network effects, strengthening their positional value. Others may compete on cost and convenience, where AI enables scale. In this world, “degree” becomes a less uniform category, and the sector faces sharper stratification.
Pathway: renewal through demonstrable capability
Degrees can retain or grow worth if they become clearer evidence of applied competence, including AI collaboration. This may involve assessment that is closer to practice: simulations, scenario-based reasoning, oral defence, audited portfolios, and workplace-embedded evaluation. In LSI’s own experiments with AI-supported formative feedback and repeatable role-play simulations, a useful lesson has been that the most valuable data is not the final answer, but the pattern of decisions a learner makes under constraints.
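To make the idea of decision-pattern data concrete, here is a minimal sketch of what a decision trace could look like as a data structure. Everything in it is illustrative; the field names, event types, and metric are assumptions for this article, not a description of any production system:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DecisionEvent:
    """One choice a learner makes during a timed simulation."""
    timestamp: datetime
    action: str        # e.g. "requested_evidence", "revised_draft", "verified_output"
    rationale: str     # the learner's stated reason, captured at the time
    ai_assisted: bool  # whether an AI tool contributed to this step

@dataclass
class DecisionTrace:
    """The pattern of decisions behind a submission, not just the final answer."""
    learner_id: str
    scenario_id: str
    events: list[DecisionEvent] = field(default_factory=list)

    def verification_rate(self) -> float:
        """Share of steps where the learner checked sources or outputs."""
        if not self.events:
            return 0.0
        checks = sum(e.action == "verified_output" for e in self.events)
        return checks / len(self.events)
```

Even a structure this simple lets an assessor ask questions the final artefact cannot answer: how often a learner verified outputs before relying on them, and at which points AI assistance entered the work.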
What could change employer demand?
Employer appetite for degrees may rise if AI increases the cost of errors and reputational harm. Conversely, if AI tools become reliable enough to standardise many roles, degrees might matter less for entry and more for advancement, especially into roles that require governance, risk ownership, or public accountability.
Institutional implications and governance
The question of degree worth is, in practice, a question of institutional design and assurance. AI pressures existing governance: assessment validity, academic standards, student support, and the credibility of claims made to the market.
Assessment validity in AI-rich contexts
Traditional coursework can become less discriminating when AI can produce plausible responses. The issue is not detecting AI use alone. It is whether the assessment still measures the intended construct. If the learning outcome is “can draft a policy brief”, and AI drafts briefs, then the outcome must be re-specified, or the evidence must capture framing, critique, and iteration.
Quality assurance and regulatory interpretation
In the UK, Office for Students (OfS) expectations around quality, outcomes, and student protection intersect with AI in complex ways: transparency in assessment, clarity of academic standards, and the reliability of claims about employability. Internationally, recognition regimes and professional bodies will vary in how quickly they accept AI-mediated assessment and online delivery.
Data governance and institutional trust
AI-native learning systems increase the volume of learner data, from engagement traces to performance diagnostics. This can improve support, but it raises questions of consent, vendor dependence, model drift, and explainability. A degree’s credibility can be damaged by a single high-profile data incident or opaque algorithmic decision.
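One of those questions, model drift, lends itself to a simple illustration: periodically compare the distribution of scores a feedback model produces now with the distribution observed when it was validated, and escalate when they diverge. This is a minimal sketch under assumed file names and an arbitrary threshold, not a prescribed monitoring standard:

```python
import numpy as np
from scipy import stats

# Hypothetical score samples from an AI feedback model: one batch captured
# when the model was validated, one from the current teaching period.
baseline = np.load("scores_at_validation.npy")
current = np.load("scores_current_term.npy")

# Two-sample Kolmogorov-Smirnov test: has the score distribution shifted
# enough that the model's feedback warrants human review?
statistic, p_value = stats.ks_2samp(baseline, current)
if p_value < 0.01:  # threshold is an assumption; set it via governance policy
    print(f"Possible drift (KS statistic = {statistic:.3f}); escalate for review.")
```

The point is less the specific test than the governance pattern: drift is detected by routine comparison against a validated baseline, and detection triggers a human process rather than silent recalibration.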
Partnership choices with employers
Where workplace performance becomes a central proof-point, universities may deepen partnerships for authentic assessment and progression pathways. The governance question is how to avoid narrow training while still evidencing capability. Another question is how to ensure fairness when workplace contexts vary widely.
Faculty roles and academic identity
As AI takes on more of the drafting and feedback labour, academic work may shift towards designing high-integrity assessments, mentoring judgement, and curating disciplinary standards. That shift can be energising, but it also challenges workload models and promotion criteria built for older teaching patterns.
Evidence agenda and decision tests
The sector may benefit from clearer empirical signals about what, exactly, degrees are worth in an AI economy. The aim is not certainty, but better navigation through uncertainty, with shared measures that withstand scrutiny.
A genuinely useful empirical study
One high-leverage study would be a multi-institution, multi-employer longitudinal comparison of graduates assessed via different regimes: traditional essays and exams, simulation-based assessment, portfolio defence, and workplace-embedded evaluation. Tracked outcomes would include not only early salary but also error rates, time-to-autonomy, promotion into accountable roles, and employer trust metrics. The key is to control for prior attainment and socio-economic background, so that the study tests assessment validity rather than brand effects.
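As a sketch of the analysis step, assuming a pooled dataset with hypothetical column names, the control strategy could be expressed as a regression that separates the effect of assessment regime from institution brand and learner background:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled dataset; all column names are assumptions.
df = pd.read_csv("graduate_outcomes.csv")

# Regress one tracked outcome (months to autonomous practice) on assessment
# regime, controlling for prior attainment, socio-economic band, and
# institution, so the regime coefficients are not proxies for brand effects.
model = smf.ols(
    "time_to_autonomy ~ C(assessment_regime) + prior_attainment"
    " + C(ses_band) + C(institution)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["employer_id"]})

print(model.summary())
```

Clustering standard errors by employer reflects that graduates hired into the same organisation share conditions a naive model would wrongly treat as independent.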
How to interpret results without overreach
Even strong evidence will not settle the debate, because degree value is partly social and partly economic. Still, better evidence can inform programme design, external messaging, and regulator dialogue, especially when public narratives about AI and cheating threaten to crowd out nuance.
Decision tests for the next cycle
Before investing in new AI tools, new credentials, or new delivery models, some questions act as practical tests of readiness:
- What does the credential guarantee? If a transcript claim is challenged, what evidence would be produced, and would an external party accept it?
- Which assessments still discriminate? Where does performance measurement remain robust when learners use AI openly and responsibly?
- What is the institutional stance on AI use? Are expectations explicit enough to preserve trust while reflecting real workplace practice?
- How is judgement developed and evidenced? Where in the programme are accountability, verification, and decision quality assessed?
- What would make the degree more trusted? Would it be external benchmarking, professional body alignment, audited portfolios, or something else entirely?
- What is the plan for failure modes? If an AI system gives poor feedback, leaks data, or biases evaluation, what governance process responds, and how quickly?
Difficult questions that remain open
Perhaps the most productive reframing is not whether AI erodes degree worth, but whether institutions can keep the degree as a credible public instrument of trust in a world where knowledge production is cheap. The uncomfortable questions sit in the gaps between learning, proof, and legitimacy:
- What level of transparency about assessment methods is needed to sustain public confidence without inviting gaming?
- Where should the boundary sit between academic formation and job-specific performance, especially as employers build their own AI academies?
- What is lost if degrees fragment into stacks of micro-credentials, and what is gained in inclusion and responsiveness?
- How will international recognition evolve if AI-enabled delivery scales faster than mutual trust frameworks?
- What does “academic integrity” mean when responsible AI use becomes part of professional integrity?