
LSI Insights - Future of Higher Education

The student journey and AI: what should stay human?

Across higher education, AI could move us from simply digitising services to automating judgements across recruitment, admissions, learning support and progression. The strategic risk isn’t automation in itself. It’s that we start to define the student journey mainly by what’s easiest to measure, rather than what matters most. In an AI-rich environment, the real question becomes: which moments should stay human because they shape trust, identity and opportunity, and because someone must be able to stand behind the reasons for what happened?

19 min read | 11 Feb 2026 | Dr Paresh Kathrani, Director of Education

Executive summary

AI could increasingly be used to link what used to be separate parts of the student journey into end-to-end systems that can predict, recommend and intervene. However, that can change what institutions assume about judgement, fairness, evidence and accountability. Done well, it’s an opportunity to redesign the journey so support is timelier and outcomes improve. Done poorly, it can quietly erode trust or widen gaps.

The practical question isn’t “AI: yes or no?” It’s: which decisions are genuinely safe to automate, and which must still require human judgement, with clear records of why a decision was made and who is accountable for it?

In what follows, I look at the student journey as an integrated system shaped by AI, where risks and unintended consequences tend to appear, which roles and moments should remain explicitly human, and what governance and decision tests can help balance innovation with trust.

Student journey as an integrated system

Automation no longer sits neatly inside single services. AI can connect outreach, admissions, learning support and employability into one joined-up pathway. That can be powerful, but it also reshapes how we think about student agency, institutional purpose, and a very practical question: when the system nudges or decides, who owns the decision and the reasons for it?

Pipeline logic meets educational mission

Many institutions still manage the student journey through separate teams, each with their own targets and tools. AI can dissolve some of those boundaries. Once a model links enquiry data, application history, learning activity and career aspirations, the journey starts to behave like a feedback system: it learns from what happened before and adjusts what happens next. These systems don’t just speed up decisions; they shape what the institution notices, what it overlooks, and what starts to count as “success”. That can be beneficial, but it also concentrates power in the design of the loop: what the system counts, what it ignores, and what it rewards.

Examples that make the shift visible

You could see these dynamics in the following scenario: a prospective applicant chats with an AI assistant on a course page, receives tailored messages that steer them towards a particular programme, is prioritised in admissions because their profile resembles historic “success” patterns, and then receives automated study plans and early alerts about engagement risk.

None of those steps is unusual on its own. However, this shift unsettles a few long-held assumptions:

  • that data gathered to improve one service stays within that service;
  • that students experience support as discrete interactions, rather than as an ongoing, model-driven relationship; and
  • that fairness risks sit mainly at the point of admissions, rather than across the whole lifecycle.

So the question becomes: if outcomes are optimised in this way, who defines those outcomes, and how do we make sure the institution can explain, defend and, when needed, revisit the reasons behind the pathway a student experienced?

Why timing matters right now

The urgency is driven by a convergence of forces: AI capability is accelerating, expectations are changing, and international competition continues to shift. Even if an institution moves cautiously, the surrounding ecosystem (platforms, suppliers and student expectations) is moving quickly.

Capability is moving from assistance to autonomy

One reason this matters is that generative AI started as a productivity layer (“help me draft”, “help me summarise”). It is now becoming a decision layer, especially through agent-based tools, and is being embedded across enquiry management systems, learning platforms, assessment workflows and careers data. The move from “assist” to “decide and route” is exactly where ethics and trust get tested, because a recommendation starts to function like a decision once it is acted upon at scale.

Regulatory pressure is becoming operational

Alongside these capability shifts, expectations from regulators and quality bodies are becoming more concrete. Most systems now sit under some combination of data protection law, anti-discrimination duties, and formal expectations about academic quality and student outcomes. Internationally, the EU’s AI Act is also setting a direction of travel on risk classification, documentation and oversight, and many large suppliers are likely to standardise their products around those requirements. Once automated systems begin shaping access, the intensity of support, or assessment-related decisions, this stops being abstract compliance. A meaningful human review, with clear accountability and recorded reasons, becomes essential.

Expectations are being set outside the sector

Expectations are also being set outside education. Students and employers already experience AI-enabled personalisation in areas like banking, retail and healthcare. That raises expectations of efficiency and responsiveness, but it also raises sensitivity to manipulation and opaque profiling. A “digital first” journey can still feel human. However, an “automated by default” journey can start to feel transactional, especially if students can’t tell what is influencing them or why.


Early signals from AI-enabled journeys

Setting principles therefore becomes important. Before doing so, however, it is useful to consider what AI makes possible and where it introduces risk.

High-value automation

It’s important to say plainly: there are real, credible gains from AI in areas where volume and consistency matter. Round-the-clock triage can route queries and draft basic responses, reducing waiting times and freeing staff for more complex, human-sensitive cases. Formative feedback tools can give immediate guidance on structure, clarity and basic comprehension checks, helping students iterate more often, particularly where teaching teams are stretched.
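The split described above, routine queries handled automatically while human-sensitive cases go to staff, can be sketched in code. This is a minimal, hypothetical illustration, not a real LSI system: the keyword list and category names are assumptions chosen to make the routing logic concrete.

```python
# Hypothetical sketch of AI-assisted triage: routine queries get an automated
# draft response; anything sensitive or high-stakes is escalated to a person.
# The keyword list and routing labels are illustrative assumptions.

SENSITIVE_KEYWORDS = {"appeal", "complaint", "disability", "misconduct",
                      "withdraw", "extenuating", "safeguarding"}

def triage(query: str) -> dict:
    """Classify a student query and decide who should answer it."""
    words = set(query.lower().split())
    hits = words & SENSITIVE_KEYWORDS
    if hits:
        # High-stakes or identity-touching: route to a named human, with the
        # trigger recorded so the reason for escalation can be audited later.
        return {"route": "human", "reason": sorted(hits)}
    # Low-stakes and high-volume: safe to answer with an automated draft,
    # kept advisory so staff can still review before anything is sent.
    return {"route": "auto_draft", "reason": ["routine"]}

print(triage("Where do I find the exam timetable"))        # routed to auto_draft
print(triage("I want to appeal my progression decision"))  # escalated to human
```

Note the design choice: the escalation reason is recorded alongside the route, which keeps the "someone must be able to stand behind the reasons" principle operational rather than aspirational.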

Where the risks cluster

Risk, though, clusters where automation touches life chances or identity: admissions decisions, disability support pathways, suspected misconduct, progression and termination, or employability signalling. A model can be statistically accurate and still be unacceptable if it reinforces historic disadvantage, or if the institution cannot explain the decision pathway in terms a student can understand.

A subtle design trap

There’s also a subtle design trap: automation without meaningful human oversight can slide into path dependency. If a system recommends fewer “stretch” modules to someone predicted to struggle, an institution might improve short-term retention but narrow long-term capability. Likewise, nudges designed to increase conversion or attendance can become coercive if they aren’t transparent, contestable, and open to human review.

Human moments that carry symbolic weight

Beyond operational risk, there are moments where students are looking for empathy and recognition rather than information: a first serious academic setback, a shift in career direction, a complaint about fairness, or a decision about whether to pause their studies. Automation can support these moments but it may not be the right face of the institution. In those moments, the student often needs to feel that someone has properly understood their situation and can take responsibility for what happens next.

Human work that should not disappear

Naming these moments matters, because keeping some elements human is not nostalgia. It’s a deliberate strategic choice to preserve functions that depend on legitimacy, judgement and relationship, especially when AI systems operate at scale. Put simply: some work is about more than processing information. It’s about being human, keeping trustworthy records of reasons, and repairing relationships when things go wrong.

What kind of “human” is needed?

It’s true that not every interaction needs a live conversation. However, the human role is to provide accountable judgement, offer empathy when it’s needed, interpret context, and take responsibility when rules collide. If AI does its job well, we should see fewer routine interactions and higher-quality human engagement where it matters. In practice, that means explicit human stewardship in key areas:

  • where a decision changes a student’s options (including exceptions, progression decisions, academic integrity outcomes, or decisions with financial consequences);
  • where meaning needs interpreting (turning feedback into a development plan, making sense of conflicting signals, navigating changes in identity such as a career pivot);
  • where relationships need repair (complaints, perceived injustice, moments when trust has been damaged); and
  • where cases don’t fit neat rules (safeguarding concerns, health crises, or complex disability adjustments).
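The four stewardship areas above can be made enforceable rather than aspirational. The sketch below, a hypothetical gate with illustrative field names, shows one way a system could force any model output touching those areas to stay advisory until a human reviews it.

```python
# Illustrative sketch (not a real policy engine): the four stewardship areas
# expressed as a gate that forces human review before a decision is acted on.
# Field names are assumptions made for the example.

from dataclasses import dataclass

@dataclass
class Decision:
    changes_student_options: bool = False  # progression, integrity, finance
    needs_interpretation: bool = False     # conflicting signals, career pivot
    repairs_relationship: bool = False     # complaints, damaged trust
    outside_neat_rules: bool = False       # safeguarding, health, adjustments

def requires_human(decision: Decision) -> bool:
    """A model output stays advisory if any stewardship area is touched."""
    return any([decision.changes_student_options,
                decision.needs_interpretation,
                decision.repairs_relationship,
                decision.outside_neat_rules])

# A routine study-plan nudge can be automated; a progression decision cannot.
print(requires_human(Decision()))                              # False
print(requires_human(Decision(changes_student_options=True)))  # True
```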

Augmentation rather than replacement

Indeed, this points much more towards augmentation than replacement. In AI-enabled learning models, staff can use AI to surface patterns they might otherwise miss, and then decide how to respond. In my experience of platform design discussions, the strongest approaches are often the least flashy: use AI tools for low-stakes practice and rapid feedback, and reserve live time for what only humans can do well: judgement, coaching, challenge and accountability.

A reframing worth testing

This reframing helps: the question becomes less about “what can we automate?” and more about “what must remain human in order to preserve trust and student agency, and to ensure decisions remain explainable, reviewable and owned by someone accountable?”


Governance choices hidden in design

Automating the journey can also turn product design and procurement decisions into strategic choices that require explicit human judgement. Institutions need human oversight that scrutinises product choices, data flows, and lines of accountability.

Model objectives as institutional policy

Every automated intervention encodes a goal, whether we say it out loud or not: reduce dropout risk, increase enrolment, improve satisfaction scores, raise average grades. In data-led products, these goals are never neutral because they shape what the system rewards. If a system is tuned heavily for retention, for instance, what happens to academic standards, curiosity, or intellectual risk-taking? The strategic task is to balance efficiency with meaningful human oversight, including clarity about what counts as a “decision”, what gets recorded, and whose reasons stand behind it.

Key questions for oversight frameworks

To make sound design and procurement choices, institutions need a set of practical oversight questions. For instance:

  • which outputs, if any, can be fully automated and which must remain advisory;
  • where accountability sits when a model recommendation contributes to a contested outcome;
  • how bias and unequal impact are tested over time (and how the data for those tests is gathered);
  • what a student’s right to explanation, appeal and human review looks like in day-to-day practice; and
  • how data minimisation is applied when end-to-end journeys make “more data” feel tempting.
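One way to operationalise the first two questions above is a decision register: every model output is mapped to an automation level and a named accountable owner, so "what counts as a decision" and "whose reasons stand behind it" are written down rather than implicit. The sketch below is hypothetical; the output names, levels and owners are illustrative assumptions, not an LSI register.

```python
# Hypothetical decision register. Each model output is mapped to an automation
# level and an accountable owner; anything not registered defaults to advisory,
# so automation can never expand silently. All entries are illustrative.

REGISTER = {
    "enquiry_routing":      {"level": "automated", "owner": "Head of Admissions"},
    "study_plan_nudge":     {"level": "automated", "owner": "Director of Education"},
    "admissions_ranking":   {"level": "advisory",  "owner": "Admissions Committee"},
    "progression_decision": {"level": "advisory",  "owner": "Exam Board Chair"},
}

def may_act_without_review(output_name: str) -> bool:
    """Unregistered outputs default to advisory: no silent automation."""
    entry = REGISTER.get(output_name, {"level": "advisory"})
    return entry["level"] == "automated"

print(may_act_without_review("enquiry_routing"))        # True
print(may_act_without_review("admissions_ranking"))     # False
print(may_act_without_review("brand_new_model_output")) # False: not registered
```

The default matters more than the entries: a register whose fallback is "advisory" encodes the principle that expanding automation is an explicit governance act, not a side effect of deployment.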

Procurement is now capability building

This governance lens must shape procurement. Selecting a supplier is no longer just buying software; it is also buying a set of assumptions about decisions, evidence and accountability. Institutions need to balance operational benefit with risk management and ethical responsibility, and be clear about what they will not delegate, even to a well-performing model.

Decision tests for the next phase of implementation

The next phase won’t be about choosing between human and machine. It will be about designing a holistic student journey where automation genuinely expands capability while preserving ethics, trust and legitimacy, including the very down-to-earth discipline of being able to show what happened, why it happened, and who is answerable for it.

Useful decision tests

Some practical decision tests can keep implementation grounded. For example:

  • would a student accept the pathway as fair if they could see it end to end;
  • is there a real and accessible route to challenge, appeal or request human review;
  • does the intervention build agency over time or create dependency on prompts and predictions;
  • are outcome gaps narrowing across groups and modes of study or being masked by headline improvements; and
  • what happens when the model is wrong, data drifts, or a supplier changes terms.

But there’s also a deeper set of questions that senior leaders need to be willing to hold. They go to the nature of the educational relationship we want to offer in an AI-rich environment:

  • which parts of the journey are more than a service transaction;
  • what would count as unacceptable optimisation even if it improves retention or revenue;
  • where human judgement should be mandatory and where it can be optional but auditable;
  • how students will be informed about automated influence and what meaningful consent looks like in practice; and
  • what evidence would justify expanding automation into higher-stakes decisions or rolling it back.

Conclusion

Automating the student journey is ultimately a design choice about how human judgement and automation work together. If we get the balance right, AI takes the strain in high-volume, low-stakes areas and helps us spot patterns early, while people hold the moments that require interpretation, empathy, and accountable decision-making. If we get it wrong, we don’t just automate processes; we thin out the human relationship that makes education work.

For me, the line to hold is simple: if we cannot explain a pathway in ordinary language, record the reasons for key decisions, and show who is accountable, then we are not ready to scale it, no matter how impressive the dashboard looks.

London School of Innovation

LSI is a UK higher education institution, offering master's degrees, executive and professional courses in AI, business, technology, and entrepreneurship.

Our focus is forging AI-native leaders.

London School of Innovation Ltd | © 2026 All rights reserved.
Contact LSI: +44 (0)203 576 1189 | hello@lsi-ac.uk | 6 Sutton Park Road, London, SM1 2GD, UK