LSI Insights - Future of Work

The new entry-level problem: if routine work disappears, how do careers begin?

Entry-level jobs have long acted as the labour market’s training ground: repetitive tasks, close supervision, and low-stakes exposure to real workflows. As automation absorbs more routine work, that scaffolding weakens. The risk is not only fewer junior roles, but fewer safe ways to learn what work feels like, and fewer credible signals that someone is ready.

19 min read · August 11, 2025

Executive summary

If routine tasks are automated first, “entry-level” stops being a job title and becomes a design question: where do novices get practice, feedback, and trusted signals of competence? The shift matters now because firms want productivity gains, workers want progression, and education systems still assume early-career learning happens inside routine roles. New on-ramps are possible, but they come with trade-offs around pay, access, quality assurance, and worker protections.
Routine work as career scaffolding

Many careers began with work that was not glamorous but was learnable: tasks that built familiarity, judgement, and confidence. When that layer thins, the first rung of the ladder becomes harder to find and easier to fall from.

Entry-level work has been more than output

Historically, routine tasks did two quiet but essential things: they produced something useful, and they created a protected space for novices to build professional habits. The junior paralegal redacting documents, the marketing assistant updating product listings, the trainee finance analyst reconciling spreadsheets, the NHS administrator scheduling clinics: these roles often mixed low-discretion work with glimpses of higher-discretion decisions.

That mixture mattered because it allowed learning to be gradual. A newcomer could make small mistakes, get corrected, and slowly take on more judgement-heavy work.

Credentials as a social contract

Degrees, apprenticeships, and entry-level jobs formed a loose social contract: formal education demonstrated foundational knowledge, and early work demonstrated reliability, pace, and the ability to operate inside a system. When routine tasks are removed or compressed, that second part becomes harder to demonstrate. In response, labour markets tend to reach for substitutes such as higher entry requirements, longer internships, or informal hiring through networks.

The distributional consequences are not subtle. Those with family connections, savings, or access to strong careers advice can navigate “unofficial” routes. Those without these buffers face higher risk, longer time to stable income, and more pressure to accept precarious roles.

What is actually disappearing?

It is rarely a whole job that disappears first. It is a bundle of tasks: drafting first versions, scheduling, basic research, simple reporting, standard customer replies, initial code generation, triage, and quality checks. When those tasks shrink, the role may still exist, but with higher expectations for judgement, communication, or domain knowledge from day one.

Counter-argument worth holding

Some routine work was never good training. It could be monotonous, poorly supervised, and disconnected from progression. Removing it can be an opportunity to design better early-career experiences. The question becomes whether organisations replace that learning function, or quietly offload it onto individuals.

How does task automation block progression?

Automation is often discussed as a productivity story, but its early-career impact is a workflow story. When tasks move from humans to tools, supervision patterns, risk tolerance, and hiring economics change in ways that can squeeze newcomers out.

Workflow redesign changes who is needed

When AI tools speed up drafting, summarising, classification, or analysis, teams can produce the same output with fewer people, or produce more with the same people. In either case, the marginal value of a novice can fall if the remaining work is high-discretion, client-facing, or regulated. The work does not vanish, but the “safe” work that used to justify a junior hire shrinks.

Risk moves upstream

In law, finance, health, engineering, and public services, the cost of error is high. If AI produces a first draft, someone accountable still needs to verify it. That verifier is usually not a newcomer. Paradoxically, automation can increase demand for experienced oversight while reducing opportunities to learn under supervision.

Credential inflation as a defensive response

When employers are uncertain how to assess readiness, they often raise credential requirements, not because the work objectively needs them, but because credentials are a convenient filter. This can push entry-level roles toward graduates only, then toward postgraduates, and sometimes toward “experience required” even for junior titles. The result is a queueing problem: people accumulate credentials to compete for roles that no longer teach the basics.

Platformisation and algorithmic management

Some routine work does not disappear; it migrates onto platforms: micro-tasks, content moderation, data labelling, customer support, gig-based administration. These can provide income and exposure, but they often limit learning because tasks are fragmented, feedback is thin, and performance is managed by metrics rather than mentorship. Worker protections, data rights, and appeal processes become central, not peripheral, to career quality.

Geography and class shape the impact

Regions with fewer large employers or professional service clusters may see fewer structured junior roles, making “career starts” more dependent on remote work, informal networks, or relocation. Remote work can widen access for some, but it can also reduce the incidental learning that happens through overhearing, shadowing, and quick questions. The same technology that enables access can weaken the apprenticeship-like elements of early careers unless they are deliberately rebuilt.

Concrete on-ramps for first jobs

If entry-level is a design question, new on-ramps need to offer practice, feedback, and proof. The aim is not to replace work with training, but to blend them in ways that reduce risk for both employer and learner.

Paid supervised practice in smaller units

One emerging pattern is shorter, paid, supervised placements that are closer to clinical rotations than to traditional internships. The emphasis is on defined competencies and assessed tasks rather than “helping out”. This can lower hiring risk while providing real exposure, but it requires clear standards and safeguards against drift into unpaid work.

Simulated work that is close to reality

Simulation is not new, but AI makes it cheaper to create realistic, repeatable scenarios with feedback. In some of our learning design work at the London School of Innovation (LSI), role-play simulations paired with reflective assessment have been useful for making judgement visible, particularly when workplace access is limited. The value is not the technology itself, but the ability to practise decisions, receive critique, and try again without real-world harm.

Portfolio-based entry routes

Where routine tasks were once proof of “basic competence”, portfolios can serve as an alternative signal, if they are credible. A portfolio that includes problem framing, decision logs, iterations, and what changed after feedback is more informative than a polished final output. It allows employers to see how someone thinks, not only what they produced.

Apprenticeships that evolve with tools

Apprenticeships remain a strong mechanism when they are connected to real progression and updated occupational standards. The challenge is that standards can lag behind practice, and some employers treat apprentices as cheap labour. If AI reduces routine tasks, apprenticeships may need explicit requirements for supervised judgement work, not only task completion.

Test-fit pathways before commitment

Given uncertainty, “try before you buy” becomes a rational approach to career choice. Low-cost test-fits can include short project sprints, shadowing days, open-source contributions, community problem-solving labs, or time-bounded paid trials with clear evaluation criteria. These can reduce the cost of a wrong turn for individuals, and reduce hiring risk for employers.

Two short scenarios

Scenario: junior analyst in a bank. AI speeds up data cleaning and first-draft commentary. The remaining work is interpretation, stakeholder communication, and risk awareness. An on-ramp could combine a short rotation with assessed memos, recorded stakeholder meetings for feedback, and a supervised “red team” review process.

Scenario: entry-level role in local government. AI handles routine citizen queries, but escalations involve empathy, policy judgement, and safeguarding. A viable start could include simulation of complex cases, paired work on real escalations with a senior, and explicit training on data handling and accountability.

Implications for job design and governance

Entry-level is where labour markets either widen opportunity or narrow it. The redesign will be shaped by incentives, regulation, and measurement, not only by technology. The details determine who benefits and who is left behind.

Job design needs a learning budget

If early-career learning no longer happens automatically through routine tasks, it must be budgeted: time for review, structured feedback, and progressive autonomy. Without this, organisations may still hire juniors but set them up to fail, then label the outcome a “skills gap”.

Productivity gains and wage dynamics

Automation can raise productivity, but wages do not automatically follow. If the number of junior roles falls and mid-level oversight becomes more valuable, wage dispersion can increase. Some workers may move into higher value work faster, while others struggle to enter at all. The policy question is not whether productivity rises, but how gains are shared and whether pathways remain open.

Worker protections in AI-mediated workplaces

When performance is tracked through tools, questions of data rights, transparency, and appeal processes matter. New entrants are often the least able to challenge unfair metrics, opaque ranking, or surveillance-heavy management. Minimum standards for algorithmic management and clearer rights to explanation can shape job quality and long-term trust.

Education as a bridge, not a detour

Education cannot fully replace work-based learning, but it can reduce the “cold start” problem by teaching tool literacy, critical thinking, and domain basics in applied contexts. The risk is that education becomes a holding pattern that delays entry while increasing debt or opportunity cost. The opportunity is to integrate learning with real projects, employer input, and assessment that reflects workplace judgement.

Signals that remain robust

As credentials proliferate and tools become easier to use, robust signals are those that are hard to fake: evidence of iteration, feedback incorporation, ethical reasoning, and the ability to communicate trade-offs. These can be demonstrated in work, in structured placements, or in assessment designs that show process rather than polish. Different actors can test for them with different questions:

  • For employers: Are junior roles defined by tasks or by learning outcomes, and is there time allocated to reach them?
  • For education providers: Do assessments reveal judgement under constraints, or only content recall and presentation?
  • For regulators and public institutions: Are apprenticeship and skills funding rules rewarding genuine progression, or simply counting participation?
  • For families and individuals: Is a pathway building a credible signal, or only accumulating certificates?

Decision tests for uncertain pathways

In periods of transition, the most expensive mistake is committing to a pathway that cannot produce credible proof of competence. A few decision tests can reduce risk without requiring perfect foresight.

A task-first lens on careers

Instead of asking “Which job will exist?”, it can be more practical to ask “Which tasks will still need humans, and what does human value look like there?”. Jobs change through tasks first. A role that retains responsibility for judgement, relationships, accountability, and complex trade-offs tends to be more resilient than one defined mainly by repeatable routines.

Tool proficiency versus career resilience

Learning a tool can be useful, but tools change quickly. Career resilience comes from being able to learn new tools, critique outputs, and apply them inside a domain responsibly. A useful test is whether a learning programme teaches how to verify, contextualise, and communicate, not only how to prompt or automate.

Checklist for evaluating an on-ramp

  • Proof: What evidence will exist at the end that a third party can trust?
  • Feedback: Who gives critique, how often, and with what standards?
  • Access: What costs, prerequisites, and hidden filters shape who can participate?
  • Progression: What is the next role this pathway reliably leads to, and what proportion of learners reach it?
  • Protection: What rights exist around pay, data, working hours, and dispute resolution?

Questions that stay difficult

The entry-level problem is not only about technology. It is about whether society wants careers to remain broadly accessible when the easiest work to allocate is no longer done by humans. That leaves open questions that do not have neat answers yet:

  • What should count as “experience” when AI can generate first drafts, and verification becomes the real skill?
  • Who pays for the learning budget in early career, and how is that cost shared between employer, individual, and the public?
  • Which safeguards are needed when algorithmic management shapes promotion, discipline, and dismissal for those with the least bargaining power?
  • How can apprenticeships and degrees adapt fast enough to remain credible signals without turning into constant credential escalation?
  • What would make a simulated or portfolio-based pathway trusted by employers without excluding those who lack time, networks, or expensive support?
  • How should productivity gains be measured if fewer juniors are hired, and what does “healthy” workforce renewal look like in that context?

London School of Innovation

LSI is a UK higher education institution offering master's degrees and executive and professional courses in AI, business, technology, and entrepreneurship.

Our focus is forging AI-native leaders.