Colleague status alters the work contract
A tool speeds up a task. A colleague changes the division of labour, the pace of decisions, and the social expectations around quality. The early signs are already visible in ordinary roles, not just in tech firms.
Everyday scenarios that feel different
Consider a housing association caseworker using an assistant to summarise tenant histories, draft letters, and propose next steps. The output may be good enough to move faster, but it can also narrow attention to what is written down rather than what is lived. Or consider a small accountancy practice that uses AI to spot anomalies and draft explanations for clients, reducing time spent on routine checks while increasing time spent on interpretation and reassurance.
In a hospital trust, an AI scribe can reduce clinician admin, yet it also changes what gets recorded, what is audited, and how patients experience the consultation. In a secondary school, a teacher using AI feedback on pupil writing may improve turnaround time, while raising questions about what counts as original work, and whether assessment remains a measure of learning or of tool-use.
Why the colleague metaphor matters
When an AI system participates in work, it carries a kind of implied agency. People start to rely on it, delegate to it, argue with it, and use it as cover. This is not a moral claim about machines, but a practical observation about human behaviour around systems that produce plausible outputs at scale.
At LSI, one of the most useful discussions in our learning design work has been less about what AI can generate, and more about what humans stop noticing when feedback becomes instant and always available. That question travels well beyond education into any workplace where AI becomes part of how judgement is formed.
A subtle shift in assumptions
The old assumption was that work is paced by human throughput. The newer assumption is that work is paced by interaction between human attention and machine output. That is where speed, scope, and responsibility begin to drift apart.
Speed changes first, then becomes a trap
Speed is the first change most people see because it shows up in cycle time, response rate, and queue length. Yet speed is not the same as productivity, and faster is not always better when errors scale.
How speed actually enters the workflow
Most early gains come from compressing the time spent on drafting, searching, formatting, and first-pass analysis. In many organisations, those tasks sit between meetings and decisions, so accelerating them can make work feel smoother even if the underlying decision quality is unchanged.
Speed also shows up as a shift in expectations. If a procurement team can generate a risk summary in minutes, the organisation may start expecting risk summaries in minutes, regardless of whether the inputs are adequate. This can be a net improvement, but it can also hollow out the time once used for sense-checking and escalation.
Distributional effects of speed
Speed tends to reward organisations that already have clean data, stable processes, and the ability to standardise. That often means larger employers, regulated sectors with mature compliance functions, and well-resourced professional services. Smaller firms can benefit too, but only if they can absorb the hidden work of tool selection, prompt discipline, and quality assurance.
For individuals, speed can be a wage lever or a wage risk. In some roles, faster throughput may justify higher pay because it expands capacity for higher-value work. In others, it may justify increased workload with no pay change, especially where performance management is tied to measurable volume.
The speed trap
Speed becomes a trap when it encourages premature closure: accepting the first plausible answer, skipping second opinions, or letting AI set the agenda for what is discussed. The uncomfortable question is whether the new baseline is “fast and acceptable” rather than “slow and robust”, and who gets to define acceptable.
Scope expands through recombined tasks
Scope changes when people start attempting work that was previously out of reach, not because the job title changes, but because the task bundle inside the job is rearranged. This is where job quality can improve or deteriorate.
Task recombination as the real mechanism
Jobs rarely disappear overnight; tasks are peeled off, reshaped, and stitched into new roles. An AI colleague makes it easier to combine tasks across functions. A marketing generalist can do basic analytics; a project manager can draft stakeholder communications; a junior lawyer can explore case law patterns before asking a senior to review.
This scope expansion can be empowering. It can also create role confusion, where people are asked to cover more domains without the authority, time, or training to do so safely.
Examples of scope expansion with mixed outcomes
In local government, an officer might use AI to draft consultation summaries and translate materials for communities, widening engagement. Yet if the tool is not checked for nuance, marginalised groups can be misrepresented. In manufacturing, a supervisor might use AI to interpret machine logs and propose maintenance schedules, reducing downtime. Yet if this becomes an unofficial expectation, safety-critical decisions may be made by staff who were never trained to make them.
Credential inflation and the new signalling problem
As scope expands, employers may raise credential requirements to manage risk, even when the work can be learned through experience and short courses. This can widen inequality by pushing opportunity further towards those who can afford time and tuition, and away from those who rely on work-based progression.
At the same time, micro-credentials and employer-led training can counterbalance this, if they are trusted signals of competence rather than mere participation certificates. The open question is which signals become legible in hiring, and which groups get access to them.
Testing a scope shift before committing
One pragmatic approach is to test-fit: simulate the work before changing jobs or investing in a long qualification. Short project placements, portfolio tasks, apprenticeships, and structured role-play assessments can reveal whether a person enjoys the new task mix and can manage the judgement load that comes with it.
Responsibility becomes the contested terrain
Responsibility is where the colleague metaphor breaks down. AI can contribute to outcomes, but it cannot hold legal or moral accountability. When something goes wrong, responsibility snaps back to humans and institutions.
Accountability gaps appear quickly
If an AI system drafts a safeguarding report that a human signs, who is responsible for a missed risk? If a recruiter uses AI screening and the shortlist is biased, who answers to candidates, regulators, and the organisation’s own values? These are not edge cases. They are common scenarios once AI enters operational decision-making.
Responsibility also includes quieter harms: over-surveillance, performance metrics that become punitive, or the erosion of professional discretion when algorithmic recommendations are treated as default.
Governance choices that shape responsibility
Responsibility is not only about policy documents; it is built into workflow design. Where are the checkpoints? What is logged? What can be audited? Who can override? Are workers trained to disagree with the tool, and do they feel safe doing so?
Regulation is evolving, with the UK taking a principles-based approach and the EU moving through the AI Act. Many employers will effectively operate across both regimes. The practical implication is that data lineage, transparency, and documented human oversight will become everyday management concerns, not specialist topics.
Data rights and power at work
An AI colleague learns from data. That raises questions about whose data it is, who benefits from it, and who is exposed by it. Worker data can be used to improve tools, but it can also be used to monitor and deskill. The boundary between support and control is often decided in procurement clauses, system settings, and managerial habits.
Liability without clarity
In many sectors, liability will remain with the employing organisation, even when a vendor provides the model. That can push responsibility downwards in practice: onto frontline staff asked to “just check it”, without enough time or authority. The most fragile point in many AI deployments is not the model; it is the human review step that is treated as friction instead of as safety.
Work redesign levers that keep agency
If jobs change through tasks first, then choices about tasks, skills, workflows, incentives, and contracts become the real steering wheel. The aim is not to predict the future, but to reduce avoidable regret.
A simple decision lens for individuals
When assessing a role, course, or career pivot, it can help to separate tool proficiency from career resilience. Tool proficiency is knowing how to use today’s systems. Career resilience is being valuable even when the tool changes.
Questions that can reduce risk:
- Which tasks are likely to be automated, and which tasks become more important because someone must validate, explain, or negotiate?
- Does the role give access to domain depth, not just tool use, so judgement improves over time?
- Is there a pathway to recognised responsibility, such as signing authority, client ownership, or regulated competence?
- What evidence of capability can be built, such as a portfolio, supervised projects, or assessed simulations?
Choices that shape job quality
AI can remove drudge work, but it can also intensify work by raising volume targets. Job quality tends to improve when AI frees time for relational, creative, and complex work, and when performance metrics reward outcomes rather than raw throughput.
It tends to deteriorate when AI is paired with platformisation and surveillance, where task allocation is fragmented and worker voice is weak. The same tool can sit in either environment.
Education and reskilling pathways that match uncertainty
Not everyone needs a full degree, and not every short course is enough. For mid-career adults, blended routes can be sensible: an employer-led programme, a professional qualification, or a modular master’s that can pause and restart. For school leavers, higher and degree apprenticeships can provide income, mentoring, and a clearer line of sight to responsibility.
The strongest programmes, regardless of format, make judgement visible. They assess reasoning, ethics, communication, and the ability to work with tools under realistic constraints.
A decision test for the next decade
The question is not whether speed, scope, or responsibility changes first in the abstract, but which one changes first in a given workplace, and whether the other two are redesigned to match. Misalignment is where disappointment and harm accumulate.
Misalignment patterns worth noticing
If speed increases but responsibility does not shift, errors can scale. If scope expands but skills and pay do not follow, job quality can fall and inequality can widen. If responsibility is pushed onto individuals without authority or support, burnout becomes a governance outcome, not a personal failing.
The more hopeful pattern is alignment: faster work paired with better verification, expanded scope paired with training and progression, and responsibility made explicit through clear accountability and data rights.
Difficult questions that should stay on the table
Before accepting AI as a colleague, it may be worth sitting with questions that do not have tidy answers:
- Where, exactly, does human judgement begin and end in this workflow, and how will anyone know when that boundary has drifted?
- Who benefits from the productivity gain: customers, shareholders, taxpayers, workers, or some combination, and what is the mechanism for sharing it?
- Which groups are most exposed to harm if the system is wrong, and do they have meaningful routes to appeal and redress?
- What new forms of credential, portfolio, or supervised practice will signal competence when output is partly machine-generated?
- When the tool’s recommendation conflicts with professional intuition, what is the expected behaviour, and is it psychologically safe to challenge the system?
- What data about workers is being collected, who controls it, and how long will it be kept?
Those questions may be the real early indicator of what changes first. Speed is obvious and scope is tempting; responsibility is where societies decide what kind of future of work they are willing to legitimise.