LSI Insights - The AI-Native Organisation
How boards can plan AI investment under uncertainty
Boards are being asked to fund AI while the ground is moving: model capability is improving, regulation is evolving, data liabilities are being discovered late, and competitors are experimenting in public. Traditional investment logic assumes stable costs, predictable benefits, and clear accountability. AI breaks those assumptions, and the result is often stalled pilots, overspend, or avoidable reputational exposure.
14 min read
November 17, 2025
Executive summary
Planning AI investment now means managing uncertainty as a first-class constraint, not an inconvenience. The challenge is that benefits often sit in workflow redesign while costs and risks sit in technology and governance. A more resilient approach treats AI as a portfolio of options with staged commitments, clear decision rights, measurable unit economics, and explicit boundaries for human oversight. Even then, ambiguity remains: capability, compliance expectations, and organisational readiness will keep shifting.
AI investment assumptions are breaking
AI spend is often discussed as if it were another software rollout. The reality is closer to organisational redesign under shifting technical and regulatory conditions, where value emerges unevenly across functions and time.
Many board-level investment playbooks rely on familiar patterns: scope a solution, estimate benefits, fund delivery, then scale. AI disrupts this not because it is mystical, but because the value and the risk are produced by interactions between models, data, and decision-making norms.
Capability curves are not linear
In some processes, AI produces step changes. In others, it produces small gains until a threshold is crossed, such as a minimum standard of data quality or clearer policy definitions. A customer onboarding workflow, for example, may see little improvement until identity checks, document handling, and exception routes are redesigned. The result is that early pilots can look disappointing, even when the longer-run potential is material.
Costs move in both directions
Model usage costs may fall per unit, yet total cost can rise as more work becomes feasible and demand increases. A claims team that cuts handling time from 40 minutes to 25 minutes may see volumes rise as backlogs are cleared and service levels improve. This is welcome, but it changes the economics. Cost control becomes less about unit price and more about demand management and routing rules.
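To make that concrete, here is a minimal sketch with invented figures (not drawn from any specific deployment): per-case cost falls once handling time drops, yet total monthly spend can still rise if cleared backlogs and better service levels lift volumes.

```python
# Illustrative only: invented figures showing how per-case cost can fall
# while total monthly cost still rises once cleared backlogs release demand.

def total_cost(cases: int, minutes_per_case: float,
               cost_per_minute: float, ai_cost_per_case: float = 0.0) -> float:
    """Monthly handling cost: staff time plus any per-case model-usage charge."""
    return cases * (minutes_per_case * cost_per_minute + ai_cost_per_case)

# Before: 10,000 claims a month at 40 minutes each, £0.80 per loaded staff minute.
before = total_cost(cases=10_000, minutes_per_case=40, cost_per_minute=0.80)

# After: handling time falls to 25 minutes, but cleared backlogs lift volume by 60%
# and each case now carries a notional £2.00 model-usage charge.
after = total_cost(cases=16_000, minutes_per_case=25, cost_per_minute=0.80,
                   ai_cost_per_case=2.00)

print(f"Cost per case:      £{before / 10_000:.2f} -> £{after / 16_000:.2f}")  # £32.00 -> £22.00
print(f"Total monthly cost: £{before:,.0f} -> £{after:,.0f}")                  # £320,000 -> £352,000
```

On these assumed numbers the unit price improves by roughly a third while total spend still grows, which is why routing rules and demand management, not unit price alone, end up carrying the cost case.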
Accountability is harder than it looks
AI adds a new class of operational questions: which decisions are delegable, which remain human-only, and which require human oversight with evidence. This is less about a policy statement and more about designing workflows that make the right behaviour easy under time pressure.
Uncertainty makes capital allocation emotional
Under uncertainty, boards can swing between overconfidence and paralysis. Both are understandable responses to unclear payback periods, rapid vendor change, and public scrutiny of failures.
One pattern is the "pilot museum": dozens of proofs of concept, few production deployments, and little cumulative learning. Another pattern is the single big bet: a platform commitment justified as future-proofing, followed by years of integration work and limited adoption.
Reputational risk behaves like a multiplier
AI-related incidents tend to be narrated as governance failures, even when the underlying issue is messy data or ambiguous policy. A retail bank might tolerate a small error rate in manual complaint triage, yet find a similar error rate unacceptable when linked to an algorithm. The expected value calculation changes because the downside is not only financial but also regulatory and trust-related.
Traditional ROI can be misleading early on
Early-stage AI programmes often show benefits in the form of reduced rework, fewer escalations, or better compliance evidence. These are real, but they do not always translate into headcount reduction or immediate revenue. A pragmatic framing is to treat early deployments as improving unit economics: cost per case, cycle time per transaction, error rates that trigger remediation. A 10 to 20 percent reduction in cycle time can be valuable even before organisational capacity is rebalanced.
Learning has economic value
Some of the most important returns are information returns: discovering which processes are too noisy to automate safely, where data lineage is missing, or where policy is underspecified. At LSI, AI-enabled role-play simulations in executive education tend to surface a similar lesson: uncertainty is reduced less by debate and more by repeated decision practice with feedback, provided the learning is captured and operationalised.
Portfolio logic beats single big bets
A more resilient investment posture treats AI as a portfolio of options with staged commitments. The objective is not to predict the future, but to remain able to act as it becomes clearer.
Boards already manage uncertainty in R&D, market entry, and cyber security. AI can be approached similarly: fund a portfolio that balances near-term productivity, medium-term operating model change, and longer-term capability building, while keeping exit routes open.
Stage-gated funding as an option, not a hurdle
Rather than "approve or reject", funding can be structured as conditional: small initial commitments with explicit criteria to release further investment. A contact-centre augmentation initiative might begin with a narrow scope and measurable outcomes, such as reducing average handling time by 5 to 12 percent without increasing complaint rates, before expanding to more complex intents or channels. The gate is not only performance but also evidence quality: can the organisation explain outcomes, manage exceptions, and demonstrate control?
Choosing where automation is safe
Not all work should be automated. A useful distinction is between decisions that are reversible with low harm, and decisions where error creates lasting damage. In procurement, drafting and summarising can be delegated with light oversight, while supplier exclusion decisions may require documented human judgement. The investment question is not "where can AI be used" but "where can AI be used with bounded downside".
Platform choices as reversible commitments
It is tempting to treat platform selection as a once-in-a-decade decision. Under current volatility, reversibility has value. Architecture that supports substitution, model evaluation, and clear data boundaries can be worth more than marginal performance. The trade-off is that optionality can slow delivery if governance becomes over-engineered. The hard part is finding the minimum viable controls that preserve freedom of movement.
Operating model redesign becomes the real spend
Many AI budgets underestimate the cost of changing how work gets done. The decisive investments are often in roles, governance, incentives, and workflow design rather than in model access.
AI-native transformation is less about adding tools and more about redesigning the organisation so that learning, control, and accountability scale. That redesign has a price tag, even when software licences appear modest.
Decision rights and oversight routes
A recurring failure mode is unclear ownership: technology teams own the model, operations teams own the outcomes, risk teams own the policies, and nobody owns the end-to-end system. Some organisations address this through a product-style accountability model where a named operational owner is responsible for performance, controls, and adoption. The board-level question becomes: where does accountability sit when human and machine collaborate, and how is evidence surfaced when something goes wrong?
Centralise governance, federate delivery
Central teams can set guardrails: model registers, evaluation standards, incident management, and regulatory interpretation. Delivery often needs to sit close to the work, where process knowledge and adoption levers exist. The tension is predictable: too much central control slows learning; too little creates hidden risk and duplicated spend. The practical test is whether a new AI use case can be evaluated for risk and value in weeks rather than quarters, without bypassing controls.
Incentives that reward changed behaviour
AI value is rarely captured if teams keep measuring performance as if nothing has changed. If managers are rewarded for headcount size, productivity tools will not reduce cost. If compliance is punished for blocking rather than for enabling safe throughput, innovation will move into the shadows. Incentive design is not a cultural footnote; it is part of the investment thesis.
Metrics and cadence for industrial scale
Moving beyond pilots requires measurement that is meaningful to the business and credible to regulators. It also requires a cadence that treats AI as ongoing operations rather than a one-off programme.
Industrialising AI is often described as "scaling". In practice it is the discipline of running AI like a managed service with clear unit economics and control signals.
Unit economics that survive scrutiny
ROI becomes more reliable when expressed in operational units. Examples include cost per resolved case, minutes per policy decision, cost per invoice processed, error rate leading to remediation, or revenue per relationship manager hour. A well-run deployment might target a 15 to 25 percent reduction in cost per case while holding quality steady, but the point is not the number. The point is that benefits and harms are measured in the same operational language.
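As a rough illustration with assumed field names and invented counts, the sketch below derives benefit metrics (cost and minutes per resolved case) and harm metrics (remediation and override rates) from the same operational data, so both sides of the ledger are expressed in one language.

```python
# A minimal reporting sketch with assumed field names and invented counts:
# benefit metrics and harm metrics derived from the same operational data.

def unit_economics(period: dict) -> dict:
    """Derive per-case metrics from raw counts for one reporting period."""
    resolved = period["cases_resolved"]
    return {
        "cost_per_resolved_case": period["total_handling_cost"] / resolved,
        "minutes_per_case": period["handling_minutes"] / resolved,
        "remediation_rate": period["cases_requiring_remediation"] / resolved,
        "override_rate": period["human_overrides"] / resolved,
    }

baseline = unit_economics({
    "cases_resolved": 10_000, "total_handling_cost": 320_000,
    "handling_minutes": 400_000, "cases_requiring_remediation": 180, "human_overrides": 0,
})
current = unit_economics({
    "cases_resolved": 16_000, "total_handling_cost": 416_000,
    "handling_minutes": 400_000, "cases_requiring_remediation": 270, "human_overrides": 1_100,
})

for metric, before in baseline.items():
    print(f"{metric}: {before:.3f} -> {current[metric]:.3f}")
```

With these invented counts, cost per resolved case falls by roughly 19 percent while the remediation rate holds steady, which is the shape of evidence that tends to survive scrutiny.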
Leading indicators for risk and resilience
Lagging indicators such as cost savings arrive late. Leading indicators include exception rates, override frequency, data drift, staff reliance patterns, and the time taken to detect and resolve incidents. When override rates spike, it may indicate model degradation, process ambiguity, or a training gap. Without these signals, boards receive only good news until something breaks.
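One way such a signal could be wired, sketched with assumed thresholds: compare the latest weekly override rate against its trailing baseline and flag unusual spikes for investigation, rather than waiting for lagging cost figures.

```python
from statistics import mean, stdev

# Illustrative leading-indicator check with assumed window and threshold values:
# flag when the weekly human override rate drifts well above its recent baseline,
# prompting investigation into model degradation, process ambiguity, or training gaps.

def override_alert(weekly_rates: list[float], window: int = 8, z_threshold: float = 3.0) -> bool:
    """Compare the latest weekly override rate against a trailing baseline."""
    if len(weekly_rates) <= window:
        return False  # not enough history to form a baseline
    baseline, latest = weekly_rates[-(window + 1):-1], weekly_rates[-1]
    spread = stdev(baseline) or 1e-9  # avoid division by zero on a flat history
    return (latest - mean(baseline)) / spread > z_threshold

history = [0.04, 0.05, 0.04, 0.05, 0.06, 0.05, 0.04, 0.05, 0.11]
print(override_alert(history))  # True: the latest week sits well above the trailing baseline
```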
A 12-month decision test
A useful way to frame the next year is as a sequence of decisions rather than a single plan: what gets funded now, what evidence must be produced next, what must remain human-controlled, and what can be delegated with monitoring. The insight is that under uncertainty, progress comes from increasing the organisation’s capacity to make high-quality decisions repeatedly, not from finding the perfect forecast.
Uncomfortable question: If an AI-related incident occurred tomorrow, would the organisation be able to show, within days, who approved the design, what controls were in place, how performance was monitored, and why the decision to deploy was considered acceptable?
London School of Innovation
LSI is a UK higher education institution offering master's degrees and executive and professional courses in AI, business, technology, and entrepreneurship.
Our focus is forging AI-native leaders.