Beyond the broken rung: How AI agents redesign work for variability

Author: Jens Eriksvik

AI is automating the tasks that once formed the first step into working life. Entry-level roles, from data entry and scheduling to basic coding and document review, are disappearing as organizations pursue efficiency through automation. The bottom rung of the career ladder is no longer stable.

This shift reshapes how people enter the workforce and progress within it. Without accessible starting points, those without established networks or prior experience are locked out. The anxiety around AI is not abstract. It reflects a structural shift that removes the foundational roles that careers are built on.

Throughout our work with agentic AI, we see a compelling opportunity to rebuild work around a new logic: one that embraces variability, distributes responsibility across human and AI agents, and designs entry points around contextual contribution rather than repetition. It is a logic that lets businesses embrace variability, widen access, and manage edge cases efficiently.

First, the anxiety is real: AI is breaking the first career step across industries

The roles most exposed to automation are also the ones that have historically functioned as stepping stones. Data entry clerks, trainees, junior consultants, junior administrators, paralegals, junior developers: these positions offered a way in. They gave people time to learn the system, observe how work flows, and build the tacit knowledge required to advance. That step is now being removed.

Today, efficiency programs, productivity reviews, and headcount rationalizations are increasingly driven by AI capabilities. The result is job displacement, and with it a growing anxiety, especially among younger workers and recent graduates. The WEF Future of Jobs Report 2025 [1] highlights that clerical roles are among the fastest-declining categories globally. These are not marginal jobs. They are the first rung of the career ladder, the entry point to the labour market.

The numbers reflect the scale: up to 800 million jobs may be displaced by 2030 [2], with nearly 375 million workers expected to switch occupations. In advanced economies, 60% of current roles are exposed to AI automation, and in many cases, it’s the bottom of the ladder that goes first. Employers continue to advertise for “independent self-starters,” but the quiet reality is that we are removing the roles where people used to become just that.

This is a structural break in how organizations grow talent. Unless we rethink how work is designed, from the entry point forward, and shift our focus from operational excellence to "variance value", the anxiety will not fade. It will harden into exclusion and value erosion.

Redesigning work from the bottom up

The concern over disappearing entry-level roles is valid. Many career systems are still built on industrial-era assumptions: that progress is linear, that value comes from consistency, and that advancement follows time served. This model was never neutral. It rewarded conformity and penalized deviation.

Enterprise IT systems reflect the same thinking [3]. Designed to drive standardization, businesses codified process uniformity into software. Variability became something to eliminate, not manage. Repetition became a proxy for learning. Entry-level roles existed to absorb standardized tasks and slowly build tacit knowledge by doing more of the same.

But the price of clinging to uniformity is easy to count. Accounts-payable benchmarks, for example, show that 22.5% of all invoices become exceptions, driving the fully loaded processing cost beyond the USD 9.25 "happy-path" baseline and soaking up staff time that should go to higher-value work [4]. Gartner traces the same pattern upstream in finance: avoidable re-work consumes roughly 25,000 hours a year, about USD 878,000 in wasted effort for a 40-person team [5]. And when companies finally tackle this complexity, the upside is striking: 3M's order-to-cash revamp combined process mining with agentic automation, cutting manual order re-work by 11%, eliminating 10 million human touches, and saving USD 15 million [6]. In short, exceptions aren't edge cases; they're a margin leak that scales with volume. Variance-ready workflows don't just protect experience; they recapture profit through variance value.
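To make the arithmetic concrete, here is a back-of-the-envelope sketch. The 22.5% exception rate, the USD 9.25 happy-path cost, and Gartner's 25,000-hour / USD 878,000 figures come from the benchmarks above; the annual invoice volume and the exception cost multiplier are hypothetical assumptions added purely for illustration.

```python
# Back-of-the-envelope cost of invoice exceptions.
# Benchmark inputs are from the cited figures; the invoice volume and
# the exception cost multiplier are hypothetical assumptions.

EXCEPTION_RATE = 0.225        # 22.5% of invoices become exceptions
HAPPY_PATH_COST = 9.25        # USD per straight-through invoice
EXCEPTION_MULTIPLIER = 3.0    # assumption: an exception costs ~3x baseline

def annual_exception_overhead(invoice_volume: int) -> float:
    """Extra annual spend caused by exceptions vs. an all-happy-path world."""
    exceptions = invoice_volume * EXCEPTION_RATE
    extra_per_exception = HAPPY_PATH_COST * (EXCEPTION_MULTIPLIER - 1)
    return exceptions * extra_per_exception

# Gartner's re-work figure implies an effective cost per wasted hour:
implied_rate = 878_000 / 25_000   # USD per hour of avoidable re-work

if __name__ == "__main__":
    print(f"Overhead at 1M invoices/yr: ${annual_exception_overhead(1_000_000):,.0f}")
    print(f"Implied cost per re-work hour: ${implied_rate:.2f}")
```

Even under these modest assumptions, a million-invoice operation leaks millions of dollars a year through its exception path, which is the "margin leak that scales with volume" in numbers.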

AI agents make it possible to handle variability at scale. Instead of enforcing one best way, they can respond to context. They shift the model from rigid control to dynamic adaptation. And when that shift happens, the rationale for the traditional ladder weakens.

The table below highlights how this transformation plays out across sectors. Standardization has delivered clear advantages in cost and consistency. But agentic AI introduces a new possibility: one where adaptability becomes a competitive strength, and early-career roles no longer need to be anchored in repetition.

Managing variability is a competitive strength. Imagine a customer support center that simply solves your issue, without passing you around between departments or queues. Together with Byggmax and Enkl.ai, we reduced the number of human-managed tickets by over 30% by letting a digital colleague take care of the standard, repeat workloads and free up time for its human colleagues. And this agent is still learning each day, enabling more variability without cost increases.

AI agents shift the foundation: from standardization to variability

AI agents are changing more than task execution: they are redefining how work is organized. Unlike traditional automation tools, agents operate with context, adapt to input, and participate in workflows as collaborators rather than executors. They scale horizontally across systems, not just vertically within roles.

This has direct implications for how organizations structure responsibility and define entry-level roles. When agents take over repeatable steps, the remaining human contributions become harder to script. Tasks like escalation, interpretation, and contextual judgment require workers who can navigate fluidity. They also require systems that clearly define how trust is maintained when automation no longer follows a deterministic path.

As discussed in AI Just Broke Your Trust Flow [7], trust in AI is inherently brittle. Agents fail silently, act unpredictably, and are hard to review. In this environment, human judgment becomes not optional but foundational. The issue is not just what agents do; it's who is accountable when something goes wrong.

AI agents should be treated as digital colleagues [8] with defined spans of responsibility, review logic, and escalation paths [9]. Organizations that do not put the proper supporting structures in place risk not just productivity loss but systemic fragility.

Agentic workflows require a different architecture [10]: one based on protocols, not platforms. Instead of aligning business processes to rigid software rules, the goal becomes enabling dynamic coordination across silos. It is in this space, across boundaries rather than within them, that agents add real value.

The World Economic Forum’s Future of Jobs Report 2025 [11] underscores the direction: by 2030, only one-third of tasks will be performed by humans alone. The majority will be either fully or partially handled through human–machine collaboration. This shift confirms that variability and coordination, not consistency and repetition, are becoming the new foundations of work.

At the same time, the hiring logic is already shifting. According to Indeed Hiring Lab, only 48% of U.S. job postings now specify a college degree requirement, down nearly 10 percentage points since early 2019 [12]. Employers are increasingly valuing coordination skills, contextual awareness, and agent fluency over traditional credentials. But unless these new expectations are matched with new kinds of entry points, the system will remain exclusionary by default.

This is why redesigning trust, responsibility, and variability at the entry level is not optional. It is the precondition for participation in an AI-native organization.

No more ladders: think in terms of lattices and launchpads

The question is how to design entry points that match the nature of work today. In agentic organizations, entry-level roles won’t disappear, but they will transform. Rather than starting with repetition, new workers begin by managing the flow between systems, validating outcomes, and responding to exceptions. They don’t just observe the process. They shape it.

These are roles like:

  • Orchestrators, who manage hybrid teams of agents and humans.

  • Auditor-like roles, who review AI-generated outputs for accuracy, alignment, or risk.

  • Explorers, who test new approaches, prompt agents across systems, and feed learning back into workflows.

This shift also reframes what entry-level work is for. In the old model, junior roles were built to absorb compliance work: repeatable, rule-bound tasks where precision and consistency mattered more than perspective. In the new model, that work is done by agents.

What’s left, and what matters, is variance work:

  • Spotting edge cases

  • Coordinating when templates fall short

  • Framing problems where the rules don’t apply

AI agents handle the known. Early-career humans handle the unknown. Not because they’re senior, but because they can ask better questions, make context-sensitive calls, and keep systems grounded when complexity spikes.

This is a design principle. Trust in automated systems doesn't scale through deterministic logic, it flows through humans. That’s what prevents cascading failure. That’s what keeps systems resilient.

So we stop thinking in terms of rungs. Lattices make more sense than ladders. When agents and humans collaborate via context-sharing protocols, progression isn’t a climb, it’s a contribution. And early-career roles become essential not for what they produce, but for how they stabilize, adapt, and advance collective intelligence.

The WEF confirms this: judgment-heavy roles with ethical, empathetic, or coordinative dimensions show the lowest substitution risk from GenAI. And as employers move away from degree requirements, these capabilities become the new gateway. Done right, this new foundation is more inclusive, more resilient, and far better matched to the work ahead.

Variance dividend: five profit levers

AI is breaking the bottom rung of the career ladder. That much is clear. The real challenge is to design something better.

Organizations now face a choice: continue optimizing for standardization, or build the capability to handle more variability across workflows, customers, and talent. Embracing variability unlocks new forms of value. Dynamic organizations that do so are better equipped to:

  1. Serve edge cases and unmet customer needs that scripted workflows miss

  2. Expand access to talent by valuing coordination and contextual awareness over pedigree

  3. Create more meaningful roles by aligning early-career work with judgment and responsibility

  4. Build more resilient systems where humans and agents adapt together, not fail apart

  5. Scale productivity not by adding headcount, but by broadening contribution

As we argue in Designing the AI-Native Enterprise [13], this isn’t about bolting AI onto old workflows. It’s about rethinking how coordination, trust, and accountability are distributed in agentic systems. That means redesigning entry points. It means investing in agent fluency. And it means structuring work so that contribution, not repetition, becomes the basis for growth.

The WEF, McKinsey, and Gartner all agree: the coming shift is significant. But it doesn’t have to be exclusionary. If businesses build with variability in mind, they can scale with flexibility, staff with diversity, and grow with durability.

AI is already reshaping work. The question is whether we build systems that narrow access, or widen it. This is a design opportunity. And the ones who take it seriously will be the ones who stay relevant.

Reboot the trainee program: five shifts

  • From fixed rotations to dynamic missions
    Traditional trainee programs cycle through departments in predefined steps. Replace this with "missions": cross-functional challenges where the trainee solves real problems using AI agents, collaborates across silos, and reports learnings into the organization.

  • From passive observation to active orchestration
    Instead of shadowing senior staff to learn through proximity, trainees should orchestrate workflows where human and digital colleagues interact. Give them responsibility for managing hybrid teams: AI agents for standard tasks, humans for judgment. Then measure how well they coordinate outcomes.

  • From tool training to agent fluency
    Most programs teach systems and processes. That's not enough. Train trainees to prompt, supervise, and improve agents. Let them learn when to trust output, when to escalate, and how to work with model limitations. Build confidence around uncertainty, not just competence in software.

  • From hierarchical mentorship to protocol-based collaboration
    Instead of one assigned mentor, build a network of support through peer AI audits, distributed review loops, and structured escalation channels. Trainees learn to work through systems of accountability, not just up chains of command.

  • From time-based advancement to impact-based progression
    Graduating from the program shouldn't depend on months spent, but on demonstrated ability to manage variance, contribute insight, and stabilize workflows. Use real business outcomes and peer-reviewed trust signals as gates, not tenure.

Get going: five moves to redesign for variability

You don’t need to solve everything at once. But to stay competitive, and inclusive, in an AI-native world, businesses must start moving. Here’s how:

  1. Map where repetition still defines entry
    Identify workflows where junior roles are built on standardization, not contribution. That’s where AI agents will hit hardest. Flag these as redesign priorities.

  2. Pilot one role with agentic augmentation
    Take a single entry-level position and add AI agents around it. Let humans handle variance, exceptions, and escalation. Track how responsibilities shift — and what new skills surface.

  3. Define trust patterns
    Make human judgment a design input, not an afterthought. For each agent, establish review logic, spans of responsibility, and escalation paths. This is what separates brittle automation from resilient collaboration.

  4. Treat coordination as a core skill
    Shift hiring, onboarding, and performance metrics to reflect fluency with agents, not just tool proficiency. The future of work isn’t button-clicking — it’s orchestration.

  5. Design new entry points
    Create “launchpad” roles built around learning in context — not time served. Let new talent build tacit knowledge by navigating the unknown, not repeating the known.
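To make the "define trust patterns" move above concrete, here is a minimal sketch of one agent's trust pattern: a defined span of responsibility, a confidence-based review rule, and an escalation path. All names, task types, and thresholds are hypothetical illustrations, not a reference to any specific product or API.

```python
# Minimal sketch of a "trust pattern" for one AI agent: a defined span of
# responsibility, review logic, and an escalation path. All names and
# thresholds are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class TrustPattern:
    span: set[str]                       # task types the agent may handle alone
    confidence_floor: float              # below this, a human must review
    escalate: Callable[[str, str], str]  # escalation path: (task, reason) -> route

def route(task_type: str, confidence: float, pattern: TrustPattern) -> str:
    """Decide whether the agent acts alone or the work escalates to a human."""
    if task_type not in pattern.span:
        return pattern.escalate(task_type, "outside span of responsibility")
    if confidence < pattern.confidence_floor:
        return pattern.escalate(task_type, "low confidence")
    return "agent-handled"

if __name__ == "__main__":
    pattern = TrustPattern(
        span={"invoice-matching", "status-update"},
        confidence_floor=0.8,
        escalate=lambda task, reason: f"human-review: {task} ({reason})",
    )
    print(route("invoice-matching", 0.95, pattern))  # within span, confident
    print(route("invoice-matching", 0.55, pattern))  # low confidence -> human
    print(route("contract-dispute", 0.99, pattern))  # outside span -> human
```

The point of the sketch is the shape, not the thresholds: every agent gets an explicit boundary, and everything outside it flows to a human by design rather than failing silently. That routing decision is exactly the variance work the launchpad roles above are built around.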

The organizations that redesign early will attract the next generation of contributors, and be ready for what the future demands: judgment, coordination, and the ability to work with machines, not just around them.
