Designing the AI-native enterprise: protocols, digital colleagues, and the new stack

Written by: Jens Eriksvik

As digital colleagues join the workforce, the platform-first model of enterprise IT is giving way to a protocol-based architecture that enables intelligence, agility, and coordination across human-AI teams.

Enterprise IT was built around the idea of standardisation. To scale efficiently, organisations invested in platforms that could centralise data, enforce process discipline, and act as the single source of truth. This logic shaped the rise of ERP, CRM and workflow tools - and created a generation of transformation programmes focused on digitising the past.

But that model is starting to collapse.

As explored in Enterprise software is dead(ish) - time to move on, the value creation layer is shifting away from rigid systems and into more fluid, composable architectures. In parallel, the nature of work is changing. With AI now capable of operating as a digital colleague, able to read, ‘reason’ and act across contexts, standardisation is no longer the north star. Instead, as we argued in Enterprise IT was built for standardisation - digital colleagues make that obsolete, adaptability and coordination are becoming the new foundations.

This article builds on those ideas and introduces a new layer in the enterprise stack: protocols. These are not systems or platforms, but lightweight rules that enable humans, AI agents and systems to interact safely, contextually and at speed. As we move toward a hybrid workforce of human and digital colleagues, it is this protocol layer, not the platform, that will determine how work gets done, and where enterprise advantage is created.

The quiet collapse of Enterprise SaaS logic

For more than two decades, enterprise value creation followed a predictable formula: digitise operations, implement ERP, CRM or HCM, and standardise processes. The assumption was clear: enterprise SaaS would deliver control, consistency and scale. And for a long time, it did. But it came at a cost - rigid workflows, fragmented user experiences, and systems optimised for stability rather than responsiveness.

Now, that logic is collapsing. AI doesn’t sit neatly inside traditional business applications. It moves across tools, reads unstructured inputs, reasons over data, and acts in context. It doesn’t wait for permission or pre-built workflows. It builds its own, on demand.

This shift is not about adding intelligence on top of legacy systems. It’s a deeper change, one that breaks the mental model of enterprise SaaS altogether. Businesses are moving from application-centric logic to interaction-driven environments. The idea that every business function needs its own system - HR needs HCM, finance needs ERP, sales needs CRM - is giving way to something more fluid, more composable, and more intelligent.

Forbes Tech Council reaches the same conclusion: AI is disrupting the SaaS landscape, not by replacing it with new vendors, but by changing how work is done - away from transactional systems and towards context-aware, AI-augmented flows.

Even large-scale infrastructure players like Ericsson now speak of the need for AI-native design, not just AI tools. It’s no longer about embedding intelligence in applications, but about rethinking the architecture of work itself.

At Algorithma, we've already seen this shift firsthand. As we argued in Enterprise software is dead(ish), enterprise value is no longer created by expanding the SaaS footprint. And as we wrote in Digital colleagues make standardisation obsolete, the logic of rigid systems and standard workflows no longer holds. What replaces it is something lighter, more dynamic, and far more powerful.

The AI-native enterprise doesn’t need more SaaS. It needs smarter protocols: flexible ways for people, AI agents and systems to interact safely, contextually and at speed.

From centralised systems to distributed coordination

Enterprise SaaS was designed to consolidate. One data model. One workflow engine. One system of record. The goal was to standardise how work happened, through systems that could be configured, monitored and controlled. But AI doesn’t operate that way. It doesn’t need everything to be centralised. It needs things to be connected.

This shift mirrors a broader architectural change already underway in modern computing. Centralised systems offer control and reliability, but suffer from bottlenecks, rigid dependencies and single points of failure. Distributed systems, by contrast, rely on nodes working independently and collaboratively, offering scalability, resilience and autonomy. But they only function when coordination happens through protocols, not control layers.

AI has evolved along the same trajectory. Earlier models were built to classify, tag or predict inside bounded, system-defined domains. Today’s models - particularly LLMs and tool-using agents - are designed to operate across silos, interpret ambiguous context, reason over multiple sources, and trigger actions dynamically. These agents don’t need to be embedded inside enterprise systems; they need structured ways to interact with them.

This is where protocols come in. They aren’t systems. They’re flexible coordination rules, ways for tools, APIs, humans and AI agents to engage with each other without relying on a single application or data model. Think of HTTP, Kafka, gRPC, and increasingly: task-level agent protocols like MCP. These protocols make it possible to move from fragmented automation to orchestrated intelligence.

This is what makes modern agents useful beyond the chatbot. They can now:

  • query across domains

  • understand organisational context

  • take informed actions

But only if the environment allows it, only if the enterprise provides protocols that enable distributed, permissioned, contextual collaboration. This is the shift from owning systems to orchestrating intelligence.

  • Model Context Protocol is an emerging open standard that enables AI agents to interact with tools, APIs and data sources in their environment; securely, contextually and dynamically.

    What it is: A protocol for AI agent interaction. Not a tool, not a platform.

    What it enables: Cross-domain reasoning and action by AI agents

    How it works: Agents interact via structured calls (e.g. JSON-RPC) to hosts that expose tools and context

    Security model: Host-controlled. Agents must request access and operate within permissioned scopes

    Designed for: AI-native workflows: reasoning, synthesis, decision support, multi-agent systems

    Used by: Replit, Sourcegraph, Anthropic, and emerging enterprise adopters

    Why it matters: Turns AI from an interface feature into a composable actor across workflows

    MCP makes it possible to move from chatbots and copilots to intelligent agents that operate within the enterprise - without being locked into any single system.

    This shift toward agent-based coordination is gaining momentum across the industry. Google recently introduced A2A (Agent-to-Agent), an open protocol for secure communication between AI agents. Like MCP, A2A reflects a growing consensus: the future of enterprise automation won’t be built on deeper system integrations, but on interoperable agents that can reason, act and collaborate across tools. These protocols are becoming the connective tissue that allows digital colleagues to operate across domains; securely, contextually, and at scale.

    Stay tuned for our deep-dive into this topic!
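To make the “structured calls” above concrete, here is a minimal Python sketch of an MCP-style exchange. The JSON-RPC 2.0 envelope and the `tools/call` method name follow the MCP specification; the tool name `crm.lookup_account`, its arguments, and the host’s permission scope are hypothetical illustrations of the host-controlled security model.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialise a JSON-RPC 2.0 request for an MCP-style tools/call."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Host side: the host, not the agent, decides which tools are in scope.
ALLOWED_TOOLS = {"crm.lookup_account"}  # permissioned scope (hypothetical)

def handle(raw_request: str) -> dict:
    req = json.loads(raw_request)
    tool = req["params"]["name"]
    if tool not in ALLOWED_TOOLS:
        # JSON-RPC error response: the agent asked for an out-of-scope tool
        return {"jsonrpc": "2.0", "id": req["id"],
                "error": {"code": -32602,
                          "message": f"tool not permitted: {tool}"}}
    # A real host would invoke the tool here; we return a stub result.
    return {"jsonrpc": "2.0", "id": req["id"],
            "result": {"content": [{"type": "text",
                                    "text": "account: ACME Corp"}]}}

response = handle(make_tool_call(1, "crm.lookup_account", {"query": "ACME"}))
```

The point of the sketch is the division of labour: the agent only ever expresses intent through the protocol, and the host enforces which capabilities exist and who may use them.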

The rise of digital colleagues

AI is no longer just a tool. It’s a teammate. In earlier phases, digital colleagues were assistants; helping summarise, flag, route and schedule. Today, they are stepping into real operational roles. Agents are already handling triage in customer service, assisting in financial forecasting, and supporting supply chain planning. And they’re doing so across multiple systems, in real time, with increasing autonomy.

  • They don’t sit inside enterprise SaaS. They operate across it.

  • They don’t follow predefined flows. They adapt to context.

  • They don’t wait for instructions. They anticipate and act.

This shift challenges how we think about roles, teams and accountability. If AI can perform specialised tasks alongside human employees, it makes sense to ask: why aren’t they formally part of the organisation?

Digital colleagues should be represented, clearly and explicitly. Not buried in infrastructure diagrams, but placed on the org chart with a defined scope, interaction model, and escalation path. This isn’t about symbolism. It’s about coordination. While the specifics may differ, like any team member, they need:

  • onboarding

  • boundaries

  • performance feedback

  • access to shared context

  • clear responsibilities

  • pathways to handle complexity or uncertainty

In short, they need careers, and organisations need to treat them as participants in the operating model, not as tools to be configured and forgotten. Simply put: not as an IT system.

This also requires rethinking the environment they operate in. Digital colleagues don’t thrive in rigid systems. They operate through protocols, i.e. flexible structures that allow secure, contextual interaction across tools and teams. That’s what enables agents to move between domains without becoming brittle or misaligned.

From software hierarchies to work networks

Traditional enterprise stacks were built on hierarchies. Systems owned the process. Data was bound to applications. Users were trained to navigate the complexity. Everything flowed top-down: a core system dictated how work should happen, and people followed it. There was (is?) even a name for this approach: “fit to standard”.

For years, the industry has promoted the idea of modernising the core, i.e. refactoring ERP, upgrading CRM, or consolidating SaaS into unified platforms. But this misses the point. The real constraint isn’t outdated technology. It’s the assumption that value is created inside systems. As we argued in Enterprise software is dead(ish), value creation now happens in the interactions between people, data and tools, composed in real time, not encoded into rigid workflows.

In a protocol-native architecture, work is orchestrated rather than controlled. Teams and agents interact through lightweight rules that allow them to reason, act and adapt, without needing to conform to a pre-built structure.

What this looks like in practice

In practice, the logic doesn’t live in a system. It lives in the network of interaction, enabled by protocols, executed by agents, and overseen by people.

What this changes for businesses:

  • Processes become adaptive instead of standardised

  • Teams become hybrid instead of siloed

  • Legacy systems become enablers, not bottlenecks

  • Software investment shifts from ownership to orchestration

This isn’t about ripping out old tools. It’s about reducing their gravitational pull. The core becomes infrastructure. The edge becomes intelligent. The question is no longer how to upgrade your ERP. It’s how to design work so that the right agent, human or digital, can make the right move at the right time.

  • Don’t rip them out. Redefine their role. These systems aren’t going away. But they are no longer where differentiation happens. Instead of treating them as the centre of the enterprise stack, treat them as stable infrastructure, critical, but no longer in charge.

    What this could look like:

    • Decentralise logic: Pull decision-making and workflows out of the ERP. Let agents and orchestration layers determine how work happens, and use the ERP to record the outcome, not to drive the process.

    • Expose the data, limit the control: Your CRM and HCM hold valuable data. Use APIs and secure protocols to make that data accessible to agents and other tools, without requiring everything to flow through the original system.

    • Stop customising, start orchestrating: Don’t build new features inside the ERP. Build lightweight agents or flows on top that can adapt as business needs change.

    • Treat core systems as compliance anchors: ERPs still matter for audit, traceability and policy enforcement. Let them govern what must happen, but not how it gets done day to day.

    The shift is from “this is where work happens” to “this is where records are kept.” By repositioning enterprise SaaS as stable infrastructure, rather than a source of innovation, companies can simplify their stack, reduce costs, and move faster where it matters.
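The “expose the data, limit the control” pattern above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions, not an implementation: the in-memory CRM store, the field names, and the `read_account` endpoint are all hypothetical stand-ins for a real API facade.

```python
# "Expose the data, limit the control": agents get a narrow, read-only
# view of CRM data instead of direct access to the system of record.

CRM_STORE = {  # hypothetical stand-in for the CRM's own data store
    "acct-1": {"name": "ACME Corp", "tier": "gold",
               "owner_email": "a@example.com"},
}

EXPOSED_FIELDS = {"name", "tier"}  # owner_email never leaves the CRM

def read_account(account_id: str) -> dict:
    """Read-only protocol endpoint: returns only the exposed fields."""
    record = CRM_STORE[account_id]
    return {k: v for k, v in record.items() if k in EXPOSED_FIELDS}

view = read_account("acct-1")
# view contains name and tier, but not owner_email
```

The design choice is that the boundary is defined once, at the facade, rather than re-negotiated inside every workflow that touches the CRM - the original system keeps the data, but no longer mediates every interaction.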

Organisational implications: how work must change

You don’t implement AI agents, you hire them. You give them a role, define their scope, connect them to the right data, and embed them in a team. This is already happening in operational functions across industries:

  • A digital colleague in customer service triages issues, drafts first responses, and hands off only what needs human review

  • In finance, an agent pre-screens purchase requests, approves those within policy, and escalates exceptions with full traceability

  • In product, an AI assistant scans feedback, support logs and usage patterns to highlight UX friction before it's manually reported

These agents are not extensions of software. They are digital team members that are deployed across tools, reasoning over context, contributing to outcomes.

They should be treated as such. Not as features embedded inside SaaS, but as key personnel; strategic assets that shape how work gets done. Where they sit, how they learn, what they access, and who governs them should be an intentional design decision, not a technical configuration. Embedding critical agents inside systems you do not own is a dependency, not a strategy.

  • Team structure: Digital colleagues are embedded in real teams, not owned by IT. They operate in sales ops, procurement, customer service and finance, contributing to live workflows and working alongside humans.

  • Roles and escalation: Agents operate within clearly defined scopes. They escalate based on policy, not permissions. Responsibility flows through the task, not the reporting line. A finance ops lead might manage two analysts and three agents, each measured against shared KPIs.

  • Workflow design: Workflows are designed around outcomes, not systems. Access is provisioned dynamically. Escalations are contextual. Coordination happens through agents, not routing layers.
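As a sketch of what a defined scope and policy-based escalation might look like in code: the role name, approval limit, and escalation owner below are all hypothetical, and the point is only that escalation follows policy and every decision leaves a trace.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalColleague:
    role: str
    approval_limit: float   # defined scope: what it may decide alone
    escalation_owner: str   # named human on the escalation path
    audit_log: list = field(default_factory=list)

    def handle(self, item: str, amount: float) -> str:
        # Escalation is driven by policy (the limit), not by permissions.
        if amount <= self.approval_limit:
            decision = "approved"
        else:
            decision = f"escalated to {self.escalation_owner}"
        # Full traceability: every decision is logged with its inputs.
        self.audit_log.append((item, amount, decision))
        return decision

agent = DigitalColleague("finance-ops-agent", 5000.0, "finance.ops.lead")
d1 = agent.handle("PR-101", 1200.0)    # within scope: decided alone
d2 = agent.handle("PR-102", 18000.0)   # outside scope: escalated
```

A finance ops lead managing analysts and agents against shared KPIs would review exactly this kind of audit trail, the same way they review a human colleague’s work.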

This is not about owning better tech. Tech is a commodity. The differentiator is how AI is applied inside your workflows.

“The organisations that win will not be the ones with the most powerful models. They’ll be the ones who structure work so human and digital colleagues can operate together; clearly, efficiently and with purpose.”

- Jens Eriksvik, CEO

The work ahead

This shift is underway. Digital colleagues are joining teams. Workflows are being restructured around intent. Coordination is becoming the new system of record. What separates leaders from laggards won’t be the AI model they choose. It will be how clearly they define roles, how they govern decision-making, and how well they design for collaboration between humans and machines.

The impact is measurable. Organisations already working this way are reporting double-digit gains in productivity and efficiency, driven not by tools, but by how they’re applied inside real workflows.

This isn’t a tech decision. It’s an operating model decision. And it starts by treating intelligence - human and digital - as a team design problem, not a systems upgrade.

At Algorithma, we help clients move beyond platform logic. We design agent-ready architectures, structure hybrid teams, and build governance protocols that make AI not just powerful, but safe, useful, and productive.
