Your AI governance stops at Microsoft 365 — and that is the problem

By Greg Markowski / Apr 10, 2026 / AI & Automation

Every business deploying AI in 2026 is having the same conversation: permissions, governance, acceptable use. And almost all of that conversation is about Microsoft 365.

That makes sense as a starting point. Copilot runs inside M365. It inherits your SharePoint permissions. If those permissions are too broad — and they almost always are — Copilot will surface data that staff were never supposed to see. Fixing that is important. But it is also table stakes.

The real productivity gains from AI do not come from summarising your Outlook inbox. They come from connecting AI to the systems where your actual business processes live: your accounting platform, your CRM, your project management tools, your HR system. The moment you do that, you have left the M365 permissions boundary entirely — and the governance model most businesses are building does not follow you there.

The M365 ceiling

Microsoft Copilot is good at what it does. It can summarise a Teams meeting, draft an email, pull data from a SharePoint document. It operates within the M365 ecosystem and respects the permissions your tenant already has in place.

But M365 is not where most business-critical processes happen. Your finance team reconciles invoices in Xero or MYOB. Your sales team manages pipeline in Salesforce or HubSpot. Your operations team tracks projects in Monday.com or Asana. Your HR team runs onboarding through Employment Hero or BambooHR. Your developers manage work in Jira and deploy through GitHub.

Copilot cannot touch any of that without additional configuration through Copilot Studio, custom connectors, or third-party middleware. Even then, the governance layer that works inside M365 — Conditional Access, sensitivity labels, Purview DLP — does not extend to those external systems. The permissions model breaks at the boundary.

Where the real gains are

The businesses getting transformative value from AI are the ones connecting it across platforms. Not asking AI to summarise a document, but asking it to do actual work that spans multiple systems.

A finance team that uses an AI agent to pull outstanding invoices from Xero, match them against purchase orders in SharePoint, flag discrepancies, and draft follow-up emails in Outlook — that is not a Copilot prompt. That is a cross-platform workflow that touches your accounting system, your document store, and your email in a single operation.

A sales team that uses an AI agent to review a Salesforce opportunity, pull the relevant contract from SharePoint, check the client’s billing history in Xero, and prepare a renewal proposal — that agent is operating across three completely separate permission domains.

A service desk that uses AI to triage incoming requests, check the client’s asset inventory, review their contract entitlements, and assign the right technician based on availability and skills — that workflow spans your ticketing system, your asset management platform, your CRM, and your scheduling tool.

These are not theoretical. Businesses are building these workflows now. And every one of them creates a governance problem that traditional cybersecurity frameworks were not designed to handle.

The governance gap nobody is talking about

When an AI agent connects to Xero via API, it does not authenticate as a user governed by your M365 Conditional Access policies. It authenticates with an API key or OAuth token that has its own permission scope — one that was probably configured once during setup and never reviewed again.
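Many OAuth access tokens are JWTs whose payload records the scopes granted at consent time, so the first audit step can be as simple as decoding one and reading what the integration was actually given. A minimal sketch (the token is constructed for illustration; the scope names mimic Xero-style accounting scopes but are not authoritative):

```python
import base64
import json

def jwt_scopes(token: str) -> list[str]:
    """Decode a JWT's payload (no signature check) and return its scopes."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    # Scopes may appear as a space-separated "scope" claim or an "scp" list.
    raw = payload.get("scope") or payload.get("scp") or ""
    return raw.split() if isinstance(raw, str) else list(raw)

# Build an illustrative token (header and signature parts are dummies).
claims = {"sub": "invoice-agent",
          "scope": "accounting.transactions.read accounting.settings offline_access"}
fake_token = ("eyJhbGciOiJub25lIn0."
              + base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
              + ".sig")

print(jwt_scopes(fake_token))
```

If the list that comes back is broader than the one task the agent performs, that token is the "configured once and never reviewed" problem made visible.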

When that same agent pulls data from Salesforce and combines it with data from M365, there is no unified permission model governing what the agent can see across both systems. The Salesforce permissions might allow access to every contact record. The M365 permissions might restrict access to certain SharePoint sites. But the agent can see both simultaneously and combine data in ways that neither system’s governance anticipated.

This is the gap. M365 governance is a solved problem — or at least a well-understood one. Cross-platform AI governance is not. And this is where most businesses are flying blind.

The specific risks include:

- API tokens and OAuth grants configured with broad permissions during initial setup and never scoped down.
- Agent identities that operate outside your identity provider, so Conditional Access policies do not apply.
- No unified audit trail across systems: your Xero logs show one thing, your Salesforce logs show another, and nobody has a consolidated view of what the AI agent did across both.
- Permission creep as agents are connected to additional systems over time without a formal review process.
- No clear accountability when an agent combines data from multiple sources and takes an action based on the combined view.

Why this matters now

Twelve months ago, cross-platform AI agents were an engineering project. Today, platforms like Salesforce Agentforce and Microsoft Copilot Studio are making it possible for non-technical teams to build multi-system agents using low-code tools. Salesforce reported 29,000 Agentforce deals in Q4 alone and processed 2.4 billion agentic work units last fiscal year. Microsoft is rolling out Agent Mode across Word, Excel, and PowerPoint, with multi-agent orchestration in Copilot Studio.

The tooling has outpaced the governance. Businesses can now build an AI agent that reads from Xero, writes to Salesforce, and sends emails through Outlook — in an afternoon. The question of who governs that agent, what it can access, and who is accountable when it makes a mistake has not caught up.

This is not a future problem. It is happening in businesses right now. And if your managed service provider is only talking about M365 permissions, they are solving last year’s problem.

What a cross-platform AI governance framework looks like

The governance model for cross-platform AI agents needs to go beyond M365 permissions. At a minimum, it needs to address four areas.

Agent identity and authentication. Every AI agent should have a managed identity — not a shared API key buried in a config file. That identity should authenticate through your identity provider where possible, and where it cannot, the credentials should be stored, rotated, and audited just like any other privileged account. The days of a developer setting up an API connection with their personal account and forgetting about it are over.
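Treating agent credentials like privileged accounts means, at minimum, recording when each secret was last rotated and flagging the stale ones. A minimal sketch of that check (the register entries and the 90-day window are illustrative, not a standard):

```python
from datetime import date, timedelta

ROTATION_POLICY = timedelta(days=90)  # illustrative policy window

# A credential register: one entry per agent secret, with its last rotation date.
credentials = [
    {"agent": "invoice-agent", "system": "Xero", "rotated": date(2026, 3, 20)},
    {"agent": "renewal-agent", "system": "Salesforce", "rotated": date(2025, 9, 1)},
]

def stale(register, today):
    """Return credentials whose last rotation is older than the policy window."""
    return [c for c in register if today - c["rotated"] > ROTATION_POLICY]

for c in stale(credentials, date(2026, 4, 10)):
    print(f'{c["agent"]} -> {c["system"]}: rotate (last rotated {c["rotated"]})')
```

The point is not the code but the register behind it: if no such list exists, nobody can say which agent secrets are overdue.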

Scoped permissions per system. Each connection an agent has to an external system should follow least-privilege principles. An agent that pulls invoice data from Xero does not need write access to your bank feeds. An agent that reads Salesforce opportunities does not need access to every contact’s personal details. Permissions should be scoped at setup, documented in a register, and reviewed quarterly.
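Least privilege can be checked mechanically: document what each connection needs, compare it against what was granted, and flag the difference at each quarterly review. A sketch using simple set arithmetic (system and scope names are illustrative):

```python
def scope_review(connections):
    """Flag every connection whose grant exceeds its documented requirements."""
    findings = {}
    for name, conn in connections.items():
        excess = set(conn["granted"]) - set(conn["required"])
        if excess:
            findings[name] = sorted(excess)
    return findings

# Illustrative register: documented need vs actual OAuth grant, per system.
connections = {
    "xero": {"required": ["accounting.transactions.read"],
             "granted": ["accounting.transactions.read", "accounting.settings"]},
    "salesforce": {"required": ["api"], "granted": ["api"]},
}

print(scope_review(connections))
```

Anything in the findings is a scope to remove, or a reason to update the register with why the agent genuinely needs it.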

Cross-platform audit trail. If an agent reads from Xero, processes data in your AI platform, and writes to Salesforce, you need a single audit log that captures the full chain — not three separate logs in three separate systems that nobody will ever correlate. This is where strategic IT planning matters: the audit architecture needs to be designed before the agents are deployed, not bolted on after an incident.
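One practical way to get that single chain is to stamp every step of a workflow with a shared correlation ID and write all steps to one log, whatever system they touched. A minimal in-memory sketch (the field names are assumptions, not a standard schema, and a real deployment would write to a central log store):

```python
import uuid
from datetime import datetime, timezone

audit_log = []  # stand-in for a central log store

def record(run_id, system, action, detail):
    """Append one step of an agent run to the unified audit trail."""
    audit_log.append({
        "run_id": run_id,
        "at": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "action": action,
        "detail": detail,
    })

# One agent run spanning three systems, tied together by a single run_id.
run_id = str(uuid.uuid4())
record(run_id, "Xero", "read", "fetched outstanding invoices")
record(run_id, "SharePoint", "read", "matched invoices to purchase orders")
record(run_id, "Outlook", "write", "drafted follow-up emails")

chain = [e for e in audit_log if e["run_id"] == run_id]
print(f"{len(chain)} steps in run {run_id}")
```

Querying by `run_id` reconstructs the full Xero-to-Outlook chain in one place, which is exactly what three separate system logs cannot do.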

Deployment gates and security review. No agent should go live without a structured review of what systems it connects to, what data it can access, what actions it can take, and what happens when it encounters an error. This is not bureaucracy. It is the same rigour you would apply to granting a new employee access to your financial systems — except the agent works faster, never sleeps, and does not ask for clarification when something looks wrong.
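The review itself can be encoded as a gate: a fixed checklist that must be complete before an agent's connections are enabled. A sketch, with checklist fields drawn from the questions above (the field names and entries are illustrative):

```python
REQUIRED_ANSWERS = ("systems", "data_access", "actions", "error_handling", "owner")

def gate(review: dict) -> list[str]:
    """Return the unanswered review questions; an empty list means go-live."""
    return [q for q in REQUIRED_ANSWERS if not review.get(q)]

review = {
    "systems": ["Xero", "SharePoint", "Outlook"],
    "data_access": "invoices (read), purchase orders (read), mail (draft only)",
    "actions": "draft emails; never send without human approval",
    "error_handling": "",  # not yet documented, so the gate stays closed
    "owner": "finance-team-lead",
}

missing = gate(review)
print("blocked:" if missing else "approved", missing)
```

A low-code platform will not enforce this for you; the gate has to live in your change process, the same place new-starter access requests do.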

What you should do now

Audit your existing API connections. Most businesses have more API integrations than they realise. Xero alone has an app marketplace with hundreds of connected services. Salesforce AppExchange has thousands. Find out what is connected to what, who set it up, what permissions it has, and whether anyone is reviewing those connections. If nobody can answer those questions, you have a governance gap.
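A first pass at that audit is just an inventory: for each connection, who owns it, what it can do, and when it was last reviewed. Anything with a blank answer goes on the gap list. A sketch (the entries are illustrative):

```python
# Minimal inventory of API connections; None marks an unanswered question.
inventory = [
    {"name": "xero-sync", "owner": "finance", "scopes": "read-only", "last_review": "2026-01"},
    {"name": "sf-exporter", "owner": None, "scopes": "full", "last_review": None},
]

# Connections nobody owns or nobody has reviewed are the governance gaps.
gaps = [c["name"] for c in inventory if not c["owner"] or not c["last_review"]]
print("governance gaps:", gaps)
```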

Start with M365, but do not stop there. Getting your Microsoft 365 permissions right is the necessary first step. Block unsanctioned AI tools, review your SharePoint sharing, establish sensitivity labels. But treat that as the foundation — not the finish line. The next conversation should be about what happens when AI starts reaching into the systems outside Microsoft.

Ask your technology partner about cross-platform governance. If your IT provider can articulate how they govern Copilot permissions but goes quiet when you ask about AI agents that connect to Xero or Salesforce, that is a gap. The governance framework needs to cover every system an AI agent touches, not just the Microsoft ones.

At Epic IT, we have built our AI governance framework to work across the full technology stack — not just M365. From deny-by-default enforcement to cross-platform agent auditing, we help Perth businesses adopt AI safely across every system they use. Contact us on 1300 EPIC IT to talk about where AI fits in your business.

Frequently asked questions

Can Microsoft Copilot access data outside Microsoft 365?

Not natively. Copilot operates within the M365 ecosystem and respects M365 permissions. To access external systems like Xero or Salesforce, you need Copilot Studio with custom connectors or third-party middleware. When you do, those external connections fall outside M365’s built-in governance controls.

What is cross-platform AI governance?

Cross-platform AI governance is the practice of managing how AI agents access, combine, and act on data across multiple business systems — not just Microsoft 365. It covers agent identity management, scoped permissions per system, unified audit trails, and deployment review gates for any AI workflow that spans more than one platform.

Why is M365 governance not enough for AI?

M365 governance covers permissions within Microsoft’s ecosystem — SharePoint, Outlook, Teams. But most businesses run critical processes in external platforms like Xero, Salesforce, and HubSpot. When an AI agent connects to these systems, M365 Conditional Access, sensitivity labels, and DLP policies do not apply. A separate governance layer is needed.

What are the risks of connecting AI agents to Xero or Salesforce?

The main risks are overly broad API permissions, unmanaged OAuth tokens, no unified audit trail across systems, permission creep over time, and unclear accountability when an agent combines data from multiple sources. These risks grow as more systems are connected and more agents are deployed.

How does Epic IT govern AI agents that work across multiple platforms?

We apply managed agent identities, scoped permissions per connected system, cross-platform audit logging, and a deployment gate process that reviews every agent before it goes live. This sits on top of our M365 governance foundation — deny-by-default blocking, permissions governance, and ongoing monitoring — so the full technology stack is covered.

Want the full cross-platform AI governance framework?

Download our white paper for the detailed methodology — including the tools, the audit architecture, and the deployment review process. Or contact us on 1300 EPIC IT to talk about your specific environment.

Download the White Paper

About the Author
Written by Greg Markowski, Founding Director of Epic IT — a CRN Fast50-recognised, Microsoft Solutions Partner managing IT and cybersecurity for Perth businesses since 2003. Greg holds a Degree in Computer Science and a Diploma in Computer Systems Engineering from Edith Cowan University, and is ITIL certified.

Further Reading

ChatGPT vs Copilot vs Claude: which AI should your business actually use in 2026?
Does the new Privacy Act apply to your small business?