Goldman just stood up a $1.5 billion service company to deploy AI inside operating businesses. The Australian mid-market will not be reached. Three questions every CEO should ask their IT partner this Friday to find out whether they are ready.
Last Monday, Goldman Sachs, Blackstone, Hellman & Friedman and a group of investors put $1.5 billion behind a new kind of company. Not a fund and not a consulting firm. A service company that embeds Anthropic engineers inside operating businesses to wire AI agents into the systems those businesses already run.
Anthropic’s own job listings tell you what those engineers actually deliver: MCP servers, sub-agents and agent skills. That is the architecture. It is being industrialised at the top of global enterprise right now.
The interesting question for the Australian mid-market is what happens next.
Last week’s piece traced the structural shift. The events of the last six days made it concrete.
JPMorgan, Goldman Sachs, Citi, AIG and Visa are already production Claude customers. Last Tuesday, Jamie Dimon shared a stage in New York with Anthropic’s CEO and disclosed he had built himself a markets dashboard covering asset swaps and Treasury bid-ask spreads in Claude Code over the weekend, in twenty minutes. By Tuesday close, FactSet was down 8.1% and Morningstar was off more than 3%. The market priced what had happened before most readers did.
The $1.5 billion service company is the answer to the question last week’s piece left open. How does this architecture get into operating businesses? Forward-deployed engineers, embedded inside the customer, wiring agents into the systems already running. Same playbook Palantir wrote at the intelligence agencies.
The thing the announcement does not say out loud is which businesses get reached. The service company will work through PE-portfolio companies. The largest US enterprises first. The ASX 50 second, if at all. The 99% that is the Australian mid-market sits two or three tiers below that. The architecture will reach you. The service company will not.
I spent five years inside Group Treasury at Macquarie and Westpac, working under APRA’s prudential framework on interest rate risk in the banking book, liquidity coverage ratio and capital transformation. Before that, front-office work on multi-strategy volatility funds. The Bloomberg Terminal was the rails of that desk. Every market, every news feed, every piece of analytics, on one screen. The fund manager did not ask their analyst what the position looked like. They asked the screen.
The Terminal cost more than $30,000 a seat per year and was the privilege of firms that could justify the tooling. On 23 February, Bloomberg launched ASKB into beta and the Terminal stopped being a screen and started being an agent. Same architecture pattern Anthropic shipped last week for finance work.
Two and a half months later, the same architecture sits over any 200-staff Australian business’s own systems, against your own data, on your own infrastructure. You do not need Bloomberg’s content. You already own better data for your purposes: sales pipeline, cash position, project margins, client history. What you are unlikely to have ever had is anything walking those rails on your behalf in real time. The APIs are not new. Your finance system, your CRM, your line-of-business tool already expose them. MCP is the protocol that lets an agent walk all of them as if they were one screen, the same shift the Bloomberg Terminal made for market data in the 1980s.
That gap closed at the top of the market last week. It will close at the next layer down through whichever IT partner you already work with, or whichever new one you sign up to.
A trusted partner can answer all three from memory. A vendor can answer one and a half. A salesperson can answer none.
A real answer names four pieces and tells you how they sit together. MCP servers for the hands, exposing every system you already own through a standard interface, scoped per user. Skills for the procedures, the codified instructions for how the work gets done in your voice. A frontier model for the brain. Claude for high-stakes work, an open-source model running on your own hardware for routine volume. A managed runtime for the body, with hosted compute, audit logs, credential vaults and OAuth proxy in one place.
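To make the first two pieces concrete, here is a sketch of what per-user scoping of MCP-style tools amounts to. Every name in it is invented for illustration; this is not Anthropic's MCP SDK, just the shape of the pattern: tools exposing systems you already own, each call gated by the scopes the signed-in user actually holds.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: an MCP-style tool registry where every
# tool is scoped per user. Names are invented, not the real MCP SDK.

@dataclass
class Tool:
    name: str
    required_scope: str   # e.g. "crm:read", "finance:read"
    handler: callable

@dataclass
class ToolRegistry:
    tools: dict = field(default_factory=dict)

    def register(self, tool: Tool):
        self.tools[tool.name] = tool

    def call(self, user_scopes: set, name: str, **kwargs):
        tool = self.tools[name]
        if tool.required_scope not in user_scopes:
            raise PermissionError(f"{name} requires scope {tool.required_scope}")
        return tool.handler(**kwargs)

registry = ToolRegistry()
registry.register(Tool("crm.lookup_client", "crm:read",
                       lambda client_id: {"id": client_id, "name": "Acme Pty Ltd"}))

# An agent session carries only the scopes granted to the signed-in user.
result = registry.call({"crm:read"}, "crm.lookup_client", client_id="C-1042")
```

The point of the sketch is the gate: the agent never holds blanket access. A session without `crm:read` gets a refusal, not a record.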
Bad news: you will need to lift your data governance to unlock this. Good news, sort of: the issues you will find are mostly ones you already have. AI just compounds them.
If your partner uses words like “AI assistant” or “intelligent automation” without naming this architecture, they have not deployed one. Anthropic’s own job description for Forward Deployed Engineers names MCP servers, sub-agents, and agent skills as the work, in print, on Greenhouse. There is no excuse for a partner not to know it.
For a 200-staff Sydney accounting firm, the work that lands inside this architecture in May and June is the mid-year tax planning pack. Two hundred business clients. Each one needs a bespoke pack on Division 7A, trust distributions, super, and FBT. The agent walks each client’s prior year and current YTD, drafts the pack against firm precedent, and surfaces it for partner sign-off. Six weeks of partner time compressed. Same architecture pattern Dimon used, different seat and different data. All the king’s horses and all the king’s men aren’t putting the billable hour back together again.
A real answer is sentence-by-sentence specific. Permissions scoped per session and tied to a real user identity. OAuth tokens in a managed vault, never in a prompt. Every tool call logged, signed and reversible. Data flowing out through the MCP boundary, where it can be filtered, masked or refused. Sensitivity labels travelling with the data. Skills version-controlled, reviewable and signed.
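What "every tool call logged and signed" means in practice can be shown in a few lines. This is an illustrative sketch under stated assumptions, not any vendor's implementation: the signing key would come from the credential vault, and every call appends a tamper-evident record before the tool runs.

```python
import hmac, hashlib, json

# Illustrative sketch: every tool call appended to a tamper-evident log.
# The signing key lives in a managed vault; the placeholder below stands
# in for a retrieved secret and never appears in a prompt.
AUDIT_KEY = b"retrieved-from-credential-vault"  # placeholder, not a real secret
audit_log = []

def logged_call(user, tool_name, fn, **kwargs):
    record = {"user": user, "tool": tool_name, "args": kwargs}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(AUDIT_KEY, payload, hashlib.sha256).hexdigest()
    audit_log.append(record)
    return fn(**kwargs)

out = logged_call("jsmith", "finance.cash_position",
                  lambda entity: {"entity": entity, "aud": 1_250_000},
                  entity="AU-HoldCo")
```

Anyone with the key can later recompute the signature and prove the log entry was not altered after the fact. That is what "reversible and auditable" costs: one wrapper, not a rebuild.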
If your partner says “we figure that out during deployment” or “the model handles that,” they are running agents on borrowed time. The same Zero Trust pattern your security team already applies to identity and endpoints has to extend to the agent layer. There is no second version of this where it does not.
The honest answer is that the leading frontier model gets roughly a third of what it is tested on wrong. Claude Opus 4.7, the model Anthropic shipped last week for finance work, leads Vals AI’s Finance Agent benchmark at 64%. The 2026 Cambridge Centre for Alternative Finance report found 70% of both industry firms and regulators rate model hallucinations as a top concern.
Sarbanes-Oxley certifications cannot be delegated to AI. AHPRA scope of practice cannot be delegated to AI. APRA CPS 234 obligations cannot be delegated to AI. A partner who promises full automation has not read the standards.
Closer to home, WA’s Privacy and Responsible Information Sharing Act lands on 1 July 2026. IPP 10 governs automated decision-making, and if you touch WA government data your AI agents are in scope. You will need to explain how a decision was reached. Deterministic scripts show that on demand. Probabilistic LLM agents cannot, not without an audit layer built in from day one.
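The audit layer is less exotic than it sounds. As a sketch only, with every field name invented for the example rather than drawn from the PRIS Act, an explainable decision record looks like this: capture exactly what the decision saw, what decided it, and why, at the moment it happens.

```python
import datetime

# Illustrative sketch: a record written for every automated decision,
# so "how was this reached?" can be answered on demand. Field names
# are invented for the example, not drawn from the PRIS Act.

def record_decision(subject, inputs, decided_by, outcome, rationale):
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "subject": subject,
        "inputs": inputs,          # exactly what the decision saw
        "decided_by": decided_by,  # deterministic rule id, or model + version
        "outcome": outcome,
        "rationale": rationale,    # human-readable explanation
    }

decision = record_decision(
    subject="client C-1042",
    inputs={"ytd_distributions": 180_000, "trust_type": "discretionary"},
    decided_by="rule:div7a-threshold-v3",  # hypothetical deterministic rule
    outcome="flag_for_partner_review",
    rationale="YTD distributions exceed the firm's Division 7A review threshold.",
)
```

Note what the `decided_by` field forces: a deterministic rule is explainable by construction, while a model-made decision has to carry its model and version so the answer can at least be reconstructed. That is the audit layer built in from day one, not bolted on.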
A productivity partner gets your team out of the two-thirds the agent does well, and into the third it does not. They know when to use a probabilistic model, when to use a deterministic script, and when to keep the work in human hands.
That is the unlock: reallocation, not replacement.
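The triage a productivity partner applies can be written down. This is a sketch with invented category names and task fields, not anyone's production router, but the logic is the whole argument in three branches.

```python
# Illustrative sketch of the triage: human where accountability cannot
# be delegated, deterministic script where the rule is fixed, model
# with review for the rest. Categories and fields are invented.

def route(task):
    # Regulatory sign-off cannot be delegated: SOX certification,
    # AHPRA scope of practice, APRA CPS 234 obligations.
    if task["regulated_signoff"]:
        return "human"
    # Fixed calculations and lookups: deterministic, explainable on demand.
    if task["rule_based"]:
        return "deterministic_script"
    # Drafting, summarising, triage: probabilistic model, human reviews.
    return "model_with_human_review"

routed = route({"regulated_signoff": False, "rule_based": False})
```

Notice that "fully automated" is not a branch. Every path either stays deterministic or ends at a human.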
A senior operator put a name to what comes next in the comments under last week’s LinkedIn post. Oliver Gohl from AvePoint framed it well: managed service providers reshaping themselves into productivity partners to the SMB and SME across Australia.
Forward-deployed engineering is the Palantir name for the work, which Anthropic borrowed. What the term describes is what a senior MSP architect already does, embedded inside one customer instead of rotating across many, building agent systems on the client’s actual data instead of integrating vendor product. The capability gap that separates a current architect from a forward-deployed engineer is AI training, not professional pedigree. That gap closes.
The engagement is small. Two or three of your sharpest people, six to twelve weeks alongside the partner’s architects, on the workflow that matters most. They build the agent and learn the architecture. The squad goes back to the business with the pattern in their head for the next workflow. Two routes from here. Build internally with your own AI engineering team, or work with a productivity partner. The question is not whether you can afford to build. It is whether you want to own the risk.
The talent is already moving. Senior engineers are walking out of Big Tech and the Big Four into the mid-tier, some into partner firms, some into mid-market customers as internal AI leads, all working alongside each other on real business problems. The productivity partner is the channel. Not in 18 months. This Friday.
If your three questions don’t get the answers above, the team you are working with is not ready to run agents for your business. An AI assessment will tell you, honestly, whether you are five steps away from operating differently or five years.
Goldman Sachs, Blackstone, Hellman & Friedman and a coalition of investors including Apollo, General Atlantic, Leonard Green, GIC, and Sequoia Capital put $1.5 billion behind a service company that embeds Anthropic engineers directly inside operating businesses to wire AI agents into the systems those businesses already run. The model resembles Palantir’s forward-deployed engineering at intelligence agencies. The first cohort of customers is PE-portfolio companies. Australian mid-market sits two or three tiers below that.
Four pieces. MCP servers expose your business systems through a standard interface, scoped per user. Skills are folders of instructions and procedures that codify how the work gets done in your voice. A frontier model like Claude Opus 4.7 decides which skill to load and which system to call, in what order. A managed runtime gives you hosted compute, audit logs and credential vaults. Anthropic’s public Forward Deployed Engineer job listing names MCP servers, sub-agents, and agent skills as the work.
Small and focused. Two or three of your sharpest people, six to twelve weeks alongside the partner’s architects, working on the single workflow that matters most to your business. The agent gets built and stays. The squad returns to running the business with the architecture pattern in their head for the next workflow. First measurable productivity inside 90 days on the workflow you piloted. Material change to how operations staff spend their hours at six months. Behaviour change across the whole organisation at 12 to 18 months. Anyone compressing that timeline is skipping permissions, credentials or audit, and that path ends in a Privacy Act remediation.
You can. Hiring your own AI engineering team is a real option for any Australian business with the budget. The harder question is whether you want to own the risk that comes with it: hiring in a tight market, retaining people who are mobile, making architectural calls before you have seen many deployments, and carrying the compliance and audit obligations on your own. A productivity partner absorbs most of that risk and brings the pattern recognition that comes from running these deployments at many other businesses. The model that fits most mid-market situations is one strong internal operator who owns the relationship and direction, plus a productivity partner that brings the architecture, the volume, and the bench.
Sarbanes-Oxley certifications, AHPRA scope of practice, APRA CPS 234 obligations, WA’s PRIS Act IPP 10 obligations on automated decision-making, legal professional privilege, and Privacy Act health-information determinations. The current frontier model gets roughly one in three test items wrong on the leading finance benchmark. None of those regulatory accountabilities survive that error rate. The unlock is reallocation of human time, not replacement.
Permissions scoped per session. Credentials in a vault, never in a prompt. Every call audited and reversible. Data flowing through a controlled boundary where it can be filtered, masked or refused. Procedures version-controlled and reviewable. Same Zero Trust pattern your security team already applies to identity and endpoints, extended to the agent layer. For WA government work and contracted service providers, PRIS Act IPP 10 obligations on automated decision-making apply from 1 July 2026, which sharpens the case for deterministic automation alongside probabilistic agents.
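"Filtered, masked or refused" at the boundary is a policy table and a loop. The sketch below is illustrative only; the labels, the field names, and the tax-file-number pattern are invented for the example, but the deny-by-default stance is the point.

```python
import re

# Illustrative sketch: the MCP boundary filters or masks sensitive
# fields before anything reaches the model. Labels, fields, and the
# TFN pattern below are invented for the example.

SENSITIVITY = {"tfn": "mask", "notes": "allow", "health_flag": "refuse"}
TFN_RE = re.compile(r"\d{3} ?\d{3} ?\d{3}")

def filter_record(record):
    out = {}
    for key, value in record.items():
        policy = SENSITIVITY.get(key, "refuse")  # deny by default
        if policy == "allow":
            out[key] = value
        elif policy == "mask":
            out[key] = TFN_RE.sub("***", str(value))
        # "refuse": the field never leaves the boundary at all
    return out

safe = filter_record({"tfn": "123 456 789",
                      "notes": "FY25 planning",
                      "health_flag": True})
```

An unlabelled field is refused, not passed through. That single default is most of the difference between an explicit boundary and inherited permissions.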
Microsoft Copilot is a productivity overlay on Microsoft 365 that inherits whatever permissions already exist in the M365 estate. For most Australian mid-market businesses, those permissions are far broader than they should be. The architecture described here sits across every business system, not just Microsoft 365, and is governed by an explicit boundary, not by inherited M365 permissions.