Your AI strategy is now a compliance strategy. WA goes first.

By Moe Chizari / May 13, 2026 / AI & Automation

PRIS commences on 1 July 2026. The way Australian businesses plan their AI work just changed. WA goes first.

An AI agent built for an Australian business has four moving parts. MCP servers expose the systems. Skills encode the procedures. A frontier model produces the inference. A managed runtime carries the compute, the credentials and the logs. That architecture turns a chat product into something operational, and we covered it in detail recently. The question Australian businesses have been asking is which of the four to deploy first. Seven weeks from today, that becomes the wrong question. The right question is whether those four pieces alone can satisfy the obligations the law now places alongside them.

Western Australia’s first dedicated privacy law, the Privacy and Responsible Information Sharing Act 2024, commences on 1 July 2026. One of its eleven Information Privacy Principles, IPP 10, sits directly on top of automated decision-making. If your business touches WA government data, including as a contracted service provider one tier removed, your AI strategy is now also a compliance strategy. That is true whether you have deployed an agent yet or not.

The Australian mid-market has not deployed many AI agents. We work with businesses across the country every week and the honest read is that most are still planning, not running. That is exactly the right moment for this regulation to land. The decisions made in the next two quarters about AI architecture, vendor selection, internal governance, and what work a model is allowed to touch are the decisions PRIS will measure. Building it right is straightforward. Retrofitting compliance to an agent already shipping is not.

WA goes first, and the rest of the country sits on the same trajectory. The federal Privacy Act tier 2 reforms, the NSW privacy regime review, and the Victorian Information Privacy Act review are all moving in the same direction on automated decision-making. Whatever Western Australia ships in 2026, the others ship inside the following eighteen to thirty months. For any Australian business with a meaningful AI program planned for the next two years, this stops being a Western Australia story very quickly.

What PRIS actually does

The Privacy and Responsible Information Sharing Act 2024 received Royal Assent on 6 December 2024. Most operative provisions commence on 1 July 2026. The notifiable information breach scheme follows on 1 January 2027. The Information Commissioner and Privacy Deputy Commissioner have been in role since 1 July 2025, preparing the regulatory infrastructure that lands next year.

The Act introduces eleven Information Privacy Principles that bind IPP entities. IPP entities are WA government departments, statutory authorities, local and regional governments, government trading enterprises, the WA Police Force, courts and tribunals, universities, and certain contracted service providers handling personal information on behalf of those entities. Unlike the Commonwealth Privacy Act, PRIS does not have a small business exemption and it does not have an employee records exemption.

That is the structural part, and it is the part the law firms have been writing about. The substantive part, the part that has not been written about as much, is that PRIS is the first Australian privacy regime to put automated decision-making inside the act itself. Other Australian jurisdictions are reviewing equivalent provisions. WA shipped them first.

Why IPP 10 is the principle that changes your AI plan

Ten of the eleven IPPs cover ground that any privacy professional already knows. Collection notices, use and disclosure, data quality, security, transparency, access and correction, unique identifiers, anonymity, transborder flow, sensitive information. The Commonwealth Privacy Act covers most of the same ground through the Australian Privacy Principles. A business that has done the work for APP compliance has done most of the work for IPP compliance.

IPP 10 is the one that does not have a federal equivalent. It governs automated decision-making, and it imposes three obligations that the standard AI architecture does not deliver on its own.

First, notification. The IPP entity must notify an individual when an automated decision is being made about them. Not in a buried privacy policy. In the moment, or as close to it as practicable.

Second, human intervention. The individual must have a real pathway to request a human decision-maker. Not a chatbot escalation. An actual person with the authority to overturn what the model produced.

Third, explainability. The IPP entity must be able to explain how a decision was reached. The basis for the inference. The factors that were weighed. The data that informed the outcome.
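Taken together, the three obligations describe a minimum record the system has to produce for every automated decision. A minimal sketch in Python of what that record might look like; the field names are illustrative, not drawn from the Act:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """Illustrative minimum record for one automated decision under IPP 10-style obligations."""
    subject_id: str                          # the individual the decision is about
    decision: str                            # what the system decided
    notified_at: Optional[datetime]          # obligation 1: when the individual was told ADM was in use
    explanation: str                         # obligation 3: factors weighed and data relied on
    human_review_requested: bool = False     # obligation 2: pathway to a human decision-maker
    reviewer_id: Optional[str] = None        # the person with authority to overturn the output
    reviewer_outcome: Optional[str] = None   # e.g. "upheld" or "overturned"

    def is_compliant_shape(self) -> bool:
        """Checks the record carries all three obligations, not that the decision was right."""
        notified = self.notified_at is not None
        explained = bool(self.explanation.strip())
        review_closed = (not self.human_review_requested) or self.reviewer_id is not None
        return notified and explained and review_closed
```

A record with a human review requested but no reviewer assigned fails the shape check, which is the point: the obligation is a pathway that actually terminates in a person.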

A probabilistic large language model does not produce that explanation natively. It produces a token sequence. The same prompt does not reliably return the same output. The model cannot articulate which training data informed which inference, because that information is not exposed to the model itself. The 2026 Cambridge Centre for Alternative Finance report found 70% of both industry firms and regulators rate model hallucinations a top concern. The regulator and the practitioner agree on this one. The model gets things wrong, and when it does, the model cannot tell you why.

IPP 10 closes that gap by statute.

The fifth piece

Those four pieces describe what the agent does. IPP 10 adds a fifth piece that describes what the agent has to show.

An explainability and audit layer that sits over the top of the four.

The closest familiar example is a flight data recorder. Every commercial aircraft has carried one since 1958. The recorder is not optional. It captures every input the pilot made and every output the aircraft produced, in a form that can be retrieved and read after the fact. The reason the industry rebuilt around the black box was not safety culture. It was the Comet disasters of 1954. Three Comet airliners broke apart at altitude inside twelve months and the industry could not explain why. After the recorder, every incident generated an evidence trail that could be examined, learned from, and corrected. The aircraft kept flying. The system around it changed.

AI agents in Australian businesses are at the Comet moment. The model is the cockpit. The fifth piece is the recorder.

In practical terms the fifth piece is four things sitting alongside the agent.

A pre-decision layer that classifies the sensitivity of the input, captures consent where needed, and surfaces a notification banner to the affected individual.

A during-decision layer, almost always deterministic code wrapping the probabilistic model call, that logs the exact input, the model’s exact output, the prompt template version, and the path through the system.

A post-decision layer that exposes the output to a human reviewer, records their acceptance or override, and writes the result back to the audit log.

And a retention layer that holds those logs in a form that can survive an Information Commissioner investigation, which means tamper-evident storage and explicit retention periods.
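The during-decision layer is the most mechanical of the four, and a sketch makes the point: deterministic code wraps the probabilistic call and writes the log entry before the output is used anywhere. Everything here is illustrative; `call_model` stands in for whatever vendor SDK the agent actually uses, and the version string is hypothetical:

```python
import hashlib
import json
from datetime import datetime, timezone

PROMPT_TEMPLATE_VERSION = "shortlisting-v3"  # hypothetical template version identifier

def call_model(prompt: str) -> str:
    """Stand-in for the real (probabilistic) model call; deterministic here so the sketch runs."""
    return f"score=7 rationale=placeholder for {len(prompt)} chars of input"

def logged_decision(user_input: str, audit_log: list) -> str:
    """Deterministic wrapper: logs the exact input, exact output, prompt template
    version, and a timestamp, appended before the output reaches the caller."""
    output = call_model(user_input)
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "prompt_version": PROMPT_TEMPLATE_VERSION,
        "input_sha256": hashlib.sha256(user_input.encode()).hexdigest(),
        "input": user_input,
        "output": output,
    })
    return output
```

The discipline is in the ordering: the entry exists for every call, including the ones a human reviewer later overturns.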

None of that is exotic. The pattern is the same Zero Trust pattern your security team already applies to identity and endpoints, extended to the agent layer. What is missing in most AI deployments shipping in Australia right now is not the capability. It is the discipline of treating the agent layer as a system that needs governance designed in from the start, not retrofitted after the regulator knocks.

What the agent gives you versus what IPP 10 requires

This is the gap most AI rollouts have not yet seen laid out. The left column is what the four-piece architecture produces by default. The right column is what IPP 10 requires sitting alongside it. The distance between the two columns is the work.

What the four-piece agent gives you | What IPP 10 requires
A decision | A decision plus an explanation of how it was reached
Inference at speed | Inference at speed plus notification to the affected individual
Probabilistic output | Probabilistic output plus a pathway for human intervention
A model call returning a response | A model call plus structured logs that survive a regulator’s investigation
Vendor SDK and an API key | Vendor SDK, API key, and a recorded chain of who saw what data and why
A trained model | A trained model plus a record of which inputs informed the inference
Output delivered | Output delivered plus a reversal pathway if the decision is challenged

Three places this lands in real businesses

The abstract version of this is uninteresting. The specific version is where most operators recognise their own environment.

AI-assisted shortlisting. A WA accounting firm under panel contract to a state agency uses an applicant tracking system with built-in AI shortlisting. Resumes are rated. The top ten are surfaced to the recruiter. The system has been in place for two years. From 1 July, every applicant whose resume is rated by that system has a right to be notified, a right to request a human review, and a right to ask how the rating was reached. The firm has not asked the ATS vendor any of those questions yet. The vendor probably cannot answer them either.

AI-driven service triage. A local government uses a model to prioritise incoming requests, complaints, and maintenance work orders. The model assigns a score. The score determines who gets called back this week and who waits a month. From 1 July, the resident whose request was de-prioritised by the model has a right to know it was scored by an automated system, a right to a human review, and a right to an explanation. The vendor SaaS dashboard does not currently produce that explanation.

AI-generated client communications. An IT consultancy under CSP contract drafts client emails, response templates, and meeting summaries using an LLM. Some of those communications go to citizens on behalf of a WA agency. The agency, not the consultancy, is on the hook for IPP 10 obligations on the resulting decisions. The consultancy has not had that conversation with the agency yet. The agency probably has not had it internally either.

Each one is a small operational fact. Together they describe most of the Australian mid-market.

What to do in the next two quarters

Five things, in order. Skip any of them and the rest do not work.

Map the automated decision-making already in your business. You cannot govern what you have not catalogued. Most operators underestimate how much ADM is already running. Recruiting tools, support prioritisation, marketing personalisation, fraud scoring, document classification. Pull the list. Be honest about it.

Identify your WA Government exposure. Direct contracts with WA public entities. Tier 2 exposure through other CSPs. State funding pathways. Local council contracts. If your annual revenue from any WA public entity is meaningful, assume scope and read the contracting clauses.

Designate a privacy officer. Section 151 of the PRIS Act requires a senior officer to hold this function. The right person is not necessarily the most senior. The right person is the one who can actually make changes happen across the systems involved.

Run a Privacy Impact Assessment on any high privacy impact activity. PRIS makes the PIA a statutory step for high-impact activities, not a nice-to-have. Get the methodology in place now, because the first one will take the longest.

Plan the fifth piece alongside the agent architecture, not after it. Choose AI vendors that produce structured logs and explainability hooks by default. Test the audit trail before the model touches production data. The cheap way to do this is during the pilot. The expensive way is during the regulator’s investigation.
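Tamper-evident, in practice, most often means each log entry carries a hash of the entry before it, so altering any record breaks every hash after it. A minimal sketch of that pattern, not a substitute for a hardened audit store:

```python
import hashlib
import json

def append_entry(chain: list, record: dict) -> None:
    """Appends a record whose hash covers both its content and the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(chain: list) -> bool:
    """Recomputes every hash in order; any edited or removed entry makes verification fail."""
    prev_hash = "genesis"
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

This is the property worth testing in the pilot: edit one historical record and confirm verification fails.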

Why this stops being a Western Australia story

The Commonwealth Privacy Act has been in tier 2 reform since the Attorney-General’s 2023 response to the Privacy Act review. Recommendations 19 and 20 specifically address automated decision-making. The reforms are progressing through Parliament in stages. NSW is reviewing the Privacy and Personal Information Protection Act 1998 with ADM provisions on the table. Victoria opened its Information Privacy Act review in 2024. GDPR Article 22 has been law in Europe since 2018, and the regulatory direction in Australia tracks Europe more often than not.

WA is not an outlier. WA is the leading edge of a trajectory that runs through 2027 and 2028. The Western Australian businesses that build IPP 10-grade architecture in the next two quarters have eighteen to thirty months of regulatory clear air before their interstate peers face the same obligations. Most regulatory windows do not work in favour of the regulated business. This one does, for the businesses that act before the deadline.

An AI assessment will tell you, honestly, where your current architecture sits against what IPP 10 will require, and which decisions you are about to make can be made differently. That is the conversation we are having with WA clients this quarter, and the one we expect to be having with the rest of the country through 2027.

Frequently asked questions

When does the PRIS Act commence?

Most operative provisions of the Privacy and Responsible Information Sharing Act 2024 commence on 1 July 2026. The notifiable information breach scheme commences on 1 January 2027. The Information Commissioner and Privacy Deputy Commissioner have been in role since 1 July 2025, preparing the regulatory infrastructure.

Who is in scope?

IPP entities under the Act include WA government departments, statutory authorities, local and regional governments, government trading enterprises, the WA Police Force, courts and tribunals, universities and colleges, and contracted service providers handling personal information on behalf of those entities where a state services contract explicitly imposes the obligation. Unlike the Commonwealth Privacy Act, the PRIS Act does not have a small business exemption and does not have an employee records exemption.

What is IPP 10?

IPP 10 is the Information Privacy Principle governing automated decision-making. It requires IPP entities to notify individuals when ADM is in use, provide a pathway for human intervention in decisions that affect them, and explain how a decision was reached. It is the first Australian regulatory provision to specifically govern ADM and is ahead of the equivalent provisions being considered under the Commonwealth Privacy Act tier 2 reforms.

Does PRIS apply if my business is not a WA Government supplier?

Directly, no. Indirectly, very likely. CSP obligations flow down through contracting chains, and a tier 2 supplier to a CSP can fall within scope depending on the contract language. Any Australian business with revenue from WA public entities, including local councils, should review the contracting position. The broader signal matters more than the technical scope: PRIS is the leading edge of an Australia-wide trajectory toward ADM regulation.

We use Microsoft 365 Copilot. Are we in scope?

Copilot is a productivity overlay that inherits the permissions and data governance of your existing Microsoft 365 environment. The scope follows the data, not the tool. If Copilot is being used to make or assist decisions about individuals whose personal information is held under a WA Government contract, the use is in scope under IPP 10. If Copilot is being used to draft a sales email to a private business, it is not.

Where should an Australian business start?

Map automated decision-making across the business first. You cannot govern what has not been catalogued. Most operators underestimate how much ADM is already running in their environment through recruiting tools, support triage, marketing personalisation, fraud scoring, and document classification. Once the map is complete, the rest of the work is sequencing: identify WA Government exposure, designate a privacy officer, run a Privacy Impact Assessment on any high privacy impact activity, and plan the explainability and audit layer alongside the agent architecture rather than after it.

Will your AI strategy survive 1 July?

Our AI team can help you map automated decision-making across your business, run a privacy impact assessment, and design the explainability and audit layer your agents will need. Book a free assessment to find out where you stand.

Book a Free Assessment

About the Author
Written by Moe Chizari, Chief Executive Officer of Epic IT, a managed IT, cyber security and AI partner for Australian mid-market businesses, with offices in Perth, Sydney and Brisbane. Moe brings 17 years across financial markets, treasury and technology, including five years at Bravura Solutions running enterprise software delivery and five years inside Group Treasury at Westpac and Macquarie leading APRA-regulated programmes (APS-117 IRRBB, APS-210 LCR & Capital Transformation). He holds a Bachelor of International Business from RMIT University, is a certified Project Management Professional (PMP), and an AFMA Diploma of Financial Markets graduate.

Further Reading

The three questions Australian CEOs should ask their IT partner by Friday

Forward Deployed Engineer: what the label actually means