
In 2019, Australia published its voluntary AI Ethics Principles. In 2024, the Australian Government began consulting on mandatory guardrails for high-risk AI. In 2026, your staff are pasting client data into ChatGPT every day and nobody in your organisation knows about it.
The regulatory framework has not caught up yet. But the risk is already here. And when mandatory AI regulation arrives in Australia — as it inevitably will, following the EU’s AI Act and similar frameworks globally — businesses without existing governance will face the same scramble that many experienced when the Notifiable Data Breaches scheme took effect in 2018: expensive remediation, panicked policy writing, and retroactive controls applied to practices that should have been managed from the start.
This article outlines the AI governance framework that every Australian business should have in place today — not because it is legally required yet, but because the risks of uncontrolled AI usage are real, measurable, and growing.
Shadow AI is the use of artificial intelligence tools by staff without organisational awareness, approval, or oversight. It is the AI equivalent of shadow IT — and it is happening at scale.
In our AI discovery audits across Perth businesses, we consistently find that between 40% and 60% of knowledge workers are using consumer AI tools for work tasks. This includes staff pasting client emails into ChatGPT to draft responses; finance teams uploading spreadsheets of sensitive financial data to AI analysis tools; HR teams using AI to screen resumes containing personal information; marketing teams feeding confidential strategy documents into AI platforms to generate content; and management using AI to summarise board papers and meeting notes.
In almost every case, the staff member believes they are being productive and innovative. They are correct on both counts. They are also creating data governance risks that the organisation does not know about, cannot control, and may be liable for.
The problem is not that staff are using AI. The problem is that they are using uncontrolled consumer AI tools with no data loss prevention, no acceptable use policy, no audit trail, and no understanding of where the data goes after they press enter.
Australia currently operates under eight voluntary AI Ethics Principles published by the Department of Industry, Science and Resources: human, societal and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability. These principles provide a framework for responsible AI use but carry no enforcement mechanism.
Voluntary principles are useful for guiding policy development, but they do not create obligations. No Australian business has been penalised for violating the AI Ethics Principles because they cannot be violated — they are aspirational, not enforceable.
The Australian Government has been consulting on mandatory AI guardrails since late 2024, with a focus on high-risk AI applications in areas like healthcare, financial services, and government decision-making. While the specifics are still being developed, the direction is clear: Australia is moving toward mandatory requirements for AI transparency, accountability, and risk management in certain contexts.
When these requirements arrive, businesses that have already established AI governance frameworks will adapt quickly. Businesses that have not will face a compliance project on top of their existing technology and security obligations.
What many businesses miss is that existing legislation already applies to AI usage. The Privacy Act 1988 and the Australian Privacy Principles (APPs) regulate how personal information is collected, used, disclosed, and stored. When your staff paste client personal information into a consumer AI tool, they may be breaching APPs relating to disclosure, overseas transfer, and purpose limitation — regardless of whether AI-specific regulation exists.
The Notifiable Data Breaches scheme compounds this risk. If personal information entered into an AI platform is subsequently compromised, the breach notification obligations under the Privacy Act may be triggered.
ISO 42001:2023 is the international standard for Artificial Intelligence Management Systems (AIMS). It follows the same management system structure as ISO 27001 (information security) and ISO 9001 (quality management), providing a formal framework for establishing, implementing, and maintaining AI governance within an organisation.
While ISO 42001 certification is not required by Australian law, it provides a structured approach to AI governance that aligns with the direction of Australian regulation. Businesses that align their AI practices with ISO 42001 principles will be well-positioned when mandatory requirements arrive.
Based on our work developing AI governance programmes for Perth businesses, here are the seven components that every organisation should have in place.
You cannot govern what you cannot see. The first step is identifying every AI tool in use across your organisation — approved, conditional, and shadow. This requires both technical scanning (monitoring network traffic, browser activity, and application usage for AI platform connections) and organisational engagement (talking to teams about what tools they are using and why).
Discovery is not a one-time exercise. New AI tools emerge weekly, and staff adopt them without waiting for IT approval. Ongoing monitoring is essential to maintain visibility.
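As an illustration of where discovery starts, here is a minimal Python sketch that counts hits against a watch-list of AI domains in a proxy-log export. The domain list and CSV column names are assumptions for illustration; real discovery relies on purpose-built tooling such as Microsoft Defender for Cloud Apps or a CASB, which see far more than a hand-rolled script.

```python
import csv
from collections import Counter

# Hypothetical watch-list of consumer AI domains. Extend from your own
# discovery tooling's catalogue; these entries are illustrative only.
AI_DOMAINS = {
    "chatgpt.com": "ChatGPT (consumer)",
    "chat.openai.com": "ChatGPT (consumer)",
    "claude.ai": "Claude (consumer)",
    "gemini.google.com": "Gemini (consumer)",
}

def scan_proxy_log(path: str) -> Counter:
    """Count AI watch-list hits per user in a proxy-log CSV export.

    Assumes the export has 'user' and 'host' columns; adjust the field
    names to match whatever your proxy or DNS filter actually produces.
    """
    hits: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_DOMAINS.get(row["host"].lower())
            if tool:
                hits[(row["user"], tool)] += 1
    return hits

if __name__ == "__main__":
    for (user, tool), count in scan_proxy_log("proxy_export.csv").most_common():
        print(f"{user}: {count} requests to {tool}")
```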
The second component is an AI acceptable use policy: a clear, enforceable document that defines which AI tools are approved for business use, what data can and cannot be used with AI tools (mapped to your data classification framework), who is responsible for AI-related decisions, how to request approval for new AI tools, the consequences for policy violations, and the incident reporting procedures for AI-related data exposure.
The policy should be practical, not aspirational. Staff will not read a 30-page governance document. They need clear, specific rules: “You may use Claude for drafting client communications. You may not paste client financial data into any AI tool without manager approval. Consumer ChatGPT is prohibited for work use.”
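One way to keep rules that specific from drifting is to hold them as structured data, so the same source can feed the staff-facing policy page and the technical enforcement layer. A hypothetical sketch of that idea, with example rules only:

```python
# Hypothetical machine-readable rendering of acceptable use rules.
# The entries mirror the examples above and are not a recommended policy.
ACCEPTABLE_USE_RULES = [
    {"tool": "Claude (enterprise)", "allowed": True,
     "scope": "Drafting client communications"},
    {"tool": "Any AI tool", "allowed": False,
     "scope": "Client financial data", "exception": "Manager approval"},
    {"tool": "Consumer ChatGPT", "allowed": False,
     "scope": "All work use"},
]

def rules_for(tool: str) -> list[dict]:
    """Return every rule that names the given tool."""
    return [r for r in ACCEPTABLE_USE_RULES
            if tool.lower() in r["tool"].lower()]

print(rules_for("ChatGPT"))  # the prohibition on consumer ChatGPT
```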
Your existing data classification framework (if you have one) needs an AI layer. This defines which sensitivity levels of data are permitted to interact with AI platforms and under what conditions.
A practical classification might include: Public data (marketing content, published information) can be used with any approved AI tool. Internal data (operational documents, processes) can be used with approved enterprise AI tools with audit logging. Confidential data (client information, financial records) requires DLP controls and can only be used with enterprise AI platforms that have zero-retention policies. Restricted data (PII, health records, legal privileged information) is prohibited from AI input without a documented privacy impact assessment.
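Expressed in code, that mapping reduces to a small lookup table. A minimal sketch, assuming the four example levels above; substitute your own framework's labels and conditions:

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    RESTRICTED = "restricted"

# Illustrative policy table mirroring the example classification above.
AI_USAGE_POLICY = {
    Sensitivity.PUBLIC: "Any approved AI tool",
    Sensitivity.INTERNAL: "Approved enterprise AI tools with audit logging",
    Sensitivity.CONFIDENTIAL: ("Enterprise AI platforms with zero-retention "
                               "policies, behind DLP controls"),
    Sensitivity.RESTRICTED: ("Prohibited without a documented privacy "
                             "impact assessment"),
}

def ai_usage_rule(level: Sensitivity) -> str:
    """Return the AI-usage condition for a document's sensitivity level."""
    return AI_USAGE_POLICY[level]

print(ai_usage_rule(Sensitivity.CONFIDENTIAL))
```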
Policies without enforcement are suggestions. The technical controls that make AI governance real include Data Loss Prevention (DLP) policies that prevent sensitive data from being submitted to uncontrolled AI platforms; sensitivity labels in Microsoft 365 that classify documents and restrict AI interactions based on classification; Conditional Access rules that control which AI tools can be accessed from managed devices and corporate networks; and browser policies that block access to unapproved AI platforms from work devices.
These controls use the same Microsoft 365 security infrastructure (Purview, Entra ID, Defender) that underpins your cybersecurity programme. For businesses already on a managed security agreement, adding AI governance controls is an extension of existing infrastructure, not a new platform.
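Under the hood, a DLP rule is pattern matching on outbound content plus a decision about the destination. The sketch below shows the shape of that logic with hand-rolled regexes and a hypothetical approved-hosts list; in practice these patterns are configured as sensitive-information types in Microsoft Purview rather than written by hand.

```python
import re

# Illustrative patterns only. In production these are defined as
# sensitive-information types in Microsoft Purview, not hand-rolled.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "tax_file_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{2,3}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_outbound_text(text: str, destination_host: str,
                        approved_hosts: set[str]) -> list[str]:
    """Flag sensitive matches in text bound for an unapproved AI host."""
    if destination_host in approved_hosts:
        return []  # enterprise platform with zero retention: allow
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

violations = check_outbound_text(
    "Card 4111 1111 1111 1111, contact jo@example.com",
    destination_host="chatgpt.com",
    approved_hosts={"enterprise-ai.example.internal"},  # hypothetical host
)
print(violations)  # ['credit_card', 'email_address']
```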
Every AI tool that enters your environment should be assessed against a consistent set of criteria: data sovereignty (where is the data processed and stored?), privacy compliance (does the tool comply with the Australian Privacy Principles?), data retention and training policies (does the provider use your data to train models?), security posture (SOC 2, encryption, access controls), and terms of service (who owns the output? who is liable for errors?).
Each tool receives an Approved (can be used within policy), Conditional (can be used with specific restrictions), or Prohibited (blocked technically and by policy) status.
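In practice the register is a structured record per tool that captures each vetting criterion next to the resulting status. A sketch of one possible shape; the field values below are placeholders, not assessments of any real product:

```python
from dataclasses import dataclass
from enum import Enum

class ToolStatus(Enum):
    APPROVED = "approved"        # can be used within policy
    CONDITIONAL = "conditional"  # allowed with specific restrictions
    PROHIBITED = "prohibited"    # blocked technically and by policy

@dataclass
class AIToolAssessment:
    """One register entry, mirroring the vetting criteria above."""
    name: str
    data_residency: str           # where data is processed and stored
    app_compliant: bool           # assessed against the Australian Privacy Principles
    trains_on_customer_data: bool
    soc2_attested: bool
    status: ToolStatus
    restrictions: str = ""        # populated for CONDITIONAL tools

# Placeholder entries for illustration; do your own assessment.
register = [
    AIToolAssessment("Example consumer chatbot", "Unknown", False, True, False,
                     ToolStatus.PROHIBITED),
    AIToolAssessment("Example enterprise AI platform", "Australia", True, False,
                     True, ToolStatus.CONDITIONAL,
                     restrictions="Confidential data only with DLP controls"),
]

for tool in register:
    print(f"{tool.name}: {tool.status.value}")
```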
Your staff are the front line of AI governance. They need to understand what tools are approved, what data is off-limits, how to recognise AI-generated content, what prompt injection and data exposure risks look like, and how to report AI-related incidents.
AI awareness training should be practical and scenario-based, not abstract. Show staff real examples of data exposure through AI tools. Walk through the acceptable use policy with concrete scenarios from their daily work. Make it relevant, not theoretical.
AI governance is not a project — it is an ongoing function. Quarterly reporting should cover shadow AI detection (new tools identified, usage trends), DLP events (attempts to submit sensitive data to AI platforms), policy compliance (training completion, policy acknowledgment rates), tool register updates (new tools assessed, status changes), and incident summaries (any AI-related data exposure or policy violations).
This reporting feeds into your broader risk and compliance programme and provides the evidence trail that regulators, insurers, and clients will increasingly expect.
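Mechanically, that report is an aggregation over the quarter's governance events. A minimal sketch, assuming a simple in-memory event feed; real inputs would come from Purview DLP reports, the tool register, and your training platform's completion exports:

```python
from collections import Counter

# Hypothetical event feed for one quarter; the structure is illustrative.
events = [
    {"type": "shadow_ai_detected", "detail": "New AI note-taking tool observed"},
    {"type": "dlp_block", "detail": "Financial data blocked from consumer AI"},
    {"type": "dlp_block", "detail": "Client PII blocked from consumer AI"},
    {"type": "tool_assessed", "detail": "Transcription tool moved to Conditional"},
]

def quarterly_summary(events: list[dict]) -> Counter:
    """Tally governance events by type for the quarterly report."""
    return Counter(e["type"] for e in events)

for event_type, count in quarterly_summary(events).items():
    print(f"{event_type}: {count}")
```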
Businesses that delay AI governance face three escalating risks.
Data exposure today. Every day without controls, your staff are putting sensitive data into uncontrolled platforms. This is not a future risk — it is happening now. The data has already left your environment. If a breach is subsequently traced to AI usage, your business faces Privacy Act obligations, client notification, and potential regulatory action.
Compliance scramble tomorrow. When mandatory AI regulation arrives, businesses without existing governance will face a compressed timeline to build what should have been developed over months. The cost of reactive compliance is always higher than proactive governance — as every business that scrambled to meet the Notifiable Data Breaches deadline in 2018 can attest.
Competitive disadvantage ongoing. Enterprise clients and government agencies are beginning to include AI governance questions in their due diligence and procurement processes. Businesses that cannot demonstrate AI governance will be excluded from opportunities that require it — just as businesses without Essential Eight compliance are now excluded from many government tenders.
AI Governance is the foundation tier of our AI services, available to any client with an active Managed IT Services agreement. We built it to follow the same model that works for cybersecurity: we handle the technical implementation, monitoring, and reporting; you get the protection and the evidence.
Every new and renewing MSA client receives a complimentary three-month Shadow AI Discovery — an ongoing scan that maps every AI tool in use across your organisation. This gives you immediate visibility of your AI exposure before committing to a full governance programme.
The ongoing service covers all seven pillars described in this article: discovery, policy, classification, technical controls, tool vetting, training, and ongoing monitoring with quarterly governance reviews. It integrates with your existing Microsoft 365 security infrastructure and maps AI controls against ISO 42001, the Australian AI Ethics Principles, and the Privacy Act APPs.
For businesses that want to go further, our Managed AI and Custom AI Development tiers add a secure AI platform with enterprise integrations, deployment gates, and dedicated engineering capacity for bespoke solutions — giving your team controlled, enterprise-grade AI agents connected to your business systems instead of uncontrolled consumer tools. These tiers require a minimum Bronze+ cybersecurity baseline and include monthly reviews or steering committees to keep your AI strategy aligned with business priorities.
If you want to understand your current AI exposure and what governance looks like for your business, contact us on 1300 EPIC IT to get started.
Epic IT helps Perth businesses develop practical AI governance frameworks that protect your organisation and prepare you for upcoming regulation. Every MSA client starts with a complimentary three-month Shadow AI Discovery.
Call us on 1300 EPIC IT (1300 374 248).