AI Agents Need Guardrails. Australia’s Cyber Agency Just Said So.
Six intelligence agencies published the first AI agent security guidance. Here’s the three-question checklist every Australian SME should run.
The first AI agent security playbook
That AI answering your phones, scheduling your jobs, or categorising your invoices? Intelligence agencies have a name for it: an agent. And on 1 May 2026, cyber security agencies from Australia, the US, UK, Canada, and New Zealand jointly published the first guidance on how to deploy them safely.
The document — “Careful Adoption of Agentic AI Services” — comes from six agencies, including the Australian Signals Directorate (ASD). It identifies 23 specific risks and lists over 100 best practices for any organisation using AI systems that plan, reason, and act on their own. This isn’t aimed at Silicon Valley. It’s aimed at every business giving AI tools the keys to customer data, calendars, and financial systems.
- 23 security risks identified, across five categories
- 100+ best practices listed, in the first joint guidance of its kind
- 6 intelligence agencies involved, including Australia’s ASD
Your AI tools are already agents
Most SME owners don’t think of their software as “agents.” But if the tool reads your data, makes a decision, and takes an action without asking first — that’s agency. And many of the AI tools Australian businesses adopted over the past year fit the description.
AI phone answering services access your customer database and calendar to book jobs. AI scheduling tools write directly to your job management platform. Xero and MYOB’s AI features categorise transactions and reconcile accounts autonomously. We wrote recently about AI handling missed calls for trades businesses and Xero’s agent saving small businesses 22 hours a month. The productivity gains are real. But so is the attack surface.
The Five Eyes guidance puts it bluntly: agentic AI systems will malfunction. Not might — will. Every component in an agentic system “widens the attack surface, exposing the system to additional avenues of exploitation,” according to the joint advisory. And the consequences scale directly with the permissions you’ve granted.
The two risks that hit SMEs hardest
The guidance identifies five risk categories: privilege, design and configuration, behavioural, structural, and supply chain. For small businesses, two stand out.
Privilege risk is the most immediate. Most SMEs grant AI tools broad access to get them working quickly — full read/write to the CRM, unrestricted calendar access, admin-level permissions on the accounting platform. The guidance recommends strict least-privilege access: give each tool only the permissions it needs, only for the duration it needs them. An AI phone agent needs to read your calendar and create bookings. It doesn’t need access to your financial records.
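Least privilege is easier to enforce when you write down what each tool is supposed to hold. A minimal sketch, assuming hypothetical tool names and permission scopes (not any vendor’s real API), might look like this:

```python
# Minimal least-privilege check for AI tool integrations.
# Tool names and scopes are illustrative, not any vendor's real API.

REQUIRED_SCOPES = {
    "phone_agent": {"calendar:read", "bookings:create"},
    "bookkeeper": {"transactions:read", "transactions:categorise"},
}

def excess_scopes(tool: str, granted: set[str]) -> set[str]:
    """Return any permissions the tool holds beyond what its job requires."""
    return granted - REQUIRED_SCOPES.get(tool, set())

# A phone agent that has also been given write access to financial records:
print(excess_scopes("phone_agent",
                    {"calendar:read", "bookings:create", "finance:write"}))
```

Anything the check prints is a candidate for revocation — the tool was granted it, but its job doesn’t need it.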
Behavioural risk is the hardest to spot. Agentic systems can take unexpected actions, especially when handling edge cases they weren’t designed for. The guidance describes a scenario where a compromised procurement agent — with excessive financial system access — modifies contracts, approves unauthorised payments, and forges audit logs. Scale it down for an SME: an AI bookkeeper with broad permissions miscategorising a month of expenses before anyone notices is the same class of problem.
Three questions to ask this week
You don’t need to read a 100-point guidance document. Start with three questions about every AI tool in your business.
First: what can it actually do? Map each tool’s permissions — what data it reads, what systems it writes to, what actions it takes without human approval. If you don’t know, that’s the first problem.
Second: what’s the worst case? For each tool, ask: if this malfunctioned or was compromised, what’s the blast radius? Customer data exposed? Payments approved? Jobs rescheduled without notice?
Third: can I undo it? The guidance emphasises “resilience, reversibility and risk containment over efficiency gains.” If your AI scheduling tool reschedules tomorrow’s jobs incorrectly, can you roll back? If your AI bookkeeper miscategorises a quarter’s worth of transactions, how long until you notice?
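The three questions can be answered as a simple inventory, one entry per tool. This is a sketch with made-up tool names, scopes, and blast-radius notes — the structure is the point, not the specifics:

```python
# Illustrative inventory answering the three questions for each AI tool.
# Names, scopes, and worst-case notes are invented for the example.

tools = [
    {
        "name": "ai_phone_agent",
        "can_do": ["calendar:read", "bookings:create"],   # Q1: what can it do?
        "worst_case": "jobs double-booked or cancelled",  # Q2: blast radius
        "reversible": True,                               # Q3: can I undo it?
    },
    {
        "name": "ai_bookkeeper",
        "can_do": ["transactions:read", "transactions:write", "accounts:admin"],
        "worst_case": "a quarter of transactions miscategorised",
        "reversible": False,
    },
]

# Tools whose worst case is hard to undo go to the top of the review list.
flagged = [t["name"] for t in tools if not t["reversible"]]
print(flagged)
```

Even a spreadsheet version of this table does the job — what matters is that each tool has an answer in every column.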
Most Australian SMEs will find that the answers reveal gaps — not because the tools are bad, but because nobody asked the questions when the tools were set up.
Sources
ASD ACSC — Careful Adoption of Agentic AI Services (May 2026)
CISA/NSA/Five Eyes — Careful Adoption of Agentic AI Services (PDF)
Assumptions & methodology
- The 23 risks and 100+ best practices figures are from the joint advisory “Careful Adoption of Agentic AI Services” published 1 May 2026 by CISA, NSA, ASD ACSC, Canadian CCCS, NZ NCSC, and UK NCSC.
- The compromised procurement agent scenario is taken directly from the guidance document as an illustrative example of privilege and behavioural risk.
- References to AI phone agents, AI scheduling tools, and AI bookkeeping features (Xero, MYOB) reflect tools currently marketed to Australian SMEs. Specific security vulnerabilities of these products are not claimed — the point is that any tool with autonomous access to business data carries the risk categories identified in the guidance.
Field Notes are general commentary on AI trends for Australian businesses. They don’t constitute professional advice. Talk to your accountant, lawyer, or IT adviser before acting on anything specific to your situation — or talk to us if you want help working out where AI fits.
Not sure where your AI tools stand?
Book a call and we’ll walk through your current stack, map the permissions, and flag the gaps before they become problems.
Book a call →