If you’ve seen “agentic AI” in your news feed this week, you can blame the Five Eyes. On 1 May, CISA, together with Australia’s ASD’s ACSC and four other national cyber agencies, published their first joint guide on the cybersecurity risks of these systems. So what actually is one?
In plain English, an agentic AI is software that doesn’t just answer questions — it takes actions on your behalf. A standard chatbot writes you a reply. An agentic system reads your inbox, drafts a reply, opens your CRM, updates a record, schedules a meeting, and then logs into a third tool to do something else. It chains tool calls together to chase a goal, often with minimal supervision.
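To make that concrete, here’s a toy sketch of that loop in Python. Every name in it (read_inbox, update_crm, schedule_meeting, run_agent) is a made-up stand-in, not any vendor’s actual API: in a real product the “decide what to do next” step is a model call and the tools hit live systems.

```python
# Illustrative only: stand-in tools a real agent would wire to live APIs.
def read_inbox() -> str:
    return "Customer asks to move Friday's call to Monday."

def update_crm(note: str) -> None:
    print(f"[CRM] logged: {note}")

def schedule_meeting(day: str) -> None:
    print(f"[Calendar] booked: {day}")

# The agent loop: gather context, then chain actions across systems
# toward the goal, with no human approving each step.
def run_agent(goal: str) -> None:
    email = read_inbox()               # step 1: read your mail
    update_crm(f"{goal}: {email}")     # step 2: act inside one tool...
    schedule_meeting("Monday")         # step 3: ...then chain to another
    print("Goal complete, no human in the loop.")

run_agent("Handle reschedule request")
```

Notice what’s absent: nothing between the steps asks whether each action was in scope. That gap is the whole story of the next two paragraphs.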
That’s also the problem. To be useful, an agent needs access — to your email, your files, your line-of-business apps, sometimes your bank or payment systems. Most organisations have given these agents far more access than anyone is actively monitoring, and that’s exactly what the joint guidance flags as the core risk.
Why it matters for your business: if a vendor offers you an “AI assistant” that can act inside Microsoft 365, your accounting software, or your bookings platform, you’ve just added a new privileged user. The Five Eyes guide is blunt — apply the same controls you’d apply to any privileged account. Least-privilege access. Logging. The ability to pull the plug.
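In code terms, all three controls can live in one thin wrapper around the agent’s tool layer. The sketch below is a hedged illustration of that idea, not the guide’s prescription; ALLOWED_TOOLS, KILL_SWITCH, and call_tool are hypothetical names, and the logging is just Python’s standard library.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Least privilege: the agent may touch the calendar and nothing else.
ALLOWED_TOOLS = {"calendar.read", "calendar.write"}
# The ability to pull the plug: flip to True and every call halts.
KILL_SWITCH = False

def call_tool(tool: str, payload: dict) -> str:
    if KILL_SWITCH:
        raise RuntimeError("Agent halted: kill switch is on")
    if tool not in ALLOWED_TOOLS:
        log.warning("DENIED %s %s", tool, payload)   # logging: denials leave a trace
        raise PermissionError(f"'{tool}' is outside this agent's allowlist")
    # Logging: every permitted action is timestamped and auditable.
    log.info("%s ALLOWED %s %s", datetime.now(timezone.utc).isoformat(), tool, payload)
    return f"{tool} executed"

# The agent can reschedule a meeting; a payment attempt is refused and logged.
call_tool("calendar.write", {"move": "Friday -> Monday"})
try:
    call_tool("payments.transfer", {"amount": 5000})
except PermissionError as err:
    print(err)
```

The point of the wrapper is that the deny list isn’t the agent’s job: the runtime enforces it, the same way you wouldn’t trust a new hire to decide their own access.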
The practical implication this week is simple. Before you turn on the next agentic feature your software vendor offers, ask exactly which systems the agent can touch, what it can do unsupervised, and how you’d know if it went off the rails. If the answer is fuzzy, leave it switched off until it isn’t.
