ChatGPT Had a Hidden Data Leak Channel — What Wealth Managers Should Learn From It

Security researchers at Check Point disclosed on 30 March that ChatGPT contained a vulnerability allowing sensitive conversation data to be silently exfiltrated through DNS requests — a channel OpenAI’s security controls had overlooked. A single malicious prompt or a backdoored custom GPT could turn a normal chat session into a covert data pipeline, leaking uploaded files, messages, and other content to an attacker-controlled server, as reported by The Register. OpenAI patched the flaw on 20 February, before public disclosure.
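DNS exfiltration works because outbound name lookups are rarely inspected: an attacker chunks stolen data into the subdomain labels of queries to a domain they control, and their nameserver logs each lookup. The sketch below is purely illustrative (it builds the query string but never sends it, and the domain names are made up), alongside the kind of crude heuristic a defender might use to spot such traffic in DNS logs:

```python
import base64
import textwrap

def encode_as_dns_labels(data: bytes, attacker_domain: str) -> str:
    """Illustrate how data can be smuggled into a DNS query name.

    Base32 is used because DNS names are case-insensitive and allow
    only a limited character set. No lookup is actually performed.
    """
    payload = base64.b32encode(data).decode().rstrip("=").lower()
    labels = textwrap.wrap(payload, 63)  # DNS labels max out at 63 chars
    return ".".join(labels + [attacker_domain])

def looks_like_exfil(hostname: str, min_label_len: int = 40) -> bool:
    """Crude defender-side heuristic: flag unusually long, base32-like
    labels in logged DNS queries. Real tooling would also use entropy
    scoring and query-volume baselines."""
    b32_chars = set("abcdefghijklmnopqrstuvwxyz234567")
    return any(
        len(label) >= min_label_len and set(label) <= b32_chars
        for label in hostname.lower().split(".")
    )

query = encode_as_dns_labels(b"conversation chunk 001: sample text",
                             "attacker.example")
print(looks_like_exfil(query))             # flagged
print(looks_like_exfil("www.example.com")) # not flagged
```

The point for defenders is that egress filtering on HTTP alone misses this channel; DNS query logging and anomaly detection are what catch it.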

For wealth management firms, this is a pointed reminder about the risks of feeding client data into AI tools. Financial advisers and support staff are increasingly using ChatGPT and similar platforms for research, drafting client communications, and analysing documents. If any of that activity involves client names, portfolio details, or sensitive financial information, a vulnerability like this could have exposed it — potentially breaching your obligations under the Privacy Act and APRA’s CPS 234 information security standard.

The practical takeaway: establish clear policies on what data staff can and can’t share with AI tools. At minimum, no client-identifiable information should go into any external AI platform without explicit risk assessment. Consider whether your firm needs an enterprise AI agreement with guardrails around data residency and logging. And if you haven’t reviewed your AI usage policies recently, now’s the time — regulators are watching this space closely.
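One way to operationalise a "no client-identifiable data" rule is a pre-submission screen that blocks prompts before they leave the firm. The sketch below is a minimal illustration, not a substitute for a proper DLP product: the patterns and the idea of matching against a client-name list are assumptions for demonstration, and any real deployment would draw names from your CRM and use far richer detection:

```python
import re

# Illustrative patterns only -- a production filter would use a proper
# DLP engine plus the firm's own client registry.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "long_digit_run": re.compile(r"\b\d{8,16}\b"),  # TFN/account-number shaped
}

def screen_prompt(text: str, client_names=()) -> list[str]:
    """Return a list of reasons a prompt should be blocked (empty = clean)."""
    hits = [name for name, pattern in PATTERNS.items() if pattern.search(text)]
    hits += [f"client_name:{n}" for n in client_names
             if n.lower() in text.lower()]
    return hits

print(screen_prompt("summarise the attached PDS"))        # clean
print(screen_prompt("email jane@example.com the draft"))  # blocked
```

A screen like this sits naturally in a browser extension or an internal proxy in front of the AI tool, where a non-empty result triggers a block or a warning rather than a silent send.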

All IT Services works with financial services firms to build practical AI governance frameworks that protect client data without killing productivity.

Related Guide

Cybersecurity for Sydney SMBs

Explore our complete guide to protecting your business from cyber threats.

Read the Full Guide →

Posted in Security