Recently I heard an interesting stance on how employees should use ChatGPT securely, one that actually increases risk:
not logging in when prompting.
On the surface this might seem like the best approach for the reasons below, but it turns out these are…
Risks in disguise
The tool has limited utility
One governance, risk, and compliance (GRC) pro told me that prohibiting logging in would make ChatGPT less attractive from a business perspective (true) and deter use.
But this is a half-measure.
It discourages use through one administrative control (not logging in) while expecting employees to not work around it by violating another (creating a personal account for work purposes).
Users are (kind of) anonymous without an account
It's true that if you don't create an account, ChatGPT doesn't immediately know who you are (or what organization you work for).
And if your employees don't enter anything that could identify them or your company (even through sensitive data generation), they might stay anonymous.
But if they did (or, even worse, gave something confidential), ChatGPT could quickly identify them.
And, in this case, you would get burned even worse because you couldn't leverage the…
Benefits of account creation
You can opt-out of training
In this case, the model would NOT learn from sensitive info (and regurgitate it to everyone, as Amazon experienced in late 2022/early 2023).
If you aren't logged in, everything you give ChatGPT is fair game for training.
And I suspect this content would just be stored in a massive data lake that OpenAI might (somewhat reasonably) assume is already anonymized.
Furthermore, staying logged out would make finding your data through any sort of legal process much more difficult, if you ever had a claim.
Temporary Chats are available
This both prevents training AND limits data retention to 30 days, a major security benefit.
It's true there is no way to enforce opt-out or Temporary Chats without ChatGPT Team or Enterprise.
But again, if you are going to rely on administrative controls in one case (asking employees not to log in), it only makes sense to rely on them in another (having employees take advantage of both features).
You can detect account creation using company email accounts
This lets you remind users to follow the above best practices as soon as they start using the tool.
And it allows tracking of the tool in your inventory.
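As a rough illustration, detection could be as simple as scanning an inbound mail log export for sign-up confirmations from OpenAI's domain. This is a minimal sketch: the log format, column names, and sender addresses here are assumptions for demonstration, not OpenAI or mail-provider specifics.

```python
import csv
import io

# Hypothetical inbound mail log export (sender, recipient, subject).
# Real exports from Google Workspace or Microsoft 365 will look different.
MAIL_LOG = """sender,recipient,subject
noreply@tm.openai.com,alice@example.com,Verify your OpenAI account
billing@vendor.com,bob@example.com,Invoice #123
noreply@tm.openai.com,carol@example.com,Verify your OpenAI account
"""

def find_signups(log_csv: str, domain: str = "openai.com") -> list[str]:
    """Return company addresses that received mail from the given domain."""
    signups = []
    for row in csv.DictReader(io.StringIO(log_csv)):
        sender_domain = row["sender"].rsplit("@", 1)[-1]
        # Match the domain itself or any subdomain (e.g. tm.openai.com).
        if sender_domain == domain or sender_domain.endswith("." + domain):
            signups.append(row["recipient"])
    return signups

print(find_signups(MAIL_LOG))
# ['alice@example.com', 'carol@example.com']
```

Flagged addresses could then feed an automated reminder about opt-out and Temporary Chat best practices, plus an entry in your AI asset inventory.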
One clear upside to not logging in to ChatGPT: it prevents syncing data to unmanaged devices
This certainly prevents data loss in some ways.
But seems unlikely to deter a malicious insider.
And the only outsider attack this would mitigate would be someone getting physical or remote access to the device itself and screenshotting activity.
So I don't think it's that big of a benefit.
Tackle new technology with effective AI governance
Its visibility and data protection features make logging in to ChatGPT the best choice from a security perspective. But it's easy to see how well-intentioned people might go awry and do the opposite.
This type of advice is exactly how StackAware helps AI-powered companies in
Financial services
Healthcare
B2B SaaS
identify and manage cybersecurity, compliance, and privacy risk.
So if you need help staying on top of these things: