Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.
- Elon Musk, June 10, 2024
Strong words from the world’s richest man about Apple’s “Intelligence” announcement. Let’s take a harder look.
What did Apple Announce?
At their Worldwide Developer’s Conference (WWDC), Apple announced the planned release (beta starting in Fall 2024) of Apple Intelligence. Features include generative AI-powered:
Image generation
Text summarization
Improvements to Siri interactions
It looks like there will be three different “levels” of AI available to users:
On-device models
Apple Private Cloud Compute
Integration with OpenAI’s GPT
Apple was careful to include security and privacy guarantees in its announcement:
This is all well and good, but requires drilling down several levels further. Toward that end, I propose 5 questions.
Things security and privacy teams should ask about Apple Intelligence:
1. Will OpenAI retain Apple user data? If so, how long?
[Update 18 June 2024] The most logical approach would be for Apple to hook into GPT-4o with the OpenAI application program interface (API). The API doesn’t train on inputs and retains them for 30 days or less.
Due to Apple’s privacy-focused market appeal, its conceivable they even It appears Apple negotiated Zero Data Retention (ZDR) with OpenAI on behalf of their customers when using Siri and Apple’s writing tools.
Conversely, Apple mentions “ChatGPT subscribers can connect accounts to access paid features.”
ChatGPT Team and Plus retain data indefinitely by default. And the “your data is never stored” bullet point only appears under the Private Cloud Compute heading.
[Update 18 June 2024] So how will data retention work? OpenAI notes “data preferences will apply under ChatGPT’s policies,” which for most users means indefinite retention (except in temporary chat mode).
2. Will OpenAI train on Apple user data?
While Apple says they do not leverage “users’ private personal data or user interactions when training our foundation models,” I couldn’t find anything from Apple about OpenAI’s training policies.
It’s possible Apple and OpenAI have some sort of side deal regarding training GPT models, although I doubt it.
[Update 18 June 2024] While I think it would be basically impossible for OpenAI to train on data it doesn’t retain, I have not been able to find any definitive statement specific to training. Separately, if you integrate ChatGPT Plus or Basic, you are opted into training by default.
3. “You control when ChatGPT is used.” But how, exactly?
This will be a big issue for enterprise Apple customers who want to disable ChatGPT use because of competitive or data subprocessor concerns.
Optimally this can be enforced through mobile device management (MDM) software.
4. How can users get “verifiable privacy promises”?
People will want to know how their data is moving between and stored with (or not) the various “levels” of AI.
Apple’s blog post on its Private Cloud Compute was relatively detailed and represents a good start. The following will be helpful (and Apple has agreed to provide them):
Architectural diagrams. [Update 1 July 2024] drafted up some interesting ones based on what Apple has shared so far. It would be great to see the company confirm these.
Contractual commitments
Independent security researcher reviews
5. How will Apple protect its users again indirect prompt injection?
Integrating more generative AI into Apple operating systems expands the attack surface for indirect prompt injection. Because Siri will look more and more like an AI agent, the things it can accomplish (and damage it can do) will increase.
Cyber criminals are certain to take note.
We’ve already seen hackers do things like extracting user emails with ChatGPT plugins through embedding malicious instructions in web sites. So it will be important that Apple Intelligence be able to mitigate the risk of:
A user attempting to summarize a web site which contains instructions to:
Lookup “Grandma” in contacts.
Draft a frantic iMessage asking her to send money to a new account.
Silence all replies and notifications so she assumes the worst and does it.
Someone setting up an email summarization shortcut to which a stalker sends malicious instructions to:
Email recent contacts saying you are “heads down working.”
Unlock all connected smart home devices.
Go into airplane mode.
I’m confident Apple has red-teamed these types of things, but if they haven’t, this is something to look at now.
No huge security or privacy red flags here, but this warrants follow-up.
AI is infiltrating every aspect of daily (and especially business) life.
And staying on top of the security and privacy considerations is a full-time job (mine).
The good news?
You can let StackAware do the heavy lifting. Our Data Defense Blueprint offering will map all your AI-related security, privacy, and compliance risks in 30 days.