Apple Intelligence took centre stage at this year’s Worldwide Developers Conference (WWDC) 2024, highlighting new artificial intelligence (AI) features that will debut with the upcoming iOS 18, iPadOS 18, and macOS Sequoia. During the event, the tech giant revealed that some of the processing for the AI features will be done on-device, while the more complex tasks will be handled by its Private Cloud Compute (PCC) system. Apple also shared details of its PCC architecture and claimed a heavy focus on data privacy and safety.
Craig Federighi, the Senior Vice President of Software Engineering at Apple, said during the event, “Your data is never stored or made accessible to Apple”. While Apple Intelligence has created a sense of curiosity among many users, some have also appeared sceptical of the company’s ability to fulfil these claims. Among them was Tesla CEO Elon Musk, who posted on X (formerly known as Twitter), “It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!”. Notably, Apple has stated that it is using its in-house AI models for both on-device and server-based computing.
Now, Apple has shed more light on how its Private Cloud Compute will function in a blog post. Explaining the data security issues with traditional cloud servers, the tech giant claimed that it is building custom infrastructure with key changes to keep user data secure. There are three important pillars — Stateless computation, Non-targetability, and Verifiable transparency.
Private Cloud Compute’s Stateless computation
Traditionally, cloud servers have a straightforward workflow. Data is sent to the servers, where it is first logged against the user’s credentials so that the result can be returned to the right user after the task runs. Cloud servers also store some or all of the data to offer it back to the user as a backup, in case the information is requested again (for instance, after a file is corrupted or accidentally deleted). This also helps with cost optimisation, as the servers do not have to compute the data again.
In contrast, Apple said that its Private Cloud Compute runs “stateless data processing”, where the user’s device sends data to PCC for the sole purpose of fulfilling the user’s inference request. It also claimed that user data remains on the server only until the response is returned to the device, and that “no user data is retained in any form after the response is returned.” The company added that user data is not retained even via logging or for debugging.
It also claimed that even Apple staff with privileged runtime access cannot bypass the stateless computation guarantee.
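The stateless-processing idea can be illustrated with a minimal sketch. The function and names below are hypothetical, not Apple’s actual API: the point is simply that the handler computes a response from the request and returns it without ever writing the payload to a log or store.

```python
def handle_inference(request_payload: bytes, run_model) -> bytes:
    """Hypothetical stateless handler: the payload exists only for the
    lifetime of this call and is never logged or persisted, so no copy
    of the user data remains on the server after the response is sent."""
    response = run_model(request_payload)
    # Deliberately no logging of request_payload and no write to disk
    # or a database -- the data's lifetime ends when this function returns.
    return response

# Usage sketch: a toy "model" that upper-cases the payload.
result = handle_inference(b"hello", lambda data: data.upper())
```

The contrast with the traditional workflow described above is that nothing here is keyed to user credentials or retained for later re-use.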
Private Cloud Compute’s Non-targetability
Cloud servers also face threats externally from hackers and bad actors who try to find vulnerabilities to breach the system. Apple said it has developed two measures to defend user data from attackers.
First, the tech giant is using the protections of Apple silicon and other connected hardware to ensure that hardware attacks remain rare. Drawing on its experience running cloud operations, Apple has developed hardware that narrows the opportunities for cyberattacks. Further, it added that any hardware attack at scale would be both “prohibitively expensive and likely to be discovered.”
For small-scale attacks, Apple claims that its extensive revalidation at data centres (once data arrives and before it reaches cloud computers for processing) ensures that hackers cannot target the data of a specific user.
“To guard against smaller, more sophisticated attacks that might otherwise avoid detection, Private Cloud Compute uses an approach we call target diffusion to ensure requests cannot be routed to specific nodes based on the user or their content,” the tech giant added.
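One way to picture target diffusion is a router whose node choice can never depend on who the user is or what they asked. The sketch below is a loose illustration under that assumption (the names are hypothetical, and Apple’s actual mechanism is cryptographic and far more involved): the payload arrives already anonymized, and the node is picked by a cryptographically random draw, so an attacker who compromises one node cannot steer a chosen user’s requests to it.

```python
import secrets

def route_request(anonymized_payload: bytes, node_pool: list) -> tuple:
    """Hypothetical sketch of 'target diffusion': the routing decision
    uses only a cryptographically random choice -- never the user's
    identity or the request content -- so no node can be made to
    receive a specific user's traffic on purpose."""
    node = node_pool[secrets.randbelow(len(node_pool))]
    return node, anonymized_payload

# Usage sketch: any of the pool's nodes may be chosen for this request.
node, payload = route_request(b"<encrypted request>", ["node-a", "node-b", "node-c"])
```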
Private Cloud Compute’s Verifiable transparency
Finally, Apple is inviting security researchers to verify the end-to-end security and privacy measures of the Private Cloud Compute system. It claimed that once PCC is launched, it will make software images of every production build of the cloud system publicly available for security research.
To further aid research, Apple will publish every production Private Cloud Compute software image for binary inspection across the OS, applications, and all other executable nodes. Researchers will be able to verify against the measurements in the transparency log. Researchers will be offered rewards for finding flaws in the system.
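The verification step researchers would perform can be sketched roughly as follows. This is a simplified illustration, not Apple’s actual tooling (the real scheme involves signed attestations, and the function and log here are toy stand-ins): hash the published software image and check that the measurement appears in the transparency log, confirming the build matches what was publicly disclosed.

```python
import hashlib

def verify_image(image_bytes: bytes, transparency_log: set) -> bool:
    """Hypothetical check: hash a published software image and confirm
    the measurement appears in the transparency log, so the build can
    be matched against what was publicly disclosed."""
    measurement = hashlib.sha256(image_bytes).hexdigest()
    return measurement in transparency_log

# Usage sketch with a toy log containing the image's own hash.
image = b"pcc-build-001"
log = {hashlib.sha256(image).hexdigest()}
ok = verify_image(image, log)
```

A tampered image would produce a different hash and fail the lookup, which is what makes the log publicly checkable.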