Qualcomm on Thursday demonstrated local AI image generation on a smartphone. Ahead of the upcoming Mobile World Congress (MWC) 2023, the chipmaker showed Stable Diffusion 1.5 running on an Android handset without network access, in what it claims is the fastest-ever deployment of the AI image generator on a phone.
In a demo video, Qualcomm shows version 1.5 of Stable
Diffusion generating a 512 x 512 pixel image in under 15 seconds. Although
Qualcomm doesn’t say what the phone is, it does say it’s powered by its
flagship Snapdragon 8 Gen 2 chipset (which launched last November and has an
AI-centric Hexagon processor). The company’s engineers also made a range of
custom software optimizations to get Stable Diffusion running efficiently on
the chip.
Some context: it takes a lot of computing power to run a
program like Stable Diffusion (which is a staple in AI image generation), and
most apps offering such services on mobile do all their processing in the cloud
rather than burning up your smartphone or tablet. Even generating an image in
this way on a decent laptop will take minutes, so getting a 512 x 512 picture
from a phone in seconds is impressive.
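For a sense of what “running Stable Diffusion locally” involves in practice, here is a minimal sketch using Hugging Face’s diffusers library on a desktop GPU. The model ID, device, and step count are illustrative assumptions, not details from Qualcomm’s demo, and the first run downloads several gigabytes of weights.

```python
def valid_sd_dims(height: int, width: int) -> bool:
    # Stable Diffusion's VAE downsamples by a factor of 8, so both
    # dimensions must be multiples of 8 (512 x 512 is the native size).
    return height % 8 == 0 and width % 8 == 0

def generate(prompt: str, height: int = 512, width: int = 512) -> None:
    # Heavy imports are kept inside the function so the helper above
    # works even where torch/diffusers aren't installed.
    import torch
    from diffusers import StableDiffusionPipeline

    if not valid_sd_dims(height, width):
        raise ValueError("height and width must be multiples of 8")
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed model ID on the HF Hub
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # assumes an NVIDIA GPU; use "mps" on Apple silicon
    image = pipe(prompt, height=height, width=width,
                 num_inference_steps=25).images[0]
    image.save("out.png")

if __name__ == "__main__":
    generate("an astronaut riding a horse on mars")
```

Even in half precision the model weighs in around 2GB, and every one of those denoising steps runs the full network, which is why most mobile apps push this work to the cloud.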
Qualcomm claims this is a speed record, and we’ve no reason to doubt it, though the company also says it’s the first time Stable Diffusion has ever run locally on Android, which doesn’t seem to be true.
After a bit of digging around, we found this blog post from
developer Ivon Huang that shows how they got Stable Diffusion running on a Sony
Xperia 5 II powered by a Qualcomm Snapdragon 865 and 8GB of RAM. Though, as
Huang also notes in a tweet, generating a 512 x 512 image with this setup took
an hour, so Qualcomm certainly wins points for speed if not for achieving a
technical “first.”
Another useful comparison is with iOS. Back in December,
Apple released the optimizations needed to get Stable Diffusion running locally
on its machine learning framework Core ML. So, to test the system today, we got
Stable Diffusion 1.5 running on an iPhone 13 via the Draw Things app with Core
ML acceleration.
With this setup, it took about a minute to generate a 512 x
512 image, so again, Qualcomm wins on speed, though the obvious caveats apply.
Qualcomm is using more recent hardware and a custom optimization package that’s
not publicly available, whereas our iOS test was done on a 2021 phone using a
third-party app.
All of these qualifications aside, this is still impressive
from Qualcomm, even if it is only a demo. Getting big AI models running locally
on mobile devices offers all sorts of advantages over relying on cloud compute.
There’s convenience (you don’t need a mobile connection), cost (developers
don’t have to charge users to cover their server bills), and privacy (running
locally means your data never leaves the device).
It’s the productization of AI, and it’s happening fast.