New phones are being launched with features enabled by artificial intelligence (AI). The latest of these is Google’s flagship Pixel 9. Samsung’s Galaxy S24, released at the beginning of 2024, also offers a range of AI-enabled photo editing features.
The hidden story behind devices like these is how companies have managed to migrate the processing required for these AI features from the cloud to the device in the palm of your hand.
In the Google Pixel 9, a feature called Magic Editor allows users to “re-imagine” their photos using generative AI. In practice, that means repositioning the subject in the photo, erasing someone from the background, or turning a gray sky blue. Users simply provide suitable prompts and let the app do the rest.
The phone’s generative AI features also allow you to add people or objects to your pictures by typing in a text prompt.
Of course, users have always been able to do this using photo editing software, but making the result look natural, rather than obviously edited, takes some skill. Magic Editor promises to use AI to perform these complex photo edits with “simple and intuitive actions.”
Another feature, called “Add Me,” allows users to take a group photo without having to hand their phone to a stranger. The phone’s owner simply takes a photo of the group, then hands the phone to a friend and steps into the scene they’ve just photographed. The phone then stitches the two shots together.
Another feature called “Best Take” can be used to select the best elements from a series of very similar images and combine them all into one picture. Google’s chatbot technology powers a digital assistant and other features on the phone.
Features on phones have come a long way since the first digital phones, or since handsets first gained their own integrated cameras.
To the edge
Traditionally, the processing required for such AI-based functions has been too demanding to host on a device like a phone. Instead, it is offloaded to online cloud services powered by large, powerful computer servers.
However, companies are increasingly recognizing the need to move much of this processing onto customer devices, potentially putting greater control in the hands of consumers.
This involves migrating significant amounts of AI computational processing to what companies call the “edge.” The edge refers to devices at the periphery of a network, typically consumer devices such as phones, which have far less processing power than cloud servers.
[Figure: the difference between how cloud-based and edge-based AI work.]
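To make the distinction concrete, here is a minimal Python sketch of the two paths. The endpoint URL and the local model object are hypothetical stand-ins rather than a real service or API; the point is simply that cloud AI ships your data over the network, while edge AI keeps it on the device.

```python
# Minimal sketch of cloud-based vs. edge-based inference.
# The URL and local_model are hypothetical placeholders, not real APIs.
import requests

def classify_in_cloud(photo_bytes: bytes) -> str:
    """Cloud path: the photo leaves the phone, is processed on a remote
    server, and the answer travels back over the network."""
    response = requests.post("https://api.example.com/classify", data=photo_bytes)
    return response.json()["label"]

def classify_on_device(photo_bytes: bytes, local_model) -> str:
    """Edge path: the photo never leaves the phone; inference runs on the
    device's own processor, so it works offline and keeps the data private."""
    return local_model.predict(photo_bytes)
```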
To perform this migration, the power demands of the processing need to be reduced. Companies have achieved this with specialized microprocessors tailored specifically to AI-based workloads.
For instance, Google’s Tensor AI processors, referred to as tensor processing units (TPUs), appear to be central to the features available on its Pixel phones. Using specialized software, these edge-based processors can efficiently apply AI models to data acquired or stored on mobile devices.
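As an illustration of what that specialized software can look like, here is a sketch using TensorFlow Lite, Google’s runtime for running models on phones and other edge devices. The model file name and the dummy input are assumptions made for the example, not details from any particular phone.

```python
# Sketch of on-device inference with TensorFlow Lite.
# "model.tflite" and the dummy input are assumed for illustration.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()  # reserve memory for the model's tensors

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A blank image standing in for a photo captured on the device
image = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()  # run the model locally, with no network round trip
result = interpreter.get_tensor(output_details[0]["index"])
```

On a real phone, a hardware delegate would hand the heavy tensor operations to the device’s AI processor; the application code stays much the same.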
These TPUs include networks of components called systolic arrays, which enable large amounts of data to be processed simultaneously. This efficient design saves power and computation time.
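To give a flavor of how a systolic array achieves this, below is a small Python simulation, a sketch rather than a hardware description. In real hardware, every cell for which i + j + k equals the current clock cycle fires at the same instant; the nested loops here simply replay that schedule one step at a time.

```python
import numpy as np

def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Simulate an output-stationary systolic array computing C = A @ B.
    Cell (i, j) accumulates C[i, j]; inputs are skewed so that A[i, k]
    and B[k, j] arrive at cell (i, j) on clock cycle t = i + j + k."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must match"
    C = np.zeros((n, p), dtype=A.dtype)
    for t in range(n + m + p - 2):            # clock cycles
        for i in range(n):                    # in hardware, these two loops
            for j in range(p):                # run in parallel across cells
                k = t - i - j                 # which operand pair arrives now
                if 0 <= k < m:
                    C[i, j] += A[i, k] * B[k, j]  # one multiply-accumulate
    return C

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
assert np.array_equal(systolic_matmul(A, B), A @ B)
```

Because results accumulate in place while operands stream through neighboring cells, the array avoids repeatedly fetching data from memory, which is where much of a chip’s power would otherwise go.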
Such efficiency is crucial because of the huge number of calculations that need to be performed to make a single AI decision, something that processors such as Google’s TPUs have become much better at in the last few years.
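A back-of-the-envelope count shows the scale involved. The layer sizes below are illustrative, not those of any particular model:

```python
# Rough multiply-accumulate (MAC) count for a neural network -- illustrative sizes
inputs, outputs = 1024, 1024
macs_per_layer = inputs * outputs   # 1,048,576 MACs for one dense layer
layers = 50                         # plausible depth for a vision model
print(f"{macs_per_layer * layers:,} MACs per inference")  # ~52 million
```

Every one of those operations costs energy, so a processor that performs them efficiently directly extends a phone’s battery life.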
Indeed, the first TPUs, designed in 2015, were created to help speed up the computations performed by large, cloud-based servers during the training of AI models. In 2018, Google released the first TPUs designed for computers at the “edge.” Then, in 2021, the first TPUs designed for phones appeared, again for the Google Pixel.
Competition to integrate greater amounts of AI onto mobile phones is growing fierce. That means we’re likely to see even more innovative technology arrive on the market in the coming years.
This article is republished from The Conversation under a Creative Commons license. Read the original article.