An associate professor at Emory University’s Goizueta Business School, Rajiv Garg has studied artificial intelligence for over 25 years. As the field becomes increasingly sophisticated, he believes AI will soon have the power to make every person more creative, innovative and knowledgeable.
But that outcome depends on the world trusting the technology and using its power for good.
“AI is moving faster and is going to change the way we work much faster,” Garg said. “As a result, we need to start adopting it piece-by-piece every day.”
Earlier this week, Apple unveiled its plan to integrate generative AI technology across its devices and applications. Called Apple Intelligence, it's the company's first major foray into the rapidly evolving AI space, where it has lagged behind both Microsoft and Google. Apple is also partnering with OpenAI to integrate ChatGPT, an AI chatbot, into its new suite of writing tools and Siri.
The technology is designed to optimize and streamline the user experience. Apple Intelligence can summarize a user's emails before they open a message, order notifications from most to least important and transcribe voice recordings. It can also proofread writing, offer editing suggestions and let users search for photos or videos using natural language.
Apple’s new venture is a step forward in what Garg calls “change management.”
"Once you start embracing and using it, you're gonna get better in learning how to use this technology more effectively in the future, to essentially optimize your work," said Garg, who has served as a member of the United States Artificial Intelligence Safety Institute Consortium, which promotes the development and implementation of trustworthy AI. "That's where we're headed."
AI is shaping up to become one of the most influential human innovations of the modern age. Its applications across a number of industries, from health care to manufacturing and education, can increase productivity, cut costs and drive innovation.
That is, if the technology is used for good, Garg said.
AI is not without its drawbacks. There are concerns that the rapidly developing technology could replace human jobs, spread misinformation through manipulated or fabricated content, or hurt students' learning by automating essays and answers to assignments. There are also privacy and security concerns, because AI systems process large volumes of user data.
Last year, hundreds of leaders in the field of artificial intelligence signed an open letter warning that the technology could one day pose a threat to humanity. In a one-sentence statement, the collective said mitigating the risk of the technology should be a global priority alongside pandemics, nuclear war and other societal-scale concerns.
Apple is placing privacy and security at the center of its discussions around AI integration. In the press release announcing its latest iOS upgrade, the word "privacy" appears 18 times. The company said it does not use customers' data to train its models.
Most Apple Intelligence features will run locally, which means that sensitive data will remain on the device. If a user requests a task too complicated for the local AI model to fulfill, the device can pass the request to more sophisticated AI models available on Apple’s cloud servers. After the request is completed, the data is deleted from the cloud.
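That split between on-device handling and cloud escalation can be pictured with a short sketch. The Swift snippet below is a minimal, hypothetical illustration of the routing idea described above, not Apple's actual implementation; the types, the complexity score and the threshold are assumptions made for the example.

```swift
import Foundation

// Hypothetical sketch: handle a request on-device when possible,
// and escalate to a cloud model only when the task exceeds local capability.
// None of these names are Apple APIs.

struct Request {
    let prompt: String
    let complexity: Int   // assumed proxy for how demanding the task is
}

protocol Model {
    func respond(to request: Request) -> String
}

struct LocalModel: Model {
    let maxComplexity = 3   // assumed cutoff for what can run on-device

    func canHandle(_ request: Request) -> Bool {
        request.complexity <= maxComplexity
    }

    func respond(to request: Request) -> String {
        "on-device answer to: \(request.prompt)"
    }
}

struct CloudModel: Model {
    func respond(to request: Request) -> String {
        // Per the article, data sent to the cloud is deleted after the
        // request completes; that retention policy is not modeled here.
        "cloud answer to: \(request.prompt)"
    }
}

func handle(_ request: Request, local: LocalModel, cloud: CloudModel) -> String {
    // Sensitive data stays on the device whenever the local model suffices.
    if local.canHandle(request) {
        return local.respond(to: request)
    }
    // Otherwise the request is escalated to the more capable cloud model.
    return cloud.respond(to: request)
}

// Example usage
let simple = Request(prompt: "Summarize this email", complexity: 2)
let complex = Request(prompt: "Draft a report from 40 documents", complexity: 7)
print(handle(simple, local: LocalModel(), cloud: CloudModel()))
print(handle(complex, local: LocalModel(), cloud: CloudModel()))
```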
“We should think of AI as a partner that’s going to help us to do more and to be more creative,” Garg said. “If we see it like an assistant and a tutor at the same time, we learn how to use it better.”
Apple Intelligence will be made available to users in a testing phase this fall.
© 2024 The Atlanta Journal-Constitution. Distributed by Tribune Content Agency, LLC.