OpenAI, the company behind ChatGPT, just took its most direct shot at Google yet.
The company has unveiled its take on web search with the launch of SearchGPT, a temporary prototype of a new search tool it is testing in partnership with some of the biggest names in publishing, including The Atlantic, Vox Media and News Corp.
From the day’s weather forecast and a rundown of the latest global headlines to details about upcoming local concerts and ingredients for meals, the new tool gives users direct answers to their queries, with attributions and web links to sources, OpenAI says.
SearchGPT is not yet a publicly available product. OpenAI has said it will make the tool available to a small number of users and publishers in the weeks ahead as it refines the product before eventually integrating it into ChatGPT. Interested users can sign up for a waitlist.
A Northeastern University artificial intelligence and communications researcher says OpenAI’s approach with SearchGPT is a step in the right direction, since the company is working with publishers rather than using their work without permission and attribution, but the tool is still likely to have the same deep foundational issues as all large language model technologies.
“There’s still no actual intelligence, just context-free pattern matching based on language, so there’s still a pretty good chance that some of what gets spit out is going to be misleading or outright nonsense,” says Michael Ann DeVito, a Northeastern professor of computer sciences and communication studies. “I’d also note that training on news content, while probably better than training just on public internet data, still has its own embedded bias problems.”
The SearchGPT system is similar to Perplexity AI’s search engine, which also provides summarized answers and links to sources, though that company has come under fire from publishers for scraping the work of journalists without permission.
Google also has its own AI products, including its Gemini chatbot and its AI Overviews search feature. AI Overviews had a bumpy start this spring, when it gave users inaccurate information in their search results, suggesting people eat rocks and put glue on pizza.
OpenAI had long been rumored to be working on a search product to take on Google, especially as it struck deals with news publishers and online platforms such as Reddit over the past year.
Many users have already been using ChatGPT as a pseudo-search tool due to its speed and conversational nature. The chatbot, however, is notorious for “hallucinating” and sharing false information.
DeVito says these technologies are arriving at a time when media literacy among the general public has taken a nosedive, making it harder for people to tell a good source of information from a bad one.
“We’ve backslid quite a bit in the past 10 years, unfortunately,” she says. “We’ve gotten to this model where we say, ‘You’ll find it on the internet.’ But we haven’t taught people how to do that properly.”
For its part, ChatGPT states at the bottom of its web page that it can make mistakes and that it’s important for people to double-check the information. But DeVito says that warning does not go nearly far enough.
“This is very akin to when the little warning on cigarettes used to be tiny. Was that enough? No, absolutely not,” she adds. “We started seeing progress in public health when we made the warning labels huge and graphic. You need to catch people’s attention with the warning just as much as the rest of the interface, and the warnings people are putting on there are equivalent to the fine text that they expect nobody to read.”
At their best, these tools should be seen as novel toys to play around with, “not for anything mission critical,” she says.
“I would argue that it’s best not to use these,” she says. “If you are going to use these, I would be extremely skeptical of what you’re getting out of them. You need to double check everything. Honestly, the time it will take for you to do that will take more time than just looking it up yourself.”
Provided by Northeastern University. This story is republished courtesy of Northeastern Global News, news.northeastern.edu.
Citation: Can you trust AI-powered search engines like OpenAI’s SearchGPT? Expert is ‘extremely skeptical’ (2024, July 29), retrieved 29 July 2024 from https://techxplore.com/news/2024-07-ai-powered-openai-searchgpt-expert.html