Morning News
Is this a deer I see? Socially aware AI adapts by asking questions of humans

November 10, 2022
in Computer Sciences, Machine learning & AI
Scholars built an agent that learns by asking humans questions and adapts those questions based on socially aware observations. If “What type of animal is that?” elicits no response, the algorithm might ask instead, “Is that a deer I see?” Credit: DALL-E

As good as they’ve become, artificial intelligence agents are still largely only as good as the data upon which they were trained. They don’t know what they don’t know. In the real world, people faced with unfamiliar situations and surroundings adapt by watching what others around them are doing and by asking questions. When in Rome, as they say. Experts in educational psychology call this “socially situated learning.”

Until now, AI agents have lacked this ability to learn on the fly, but researchers at Stanford University recently announced that they have developed artificially intelligent agents with the ability to seek out new knowledge by asking people questions.

“We built an agent that looks at photos and learns to ask natural language questions about them to expand its knowledge beyond the datasets it was originally trained on,” says Ranjay Krishna, first author of a recent study appearing in the journal Proceedings of the National Academy of Sciences. Krishna earned his doctorate at Stanford and is now on the faculty at the University of Washington.

Uncanny awareness

The new approach combines aspects of computer vision and human cognitive and behavioral sciences to take machine learning in a new direction. The researchers call it “socially situated artificial intelligence.”

The kicker in this research is that when people are unwilling, or simply too uninterested, to respond to the AI’s questions, which can often seem simplistic or mundane, the AI adapts.

For instance, when analyzing a photo of a person and an unfamiliar four-legged animal, the algorithm might first ask, “What type of animal is that?” That might beget ironic or sarcastic answers (“That’s a human.”), or, worse, no answer at all. Instead, the algorithm might ask, “Is that a dog I see?” Posing the question this way is more likely to engender a truthful answer: “No, that’s a deer.”
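The adaptation described here can be pictured as a preference over question forms, driven by which forms people have actually answered. The sketch below is purely illustrative; the class and names are invented for this example, and the paper's real policy is learned from data rather than hand-coded:

```python
# Illustrative sketch: track which question forms people answer, and
# prefer the form with the higher observed response rate.
from collections import defaultdict

class QuestionPolicy:
    def __init__(self):
        # per question form: [times asked, times answered]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, form, answered):
        asked, ok = self.stats[form]
        self.stats[form] = [asked + 1, ok + (1 if answered else 0)]

    def response_rate(self, form):
        asked, ok = self.stats[form]
        return ok / asked if asked else 0.5  # optimistic prior for new forms

    def choose(self, forms):
        # pick the form people have been most willing to answer
        return max(forms, key=self.response_rate)

policy = QuestionPolicy()
# open-ended questions ("What type of animal is that?") mostly go unanswered...
for _ in range(8):
    policy.record("open", answered=False)
policy.record("open", answered=True)
# ...while polar questions ("Is that a deer I see?") tend to get replies
for _ in range(6):
    policy.record("polar", answered=True)

print(policy.choose(["open", "polar"]))  # polar
```

The same bookkeeping also explains the "cumulative effect" mentioned below: as more interactions are recorded, the estimated response rates sharpen and the agent's choices shift.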

“Much as we’d like to think that people are earnest respondents, willing to answer any question the AI might pose, often they are not,” Krishna says. “Our agent senses this and changes its questions based on its socially aware observations of which questions people will and won’t answer.”

The new agent achieves several goals at once. First, of course, it learns new visual concepts, which is the main goal. But, second, it also learns to read social norms. Additionally, Krishna notes, there is a cumulative effect. After asking questions and learning new information, the AI retrains itself. The next time, it asks different questions because it has learned more things about the world.

“There are only so many ways you can describe a table. A person might understandably not want to answer questions seen as disingenuous, nonsensical, or just plain boring,” Krishna says. “But the agent gets around those challenges with clever questions that become more sophisticated as the agent learns.”

Testing the hypothesis

To test their approach, the research team, including Krishna’s doctoral advisors Stanford HAI Co-Director Fei-Fei Li and Michael Bernstein, professors in Stanford’s Department of Computer Science, engaged in an eight-month experiment where their algorithm viewed images posted on a photography-based social media platform and asked questions of some 236,000 people, many of whom were the photographers themselves.

Over the course of the experiment, the new algorithm more than doubled its ability to recognize new visual information in images posted by its human correspondents.

The potential of social AI

Socially situated AI, the researchers believe, can overcome limitations on how AI learns and push intelligence gathering in new directions. The researchers think the approach creates opportunities for AI agents able to recognize their own anti-social behaviors and adapt questions on the fly to avoid that all-too-human quality: boredom.

“The agent has an iterative learning process where every once in a while, it uses all the new content that it has seen and retrains itself, so that the next time it would ask different questions based on the things it has learned about the world,” Krishna says. “As it learns to ask better questions, the human respondents stay engaged.”
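The iterative process Krishna describes can be outlined in a few lines. The stub below is a hypothetical sketch of that loop, not the authors' implementation; all class, function, and field names are invented for illustration:

```python
# Hypothetical outline: ask questions about photos, collect whatever
# answers humans give, periodically retrain, and repeat.
class StubModel:
    """Toy stand-in for the learned agent, for illustration only."""
    def __init__(self):
        self.vocab = set()

    def generate_question(self, photo):
        # ask a polar question about the model's current best guess
        return f"Is that a {photo['guess']} I see?"

    def retrain(self, knowledge):
        # "retraining" here just absorbs confirmed labels into the vocabulary
        self.vocab |= {answer for _, _, answer in knowledge}

def socially_situated_loop(model, photos, ask, rounds=3):
    knowledge = []  # (photo, question, answer) triples gathered so far
    for _ in range(rounds):
        for photo in photos:
            question = model.generate_question(photo)
            answer = ask(photo, question)  # a human may or may not reply
            if answer is not None:
                knowledge.append((photo, question, answer))
        # periodic retraining: fold new answers back in, so later rounds
        # ask different, better-informed questions
        model.retrain(knowledge)
    return model

photos = [{"guess": "dog", "truth": "deer"}]
model = socially_situated_loop(StubModel(), photos,
                               ask=lambda p, q: p["truth"])
print(model.vocab)  # {'deer'}
```

The design point is the placement of `retrain` inside the round loop: learning happens between batches of interaction, not once up front, which is what distinguishes this setup from training on a fixed dataset.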

Most important, it allows the AI to learn new information beyond the datasets upon which it was originally trained. Current methods of training are not unlike locking the agent in a room with a stack of books, as the authors note in their paper. Once those pages are learned, all future decisions must be made only using information gleaned from those books and nothing else.

Further complicating matters, whatever books the AI is given to train on must first be annotated by people—a process known in artificial intelligence as “labeling.” That is, human annotators must tell the AI what it is seeing before the AI can learn to see. Unfortunately, getting annotators to label routine content is a challenge. And, without that labeling, the AI cannot learn.

Instead of asking annotators to label data, agents that can learn socially by asking questions about their situations are more likely to garner helpful responses from people.

The new agent effectively achieves labeling by asking questions and, when it senses reluctance on the part of human correspondents, it learns to ask questions in new ways to get earnest, truthful answers.

There are certain potential risks as well. The authors point to Microsoft’s Tay, a similar agent deployed on Twitter that soon began posting anti-social tweets learned from its interactions with people, but argue that this socially situated AI does not suffer from the same issues. Users do not initiate interactions and, therefore, cannot coordinate attacks on the agent. The agent decides whom to interact with and asks interesting questions to control what it learns.

The authors also conducted experiments to study how AIs should introduce themselves to people to garner helpful responses and avoid “troll” responses. Still, Krishna says there is much research left to be done to account for the biases that AIs might learn from people and to mitigate the risks those learned biases might create.

For now, the current version of the agent operates digitally on social media. The next step for Krishna is to transfer this approach to real-world situations in which people might correct robots on the fly when they see them making a mistake. He foresees a day when people might be able to teach robots, in their own homes, to accomplish new tasks that make their lives easier.

Other potential applications include health care, where robots might ask providers to clarify their medical procedures, technologies that modify their interfaces based on direct user feedback, and culturally aware agents that can learn from diverse communities to improve learning.

“I would be interested in moving into the physical world where people are interacting with robots to get them to solve new tasks. Or, if you see your AI making a mistake, people should be able to quickly provide feedback and correct it immediately,” Krishna says of his next steps.

More information:
Ranjay Krishna et al, Socially situated artificial intelligence enables learning from human interaction, Proceedings of the National Academy of Sciences (2022). DOI: 10.1073/pnas.2115730119

Journal information: Proceedings of the National Academy of Sciences
Provided by Stanford University

Citation:
Is this a deer I see? Socially aware AI adapts by asking questions of humans (2022, November 10)
retrieved 10 November 2022
from https://techxplore.com/news/2022-11-deer-socially-aware-ai-humans.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
