While large language models (LLMs) have demonstrated remarkable capabilities in extracting data and generating coherent responses, there are real questions about how these artificial intelligence (AI) models reach their answers. At stake is the potential for unwanted bias and for the generation of nonsensical or inaccurate “hallucinations,” both of which can lead to false data.
That’s why SMU researchers Corey Clark and Steph Buongiorno are presenting a paper at the upcoming IEEE Conference on Games, scheduled for August 5-8 in Milan, Italy. They will present the framework they created, GAME-KG, which stands for “Gaming for Augmenting Metadata and Enhancing Knowledge Graphs.”
The research is published on the arXiv preprint server.
A knowledge graph (KG) is a structured representation of information that captures relationships between entities in a way that is easily interpretable by both humans and machines. It organizes data into nodes (representing entities) and edges (representing relationships between entities). Humans create and maintain knowledge graphs, combining their expertise with automated tools and algorithms.
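To make the node-and-edge structure concrete, here is a minimal sketch of a small knowledge graph in Python using the networkx library. The entities and relationships are invented for illustration and are not drawn from the researchers' data.

```python
# Minimal sketch of a knowledge graph: nodes are entities, and directed
# edges carry a "relation" label describing how those entities connect.
# The entities and relations below are purely illustrative.
import networkx as nx

kg = nx.DiGraph()

# Explicit facts of the kind that might be stated directly in a source document.
kg.add_edge("Person A", "City X", relation="arrested_in")
kg.add_edge("Person A", "Organization Y", relation="member_of")
kg.add_edge("Organization Y", "City X", relation="operates_in")

# Both humans and machines can traverse the graph to answer questions.
for subj, obj, data in kg.edges(data=True):
    print(f"{subj} --{data['relation']}--> {obj}")
```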
The framework developed by Clark and Buongiorno modifies explicit and implicit connections in knowledge graphs, which can enhance an LLM’s ability to provide accurate responses.
While knowledge graphs enhance an LLM’s reasoning capabilities and performance, they can be challenging to create because of the complexity of capturing, organizing, and integrating data from various sources. To overcome these difficulties, the GAME-KG framework uses video games to collect human feedback that modifies and validates knowledge graphs.
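GAME-KG's actual data structures are not described in this article, so the sketch below is only a rough illustration, under assumed structures, of how gameplay feedback might be recorded and applied to a graph: players confirm, reject, or add relationships, and those edits update the KG.

```python
# Hypothetical sketch: applying player feedback to a knowledge graph.
# GAME-KG's real data structures are not shown here; this only illustrates
# the idea of gameplay producing edits that validate or modify a KG.
from dataclasses import dataclass

import networkx as nx


@dataclass
class PlayerFeedback:
    action: str    # "confirm", "reject", or "add"
    subject: str
    relation: str
    obj: str


def apply_feedback(kg: nx.DiGraph, fb: PlayerFeedback) -> None:
    if fb.action == "add":
        # The player surfaced a connection the original graph lacked.
        kg.add_edge(fb.subject, fb.obj, relation=fb.relation, validated=True)
    elif fb.action == "confirm" and kg.has_edge(fb.subject, fb.obj):
        kg[fb.subject][fb.obj]["validated"] = True
    elif fb.action == "reject" and kg.has_edge(fb.subject, fb.obj):
        kg.remove_edge(fb.subject, fb.obj)


kg = nx.DiGraph()
kg.add_edge("Person A", "Organization Y", relation="member_of")
apply_feedback(kg, PlayerFeedback("add", "Organization Y", "linked_to", "Case Z"))
print(list(kg.edges(data=True)))
```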
“GAME-KG is a way for humans to interact with KGs to either integrate new knowledge or modify misunderstandings,” explains Clark, deputy director of the Guildhall, SMU’s graduate program for video game design. “It makes it easier for humans to correct an AI when we start seeing hallucinations. And when you ask a question, the AI uses our modified knowledge graphs to provide an answer. Then we can actually see how AI came to its conclusion because the knowledge graph allows us to trace the information used.”
Clark and Buongiorno’s research explores GAME-KG’s potential across two demonstrations. The first uses the video game Dark Shadows, a film noir-style mystery game that collects player feedback to modify and validate knowledge graphs built from US Department of Justice press releases on human trafficking.
The second demonstration uses OpenAI’s GPT-4 to answer questions about the human trafficking press releases. The model is first prompted to answer based on the original knowledge graph built from the releases; a human then modifies that graph by adding implicit relationships between entities.
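As an illustration of how such a modified graph could feed an LLM, the sketch below serializes knowledge-graph triples into a prompt and sends it to GPT-4 through the OpenAI Python SDK (v1.x). The graph contents, the added implicit edge, the prompt wording, and the question are assumptions made for illustration, not the researchers' actual pipeline.

```python
# Hedged sketch: serialize KG triples into context for a GPT-4 prompt.
# The graph contents and prompt format are illustrative assumptions.
import networkx as nx
from openai import OpenAI  # requires OPENAI_API_KEY in the environment

kg = nx.DiGraph()
kg.add_edge("Person A", "City X", relation="arrested_in")
kg.add_edge("Organization Y", "City X", relation="operates_in")

# A human adds an implicit relationship the original extraction missed.
kg.add_edge("Person A", "Organization Y", relation="affiliated_with")

# Flatten the graph into plain-text triples the model can read.
triples = "\n".join(
    f"({s}, {d['relation']}, {o})" for s, o, d in kg.edges(data=True)
)

prompt = (
    "Answer using only the facts in these knowledge-graph triples, "
    "and cite the triples you relied on:\n"
    f"{triples}\n\nQuestion: Which organization is Person A connected to?"
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Because the answer is grounded in explicit triples, a reviewer can trace which edges, including the human-added one, the model relied on, which is the kind of traceability Clark describes above.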
Based on their results, the researchers believe the GAME-KG framework is an important step toward using games to modify knowledge graphs so that LLMs produce more accurate output.
“We’ve been working on using LLMs for critical types of situations, like human trafficking. Understanding how the AI came up with its answer is crucial,” said Buongiorno, a postdoctoral research fellow at SMU. “We need to be asking: How can we make LLMs more accurate? How can we inspect data and reduce hallucinations? Our research shows that AI is a tool that requires human interaction to guide and direct it. It’s up to us to create the methodology to make LLMs more useful and reliable.”
More information: Steph Buongiorno et al, A Framework for Leveraging Human Computation Gaming to Enhance Knowledge Graphs for Accuracy Critical Generative AI Applications, arXiv (2024). DOI: 10.48550/arXiv.2404.19729
Provided by Southern Methodist University
Citation: Researchers to present new tool for enhancing AI transparency and accuracy at conference (2024, July 30), retrieved 30 July 2024 from https://techxplore.com/news/2024-07-tool-ai-transparency-accuracy-conference.html