Saturday, January 28, 2023
Morning News

New software allows nonspecialists to intuitively train machines using gestures

by author
November 1, 2022
in Machine learning & AI, Software
Machine learning, from you
In each image of the HuTics custom data set, the users’ hands are visualized in blue and the object in green. HuTics is used to train a machine learning model. Credit: ©2022 Yatani and Zhou

Many computer systems that people interact with on a daily basis require knowledge about certain aspects of the world, or models, to work. These systems have to be trained, often needing to learn how to recognize objects from video or image data. This data frequently contains superfluous content that reduces the accuracy of models. So, researchers found a way to incorporate natural hand gestures into the teaching process. This way, users can more easily teach machines about objects, and the machines can also learn more effectively.

You’ve probably heard the term machine learning before, but are you familiar with machine teaching? Machine learning is what happens behind the scenes when a computer uses input data to form models that can later be used to perform useful functions. But machine teaching is the somewhat less explored part of the process, which deals with how the computer gets its input data to begin with.

In the case of visual systems, for example ones that can recognize objects, people need to show objects to a computer so it can learn about them. But the typical ways of doing this have drawbacks that researchers from the University of Tokyo’s Interactive Intelligent Systems Laboratory sought to address.

The model made with HuTics allows LookHere to use gestures and hand positions to provide extra context for the system to pick out and identify the object, highlighted in red. Credit: ©2022 Yatani and Zhou

“In a typical object training scenario, people can hold an object up to a camera and move it around so a computer can analyze it from all angles to build up a model,” said graduate student Zhongyi Zhou.

“However, machines lack our evolved ability to isolate objects from their environments, so the models they make can inadvertently include unnecessary information from the backgrounds of the training images. This often means users must spend time refining the generated models, which can be a rather technical and time-consuming task. We thought there must be a better way of doing this that’s better for both users and computers, and with our new system, LookHere, I believe we have found it.”

Zhou, working with Associate Professor Koji Yatani, created LookHere to address two fundamental problems in machine teaching: first, teaching efficiency, that is, how to minimize the time and technical knowledge users must invest; and second, learning efficiency, that is, how to ensure better learning data from which machines can create models.

LookHere achieves these by doing something novel and surprisingly intuitive. It incorporates users’ hand gestures into the way an image is processed before the machine adds it to its model. For example, a user can point to or present an object to the camera in a way that emphasizes its significance relative to the other elements in the scene. This is exactly how people might show objects to each other. By eliminating extraneous details and emphasizing what is actually important in the image, the computer gains better input data for its models.
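The gesture-driven emphasis described here can be illustrated with a toy heuristic: keep image content near the user's hands and attenuate the rest. This is only a sketch under assumed inputs (a precomputed binary hand mask); the actual LookHere system learns this behavior from the HuTics data set rather than applying a fixed rule, and the function name below is hypothetical.

```python
import numpy as np

def emphasize_near_hands(image, hand_mask, sigma=50.0):
    """Weight image pixels by proximity to the detected hand region.

    A toy illustration of gesture-aware emphasis: regions the user's
    hands indicate are kept bright, while the background is attenuated.
    This Gaussian-falloff heuristic is an assumption for illustration,
    not the team's learned model.
    """
    h, w = hand_mask.shape
    ys, xs = np.nonzero(hand_mask)
    if len(xs) == 0:                      # no hands detected: leave image unchanged
        return image
    cy, cx = ys.mean(), xs.mean()         # centroid of the hand region
    yy, xx = np.mgrid[0:h, 0:w]
    dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
    weight = np.exp(-dist2 / (2 * sigma ** 2))  # 1.0 at the hands, fading outward
    return (image * weight[..., None]).astype(image.dtype)

# Toy example: a white 100x100 image with the "hands" in the centre.
img = np.full((100, 100, 3), 255, dtype=np.uint8)
mask = np.zeros((100, 100), dtype=bool)
mask[45:55, 45:55] = True
out = emphasize_near_hands(img, mask)
```

Any weighting that down-weights background relative to the indicated region captures the same idea; the Gaussian falloff is just one simple choice.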

“The idea is quite straightforward, but the implementation was very challenging,” said Zhou. “Everyone is different and there is no standard set of hand gestures. So, we first collected 2,040 example videos of 170 people presenting objects to the camera into HuTics. These assets were annotated to mark what was part of the object and what parts of the image were just the person’s hands.

“LookHere was trained with HuTics, and when compared to other object recognition approaches, can better determine what parts of an incoming image should be used to build its models. To make sure it’s as accessible as possible, users can use their smartphones to work with LookHere and the actual processing is done on remote servers. We also released our source code and data set so that others can build upon it if they wish.”
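The annotation scheme Zhou describes, marking which parts of each image are the object and which are just the presenter's hands, can be sketched as a simple record type. The field names and types below are illustrative assumptions, not the released data set's actual schema.

```python
from dataclasses import dataclass

@dataclass
class AnnotatedFrame:
    """One annotated example in the spirit of the HuTics data set:
    each frame marks which pixels belong to the presented object and
    which belong to the presenter's hands. Illustrative schema only."""
    video_id: str             # which of the collected videos this frame is from
    frame_index: int          # position of the frame within that video
    object_pixels: frozenset  # (row, col) pairs labelled "object"
    hand_pixels: frozenset    # (row, col) pairs labelled "hand"

    def overlaps(self) -> frozenset:
        """Pixels labelled both ways, which annotators would need to resolve."""
        return self.object_pixels & self.hand_pixels

frame = AnnotatedFrame("demo_001", 0,
                       frozenset({(10, 10), (10, 11)}),
                       frozenset({(40, 5)}))
```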

Factoring in the reduced demand on users’ time that LookHere affords people, Zhou and Yatani found that it can build models up to 14 times faster than some existing systems. At present, LookHere deals with teaching machines about physical objects and it uses exclusively visual data for input. But in theory, the concept can be expanded to use other kinds of input data such as sound or scientific data. And models made from that data would benefit from similar improvements in accuracy, too.

The research was published as part of The 35th Annual ACM Symposium on User Interface Software and Technology.

More information:
Zhongyi Zhou et al, Gesture-aware Interactive Machine Teaching with In-situ Object Annotations, The 35th Annual ACM Symposium on User Interface Software and Technology (2022). DOI: 10.1145/3526113.3545648

Provided by
University of Tokyo

Citation:
New software allows nonspecialists to intuitively train machines using gestures (2022, October 31)
retrieved 1 November 2022
from https://techxplore.com/news/2022-10-software-nonspecialists-intuitively-machines-gestures.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
Tags: computer, machine learning, machines, physical objects, source code, users