The research groups led by Prof. Kei Sakaguchi from the School of Engineering at Tokyo Institute of Technology and Prof. Walid Saad from Virginia Tech have jointly realized a Smart Mobility Digital Twin that replicates traffic conditions from physical space in cyberspace in real time.
Using this digital twin, they successfully demonstrated a hybrid autonomous driving system that combines both self-driving and remote operation. The research is published in the journal IEEE Transactions on Intelligent Vehicles.
While digital twin technology, which replicates physical objects and systems in cyberspace, has seen rapid growth in fields like manufacturing and construction, it had not been applied to the dynamic mobility sector until now.
In this research, the Smart Mobility Education & Research Field at Tokyo Tech’s Ookayama Campus was utilized to build a smart mobility digital twin. Furthermore, a demonstration system for hybrid autonomous driving, combining self-driving and remote control, was developed using this digital twin.
In the demonstration, the digital twin identified safer and more efficient routes for autonomous vehicles in real time and relayed this information back to the vehicles. This confirmed that hybrid autonomous driving, integrating both local autonomy and remote guidance, is feasible.
This research enables the fusion of local path planning based on the vehicle’s own sensors with global path planning based on the digital twin’s broader view of the environment. The fusion is achieved through V2X communication and improves both traffic safety and efficiency simultaneously.
Digital twins, which reproduce objects and systems from physical space in cyberspace, have developed rapidly in secondary industries such as manufacturing and construction. More recently, they have been applied to tertiary industries such as health care, education, and e-commerce, and are now extending to primary industries such as agriculture and fisheries.
The advantages of digital twins include not only visualization using computer vision technology in cyberspace, but also real-time monitoring through sensors and IoT technology, prediction using simulation and AI, and optimal control and anomaly avoidance based on predictions.
The difficulty of constructing digital twins varies with the dynamics of the objects or systems. In manufacturing and construction, where dynamics are low, digital twin implementation is relatively easy, but in mobility, with high dynamics, achieving a digital twin has been challenging.
Against this backdrop, Tokyo Institute of Technology and Virginia Tech have been working since 2022 on a joint research project commissioned by Japan’s National Institute of Information and Communications Technology (NICT) and the U.S. National Science Foundation (NSF).
This project, titled “Research and Development of Wireless Edge Computing Service Platforms for IoFDT (Internet of Federated Digital Twin) to Realize Society 5.0,” aims to construct a Smart Mobility Digital Twin and has successfully implemented the world’s first hybrid autonomous and remote driving using this digital twin.
Tokyo Institute of Technology, in collaboration with members of the Super Smart Society Promotion Consortium, has been constructing the Smart Mobility Education & Research Field at Ookayama Campus since 2019.
This field is equipped with two autonomous vehicles capable of Level 4/5 autonomous driving and four roadside units (RSUs) intended for next-generation ITS (Intelligent Transportation System). The RSUs are equipped with sensors such as LiDAR and cameras, V2X (vehicle-to-everything) communication supporting 760 MHz, 5.7 GHz, and 60 GHz, edge computing (MEC), and backhaul networks to the cloud, enabling infrastructure-coordinated safe driving support.
The Smart Mobility Digital Twin reproduces this physical mobility field in real time in cyberspace, allowing collision prediction and route planning to be performed on the digital twin and thereby enabling safe driving support.
The system configuration of the Smart Mobility Digital Twin is shown in Fig. 1. It consists of autonomous vehicles and RSUs in the physical space; edge and cloud servers; a virtualization platform orchestrating the entire network; the ROS (Robot Operating System) and Autoware software packages for autonomous driving running in cyberspace; static information such as the Ookayama point-cloud map and 3D models; 3D visualization software such as Unity; and dynamic smart mobility applications running on top of this infrastructure.
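As a rough, purely illustrative sketch (not the project's actual configuration files), the components listed above could be grouped along the following lines; every name and grouping here is an assumption made for readability.

```python
# Hypothetical outline of the Smart Mobility Digital Twin stack described above.
# Component names and groupings are illustrative assumptions, not the real deployment.
SMART_MOBILITY_DT_STACK = {
    "physical_space": {
        "autonomous_vehicles": ["vehicle_1", "vehicle_2"],       # Level 4/5 capable
        "roadside_units": ["rsu_1", "rsu_2", "rsu_3", "rsu_4"],  # LiDAR + camera + V2X
    },
    "network_and_compute": {
        "edge_servers": "on-vehicle and RSU-side MEC",
        "cloud_servers": "field-wide aggregation",
        "orchestration": "virtualization platform spanning edge and cloud",
    },
    "cyber_space": {
        "autonomy_software": ["ROS", "Autoware"],
        "static_data": ["Ookayama point-cloud map", "3D models"],
        "visualization": "Unity-based 3D viewer",
        "applications": ["collision prediction", "route planning"],
    },
}
```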
Edge servers in autonomous vehicles and RSUs use sensors like LiDAR and cameras to detect surrounding traffic participants such as vehicles, bicycles, and pedestrians, constructing localized digital twins. Information detected by multiple vehicles and RSUs is aggregated in the cloud and superimposed on point clouds/3D maps to construct a wide-area digital twin of the entire field.
By incorporating such a hierarchical structure of local and wide-area digital twins (with any number of layers), it is possible to accommodate various smart mobility use cases with different requirements, such as collision avoidance and delivery optimization.
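To make the local-to-wide-area flow concrete, here is a minimal Python sketch under assumed, simplified interfaces: each local twin holds detections in its own sensor frame together with a fixed offset to the shared map frame, and the cloud-side step transforms and merges them into one wide-area view. None of these class or function names come from the actual system.

```python
# Minimal sketch (illustrative only) of fusing local digital twins into a
# wide-area digital twin: detections are shifted into the map frame and
# duplicate observations of the same object keep the freshest timestamp.
from dataclasses import dataclass, field

@dataclass
class Detection:
    obj_id: str        # e.g. "pedestrian_07"
    obj_class: str     # "vehicle", "bicycle", "pedestrian", ...
    x: float           # position in the local sensor frame [m]
    y: float
    timestamp: float   # detection time [s]

@dataclass
class LocalTwin:
    source: str        # vehicle or RSU identifier
    offset: tuple      # assumed rigid (dx, dy) offset of this sensor in the map frame
    detections: list = field(default_factory=list)

def aggregate(local_twins):
    """Fuse local twins into one wide-area view keyed by object id."""
    global_view = {}
    for twin in local_twins:
        dx, dy = twin.offset
        for det in twin.detections:
            mapped = Detection(det.obj_id, det.obj_class,
                               det.x + dx, det.y + dy, det.timestamp)
            prev = global_view.get(det.obj_id)
            if prev is None or mapped.timestamp > prev.timestamp:
                global_view[det.obj_id] = mapped
    return global_view
```

Because the fused output has the same per-object form as a local twin's detections, the same aggregation step could in principle be repeated at higher layers, which corresponds to the "any number of layers" property mentioned above.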
Fig. 2 shows an example of the Ookayama Smart Mobility Digital Twin. The bottom part displays photos of vehicles and RSUs in the physical space, while the top part shows real-time information about vehicles (blue) and pedestrians (pink) superimposed on a 3D map in cyberspace.
The middle part shows detection results superimposed on the point cloud, along with the detection ranges of the LiDAR and other sensors; detections from multiple RSUs can be seen fused together. Despite delays of approximately 10 ms for the local digital twins and 100 ms for the global digital twin, the physical space and the digital twin remain almost synchronized in real time.
Hybrid autonomous driving integrates path planning based on local environmental observations by autonomous vehicles with path planning based on global environmental observations provided by the digital twin through V2X communication. This enables simultaneous improvements in both traffic safety and efficiency.
Fig. 3 shows the demonstration system for hybrid autonomous driving. A digital twin of the autonomous vehicle is constructed in cyberspace, path planning is performed on the global digital twin, the optimized path is sent back to the autonomous vehicle in physical space, and the vehicle then drives autonomously using the selected path and its own sensors.
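The control flow described above could look roughly like the following sketch. It is not the authors' implementation: the `vehicle` and `twin_client` interfaces, the candidate-route representation as lists of (x, y) waypoints, and the simple obstacle-counting cost are all assumptions made for illustration, and the detection records are the `Detection`-style objects from the earlier sketch.

```python
# Illustrative sketch of the hybrid driving idea: the cyberspace side scores
# candidate routes against the wide-area digital twin, the chosen route is fed
# back to the vehicle over V2X, and the vehicle drives it with its own sensors.
# All interfaces and names here are assumptions, not the demonstrated system.
import math

def obstacle_cost(route, global_view, radius=5.0):
    """Count detected objects lying within `radius` metres of any waypoint."""
    cost = 0
    for wx, wy in route:
        for det in global_view.values():
            if math.hypot(det.x - wx, det.y - wy) < radius:
                cost += 1
    return cost

def plan_on_global_twin(candidate_routes, global_view):
    """Pick the candidate route least obstructed in the global twin's view."""
    return min(candidate_routes, key=lambda r: obstacle_cost(r, global_view))

def hybrid_step(vehicle, twin_client, candidate_routes):
    """One planning cycle: global planning in cyberspace, local driving in
    physical space. `vehicle` and `twin_client` stand in for the on-board
    autonomy stack and the V2X link to the digital twin."""
    global_view = twin_client.latest_global_view()      # wide-area twin snapshot
    route = plan_on_global_twin(candidate_routes, global_view)
    vehicle.set_route(route)                             # route fed back over V2X
    vehicle.drive_with_local_sensing()                   # local autonomy takes over
```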
This is the first practical implementation of such a hybrid autonomous driving system. While an autonomous vehicle’s view is limited to its immediate surroundings, much like a human driver’s, the global digital twin observes road conditions from a bird’s-eye view in real time, allowing safer and more efficient routes to be selected.
During the demonstration experiment, the global digital twin in cyberspace detected a parked vehicle and many pedestrians on the autonomous vehicle’s planned route, so the route was changed to a safer and more efficient surrounding road. This change was fed back to the physical vehicle, confirming the realization of hybrid autonomous driving.
More information:
Kui Wang et al, Smart Mobility Digital Twin Based Automated Vehicle Navigation System: A Proof of Concept, IEEE Transactions on Intelligent Vehicles (2024). DOI: 10.1109/TIV.2024.3368109