New Real-time Localization And Mapping Tools For Robotics, VR, And AR

Sep 10, 2018

A large group of researchers at Imperial College London, the University of Edinburgh, the University of Manchester, and Stanford University have recently collaborated on a project exploring the application of real-time localization and mapping tools for robotics, autonomous vehicles, virtual reality (VR) and augmented reality (AR).

"The objective of our work was to bring expert researchers from computer vision, hardware and compiler communities together to build future systems for robotics, VR/AR, and the Internet of Things (IoT)," the researchers told Tech Xplore in an email. "We wanted to build robust computer vision systems that are able to perceive the world at very low power budget but with desired accuracy; we are interested in the perception per Joule metric."
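The article does not spell out how the "perception per Joule" metric is computed. As a purely illustrative sketch, one plausible reading is a task-accuracy score divided by the energy consumed; the function name and the numbers below are assumptions, not taken from the researchers' work:

```python
# Hedged sketch: assume "perception per Joule" means a task-accuracy
# score (0..1) divided by the energy spent on the task, in joules.
# Both the definition and the sample figures are hypothetical.

def perception_per_joule(accuracy, energy_joules):
    """Figure of merit: higher is better (more perception per unit energy)."""
    return accuracy / energy_joules

# Compare two hypothetical SLAM configurations:
fast_low_power = perception_per_joule(0.90, 2.0)   # lower accuracy, little energy
slow_accurate  = perception_per_joule(0.95, 10.0)  # higher accuracy, much more energy

print(fast_low_power > slow_accurate)
```

Under this reading, a slightly less accurate but far more frugal configuration can score much higher, which matches the researchers' emphasis on low power budgets with "desired" rather than maximal accuracy.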

The researchers involved in the project combined their skills and expertise to assemble the algorithms, architectures, tools, and software necessary to deliver simultaneous localization and mapping (SLAM). Their findings could aid those applying SLAM in a variety of fields to select and configure algorithms and hardware that achieve the best trade-off between performance, accuracy, and energy consumption.

SLAM algorithms are methods that can construct or update a map of an unknown environment while keeping track of a particular agent's location within it. This technology can have useful applications in a number of fields, for instance in the development of autonomous vehicles, robotics, VR, and AR.
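The core idea, building a map while tracking the agent inside it, can be illustrated with a deliberately minimal sketch. The class and method names below are hypothetical, and real SLAM systems add noise models, data association, and loop closure on top of this:

```python
import math

# Minimal illustrative sketch of the SLAM idea: dead-reckon the agent's
# 2D pose from odometry while adding observed landmarks, reported in the
# agent's own frame, to a world-frame map. Hypothetical code, not from
# the researchers' toolchain.

class TinySlam:
    def __init__(self):
        self.x, self.y, self.theta = 0.0, 0.0, 0.0  # agent pose (m, m, rad)
        self.landmarks = {}                          # id -> (world x, world y)

    def move(self, forward, turn):
        """Update the pose estimate from an odometry reading."""
        self.theta += turn
        self.x += forward * math.cos(self.theta)
        self.y += forward * math.sin(self.theta)

    def observe(self, landmark_id, rel_x, rel_y):
        """Map a landmark seen at (rel_x, rel_y) in the agent's frame."""
        wx = self.x + rel_x * math.cos(self.theta) - rel_y * math.sin(self.theta)
        wy = self.y + rel_x * math.sin(self.theta) + rel_y * math.cos(self.theta)
        self.landmarks[landmark_id] = (wx, wy)

slam = TinySlam()
slam.move(1.0, 0.0)             # drive 1 m forward along x
slam.observe("door", 2.0, 0.0)  # landmark 2 m ahead -> world (3, 0)
```

Each `observe` call grows the map while each `move` call updates the location estimate; the hard part of real SLAM is doing both consistently when odometry and observations are noisy.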

"Our research is already having an impact on many fields such as robotics, VR/AR, and IoT, where machines are always-on and are able to communicate and perform their tasks with reasonable accuracy, without interruptions, at very little power consumption," the researchers said.

This comprehensive project has led to several important findings, and to the development of new tools that could greatly facilitate the implementation of SLAM in robotics, VR, AR, and autonomous vehicles.

The study also made several contributions to hardware design, for instance developing profiling tools to locate and evaluate performance bottlenecks in both native and managed applications. The researchers presented a full workflow for creating hardware for computer vision applications, which could be applied to future platforms.


Focal-plane sensor-processor arrays (FPSPs) are parallel processing systems in which each pixel has its own processing element.
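The per-pixel-processor idea can be sketched as follows. On a real FPSP the operation runs simultaneously in every pixel's processing element; this hypothetical snippet only emulates that elementwise step sequentially:

```python
# Illustrative sketch of FPSP-style processing: every pixel has its own
# processing element, so a simple operation such as thresholding is
# applied to all pixels "at once". Emulated sequentially here; the
# function name and frame values are hypothetical.

def per_pixel_threshold(frame, level):
    """Each simulated processing element compares its own pixel to a threshold."""
    return [[1 if pixel >= level else 0 for pixel in row] for row in frame]

frame = [[10, 200],
         [180, 30]]
binary = per_pixel_threshold(frame, 128)  # [[0, 1], [1, 0]]
```

Because computation happens at the focal plane, only the reduced result (here, a binary image) needs to leave the sensor, which is one reason such devices are attractive for low-power, always-on vision.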
