ANavS showcased its Multi-Sensor RTK Positioning Module, with modular and flexibly configurable sensor-fusion software, at the recent International VDI Conference Autonomous Trucks, held virtually. The products target autonomous navigation, robotics and automation, unmanned aerial vehicles (UAVs), surveying, hydrology and the provision of real-time high-accuracy maps.
The hardware-software combination provides precise position, velocity and attitude information in any location and environment, as well as a map with semantic information. The sensor fusion combines raw data from up to three multi-frequency, multi-GNSS receivers, an inertial measurement unit (IMU), a controller area network (CAN) interface for wheel-odometry measurements, a camera and a lidar. In addition, patented real-time kinematic (RTK) and precise point positioning (PPP) algorithms and an artificial intelligence (AI)-based semantic segmentation of lidar point clouds are used for data processing.
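To illustrate the kind of loosely coupled GNSS/odometry fusion hinted at above, here is a minimal one-dimensional Kalman-filter sketch. It is purely illustrative: the function name, noise values and constant-velocity model are assumptions, and the ANavS module actually performs tightly coupled multi-sensor RTK, which is considerably more involved.

```python
# Illustrative sketch: fusing wheel-odometry velocity with GNSS
# position fixes in a 1-D Kalman filter. All names and tuning
# values here are hypothetical, not ANavS's implementation.
import numpy as np

def fuse(x, P, odo_v, dt, gnss_pos, q=0.1, r=0.5):
    """One predict/update cycle.
    x: [position, velocity] state, P: 2x2 covariance,
    odo_v: wheel-odometry velocity, gnss_pos: GNSS position fix."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
    x = F @ x                               # predict state
    x[1] = odo_v                            # velocity from wheel odometry
    P = F @ P @ F.T + q * np.eye(2)         # add process noise
    H = np.array([[1.0, 0.0]])              # GNSS observes position only
    S = H @ P @ H.T + r                     # innovation covariance
    K = P @ H.T / S                         # Kalman gain
    x = x + (K * (gnss_pos - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulated run: vehicle moving at 1 m/s, one GNSS fix per second.
x, P = np.array([0.0, 0.0]), np.eye(2)
for t in range(1, 6):
    x, P = fuse(x, P, odo_v=1.0, dt=1.0, gnss_pos=float(t))
```

After a few cycles the position estimate converges to the GNSS track while the odometry keeps the velocity state consistent between fixes.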
The GNSS/INS/ODO tightly coupled RTK positioning is performed within the ANavS Multi-Sensor Module. The ANavS camera/lidar simultaneous localization and mapping (SLAM) and the deep-learning-based semantic segmentation are performed on an NVIDIA Jetson AGX Xavier. Pose information obtained from the camera/lidar SLAM is fed into the Multi-Sensor RTK module to maintain accurate pose information even in environments without GNSS signal reception, such as garages or tunnels.
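The handover described above, where SLAM poses sustain the solution when GNSS drops out, can be sketched as a simple source-selection step. The types, names and switching logic below are assumptions for illustration only; the actual module fuses the SLAM pose into its filter rather than merely switching sources.

```python
# Hypothetical sketch of GNSS-denied fallback: prefer the RTK/GNSS
# pose, fall back to the camera/lidar SLAM pose in tunnels or garages.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    x: float        # east position, meters (illustrative frame)
    y: float        # north position, meters
    heading: float  # radians

def select_pose(gnss_pose: Optional[Pose], slam_pose: Optional[Pose]) -> Pose:
    """Return the best available pose source."""
    if gnss_pose is not None:
        return gnss_pose          # RTK fix available: use it
    if slam_pose is not None:
        return slam_pose          # GNSS-denied: SLAM keeps tracking
    raise RuntimeError("no pose source available")

# In a tunnel: GNSS fix lost, SLAM still provides a pose.
pose = select_pose(None, Pose(12.0, 3.5, 0.8))
```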
ANavS is based in Munich, Germany. Several German automotive suppliers use ANavS products to help develop advanced driver-assistance systems (ADAS) and autonomous driving functions, or as a reference system for positioning.