GNSS/Inertial-Aided Location and Mapping Indoors
Imagine walking into an unfamiliar place — a shopping mall or an office building — wandering around for 20 minutes or so, and then coming out with a map of the facility that others could use to navigate through it — say, a fire rescue crew or simply someone looking for an office suite. A team of researchers at the German Aerospace Center is working on a “walk-about” solution that uses GPS to initialize the beginning of the traverse and tie the position data to an absolute coordinate frame. Foot-mounted inertial sensors then increase the map’s accuracy as the person revisits previous points in the building.
Digital cartography and automated mapping techniques based on GNSS positioning have transformed our relationship to the physical world. The convergence of these complementary technologies is supporting the growth of commercial and consumer location-based applications that benefit from the coupling of real-time information with maps that are more current than ever — at least in environments that have access to radio signals from orbiting GNSS satellites.
Buildings, roads, mobile assets, points of interest, and people can be located outdoors based directly on GNSS-derived geographic coordinates or conventional addresses tied to these coordinates. Such advances, however, are largely denied to us in underground or indoor venues where satellite signals do not reach.
Many prospective location-based service (LBS) applications — including safety-critical needs for emergency and security services — would become feasible if the associated mapping and real-time positioning requirements could be met. Finding alternative technologies that can meet these challenges has drawn the attention of many researchers and system developers.
Recent work has shown remarkable advances in pedestrian indoor positioning aided by low-cost microelectromechanical system (MEMS) inertial sensors. At present, however, fully autonomous inertial navigation remains out of reach, because sensor error–induced drift causes position errors to grow without bound within a few seconds.
This article introduces a new pedestrian localization technique that builds on the principle of simultaneous localization and mapping (SLAM). Our approach is called FootSLAM because it depends largely on the use of shoe-mounted inertial sensors that measure a pedestrian’s steps while walking.
In contrast to SLAM as used in robotics, our approach does not require specific feature-detection sensors, such as cameras or laser scanners. The work extends prior work in pedestrian navigation that uses known building plan layouts to constrain a location-estimation algorithm driven by a stride-estimation process. In our approach, building plans (i.e., maps) can be learned automatically while people walk about in a building, either to localize that specific person directly or in an offline fashion, to provide maps for other people.
We have combined our system with GPS and have undertaken experiments in which a person enters a building from outside and walks around within this building. The GPS position at the entry to the building provides a point of beginning for subsequent positioning/mapping without GPS.
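The idea of anchoring the indoor traverse to the last good outdoor GPS fix can be sketched as follows. This is an illustrative reconstruction, not code from the article; the function name and the flat-earth (equirectangular) approximation are our own assumptions, reasonable over building-sized distances.

```python
import math

def gps_to_local_en(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    """Convert a GPS fix to local east/north metres relative to an
    origin fix, using a flat-earth (equirectangular) approximation
    that is adequate over short, building-sized distances."""
    R = 6371000.0  # mean Earth radius in metres
    dlat = math.radians(lat_deg - origin_lat_deg)
    dlon = math.radians(lon_deg - origin_lon_deg)
    east = R * dlon * math.cos(math.radians(origin_lat_deg))
    north = R * dlat
    return east, north
```

In such a scheme, the fix taken at the building entrance becomes the local origin, and all subsequent step vectors from the foot-mounted sensors accumulate in this absolute-referenced frame.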
Our experiments were undertaken by recording the raw sensor data and ground truth reference information. Offline processing and comparison with the ground-truth reference information allows us to quantitatively evaluate the achieved localization accuracy.
Building on the Past
. . .
. . .
SLAM for Pedestrian Dead-Reckoning
The main difference from robotic SLAM is that our method uses no visual or similar sensors at all. In fact, the only sensors used are the foot-mounted IMU and, optionally, a magnetometer and GPS receiver. In this article, we show that a pedestrian’s location and the building layout can be jointly estimated by using the pedestrian’s odometry alone, as measured by the foot-mounted IMU.
We have confirmed our approach by using real data obtained from a pedestrian walking in an actual indoor environment. Our experiments involved no simulations, and we will present the results from these in later sections.
. . .
Theoretical Basis of FootSLAM
A person may walk in a random fashion while talking on a mobile phone, or follow a more or less directed trajectory toward a certain destination. Such phases of motion are governed by the person’s inner mental state and, consequently, cannot be easily estimated.
. . .
Our Model as a Dynamic Bayesian Network
This approach is used in all kinds of sequential filtering problems in which noisy observations are used to estimate an evolving sequence of hidden states. Each node in the DBN represents a random variable and carries a time index. Arrows from one state variable to the next denote causation (in our interpretation); hence, arrows can never point backward in time.
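As a rough illustration of the sequential estimation such a DBN calls for, the following sketch implements one predict/weight/resample cycle of a generic particle filter, in which each particle is one hypothesis of the hidden state. The function names and the one-dimensional usage example are our own illustrative assumptions, not the article's implementation.

```python
import math
import random

def particle_filter_step(particles, control, observation,
                         motion_model, likelihood):
    """One predict/weight/resample cycle of a generic particle filter."""
    # Predict: propagate each particle through the (noisy) motion model.
    predicted = [motion_model(p, control) for p in particles]
    # Weight: score each hypothesis against the new observation.
    weights = [likelihood(observation, p) for p in predicted]
    total = sum(weights)
    if total == 0.0:
        weights = [1.0 / len(weights)] * len(weights)
    else:
        weights = [w / total for w in weights]
    # Resample: draw particles in proportion to their weights.
    return random.choices(predicted, weights=weights, k=len(particles))

# Illustrative 1-D usage: track a state moving +1 per step.
random.seed(0)
particles = [random.gauss(0.0, 1.0) for _ in range(500)]
truth = 0.0
for _ in range(20):
    truth += 1.0
    particles = particle_filter_step(
        particles, 1.0, truth,
        lambda p, u: p + u + random.gauss(0.0, 0.1),       # motion model
        lambda z, p: math.exp(-(z - p) ** 2 / 0.5))        # obs. likelihood
estimate = sum(particles) / len(particles)
```

In FootSLAM the hidden state is far richer (pose plus map hypotheses), but the cycle of predicting from odometry, weighting, and resampling is the same.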
. . .
Pedestrian Steps and Step Measurements
In order to separate the inertial computation driven by the IMU and the zero-velocity updates (ZUPTs) from the overall SLAM estimation, we use two-tier processing in which a low-level extended Kalman filter (EKF) computes the length and direction change of each individual step. This step estimate is then incorporated into the upper-level particle filter as a measurement, via a mathematical model that links the measurements received from the lower-level EKF to the modeled pedestrian and his or her movement, together with a simple representation of the errors that affect the measured step.
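One way such a step measurement might enter the upper-level particle filter can be sketched as follows: each particle perturbs the EKF-reported step length and heading change according to an assumed error model. The pose representation, function name, and noise levels below are illustrative assumptions, not taken from the article.

```python
import math
import random

def apply_step(particle, step_length, heading_change,
               sigma_len=0.05, sigma_heading=0.02):
    """Apply one EKF-reported step to a particle's pose (x, y, heading),
    adding per-particle noise that models residual odometry error.
    Noise magnitudes here are illustrative placeholders."""
    x, y, heading = particle
    # Each particle perturbs the measured step to represent its own
    # hypothesis of the true motion.
    length = step_length + random.gauss(0.0, sigma_len)
    heading = heading + heading_change + random.gauss(0.0, sigma_heading)
    return (x + length * math.cos(heading),
            y + length * math.sin(heading),
            heading)
```

Because every particle receives an independently perturbed copy of the same step, the particle cloud spreads in a way that reflects the odometry error model, and the weighting step can later favor those hypotheses consistent with the evolving map.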
. . .
Map Representation in Practice
. . .
Summary of the RBPF Algorithm
. . .
Experiments and Data Processing
. . .
Results
In order to evaluate the first case, we measured the position accuracy over time during the entire walk. To validate the second application, we show the resulting map qualitatively, created using all the data up to the end of the walk (that is, when we are outdoors again).
In a subset of our evaluations we assumed that we knew a priori the location of the outside building walls to within three meters of the true wall locations. This aids FootSLAM’s convergence somewhat, but it is not a requirement.
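One common way such prior wall information can be exploited in a particle filter is to down-weight particles whose step crosses a known wall segment. The sketch below assumes that technique; the geometry helper, function names, and penalty value are illustrative, not from the article.

```python
def segments_intersect(p1, p2, q1, q2):
    """True if 2-D segments p1-p2 and q1-q2 properly cross
    (orientation tests; endpoint touching is not counted)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def wall_weight(prev_pos, new_pos, walls, penalty=0.01):
    """Multiplicative particle weight for one step: heavily penalized
    if the step crosses any known wall segment (penalty illustrative)."""
    for w1, w2 in walls:
        if segments_intersect(prev_pos, new_pos, w1, w2):
            return penalty
    return 1.0
```

Particles whose hypothesized motion walks through a wall then lose weight at the next resampling, which is one plausible mechanism by which approximate outer-wall knowledge speeds convergence.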
. . .
Discussion and Further Work
All results so far were obtained from just a single track or walk-through and assume no further processing to merge tracks. In a real system, efforts must be undertaken to resolve the scale, rotation, and translation ambiguities and errors that are often inherent in SLAM.
In our approach (which couples with GPS in the outdoor portion and, optionally, a magnetometer), these ambiguities may be less pronounced and may be locally confined to the indoor areas of the building. Future work should address techniques for combining maps from different sources, such as different runs from the same person or runs from different people. We believe that after a few runs the ambiguities will average out.
Furthermore, the partial availability of GNSS indoors — even with large errors at any one time — will, over time, help eliminate the ambiguities even further. In both cases this user-generated approach will gradually improve the quality of the maps and will also adapt to changes in the building layout.
Inspecting the numerical results, we can make the following observations:
. . .
Because our maps are probabilistic, we could also estimate pedestrians’ future paths — similar to the work on driver intent estimation described in the paper by J. Krumm (Additional Resources). Further work should also integrate more sensors, address 3D issues, and pursue collective mapping, in which users collect data during their daily lives and maps are combined and improved.
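To illustrate how a probabilistic map could support path prediction, the sketch below keeps transition counts between discrete cells and predicts the most frequently taken exit from a cell. FootSLAM's actual maps encode transition probabilities over hexagon edges; the class here is a simplified, hypothetical stand-in.

```python
from collections import Counter, defaultdict

class TransitionMap:
    """A probabilistic map stored as transition counts between cells,
    a simplified stand-in for hexagon-edge transition maps."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def record(self, from_cell, to_cell):
        """Count one observed transition between adjacent cells."""
        self.counts[from_cell][to_cell] += 1

    def predict_next(self, cell):
        """Most frequently taken transition out of a cell, or None."""
        if not self.counts[cell]:
            return None
        return self.counts[cell].most_common(1)[0][0]

    def transition_prob(self, from_cell, to_cell):
        """Empirical probability of a particular transition."""
        total = sum(self.counts[from_cell].values())
        return self.counts[from_cell][to_cell] / total if total else 0.0
```

Counts accumulated over many walks would sharpen these empirical probabilities, which is also the sense in which collective mapping could improve path prediction over time.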
Current work is addressing Place-SLAM: the use of manually indicated markers (recognizable places that the user flags on each revisit) to further aid convergence. Finally, it is important to point to new developments in sensor technology that are achieving substantial improvements in performance. (See, for example, the article by E. Foxlin and S. Wan in Additional Resources.)
This new work on sensors is important for FootSLAM. First, more accurate sensors will allow larger areas to be mapped by FootSLAM for any given number of particles in the algorithm; a better sensor therefore permits a lower-complexity implementation. Second, a better sensor makes it more likely that the odometry error will remain small before the first FootSLAM loop closure or backtrack, meaning that real-time FootSLAM without any form of prior map will be even more viable.
For the complete story, including figures, graphs, and images, please download the PDF of the article, above.
Manufacturers
An EVK-5 GPS receiver from u-blox AG, Thalwil, Switzerland; an OS5000 digital compass from OceanServer Technology, Inc., Fall River, Massachusetts, USA; an MS55490 baro-altimeter from MEAS Switzerland SA (formerly Intersema Sensoric SA), Bevaix, Switzerland; an i-CARD2 RFID reader and tags from Identec Solutions AG, Lustenau, Austria; and an MTx-28A53G25 IMU from Xsens Technologies B.V., Enschede, The Netherlands.
Copyright © 2017 Gibbons Media & Research LLC, all rights reserved.