SLAM Dance - Inside GNSS - Global Navigation Satellite Systems Engineering, Policy, and Design

SLAM Dance

Digital cartography and automated mapping techniques based on GNSS positioning have transformed our relationship to the physical world. The convergence of these complementary technologies is supporting the growth of commercial and consumer location-based applications that benefit from the coupling of real-time information with maps that are more current than ever — at least in environments that have access to radio signals from orbiting GNSS satellites.

Buildings, roads, mobile assets, points of interest, and people can be located outdoors based directly on GNSS-derived geographic coordinates or conventional addresses tied to these coordinates. Such advances, however, are largely denied to us in underground or indoor venues where satellite signals do not reach.

Many prospective location-based service (LBS) applications — including safety-critical needs for emergency and security services — would become feasible if the associated mapping and real-time positioning requirements could be met. Finding alternative technologies that can meet these challenges has drawn the attention of many researchers and system developers.

Recent work has shown remarkable advances in the area of pedestrian indoor positioning aided by low-cost microelectromechanical system (MEMS) inertial sensors. At present, however, fully autonomous inertial navigation is still out of reach, due to sensor error–induced drift that causes position errors to grow without bound within a few seconds.

This article introduces a new pedestrian localization technique that builds on the principle of simultaneous localization and mapping (SLAM). Our approach is called FootSLAM because it depends largely on the use of shoe-mounted inertial sensors that measure a pedestrian’s steps while walking.

In contrast to SLAM used in robotics, our approach does not require specific feature-detection sensors, such as cameras or laser scanners. The work extends prior work in pedestrian navigation that uses known building plan layouts to constrain a location-estimation algorithm driven by a stride-estimation process. In our approach, building plans (i.e., maps) can be learned automatically while people walk about in a building, either directly to localize this specific person or in an offline fashion in order to provide maps for other people.

We have combined our system with GPS and have undertaken experiments in which a person enters a building from outside and walks around within this building. The GPS position at the entry to the building provides a starting point for subsequent positioning/mapping without GPS.

Our experiments were undertaken by recording the raw sensor data and ground truth reference information. Offline processing and comparison with the ground-truth reference information allows us to quantitatively evaluate the achieved localization accuracy.

Building on the Past
The work of E. Foxlin cited in the Additional Resources section at the end of this article describes how foot-mounted inertial measurement units (IMUs) can provide zero velocity updates — ZUPTs — during the rest phase of a pedestrian’s stride, which, with the aid of a Kalman filter, solves the problem of non-linear error growth over time. This is because an inertial navigation system (INS) can use the ZUPTs to accurately compute the displacement of the foot during a single step, before errors would start to grow.
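
The ZUPT mechanism can be sketched in a few lines. The following is a minimal illustration, not the implementation from the cited work: a simple stance-phase detector based on angular-rate magnitude, and a Kalman measurement update in which the pseudo-measurement is zero velocity. The threshold and noise values are invented for illustration only.

```python
import numpy as np

def detect_stance(gyro_norms, thresh=0.6):
    """Flag samples where the angular-rate magnitude (rad/s) is low
    enough to assume the foot is at rest on the ground."""
    return gyro_norms < thresh

def zupt_update(v_est, P, r=1e-4):
    """Kalman measurement update with a zero-velocity pseudo-measurement.
    State: 3-vector velocity error v_est with covariance P; the
    'measurement' is v = 0 with noise variance r."""
    H = np.eye(3)                     # we observe the full velocity error
    S = P + r * np.eye(3)             # innovation covariance (H = I)
    K = P @ np.linalg.inv(S)          # Kalman gain
    v_new = v_est + K @ (np.zeros(3) - v_est)
    P_new = (np.eye(3) - K @ H) @ P
    return v_new, P_new
```

During each detected rest phase, the update drives the estimated velocity error toward zero and shrinks its covariance, which is what resets the otherwise unbounded drift between steps.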

. . .

Robotic SLAM
For many years the robotics community has used various sensors, such as laser ranging scanners and cameras, to perform high-precision positioning of robots in buildings. Nearly two decades ago, researchers from this community introduced SLAM as a way of allowing robots to navigate in a priori unknown environments.

. . .

SLAM for Pedestrian Dead-Reckoning
This article builds on both areas of prior work: pedestrian positioning using foot-mounted IMUs and the just-described SLAM approach used in robotics. Our application is human pedestrian positioning based on SLAM — that is, the difficult case where no map is available beforehand.

The main difference from robotic SLAM is that our method uses no visual or similar sensors at all. In fact, the only sensors used are the foot-mounted IMU and, optionally, a magnetometer and GPS receiver. In this article, we show that a pedestrian’s location and the building layout can be jointly estimated by using the pedestrian’s odometry alone, as measured by the foot-mounted IMU.

We have confirmed our approach by using real data obtained from a pedestrian walking in an actual indoor environment. Our experiments involved no simulations, and we will present the results from these in later sections.

. . .

Theoretical Basis of FootSLAM
Human motion is a complex stochastic process, and we need to model it in a sufficiently simple fashion in order to develop our FootSLAM model and the algorithms that build on this model.

A person may walk in a random fashion whilst talking on a mobile phone, or they might be following a more or less directed trajectory towards a certain destination. Such phases of motion are governed by the person’s inner mental state and, consequently, cannot be easily estimated.

. . .

Our Model as a Dynamic Bayesian Network
Our work is based on a theoretically well-grounded Dynamic Bayesian Network (DBN) representation of the pedestrian’s location, her past and present motion, the step measurements computed by the lower-level EKF, and the “map”.

This approach is used in all kinds of sequential filtering problems where noisy observations are used to estimate an evolving sequence of hidden states. Each node in the DBN represents a random variable and carries a time index. Arrows from one state variable to the next denote causation (in our interpretation); so, arrows can never go backwards in time.
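
The generic predict/weight/resample cycle that such a DBN induces can be sketched as a bootstrap particle filter. This is an illustrative sketch of sequential Bayesian filtering in general, not the FootSLAM algorithm itself; the transition and likelihood functions are placeholders supplied by the caller.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_step(particles, weights, transition, likelihood, z):
    """One cycle of sequential Bayesian filtering on a DBN:
    propagate each particle through the state-transition model,
    reweight by the likelihood of the new observation z, resample."""
    particles = transition(particles)             # sample p(x_k | x_{k-1})
    weights = weights * likelihood(z, particles)  # weight by p(z_k | x_k)
    weights = weights / weights.sum()
    # resample to avoid weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Each call advances the hidden state one time index; because arrows in the DBN never point backwards in time, the filter only ever conditions on past and present observations.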

. . .

Pedestrian Steps and Step Measurements
In this section we show how we represent the step transition vector between two successive steps that a person takes, which is also discussed in further detail in the article by B. Krach and P. Robertson (Additional Resources).

In order to separate the process of updating the inertial computation driven by the IMU and the ZUPTs from the overall SLAM estimation, we use a two-tier processing architecture in which a low-level extended Kalman filter computes the length and direction change of individual steps. This step estimate is then incorporated into the upper-level particle filter in the form of a measurement. The measurement model links the output of the lower-level EKF to the modeled pedestrian and his/her movement, together with a simple representation of the errors that affect the measured step.
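
To make this concrete, here is a hedged sketch of how an upper-level particle filter might apply one step measurement (step length plus heading change, as delivered by the lower-level EKF) to a single particle’s pose. The Gaussian error model and its noise parameters are invented for illustration.

```python
import math
import random

def propagate_particle(x, y, heading, step_len, dheading,
                       len_sigma=0.05, head_sigma=0.02):
    """Advance one particle's 2-D pose by a measured step.
    step_len: measured step length (m); dheading: measured heading
    change (rad); the sigmas model the step-measurement errors."""
    L = step_len + random.gauss(0.0, len_sigma)        # noisy length
    heading = heading + dheading + random.gauss(0.0, head_sigma)
    return x + L * math.cos(heading), y + L * math.sin(heading), heading
```

Drawing a fresh noise sample per particle is what lets the particle cloud represent the growing pose uncertainty between map-induced corrections.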

. . .

Map Representation in Practice
As discussed earlier, in our model the map is a probabilistic representation of possible human motion based on the subject’s location in a certain part of a building. It can be interpreted in this way: a person’s next step will be determined only by his or her current location, in the sense that each future step is drawn from a location-dependent probability distribution. This corresponds to a notional pedestrian behavior in which a person looks at a probability distribution posted at each location and “draws” the next step using exactly this governing distribution.
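
A minimal sketch of such a location-dependent step distribution follows, assuming a simple square-cell grid, eight discrete step directions, and a uniform prior over directions; FootSLAM’s actual map representation differs in its details.

```python
import random
from collections import defaultdict

class StepMap:
    """Probabilistic 'map': for each coarse location cell, count which
    step directions have been observed there, then evaluate or sample
    the location-dependent next-step distribution."""
    def __init__(self, cell=1.0):
        self.cell = cell
        self.counts = defaultdict(lambda: defaultdict(int))

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def observe(self, x, y, direction):
        """Record that a step in `direction` was taken from (x, y)."""
        self.counts[self._key(x, y)][direction] += 1

    def prob(self, x, y, direction, prior=1.0, n_dirs=8):
        """Posterior probability of a step direction at (x, y),
        smoothed by a uniform prior count over the n_dirs headings."""
        c = self.counts[self._key(x, y)]
        total = sum(c.values()) + prior * n_dirs
        return (c[direction] + prior) / total

    def sample(self, x, y, n_dirs=8):
        """'Draw' the next step direction posted at this location."""
        dirs = list(range(n_dirs))
        weights = [self.prob(x, y, d, n_dirs=n_dirs) for d in dirs]
        return random.choices(dirs, weights=weights)[0]
```

Unvisited cells fall back to the uniform prior, so the map stays well defined while new areas of the building are being explored for the first time.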

. . .

Summary of the RBPF Algorithm
We can take good advantage of a likelihood particle filter, as described in the article by S. Arulampalam et al. (Additional Resources), because the step measurement is very accurate. Weighting with a “sharp” likelihood function would cause most particles outside the measurement to receive very low weight and effectively be wasted. Thus, we sample using the likelihood function rather than sampling from the state transition model.
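
The sampling-from-the-likelihood idea can be illustrated as follows: each particle’s step is drawn around the sharp step measurement, and the particle is then weighted by a map-based transition prior. This is a sketch with invented noise parameters and a caller-supplied map probability, not the exact RBPF update.

```python
import math
import random

def likelihood_pf_step(particles, meas_step, map_prob,
                       len_sigma=0.05, head_sigma=0.02):
    """Likelihood-PF proposal: particles are (x, y, heading, weight);
    meas_step is (length, heading change) from the lower-level EKF.
    Steps are sampled from the measurement likelihood, and weights
    are multiplied by the map-based transition prior map_prob."""
    new = []
    for (x, y, h, w) in particles:
        L = meas_step[0] + random.gauss(0.0, len_sigma)
        h2 = h + meas_step[1] + random.gauss(0.0, head_sigma)
        x2, y2 = x + L * math.cos(h2), y + L * math.sin(h2)
        w2 = w * map_prob(x, y, x2, y2)  # prior of this transition
        new.append((x2, y2, h2, w2))
    s = sum(p[3] for p in new)
    return [(x, y, h, w / s) for (x, y, h, w) in new]
```

Because every particle now lands inside the high-likelihood region, none are wasted; the map prior alone differentiates their weights, which is exactly why the sharp measurement makes this proposal attractive.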

. . .

Experiments and Data Processing
Our results based on real data are very promising. In several runs lasting up to about 12 minutes each, we collected the data of a sensor-equipped pedestrian walking in office environments as well as in the adjacent area outside.

. . .

Results
In order to evaluate the first case we measured the position accuracy over time, during the entire walk. To validate the second application, we show qualitatively the resulting map, created using all the data up to the end of the walk (that is, when we are outdoors again).

In a subset of our evaluations we assumed that we knew a priori the location of the outside building walls to within three meters of the true wall locations. This helps FootSLAM converge somewhat, but it is not a requirement.

. . .

Discussion and Further Work
The true layout of the building was, of course, not used in the processing and was only manually overlaid on the resulting FootSLAM maps for comparison — applying rotation, scaling, and translation chosen to match each FootSLAM map. The results of the outdoor-indoor-outdoor and indoors-only trials showed a remarkable ability of the RBPF to estimate the reachable areas. Errors in the locations of the doors were usually within about one meter and never more than about two to three meters.
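
The rotation, scaling, and translation for such an overlay can also be computed in closed form when corresponding points (for example, door locations) are available in both the FootSLAM map and the true layout. The following is a sketch using a least-squares 2-D similarity alignment (Umeyama’s method), offered as an illustration rather than the procedure used in the trials.

```python
import numpy as np

def similarity_align(src, dst):
    """Least-squares 2-D similarity transform (uniform scale s,
    rotation R, translation t) such that dst ~ s * R @ src_i + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d          # centered point sets
    U, S, Vt = np.linalg.svd(B.T @ A)      # SVD of the cross-covariance
    d = np.ones(2)
    if np.linalg.det(U @ Vt) < 0:          # forbid reflections
        d[-1] = -1.0
    R = U @ np.diag(d) @ Vt
    scale = (S * d).sum() / (A ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t
```

Solving for the transform this way removes the subjectivity of a manual overlay whenever a handful of reliable point correspondences exists.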

All results so far were obtained from just a single track or walk-through and assume no further processing to merge tracks. In a real system, efforts must be undertaken to resolve the scale, rotation, and translation ambiguities and errors that are often inherent in SLAM.

In our approach (where we couple with GPS in the outdoor portion and optionally a magnetometer), these ambiguities may not be so pronounced and may be locally confined to the building indoor areas. Future work should address techniques that combine maps from different sources, such as different runs from the same person or runs from different people. We believe that after a few runs the ambiguities will be averaged out.

Furthermore, the partial availability of GNSS indoors — even with large errors at any one time — will over time help to eliminate the ambiguities even further. In both cases the user-generated approach will over time improve the quality of the maps and will also adjust to changes in the building layout.

Inspecting the numerical results, we can make the following observations:

  • Observing the particle cloud during processing and also the evolution of the position error, it becomes evident that the estimator diverges at first as the area is being explored, but then begins to converge (at loop closure) closer to the true location and remains reasonably stable. The cloud naturally spreads as new areas of the building are being explored for the first time, only to converge again as the pedestrian revisits familiar ground.
  • The numerical results indicate that the use of rough knowledge of the outer building walls (building perimeter) helps to improve the error slightly.
  • The use of perfect building plan information — not surprisingly — gives the best performance. This is because the location of the walls is known with submeter accuracy. The result is that indoor positioning accuracy is usually better than outdoors.
  • When FootSLAM is used, the accuracy cannot be better than that of the anchor position obtained using GPS before entering the building. This error in our experiments was typically around three to seven meters; so, this is a baseline error onto which the FootSLAM relative errors are essentially added.
  • The extended Kalman filter that does not use FootSLAM diverged after some time, especially in the second data set. (Divergence is a random process that depends on the random occurrence of drifts and angular displacements in the lower-level stride estimation, and is a function of the IMU errors.)

Because our maps are probabilistic, we could also estimate pedestrians’ future paths — similar to work for driver intent estimation described in the paper by J. Krumm (Additional Resources). Further work should also integrate more sensors, address 3D issues, as well as collective mapping in which users collect data during their daily lives and maps are combined and improved.

Current work is addressing Place-SLAM: the use of manually indicated markers, that is, recognizable places that the user flags each time she returns to them, to further aid convergence. Finally, it is important to point to new developments in sensor technology that are achieving substantial improvements in performance. (See, for example, the article by E. Foxlin and S. Wan in Additional Resources.)

This new work on sensors is important for FootSLAM. First, more accurate sensors will allow larger areas to be mapped by FootSLAM for any given number of particles in the algorithm; so, a better sensor will allow a lower-complexity implementation. Second, a better sensor will make it more likely that the odometry error will be small before the first FootSLAM loop closure or backtrack, meaning that real-time FootSLAM without any form of prior map will be even more viable.


This research has received funding from the European Community’s FP7 Program [FP7/2007-2013] under grant agreement no. 215098 of the “Persist” Collaborative Project. This article is based in part on a paper presented at the ION GNSS 2009 conference in Savannah, Georgia, USA.

Additional Resources
[1] Angermann, M., and A. Friese, M. Khider, B. Krach, K. Krack, and P. Robertson, “A Reference Measurement Data Set for Multisensor Pedestrian Navigation with Accurate Ground Truth,” in Proceedings of European Navigation Conference ENC-GNSS 2009, Naples, Italy, 2009
[2] Arulampalam, S., and S. Maskell, N. Gordon, and T. Clapp, “A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking,” IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174–188, February 2002
[3] Beauregard, S., Widyawan, and M. Klepal, “Indoor PDR Performance Enhancement Using Minimal Map Information and Particle Filters,” in Proceedings of the IEEE/ION PLANS 2008, Monterey, California USA, May 2008
[4] “FootSLAM videos and reference data sets download,” <>
[5] Foxlin, E., “Pedestrian Tracking with Shoe-Mounted Inertial Sensors,” IEEE Computer Graphics and Applications, vol. 25, no. 6, pp. 38–46, November 2005
[6] Foxlin, E., and S. Wan, “Improved Pedestrian Navigation Based on Drift-Reduced MEMS IMU Chip,” 2010 ION International Technical Meeting, San Diego, California USA, January 2010.
[7] Khider, M., and S. Kaiser, P. Robertson, and M. Angermann, “A Novel Movement Model for Pedestrians Suitable for Personal Navigation,” in ION National Technical Meeting 2008, San Diego, California, USA, 2008
[8] Krach B., and P. Robertson, “Cascaded Estimation Architecture for Integration of Foot-Mounted Inertial Sensors,” in Proceedings of the IEEE/ION PLANS 2008, Monterey, California USA, May 2008
[9] Krumm, J., “A Markov Model for Driver Turn Prediction,” in SAE 2008 World Congress, Detroit, Michigan USA, Springer-Verlag New York, Inc., 2008
[10] Montemerlo, M., and S. Thrun, D. Koller, and B. Wegbreit, “FastSLAM: A factored solution to the simultaneous localization and mapping problem,” in Proc. AAAI National Conference on Artificial Intelligence, Edmonton, Canada, 2002
[11] Robertson, P., and M. Angermann, and B. Krach, “Simultaneous Localization and Mapping for Pedestrians Using Only Foot-Mounted Inertial Sensors,” in Proceedings of UbiComp 2009, Orlando, Florida, USA
[12] Smith, R., and M. Self, and P. Cheeseman, “Estimating Uncertain Spatial Relationships in Robotics,” in Autonomous Robot Vehicles, I. J. Cox and G. T. Wilfong, Eds., Springer Verlag, New York, 1990, pp. 167– 193
[13] Woodman, O., and R. Harle, “Pedestrian Localisation for Indoor Environments,” in Proceedings of the UbiComp 2008, Seoul, South Korea, September 2008.