What’s Next for Practical Ubiquitous Navigation?

Navigation — if you are reading this magazine, you almost certainly have some level of interest in navigation technology, which has seen an incredible explosion in use within many different fields.

Due in large part to GPS, many of us have become addicted to navigation, in the sense that we are used to having it available, and we become unsettled — or even grumpy — when it is not. In the past, when I traveled to a new city I used to rely exclusively on maps to navigate. But ever since I purchased a GPS receiver for this purpose, I have come to depend on it to the point that I almost feel as though I could not find my way around if I happen to forget to bring it along.

This kind of experience has played out in many walks of life, from military operations to location-aware social networking. Just as the lights come on when we flip the light switch, we expect to be able to know our position at all times.

Because GNSS does not and cannot work in every circumstance, however, use of non-GNSS sources for navigation has increased significantly. Our addiction to navigation, as well as advances in technology that enable us to do amazing things on very small computational platforms such as smartphones, is driving efforts to develop a wide variety of navigation technologies in addition to GNSS. In fact, this year the world’s oldest and best-known international navigation conference has added a plus sign to its name — ION GNSS+ — to reflect the significant role of non-GNSS technologies.

Figure 1 illustrates the wide variety of current (and past) methods used for navigation, many of which are current areas of research in the Advanced Navigation Technology (ANT) Center at the Air Force Institute of Technology (AFIT) as well as many other organizations. The navigation research community has essentially taken a “shotgun” approach of attempting to get each of these to work for various applications.

Although specific features distinguish each approach from the others, let’s step back a bit and consider the problem in a more generalized sense. Doing so enables us to think about the problem differently and gain valuable insights not possible through a scattershot approach.

I would like to propose an idea that may seem somewhat surprising: Every navigation system works exactly the same way. In the discussion that follows, I will seek to show that every existing navigation system, including all of those listed in Figure 1, essentially performs exactly the same set of operations. When we understand this generalized navigation framework, then we can see where improvements need to be made in order to meet tomorrow’s — or even today’s — navigation challenges.

Many of these prospective improvements are well recognized, but one area of potential gain is usually not considered — the so-called “self-building world model.” I will explain this phrase a little later, but first we must describe our generalized navigation framework.

Every Navigation System Works Exactly the Same Way
All navigation systems follow the process described in Figure 2. Each depends on a sensor that exists in the real world. This sensor detects various physical phenomena and converts what it detects into some form of raw data (voltages, binary data, other signals, and so forth).

Sometimes this raw data will be converted into more useful information almost immediately. For example, a video camera collects images at a high rate (data), but we might also process the video stream to determine that a particular object is now visible to the camera (information).

Next, let’s consider what is called the world model in Figure 2. This represents knowledge about the real world that we need to make use of the sensor data. Included in this model are such things as the locations of navigation beacons, signal characteristics, or a gravity field model. (I will give more examples of world models later).

Another important aspect of the problem is the navigation state, represented in the bottom right portion of Figure 2. In terms of our generalized diagram, this refers to an estimated or calculated navigation solution (which may or may not be in the form of a formal state vector). The navigation state is normally the desired output from a navigation system and includes quantities like position, velocity, attitude, or time.

The prediction algorithm relates the navigation state with the world model. Generally, this algorithm is able to use the world model to predict the measurements for any particular navigation state. Both the world model and the navigation state must be sufficiently detailed in order to generate a valid predicted measurement. For example, if we are considering a VOR/DME beacon for aviation, a world model that knows only the color of the VOR/DME stations would clearly be insufficient.

The final (and very critical) block in Figure 2 is the comparison between the predicted measurements and the actual measurements. The most common use of this comparison is to update the state estimate in such a way that the predicted and actual measurements are in agreement (the arrow extending downward from the comparison block).

A classic example of this is an extended Kalman filter update. In fact, in the Kalman state update equation, we can see elements of the world model/prediction algorithm, the state estimate, the comparison, and the update:

$$
\hat{\mathbf{x}}_k^{+} = \hat{\mathbf{x}}_k^{-} + \mathbf{K}_k \left[ \mathbf{z}_k - h\!\left( \hat{\mathbf{x}}_k^{-} \right) \right]
$$

where $\hat{\mathbf{x}}_k^{-}$ and $\hat{\mathbf{x}}_k^{+}$ are the state estimates before and after the update, $h(\cdot)$ is the prediction algorithm operating on the world model (producing the predicted measurement), $\mathbf{z}_k$ is the actual measurement, the bracketed difference is the comparison, and the Kalman gain $\mathbf{K}_k$ maps that comparison into the state update.

Although this well-known extended Kalman filter equation clearly shows many aspects of our generalized navigation diagram, it is by no means the only algorithm that uses this approach. Virtually every estimation algorithm that is used for navigation has these kinds of elements present in one form or another.
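To make the mapping concrete, here is a minimal sketch of a single EKF measurement update in Python (using numpy; the function and variable names are mine, chosen to line up with the blocks of the generalized framework, not taken from any particular system):

```python
import numpy as np

def ekf_measurement_update(x_est, P, z, h, H, R):
    """One extended Kalman filter measurement update, annotated with
    the blocks of the generalized navigation framework.

    x_est : navigation state estimate (n,)
    P     : state covariance (n, n)
    z     : actual sensor measurement (m,)
    h     : prediction algorithm: maps a state to a predicted
            measurement using the world model, h(x) -> (m,)
    H     : Jacobian of h evaluated at x_est (m, n)
    R     : measurement noise covariance (m, m)
    """
    z_pred = h(x_est)                      # world model + prediction algorithm
    innovation = z - z_pred                # comparison: actual vs. predicted
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_est + K @ innovation         # state update
    P_new = (np.eye(len(x_est)) - K @ H) @ P
    return x_new, P_new
```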

One more part of the diagram remains that we have not yet discussed — the arrow labeled World Model Updates pointing upward from the comparison block. On occasion, this comparison between predicted and sensor measurements — in addition to updating the state — can also be used to update the world model.

This world model update is a critical part of a “self-building” world model, which we will come back to later. For now, though, let’s look at two different (and contrasting) examples to demonstrate the applicability of this model to a wide variety of navigation problems.

Example 1: GPS
Figure 3 shows this basic navigation framework applied to GPS. In the real world, GPS satellites emit RF signals that are picked up by a receiver and turned into pseudorange measurements. The world model for GPS consists mainly of satellite position information (in the form of satellite ephemeris) and satellite clock information.

The GPS world model could also incorporate various forms of error modeling, such as tropospheric and ionospheric delay models, relativistic effects, multipath models, differential corrections, and so forth. These more advanced world model components are required for users who desire higher levels of accuracy.

The GPS navigation state consists of an estimate of the user position and clock error, which are combined with the world model to generate predicted pseudoranges in a relatively straightforward manner. The GPS navigation algorithm will then compare the predicted and actual pseudoranges in order to correct the navigation state, often using an extended Kalman filter or an iterative least-squares approach.
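As a concrete (and deliberately simplified) sketch of this loop, the following Python fragment predicts pseudoranges from an assumed user state and applies one iterative least-squares correction. It uses numpy; the function names are mine, and a real receiver would also apply the atmospheric and relativistic corrections mentioned above:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def predict_pseudoranges(x, sat_pos, sat_clk):
    """Predicted pseudoranges from the GPS world model.
    x       : [x, y, z, b] -- user ECEF position (m) and clock bias (m)
    sat_pos : (N, 3) satellite ECEF positions from the ephemeris
    sat_clk : (N,) satellite clock errors (s) from the broadcast model
    """
    geo = np.linalg.norm(sat_pos - x[:3], axis=1)   # geometric range
    return geo + x[3] - C * sat_clk                 # + user bias - SV clock

def least_squares_step(x, z, sat_pos, sat_clk):
    """One Gauss-Newton iteration: compare predicted and measured
    pseudoranges, then correct the navigation state."""
    z_pred = predict_pseudoranges(x, sat_pos, sat_clk)
    rng = np.linalg.norm(sat_pos - x[:3], axis=1)
    # Geometry matrix: unit vectors from satellite toward user, plus
    # a column of ones for the user clock bias
    H = np.hstack([(x[:3] - sat_pos) / rng[:, None],
                   np.ones((len(sat_pos), 1))])
    dx, *_ = np.linalg.lstsq(H, z - z_pred, rcond=None)
    return x + dx
```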

Note that a typical GPS user does NOT update the world model. World model updates (i.e., ephemeris and satellite clock estimation) are performed by the GPS system itself — specifically, the 2nd Space Operations Squadron at the GPS Master Control Station.

One of the reasons that GPS works so well for so many users is that we have created a real-world entity (satellites) that can be easily modeled. The satellite position and clock values can be calculated using a set of relatively simple equations that is freely available to any user by accessing the GPS Interface Specification (in the case of satellite position and clock, Table 20-IV and Equation [2] in IS-GPS-200F). This “world model” matches the real world very well. So, because the GPS Operational Control Segment takes on the task of updating satellite ephemeris and clock parameters, the user does not have to be involved in this process, which greatly simplifies the navigation task.
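As one small example of how simple this world model is, the broadcast satellite clock error is just a quadratic polynomial in time. A minimal sketch (with the relativistic term passed in rather than computed, and GPS week rollover ignored):

```python
def sv_clock_correction(t, t_oc, a_f0, a_f1, a_f2, dt_rel=0.0):
    """Broadcast GPS satellite clock correction in seconds: the
    quadratic polynomial form from IS-GPS-200, given the broadcast
    coefficients a_f0, a_f1, a_f2 and reference epoch t_oc."""
    dt = t - t_oc              # time from clock data reference epoch (s)
    return a_f0 + a_f1 * dt + a_f2 * dt ** 2 + dt_rel
```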

GPS is an example of a “create-the-world” approach to navigation, in which we build and deploy components that generate everything in the real world needed for navigation. This is done in a manner that is optimized for the user. Other examples of a “create-the-world” approach to navigation would include any form of navigation beacons (such as VOR/DME stations for aviation) or radar.

While “create-the-world” approaches to navigation are highly appealing from a user point of view, they do have drawbacks. First of all, resources must be available to deploy and maintain the infrastructure that is needed to support these kinds of approaches. Development, deployment, and maintenance of the GPS system has cost billions of dollars, and while this has generally been considered a good investment — given the huge number of users, the resulting economic growth, and the general benefit to society as a whole — there is a limit to how many systems of similar scale can be deployed by any one nation or group of nations.

Another drawback of “create-the-world” approaches is that they only work in limited environments and situations. GPS is considered a worldwide system, but that does not mean that it works everywhere a user wants to navigate. Having a GPS receiver will not help you much if you want to navigate underwater, underground, in many indoor environments, or in places with interfering signals.

Our growing addiction to navigation has created demand for ubiquitous navigation — that is, the capability for navigating in any environment at any time. This, in turn, has prompted a great deal of research into approaches that involve natural signals (not man-made) or man-made signals that already exist but for purposes other than navigation (called signals of opportunity). We will now consider a navigation system that falls strongly in the natural signal camp — human visual navigation.

Example 2: Human Visual Navigation
Imagine that you are blindfolded and dropped at an unknown location in the city in which you live. Your goal is to figure out where you are so that you can make your way back home. When the blindfold is removed, the first thing you’ll do is look around, searching for something familiar.

The human brain does an amazing job of taking in visual information (sensor data), and comparing that against an internal database of remembered positions/objects (a world model). For the sake of argument, let’s assume that you do not recognize anything at first when the blindfold is removed. You may see cars, stores, houses, and other visual features, but none of them are familiar. Another way of saying this, in our generalized navigation framework, is that the comparison between the sensor data and the world model does not return any matches.

Since you don’t recognize anything, you start walking, and after several minutes you come to an intersection that “looks familiar,” though you’re still not 100 percent sure of where you are. What does it mean to see things that “look familiar” without yet knowing your location? It means that you now have sensor data that is starting to match your mental database, but not in a way that completely nails down your exact location. This highlights the fact that, as humans, our “world model” (i.e., our memory of what objects are where in the world) is not perfect, and we have varying levels of detail in our personal world models.

Back to the finding your way home mission: As you consider this “looks familiar” feeling, you might be saying to yourself, “If I am where I think I am, then around the next corner I will see a Walgreens drug store.”

What are you doing at this point? In the generalized navigation framework, you are making many different guesses of your navigation state (location), using your world model (memory) to predict what you will see, and comparing that prediction with your sensor data (what you actually see).

You continue walking until you reach a spot where you finally figure out where you are. This means that you now have identified a variety of features that all corroborate a guess of your location.

As humans, we have a very strong ability to know when we have this level of surety. In other words, we know what we know. At some point, we become “sure” of our location. This is the point at which the evidence coming in from our eyes matches our world model so well, and is sufficiently unique, that we know beyond a shadow of a doubt exactly where we are.

Figure 4 shows the generalized framework for this human visual navigation case. Note that, unlike the GPS case, with human visual navigation there is an update to the world model (the arrow pointing upward from the comparison block). In fact, this is a very powerful aspect of human navigation, as evidenced by the fact that if you were once again blindfolded and dropped off in the same spot as previously, you would almost immediately know where you are once the blindfold is removed.

What’s different between the first time you were here and this time? If asked, you might say that you “remembered” what you saw last time — in other words, the first time you were here, you were continuously updating your world model with your sensor data, even though you didn’t yet know where you were.

An important point to make is that the world model for human visual navigation is a “self-building” world model, in the sense that the world model is constantly being updated using the same sensor measurements that are used to navigate. Our vision is used both to learn our environment and to navigate. We cannot download a map of the environment into our brains (yet!) — rather, we must build up our world model ourselves.

If we consider the case of human visual navigation, at least three key skills are required, and all three of these have application to automated (non-human) forms of navigation using natural signals like vision. First, the sensor must “observe” the right kind of information, answering the question “What is interesting about this scene?” While we may see a lawn and can pick out individual blades of grass, we inherently recognize that any individual blade of grass is useless to remember; so, we don’t. Humans have an uncanny ability to pick out the objects that are truly salient.

Second (and related to the first point), we must store information efficiently, in a way that captures the most “interesting” characteristics. We remember only that which is salient, and we remember it in a way that it can be easily recalled. Third, especially when you consider the massive amount of visual memory stored in our brains, humans have a very powerful “comparison engine” that relates what we see with our visual memory.
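These three skills map naturally onto software. The following toy sketch (in Python, with entirely hypothetical names and a deliberately naive linear search) illustrates the pattern: store only salient feature descriptors along with the locations where they were seen, and match new observations against that memory. Real systems use engineered or learned descriptors and fast approximate nearest-neighbor indices:

```python
import numpy as np

class VisualMemory:
    """Toy world model for vision-based navigation (illustrative only)."""

    def __init__(self, match_threshold=0.5):
        self.descriptors = []   # remembered salient feature descriptors
        self.locations = []     # where each feature was observed
        self.match_threshold = match_threshold

    def remember(self, descriptor, location):
        """World model update: store a salient feature in memory."""
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        self.locations.append(location)

    def recall(self, descriptor):
        """Comparison engine: return locations whose remembered
        descriptors are close to the observed one."""
        d = np.asarray(descriptor, dtype=float)
        matches = [loc for desc, loc in zip(self.descriptors, self.locations)
                   if np.linalg.norm(desc - d) < self.match_threshold]
        return matches   # an empty list means "nothing looks familiar"
```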

If we compare human visual navigation with something like GPS, we can see a number of benefits of this kind of natural signal approach. First of all, visual navigation works in a wide variety of situations, including many in which GPS does not work (such as deep indoors). Also, human vision is not RF-based; so, all of the challenges of RF-based navigation, such as multipath and interference, are avoided (unless we consider being confused by mirrors as multipath!). Moreover, human visual navigation uses very small sensors — our eyes.

However, human navigation has significant drawbacks, most notably its dependence on our individual familiarity with a locale and our personal databases (memory), which means we can get lost. It also depends on the human brain (which is very hard to emulate).

Much recent research has sought to emulate this human ability by developing automated ways to navigate using vision. Although in many ways not as efficient as a human brain, these approaches are able to determine absolute or relative position using cameras and a variety of algorithms.

One distinct advantage that automated, computer-based systems have over biological systems is their ability to share world model information across platforms. It is much easier for computers to share a growing database of information than for biological systems. (Even with our strong ability to communicate, it would be impossible for one person to share all of the contents of their visual memory with another person.)

Better Navigation — What Are Our Options?
With this generalized navigation framework in mind, we can now entertain the question, “How can we improve our ability to navigate?”

There are four primary options available to us for improved navigation:

1. Make a better sensor. Sometimes, the quality of the measurement coming out of the sensor is a limiting factor, and in such cases improving sensor quality will improve navigation ability. Additionally, developing a sensor that can sense a new type of signal can also yield new navigation capability.

2. Create a new navigation signal. Deploying a new “create-the-world” type of navigation system can fill capability gaps in existing systems. As mentioned previously, the world model for such systems is often simple, although complications can arise if the generated signal is significantly altered by the real world.

3. Improve navigation algorithms. Over the years, algorithmic improvements have yielded improved navigation capability. A good example of this is the development of efficient carrier-phase ambiguity algorithms, which have enabled real-time, near centimeter-level navigation using measurements that have been available for many years.

4. Improve our “world model” in order to use natural or existing signals. Natural signals (vision, magnetic field, gravity, odor, and so forth) often require a complicated world model if they are to be used for navigation. This may involve large databases, as well as the ability to store and retrieve relevant data in an efficient manner. Additionally, when working in the realm of natural signals, the need for self-building world models comes into play, such that the navigation sensor is used for both navigation and world model (database) development.

The first three options described above are well known and commonly pursued. The vast majority of technical papers at navigation-related conferences have fallen into these three categories. Continuing to pursue these three options will likely lead to additional improvements in navigation, and I believe they should continue to be pursued wholeheartedly.

However, the fourth category — improving our ability to develop and use advanced world models — has not received much attention but is an area that must be developed in order to take advantage of a number of promising natural signals for navigation. In my opinion, world model development, and in particular the development of self-building world models, has the potential for significant advances over the coming years and is the key that will open up a wide variety of navigation approaches not currently being exploited.

One such approach that we have been working on at the ANT Center is navigation using variations in the Earth’s magnetic field. The remainder of this article will summarize some of this work and describe how self-building world models really are essential for effective, widespread use of such techniques.

Magnetic Field Navigation
While using Earth’s magnetic field for navigation is certainly not a new concept, the use of specific magnetic field information mapped to geographic position is growing in popularity. The article by J. Wilson et al. (listed in the Additional Resources section near the end of this article) proposes using U.S. Geological Survey maps of magnetic field variations over a large area to navigate in an aircraft.

The algorithm combines the magnetic field information with the aircraft’s dead-reckoning navigation system to determine the aircraft’s position. Flight test results comparing the dead-reckoning solution with the magnetically aided navigation solution demonstrate a clear improvement, although the observed position accuracy was on the order of 2.5 kilometers.

In two papers also cited in Additional Resources, W. Storms applied a terrain navigation algorithm to the indoor magnetic field environment and achieved sub-meter positioning accuracy. T. Judd and T. Vu tackled an indoor pedestrian navigation problem, noting interesting correlation in three-axis magnetometer measurements in the indoor environment. While attempting to correct heading estimation indoors, they observed that the magnetic field exhibits distinct “fingerprints” at unique locations along a route. These fingerprints allow previously collected magnetic field data to be correlated with measurements taken during a new traversal to determine whether a specific location has been reached.

The approach described in this article is based in large part on the Ph.D. research of Capt. Jeremiah Shockley at the ANT Center at AFIT, which focused on ground vehicle navigation using magnetic field sensors exclusively.

Concept of Operation
Initially, a three-axis magnetometer was mounted in a convenient location in a vehicle and aligned with the body frame, taking care to avoid large emitters of electromagnetic interference (EMI) on board. Next, a calibration was performed in order to mitigate the magnetic field distortion caused by the vehicle itself.
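The article does not spell out the calibration procedure, but a common first step is removing the hard-iron (constant) bias contributed by the vehicle. A minimal sketch, assuming calibration data collected while slowly rotating the vehicle (a full calibration would also fit a soft-iron, ellipsoid-shaped correction):

```python
import numpy as np

def hard_iron_offset(samples):
    """Estimate a hard-iron (constant) magnetic bias from three-axis
    magnetometer samples collected over a rotation maneuver, e.g.
    driving in a circle. samples: (N, 3) array. A simple estimate is
    the midpoint of the per-axis extremes."""
    samples = np.asarray(samples, dtype=float)
    return 0.5 * (samples.max(axis=0) + samples.min(axis=0))

# Usage: b = hard_iron_offset(cal_data); corrected = raw - b
```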

Two main stages follow this initial setup: mapping and navigation. In the mapping stage, three-axis magnetic field data is collected from the magnetometer at times when the vehicle position is known (such as when GPS is available). This data is stored along with the corresponding positions, creating a “world model,” or map, of the three-dimensional magnetic field over the roads traversed during this stage.

In the navigation stage, the vehicle drives over roads that have previously been mapped with the goal of determining position using only the measurements from the magnetometer. This is accomplished by comparing the magnetometer measurements with the previously generated map using a Gaussian likelihood method. This method assigns a higher likelihood value to places on the map that closely match the collected measurements.
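A minimal sketch of this likelihood evaluation in Python, assuming independent, isotropic Gaussian measurement noise (the map is represented here as a simple array of field vectors; the actual data structures used in the research are not described here):

```python
import numpy as np

def map_likelihoods(b_meas, map_fields, sigma):
    """Gaussian likelihood of one three-axis magnetometer measurement
    against every point in the magnetic field map.

    b_meas     : (3,) calibrated magnetometer measurement
    map_fields : (M, 3) stored field vectors, one per mapped position
    sigma      : assumed measurement noise standard deviation
    """
    r = np.linalg.norm(map_fields - b_meas, axis=1)   # residual per map point
    like = np.exp(-0.5 * (r / sigma) ** 2)            # Gaussian kernel
    return like / like.sum()                          # normalize over the map
```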

Figure 5 depicts a sample set of normalized likelihoods at a single epoch, illustrating the relationship between a single measurement and the entire magnetic field map. The likelihood is near zero over a large portion of the map but exhibits several peaks, which represent possible locations given the magnetometer measurement.

The peaks are formed as the magnetometer measurement approaches a potential match in the magnetic field map. In this case, there is not a single location with a high likelihood. Nonetheless, this example reveals only a few locations that are possible and many that are not. Occasionally, features present in the data result in a single large peak.

Field Test
We conducted a field test to demonstrate the feasibility of this kind of magnetic field navigation approach. Three different types of vehicles were used — a 2004 Chevrolet Avalanche truck, a 2003 Pontiac Aztek sport utility vehicle (SUV), and a 2005 Nissan Altima car — in order to demonstrate portability across vehicle platforms.

A three-axis, smart digital magnetometer able to detect the strength and direction of an incident magnetic field was mounted in each vehicle on a level surface and aligned with the body frame as much as possible. A GPS receiver collected position information for mapping and was also used as a truth reference.

Figure 6 displays the three different road environments used in this test. The left map in Figure 6 shows the initial route, a fairly benign environment around AFIT. The middle map covers a suburban neighborhood and allows investigation of the ability to discern position on parallel roads in a similar environment. The right map covers a large area and shows the relative locations of the suburban neighborhood and AFIT map areas. The colors are used only to highlight the routes and have no other meaning.

The left frame in Figure 7 shows the GPS-based vehicle track (thick black line) and a “MagNavigate” particle filter solution (green dots that appear as a line). Each green dot represents the weighted particle mean. While the system appears to track quite well, Figure 7 does not convey the “along-track” error in the system. The corresponding position error plot at the right of Figure 7 displays the east, north, and horizontal position errors versus time for the same AFIT test.

These results show that the system can drift at times, but frequent corrections bring the error down to very low values. Similar results are seen in Figures 8 and 9, which show the accuracy for the neighborhood and large routes, respectively, shown in Figure 6.
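The “MagNavigate” solution shown in these figures comes from a particle filter. A minimal sketch of one magnetometer update of such a filter appears below; the random-walk motion model, the nearest-map-point lookup, and the resampling threshold are my simplifying assumptions, not a description of the actual implementation:

```python
import numpy as np

def particle_filter_step(particles, weights, b_meas, map_pos, map_fields,
                         sigma, step_sigma, rng):
    """One magnetometer update of a 1-D along-road particle filter.

    particles  : (P,) along-road particle positions (m)
    weights    : (P,) normalized particle weights
    map_pos    : (M,) sorted along-road map positions (m)
    map_fields : (M, 3) magnetic field vectors at those positions
    """
    # Propagate with a random-walk motion model (a real system would
    # use odometry or a vehicle dynamics model here)
    particles = particles + rng.normal(0.0, step_sigma, size=particles.shape)

    # Weight each particle by the Gaussian likelihood of the measurement
    # at its nearest map point
    idx = np.searchsorted(map_pos, particles).clip(0, len(map_pos) - 1)
    r = np.linalg.norm(map_fields[idx] - b_meas, axis=1)
    weights = weights * np.exp(-0.5 * (r / sigma) ** 2)
    weights = weights / weights.sum()

    # Resample when the effective sample size drops too low
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        pick = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[pick]
        weights = np.full(len(particles), 1.0 / len(particles))

    # The weighted particle mean is the reported position solution
    return particles, weights, np.sum(weights * particles)
```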

One important observation is that the majority of the error occurs when the magnetic field measurements are not sufficiently distinct from adjacent measurements, which results in propagation errors. For the results shown here, only magnetometer measurements were used. In general, the time periods between good position fixes are on the order of tens of seconds. If these magnetometer updates were combined with a dead-reckoning capability (such as an odometer), the results would improve significantly and would likely stay within several meters of the true position during the majority of the test.

These results demonstrate the potential for very precise ground navigation using magnetometers. However, the test scenario was somewhat unrealistic, because it involved intentionally driving a vehicle over a repeated path — an approach that is fine for a research demonstration, but not for large scale implementation.

To be able to implement this approach on a national scale would require development of a large-scale magnetic field map (i.e., a world model). This could be done in a couple of ways: 1) hire a company to drive every road in the country to collect magnetic field information, or 2) develop a self-building world model approach, in which a collaborative magnetic field map is continuously developed and maintained using sensor data collected from cars during their normal course of operation. Option 1 is likely cost-prohibitive, which leads us toward the self-building world model approach that employs the same navigation sensors used when GPS is not available.
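Structurally, such a collaborative database can be quite simple. The sketch below is a hypothetical design (not the actual ANT Center database): it merges uploads from many vehicles by averaging the field observed within discretized road cells:

```python
import numpy as np
from collections import defaultdict

class CrowdMagneticMap:
    """Sketch of a self-building world model: a central database that
    merges GPS-tagged magnetometer uploads from many vehicles."""

    def __init__(self, cell_size=5.0):
        self.cell_size = cell_size                    # grid cell size, m
        self.sums = defaultdict(lambda: np.zeros(3))  # field sum per cell
        self.counts = defaultdict(int)                # samples per cell

    def upload(self, positions, fields):
        """Merge one drive's (position, field) samples into the map."""
        for p, b in zip(positions, fields):
            key = (int(p[0] // self.cell_size), int(p[1] // self.cell_size))
            self.sums[key] += b
            self.counts[key] += 1

    def field_at(self, key):
        """Current map estimate for a cell: the mean of all uploads."""
        return self.sums[key] / self.counts[key]
```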

In order to demonstrate this concept, we are developing an iPhone app that is able to collect GPS and magnetometer data on an iPhone 4S and upload it to a central database. We will then install this application on the phones of several researchers within the ANT Center in order to demonstrate the concept of a collaborative, self-building magnetic field database over the roads in Dayton.

Although such an approach clearly will not cover every road in the Dayton area, if expanded to hundreds or thousands of users, it would likely cover almost every area of interest in a relatively short time. We believe that such an approach is the only realistic way to develop a large-scale magnetic field world model. However, once such a world model is developed, near GPS-quality navigation would be possible without the use of GPS.

Conclusion
The generalized navigation framework presented in this article provides a top-level picture of how all navigation systems work and points toward several approaches for improving our ability to navigate. The area that has been most neglected, but which is increasingly necessary for the practical use of natural signals for navigation, is the development of self-building world models, in which the same navigation sensor is used both for navigation and for world model development. A good example of this is ground-based magnetic field navigation, which shows potential for highly accurate navigation but requires the development of a collaborative, self-building magnetic field model for practical implementation.

Acknowledgment
Many of the concepts related to the generalized navigation framework were jointly developed during a series of engaging “Friday afternoon” discussions with my former ANT Center Deputy Director, Dr. Mike Veth.

Disclaimer
The views expressed in this paper are those of the author and do not reflect the official policy or position of the U.S. Air Force, Department of Defense, or the United States government.

Additional Resources
[1] Judd, T., and T. Vu, “Use of a New Pedometric Dead Reckoning Module in GPS Denied Environments,” Proceedings of IEEE Position, Location and Navigation Symposium (PLANS), Monterey, California, May 2008
[2] Shockley, J., Ground Vehicle Navigation Using Magnetic Field Variation, Ph.D. thesis, Air Force Institute of Technology, September 2012
[3] Storms, W., Magnetic Field Aided Indoor Navigation, Master’s thesis, Air Force Institute of Technology, March 2009
[4] Storms, W., and J. Raquet, “Magnetic Field Aided Vehicle Tracking,” Proceedings of ION GNSS-2009, Savannah, Georgia, September 2009
[5] Wilson, J., R. Kline-Schoder, M. Kenton, P. Sorensen, and O. Clavier, “Passive Navigation Using Local Magnetic Field Variations,” Institute of Navigation International Technical Meeting, Monterey, California, January 2006
