Are there special considerations for dealing with raw GNSS data?

Q: Are there special considerations for dealing with raw GNSS data?

A: Most GNSS users are only interested in position, velocity, and/or time (PVT) information provided by a receiver. In fact, most mass-market GNSS receivers (e.g., those in cell phones or in your vehicle) only provide PVT information along with some supporting data (such as the number of satellites tracked, dilution of precision, course over ground, and so forth).

However, many — typically higher-cost — receivers also provide access to “raw” measurements including pseudoranges, Doppler shifts/frequencies (i.e., range rates) and, sometimes, carrier phase. Access to these raw measurements allows for powerful new data processing options, including development of a custom data processing engine (e.g., for a specific application), differential processing for higher accuracy, and/or tighter integration with inertial measurement units (IMUs) and other external sensors (see Demoz Gebre-Egziabher’s January/February 2007 “GNSS Solutions” article on different architectures for integrating GNSS and inertial data).

This article will touch on some of the unpublished and undocumented (or, at least, hard to find) differences that exist between receivers’ raw data that may trip up the “uninitiated.”

The Basics
As mentioned, raw measurements include pseudorange (P), Doppler (ϕ̇) and carrier phase (ϕ) data. Their simplified measurement equations are respectively given by the following:

P = ρ + b + εP
ϕ̇ = ρ̇ + ḃ + εϕ̇
ϕ = ∫ϕ̇dt
= ρ + b + λN + εϕ

where ρ is the geometric range between the user and the satellite, ρ̇ is the geometric range rate, b is the receiver clock bias scaled to units of distance by the speed of light, ḃ is the clock drift scaled to units of distance per second, ε are the measurement errors associated with the subscripted measurement, λ is the carrier wavelength, and N is the carrier phase ambiguity. Note that the carrier phase is the time-integral of the Doppler and is therefore sometimes called the accumulated Doppler range (ADR).
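
To make these relationships concrete, the short Python sketch below (using entirely hypothetical values) builds a pseudorange from a linearly growing range and clock bias, forms the carrier phase by accumulating the range rate, and confirms that the two differ only by the constant ambiguity term λN.

```python
import numpy as np

C = 299_792_458.0                 # speed of light [m/s]
LAM_L1 = C / 1575.42e6            # GPS L1 wavelength [m], about 0.19 m

dt = 1.0                          # epoch interval [s]
t = np.arange(10) * dt
rho = 21.0e6 + 800.0 * t          # geometric range [m], growing at 800 m/s (hypothetical)
b = 1.5e5 + 30.0 * t              # receiver clock bias [m], drifting at 30 m/s (hypothetical)

P = rho + b                                  # pseudorange (noise omitted)
range_rate = np.full_like(t, 800.0 + 30.0)   # rho_dot + b_dot [m/s]

# Carrier phase as the accumulated Doppler range (ADR): integrate the range
# rate and add the constant ambiguity term lambda*N fixed at signal lock-on.
N = 12345
phi = rho[0] + b[0] + LAM_L1 * N + (np.cumsum(range_rate) - range_rate[0]) * dt

# phi and P both grow with rho + b, so their difference is the constant lambda*N.
assert np.allclose(phi - P, LAM_L1 * N)
```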

The main differences you observe between GNSS user equipment from various receiver manufacturers are: (1) the maximum allowable magnitude of the receiver clock bias and associated adjustments of the clock, and (2) the sign convention of the Doppler and carrier phase measurements.

Clock Effects
The receiver clock bias represents the difference between the receiver’s estimate of GNSS time and the true GNSS time (as maintained/transmitted by the satellites). This offset is theoretically unbounded, but all manufacturers try to limit it to some extent.

Millisecond Jumps. Most receivers will limit the clock bias to be less than some integer number of milliseconds, as determined from the receiver’s estimate of the clock error. Once the clock error exceeds a pre-set threshold, the receiver adjusts its time estimate by the requisite number of integer milliseconds needed to reset the error to approximately zero; this is a so-called millisecond jump.

As I wrote in the March/April 2011 “GNSS Solutions” column, depending on your application, the magnitude of the clock error may or may not be important, so a review of that article is indirectly relevant in this context, too. However, the focus here is on the effect of the jump on your data processing, and in this regard three main issues arise: the magnitude of the millisecond jump, how it impacts the various measurements, and how it may affect your PVT solution. These are discussed in more detail below.

First, although this is the most common way of handling timing errors in raw GNSS data, receiver manufacturers use different magnitudes of millisecond jumps ranging, in my experience, from 1 millisecond to as much as 100 milliseconds. Once scaled to units of distance by the speed of light, even a one-millisecond jump is relatively easy to identify in the data (because it creates about a 300-kilometer ranging error that, as we will discuss later, may adversely affect the position solution). However, your software should be able to handle a jump of any integer number of milliseconds.
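
As a rough illustration of the kind of check this implies, the following Python sketch looks for a common, near-integer-millisecond shift in the pseudoranges between two consecutive epochs. The per-satellite dictionary interface and the 1-kilometer tolerance are assumptions for illustration, not any particular receiver's API.

```python
import numpy as np

C = 299_792_458.0
MS_IN_METERS = C * 1.0e-3          # one millisecond of clock error, about 300 km

def detect_ms_jump(prev_pr, curr_pr, tol_m=1000.0):
    """Return the integer number of milliseconds the receiver clock jumped
    between two epochs, or 0 if no jump is detected.

    prev_pr, curr_pr: dicts mapping satellite ID -> pseudorange [m]
    (a hypothetical interface; adapt to your own data structures).
    """
    common = sorted(set(prev_pr) & set(curr_pr))
    if not common:
        return 0
    # A clock jump shifts *all* pseudoranges by the same amount, so look at
    # the median epoch-to-epoch change (robust to a single bad channel).
    deltas = np.array([curr_pr[sat] - prev_pr[sat] for sat in common])
    common_shift = np.median(deltas)
    n_ms = int(round(common_shift / MS_IN_METERS))
    if n_ms != 0 and abs(common_shift - n_ms * MS_IN_METERS) < tol_m:
        return n_ms
    return 0
```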

Second, although the clock bias term appears in both the pseudorange and carrier phase equations, we generally only “see” the effect on the pseudorange measurements. To most easily explain this, recall that the carrier phase is the integral of the Doppler, which itself is proportional to the clock drift. Since the clock drift is unaffected by a millisecond jump, the carrier phase is unaffected unless the millisecond jump is somehow “added” to the carrier phase after the fact.

The main effect of seeing millisecond jumps only on the pseudorange arises if you apply carrier smoothing techniques. If your software does not account for differences between the pseudorange and carrier phase data, you may inadvertently introduce large biases into your smoothed pseudoranges. More importantly, if the smoothing filters (one per satellite) are not all working in steady state, the magnitude of the biases would differ between satellites, thus causing a jump in your PVT solution.
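
One simple, if conservative, way to protect a smoothing filter is sketched below: a minimal per-satellite Hatch filter that resets whenever the pseudorange and carrier phase changes disagree by more than an illustrative threshold, so a millisecond jump (or cycle slip) cannot propagate into the smoothed output. A more refined implementation might instead repair the jump and continue smoothing.

```python
class HatchFilter:
    """Minimal per-satellite carrier-smoothing (Hatch) filter sketch.

    The filter resets whenever the pseudorange and carrier phase change by
    very different amounts between epochs (e.g., a millisecond jump seen only
    on the pseudorange, or a cycle slip on the phase). The window length and
    reset threshold are purely illustrative.
    """

    def __init__(self, window=100, reset_threshold_m=50.0):
        self.window = window
        self.reset_threshold_m = reset_threshold_m
        self.smoothed = None
        self.prev_phase = None
        self.count = 0

    def update(self, pseudorange, phase):
        if self.smoothed is None:
            self.smoothed, self.prev_phase, self.count = pseudorange, phase, 1
            return self.smoothed
        # Project the previous smoothed value forward with the phase change,
        # then blend it with the new pseudorange.
        predicted = self.smoothed + (phase - self.prev_phase)
        if abs(pseudorange - predicted) > self.reset_threshold_m:
            # Pseudorange and phase disagree (ms jump, cycle slip): restart.
            self.smoothed, self.prev_phase, self.count = pseudorange, phase, 1
            return self.smoothed
        self.count = min(self.count + 1, self.window)
        w = 1.0 / self.count
        self.smoothed = w * pseudorange + (1.0 - w) * predicted
        self.prev_phase = phase
        return self.smoothed
```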

The final effect to be considered is how millisecond jumps affect the PVT estimation algorithm itself. In this regard, millisecond jumps do not affect a least-squares estimator, precisely because such estimators have no time history. That said, the estimated clock bias would change by the magnitude of the millisecond jump.

In contrast, Kalman filters need to handle these jumps carefully or a large position jump will result. Innovation testing within the Kalman filter algorithm will easily identify these jumps. Unfortunately, innovation testing is usually performed on a per-satellite basis, so blindly applying such algorithms may result in rejecting all of your measurements, usually for many consecutive epochs! You therefore need to handle the case where all (pseudorange) measurements exhibit the same jump between epochs and the jump is close to an integer number of milliseconds.
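
One way to handle that case, sketched below under the same assumptions as the earlier jump detector, is to test whether all pseudorange innovations share a near-integer-millisecond offset and, if so, fold that offset into the filter's clock-bias state rather than rejecting the measurements.

```python
import numpy as np

C = 299_792_458.0
MS_IN_METERS = C * 1.0e-3

def absorb_ms_jump(innovations, clock_bias, tol_m=1000.0):
    """If *all* pseudorange innovations share a near-integer-millisecond offset,
    fold that offset into the filter's clock-bias state instead of rejecting
    every measurement. Returns (corrected innovations, corrected clock bias).
    The interface and tolerance are illustrative only.
    """
    innovations = np.asarray(innovations, dtype=float)
    common = np.median(innovations)
    n_ms = int(round(common / MS_IN_METERS))
    jump = n_ms * MS_IN_METERS
    if n_ms != 0 and np.all(np.abs(innovations - jump) < tol_m):
        # Common-mode jump: shift the clock state and re-form the innovations
        # rather than flagging each satellite as an outlier.
        return innovations - jump, clock_bias + jump
    return innovations, clock_bias
```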

Clock Steering. The alternative to millisecond jumps is clock steering, in which the receiver adjusts the frequency of its internal oscillator to drive the clock drift to zero. As a result, the clock bias term remains very small (typically a few microseconds or less).

Clock steering obviates many of the problems associated with millisecond jumps. However, if you have the good fortune of developing and testing your data processing software with clock-steered data, beware that the software may not work as well with data from other receivers.

Sign Conventions
The other main difference between receivers is the sign convention of the Doppler and/or carrier phase measurements. Pseudorange measurements are excluded here because these are, by definition, directly proportional to the geometric range regardless of how they are generated with a GNSS receiver.

The Doppler shift is defined as the difference between the transmitted and received frequency, in this case at the satellite and receiver, respectively. However, whereas one receiver manufacturer may define Doppler as the transmitted-minus-received frequency, another may adopt the opposite convention. This leads to a sign ambiguity. (Note that a sign ambiguity could also arise from mixing the radio frequency to a negative intermediate frequency in the receiver’s front-end, a process known as “high-side mixing”, but this would also require some changes to the design/implementation of the tracking loops. Ultimately, however, the source of the ambiguity is unimportant.)

Furthermore, as the carrier phase is the time-integral of the Doppler, the sign ambiguity usually extends to the carrier phase. In other words, if the pseudorange for a given satellite increases over time, the carrier phase may increase or decrease at the same rate (ignoring ionospheric divergence effects).

Notwithstanding the foregoing, the RINEX (Receiver INdependent EXchange) data format specifies that the carrier phase changes with the opposite sign to the Doppler; consequently, further caution must be exercised.
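
Because the convention in use is not always documented, one practical option is to infer it from the data itself. The sketch below correlates the time-differenced carrier phase with the reported Doppler for one satellite and returns their relative sign; the interface is simplified and hypothetical, with both quantities assumed to be already scaled to meters and meters per second.

```python
import numpy as np

def relative_doppler_phase_sign(phase, doppler, dt):
    """Infer the relative sign between the carrier-phase change and the reported
    Doppler for one satellite: +1 if they move together, -1 if the receiver
    (or file format) reports them with opposite signs.

    phase [m] and doppler [m/s] are assumed to be already scaled by the carrier
    wavelength; this is a simplified, hypothetical interface.
    """
    phase = np.asarray(phase, dtype=float)
    doppler = np.asarray(doppler, dtype=float)
    observed_rate = np.diff(phase) / dt                 # phase rate from the data
    reported_rate = 0.5 * (doppler[:-1] + doppler[1:])  # Doppler over the same interval
    # The sign of the correlation between the two rates is the convention.
    return 1 if np.sum(observed_rate * reported_rate) >= 0 else -1
```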

So, what is the effect of these sign differences? As is often the case with GNSS, it depends.

First, the sign of the Doppler might affect a receiver’s computation of its velocity (see Salvatore Gaglione’s “GNSS Solutions” column in the March/April 2015 issue).

Second, the sign of the carrier phase with respect to the pseudorange is important when computing position using pseudorange and carrier phase data together, and when carrier smoothing the pseudorange. In both cases, the pseudorange and carrier phase data should increase or decrease together (or, at least, the software should handle the case where they behave oppositely).

Third, the relative sign of the Doppler and carrier phase measurements needs to be handled if using Doppler measurements to identify cycle slips in the carrier phase data.
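
For example, a common test along these lines, sketched below with an illustrative threshold, compares the epoch-to-epoch phase change with the change predicted by integrating the Doppler, using the relative sign determined beforehand.

```python
import numpy as np

def flag_cycle_slips(phase, doppler, dt, sign=1, threshold_m=0.25):
    """Flag epochs where the carrier-phase change disagrees with the change
    predicted by integrating the Doppler (a common cycle-slip test).

    phase [m] and doppler [m/s] are for one satellite; `sign` is the relative
    Doppler/phase convention determined beforehand (see the earlier sketch).
    The threshold is illustrative (roughly a cycle or so at L1).
    """
    phase = np.asarray(phase, dtype=float)
    doppler = np.asarray(doppler, dtype=float)
    observed = np.diff(phase)
    predicted = sign * 0.5 * (doppler[:-1] + doppler[1:]) * dt
    return np.abs(observed - predicted) > threshold_m   # True where a slip is suspected
```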

Finally, any other combination of pseudorange and carrier phase measurements will require that the sign conventions be handled properly. This would include, for example, generation of the pseudorange-minus-carrier (“code-minus-carrier”) combination often used to assess pseudorange noise and multipath effects.
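
As a final illustration, a minimal code-minus-carrier sketch is given below; with consistent signs the geometry and clock terms cancel, leaving the ambiguity, roughly twice the ionospheric delay, multipath, and pseudorange noise.

```python
import numpy as np

def code_minus_carrier(pseudorange, phase, phase_sign=1):
    """Form the code-minus-carrier (CMC) combination for one satellite.

    With consistent sign conventions, the geometry and clock terms cancel,
    leaving roughly twice the ionospheric delay, the constant ambiguity term,
    multipath, and pseudorange noise. The mean is removed here as a crude way
    to strip the ambiguity before plotting or computing noise statistics.
    """
    pr = np.asarray(pseudorange, dtype=float)
    ph = phase_sign * np.asarray(phase, dtype=float)
    cmc = pr - ph
    return cmc - np.mean(cmc)
```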

Summary
This article has looked at some of the unpublished and undocumented differences that exist between raw measurements provided by different GNSS receivers. Fortunately, these differences are easy to handle, but they require modifications to the data processing software; otherwise the resulting PVT solution may contain large, unexpected errors.
