
Survey and Mapping

July 1, 2006

Orbital Precession, Optimal Dual-Frequency Techniques, and Galileo Receivers

Q: Is it true that the GPS satellite geometry repeats every day shifted by 4 minutes?

A: It is true that the GPS satellite orbits were selected to have a period of approximately one half a sidereal day to give them repeatable visibility. (One sidereal day is 23 hours, 56 minutes, and 4 seconds long or 236 seconds shorter than a solar day.) However, because of forces that perturb the orbits, the repeat period actually turns out to be 244 to 245 seconds (not 236 seconds) shorter than 24 hours, on average, and changes for each satellite.

The selection of a half sidereal day orbit causes the satellite ground track and the satellite visibility from any point on earth to be essentially the same from day to day, with the satellites appearing in their positions approximately 4 minutes (236 seconds) earlier each day due to the difference between sidereal and solar days. This was a particularly useful property in the early days of GPS when session planning was important to ensure adequate satellite coverage. With this easily predictable coverage, GPS users could schedule repeatable campaign sessions well in advance just by shifting their experiments forward each day by 4 minutes.
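
For readers who want to check the arithmetic, the short sketch below (in Python; the names and layout are ours, not part of the original answer) reproduces the nominal 236-second daily shift and compares it with the 244- to 245-second figure quoted above.

```python
# Sketch of the repeat-period arithmetic described above (values from the text).
SOLAR_DAY_S = 24 * 3600                     # 86,400 s
SIDEREAL_DAY_S = 23 * 3600 + 56 * 60 + 4    # 86,164 s

nominal_shift = SOLAR_DAY_S - SIDEREAL_DAY_S        # 236 s, about 4 minutes
print(f"Nominal daily shift: {nominal_shift} s")

# Orbit perturbations make the actual repeat period roughly 244-245 s short
# of 24 hours (per-satellite value varies; figure quoted from the answer above).
for actual_shift in (244, 245):
    extra = actual_shift - nominal_shift             # advance beyond the 4-minute rule
    print(f"Assumed {actual_shift} s shift -> {extra} s/day beyond the nominal value")
```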

(For the rest of Penina Axelrad and Kristine M. Larson’s answer to this question, please download the complete article using the PDF link above.)

Q: How can dual frequency code and carrier measurements be optimally combined to enhance position solution accuracy?

A: The smoothing of GPS code pseudorange measurements with carrier phase measurements to attenuate code noise and multipath is a well-established GPS signal processing technique. Unlike carrier phase real time kinematic (RTK) techniques, carrier-smoothed code (CSC) positioning solutions do not attempt to resolve carrier phase ambiguities. As a result, they offer a number of design and operational advantages for those applications that do not require RTK accuracies.

Ionospheric effects are a limiting factor in how much smoothing of pseudorange errors can be accomplished with single-frequency measurements. The use of dual-frequency code and carrier measurement combinations in CSC processing to attenuate pseudorange errors and as a precursor for carrier phase ambiguity resolution has gained increasing importance, particularly with the availability of all-in-view dual-frequency GPS receivers in the survey and military markets. Interest in these techniques will increase with the advent of additional GNSS signals as the result of GPS modernization and implementation of Galileo, along with the proliferation of differential services.

(For the rest of Dr. Gary McGraw’s answer to this question, please download the complete article using the PDF link above.)

Q: What is the availability of Galileo receivers?

A: With the launch of the GIOVE-A (Galileo In-Orbit Validation Element – A) Galileo test satellite in December last year, the European Galileo satellite navigation system is making progress. How will we be able to recognize the benefits of Galileo? We will need enough Galileo satellites to make a difference compared with using GPS alone, and we will need dual-mode Galileo/GPS receivers.

First, let us recap what Galileo will provide to users. And second, let us summarize what benefits we can expect to see, not only from Galileo alone but from a combined GPS/Galileo constellation of approximately 60 satellites.

Galileo will offer several worldwide service levels, including open access and restricted access for various segments of users.

(For the rest of Tony Murfin’s answer to this question, please download the complete article using the PDF link above.)

Uh-Oh, It’s UXO!: A Global Differential GPS Solution for Clearing Unexploded Ordnance

At any given time along a large swath of rural, northern Texas you might witness a loud, dirty ritual. A handful of men standing still in the middle of a field, their expectant eyes fixed on the same point. Just about the time your gaze sets on the same point, it happens: a deep sound like the forceful downbeat of a drum cracks through the air and the dirt-caked ground explodes in a dusty plume of metal and sand. The cloud dissipates and the men, satisfied that the World War II–era munition has been successfully destroyed, move on to their next pin-flagged target.

It’s an almost daily exercise for the survey and “dig” teams of Parsons Corporation and USA Environmental who together for the past four years have been steadily clearing the once-active infantry and artillery training facility Camp Howze and returning the 59,000 acres of land to its former cattle-grazing condition.

Officially designated as the Former Camp Howze Removal Project (FCHRP), the Texas effort is part of a long-standing U.S. Army Corps of Engineers (USACE) program to clean up unexploded ordnance (UXO) remnants at former military training bases around the globe. And it’s one in which Parsons, an engineering and construction firm based in Pasadena, California, has been heavily involved for the past 15 years. During that time, Parsons survey and explosives teams have located, unearthed, and recycled or destroyed more than a billion pounds of munitions, fragments, and other range-related items.

Unlike the typical short-term UXO removal projects of the past, the FCHRP has already required four years of dedicated labor and doesn’t as yet have a fixed end point. The USACE funding approach enables the teams to stay on site until either the money runs out or all the ordnance is found and cleared, says Terry Willis, Parsons field data manager for FCHRP.

Such unusual circumstances make budgeting for operational costs and developing highly efficient and productive work methods that much more critical, says Willis, because any substantial funding cuts by Congress could mean the end of the project.

An additional motivation for taking an open-ended approach to the FCHRP could be that all of the former Camp Howze land is privately owned and is home to families who have lived and worked there for more than 50 years. Each family must grant consent to the teams to clear individual parcels.

Despite the fact that their house or barn could be sitting on a land mine or other ordnance, many people are rather complacent and don’t see the urgency in having the munitions removed, Willis says. Even though field teams have found artillery rounds eight feet from people’s doorsteps, acquiring the necessary consent to access the property has been a time-consuming process.

The Basis for Going Baseless
The rather atypical FCHRP presented Parsons with the opportunity to arm the project teams with their own atypical survey “weapon” — a global satellite-based augmentation system (GSBAS) that provides corrected GPS positioning without the use of base stations.

Although Parsons had never before employed the GSBAS technology in its numerous UXO removal projects, given the way the GSBAS has performed so far on the Texas project, Willis predicts that similar systems will become as commonplace in the field as the shovels and the explosives used to remove munitions.

Previously, Parsons’ UXO-removal teams employed real-time kinematic GPS (RTK-GPS) systems to create search grids, find and stake out anomalies for investigation, and record the position of munitions found. RTK techniques require the broadcast of differential corrections to the GPS signals’ carrier phase measurements. These corrections are transmitted via a high-speed data modem from a base station to roving GPS receivers.

“Although RTK-GPS is extremely accurate,” says Willis, “its complexity, bulk, and expense make it less than ideal for Parsons’ purposes.” Since putting the GSBAS system to use in the field, the Camp Howze team has virtually eliminated its need for RTK-GPS in the majority of the fieldwork, obtaining decimeter accuracy for one-third the cost of an RTK-GPS unit.

A departure from local real-time differential GPS systems, the GSBAS relies on a global network of base stations to calculate and compensate for clock and orbit errors in the satellite transmissions. Broadcast of the DGPS corrections, available globally in real time, eliminates the need for local base stations, which in turn eliminates the struggle to maintain communication links to a source of local corrections. In short, users are no longer tethered to a base station for precise positioning.

Recycling a Metallic Past
For four years, from 1942 to 1946, Camp Howze was the temporary home for thousands of soldiers as they prepared for battle overseas. Located along the Texas-Oklahoma border about 55 miles north of Dallas, the camp offered an immense training area, artillery ranges, libraries, chapels, theaters, banking facilities, and even a camp newspaper.

For the last four years, however, former Camp Howze has been the temporary home of the Parsons and USA Environmental teams as they continually search for the telltale metallic signs of the camp’s previous incarnation.

When their FCHRP activities began in 2002, the Parsons team started out with very scant historical and practical knowledge of the area, having only a few sheets of county property maps and background information on the camp itself provided by the U.S. Army. This information included engineering maps with the approximate locations of artillery ranges, aerial photos of the facility from 1943, and written records from units that trained there.

Parsons then obtained updated aerial photos of the site taken in 1995 and records of what project managers call “phase one properties,” occupied properties or buildings believed to be near or on former range or training areas. All of these data sets were incorporated as layers into a geographical information system (GIS) to begin to identify logical areas to investigate for UXOs. Of critical importance for prioritizing their efforts was the identification of current high-traffic areas where private citizens live, work, and play.

“Once we identify the areas to investigate, we determine which methods for removing ordnance will be the most effective based on many factors such as accessibility, terrain, vegetative cover, time of year and land-owner consent,” says Willis. In the case of Camp Howze, all of those factors led to the decision to couple standard search tools with new technology to improve efficiency.

The two most common investigative tools are what Willis calls “magnetometer (Mag) and Dig” — a thorough, yet costly and time-consuming process of manually clearing smaller areas with the aid of shovels and handheld metal detectors – and “digital geophysics,” a survey technique that uses large, highly sophisticated electromagnetic sensors to detect the presence of buried metal objects. As the latter method can rapidly cover a much larger territory, Parsons first applied the technique to perform a geophysical survey in combination with RTK-GPS to pinpoint suspected unexploded munitions.

To perform the geophysical survey, three computer-controlled electromagnetic sensors are connected together to create a three-meter wide sensing array. The sensors are then physically pulled by an all-terrain vehicle over the area of interest to detect the presence of metal items in the ground and record their positions.

The readings from the electromagnetic sensors coupled with the continuous GPS readings are postprocessed to generate coordinates of anomalies, that is, possible UXOs, which are then added to the GIS. All uploaded position readings are tied to Texas North Central State Plane coordinates.

Through considerable postprocessing of the geophysical survey data, the Parsons’ geophysicists plot positions of the anomalies on digital maps and “flag” them through color-coded points to signify the level of probability of being UXOs.

Following their usual practice, at this stage the field teams would have used RTK-GPS to reacquire and flag the real-world points of potential targets detected by the geophysical survey. At the former Camp Howze, however, Willis chose to depart from tradition.

Given the terrain extremes in this region of rural Texas, the large study area, high-accuracy requirements, limited labor resources, and indefinite work schedule, Parsons needed a cost-effective and user-friendly survey solution that would enable UXO technicians to efficiently locate anomalies and precisely position them.

“We opted for the satellite-based system approach for a number of reasons, one of which was the considerable cost savings over an RTK-GPS rental,” he says. “Because it doesn’t require a base station, we don’t have line-of-sight issues nor need to spend considerable time troubleshooting communications and power supply issues. So, we can be much more productive in the field. And the simplicity of the system makes it much easier for the teams — who are not trained surveyors — to set up and use.”

Unearthing UXO
On any given day, Willis and his teams use the GIS as a logistical planning tool to map out clearing strategies based on the digitally flagged hot spots indicating the highest probability of buried munitions. The team imports those coordinates into the controller software of the receiver, and the Mag/Dig teams head to the site.

Once on site, the two-person survey team uses a “quick-start” feature built into the GSBAS receiver software that enables the system to reach full position accuracy immediately by using a previously surveyed position to initialize the navigation function. This setup process takes “less than five minutes,” says Willis, after which the survey team uses the satellite-based system to navigate to the predetermined points on the ground, where they stake the targets with pin flags.

Following relatively closely behind the survey team, the dig team of three to seven UXO technicians armed with shovels and handheld magnetometers investigates each flagged point. They use the metal detector to verify the presence of metal, and, if the indications are affirmative, they gingerly dig up the object.

Should they unearth any munitions, they carefully inspect the UXO to determine if it needs to be destroyed. To neutralize the ordnance they set explosives and destroy it on the spot. All discovered ordnance is properly and precisely recorded — type, position, and confirmed destruction — and the collected field data uploaded into the GIS.

“Although the GIS was not originally a requirement, it has become the information backbone of the project,” says Willis. “It’s a planning tool for UXO searches and the main repository for what we find in the field. It maintains all of the data layers that we have accumulated and created over the past four years, including aerial photos, topographic maps, scans of annotated response maps, parcel boundaries, and pipeline data to provide us with a comprehensive graphical resource. And because it’s tied into the field database, data from daily operations can be displayed geographically in various ways.” 

To date, with the combination of the GSBAS, the GIS, and their conventional Mag/Dig tools, the survey and UXO-removal teams have cleared more than 1,800 acres of Camp Howze’s most hazardous areas. Along the way, they have destroyed more than 860 live ordnance items, including mortars, artillery shells, anti-tank rockets, hand grenades, and land mines.

Stars in Their Eyes
Although Camp Howze stretches across nearly 59,000 acres, the FCHRP mandate is not to sweep 100 percent of the land but rather to investigate and clear the zones posing the greatest hazard to the public. Willis says that adding the satellite-based system to the fieldwork is helping Parsons to fulfill that charter more efficiently than with their previous RTK-GPS solution, predominantly because they can achieve near RTK-GPS accuracy without a base station.

“We work four 10-hour days per week,” says Willis. “If you spend an hour setting up a base station and another half hour to tear it down, you’ve lost at least an hour and a half of operational time, provided you don’t have any trouble with it during the day. In rural, rough terrain, radio line of sight is a problem, and it can be a long trip back to the base if we lose the radio signal.”

Also, powering the base for an entire day can be a challenge, he says. Parsons teams have used marine deep-cycle batteries to power the equipment and sometimes the power supply still wouldn’t last an entire day. Cellular RTK was considered for its convenience, but the existence of many cellular “dead zones” in the area precluded its use.

“Because our survey team directly supports our dig team, both teams will normally have to shut down operations if something happens to the base station,” Willis adds. “It is costly to keep a dig team in the field. If they’re forced to stop work because the survey equipment is down, it’s very expensive.”

FCHRP requirements dictate that the teams position any ordnance they discover to within one foot. Willis says the satellite-based unit performs well enough for them to meet this standard. “The decimeter accuracy provided by the system is actually more than we require for this project,” he says.

Because heavy thunderstorms and tornadoes are the only weather-related phenomena that will force the crews inside, the Mag/Dig teams need rugged equipment that’s also portable.

“The [GSBAS] system fits into a single carry case; so, much of the weight and bulk is reduced to a manageable size,” says Willis. “That simplicity and compactness makes it very reliable because it’s easy to transport into the field, set up and to use.”

Willis feels confident that the cost-effectiveness of the system will help Parsons to win similar projects in the future. “When bidding on these projects, it helps to be able to shave thousands of dollars off the cost by simply changing a piece of equipment,” he says.

In the meantime, people in this rural part of Texas can still count on witnessing a handful of men, staring at a fixed point in the distance, waiting for the inevitable explosion of dirt and metal.

For figures, graphs, and images, please download the PDF of the article, above.

BOC or MBOC?

Europe and the United States are on the verge of a very important decision about their plans to implement a common civil signal waveform at the L1 frequency: Should that waveform be pure binary offset carrier — BOC(1,1) — or a mixture of 90.9 percent BOC(1,1) and 9.09 percent BOC(6,1), a combination called multiplexed BOC (MBOC)? The desire for a common civil L1 signal is enshrined in a 2004 agreement on GNSS cooperation between the United States and the European Union (EU).

For the EU and the European Space Agency (ESA), that decision — and its consequences — will come sooner: with the Galileo L1 Open Service (OS) that will be transmitted from satellites to be launched beginning in the next few years. For the United States, the waveform decision will shape the design of the L1 civil signal (L1C) planned for the GPS III satellites scheduled to launch in 2013. For a background on the process that led to design of the GPS L1 civil signal and its relevance to the BOC/MBOC discussion, see the sidebar “L1C, BOC, and MBOC.”

The May/June issue of Inside GNSS contained a “Working Papers” column titled, “MBOC: The New Optimized Spreading Modulation Recommended for Galileo L1 OS and GPS L1C”. Authored by members of a technical working group set up under the U.S./EU agreement, the article discussed the anticipated MBOC benefits, primarily improved code tracking performance in multipath. The column also noted that, while lower-cost BOC(1,1) receivers would be able to use MBOC, it would come at the cost of a reduction in received signal power.

An article in the “360 Degrees” news section of the same issue of Inside GNSS noted that some GNSS receiver manufacturers believe MBOC is not best for their applications and perhaps should not have been recommended. (This point was noted on page 17 of the May/June issue under the subtitle “MBOC Doubters.”) See the sidebar “Other Observers” (below) for additional comments from companies with concerns about the MBOC recommendation.

This article, therefore, continues the discussion of a common signal waveform by asking several companies with different product perspectives whether they consider the proposed MBOC waveform to be more or less desirable for their applications than the BOC(1,1). Currently, BOC(1,1) is the baseline defined in the June 26, 2004, document signed by the U.S. Secretary of State and the vice-president of the European Commission (the EU’s executive branch): “Agreement on the Promotion, Provision and Use of Galileo and GPS Satellite-Based Navigation Systems and Related Applications.”

Maximum benefit from MBOC will be obtained by receivers using recently invented technology that employs computationally intensive algorithms. Although such receivers clearly will provide benefits to their users because of the BOC(6,1) component of MBOC, the practical value of those benefits has not been quantified, which is one purpose of the questions raised in this article. For the moment, let’s call all these prospective MBOC users “Paul”.

Meanwhile, patents on the most widely used multipath mitigation technologies today, such as the “narrow correlator” and the more effective “double-delta” techniques, will expire about the time the new signals are fully available, opening these techniques to much wider use. Unfortunately, the double-delta technology cannot use the BOC(6,1) component of MBOC. In addition, narrowband receivers, which today dominate consumer products, also cannot use the BOC(6,1). Let’s call all these users “Peter”.

Therefore, the fundamental question raised by this article is whether we should rob Peter to pay Paul. If the amount taken is quite small and the benefits are large, then the answer should be “yes.” If the amount taken creates a burden to Peter, now and for decades to come, with little benefit to Paul, then the answer should be “no.” The in-between cases are more difficult. The purpose of this article is to explore the tradeoffs.

To address this issue, we invited engineers from companies building a range of GNSS receivers to take part in the discussion. We’ll introduce these participants a little later. But first, let’s take a look at the technical issues underlying the discussion.

BOC/MBOC Background

The RF spectrum of a GPS signal is primarily defined by the pseudorandom code that modulates its carrier and associated data. A pseudorandom code appears to be a completely random sequence of binary values, although the sequence actually repeats identically, over and over.

For the C/A code on the L1 frequency (1575.42 MHz), the state of the code (either +1 or –1) may change at a clock rate of 1.023 MHz. We call this binary phase shift keying, or BPSK(1), meaning BPSK modulation with a pseudorandom code clocked at 1.023 MHz. Note that the bits of a pseudorandom code often are referred to as “chips,” and four BPSK chips are illustrated at the top of Figure 1. (To view any figures, tables or graphs for this story, please download the PDF version using the link at the top of this article.)

Among many other topics, the 2004 U.S./EU agreement settled on a common baseline modulation for the Galileo L1 OS and the GPS L1C signals: BOC(1,1). (The BOC(n,m) notation means a binary offset carrier in which the square-wave subcarrier frequency is n × 1.023 MHz and the pseudorandom code rate is m × 1.023 MHz.) Like BPSK(1), the BOC(1,1) waveform also is a BPSK modulation, meaning there are only two states, either a +1 or a –1. The timing relationships of the code and the square wave are illustrated by Figure 1.
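
To make those timing relationships concrete without the figure, here is a small illustrative sketch, our own construction rather than anything from the article, that builds a few chips of BPSK(1) and BOC(1,1) and shows the extra mid-chip transition introduced by the square-wave subcarrier.

```python
import numpy as np

def bpsk_chips(code, samples_per_chip=4):
    """BPSK(1): each +/-1 code chip held constant for one chip period."""
    return np.repeat(np.asarray(code, dtype=float), samples_per_chip)

def boc11_chips(code, samples_per_chip=4):
    """BOC(1,1): each chip multiplied by one cycle of a square-wave subcarrier."""
    half = samples_per_chip // 2
    subcarrier = np.concatenate([np.ones(half), -np.ones(samples_per_chip - half)])
    return np.concatenate([c * subcarrier for c in code])

code = [+1, -1, -1, +1]      # four example chips, as sketched in Figure 1
print(bpsk_chips(code))      # level changes only at chip boundaries
print(boc11_chips(code))     # guaranteed extra transition in the middle of every chip
```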

Although the agreement defined BOC(1,1) as the baseline for both Galileo L1 OS and GPS L1C, it left the door open for a possible signal “optimization” within the overall framework of the agreement. As documented in the paper by G.W. Hein et al., “A candidate for the GALILEO L1 OS Optimized Signal” (cited in the “Additional Resources” section at the end of this article) and many other papers, the EC Signal Task Force (STF) after much study initially recommended a composite binary coded symbols (CBCS) waveform.

Because the agreement made it desirable for GPS L1C and Galileo L1 OS to have an identical signal spectrum and because GPS III implementation of CBCS would be difficult, a search was made by a joint EC/US working group to find an optimized signal that was acceptable for both GPS and Galileo. The result is MBOC (discussed in the May/June “Working Papers” column and the like-named IEEE/ION PLANS 2006 paper by G. W. Hein et al. cited in “Additional Resources.”).

Like all modernized GPS signals — including M-code, L2C, and L5 — L1C will have two components. One carries the message data and the other, with no message, serves as a pilot carrier. Whereas all prior modernized GPS signals have a 50/50 power split between the data component and the pilot carrier, L1C has 25 percent of its power in the data component and 75 percent in the pilot carrier.

The L1C MBOC implementation would modulate the entire data component and 29 of every 33 code chips of the pilot carrier with BOC(1,1). However, 4 of every 33 pilot carrier chips would be modulated with a BOC(6,1) waveform, as illustrated in Figure 2. The upper part of the figure shows 33 pilot carrier chips. Four of these are filled to show the ones with the BOC(6,1) modulation. Below the 33 chips is a magnified view of one BOC(1,1) chip and one BOC(6,1) chip.

The BOC(1,1) chip is exactly as illustrated in Figure 1 while the BOC(6,1) chip contains six cycles of a 6.138 MHz square wave. With this image in mind, we can easily calculate that the pilot carrier has 29/33 of its power in BOC(1,1) and 4/33 of its power in BOC(6,1). Because the pilot carrier contains 75 percent of the total L1C signal power, the percent of total BOC(6,1) power is 75 × (4/33) or 9.0909+ percent. Conversely, the data signal has 25 percent of the total L1C signal power; so, the calculation of BOC(1,1) power is 25 + 75 × (29/33) or 90.9090+ percent.

Because the Galileo OS signal has a 50/50 power split between data and pilot carrier, the implementation is somewhat different in order to achieve the same percentages of BOC(1,1) and BOC(6,1) power. For the most likely time division version of MBOC for Galileo, 2 of 11 chips in the pilot carrier would be BOC(6,1) with none in the data component. Thus, the percent of total BOC(6,1) power is 50 × (2/11) or 9.0909+ percent. Similarly, the percent of total BOC(1,1) power is 50 + 50 × (9/11) or 90.9090+ percent. This makes the spectrum of Galileo L1 OS the same as GPS L1C.
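
The power-split arithmetic in the two preceding paragraphs is easy to verify; the fractions below come straight from the text, while the helper function and its name are ours.

```python
def tmboc_split(pilot_power_pct, boc61_chips, total_chips):
    """Percent of total signal power carried as BOC(1,1) and BOC(6,1)
    when only the pilot carrier contains time-multiplexed BOC(6,1) chips."""
    boc61 = pilot_power_pct * boc61_chips / total_chips
    return 100.0 - boc61, boc61

print(tmboc_split(75.0, 4, 33))   # GPS L1C:    (90.9090..., 9.0909...) percent
print(tmboc_split(50.0, 2, 11))   # Galileo OS: (90.9090..., 9.0909...) percent
```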

Code Transitions. The fundamental purpose of MBOC is to provide more code transitions than BOC(1,1) alone, as is evident in Figure 2. (A code loop tracks only the code transitions.) However, these extra transitions come on top of the increased number in BOC(1,1) compared to the L1 C/A signal.

Taking into account that the pilot carrier has either 75 percent of the signal power with GPS or 50 percent with Galileo, GPS with BOC(1,1) has 2.25 times more “power weighted code transitions” than C/A-code (a 3.5-dB increase). Galileo with BOC(1,1) has 1.5 times more (a 1.8-dB increase). MBOC on GPS would further increase the net transitions by another factor of 1.8 (2.6-dB increase), and the most aggressive version of MBOC on Galileo would increase the net transitions by a factor of 2.2 (3.4-dB increase).
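
Those factors can be reproduced with a back-of-the-envelope calculation. The reading of the arithmetic is ours, not spelled out in the article: assume 1.5 average transitions per BOC(1,1) chip (one guaranteed mid-chip flip plus a 50 percent chance of a flip at each chip boundary), 11.5 per BOC(6,1) chip, and 0.5 per C/A-code chip, and weight each by the tracked (pilot) power.

```python
import math

def db(ratio):
    return 10 * math.log10(ratio)

CA, BOC11, BOC61 = 0.5, 1.5, 11.5   # assumed transitions per chip (see above)

ca_code     = 1.00 * CA                              # all C/A power is usable
gps_boc11   = 0.75 * BOC11                           # GPS L1C pilot: 75 % of power
gal_boc11   = 0.50 * BOC11                           # Galileo OS pilot: 50 % of power
gps_tmboc75 = 0.75 * (29 * BOC11 + 4 * BOC61) / 33   # TMBOC-75 pilot
gal_tmboc50 = 0.50 * (9 * BOC11 + 2 * BOC61) / 11    # TMBOC-50 pilot

print(gps_boc11 / ca_code,     db(gps_boc11 / ca_code))       # ~2.25x, ~3.5 dB
print(gal_boc11 / ca_code,     db(gal_boc11 / ca_code))       # ~1.5x,  ~1.8 dB
print(gps_tmboc75 / gps_boc11, db(gps_tmboc75 / gps_boc11))   # ~1.8x,  ~2.6 dB
print(gal_tmboc50 / gal_boc11, db(gal_tmboc50 / gal_boc11))   # ~2.2x,  ~3.4 dB
```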

Therefore, given the improvement of BOC(1,1) over C/A code, the question raised by this article is whether a further improvement in number of transitions is worth subtracting a small amount of signal power during all signal acquisitions, for all narrowband receivers, and for all receivers using the double-delta form of multipath mitigation.

A portion of Table 1 from the May/June “Working Papers” column is reproduced here, also as Table 1. Of the eight possible waveforms in the original table, only three are included here. These are representative of all the options, and they include the two versions of MBOC considered most likely for implementation in Galileo and the only version GPS would use.

Two new columns have been added in our abbreviated version of the table. The first is an index to identify the particular option, and the last identifies whether GPS or Galileo would use that option.

Receiver Implementations

Most GNSS receivers will acquire the signal and track the carrier and code using only the pilot carrier. For GPS L1C this choice is driven by the fact that 75 percent of the signal power is in the pilot carrier. Little added benefit comes from using the data component during acquisition, and none for code or carrier tracking, especially with weak signals.

For Galileo, the decision is driven by the data rate of 125 bits per second (bps) and the resulting symbol rate of 250 symbols per second (sps). This allows only 4 milliseconds of coherent integration on the Galileo data component (compared with 10 milliseconds on the GPS data component). Because coherent integration of the pilot carrier is not limited by data rate, it predominantly will be the signal used for acquisition as well as for carrier and code tracking.
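
As a quick check of those limits, the longest coherent integration on a data channel is one symbol period; the 250 sps Galileo figure is stated above, while the 100 sps GPS L1C symbol rate is inferred from the quoted 10-millisecond limit rather than taken from the article.

```python
# One symbol period bounds the coherent integration on each data channel.
for name, sps in (("Galileo L1 OS data", 250), ("GPS L1C data (inferred)", 100)):
    print(f"{name}: {1000 / sps:.0f} ms maximum coherent integration")
```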

Reflecting the reasons just stated, Figure 3 compares the spectral power density in the pilot carrier for each of the three signal options listed in Table 1. In each case the relevant BOC(1,1) spectrum is shown along with one of the three MBOC options. These plots show power spectral density on a linear scale rather than a logarithmic dB scale, which renders small differences more prominent.

The center panel shows the GPS case with either BOC(1,1) or TMBOC-75. (The BOC(1,1) peaks are arbitrarily scaled to reach 1.0 watt per hertz, W/Hz.) The BOC(1,1) peaks of TMBOC-75 are lower by 12 percent (-0.6 dB) in order to put additional power into the BOC(6,1) component, primarily at ±6 MHz.

All three panels of Figure 3 have the same relative scaling. The reason the peaks of the BOC(1,1) components in panels 1 and 3 are at 0.67 W/Hz is that GPS L1C will transmit 75 percent of its total signal power in the pilot carrier whereas Galileo will transmit 50 percent. The ratio is simply 0.5/0.75 = 0.67 (-1.8 dB).

The first panel of Figure 3 also shows the Galileo TMBOC-50 option in which the BOC(1,1) component peaks are lowered by 18 percent (-0.9 dB) in order to provide power for the BOC(6,1) component, primarily at ±6 MHz.

The third panel shows the same Galileo BOC(1,1) power density but with the CBOC-50 option. In this case the BOC(6,1) component exists in the data channel as well as the pilot carrier. That is why its amplitude at ±6 MHz is half that in panels 1 and 2. That also is why less power is taken from the BOC(1,1) component for the BOC(6,1) component; in this case the reduction is 9 percent (-0.4 dB). This is not considered an advantage by those who want to track the BOC(6,1) component, and it also reduces the data channel power for narrowband receivers by the same 9 percent or 0.4 dB.

As stated before, the fundamental question raised by this article is whether we should rob Peter to pay Paul. As with all such top-level questions, the answers lie in the details and in the perceptions of those affected. Inside GNSS posed a series of questions to industry experts in order to explore their perspectives and preferences.

The Questions and Answers

Q: What segment of the GNSS market do your answers address? Describe your market, including typical products and the size of the market.

Fenton – High precision survey and mapping, agriculture/machine control, unmanned vehicles, scientific products, and SBAS ground infrastructure where centimeter accuracy is very important. NovAtel sells at the OEM level to software developers and system integrators and calculates its present total addressable market (TAM) at $300-$400 million USD, again at the OEM level.

Garin – We are focused on consumer electronics where very low cost and very low power are of critical importance, such as personal navigation devices (PNDs), cellular phones, and in general applications where the power consumption is at a premium. These objectives should be reached with little to no impact on the user experience. The loss of performance due to design tradeoffs is mitigated by assisted GPS (A-GPS).

Hatch /Knight – NavCom supplies high-precision, multi-frequency GNSS receivers that employ advanced multipath and signal processing techniques, augmented by differential corrections from our StarFire network. These receivers are widely used in the agriculture, forestry, construction, survey, and offshore oil exploration markets. Current market size is on the order of 100,000 units per year.

Sheynblat/Rowitch – Our answers address wireless products for the consumer, enterprise, and emergency services markets. There are over 150 million Qualcomm GPS enabled wireless handsets in the market today, and this large market penetration and heavy usage is primarily driven by low cost, low power, and high sensitivity. The vast majority of other GPS enabled consumer devices worldwide are also cost driven.

Stratton – Rockwell Collins is a leading provider of GPS receivers to the U.S. military and its allies, and we are also a major supplier of GNSS avionics to the civil aviation industry. The civil aviation applications demand high integrity and compatibility with augmentation systems, while the military requirements range from low-power, large-volume production to high-dynamic and highly jam-resistant architectures (as well as civil compatible receivers). Military receivers are impacted due to civil compatibility requirements. Our company has produced over a half million GPS receivers and has a majority market share in military and high-end civil aviation (air transport, business, and regional) markets.

Studenny – Our market is commercial aviation where continuity of operation and integrity are the most important performance parameters.

Weill – I and a colleague, Dr. Ben Fisher, of Comm Sciences Corporation, are the inventors of a new multipath mitigation approach which we call Multipath Mitigation Technology (MMT), so our primary product is technology for improved multipath mitigation. MMT is currently incorporated in several GPS receivers manufactured by NovAtel, Inc. Their implementation of MMT is called the Vision Correlator.

Q: Which signal environments are important for your products: open sky, indoor, urban canyon, etc.

Fenton – In general, most of our customers operate in open sky environments. However, a significant number are operating under or near tree canopy and in urban canyons.

Garin – Ninety percent of our applications are or will be indoors and in urban canyons.

Hatch /Knight – Our receivers are mostly used in open sky and under-foliage conditions.

Stratton – Our products use civil signals mainly in open sky conditions, although civil signals may be used to assist the acquisition of military signals in a broad variety of environments.

Studenny – Aircraft environments, with particular attention to safety-of-life. Also, ground-based augmentation system (GBAS) ground stations.

Weill – Any environment in which multipath is regarded as a problem, including precision survey, indoor (911) assisted GPS, and military and commercial aviation.

Q: Which design parameters are most critical for your products: power, cost, sensitivity, accuracy, time to fix, etc.

Fenton – In general, our products service the high end “commercial” markets. Our customers in general have priorities in the following order: a) accuracy, b) robust tracking, c) cost, d) power, e) time to first fix.

Garin – The most important criteria are, from the highest to the lowest: power, cost, sensitivity, time-to-first-fix, and finally, accuracy.

Hatch /Knight – Accuracy is most important.

Sheynblat/Rowitch – We have invested substantial engineering effort to achieve market-leading sensitivities (-160 dBm) while maintaining very low receiver cost. This engineering investment, focus on sensitivity, and close attention to cost models probably also hold true for other vendors focused on mass-market, AGPS-enabled devices that have to work indoors. All of these GPS vendors go to great lengths to improve sensitivity for difficult indoor scenarios. Every dB counts and may make the difference between a successful or a failed fix, which is of particular concern for E-911 and other emergency situations.

Stratton – The tradeoff in relative importance of these parameters varies widely depending on the particular application, though life-cycle cost (including development and certification) arguably is most significant.

Studenny – Actually, all parameters are important. However, we focus on safety-of-life and the drivers are both continuity of operation and integrity (hazardously misleading information or HMI).

Specifically, we believe cross-correlation, false self-correlation, and the ability to resist RFI, as well as improving multipath performance, are signal properties of great interest to us. A well-selected coding scheme minimizes all of these and HMI in particular. Finally, HMI may become a legal issue for non-aviation commercial applications, especially if those applications involve chargeable services, implied safety-of-life, and other such services.

Weill – MMT is most effective in receivers that have high bandwidth and are receiving high-bandwidth signals. However, it can substantially improve multipath performance at lower bandwidths.

Q: Do you really care whether GPS and Galileo implement plain BOC(1,1) or MBOC? Why?

Fenton – Yes, we expect that the MBOC signals combined with the latest code tracking techniques will provide a majority of our customers a significant performance benefit for code and carrier tracking accuracy in applications where multipath interference is a problem.

Garin – I do not believe that MBOC will significantly benefit our short-term market. The expected MBOC multipath performance improvement will be meaningless in the urban context, where the dominant multipath is non-line-of-sight and where the majority of mass-market usage is concentrated. However, we believe that a higher-accuracy, carrier phase mass market will emerge within a five-year timeframe, with back-office processing capabilities and wirelessly connected field GPS sensors. This will be the counterpart of the A-GPS architecture in the cell phone business. MBOC would have an important role to play in this perspective. We envision this new market only in benign environments, and not geared toward surveyors or GIS professionals.

Hatch /Knight – The MBOC signal will significantly improve the minimum code tracking signal to noise ratio where future multipath mitigation techniques are effective. The expected threshold improvements will be approximately equal to the best case improvements indicated by this article. MBOC will be less beneficial to very strong signals where the noise level is already less than the remaining correlated errors, like troposphere and unmitigated multipath.

Designing a receiver to use the MBOC code will be a significant effort. The resulting coder will likely have about double the complexity of the code generator that does not support MBOC. There will be a small recurring cost in silicon area, and power consumption will increase significantly. Overall, MBOC is desirable for our high performance applications. For many applications the costs are greater than the benefits.

Sheynblat/Rowitch – Yes, we do care about the decision of BOC versus MBOC. The proposed change to the GPS L1C and Galileo L1 OS signal to include BOC(6,1) modulation will perhaps improve the performance of a very tiny segment of the GPS market (high cost, high precision) and penalize all other users with lower effective received signal power due to their limited bandwidth. We prefer that GPS and Galileo implement the BOC(1,1) signal in support of OS location services.

Stratton – This decision does not appear to have much influence on our markets when viewed in isolation, but we would like to see GPS make the best use of scarce resources (such as spacecraft power) to provide benefits that are attainable under realistic conditions.

Studenny – Yes, we do care. GPS L5 needs to be complemented by a signal with similar properties at L1, the reason being that a momentary outage during precision approach on either L1 or L5 should not affect CAT-I/II/III precision approach continuity or integrity. We understand that there are constraints in selecting a new L1 signal; however, the proposed MBOC waveform better supports this. This is in keeping with the FAA NAS plans and the transition to GNSS for all phases of flight, including precision approach.

Weill – Yes. Comm Sciences has established that the performance of current receiver-based multipath mitigation methods is still quite far from what is theoretically possible. It is also known that GNSS signals with a wider RMS bandwidth have a smaller theoretical bound on ranging error due to thermal noise and multipath. Since multipath remains as a major source of pseudorange error in GNSS receivers, I feel that the use of an MBOC signal for GPS and Galileo is an opportunity to provide the best possible multipath performance with evolving mitigation methods that take advantage of the larger RMS bandwidth of an MBOC signal as compared to plain BOC(1,1).

Q: Are the GNSS receivers of interest narrowband (under ±5 MHz) or wideband (over ±9 MHz)?

Fenton – Wideband. High precision GNSS receivers typically process all available bandwidth ~20 MHz (±10 MHz).

Garin – Our GNSS receivers are narrowband today, but we expect the IF bandwidth (or, equivalently, the effective bandwidth) to widen to ±9 MHz in the next 3-5 years, with the same or lower processing and power consumption.

Hatch /Knight – Our receivers are primarily wideband.

Sheynblat/Rowitch – The receivers of interest are narrowband. Low cost GPS consumer devices do not employ wideband receivers today and will most likely not employ wideband receivers in the near future. Any technology advances afforded by Moore’s law will likely be used to further reduce cost, not enable wideband receivers. In addition, further cost reductions are expected to expand the use of positioning technology in applications and markets which today do not take advantage of the technology because it is considered by the manufacturers and marketers to be too costly.

Stratton – All of our markets require wide-band receivers; however, the civil receiver/antenna RF characteristics are adapted to high-bandwidth C/A processing (where the bulk of RF energy is at band center). So the MBOC signal does raise some potential compatibility questions.

Studenny – Wideband.

Weill – I believe the trend will be toward wideband receivers for most applications. If one looks at the history of GPS receiver products, it is clear that there has always been competitive pressure to increase positioning accuracy, even at the consumer level. Not only is better accuracy a marketing advantage, but it has also opened up entirely new applications. The availability of wide bandwidth signals is a key factor in continuing to improve positioning accuracy. Although currently available receivers that can take advantage of wider bandwidth signals cost more and consume more power, the rapid rate of improving digital technology should make low-cost, low-power, wide bandwidth receivers available in the not-so-distant future. The availability of an MBOC signal would maximize the capability of such receivers.

Other Observers

Inside GNSS invited comments from a broad range of companies representative of most GNSS markets. In addition to those who fully responded to our questions, several offered abbreviated remarks.

Garmin International, Inc. did not identify a spokesperson, but it submitted the following official statement: “It is Garmin’s policy not to disclose any information about future designs. However, we would like to indicate that we support the BOC(1,1) implementation over the MBOC.”

Sanjai Kohli, Chief Technology Officer of SiRF Technology Inc., submitted the following official statement: “The existence of the BOC(6,1) chips in the MBOC signal won’t matter very much to SiRF. Still, to maximize the availability of weak signals, it would be preferable not to suffer any loss of signal power. Therefore, SiRF would prefer that all chips be BOC(1,1). Furthermore, it is doubtful that any advanced method of multipath reduction will be of much benefit for urban and indoor signal reception, since it is likely that the line-of-sight component of the weak signal is blocked.”

European Company – A large and well known European consumer products company could not obtain internal approval to answer the questions, but the following unofficial communication from a technical manager is of interest: “Our understanding about the pros and cons of MBOC as compared with BOC(1,1) is . . . that narrow-band receivers are not able to utilize the higher frequency components of the MBOC signal and they thus represent wasted power from their viewpoint. This is especially true for acquisition, because the acquisition bandwidth many times seems to be narrower than the tracking bandwidth, especially in those parallel acquisition receivers that are used in consumer products specified for weak signal operation. For such receivers the received signal power is critical in the acquisition phase, not so much in the tracking phase.”

L1C, BOC, and MBOC

Pertinent to the subject of this article is the remarkable way in which the L1C signal was designed. The original C/A- and P-code signals were designed by a small group of technologists under the direction of the GPS Joint Program Office (JPO). Although from the beginning GPS was understood to be a dual-use (civil and military) system, the signals were designed primarily from a military perspective.

Design of the L2C civil signal was led by a JPO deputy program manager representing the Department of Transportation (DoT) — but the process took place under extreme time pressure. The RTCA, Inc., with authorization from the Federal Aviation Administration (FAA), initially defined the L5 signal. The RTCA is a consensus-driven open forum, but its focus is almost exclusively on aviation.

In contrast, development of L1C was funded by the Interagency GPS Executive Board (IGEB), now superseded by the National Space-Based Positioning, Navigation, and Timing (PNT) Executive Committee. Representatives of the Department of Defense (DoD) and DoT co-chair the PNT Executive Committee, so the central focus is on managing GPS as a dual-use utility. Reflecting this, the L1C project was co-chaired by a DoD representative and by a civil representative. (The civil co-chair was Dr. Ken Hudnut of the U.S. Geological Survey. A sequence of JPO officers represented the DoD: Captains Bryan Titus, Amanda Jones, and Sean Lenahan. Tom Stansell of Stansell Consulting served as project coordinator throughout.)

L1C development consisted of two key activities. The first was a study of the wide range of civil requirements and development of five signal structure options. A technical team conducted this part of the work, drawing on experts in all aspects of the signal, including spreading code, data modulation, forward error correction, and message format.

Several team members had deep experience developing civil user equipment, from consumer chipsets to high-precision survey receivers. Others were experts on aviation requirements. The second key activity is, to our knowledge, unique: a worldwide survey of GNSS experts to determine which of the five options to choose. The design process is complete, and a draft specification (IS-GPS-800) has been published.

The innovative MBOC proposal was developed quickly by a group of very competent U.S. and EU signal experts with both civil and military backgrounds. However, this team apparently had only one person with extensive experience in receiver manufacturing, and the timeline did not allow the opportunity for a broad survey to assess equipment manufacturers’ opinions about the design. Informal conversations with some industry representatives also revealed dissatisfaction with MBOC. Therefore, Inside GNSS decided to consult a number of experts from companies that build GNSS equipment to determine their thoughts about the MBOC concept.

Additional Resources

1. Agreement on the Promotion, Provision and Use of Galileo and GPS Satellite-Based Navigation Systems and Related Applications, June 26, 2004, http://pnt.gov/public/docs/2004-US-EC-agreement.pdf

2. Hein, G. W., and J-A. Avila-Rodriguez, L. Ries, L. Lestarquit, J-L. Issler, J. Godet, and T. Pratt, “A candidate for the GALILEO L1 OS Optimized Signal,” Proceedings of ION GNSS 2005, September 13-16, 2005, Long Beach, California

3. Hein, G. W., and J-A. Avila-Rodriguez, S. Wallner, J. W. Betz, C. J. Hegarty, J. J. Rushanan, A. L. Kraay, A. R. Pratt, S. Lenahan, J. Owen, J-L. Issler, and T. A. Stansell, “MBOC: The New Optimized Spreading Modulation Recommended for Galileo L1 OS and GPS L1C,” Inside GNSS, Volume 1, Number 4, pp. 57–65, May/June 2006

4. Hein, G. W., and J-A. Avila-Rodriguez, S. Wallner, A. R. Pratt, J. Owen, J-L. Issler, J. W. Betz, C. J. Hegarty, S. Lenahan, J. J. Rushanan, A. L. Kraay, and T. A. Stansell, “MBOC: The New Optimized Spreading Modulation Recommended for GALILEO L1 OS and GPS L1C,” Proceedings of IEEE/ION PLANS 2006, April 24-27, 2006, San Diego, California

5. IS-GPS-200: NAVSTAR GPS Space Segment / Navigation User Interfaces; IS-GPS-705: NAVSTAR GPS Space Segment / User Segment L5 Interfaces; Draft IS-GPS-800 for new L1C signal; http://gps.losangeles.af.mil/engineering/icwg/

May 1, 2006

Mobile RTK: Using Low-Cost GPS and Internet-Enabled Wireless Phones

Government regulations such as E911 and the promise of location-based services (LBS) are the biggest drivers for integrating positioning capability into mobile phones. The increasing sophistication of applications and refinement of map databases are continually tightening the accuracy requirements for GNSS positioning. In particular, location-based games and features such as “friend finder” sometimes require better accuracy than what is achievable with state-of-the-art network-assisted GPS (A-GPS) platforms.

Cellular standards for GPS assistance data exist for both control plane and user plane protocols. These protocols carry information that helps the integrated GPS receiver to improve its sensitivity, speed up signal acquisition, and especially reduce the time to first fix. However, these approved standards do not contain sufficient information for the receiver to do carrier phase positioning.

Until now, no compelling reason existed for adding carrier phase positioning related features into cellular standards so that they could employ real-time kinematic (RTK) techniques. Generally, RTK-enabled devices on the market are expensive and intended primarily for geodetic and survey applications. Also, there has been no real need in the cellular world for the accuracy RTK provides. With evolving LBS applications, however, this situation is changing.

This article describes a solution called mobile RTK (mRTK), a system specifically designed and implemented for cellular terminal use. Its design incorporates low-cost single-frequency A-GPS receivers, Bluetooth (BT) communications, and inertial sensors.

Basically, the technique involves exchanging measurements in real-time between two units — one designated as the reference and the other as the user terminal — and producing the best possible estimate of the baseline between the terminals using RTK techniques. We are developing the solution so that in the future it will be possible to add any other Global Navigation Satellite System (GNSS) measurements in addition to GPS measurements — or even instead of GPS measurements.
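
To give a flavor of what those RTK techniques involve, the sketch below shows the classic double-differencing step that such a solution applies to the exchanged carrier phase measurements. It is our own simplified illustration, not the mRTK implementation, and the numbers are invented.

```python
def double_difference(phase_ref, phase_user, base_sat, sats):
    """phase_* map satellite id -> carrier phase (cycles) at a common epoch.
    Differencing between receivers removes satellite clock errors; differencing
    again between satellites removes receiver clock errors, leaving baseline
    geometry plus integer ambiguities for the RTK filter to resolve."""
    dd = {}
    base = phase_user[base_sat] - phase_ref[base_sat]
    for s in sats:
        if s != base_sat:
            dd[s] = (phase_user[s] - phase_ref[s]) - base
    return dd

# Invented example values (cycles) for one epoch of exchanged measurements.
phase_ref  = {5: 120000.25, 12: 98000.10, 25: 113500.75}
phase_user = {5: 120010.30, 12: 98012.40, 25: 113511.00}
print(double_difference(phase_ref, phase_user, base_sat=5, sats=[5, 12, 25]))
```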

Using a simulator, we shall provide data that show it is possible to enable high-precision, carrier phase-based positioning in handsets with minimal additional hardware costs. Further, we shall describe some of the protocol aspects and especially the aspects of adding support for mRTK messaging to already existing cellular standards — GSM and UMTS. We believe that the mRTK solution will bring high performance to the mass market.

Moreover, additional GPS signals, such as L2C and L5, and other GNSSes such as Galileo will become operational in the near future. Consequently, it would be very beneficial to begin incorporating mRTK into the pertinent wireless standards now so that the infrastructure and the service providers will be ready when business opportunities present themselves.

. . .

mRTK Solution Overview
A plethora of RTK surveying solutions is available on the market today. Generally, they are characterized by the use of both GPS frequencies, L1 and L2, enabling ambiguity resolution in seconds over baselines of up to 20 kilometers, or even 100 kilometers with more time and under good conditions. We must emphasize that this article does not claim performance and reliability similar to those of high-performance dual-frequency receivers.

We are designing the mRTK solution to work with low-cost, off-the-shelf GPS receivers with certain requirements (for example, the ability to report carrier phase measurements and data polarity). Therefore, performance degradations are expected in terms of time to ambiguity resolution, accuracy, and achievable baseline length.

. . .

Testing the System
The mRTK performance testing was accomplished using two identical hardware platforms containing 12-channel off-the-shelf high-sensitivity OEM GPS receiver modules and a 3-axis accelerometer. We constructed this test system to determine the physical limitations and requirements for the protocol and messaging aspects.

. . .

Performance
We conducted several experiments using the testing system and a GPS simulator. The simulator was configured to output data from the same eight satellites for both receivers, using several different baseline lengths varying from 0 meters to approximately 5 kilometers and scenarios for different GPS weeks.

. . .

Testing Protocol
The testing protocol used in the mRTK solution was designed specifically for use in research and development and as a reference design for proposed changes to the pertinent cellular standards. The protocol was designed to be as efficient as possible and especially to take advantage of the properties of TCP/IP. As TCP/IP already guarantees that transmitted data are error-free and also preserves the order of the data, our protocol did not need to include extensive error corrections and packet order counts.
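
As an illustration of how little framing such a protocol needs when it rides on TCP, here is a hypothetical sketch of a length-prefixed message exchange; the layout and function names are ours and do not describe the actual mRTK test protocol.

```python
import socket
import struct

def send_measurement(sock: socket.socket, payload: bytes) -> None:
    # A 4-byte length prefix is all the framing needed; TCP already provides
    # ordered, error-checked delivery, so no checksum or sequence number is added.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_measurement(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf
```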

. . .

Cellular Protocol Aspects
During the testing protocol design and implementation, several issues emerged concerning the addition of the mRTK feature into cellular protocols . . . User-to-user relative positioning is not recommended for control plane systems because binding two terminals together and relaying measurements between them would require considerable protocol and implementation work.

. . .

Future Work
This article has introduced a new concept called mobile Real-Time Kinematics and has shown that RTK-like features are possible using low-cost components and existing cellular communication carriers. Even though a lot of development work remains on the mRTK algorithm side, the biggest challenge still involves cellular carriers and their standardization. Of course, even after standardization, the development of the infrastructure would require a huge effort.

Future work with the existing testing protocol includes more testing, especially field testing, and testing with different signal conditions and satellite constellations. The testing protocol itself should be modified with new features such as the VRS service. Using VRS, the baseline can always be kept very short, and accurate absolute positioning is available everywhere using mRTK.

One idea that also needs further development is peer-to-peer protocols. In those protocols, the mRTK measurements would be transmitted directly from one terminal to another without a server in between.

As an example, this kind of protocol could be embedded into voice-over-IP (VoIP), in which the data channel for the voice encoding is already open and could easily accommodate other data transmissions that do not have strict real-time requirements, such as mRTK. Other peer-to-peer transport options exist, for instance, in WLAN, where terminals connected to the same subnet would be able to open direct connections to each other.

The solution we have presented holds a lot of potential. Especially with the forthcoming satellite systems (e.g., Galileo and modernized GPS), the solution will significantly improve the accuracy of positioning in the mobile terminal. Nonetheless, the standardization of the mRTK features will require a lot of joint effort among terminal and network manufacturers and cellular operators.

For the complete story, including figures, graphs, and images, please download the PDF of the article, above.

Acknowledgments
This article is based in part on two papers, “Bringing RTK to Cellular Terminals Using a Low-Cost Single-Frequency AGPS Receiver and Inertial Sensors,” by L. Wirola, K. Alanen, J. Käppi, and J. Syrjärinne, and “Inertial Sensor Enhanced Mobile RTK Solution Using Low-Cost Assisted GPS Receivers and Internet-Enabled Cellular Phones,” by K. Alanen, L. Wirola, J. Käppi, J. Syrjärinne, presented at the IEEE/ION PLANS 2006 conference, © 2006 IEEE.

By
April 1, 2006

Geodesy and Satellite Navigation

There has always been a love-hate relationship between geodesy and satellite navigation. Indeed, satellite positioning started life as an extension of terrestrial geodesy. When the first satellite, Sputnik 1, started orbiting the Earth in 1957, geodesists in several countries realised that satellites offered substantial potential as a geodetic positioning and navigation tool.

The basic technologies of terrestrial geodesy of the day, notably triangulation, traversing, and precise leveling, were slow and cumbersome, mainly because of the effect of the curvature of the surface of the Earth, which limited the range of measurements to theodolite observations between points situated on hilltops, observation towers, and triangulation masts.

The advent of EDM (electronic distance measurement) in the 1960s helped terrestrial geodesy, but it, too, was affected by the same limitation, namely the shortness of observable EDM ranges due to the Earth’s curvature.

Earth orbiting satellites did not suffer from this drawback. They could be viewed simultaneously from several points on Earth, and therefore direction and range measurements made, provided that the space vehicles were not obscured by high natural features or tall man-made structures. This led to several new satellite geodesy positioning methodologies.

The first of these was satellite triangulation, which was used initially to supplement and strengthen terrestrial triangulation networks. Satellite triangulation consisted of geodetic direction measurements derived from high-power photographs of satellite orbits taken against a background of stars with known right ascension and declination.

A few years later, this was followed by range measurements to satellites, made from Earth-bound EDM equipment to corner cube reflectors placed on the early satellites. The methodology used thus far was an extension of geodetic astronomy, with little reference to physical geodesy.

This situation changed significantly when geodesists realized that they could use the Doppler shift on the signal broadcast from a satellite to obtain differential range measurements that, together with the known Keplerian orbit of the satellite, could lead to a relatively fast positioning, or navigation, method. The Keplerian orbital motion of satellites is primarily based on the Earth’s gravity field, a subject of expertise by practitioners of physical geodesy.

This technical advance gave birth to Transit-Doppler, the first satellite navigation technology. Transit-Doppler was used in the late 1970s and early 1980s not only for the positioning of naval ships and of submarines surfacing in the polar regions, but also for the strengthening and scaling of national and continental terrestrial triangulation networks.

However, practitioners soon realized that positioning by Transit-Doppler to a reasonable degree of accuracy took several minutes, thereby precluding its use as a full navigation methodology, which requires quasi-instantaneous positioning.

Enter GPS
These were the early days of a new global satellite positioning, navigation, and timing system, first called the NAVSTAR Global Positioning System, a name later shortened to just GPS. The rest is history. The early decision to base GPS on a constellation of 24 medium-Earth orbit satellites was taken on the advice, as you would expect, of geodesists at the U.S. Naval Surface Weapons Center in Dahlgren, Virginia.

The close relationship between the early GPS and geodesy was further demonstrated by the adoption of WGS84, the World Geodetic System 1984, as the basis of the 3-D coordinate system of GPS. As GPS was born during the Cold War, it was declared a US military navigation system, with full access to NATO but only restricted access and down-graded positioning accuracies for civilian users.

This so-called Selective Availability (SA) gave the green light to the civilian geodetic community to come up with new methodologies that could counter the effects of SA. As always, human ingenuity did not disappoint, and two new differential techniques were developed. The first was the differential GPS (DGPS) technique, which improved the relative positioning accuracies of GPS by at least one order of magnitude, down to a few meters. As a result, DGPS soon became the standard methodology for the offshore positioning of oil platforms, pipelines, etc.

The next advance in improving the accuracy of satellite positioning was made on the advice of radio-astronomers, who proposed replacing the standard GPS pseudorange measurements, which are based on timing the modulated signal from satellite to receiver.

Instead, they suggested making measurements on the basic carrier frequencies of these signals, just as they did with extra-galactic signals arriving at, say, two widely spaced radio telescopes in so-called very long baseline interferometry (VLBI), leading as a by-product to the Cartesian coordinate differences between the two telescopes. This was the beginning of centimetric positioning by the carrier phase GPS method, which was later developed further by geodesists into kinematic GPS and centimetric navigation.

GPS had now become the universal high precision quasi-instantaneous positioning and navigation tool, creating the basis for hundreds of new applications. Again, geodesists led the way, concentrating on high precision scientific and engineering applications. These included surveying and mapping, positioning in offshore engineering, the monitoring of local crustal dynamics and plate tectonics, the relative vertical movements of tide gauges, and the continuous 3-D movements of critical engineering structures, such as tall buildings, dams, reservoirs, and long suspension bridges.

All of these applications required very high relative positioning accuracies, but not quasi-instantaneously as in the safety-critical navigation and landing of civilian aircraft. This came much later.

Geodesy and Navigation
Initially, GPS was considered as a standard navigation tool for military vehicles on land, sea, and air, but not for safety-critical civilian transportation. This was because, unlike military positioning and navigation, safety-critical civilian transportation not only requires quasi-instantaneous and accurate positioning, but also so-called “high integrity and good coverage.”

Geodesists will immediately realize that “integrity” stands for the geodetic concept of “reliability,” whereas “coverage” refers to the availability of a sufficient number of satellites that can be sighted by a receiver continuously and are not obscured by natural or man-made obstructions, such as high mountains, tall buildings, and the wings of an aircraft.

On its own, GPS cannot meet these requirements to the level required in safety-critical civilian transportation. Military transportation, on the other hand, has relatively modest requirements, which can be met by GPS. Indeed, you do not become a NATO Air Force pilot if you want a safe life. Flying as a passenger in a commercial airline is something else altogether.

The penetration of satellite navigation, and primarily GPS, into civil aviation involved yet again, as you would expect, geodesists. They had to develop jointly with the civil aviation community the necessary theoretical and practical tools, which could be used to establish and quantify their requirements of accuracy, integrity, and coverage.

This involved the use of existing geodetic tools, such as the covariance matrix, the analysis of least squares residuals, and the well-established geodetic reliability measures. New tools were also introduced, such as the concept of RAIM or receiver autonomous integrity monitoring, based on the analysis of the least squares residuals.
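A minimal sketch of the residual-based idea behind RAIM is given below, assuming unit-variance pseudorange residuals and a simple chi-square threshold; it is an illustration of the concept described above, not an aviation-grade algorithm, and the false-alarm probability is an arbitrary example value.

import numpy as np
from scipy.stats import chi2

def raim_check(H, residuals, p_false_alarm=1e-3):
    """H: n x 4 geometry matrix; residuals: n least-squares pseudorange residuals.

    Returns (passed, test_statistic, threshold). Residuals are assumed to be
    normalized to unit variance.
    """
    residuals = np.asarray(residuals, dtype=float)
    n, m = H.shape
    dof = n - m                      # redundancy: measurements minus unknowns
    if dof <= 0:
        raise ValueError("RAIM needs at least five satellites in view")
    test_statistic = float(residuals @ residuals)      # sum of squared residuals
    threshold = chi2.ppf(1.0 - p_false_alarm, dof)     # chi-square detection threshold
    return test_statistic <= threshold, test_statistic, threshold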

Persuading Non-Geodesists
These geodetic tools, which were highly beneficial to the civil aviation community, initiated a fruitful, long-term collaboration between the two communities. However, this has not always been a straightforward and smooth relationship, and it involved — especially at the beginning — a deep suspicion of these “academic” geo-scientists. Here are a few notable examples of this love-hate relationship.

As a general rule, the existing civil aviation horizontal coordinates were based on latitudes and longitudes, with no reference to a particular geodetic datum. Heights in civil aviation were and still are based on barometric altimetry, on the assumption that all that matters is “the relative heighting between airplanes,” which is not affected significantly by a change in barometric pressure.

This assumption disregards, of course, the fact that the heights of natural features on the ground, such as mountains, do not change with changing barometric pressure. The first challenge was to convince the international civil aviation community that their horizontal coordinates, that is, latitudes and longitudes, required a proper geodetic datum and, as GPS was being contemplated as a future navigation tool, it made sense to adopt the same reference datum, namely WGS84. It took a while to convince the community to accept that.

The adoption of WGS84 led to the resurveying of most airports, runways, and various en route and landing navigation aids in order to bring them into WGS84, in preparation for the introduction of GPS. This led to the discovery of some large discrepancies, at airports and among navaids in many countries, between the existing horizontal coordinate values and their new WGS84 equivalents. Geodesists will be familiar with such occurrences, whenever they start dealing with a new community, whether they are civil or offshore engineers, oceanographers or meteorologists.

The first GPS receivers did not lend themselves to mass market adoption. Geodesists of a certain age will also remember some of the earliest commercial GPS receivers, such as the TI 4100 receivers, made by Texas Instruments. These early receivers operated by measuring sequentially four pseudoranges to four different satellites. Consequently, the receivers were programmed to first check the geometry of the satellites in view and decide on the best four in terms of geometrical configuration.

However, later on, with the emergence of new receivers that could measure all the available pseudoranges quasi-simultaneously, there was no need to carry on with measurements only to the “best four” satellites. One could track all available satellite signals and process these measurements by least squares, rejecting those with relatively large residuals, if any. This standard processing of observations is bread-and-butter stuff to surveyors and geodesists.
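The following Python sketch illustrates that bread-and-butter processing: an iterative least-squares pseudorange fix over all tracked satellites, followed by rejection of any measurement with an unusually large residual. The rejection threshold and the input arrays are illustrative assumptions.

import numpy as np

def ls_position_fix(sat_pos, pseudoranges, x0=np.zeros(3), iters=8):
    """sat_pos: (n,3) ECEF satellite positions (m); pseudoranges: (n,) in metres.

    Returns receiver ECEF position, receiver clock bias (m), and residuals.
    """
    sat_pos = np.asarray(sat_pos, dtype=float)
    pseudoranges = np.asarray(pseudoranges, dtype=float)
    x = np.append(x0, 0.0)                               # [x, y, z, clock bias]
    for _ in range(iters):                               # Gauss-Newton iterations
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        predicted = ranges + x[3]
        H = np.hstack([(x[:3] - sat_pos) / ranges[:, None],
                       np.ones((len(ranges), 1))])       # geometry matrix
        dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        x += dx
    residuals = pseudoranges - (np.linalg.norm(sat_pos - x[:3], axis=1) + x[3])
    return x[:3], x[3], residuals

def solve_with_rejection(sat_pos, pseudoranges, threshold_m=10.0):
    """Drop measurements whose residuals exceed the threshold and re-solve."""
    _, _, res = ls_position_fix(sat_pos, pseudoranges)
    keep = np.abs(res) < threshold_m
    return ls_position_fix(np.asarray(sat_pos)[keep],
                           np.asarray(pseudoranges)[keep])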

However, this was not the case with a number of navigation experts, who persisted in recommending the use of only the “best four” satellites for quite some time before finally abandoning the practice.

A New Era of GNSS
Satellite navigation and positioning has changed substantially and significantly over the last 5 to 10 years. With Galileo in its development and in-orbit validation phase, future developments in GPS IIF and GPS III, renewed interest in GLONASS, and satellite navigation initiatives in Japan, China, India, Australia, and several other countries, GNSS or the Global Navigation Satellite System is moving from being a concept, largely based on GPS alone, to a full global reality. A comprehensive program of GPS modernization currently under way aims to deliver significant improvements to both military and civil users.

The earliest mass-market applications of GPS involved road vehicles and mobile phones. In both cases, the twin aims are navigation (where am I, and how do I go to my destination?) and tracking (where is he, she, or it?). In the case of road vehicle tracking, successful applications include fleet monitoring (taxis or road transport companies), theft recovery of private cars, “black box” incident recorders, and the transport of hazardous or valuable cargoes.

Typically, most of these applications share three common features, namely prior knowledge of the proposed route, the continuous tracking of position and velocity by GPS, and the trigger of an alarm by a significant deviation.

Similarly, a number of GPS tracking applications use mobile phone technology (GSM or GPRS), but these are not as developed and widespread as vehicle tracking. Typically, these involve vulnerable people, such as young children, the elderly, key workers in some risky environments (for instance, railways), individuals with a chronic or contagious disease, and even VIPs.

Person tracking with GPS+telematics could also involve judicial cases (ordered by a court of law), of suspected criminals or anti-social elements. Other proposed applications include environmental information, location-based security, and location-sensitive marketing.

On its own, a GPS-enabled phone offers location and communication. This may answer the questions “Where is she or he?” and “Where am I?” but nothing more. However, when position and communication are combined with an appropriate geographic information system (GIS) database and a direction sensor, the combined system could answer two other very important questions, namely “What’s around me?” and “What’s that building over there?”

This could be achieved by a GPS+compass device, providing positional and directional data, which the mobile phone or the PDA transmits to a remote server. The server calculates the user’s position and identifies the building along the measured azimuth, gets the relevant information from the database, and sends it back to the client.
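A minimal sketch of the server-side lookup might look like the following, where the building database and angular tolerance are invented for illustration: the server compares the bearing from the user's reported position to each candidate building against the azimuth measured by the compass and returns the best match.

import math

buildings = {                      # hypothetical GIS database entries (lat, lon)
    "City Museum":   (51.5081, -0.0760),
    "Old Town Hall": (51.5090, -0.0755),
}

def identify_building(user_lat, user_lon, azimuth_deg, tolerance_deg=5.0):
    """Return the building whose bearing best matches the measured azimuth."""
    best, best_err = None, tolerance_deg
    for name, (blat, blon) in buildings.items():
        # local flat-earth approximation, adequate over a few hundred metres
        dlat = math.radians(blat - user_lat)
        dlon = math.radians(blon - user_lon) * math.cos(math.radians(user_lat))
        bearing = math.degrees(math.atan2(dlon, dlat)) % 360.0
        err = abs((bearing - azimuth_deg + 180.0) % 360.0 - 180.0)
        if err < best_err:
            best, best_err = name, err
    return best

print(identify_building(51.5074, -0.0770, azimuth_deg=40.0))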

This is clearly valuable for the public utilities (water, gas, electricity, TV), shopping and leisure (restaurant menus, theater tickets), house hunting (details of the property advertised for sale), and of course, for visitors and tourists (museums, notable buildings, archaeological sites).

Leaving mobile phones aside, satellite navigation can also be used for location-based security. For example, a briefcase or a portable PC can be programmed to unlock only in a specified location and nowhere else. This would minimize the risk of sensitive military or commercial material falling into the wrong hands.

Some working prototype systems already exist. Other location-and-context-based applications under consideration include the marketing and selling of goods, the reception of pay-TV, credit card security, spectator sports, road user charging and many others.

Indeed, the qualification of “critical application” is no longer restricted to safety-critical transportation, but it also applies now to financial-critical, legal-critical, security-critical, and business-critical applications as well. This creates a problem with standard off-the-shelf autonomous GPS receivers, which cannot operate indoors, because of signal attenuation and multipath.

Over the last few years, GPS chip and receiver manufacturers have tried, with some success, to develop high sensitivity GPS (or HS-GPS). The latest HS-GPS receivers, which incorporate up to 200,000 correlators operating in parallel, make it relatively easy to identify true pseudoranges from among the many signal and multipath reflections. Several manufacturers in the United States, Japan, Korea, and Europe, already advertise HS-GPS chips, and many other companies use such chipsets in their receivers.

GNSS Evolution
Like nearly all the technologies that preceded it, satellite navigation and positioning is going through the standard stages of development from birth to maturity. Older surveyors and geodesists may well remember the advent of EDM, using microwaves or lightwaves, in the late 1960s and the 1970s. When the first EDM instruments were introduced, the distances they measured were also measured with tapes, just in case.

Then came the second phase, when surveyors became fully confident about EDM and used it routinely for fast and precise range measurements. It took a few years and several critical mistakes in local mapping and national triangulation to realize that EDM instruments could go wrong and that they had to be calibrated regularly in order to determine their accuracy and systematic biases.

The development of satellite navigation and positioning is following practically the same stages as EDM did 40 years ago. Only now can we formalize these successive stages of technology development and give them names by using Gartner’s famous “Hype Cycle Curve,” which was invented about 10 years ago in conjunction with new information technology products.

Using a simplified version, these successive stages of technology development are now formally called “Technology Trigger,” followed by “Peak of Inflated Expectation,” leading to “Trough of Disillusionment”, happily followed by the “Slope of Enlightenment,” and hopefully leading to the “Plateau of Productivity.”

As I write this, the first Galileo satellite, GIOVE-A, has been launched and tested successfully, opening a new era in satellite navigation. Hopefully, this will lead to the development of a large number of new critical applications — and involve close collaboration with geodesy and several other related disciplines — for the benefit of business, government and society.

Here is one last example about the strange relationship between geodesy and GPS. The U.S. delegation to the International Telecommunications Union (ITU) recently proposed to abolish leap seconds, and thus cut the link between Solar Time and Coordinated Universal Time (UTC) and ipso facto GPS Time.

At present, whenever the difference between UTC and Solar Time approaches 0.7 second, a leap second correction is made in order to keep the difference between them under 0.9 second. This is done every few years on the recommendation of the International Earth Rotation and Reference Systems Service, which monitors continuously the difference between Solar Time and UTC.

This leap second correction, which changes the offset between GPS Time and UTC every few years, apparently causes software problems because it has to be programmed in manually. However, considering the difficulties that this change would cause to other scientific communities, such as astronomers, and even to users of GPS time itself for some critical applications, the U.S. proposal has been postponed for the time being.
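A small example of the bookkeeping involved: because GPS Time is not adjusted for leap seconds, receivers and servers must carry a table of offsets and look up the value in force at a given date. The two-entry table in the Python sketch below is illustrative only; a real implementation would use the full table maintained from the UTC parameters broadcast in the navigation message.

from datetime import datetime

LEAP_SECOND_TABLE = [             # (date the new offset takes effect, GPS - UTC in seconds)
    (datetime(1999, 1, 1), 13),
    (datetime(2006, 1, 1), 14),
]

def gps_minus_utc(when: datetime) -> int:
    """Return the GPS-to-UTC offset (whole seconds) in force at the given time."""
    offset = 0
    for effective, value in LEAP_SECOND_TABLE:
        if when >= effective:
            offset = value
    return offset

print(gps_minus_utc(datetime(2006, 3, 1)))   # 14 seconds in early 2006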

In conclusion, I must declare a conflict of interest. Although all the work I do at present involves GNSS, my academic background is clearly in geodesy. However, a change is in the air now, as safety-critical transportation is no longer the only critical application that has to be catered to. It has now been joined by several other emerging critical applications, notably financial-critical, legal-critical, security-critical and business-critical applications, which will also require nearly the same level of accuracy, integrity and coverage as safety-critical transportation.

This is where geodesy could step in again and create some new statistical tools, which will differentiate between the navigation and positioning systems on offer, and assess their suitability for the specific critical application.

For figures, graphs, and images, please download the PDF of the article, above.

By
March 1, 2006

Building Monitors

Severe loading conditions such as strong winds and earthquakes acting on modern tall buildings and structures can cause significant loads and vibrations. Recent trends toward slender, flexible, and light-weight buildings have left a large number of buildings susceptible to wind-induced motion. Furthermore, human perception of building motion has become a critical consideration in modern building design.

More complex building shapes and structural systems further accentuate eccentricities between the mass center, the elastic center, and the instantaneous point of application of aerodynamic loads, and consequently will generate significant torsional effects.

Verifying dynamic structural analysis requires the development of direct dynamic measurement tools and techniques in order to determine the natural frequencies, damping characteristics, and mode shapes. Among these tools, accelerometers have played the most important part in analyzing structural response to severe loading conditions. However, they provide only a relative acceleration measurement, and displacement cannot be obtained from the measured acceleration simply by double integration.

In contrast to accelerometers, GPS can directly measure position coordinates, thereby providing an opportunity to monitor, in real time and at full scale, the dynamic characteristics of a structure. GPS used in the real-time kinematic mode (GPS-RTK) offers direct displacement measurements for dynamic monitoring. Earlier studies by the authors and other researchers, referenced in the Additional Resources section at the end of this article, have shown the efficiency and feasibility of structural deformation monitoring by combining accelerometers and GPS-RTK.

However, GPS-RTK has its own limitations. For example, the measurement accuracy can be affected by multipath and depends strongly on satellite geometry. Moreover, the typical GPS-RTK 20 Hz sampling rate will limit its capability in detecting certain high-mode signals of some structures. The new 100 Hz GPS-RTK systems need to be further tested in order to ensure the independence of the measurements.

In order to exploit the advantages of both GPS-RTK and accelerometers, two data processing strategies have typically been used, namely to convert GPS measured displacement to acceleration through double differentiation and compare it with the accelerometer measurements (what we refer to as forward transformation), or to convert the accelerometer measurements into displacement through double integration and compare it with GPS measured displacement (the reverse transformation).

The latter approach is much more challenging because we have to determine two integration constants in order to recover all the components of displacement (static, quasi-static and dynamic). If the structure to be monitored is subject to a quasi-static force, as in the case of a typhoon, this further complicates the analysis.
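The Python sketch below illustrates the two transformations in their simplest form: double differentiation of GPS-measured displacement (the forward transformation) and double integration of acceleration with a linear detrend of velocity to absorb the unknown integration constants (the reverse transformation). It is a simplified illustration only, with assumed sampling rates, and not the full methodology of signal identification, filtering, and delay compensation described later in this article.

import numpy as np

def displacement_to_acceleration(disp, fs=20.0):
    """Forward transformation: double-differentiate GPS-measured displacement."""
    dt = 1.0 / fs
    velocity = np.gradient(np.asarray(disp, dtype=float), dt)
    return np.gradient(velocity, dt)

def acceleration_to_displacement(accel, fs=100.0):
    """Reverse transformation: double-integrate acceleration to displacement."""
    dt = 1.0 / fs
    velocity = np.cumsum(np.asarray(accel, dtype=float)) * dt   # first integration
    t = np.arange(len(velocity)) * dt
    # remove a linear trend in velocity to absorb the unknown integration constants
    velocity -= np.polyval(np.polyfit(t, velocity, 1), t)
    disp = np.cumsum(velocity) * dt                             # second integration
    return disp - disp.mean()                                   # remove residual offset

# Example: 10 seconds of a 0.5 Hz sinusoidal acceleration sampled at 100 Hz
t = np.arange(0, 10, 0.01)
disp = acceleration_to_displacement(np.sin(2 * np.pi * 0.5 * t), fs=100.0)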

Although earlier research has proposed a lab-based threshold setting for accelerometers to deal with the quasi-static issue, we believe that avoiding this procedure and developing new ways to recover the false and missing measurements from GPS by acceleration transformation would provide a preferred approach.

This article discusses recent efforts to design such a system based on a new integration approach that employs the correlation signals directly detected from a GPS-RTK system and an accelerometer to transform one form of measurement to the other. The methodology consists of a Fast Fourier Transform (FFT) for correlated signal identification, a filtering technique, delay compensation, and velocity linear trend estimation from both GPS and accelerometer measurements. We also present results derived from its installation on structures in Japan that subsequently experienced the effects of an earthquake and typhoon.

(For the rest of this story, please download the complete article using the PDF link above.)

By
January 1, 2006

Will Success Spoil GPS?

Like some behemoth rocket ship launched in the 1970s, the Global Positioning System sails on through an expanding universe of users and applications, seemingly imperturbable, successful beyond the expectations of its creators, an enormous momentum carrying it into the third millennium.

To all appearances, GPS is prospering more than ever: a second full signal (L2C) is becoming available to civil and commercial users, a denser ground monitoring system being built out, improved accuracies squeezed out of the algorithms and operational practices at the Master Control Station at Schriever Air Force Base, prices dropping on products with more features and functions than ever, hundreds of millions of receivers in use around the world. A follow-on generation (Block IIF) of satellites with a third civil signal (at the so-called L5 frequency) is being built by Boeing for launch beginning in 2007.

Since its first satellite launch 28 years ago, GPS has blazed a trail for satellite-based positioning, navigation, and timing. Thanks to GPS, global navigation satellite systems have gone from being a technological unknown to becoming a widely recognized utility. GPS stands as a model and inspiration to its imitators across the oceans.

Or is it?

In fact, for some years now GPS has been a victim of its own success. Performing better than advertised, the system has suffered from budgetary pilfering for other defense programs and risks getting lost in the shifting maze of diffuse dual-use management responsibilities.

“History has shown that the Air Force has had chronic difficulty in adequately funding GPS, even in the absence of the more expensive GPS III satellites,” observes a high-level Defense Science Board (DSB) task force report on GPS issued late last year. “If the Air Force continues to use its GPS investments as a funding source to offset other space/aircraft programs, then GPS service continuity will remain in jeopardy even without the more costly GPS III.” (See article “Bold Advice” in this issue.)

Meanwhile, an Air Force Space Command projection puts the worst-case probability of the GPS constellation falling below its fully operational capability (FOC) of 24 space vehicles sometime between 2007 and 2012 at 20–40 percent. Indeed, the task force argues for a 30-satellite constellation to ensure robust coverage in “challenged environments.”

The timelines for the last three GPS satellite development and launch programs — Block IIR, IIR-M, and III — all slid to the right, as schedule delays are described these days.

Intermittently starved for fuel, with sporadic guidance from the helm, will new resources reach the system before its speed inevitably begins to slow, threatening its being overtaken by other GNSS vehicles?

Okay, that’s the bad news.

The good news is that no one connected to the program wants to let one of the world’s leading U.S.-branded utilities slip into the shadow of the other GNSSes under development. And steps are under way to ensure that doesn’t happen.

New Game Plan

A long-awaited next-generation program, GPS III, spent well over a hundred million dollars on conceptual studies and several years jogging in place before receiving a renewed go-ahead from the Department of Defense (DoD). The Fiscal Year 2006 (FY06) federal budget allocated $87 million for GPS III. The FY07 budget will be finalized soon in Washington, and current indications are that GPS Block III will receive at least $237 million, according to the GPS Joint Program Office (JPO). Of course, GPS III funds have been zeroed out before.

Current plans call for a GPS JPO decision this summer choosing among proposals submitted for separate space vehicle (SV) and operational control (OCX) segment contracts. Once acquisition strategies are formally approved in Washington, the GPS Block III SV request for proposals (RFP) is expected to be released by mid-February, with the OCX RFP to follow later in the spring, according to the JPO.

“Minor adjustments are being implemented in the program planning to reflect an incremental development and delivery approach for both acquisitions that will provide increased GPS capability sooner and more frequently over the life of the program,” the JPO told Inside GNSS. Nonetheless, an upgrade in the control segment to accommodate the new generations of satellites is behind schedule, which means that the capability to operationally control those signals will not be available until 2009 at the earliest, according to the DSB task force.

Modernizing Technology

In terms of its fundamental design, the Global Positioning System is nearly 35 years old. More recent spacecraft designs using modern electronics, new rubidium clocks, better satellite management techniques, and navigation message enhancements have improved performance. But the design of the key resource for manufacturers and users, the GPS signals-in-space, is essentially the same as when the first satellite was launched in 1978: a C/A-code on L1 (centered at 1575.42 MHz) and P/Y-code military signals at L1 and L2 (1227.60 MHz).

Over the next five years, however, this situation will change dramatically.

Beginning with SVN53/PRN17, the first modernized Block IIR (IIR-M) satellite built by Lockheed Martin and launched last September 25, GPS has gained a new open civil signal at L2 (centered at 1227.6 MHz). A third civil signal, L5 (centered at 1176.45 MHz) will arrive with the Block IIF satellites now scheduled to begin launching in 2007.

Both IIR-M and IIF satellites will offer new military M-code signals at L1 and L2 with “flex power” capability of transmitting stronger signals as needed. The L5 civil signal will be broadcast both in phase and in quadrature, with the quadrature signal being broadcast without a data message. Air Force Space Command expects to have a full complement of satellites transmitting L2C and M-code signals by 2013; for L5, fully operational capability is expected by 2014.

Generally, the new signals will be characterized by longer code sequences broadcast at a higher data rate and with slightly more power. Beginning with the IIR-M satellites, the Air Force will be able to increase and decrease power levels on P-code and M-code signals to defeat low-level enemy jamming — a capability known as “flex power.”

These new signal features will support improved ranging accuracy, faster acquisition, lower code-noise floor, better isolation between codes, reduced multipath, and better cross-correlation properties. In short, the new signals will be more robust and more available.

Looking farther ahead, another civil signal at L1 is planned to arrive with the GPS III program. Under a GNSS agreement signed with the European Union in June 2004, this will be a binary offset carrier (BOC 1,1) signal similar or identical to that of the Galileo open signal. This is expected to simplify the combined use of GPS and Galileo signals. Nominal first launch date for a GPS III spacecraft is currently 2013.

Modernization will also take place in the ground control segment. Six GPS monitoring stations operated by the National Geospatial-Intelligence Agency (formerly the National Imagery and Mapping Agency) have been folded into the existing five Air Force GPS monitoring stations (which include the Master Control Station at Schriever AFB, Colorado). This will eliminate blank spots in coverage and support Air Force plans to monitor the integrity (or health) of civil signals as well as military signals.

New Political Structure

Under a presidential national security policy directive (NSPD) released in December 2004, a National Space-Based Positioning, Navigation, and Timing (PNT) Executive Committee and Coordination Office have taken over from the Interagency GPS Executive Board (IGEB). Mike Shaw, a long-time GPS hand on both sides of the civil/military interface, stepped in toward the end of 2005 as the first director of the PNT coordination office.

Establishment of the PNT committee — now cochaired by deputy secretaries of defense and transportation, Gordon England and Maria Cino, respectively — kicked GPS leadership up a notch from that of the IGEB. Other members include representatives at the equivalent level from the departments of state, commerce, and homeland security, the Joint Chiefs of Staff and the National Aeronautics and Space Administration.

The committee had met once shortly after its formation under President Bush’s NSPD, but a January 26 gathering marked its first meeting with the current leadership. In addition to getting acquainted with one another and the PNT topic in general, the agenda covered such issues as the DSB task force report, modernization and funding of GPS, and the new UN International Committee on GNSS (see article "What in the World is the UN Doing About GNSS?" in this issue).

Without a director and coordination board in place, the executive committee was unable to get on with many of the tasks assigned it by the presidential directive, including writing a five-year plan for U.S. space-based PNT and appointing an advisory board of outside experts. With Shaw on board, the coordination board now has seven staff members detailed from agencies represented on the executive committee.

A charter for the advisory board has been drafted and awaits approval by the committee, as does a draft of an international PNT strategy prepared by the State Department under the direction of Ralph Braibanti, who heads that agency’s space and advanced technology staff.

By