
B: Applications

January 31, 2007

Trimble Acquires @Road, Spacient

Trimble of Sunnyvale, California, has entered into a definitive agreement to acquire publicly held @Road, Inc. of Fremont, California, and has purchased privately held Spacient Technologies, Inc. of Long Beach, California.


By Inside GNSS

RTK Precise Positioning

Calgary, Alberta, Canada’s NovAtel Inc. offers a new real-time kinematic (RTK) positioning solution, known as AdVance RTK, designed to enhance the precision and performance of the company’s OEMV family of GNSS receiver boards.


By Inside GNSS

GPS Block III Contracts

The U.S. Air Force has awarded two $50 million contracts to Boeing and Lockheed Martin to execute a system design review for the next-generation GPS space segment program, GPS Block III.

The contracts come on the heels of both companies successfully completing system requirements reviews in November 2006. Those reviews, part of a $10 million follow-on order to a Phase A Concept Development Contract awarded in 2004, assessed Boeing’s and Lockheed’s ability to mitigate development and delivery risks associated with building the Block III satellites.


By Inside GNSS
January 1, 2007

Rescue Mission: GPS Applications in an Airborne Maritime Surveillance System

Maritime search and rescue (SAR) operations do not fit the usual and customary operational modes for aircraft operations. Consequently, neither do their navigation and flight management system (FMS) requirements.

SAR missions are not based on schedules but rather on ad hoc events and flights. Once the mission control center receives word of an accident (ship disaster, aircraft crash, etc.), an aircraft receives a mission order and begins a high-speed ferry flight to the area of concern. After arrival in the area of the incident, the aircraft typically performs a low-altitude (500 to 1,500 feet), low-speed search flight to locate survivors and the vessel.

In executing this search, the crew employs a suite of surveillance radars, electro-optical sensors, and scanning and direction-finding equipment to localize transmissions of emergency beacons that may have been activated during the accident. Once the target (person, ship, aircraft) is found, the crew drops needed equipment, such as life rafts or pumps, out of the aircraft.

The target position and other details are reported to the mission control center in order to initiate further rescue activities. All of these activities require precise navigation and sensor control, which may be obtained by a number of GNSS/GPS applications on board the aircraft.

This article describes an airborne surveillance system, AeroMission, developed by Aerodata AG, and the GPS/inertial navigation system (INS) that supports its operation.

In addition to SAR missions, AeroMission is also suitable for maritime surveillance, border and anti-smuggling patrols, pollution detection and mapping, fishery control, offshore oil field monitoring, and research applications.

System Overview
AeroMission has been developed to provide high reliability, redundancy, and efficiency. It was designed using a modular architecture and state-of-the-art technology.

In supporting AeroMission, an integrated GPS/IMU navigation system — AeroNav — combines the GPS advantages of long-term stability and absolute accuracy with those of inertial navigation — short-term accuracy during phases of high dynamics in which GPS positioning may be lost or degraded.

A separate GPS/INS system also provides an attitude reference, using strapdown algorithms that produce position and velocity solutions. Turn rates and accelerations from the IMU are corrected using GPS pseudorange measurements; these corrections are calculated by a Kalman filter.
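The article does not disclose the AeroNav filter design, but the principle just described can be illustrated with a minimal, generic error-state Kalman filter sketch in Python. The two-state model, noise values, and position-only measurement are illustrative assumptions, not the AeroNav implementation; a real tightly coupled filter would process pseudoranges and carry many more states.

# Minimal error-state Kalman filter sketch for GPS/INS blending (illustrative only).
import numpy as np

dt = 0.01                        # IMU update interval, s (assumed)
F = np.array([[1.0, dt],         # error-state transition: [position error, velocity error]
              [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])        # process noise from gyro/accelerometer errors (assumed)
H = np.array([[1.0, 0.0]])       # toy measurement: GPS observes the INS position error
R = np.array([[4.0]])            # GPS measurement noise variance, m^2 (assumed)

x = np.zeros((2, 1))             # estimated INS error state
P = np.eye(2)                    # error covariance

def predict(x, P):
    """Propagate the INS error state between GPS updates."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the INS error state with a GPS-derived measurement z."""
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Example: 100 IMU propagation steps, then one GPS correction
for _ in range(100):
    x, P = predict(x, P)
x, P = update(x, P, z=np.array([[1.5]]))     # 1.5 m observed INS position error
print(x.ravel())                             # corrected position/velocity error estimates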

The basic system components include:
•    surveillance radar (using the separate GPS-supported INS)
•    forward-looking infrared (FLIR) sensor (using GPS services provided through AeroNav)
•    infrared/ultraviolet (IR/UV) scanner (using a dedicated GPS-supported INS)
•    mission management and guidance system (using GPS services through AeroNav)
•    SAR homing device
•    HF, VHF, UHF, and satellite communication
•    intercom, including communication relay
•    photo/video camera
•    ergonomic operator work stations

Other sensors such as side-looking airborne radar or microwave radiometer can be integrated as options into the suite.

. . .

Sensor Suite
In addition to the navigation system, moving map display, system software, and databases, AeroMission incorporates a number of additional sensors to aid its surveillance and reporting functions.

  • Surveillance Radar . . .
  • Electro-optical/infrared sensor . . .
  • AIS and direction finding . . .

. . .

Mission Management
The AeroMission management suite is an integrated solution that consists of equipment and software for sensor operation and control; sensor data gathering, storage, and evaluation; mission reporting; and communications control and recording.

. . .

Flight Deck Interface
The mission system has a number of interfaces to the flight deck in order to support the mission and decrease the workload of both the cabin crew and the flight deck crew.

. . .

System Qualification and Certification
The qualification and certification process for the project was quite challenging. All modifications of the airframe have been certified through a Supplemental Type Certificate (STC) approved by the European Aviation Safety Agency (EASA).

. . .

Operational Experiences
During the test flights and also during the first 10 months of operations, AeroMission installed in a DO 328 aircraft has demonstrated its reliability and efficiency with an overall service availability of more than 99 percent . . .

For the complete story, including figures, graphs, and images, please download the PDF of the article, above.


GPS: Launches of Satellites and Institutional Initiatives

Successful launch of the second modernized Block IIR satellite, IIR-15(M2), on September 25 and scheduling of another IIR-M launch on November 14 underlines recent progress in the GPS program.

IIR-15(M2), also identified by its space vehicle number (SVN58) and pseudorandom code number (PRN31), will be placed into orbital plane A, slot 2. The U.S. Air Force has designated the satellite to be launched in November as GPS IIR-16/M3, PRN15/SVN55.


By Inside GNSS

Correlators for L2C

The term “Correlator” is often used in discussions of GPS and GNSS receiver design. It has been used to describe devices as simple as a single exclusive OR gate through to complete “baseband” chips that include a microprocessor.

Most usually, and in this article, the term describes the hardware or software that produces all of the required correlation data for a single signal from a specific GNSS satellite signal. This is also sometimes termed a “channel.”

With the open GPS civil signal at the L2 frequency (L2C) now becoming available on Block IIR-M satellites, receiver designers have the opportunity to work with a markedly different GNSS signal resource. The first IIR-M spacecraft (designated SVN53/PRN17) was launched September 25, 2005, and the second is scheduled to go into orbit on September 14, 2006. (IIR-Ms also transmit the new GPS L1/L2 military (M-code) signal, but we will not treat this issue here.)

Against that historic backdrop, then, this article examines some of the novel elements of the L2C signal and its implications for GNSS receiver correlators. Our focus will be on a technically challenging aspect of receiver operation: initial acquisition of the signal and its processing by the correlator.

But first we will review some of the key functional aspects of GNSS correlators and some of the signal parameters that affect their operation.

. . . We can more easily explain the role of a correlator if we examine its two functions separately. In a GNSS receiver, correlation is used in two distinct activities:

Acquisition. Before the receiver knows whether it can receive a certain satellite’s signal, it must “search” for it using correlation in an ordered but relatively indiscriminate way. Effectively, many correlation trials are run across many combinations of code delay and Doppler frequency offset.

Tracking. Once acquired, the receiver must still despread the received signal in order to receive data and measure pseudoranges. Several correlators are usually used to keep the local code as closely aligned to the received code as possible. To do this, a “delay-locked loop” is used, with the correlators operating within the loop, some typically ahead of the received code (“early”) and some behind (“late”).

In other words, correlation is used both to “get” the signal and to “keep” it. These actions should be considered quite separately. In this article, we concentrate on the acquisition process.
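To make the acquisition search concrete, the Python sketch below runs a brute-force serial search over code delay and Doppler for a simulated signal. The sample rate, Doppler grid, and random stand-in code are assumptions chosen for illustration, not a description of any particular receiver.

# Serial-search acquisition sketch: test every (code delay, Doppler) cell and keep the peak.
import numpy as np

fs = 4.092e6                         # sample rate, Hz (assumed)
code_len, chip_rate = 1023, 1.023e6  # C/A-like toy code
n = int(fs * code_len / chip_rate)   # samples per code period

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=code_len)                  # random stand-in PRN code
t = np.arange(n) / fs
replica = code[(np.arange(n) * chip_rate / fs).astype(int) % code_len]

# Simulated received signal: delayed code, 1 kHz Doppler, additive noise
rx = np.roll(replica, 300) * np.exp(2j * np.pi * 1000.0 * t)
rx += 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

best = (0.0, None, None)
for dopp in np.arange(-5000, 5001, 500):                       # Doppler bins, Hz
    wiped = rx * np.exp(-2j * np.pi * dopp * t)                # carrier wipe-off
    for delay in range(0, n, 2):                               # roughly half-chip steps
        power = abs(np.dot(wiped, np.roll(replica, delay))) ** 2
        if power > best[0]:
            best = (power, delay, dopp)
print("peak at delay", best[1], "samples, Doppler", best[2], "Hz")   # expect 300 and 1000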

. . . Various signal parameters affect correlation. These are listed for the three GPS signals in Table 1. Both L2C and L5 have dataless sub-signals which are time-multiplexed (CM and CL) and in quadrature (I5 and Q5), respectively. Both use longer codes than L1, while L5 has a higher chipping rate. L5 also has the added complication of Neuman-Hoffman codes, which will not be further discussed here.

. . . Correlation Signal “Shape” – the L1 civilian C/A-code signal is a single BPSK modulation. Although the L2C signal is also BPSK, it introduces another layer of complexity by having the data-carrying and dataless signals multiplexed in time. Typically, the shorter (20-millisecond) data-carrying CM code will be acquired first; the receiver then hands over to the longer (1.5-second) dataless CL code for tracking.
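As a sketch of that time multiplexing, the composite L2C chip stream simply alternates CM and CL chips, so each component keeps its own 511.5 kchip/s rate while the combined stream runs at 1.023 Mchip/s. The sequences below are random stand-ins (and the CM-first ordering is assumed here), not the real code generators.

# Chip-level time multiplexing of a data-carrying code (CM) and a dataless pilot code (CL).
import numpy as np

rng = np.random.default_rng(1)
cm = rng.choice([-1, 1], size=10)   # stand-in for the 10,230-chip, 20 ms CM code
cl = rng.choice([-1, 1], size=10)   # stand-in for the 767,250-chip, 1.5 s CL code

l2c = np.empty(2 * len(cm), dtype=int)
l2c[0::2] = cm                      # even chip slots carry CM (ordering assumed)
l2c[1::2] = cl                      # odd chip slots carry CL
print(l2c)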

. . . The beauty of exploiting the circular convolution is that the code in the received data does not need to be well aligned with the stored replica; the whole point of acquisition, after all, is to perform this alignment. So, if the CM code were chopped up into pieces smaller than 20 milliseconds, this circular convolution would no longer reflect the time-domain correlation, and a receiver would need to have many stored codes.
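The circular-convolution search referred to above can be sketched in a few lines of Python: one FFT/IFFT pair evaluates the correlation at every code delay at once for a given Doppler bin. The code below is a random stand-in sampled at two samples per chip, not the actual CM code.

# FFT-based (circular correlation) code search, commonly used for parallel acquisition.
import numpy as np

rng = np.random.default_rng(2)
code = rng.choice([-1.0, 1.0], size=2046)       # local replica, 2 samples/chip (toy)
received = np.roll(code, 517)                   # received code with an unknown 517-sample delay
received += 0.4 * rng.standard_normal(received.size)

# Circular correlation via the frequency domain: IFFT( FFT(rx) * conj(FFT(code)) )
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code)))
print("estimated code delay:", int(np.argmax(np.abs(corr))), "samples")   # expect 517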

. . . Many GPS chipsets are optimized for operation in mobile telephone handsets and, as such, are aimed at minimizing the drain on handset batteries. For such applications, the large L2C acquisition overhead presents a serious problem . . .

Conclusion
In this article, we have primarily examined the implications of signal acquisition for the new L2C signal. In a typical L2C-only receiver, significantly more effort is required to acquire the signal than is the case for L1 C/A code: more than 200 times higher in the hardware case and more than 500 times higher in the software case. However, because the long CL code does not carry any data, it can be used for the long integrations required for acquisition in a weak-signal environment.
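One back-of-envelope reading of that factor of 200 (the article’s full accounting is in the PDF) is that CM has ten times as many chips as the C/A code and, with a 20-millisecond rather than 1-millisecond coherent integration, needs Doppler bins twenty times narrower. The search window and spacing below are assumptions; the constant in the Doppler bin-width rule cancels in the ratio.

# Back-of-envelope check: search cells scale with (code chips x 2 for half-chip spacing)
# times (Doppler bins), and the Doppler bin count scales with the coherent integration time T.
ca_cells = 1023 * 2 * (10_000 * 1e-3)      # C/A: 1,023 chips, T = 1 ms, +/-5 kHz window (assumed)
cm_cells = 10_230 * 2 * (10_000 * 20e-3)   # CM: 10,230 chips, T = 20 ms
print(cm_cells / ca_cells)                 # -> 200.0, the order of the factor quoted above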

For the complete story, including figures, graphs, and images, please download the PDF of the article, above.

Acknowledgments
The author wishes to acknowledge the useful suggestions made by Eamonn Glennon of Signav Pty Ltd.

September 1, 2006

Development Update

Development of satellite-based positioning and navigation technology has greatly reformed conventional spatial determination practices and enabled advancement of the digital infrastructure in China. This kind of progress is continuing with the improvement of related techniques.

This article will provide an update on China’s GNSS-related activities in recent years, including research on novel positioning approaches, collaborations between China and international sectors, and, finally, some brief comments on the prospect for China’s Beidou navigation and positioning system.

China’s CORS Network

Beginning in 1990, the continuously operating reference station (CORS) mode of GPS operation was first applied by NASA’s Jet Propulsion Laboratory (JPL) and MIT to research on plate tectonics in southern California, USA. This innovation helped geologists deepen their understanding of seismic faults because far more continuous spatial information could be obtained than ever before.

From 1997 to 2000, as a key state scientific project, the Crustal Movement Observation Network of China (CMONOC) was implemented, composed of 25 CORS stations and 1,000 regional network stations. Very long baseline interferometry (VLBI) and satellite laser ranging (SLR) equipment was co-located at some of the CORS stations. Based on CMONOC, researchers obtained significant results on continental plate motion. CORS has subsequently been employed by numerous agencies and organizations in China and has become popular in many fields, including aircraft guidance similar to U.S. Wide Area Augmentation System (WAAS) approach procedures.

Since those preliminary stages, the evolution of networks and communications has enabled CORS to become a leading support component of the national temporal and spatial information infrastructure. CORS networks are now implemented in many of China’s main cities, such as Shenzhen, Chengdu, Beijing, Shanghai, and Guangzhou.

Among these, the Shenzhen CORS system, started in 1999, serves as a model of a comprehensive service network and spatial data infrastructure in China. The system was designed and implemented with a flexible network and wireless communications architecture to deliver a variety of positioning and navigation services, both in real time and in postprocessing.

The project was jointly accomplished by the GNSS Engineering Research Center of Wuhan University and the Shenzhen Municipal Bureau of Land Resources and Housing Management. It is aimed at applications in surveying and mapping, urban planning, resource management, transportation monitoring, disaster prevention, and scientific research, including meteorology and ionospheric scintillation.

In this way, the Shenzhen CORS network is helping to energize the booming economy of this young city. With the rapid development of CORS construction in China, these stations are expected to operate under a standard national specification and to play vital roles in realizing the “digital city” through real-time, precise positioning and navigation.

Some of the CORS stations distributed throughout China are co-located with other space-geodetic observing technologies such as SLR, VLBI, and DORIS (Doppler Orbitography and Radio-positioning Integrated by Satellite, a system maintained by France). These sites serve satellite orbit determination and, by combining multiple space-based techniques, have created a dynamic, multi-dimensional terrestrial reference frame for China.

(For the rest of this story, please download the complete article using the PDF link above.)


BOC or MBOC? Questions and Answers

Global navigation satellite systems are all about timing. In a narrow sense, GNSS is technically a matter of how long the satellite signals take to reach a receiver. In a larger sense, it’s about designing global infrastructure systems that may not produce practical benefits for 5, 10, even 15 years or more.

During that time, a lot can happen. Technology changes. Electronics get more powerful and cheaper.

But GNSS equipment manufacturers and receiver designers live in the here and now. They face today’s challenges with today’s technology: how to receive signals indoors, under tree canopy, in urban canyons. How to get the most robust tracking capability out of a receiver — the most accurate, the most available capabilities.

And to accomplish these things at a price that prospective customers in the marketplace will see as offering true value.

Will the common civil signal be the binary offset carrier, or BOC(1,1) waveform as stated in a 2004 agreement between the United States and the European Union? Or, will it be the multiplexed BOC (MBOC) signal recommended by a technical working group set up under that agreement to examine further refinements to the design?

The signal decision involves benefit trade-offs for different types of GNSS receiver designs and will have widespread consequences for the products developed over the next 10, 20, or even 30 years.

Although the math and science underlying the discussion may seem esoteric, there’s nothing abstract or theoretical about the consequences of the decision. The selection of a common GPS/Galileo civil signal will profoundly shape the user experience, the engineering challenges, the business prospects and strategies of GNSS manufacturers and service providers, and even the political relations among nations for decades to come.

Our series started in the May/June issue with a “Working Papers” column that introduced the MBOC spreading modulation. Earlier this year, the GPS-Galileo Working Group on Interoperability and Compatibility recommended MBOC’s adoption by Europe’s Galileo program for its L1 Open Service (OS) signal and also by the United States for its modernized GPS L1 Civil (L1C) signal. The Working Papers column discussed the history, motivation, and construction of MBOC signals. It then showed various performance characteristics that the authors believe demonstrate MBOC’s superior performance and summarized their status in Galileo and GPS.

The May/June column also noted, “The United States is willing to adopt for GPS L1C either the baseline BOC(1,1) or the recommended MBOC modulation, consistent with what is selected for Galileo L1 OS.” Given this impartial U.S. government position, Inside GNSS believed it would be appropriate and useful to ask a panel of GNSS industry representatives their thoughts on the subject of a common civil GPS/Galileo signal waveform.

In the July/August issue of the magazine, therefore, in an article introduced by Tom Stansell, nine technology specialists from leading GNSS manufacturers began the discussion of technical alternatives, implications for receiver design, and significance for the products that reach the marketplace.

This month four more GNSS receiver designers join the manufacturers’ dialog, bringing the total to 13 panelists representing the perspectives of eight manufacturers — CMC Electronics, Japan Radio Company, NavCom Technology, Nemerix, NovAtel, Qualcomm, Rockwell Collins, and SiRF Technology — and three independent consulting engineers. Their biographies follow, along with their verbatim answers to questions posed by Inside GNSS.

(In the sidebar, “Old Questions, New Voices,” at the end of this article, we present the responses of our four latest panelists to the five questions answered in Part 1 of the series. The complete article, as well as the May/June “Working Papers” column, can be found on the Inside GNSS website at https://www.insidegnss.com/mboc.php.)

We also invited the authors of the original MBOC design recommendation to respond to the entire manufacturers dialog, an invitation that we made to the GNSS community in general — and one that still remains open. Their response immediately follows this article (see below). Javad Ashjaee, president and CEO of Javad Navigation Systems, who has been designing GNSS receivers for 30 years, also submitted some comments on the panelists’ discussion, which appear in this section as well.

In Part 2 of the Manufacturers Dialog on BOC and MBOC presented here, the panelists discuss performance of narrowband and wideband receivers under weak signal and multipath conditions and offer their opinions on the best signal option.

The Questions and Answers

Q: Would you expect any performance difference for your products if MBOC code is transmitted instead of BOC(1,1)?

Fenton – Yes, depending on the exact MBOC option used, we would expect between a 21 percent and 33 percent reduction in code tracking error due to the increased effective chipping rate and a significant improvement in the detection and correction of close-in multipath interference.

Garin – Compared to a theoretically achievable performance with BOC(1,1) only, we would lose performance. Compared to the competition who will have to deal with the same signals in space, we won’t be at a disadvantage.

Hatch/Knight – We expect a modest improvement in multipath mitigation under moderately weak signal conditions, such as under foliage.

Kawazoe – We do not expect any advantage from MBOC.

Kohli/Turetzky – The biggest difference we would see would be in the availability due to the lower signal strength. However, it’s the same for everyone and if the benefit of higher accuracy for some applications is deemed to be of higher importance, we can still build a very high performance receiver on the MBOC signal.

Sheynblat/Rowitch – It is difficult to quantify the impact on indoor and urban canyon positioning accuracy due to a loss of 1 dB of sensitivity. However, it is straightforward to conclude that for successive sensitivity losses in 1 dB steps, measurement yield will also decrease in corresponding steps, eventually falling below the threshold for a successful GPS fix. This has a noticeably negative impact on the user experience for consumer and business applications. As an example, in some indoor signal scenarios we have seen 1 dB of improved sensitivity deliver an additional 20 percent improvement in successful fix rate.

Stratton – As stated earlier, we expect that we would obtain lower levels of multipath under ideal conditions, but the broader impact in off-nominal conditions requires further study. We do not anticipate a difference in user operational benefit for either choice.

Studenny – We prefer high performance signals and simple receiver architectures. Please note that developing an aviation receiver that uses BOC or MBOC will require the same development funds. As far as MBOC goes, we would take advantage of it.

Weill – Let’s consider Galileo signals as an example. When multipath is present, an MMT-equipped wideband receiver using a Galileo BOC(1,1) pilot with a total signal (data + pilot) E/N0 of 45 dB-Hz-sec and a secondary path 6 dB below the direct path can theoretically produce a worst-case RMS range error of about 63 centimeters at a secondary path delay of about 1.5 meters (the RMS error is over random secondary path phases). This peak error is reduced to about 50 centimeters using a TMBOC-50 pilot, which is a 21 percent reduction. For both signal types the error falls off rapidly at increased secondary path delays. At a path delay of 10 meters these RMS errors decrease to 25 centimeters and 18 centimeters, respectively. At path delays above 20 meters the errors approach those of a multipath-free signal, about 14 centimeters and 9 centimeters, respectively (essentially reaching the Cramer-Rao bounds for error due to thermal noise). In this region the TMBOC-50 signal gives about 33 percent less RMS error than BOC(1,1).

Q: A narrower bandwidth receiver designed for BOC(1,1) will be able to use only about 87.9 percent of the total power in the GPS MBOC pilot carrier or 81.8 percent of the total power in the Galileo MBOC pilot carrier (TMBOC-50 version). Do you see this as a disadvantage in any applications, especially in products/services provided by your company? If so, which ones?

Fenton – In the case of the GPS or Galileo MBOC, the effect of a 12 percent (or, in the case of Galileo, 18 percent) loss of signal strength would be a 7 percent and 11 percent increase in RMS tracking error, respectively. For example, if the RMS code tracking error of a channel locked to a narrowband BOC(1,1) signal was 30 centimeters, then the expected tracking errors of the same hardware locked to the respective MBOC signals would increase to 32.1 and 33.3 centimeters, assuming all other variables remained the same. We do not see this as being a significant disadvantage. The lower signal level will also slightly extend satellite acquisition times and time to first fix.
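Fenton’s figures follow from the usual first-order assumption that thermal-noise code tracking error scales as the inverse square root of received signal power; a quick Python check (the 30-centimeter baseline is the example value above):

# Power fraction usable by a BOC(1,1) receiver -> dB loss and scaled RMS tracking error,
# assuming error ~ 1/sqrt(power). Small differences from the 32.1/33.3 cm quoted above are rounding.
import math

base_rms_cm = 30.0
for name, frac in [("GPS MBOC pilot", 0.879), ("Galileo TMBOC-50 pilot", 0.818)]:
    loss_db = -10 * math.log10(frac)       # ~0.56 dB and ~0.87 dB (cf. the 0.6/0.9 dB cited below)
    scale = 1 / math.sqrt(frac)            # ~1.07 and ~1.11, i.e. about 7% and 11% more error
    print(f"{name}: {loss_db:.2f} dB loss, RMS error {base_rms_cm * scale:.1f} cm")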

Garin – The disadvantage will be minor, at this level, as the fading effects are much more important than the absolute signal power. On the other hand, the advantage will be immaterial for our current market. Nevertheless, we support the introduction of MBOC, as the theoretical penalty is minor, and the practical one will be insignificant.

Hatch/Knight – It is not likely that our company will build a narrowband receiver.

Kawazoe – We expect that the 12.1 percent and 18.2 percent power losses will not cause any serious problems. However, we would like BOC(1,1) to be adopted rather than MBOC for simple and compact GPS receiver designs.

Keegan – Signal level is sensitivity, and sensitivity is a significant part of consumer GPS. So, I believe that this 0.6 dB (or 0.9 dB) is more of an issue with consumer sets than high precision sets. However, in current consumer applications there are many places where architectural improvements would increase the signal-to-noise ratio (SNR) by more than these amounts, such as better antenna technology, more optimum signal sampling (sample rate and quantization), closed loop processing, etc. However, every dB is important.

Kohli/Turetzky – In general, we fight for every tenth of a dB in every aspect of our system design. Giving up 1 dB in transmitted signal power is a concession, but will be mitigated by other processing gains. One dB will translate into additional penetration in a building. This can make a measurable difference in availability at the consumer level.

Stratton – It is not directly a disadvantage. We will produce receivers that utilize every waveform that adds value to our markets. The key factor for us is whether our users would achieve operational benefits by using modernized signals, and we do not perceive a difference in user benefit between these two alternatives.

Studenny – We develop wideband receivers and maximize performance as required. We would use all available signals in the most effective manner possible.

Weill – With today’s technology, a narrowband design is required in applications where the receiver must have low cost and low power consumption. If it must also be capable of operating in poor signal environments, the provider of such a receiver is likely to believe that every decibel counts and therefore be in favor of a BOC(1,1) signal with its lower RMS bandwidth making all of the signal power useable. On the other hand, I would argue that it will probably take a decade to make MBOC signals available, and in that time improved technology is likely to make low-cost, high bandwidth receivers a reality. One must also take into consideration that if satellites without MBOC signals are launched, it will be a long time until the next opportunity to improve signal characteristics.

Q: If your receivers predominantly are narrowband now, do you believe your customers would benefit from wider bandwidth receivers with better multipath mitigation capabilities? Why or why not?

Fenton – The customers of our narrowband receivers would benefit from multipath mitigation capabilities. However, the priority of these customers is cost rather than accuracy. It is more important for them to have a lower unit cost than advanced multipath mitigation technologies. However, due to Moore’s law, by the time these signals are available, the cost of adding the increased signal processing to achieve better multipath mitigation may be tolerable.

Garin – Today’s typical user will benefit only marginally from the widening of bandwidth, when it becomes technically and commercially feasible, mainly in line-of-sight conditions, which still represent a non-negligible percentage of operating conditions.

Kawazoe – Our customers wouldn’t benefit from wider bandwidth because multipath error is reduced with dead reckoning sensors, and the largest position errors occur when only non-direct signals are received, such as in areas with tall buildings.

Keegan – The main drivers for Consumer (or narrowband) receivers are cost and power and not accuracy in all but the most demanding environments such as indoors or in urban canyons, in which case improved performance is a desire as long as it does not grossly impact cost or power. However, a multipath environment that could be mitigated by a wideband receiver using conventional multipath mitigation techniques is not the environment experienced indoors or in urban canyons since the signal being tracked is typically a non-line-of-sight multipath signal and not a direct path signal contaminated with multipath. I believe it is unlikely these consumer products will significantly benefit from conventional multipath mitigation techniques employing a wider bandwidth design.

Kohli/Turetzky – Most of our receivers are narrowband today, and we have far more requests for narrower bandwidth than wider. The multipath benefit is outweighed by the susceptibility to interference in most consumer markets.

Sheynblat/Rowitch – Given that the current performance capabilities of GPS technology meet the needs of consumers and business users worldwide, cost reduction is the remaining critical element needed to achieve wider utilization of GPS and Galileo in the future. This view is shared by most mass-market product manufacturers in the location industry.

Weill – I believe that customers will undoubtedly benefit from wider bandwidth receivers and that receiver manufacturers will provide more of these products in the not-so-distant future. For example, a major application of narrowband receivers is consumer-level high-sensitivity assisted GNSS handheld receivers, often embedded in a cell phone. Using current technology, these receivers are narrowband in order to reduce cost and power consumption, but this exacerbates multipath errors, which cannot be reduced by differential corrections available in many assisted systems. Compounding the problem is the severe multipath often encountered in indoor and urban environments. Going to a wider bandwidth can significantly reduce these errors, especially in conjunction with newer multipath mitigation technology.

Q: If your receivers predominantly are narrowband now, do you believe your designs will migrate toward wideband receivers in the next 10 to 15 years? Why or why not?

Fenton – What’s limiting the choice of processing bandwidth is unit cost and power consumption. Generally, wideband receivers have more complicated ASIC designs with higher gate counts as compared with narrowband designs. The use of these large and more expensive ASIC components, along with the larger CPUs required for the multipath processing, results in higher unit receiver costs to our customers. Moore’s law may reduce the cost of signal processing to an insignificant amount before these signals are available or during the lifetime of these signals. Larger bandwidths require higher sampling rates and clock rates in the digital sections. These higher rates result in higher power consumption of the receivers. If the customer’s top priority is low power consumption, then this will limit the widening of the bandwidth. Traditionally, each generation of electronic components has become more power efficient, so processing wider bands in the future may not increase the power demands beyond tolerable limits.

Garin – Our designs will increase the IF effective bandwidth, first for more accurate measurements, and possibly to accommodate Carrier Phase for the mass market in the next 3-5 years.

Hatch/Knight – Future high performance GNSS receivers will trend toward wider bandwidths. Performance of advanced code and phase multipath mitigation techniques is limited by the composite bandwidth of the satellite and receiver filtering. Receiver bandwidth in most existing receivers truncates a portion of the satellite signal spectrum and thereby reduces the effectiveness of advanced code and carrier multipath mitigation techniques.

Kawazoe – There is a possibility of migrating toward a wideband receiver, but cost reduction and jamming robustness are the main requirements from our customers, so we expect that low cost narrowband receivers will continue to be dominant.

Keegan – One must believe that in 10-15 years the vast majority of consumer GPS receivers will be embedded in mobile handsets. In this environment I don’t believe wideband receivers (defined here as capable of tracking the BOC(6,1) component) will improve performance sufficiently to warrant migration to this market. Other technical drivers would have to change first, such as much better antenna technology that does not impact cost and/or force the user to orient the device, and much better low cost interference rejection (filtering) technology. Unless these change, wideband receivers that offer less than 1 dB of SNR improvement will not compete with the lower power and cost of narrowband receivers. I don’t see a benefit that will cause a migration to something that is inherently more costly and consumes more power.

Kohli/Turetzky – If it makes economic sense to develop a wideband receiver in the future, we would do so. However, in our current markets today, we do not see that migration.

Weill – I have little doubt that competitive forces for better positioning accuracy combined with enabling technology will result in a trend toward low-cost high bandwidth receivers for most applications, even those which currently use narrowband receivers.

Q: If your receivers now or in the future are wideband, do you now or would you in the future likely use a form of “double delta” multipath mitigation?

Fenton – Possibly. The advanced multipath processing technique used to take full advantage of the MBOC waveform requires increased software processing demands and is more burdensome to the host CPU. It is envisioned that we would offer a modified Double-Delta style tracking technique for those customers who do not wish to burden their CPU with increased processing requirements. However, due to Moore’s law, by the time these signals become available, the cost of processing the algorithms may not be an issue.

Garin – If the bandwidth was suitable and the patents had expired, we would use some form of double-delta correlator as an add-on, but not as the main mitigation technique. We believe that double-delta will be superseded by methods pertaining to estimation theory rather than reference or received signal shaping. There is a misperception that carrier tracking performance won’t be different between C/A code, BOC and MBOC. It is probably true for traditional carrier phase tracking techniques. I would like to emphasize that several Carrier Phase “offset tracking” techniques can capture part of the code multipath performance into carrier phase performance, and will benefit as well from better code multipath performance.

Hatch/Knight – Some future multipath mitigation techniques will combine edge differencing techniques like “double delta” with advanced mitigation techniques.

Kawazoe – We would like to use a new method for multipath mitigation, if we are able to invent it.

Keegan – Double Delta type correlators can help any receiver mitigate multipath contamination and would be a good improvement even for narrowband receivers that actually (closed loop) “track” the signal. Many of the current consumer receivers do not track very low level signals but make open loop measurements of range in these environments, in which case double delta type correlators really have minimal benefit since there is limited control of the actual “sampling point” of the received signal. Other than intellectual property (IP) issues, there is nothing right now to stop narrowband tracking receivers from benefiting from Double-Delta type correlators … though the benefit is not as great for a narrowband as compared to a wideband receiver.

Obviously, high precision survey type receivers will employ any and all available multipath mitigation techniques, with IP issues being the limit.

Kohli/Turetzky – SiRF has a number of patented multipath techniques that we would leverage to take advantage of any new signal structure.

Stratton – Our receivers utilize a variety of tracking architectures depending on the specific requirements. Current civil aviation regulations limit the manufacturer’s flexibility to implement multipath mitigation techniques, though “double delta” discriminators are permitted. These limitations are intended to ensure that augmentation systems meet integrity performance under off-nominal conditions (e.g., spacecraft or atmospheric anomalies). The regulations will need to be revisited prior to the certification of receivers using modernized signal waveforms.

Studenny – No, Double-Delta technologies have their own limitations and problems. Other technologies exist that are superior to Double-Delta. Vision is one example. We are working on in-house signal processing, but we are not ready for disclosure.

Weill – Double delta may be a reasonable choice for low-cost, narrow bandwidth applications using current technology.
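For readers unfamiliar with the technique under discussion, one common formulation of a “double delta” code discriminator combines two early/late correlator pairs with spacings d and 2d. The Python sketch below evaluates that combination on an ideal triangular correlation function; it is a generic textbook construction for illustration, not any panelist’s implementation.

# Generic "double delta" code discriminator: two early/late pairs, spacings d and 2d,
# combined as 2*(E1 - L1) - (E2 - L2). Evaluated on an ideal (infinite-bandwidth)
# triangular correlation function; real receivers use measured correlator outputs
# and band-limited correlation shapes.
def acf(tau_chips):
    """Ideal BPSK code autocorrelation: a triangle of width +/-1 chip."""
    return max(0.0, 1.0 - abs(tau_chips))

def double_delta(code_error_chips, d=0.1):
    e1, l1 = acf(code_error_chips - d / 2), acf(code_error_chips + d / 2)
    e2, l2 = acf(code_error_chips - d), acf(code_error_chips + d)
    return 2 * (e1 - l1) - (e2 - l2)

for err in (-0.05, -0.02, 0.0, 0.02, 0.05):
    # The response is linear only within about +/-d/2 chips: the narrow region that
    # sharpens multipath rejection but limits the pull-in range.
    print(f"code error {err:+.2f} chips -> discriminator {double_delta(err):+.3f}")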

Q: If your receivers now or in the future are wideband, do you now or would you in the future likely use a more modern form of multipath mitigation (e.g., Multipath Mitigation Technology (MMT) by Larry Weill, as used by NovAtel in their Vision Correlator)?

Fenton – Yes, NovAtel intends to use a modified MMT algorithm specifically designed to take full advantage of the MBOC signal structure and to provide our customers both code and carrier tracking performance at near theoretically maximum performance achievable. NovAtel has exclusive use and sublicensing rights to MMT for commercial GNSS applications and intends to look at sub-licensing opportunities for its Vision technology.

Garin – MMT and Vision have their respective merits in their own market segments, but definitely not in ours, and not in a hypothetical high-accuracy mass market. Other generations of multipath mitigation techniques are under study and will probably render the current methods obsolete. I feel it would be short-sighted to try to evaluate today what the impact of MBOC on multipath will be, looking only at the impact it will have on the methods published as of now. A narrower correlation peak is also of interest in carrier phase multipath mitigation.

Hatch/Knight – We will deploy a more modern form of both code and phase multipath mitigation and, of course, will attempt to patent our own techniques.

Kawazoe – We would like to use a new multipath mitigation method, if we are able to invent one that does not conflict with previously patented multipath mitigation methods.

Keegan – Obviously, the highest precision survey receivers will employ any and all available multipath mitigation techniques, again with IP issues being the limit. However, these types of techniques require substantially more system resources than do correlator type mitigators, so only those receivers looking for the highest accuracy will employ them. Again, this is a customer requirement issue. Users that demand the highest accuracy will use receivers that employ the best multipath mitigation techniques. Others that don’t require the highest accuracy will use receivers that are lower cost and lower power. This is not a technology issue, it is a customer requirements issue. Millimeter accuracy for someone looking for a power pole is not worth any additional cost over sub-meter accuracy.

Kohli/Turetzky – We would look at all of our options, both internally developed and externally available techniques, that would be appropriate for our market. Our multipath mitigation needs, however, are focused on urban canyon type multipath rather than improving centimeter levels of accuracy in open sky.

Stratton – Rockwell Collins is actively developing and fielding multipath mitigation technology, and we hold a number of patents in this area. As mentioned earlier, regulations tend to limit the use of proprietary techniques for safety critical (civil) operations.

Studenny – We either develop or use whatever technology is appropriate for our business.

Weill – If I were a receiver manufacturer in an environment where there is competition for positioning accuracy, I would at least want to investigate some of the new multipath mitigation technologies currently being developed and to consider whether licensing arrangements would make sense if patents are in force.

Q: If your receivers now or in the future are wideband, what are the “real world” benefits you expect from having the MBOC waveform? Will accuracy be better? By how much and under what circumstances? Will performance be better under poor signal conditions? By how much?

Fenton – Although not fully analyzed, the expected benefit of the MBOC signal will come from the increased effective RF phase transition rate (the number of phase transitions per unit time). As pointed out above, the expected increase in effective signal-to-noise ratio of a tracking loop that takes full advantage of the MBOC signal structure is between 2 and 3.5 dB with respect to a BOC(1,1) signal. For example, if the RMS code error of a channel tracking the BOC(1,1) signal was 30 centimeters, then switching to an MBOC would reduce the RMS error to between 23 centimeters and 21 centimeters, depending on the exact MBOC code chosen (an improvement of between 21 percent and 33 percent). Multipath mitigation technologies also benefit from an effective increase in code tracking signal-to-noise ratio. These algorithms will be able to detect the presence of multipath sooner with this increased signal gain and will be able to provide more precise range and phase measurements in the presence of closer-in multipath interference as compared with BOC(1,1).

Garin – The wider bandwidth will benefit the coming high-accuracy mass market.

Hatch/Knight – The MBOC codes will improve the “noise” of the multipath corrections estimated by advanced mitigation techniques. They may not significantly improve the mean accuracy, particularly for stronger signals. The weak signal code tracking threshold for the advanced techniques will be improved by the ratio of MBOC edges divided by BOC edges, as discussed in the Part 1 article.

Kawazoe – The “real world” benefit would be a reduced multipath effect, and we would expect better accuracy in urban canyons. Under poor signal conditions we would not expect high sensitivity or high cross-correlation through MBOC.

Keegan – Since the most modern multipath mitigation techniques (not double-delta or equivalent) work better with more observations of the multipath and multipath is observable only at code transitions, I believe these modern multipath mitigation techniques will improve with more code transitions. So, the MBOC signal structure should improve the performance of all wideband receivers tracking the MBOC signal that employ these modern multipath mitigation techniques. The more difficult the multipath is to observe (e.g. with very short delays) the more the additional code transitions will help.

Kohli/Turetzky – For our customers, we would expect some very limited benefit in accuracy under a very narrow set of conditions. When we talk about poor signal conditions, we are talking about -160 dBm and lower.

Stratton – Accuracy will be better under ideal conditions, but we have not seen validation of the theoretical benefit under realistic conditions. The impact of off-nominal conditions on accuracy, particularly differential GNSS (augmentation systems), requires further study, including:

  • Impact of atmospheric propagation effects that distort split-spectrum signals, which may impact MBOC differently than BOC(1,1) or C/A;
  • Impact of spacecraft anomalies that potentially impact MBOC differently than BOC(1,1) or C/A;
  • Impact of RF and antenna characteristics that vary across the bandwidth (e.g., VSWR, group delay differential) and thus may impact MBOC differently than BOC(1,1) or C/A.

It is worth noting that GPS already provides a higher accuracy signal than MBOC – the L1 carrier phase. At this point we favor the adoption of the simpler alternative of BOC(1,1), at least until a broader consensus regarding the above issues is achieved. While it would be interesting to know the benefit of MBOC on airport surface operations, we have not identified any other potential operational benefit to choosing this waveform over BOC(1,1).

Studenny – We desire an L1 capability that matches the L5 capability and which supports the deployment of CAT-III precision approach. It’s not just the power; it’s cross-correlation, false self-correlation, and the ability to resist multipath and RFI. A well-selected coding scheme minimizes all these things, and when we compare it with the L1 C/A and L5 signals, it’s these things that really stand out. Recall that we desire to minimize hazardously misleading information (HMI) by selecting an appropriate code/signal, because HMI is the key to precision approach. One more thing – a great many commercial applications will depend on minimizing HMI – they just don’t know it yet. Why? Because the position fix will need integrity. I can envision lawsuits, court battles, and so on, when GPS position fixes are questioned. This is coming; the commercial low cost GPS manufacturers may not want to deal with it but may have to, especially if large sums of money are involved.

Weill – In the absence of multipath, a wideband receiver using a TMBOC-50, TMBOC-75, or CBOC-50 pilot instead of a BOC(1,1) pilot should have RMS range errors due to thermal noise that are respectively 33 percent, 26 percent, and 21 percent smaller than with a BOC(1,1) pilot, assuming equal received signal power. This relative performance advantage is essentially insensitive to C/N0.

Q: The newest multipath mitigation technology is effective when receiving signals directly from satellites, and MBOC helps most in low S/N conditions. For your applications, how frequently will a low S/N with directly received signals occur? What practical and measurable benefit will MBOC give your users?

Fenton – As mentioned, the MBOC helps most in poor signal conditions such as low elevation tracking or high multipath conditions. The presence of these conditions is highly dependent on the location of the receiving equipment. A well-situated antenna with multipath-resistant electronics will not see a high proportion of poor signals. However, a surveyor operating in an urban construction site, or a forest engineer walking through the bush, will experience a very high proportion of poor and corrupted signals. The large majority of our GPS users are operating in challenging RF signal conditions and would benefit by various amounts from the MBOC signal structure.

Hatch/Knight – In our applications low signal conditions occur at the start and end of satellite passes or when our receivers are near foliage or buildings. Many European farms are small and are surrounded by hedgerows that cause loss of satellite tracking or of multipath mitigation when the satellites are masked by the foliage. MBOC improves the use of very weak satellites, but the effectiveness of advanced multipath mitigation algorithms for signals masked by foliage is not yet known. The several extra dB of code edge power provided by MBOC may be useful in such environments, but the benefits cannot be quantified without live tests of the signals and processing algorithms on foliage-attenuated signals. The extra multipath mitigation power provided by the MBOC signals will lower the noise and residual multipath for both code and carrier measurements, but the amount of improvement is small for typical satellites.

It is our opinion that the extra number of visible satellites provided by a combined GPS plus Galileo satellite constellation is far more beneficial than implementing MBOC. Extra satellites greatly reduce the importance of weak signals and increase the precision of navigation. Implementation of the MBOC signal structure will be very costly to our customer base. Our existing receivers can combine a BOC waveform with a PN code. MBOC requires time multiplexing two different PN codes in a very specific manner, which requires a redesign of the signal processing ASIC and increases the complexity of the code generator by perhaps one-third to one-half. One could also use a 12 × 1.023 MHz memory code to represent the 6 × 1.023 MHz BOC code plus PN code. That requires 12 times the storage of a 1.023 MHz memory code. The proposed codes are up to four milliseconds in length (~50,000 bits per channel). This is a sizeable fraction of the ASIC logic required to implement a channel and is more memory than is available.
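The memory sizing quoted above is straightforward to verify:

# Quick arithmetic check of the memory-code sizing mentioned above: chips stored at
# 12 x 1.023 MHz over a 4 ms code period.
print(int(12 * 1.023e6 * 4e-3))   # 49104 stored chips (bits) per channel, roughly 50,000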

The extra edge power provided by the MBOC signal structure is meaningful for a very small fraction of the time and cannot be attained without a redesign of the code generators in our receivers. This will necessitate replacement of all the receivers in our customer base. We do not think the perceived benefits of MBOC are worth the cost.

Kawazoe – We think it is rare that a low S/N with directly received signals would occur when GPS receivers are used for car navigation. MBOC will seldom give any benefit.

Keegan – I don’t completely agree with the assertion that “MBOC helps most in low S/N conditions”. More code transitions helps in the observation of multipath, that is, the ability to distinguish the multipath from the direct path signal. As the multipath delay becomes smaller, the ability to distinguish and hence measure the multipath becomes problematic. More code transitions assist in this case even in high SNR conditions.

Kohli/Turetzky – The definition of “low S/N” is critical here. We live in the domain of –160 dBm signals, which are almost never direct.

Stratton – Civil aviation receivers must pass specific test criteria under standard interference conditions to provide a margin for the users against interference. The receiver’s ability to maintain carrier track is far more important to accuracy than raw code phase quality in these scenarios. The receiver’s ability to demodulate data in these scenarios is also more critical, since navigation data senescence is a requirement to use the augmentation system. The military user may benefit indirectly from a more jam resistant acquisition signal in cold-start cases; however, the power level devoted to the data channel is all that matters in these cases.

Studenny – In Commercial Aviation, the concern is the integrity in applications supporting all phases of flight including CAT-I/II/III precision approach. As we approach CAT-III precision approach, the bounding probability for a very small position-fix error in the vertical direction and horizontal plane has to be very large (in excess of 99.9999999 percent). Any benefit that the signal-in-space can provide to meet these kinds of requirements is welcome. To answer the question directly, please note that there are various task forces at RTCA, EUROCAE, ICAO, and elsewhere, that are attempting to precisely quantify the various error allocations due to the signal in space, the aviation receiver, the proposed augmentation system, and the aircraft and crew, for all phases of flight, and for precision approach in particular. Please refer to these task forces for more details.

Weill – The wider bandwidth of an MBOC signal will generally improve MMT multipath performance by the same amount relative to BOC(1,1) under all conditions. Even with a relatively weak direct path signal component, MMT can be effective if the application permits extending the observation time of the signal. This is because its performance in reducing multipath error improves proportionately with increases in the ratio of signal energy to noise power spectral density, or E/N0. (This is not the case for double-delta mitigation.) For example, if the direct path C/N0 is 15 dB-Hz (a very weak signal), 10 seconds of signal observation gives an E/N0 of 25 dB-Hz-sec, which is useable by MMT. In some applications 100 seconds of signal observation can bring E/N0 to 35 dB-Hz-sec to give even better performance. Consequently, MMT multipath mitigation can be effective in many cases when the direct path signal is attenuated by foliage or passes through walls. (Note that extended signal observation times with MMT are appropriate only for static applications.) Urban canyons present a more difficult problem if there is total blockage of the direct path component, but then it is unlikely that any method of receiver-based multipath mitigation will work. On the other hand, the future availability of many more satellites could provide enough unblocked direct path signals to obtain positioning enhanced by good multipath mitigation.
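The energy figures in Weill’s example follow directly from E/N0 = C/N0 + 10·log10(T) for an observation time of T seconds:

# Integrating a weak signal for T seconds raises the available energy-to-noise-density ratio
# by 10*log10(T) dB relative to the per-second C/N0.
import math

c_n0_dbhz = 15.0                      # weak direct-path carrier-to-noise density, dB-Hz
for T in (10.0, 100.0):               # observation times, s
    print(f"T = {T:5.0f} s  ->  E/N0 = {c_n0_dbhz + 10 * math.log10(T):.0f} dB-Hz-sec")  # 25, 35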

Q: As you know, the statistics of real-world multipath are difficult to assess. Based on your real-world experience, how important is effective multipath mitigation to the GNSS community, and specifically in what applications? How important is it to your company?

Fenton – Having good multipath mitigation technology benefits almost all applications. Very few applications have “ideal” antenna locations providing multipath-free signals. Most real-world applications suffer from some amount of multipath. The amount of benefit that the user sees from this technology is inversely proportional to the quality of the RF signal received.

Garin – Multipath is, in my opinion, the “last frontier” in the pursuit of better navigation and positioning performance for the GNSS community at large. Building monitoring and surveying will be the principal beneficiaries. For cell phones and personal navigation devices we do care deeply about multipath, but the ultimate answer won’t come from a binary choice between MBOC and BOC, nor from any reference signal shaping technique. A new class of methods is about to emerge, some of them adapted from the wireless communications discipline.

Hatch/Knight – Multipath is one of the largest errors in short to medium baseline real-time kinematic (RTK) applications, which are a major portion of our business. Mitigation of multipath is very important to our business.

Kawazoe – We think an effective multipath mitigation is very important for all applications in urban canyons, such as for car navigation or walker’s navigation. It also is important for our company, because we produce many GPS receivers for car navigation.

Keegan – If multipath mitigation is defined as the mitigation of a multipath-contaminated direct path signal, then it is extremely important in High Precision Survey applications. The most difficult multipath is the multipath that is from a nearby reflector that changes very slowly, is difficult to observe, and appears as a measurement bias during a typical observation interval. The ability to observe this type of multipath is enhanced by increasing the number of code transitions that occur during the observation interval. While this type of multipath is also present in consumer (handset) applications, its impact is less of a problem when the desired accuracy is measured in meters. However, when the dominant received signal is a multipath signal, as is the case in urban canyons and indoors, then the consumer receiver produces solutions with large errors. Mitigation of this type of multipath is more important for consumer chipsets than the mitigation of multipath-contaminated direct path signals, but I don’t expect MBOC to help with this problem.

Kohli/Turetzky – Multipath mitigation can be a clear differentiator in accuracy, and our focus is getting the best possible accuracy in obstructed environments, given the constraints of cost, size, and TTFF for consumer applications. Our customers care about “consumer affordable” meter-level accuracy to determine streets and house numbers, not centimeter-level accuracy.

Stratton – Greater multipath resistance is secondary in importance to having a robust and available signal with navigation data at sufficient power. During the development of the civil augmentation systems, multipath was seen as a significant issue, but methods were developed to mitigate multipath that were within the reach of current technology. For example, we use carrier smoothing (i.e., complementary filtering that takes advantage of the high accuracy of the L1 carrier phase) to mitigate multipath sufficiently to conduct CAT III landings if the augmentation system is located at or near the airport. In looking at precision approaches flown with this technology, we see no degradation in accuracy as the airplane approaches the runway environment. This is expected because of the frequency separation of the multipath resulting from the airplane’s motion.

Studenny – Multipath is an issue, especially for GBAS ground stations. It has to be minimized by whatever techniques are available. A signal with desirable code properties is a great starting point for minimizing multipath effects. The counterexample is the L1 C/A code – it has poor multipath rejection properties and requires specialized signal processing to mitigate some of the multipath effects.

Weill – Effective multipath mitigation has always been regarded as important in high-precision applications, where in some cases careful measurements have shown that enough multipath exists to cause serious problems unless it is mitigated. It has also been demonstrated that receivers used indoors and in urban canyons often produce large errors due to multipath. Although in any given application it is difficult to reliably determine how often multipath is really a problem, a conservative approach uses effective multipath mitigation methods to instill confidence that the required level of positioning accuracy has been achieved.

Q: It is now known that signals with wider bandwidths improve theoretically achievable multipath performance. However, current popular mitigation methods (such as the double delta correlator) cannot take advantage of the higher frequency components of an MBOC signal. On the other hand, advanced techniques (such as NovAtel’s Vision Correlator) are emerging which approach theoretical bounds for multipath error using any GNSS signal regardless of bandwidth, and they are especially effective at reducing errors due to near multipath. In particular, multipath errors using the BOC(1,1) signal can be significantly reduced and MBOC does even better. In what applications, if any, would such improvements be useful to your company?

Fenton – Given that multipath is the biggest single source of error, improved multipath performance is critical for improved positioning in most high precision applications such as surveying and mapping, machine control, and precision guidance. In RTK applications, having precise pseudoranges reduces the convergence time to centimeter position estimates by providing smaller initial search volumes for the fixed integer ambiguity estimators. Not only does Multipath Mitigation Technology (MMT) provide cleaner measurements, it also provides signal quality estimations so that the position computation software can de-weight the poor quality measurements.

Garin – I have already stated earlier that the major improvement MBOC will bring is for surveying applications. It will be more of a minor hindrance for the cell phone mass market and a minor limitation on weak-signal capabilities. I don’t think that any incremental power improvement in the signal in space will noticeably change the landscape of the indoor navigation market. It has been implied for a while that high customer demand for “always present” location availability will call for some kind of data fusion. In contrast, MBOC will be a boon for the high accuracy market, and it will engender new ideas, as I have witnessed every time a new concept was introduced in GNSS.

Hatch/Knight – Advanced multipath techniques that are equal or superior to the Vision Correlator will be a required feature of future high performance GNSS receivers.

Kawazoe – We think this would require a high-end and expensive GPS receiver.

Keegan – Since these new techniques require more processing and work better with higher sampling rates, they are only applicable to the highest precision sets. As processing becomes cheaper and higher sampling rates become the norm, this type of multipath mitigation will migrate to lower cost high precision GNSS sets, but I doubt that they will ever be part of consumer chipsets since they only provide mitigation of multipath that accounts for a few meters of code error and centimeters of phase error in relatively static situations.

Kohli/Turetzky – For our markets, near multipath is not the biggest source of error at the signal levels our customers are most interested in. Therefore, the multipath mitigation techniques we would use would potentially be different.

Stratton – Perhaps additional multipath resistance could become more significant in the future if GNSS is used in airport surface applications (i.e., when the airplane is moving slowly), but this requires further study and validation. On the other hand, a more complex signal structure may be more difficult to certify for safety-critical uses. It is not yet clear whether the certification risks associated with migrating to modernized signals will outweigh their potential benefits. This is analogous to the situation that exists today, with low-tech (but proven) instrument landing systems still being installed despite the availability of GNSS landing systems, which are dramatically more accurate from the pilot’s perspective.

Studenny – The preference is NOT to use unusual or complicated receiver technologies. It is also true that a well-designed signal will not require such unusual technologies to reach the required performance levels. A well-designed, wide-band signal allows for simple receiver architectures and designs that achieve very high levels of performance. We believe that having an inadequate signal as a starting point and then attempting to extract performance through complicated receiver designs is the wrong approach.

Weill – It is now generally accepted that the real problem in most applications is close-in multipath, characterized by strong secondary signals from nearby reflectors (notably the ground) delayed by less than 10-20 meters. In this region the popular double delta correlator is not effective in suppressing multipath, so new mitigation techniques that solve this problem are certainly of interest.

Q: Would the additional capabilities provided by the MBOC code be useful in your products?

Fenton – Yes, MBOC will provide additional accuracy and a reduction in multipath interference.

Garin – In the medium to long term, 5-10 years, the mass market will migrate toward use of carrier phase. Then we will benefit from MBOC, as the surveying equipment manufacturers would today, because there will be market segment overlap.

Hatch/Knight – We expect a modest improvement in multipath mitigation under moderately weak signal conditions, such as under foliage.

Kawazoe – No. MBOC code is not useful.

Kohli/Turetzky – The capabilities of improved accuracy would have very limited benefit in our application.

Stratton – Having a more multipath-resistant civil signal is secondary in importance to having a robust and available signal with navigation data at sufficient power.

Studenny – Yes.

Weill – Yes. MMT can take advantage of the higher RMS bandwidth of an MBOC signal.

Q: If you could influence the governing bodies regarding the selection either of BOC(1,1) or of MBOC code, what would you recommend?

Fenton – Two fundamental limitations of accuracy are radio transmission bandwidth and the BPSK chipping rate. Since there is very little option of increasing the bandwidth, then increasing the effective BPSK chipping rate is the only option to increase the signal gain and therefore accuracy. I would recommend increasing the effective chipping rate as much as possible.
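
As an illustrative aside (not Fenton's own analysis), a textbook Cramér-Rao-style bound for code delay estimation ties ranging precision to exactly the quantities he names: the RMS (Gabor) bandwidth of the signal, which grows with the effective chipping rate, and the received C/N0 and integration time. The values in the sketch below are assumed, round numbers.

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def crlb_range_sigma_m(beta_rms_hz: float, cn0_dbhz: float, coh_time_s: float) -> float:
    """Lower bound on code-ranging error: sigma_tau = 1 / (2*pi*beta*sqrt(2*(C/N0)*T))."""
    cn0 = 10 ** (cn0_dbhz / 10)
    sigma_tau = 1.0 / (2 * math.pi * beta_rms_hz * math.sqrt(2 * cn0 * coh_time_s))
    return C_LIGHT * sigma_tau

# Assumed illustrative numbers: 45 dB-Hz, 20 ms integration, and two notional RMS
# bandwidths; a larger effective chipping rate raises beta and tightens the bound.
for beta in (1.0e6, 2.0e6):
    print(f"beta_rms = {beta / 1e6:.1f} MHz -> thermal-noise bound ~ {crlb_range_sigma_m(beta, 45.0, 0.02):.2f} m")
```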

Garin – BOC(6,1) is in the domain of surveying applications. Because a very large majority of them need to have dual frequency processing capabilities and more available power to accommodate large bandwidths, we would recommend dedicating one non-L1C frequency channel to the exclusive use and benefit of the surveying community, with a larger bandwidth and, why not, exclusively transmitting BOC(6,1) codes. Short of this technically sound solution, we support MBOC for the benefit of the surveying community.

Hatch/Knight – We believe that MBOC may be useful for our applications, but the amount of benefit is unclear and is difficult to estimate theoretically. Support of MBOC will definitely increase receiver complexity. We do not think there is a strong and clear case for implementing MBOC.

Kawazoe – We would like to recommend BOC(1,1).

Kohli/Turetzky – We would recommend BOC(1,1), but it’s really more of a preference. We are perfectly comfortable with MBOC, but we do see more benefit for mass market consumers from the higher power of BOC(1,1).

Sheynblat/Rowitch – Given that high cost, high precision GPS devices can afford to monitor multiple GNSS frequencies, employ higher-complexity RF components, and employ higher-complexity processing algorithms, it would make sense to optimize the modernized signals for the low cost, mass market and let high cost receivers pursue the many other options available for improving precision. In summary, Qualcomm is in favor of the original BOC(1,1) proposal with no imposition of BOC(6,1) modulation.

Stratton – Greater public involvement will be needed to finalize the L1C definition, and further validation of the L1C signal structure should be performed before a finalized structure is adopted. The validation should include impacts to augmentation systems, integrity performance under off-nominal conditions and probable failures, and migration issues (user benefits).

Studenny – We would take advantage of the MBOC signal.

Weill – I would recommend that MBOC be selected. The reduction in power for narrowband applications is small. When MBOC signals finally become available, advances in receiver technology are likely to make low-cost wideband receivers a reality.

Summary and Conclusion

We received remarkable interest and cooperation from eight companies and two prominent consultants who are experts in multipath mitigation techniques. Undoubtedly, their willingness to commit such thoughtful and extensive replies to our questions underscores the importance of the issue.

Although the discussion reflects tendencies within the manufacturing community, our BOC/MBOC series was not intended to serve as a comprehensive poll of sentiments in the GNSS community at large. Instead, we wanted to link the efforts of GNSS signal experts with those of receiver manufacturers – to bring these two worlds closer together and explore how the movements of one affect the other.

Clear tendencies emerged from the panelists’ comments, reflecting separate perspectives of companies and engineers working with single-frequency/narrowband receiver designs and those building wideband, multi-frequency GNSS receivers.

Most of the panel members acknowledged the theoretical potential of the MBOC waveform to enable receiver designs that further reduce the effects of multipath beyond that available with BOC(1,1). Where they parted ways was over the question of the amount of practical benefit that would derive from this advantage. As one might expect, representatives of companies that serve the consumer electronics market generally preferred BOC(1,1) rather than MBOC — the opposite view of their wideband counterparts.

The discussion also highlighted differences of opinion over the likely trajectory of technology development, particularly on the question of whether that trajectory might — or might not — allow consumer-oriented GNSS products in the future to be able to affordably incorporate the benefits of MBOC.

MBOC supporters tended to believe that today’s narrowband receivers would migrate to wideband designs so that they could take advantage of the BOC(6,1) component. Most BOC(1,1) supporters were skeptical of that assessment and asserted that consumer receivers would probably remain narrowband.

There were two surprises, however. One of the consumer electronics companies acknowledged the disadvantage of MBOC for its current market but considered that to be minor compared with the potential benefit to the high-precision applications market and perhaps eventually to the consumer market itself.

The counter-surprise was that a company involved in very high precision applications recognized the potential benefit of MBOC to its applications and will use MBOC if provided. However, they judged the practical benefit to be minor and less important than the disadvantage of having a more complex receiver.

Useful conclusions can be drawn from this limited but focused survey.

1. An industry consensus does not exist regarding the relative merits or demerits of BOC and MBOC. The majority of consumer products companies, which expect to serve a billion users, want to avoid even a small loss of signal power and doubt that they ever will be able to use the high frequency component of MBOC. Most receiver designers targeting high-precision and safety-of-life applications are equally convinced that every increment of robustness and accuracy brings a critical benefit to their customers and, consequently, endorsed MBOC.

2. Quantifying the relative advantage of MBOC and BOC in practical user terms has been difficult, especially without signals in space to test user equipment under actual operating conditions. Consequently, the assessments of benefit have derived from lab tests and simulations.

Under a fairly severe multipath scenario, one panelist calculated that MBOC could reduce the worst-case RMS range error from about 63 centimeters with BOC(1,1) to about 50 centimeters with MBOC. On the other hand, another panelist argued that every decibel makes a difference, especially in E-911 type applications where availability can make a critical difference. Absent extensive field experience, the significance of both positions remains arguable.

3. Whichever choice is made, no killer reasons have appeared that will condemn either choice. The differences are subtle and both choices could be justified.

4. We sympathize with those making the decision in Europe. Either choice will be both praised and criticized.

Civil GNSS Signals at a Crossroads: An Afterword

In an effort to close the loop between receiver designers and signal experts, we invited additional comments on the discussion presented in the two-part article, “BOC or MBOC?”

We received responses from several U.S. members of the US/EU technical work group that recommended the multiplexed binary offset carrier waveform for the new GPS and Galileo civil signals. (They also were coauthors on the original Working Papers column that introduced the signal proposal in Inside GNSS’s May/June issue.) Javad Ashjaee, president and CEO of Javad Navigation Systems and a long-time designer of GNSS receivers, also provided a commentary on the discussion, which we present following the remarks of the U.S. signal team members.

As discussed in the introduction to Part 1 in the July/August issue of Inside GNSS, if MBOC is implemented, the United States and Europe may implement slightly different versions of MBOC, with different allocations of power on the pilot carrier. The comments from the U.S. working group members address the relative merits of MBOC and BOC(1,1) in general as well as the specific U.S. version of MBOC — time-multiplexed BOC.

Additional Comments on MBOC and BOC(1,1)

John W. Betz, Christopher J. Hegarty, Joseph J. Rushanan
The MITRE Corporation

As members of the United States team who worked with our European colleagues to design the MBOC spreading modulation, we respectfully offer the following comments on the article entitled “BOC or MBOC? Part 1,” published in the July/August issue of Inside GNSS.

This response is meant to provide additional information that complements the views presented in the introduction to the article and to explain the background of the GPS-Galileo Working Group A (WG A) Recommendations on L1 OS/L1C Optimization, which can be viewed at the GPS and Galileo signal specification websites, respectively, GPS: http://gps.losangeles.af.mil/engineering/icwg/ and Galileo: http://www.galileoju.com/page3.cfm. Our focus here is on the GPS L1C signal.

The MBOC modulation contains an additional high frequency component that produces a sharper correlation function peak — fundamentally improving its suitability for tracking. In particular, MBOC enables a receiver to better process against multipath errors, often the dominant source of error in navigation receivers.

Most modernized signals in GPS, Galileo, GLONASS, QZSS, and mobile telephony reflect this trend toward wider bandwidths and sharper correlation function peaks, because of the many benefits that accrue. Moreover, MBOC has the added advantage that it retains excellent interoperability with narrowband receivers.
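
As a concrete illustration of the sharper correlation peak (our own sketch, not the authors' analysis), the snippet below evaluates the MBOC(6,1,1/11) power spectrum, defined as a 10/11-to-1/11 weighted mix of the BOC(1,1) and BOC(6,1) spectra, using the familiar closed-form expression for sine-phased BOC with an even modulation ratio, and compares RMS (Gabor) bandwidths over a ±12 MHz front end.

```python
import numpy as np

F_REF = 1.023e6  # GNSS reference chipping frequency, Hz

def boc_psd(f, m, n):
    """Normalized PSD of a sine-phased BOC(m,n) symbol (even 2m/n ratio assumed)."""
    fs, fc = m * F_REF, n * F_REF
    x = np.tan(np.pi * f / (2 * fs)) * np.sin(np.pi * f / fc) / (np.pi * f)
    return fc * x ** 2

def rms_bandwidth(f, g):
    """RMS (Gabor) bandwidth of spectrum g sampled on a uniform frequency grid f."""
    return np.sqrt(np.sum(f ** 2 * g) / np.sum(g))

# Frequency grid over a +/-12 MHz receiver bandwidth; the tiny offset keeps the
# samples away from f = 0 and the tangent poles.
f = np.linspace(-12e6, 12e6, 600_001) + 0.123

g_boc11 = boc_psd(f, 1, 1)
g_mboc = (10 / 11) * boc_psd(f, 1, 1) + (1 / 11) * boc_psd(f, 6, 1)

print(f"RMS bandwidth, BOC(1,1):        {rms_bandwidth(f, g_boc11) / 1e6:.2f} MHz")
print(f"RMS bandwidth, MBOC(6,1,1/11):  {rms_bandwidth(f, g_mboc) / 1e6:.2f} MHz")
```

The BOC(6,1) component concentrates power near ±6 MHz, so the frequency-squared weighting in the RMS bandwidth rewards it strongly, which is the quantitative sense in which MBOC yields a sharper correlation peak.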

Indeed, many of the favorable responses to MBOC in the July/August article were explicitly tied to statements that look ahead to when L1C will become operational late in the next decade and then be used for decades afterward in applications that we can scarcely fathom today. At least seven more cycles of Moore’s Law will have unfolded before initial operational capability of L1C, reflecting more than 100-fold improvement in digital processing capability.

As in the many other systems engineering tradeoffs involved in the design of L1C, pros and cons were carefully considered in making the recommendation on the spreading modulation. The full set of engineering data comparing TMBOC (the time-multiplexed BOC implementation for L1C) versus BOC(1,1) substantiates the net benefits in robustness and performance to all users, whether they process the full TMBOC waveform or only its BOC(1,1) component.

For example, when narrowband GPS receivers track both C/A code and L1C transmitted from the same satellites, compared to using C/A code alone they obtain 2.7 dB more signal power with TMBOC or 2.9 dB more power with BOC(1,1). With either modulation, there is a significant benefit to narrowband receivers, and the difference between modulations yields an imperceptible difference in available power.
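
A quick check of those figures (our arithmetic, using only the gains quoted above): converting 2.9 dB and 2.7 dB to linear power ratios shows how small the difference between the two modulations is for a narrowband receiver.

```python
def db_to_linear(db: float) -> float:
    """Convert a decibel value to a linear power ratio."""
    return 10 ** (db / 10)

gain_boc11 = db_to_linear(2.9)  # C/A + BOC(1,1) power relative to C/A alone
gain_tmboc = db_to_linear(2.7)  # C/A + TMBOC power relative to C/A alone

print(f"BOC(1,1): x{gain_boc11:.2f}   TMBOC: x{gain_tmboc:.2f}   "
      f"gap: {100 * (gain_boc11 / gain_tmboc - 1):.1f}% of received power")
```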

Figure 1 lists tradeoff factors considered in the L1C spreading modulation design; these supplement the subset of factors discussed in the introduction to the BOC or MBOC article. TMBOC’s relative advantages are shown as dB values to the right and BOC(1,1)’s relative advantages are shown as dB values to the left. (To view Figure 1, download the PDF version of this article using the link above.)

TMBOC’s benefits, such as reduced correlation sidelobe levels, apply to all receivers, with most value to those that must use weak signals. Observe that receivers need only employ bandwidths of roughly ±6 MHz to obtain the other benefits of TMBOC in signal tracking and multipath mitigation.

As indicated in our earlier article on MBOC in the May/June issue of Inside GNSS, the Galileo program has the lead in choosing a common signal modulation that will be used for decades by not only Galileo, but also GPS, QZSS, and possibly satellite-based augmentation systems, and other radio-navigation systems. We understand Galileo decision makers’ need to balance near-term programmatic issues against the longer-term investment in improved satellite-based navigation, and respect their decision process.

In conclusion, we sincerely welcome receiver manufacturers’ views on both BOC and MBOC. The challenge for all of us — signal designers, receiver designers and manufacturers, and decision makers — is to make this decision in the context of applications and receiver technologies that will be relevant later in the next decade and for decades to follow.

We believe the engineering tradeoffs reaffirm that TMBOC, like other aspects of L1C, will provide solid net benefits to future generations of satellite navigation users.

MBOC Is the Future of GNSS; Let’s Build It

Javad Ashjaee
Javad Navigation Systems

All I can say is, I’m glad these guys complaining about MBOC weren’t the ones designing the GPS system — or the new common GPS/Galileo civil signal. What is their basic complaint about MBOC? That it adds complexity and power consumption. But 25 years ago, GPS user equipment weighed 150 pounds and a receiver cost $250,000. If they had based the system design on the state-of-the-art receivers at the time and tried to simplify the system design to accommodate them, they would have said, “We don’t need carrier phase or a second frequency.” They would have been thinking about receivers as if they were carrying an FM radio from those days around in their pocket.

But technology changes. Product design improves. How old is Moore’s Law [that says the complexity of integrated circuits, with respect to minimum component cost, doubles every 18 months], and yet it’s still going on. The same thing is repeating itself today.

In the early 1980s when we were building the first GPS receivers, we only had 8-bit microprocessors. Multiplying two floating point numbers together was a huge task. I had to write software to simplify the computation of the signals as much as possible, but I never complained about the GPS system design itself.

Now the technology has matured to the point that you see today — single-chip GPS receivers. And yet modern user equipment is based on this GPS system design of 30 years ago.

We should design the system and make it as good as we can. By the time it’s up and running, technology will have advanced a long way in the products that we are building.

Even with the current technology, however, what do the people who don’t want MBOC lose? One decibel. But the new satellites have 3 dB more than we have today.

On the other hand, what do we gain with MBOC? Maybe a little, maybe a lot, depending on who looks at it. MBOC gives us more things to work with. It may help us to get faster RTK by removing multipath in the automatic landing of an aircraft. The people worried about getting GPS signals further indoors are talking about users who may be sitting around drinking wine, not sitting in an airplane that’s landing in the fog. Even if there is an emergency indoor application, it most probably can wait a few more seconds to get a position fix or have a few more meters of error.

The chips that will be designed to fully use this new GNSS system will come 10 years from now. It’s a crime to say that we can’t build the best system for the future because today someone needs an extra bit of processing power.

One final note: my hat’s off to a dear, long-time friend, Tom Stansell, for a job well done in helping coordinate the BOC-MBOC discussion in Inside GNSS in such an unbiased, even-handed way.

Javad Ashjaee, Ph.D., is the president and CEO of Javad Navigation Systems, San Jose, California, USA, and Moscow, Russia.

Old Questions, New Voices

Q: What segment of the GNSS market do your answers address? Describe your market, including typical products and the size of the market.

Kawazoe – Typical products are GPS receivers for car navigation. The total Japanese Car Navigation market was over 4 million units in 2005, and JRC sells about 1.8 million units per year.

Keegan – I have worked with companies in all Market areas from Consumer to High Precision Survey as well as Military.

Kohli/Turetzky – SiRF has a broad array of location and communication products at the silicon and software level that address mainstream consumer markets. Our main target markets are automotive, wireless/mobile phones, mobile computing, and consumer electronics. These markets have a potential size of more than a billion units per year. Although the consumer GPS market is growing very fast, the overall penetration of GPS in these markets is still quite low. Our technology is used in a range of market-leading products including GPS-enabled mobile phones, portable and in-car navigation systems, telematics systems, recreational GPS handhelds, PDAs and ultra-mobile computers, and a broad range of dedicated consumer devices. Our customers are global and we currently ship millions of units per quarter all over the world. We focus on providing best-in-class performance for consumer platforms (availability, accuracy, power, size) at a cost-effective price.

Q: Which signal environments are important for your products: open sky, indoor, urban canyon, etc.?

Kawazoe – It is an urban canyon environment.

Kohli/Turetzky – There is not a single most important environment; our products are designed to operate across all environments. The biggest challenge for us and our “claim to fame” is our ability to make GPS work in obstructed environments. The consumer expectation is that location is always available, and meeting this expectation is the focus of our innovations. Our technology is targeted to meet the difficult challenges of the urban canyon, dense foliage, and indoor environments.

Q: Which design parameters are most critical for your products: power, cost, sensitivity, accuracy, time to fix, etc.?

Kawazoe – The most critical design parameter is cost. The next parameters are sensitivity and accuracy. Our main GPS receiver specifications are: power draw: 88 mA typical at 3.3 volts, sensitivity: less than -135 dBm, accuracy: 10 m 2DRMS typical, and time to fix: 8 sec. typical (hot start).

Kohli/Turetzky – We target different parameters for different target markets. In general, however, availability (a combination of sensitivity and time to first fix) with reasonable accuracy and power are more important than extreme accuracy.

Q: Do you really care whether GPS and Galileo implement plain BOC(1,1) or MBOC? Why?

Kawazoe – Yes. We prefer BOC(1,1) for easy implementation.

Kohli/Turetzky – We don’t have a strong opinion. We can see the benefits of both for different markets. Whatever is chosen, we will build the best receiver for our customers.

Q: Are the GNSS receivers of interest narrowband (under ±5 MHz) or wideband (over ±9 MHz)?

Kawazoe – Our receivers of main interest are narrowband because low cost and jamming robustness are most important for our major customers. Even so, some JRC receivers are wideband because accuracy is more important for these receivers.

Keegan – I have designed receivers that are narrowband (consumer) as well as wideband (Survey) receivers.

Kohli/Turetzky – Our customers have a definite preference for narrowband receivers because it makes their system design more robust to interference. As our receivers operate in harsh RF environments and can navigate at extremely low signal levels, keeping interference out lets them utilize our technology to its fullest. Interference in integrated products arises from LCDs, disc drives, and other RF links, and the interfering spectrum can be wideband.

Sheynblat/Rowitch – The receivers of interest are narrowband. Low cost GPS consumer devices do not employ wideband receivers today and will most likely not employ wideband receivers in the near future. Any technology advances afforded by Moore’s law will likely be used to further reduce cost, not enable wideband receivers. In addition, further cost reductions are expected to expand the use of positioning technology in applications and markets which today do not take advantage of the technology because it is considered by the manufacturers and marketers to be too costly.

By Alan Cameron
July 1, 2006

Orbital Precession, Optimal Dual-Frequency Techniques, and Galileo Receivers

Q: Is it true that the GPS satellite geometry repeats every day shifted by 4 minutes?

A: It is true that the GPS satellite orbits were selected to have a period of approximately one half a sidereal day to give them repeatable visibility. (One sidereal day is 23 hours, 56 minutes, and 4 seconds long or 236 seconds shorter than a solar day.) However, because of forces that perturb the orbits, the repeat period actually turns out to be 244 to 245 seconds (not 236 seconds) shorter than 24 hours, on average, and changes for each satellite.

The selection of a half sidereal day orbit causes the satellite ground track and the satellite visibility from any point on earth to be essentially the same from day to day, with the satellites appearing in their positions approximately 4 minutes (236 seconds) earlier each day due to the difference between sidereal and solar days. This was a particularly useful property in the early days of GPS when session planning was important to ensure adequate satellite coverage. With this easily predictable coverage, GPS users could schedule repeatable campaign sessions well in advance just by shifting their experiments forward each day by 4 minutes.
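
The daily shift follows directly from the difference between the solar and sidereal day. The short sketch below is ours and uses only the figures quoted in the answer; it works out the nominal 4-minute advance and how far the observed 244-245 second repeat shift departs from it.

```python
from datetime import timedelta

solar_day = timedelta(hours=24)
sidereal_day = timedelta(hours=23, minutes=56, seconds=4)

nominal_shift_s = (solar_day - sidereal_day).total_seconds()  # 236 s, i.e., ~4 minutes earlier each day
observed_shift_s = 244.5  # midpoint of the 244-245 s average quoted above; varies per satellite

print(f"Nominal daily advance of the GPS geometry: {nominal_shift_s:.0f} s (about 4 minutes)")
print(f"Observed advance exceeds the nominal value by roughly "
      f"{observed_shift_s - nominal_shift_s:.0f} s per day, so a fixed 4-minute planning rule slowly drifts.")
```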

(For the rest of Penina Axelrad and Kristine M. Larson’s answer to this question, please download the complete article using the PDF link above.)

Q: How can dual frequency code and carrier measurements be optimally combined to enhance position solution accuracy?

A: The smoothing of GPS code pseudorange measurements with carrier phase measurements to attenuate code noise and multipath is a well-established GPS signal processing technique. Unlike carrier phase real time kinematic (RTK) techniques, carrier-smoothed code (CSC) positioning solutions do not attempt to resolve carrier phase ambiguities. As a result, they offer a number of design and operational advantages for those applications that do not require RTK accuracies.

Ionospheric effects are a limiting factor in how much smoothing of pseudorange errors can be accomplished with single-frequency measurements. The use of dual-frequency code and carrier measurement combinations in CSC processing to attenuate pseudorange errors and as a precursor for carrier phase ambiguity resolution has gained increasing importance, particularly with the availability of all-in-view dual-frequency GPS receivers in the survey and military markets. Interest in these techniques will increase with the advent of additional GNSS signals as the result of GPS modernization and implementation of Galileo, along with the proliferation of differential services.
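
To make the carrier-smoothed code idea concrete, here is a minimal single-frequency Hatch-filter sketch of our own; the dual-frequency combinations discussed in the article extend this basic recursion to remove ionospheric code-carrier divergence. All variable names and the toy data are illustrative.

```python
import random

def hatch_filter(pseudorange_m, carrier_phase_m, window=100):
    """Carrier-smoothed code (Hatch filter) for one satellite, single frequency.

    pseudorange_m and carrier_phase_m are per-epoch measurements in meters.
    window is the smoothing constant N: larger N attenuates code noise and
    multipath more, but increases sensitivity to code-carrier (ionospheric)
    divergence, which is what the dual-frequency combinations address.
    """
    smoothed = []
    for k, pr in enumerate(pseudorange_m):
        if k == 0:
            smoothed.append(pr)  # initialize with the raw pseudorange
            continue
        n = min(k + 1, window)
        # Propagate the previous smoothed range with the low-noise carrier delta,
        # then blend in a 1/N share of the new, noisier pseudorange.
        predicted = smoothed[-1] + (carrier_phase_m[k] - carrier_phase_m[k - 1])
        smoothed.append(pr / n + (n - 1) / n * predicted)
    return smoothed

# Toy usage: range changing 100 m per epoch, 3 m of code noise, noiseless carrier.
truth = [20_000_000.0 + 100.0 * k for k in range(200)]
code = [t + random.gauss(0.0, 3.0) for t in truth]
carrier = list(truth)
sm = hatch_filter(code, carrier)
print(f"Raw code error at last epoch: {code[-1] - truth[-1]:+.2f} m, smoothed: {sm[-1] - truth[-1]:+.2f} m")
```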

(For the rest of Dr. Gary McGraw’s answer to this question, please download the complete article using the PDF link above.)

Q: What is the availability of Galileo receivers?

A: With the launch of the GIOVE-A (Galileo In-Orbit Validation Element – A) Galileo test satellite in December last year, the European Galileo satellite navigation system is making progress. How will we be able to recognize the benefits of Galileo? We will require enough Galileo satellites to make a difference compared with using GPS alone, and we will require dual-mode Galileo/GPS receivers.

First, let us recap what Galileo will provide to users. And second, let us summarize what benefits we can expect to see, not only from Galileo alone but from a combined GPS/Galileo constellation of approximately 60 satellites.

Galileo will offer several worldwide service levels, including open access and restricted access for various segments of users.

(For the rest of Tony Murfin’s answer to this question, please download the complete article using the PDF link above.)

The L2C Study: Benefits of the New GPS Civil Signal

GPS has had enormous benefits to the economy and society that go well beyond military and civil aviation applications – that is becoming ever more widely understood. What has been more open to discussion are the civilian non-aviation benefits of further U.S. efforts at GPS modernization, particularly the introduction of additional signals.

In an effort to define and measure civilian benefits, the U.S. Departments of Commerce and Transportation commissioned economic analyses of civil signal modernization. Particular emphasis was placed on the value of the L2C signal centered at 1227.60 MHz, which recently began broadcasting from the first modernized GPS Block IIR-M satellite. This article is an outgrowth of that effort.

The analysis focused on the value of signals at more than one frequency for precision non-aviation use by business and government. It considered how utilization of the second civilian signal and its benefits would evolve in the coming decades as the L2C constellation expands and as additional signals become available from GPS and other GNSSes.

In the study, projections were developed under four scenarios — with the “moderate benefits” scenario seeming most likely — that reflect combinations of developments, including the strength of markets, the timing of L2C signal availability, the timing of Galileo availability, and complementary and competitive relationships with augmentations.

The main findings of the study are:

  • The projected number of U.S. high precision users of any signal nearly doubles from 39,000 to 75,000 from 2004 to 2008, and reaches 146,000 in 2012 and 333,000 in 2017.
  • Under a “moderate benefits” scenario, the number of L2C users reaches 64,000 by 2017, of which 35,000 are dual frequency users and 29,000 use three or more frequencies.
  • Civilian benefits of L2C net of user costs range from $1.4-$9.6 billion under alternative scenarios and civilian net benefits are about $5.8 billion under the moderate benefits scenario.
  • Results are robust: positive present values of benefits net of user costs are obtained in all tests, and the ratio of benefits to user costs ranges from 8 to 20 in all tests.

In addition to the domestic benefits examined, L2C will undoubtedly have important international benefits.
This article presents in more detail how we defined the problem, approached the study, and arrived at those conclusions.

The L2C Evolution
L2C, together with the present L1 C/A-code signal and the future modernized civil signal L1C, will provide an alternative to augmented single frequency GPS for precision users. Separate investigations have outlined the incremental benefits of L1C (see the sidebar, “The L1C Studies,” at the end of this article).

L2C signals can be used for both horizontal and vertical measurement and positioning along with L1 C/A as satellites become available over more areas and at more times of the day. The first satellite can be used for improved timing. L2C also can be used in configurations of three or more frequencies in combination with the forthcoming GPS L5 signal and with signals from Galileo and GLONASS.

At various times in each signal’s deployment and development of markets, other signals will, to varying degrees, provide complements to L2C and competitors to it. L2C has its greatest potential to generate benefits for dual frequency applications until alternative signals are widely utilized, and for long-term use in applications taking advantage of three or more frequencies.

The L2 signal is currently being widely used for augmentations, and the new signals can be used in that way along with the existing constellation. However, L5’s use as a competitor to L2C and as a partner to L2C in multiple frequency implementations primarily depends on the launch timeline for satellites carrying the L5 signal since L5, centered at the 1176.45 MHz frequency, is not currently in service. Plans call for its implementation on the GPS Block IIF satellites, with the first IIF now expected to be launched in 2008.

L2C deployment requires a commitment to operational capability. Decisions will be required as to launch dates and signal activation for each successive satellite containing the signal. The L2C benefits study is intended to contribute to decisions about L2C deployment with consideration of alternative scenarios informed by quantitative and qualitative analysis. 

To explore the implications of L2C evolution, we make projections about the numbers of U.S. precision users, incremental benefits, and user costs, based on examination of applications and available evidence on value of benefits, and consider how these can unfold over the period 2006–2030.

The analysis focuses on precision users of L2C who use two or more frequencies, although we do include estimates for supplementary multiple-frequency users and single-frequency users. However, the estimates of these types of use are more conjectural and do not contribute much to the overall value of benefits.

Benefits net of user costs are measured according to the widely accepted economic productivity approach, which includes productivity gains and cost savings. This comprehensive approach is more appropriate than one that measures benefits simply by expenditures on equipment and services.

Incremental benefits and user costs are defined to include all differences in outcomes from what would be expected in the absence of L2C. 

Signal Advantages and Availability
The L2C signal, scheduled to be the first of the modernized civil GPS signals, is intended for civilian purposes other than aviation and safety-of-life. It will provide greater accuracy and robustness and faster signal acquisition than the current L1 C/A-code signal.

Higher signal power and forward error correction will improve GPS mobile, indoor, and other uses.
The L5 signal that will arrive within a few years will be in a protected aeronautical radionavigation system (ARNS) band intended for aviation and other safety-of-life uses and will have broader applications.

Multiple signals will allow many users to obtain greater precision and availability at lower cost than achievable with proprietary augmentation systems. However, signal combinations used together with public and private augmentations for even greater precision and reliability will support applications with some of the greatest potential benefits.

Combined use of L2C with L1 C/A and L5 will also enable some precision users to achieve even greater reliability and accuracy. Although available simulations differ on the size of benefits of three signals over two, many professionals expect important advantages from such “tri-laning” techniques.
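
As a rough illustration of why multiple carriers help (our sketch, not part of the study), combining two frequencies produces “wide-lane” observables whose much longer effective wavelengths make carrier ambiguities easier to resolve. The L2C and L5 center frequencies are those given in this article; the 1575.42 MHz L1 value is the standard figure and is an assumption here.

```python
# Carrier wavelengths and the wide-lane combinations behind "tri-laning".

C_LIGHT = 299_792_458.0  # speed of light, m/s

freqs_hz = {"L1": 1575.42e6, "L2": 1227.60e6, "L5": 1176.45e6}

for name, f in freqs_hz.items():
    print(f"{name}: carrier wavelength {100 * C_LIGHT / f:.1f} cm")

# Differencing two carriers yields a "wide lane" with effective wavelength
# c / (f_a - f_b); the longer the wavelength, the easier the ambiguity resolution.
for a, b in (("L1", "L2"), ("L1", "L5"), ("L2", "L5")):
    print(f"{a}-{b} wide lane: {C_LIGHT / (freqs_hz[a] - freqs_hz[b]):.2f} m")
```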

The U.S. Air Force launched the first satellite broadcasting the L2C signal on September 25, 2005, and the signal became available on December 16. Going forward, two to four Block IIR-M satellites are expected to be launched each year. With six to eight satellites anticipated to be available by about December 2007, users will be able to access at least one L2C-capable satellite at almost all times. Eighteen L2C-capable satellites (including the Block IIF generation) will be available by about 2011 and 24 around 2012. (These statements are based on official 2005 launch schedules and are subject to revision.)

The first L5 launch is scheduled for March 2008. L5 does not have a GPS signal in use at its frequency, so it will not be usable to any great extent until a large part of its constellation is available. In contrast, L2 is already in place to transmit the military P(Y) code, and the carrier signals of the satellites are currently being used along with L1 C/A for higher-accuracy applications.

Consequently, the L2C signal can be used immediately as a second frequency. The GPS signal L1C, which is being planned now for implementation on the GPS III satellites scheduled for launch beginning in 2013, will be usable immediately, even for single frequency use, without augmentation because it is at the same frequency as the L1 C/A-code.

Using Multiple Frequency GPS
Many private and government precision applications could potentially benefit from multiple frequency GPS.
For example:

  • Centimeter accuracy is important to many land and marine surveying applications, including planning, zoning, and land management; cadastral surveying; harbor and port mapping; aids to navigation; coastal resources management; mapping; and surveys of sensitive habitats.
  • Machine control applications using high precision GPS have grown rapidly in a number of sectors, including agriculture and forestry, mining, construction, energy, transportation, structural monitoring, and positioning for mapping and geographic modeling.
  • Civil applications that rely on precise timing will benefit from increased GPS signal availability and elimination of atmospheric effects possible using dual-frequency techniques. Beneficiary industries include those operating cellular telephone, power, and financial information networks.

Scope of Benefits and Costs
Incremental benefits — those that arise because of the availability of L2C— include far more than the comparison of multiple frequency with augmented single frequency use. Companies adopting GPS in the future may even skip single-frequency options and instead choose multiple-frequency equipment (incorporating L2C) over non-GPS alternatives. Large candidate markets include construction, agriculture, and other applications where technological alternatives exist.

In some organizations, dual-frequency GPS will be the catalyst for extensive changes in systems that will occur earlier than if dual frequency GPS had not been adopted.

In the L2C study, benefits are measured according to the “economic productivity approach,” which is superior to the expenditure/economic impact approach because:

  • Productivity gains and cost savings, which this approach emphasizes, are the main purpose of much of GPS deployment and can be much larger than expenditures.
  • Benefits may accrue to a large number of customers of the purchaser, as occurs with use of GPS timing in communications, financial services, and electric power and in use of GPS positioning for mapping, structural monitoring, and weather.
  • The more common approach (economic impact) gauges benefits by added GPS spending without deducting the loss of benefits of non-GPS expenditures that are replaced.

L2C benefits can take both market and non-market forms, including increases in the productivity of business and government operations, user cost savings, benefits to the public through provision of public services and saving lives, and through improved health and environment.

Net benefits are benefits minus user costs. Incremental user costs include all additional costs that are expected with the availability of L2C, not simply the difference in costs between single- and dual-frequency receivers. These can take the form of enhancements and accessories purchased when adding L2C capability (e.g., better displays, controllers, and software) or costs associated with users upgrading to multiple frequency GPS from less sophisticated single-frequency GPS systems or non-GPS systems.

However, incremental user cost is net of savings from use of receivers with less proprietary technology and any reduced use of private augmentation subscription services.

Expenditures to develop the GPS system infrastructure (satellites and ground segment) are not included, however, because most represent nonrecurring, sunk costs. Moreover, if we added them to our L2C analysis, we would need to include benefits to aviation and military users as well as their associated equipment costs.

Scenarios
The analysis takes into account alternative conditions of timing and impact of alternatives through the use of scenarios. Projections of signal use and value of benefits are developed through the year 2030 under four scenarios: High Opportunity, Moderate Benefits, Diluted Benefits, and Opportunity Lost.

These scenarios reflect combinations of developments, including the strength of markets, the timing of L2C signal availability, the timing of Galileo availability, and complementary and competitive relationships with augmentations. (See the sidebar, “L2C Benefit Scenarios” at the end of this article for details of assumptions behind each.)

Probabilities are not given for the scenarios because the likelihood of alternative Galileo delays cannot be evaluated quantitatively. Moreover, the diluted benefits and opportunity lost scenarios are significantly affected by U.S. GPS policy, which is also not predicted.

Estimates of GPS Users
The L2C study projections shown in Figure 1 are based on assumed rates of decline in prices for user equipment and services and increases in the number of users in response to price changes. Projections reflect assessments of market sizes and patterns of market penetration under each scenario. Allowance also is made for effects of economic growth on market size. Table 1 (to view tables and figures, please download the PDF of this article using the link above) provides a detailed breakdown of results by scenario.

Within each scenario, projections are made for precision L2C users of three or more frequencies, dual frequency precision users, multiple frequency supplementary users, and single frequency users of L2C.

The starting point for determining the number of high precision users is a widely relied-upon estimate of 50,000 high precision users worldwide in 2000. We assumed that the United States had 40 percent of precision users in that year.

The study further assumes that the number of U.S. high-precision GPS users will grow by 18 percent per year from 2000 to 2030. This projection is based on a rate of price decline for user equipment of 15 percent per year and a corresponding 1 percent increase in users for each 1 percent decline in price. Finally, we include an assumption of general growth in the economy (i.e., independent of GPS receiver price) that adds 3 percent per year.

These assumptions and calculations produce a projection of 38,776 U.S. high precision GPS users (those using augmentations) in 2004. The estimated number of U.S. high precision users of any signal or combination nearly doubles to 75,177 from 2004 to 2008 and reaches 145,752 in 2012 and 333,445 in 2017.
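
The projection arithmetic is straightforward compound growth; the sketch below is ours and simply reproduces the figures above from the stated assumptions.

```python
# 50,000 high precision users worldwide in 2000, 40 percent of them in the
# United States, growing 18 percent per year (15 percent price-driven demand
# plus 3 percent general economic growth).

us_users_2000 = 50_000 * 0.40
annual_growth = 1.18

for year in (2004, 2008, 2012, 2017):
    users = us_users_2000 * annual_growth ** (year - 2000)
    print(f"{year}: {users:,.0f} projected U.S. high precision users")
# Matches the 38,776 / 75,177 / 145,752 / 333,445 figures quoted above (to within rounding).
```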

We computed the numbers of multi-frequency GPS users by applying an estimated percentage to the number of high-precision users for each scenario. The number of multi-frequency precision users adopting dual versus three or more frequencies was then calculated using projected values for the percent of each category. Finally, the number of L2C users was calculated based on projections of the percent of multiple frequency users that use L2C, constructed to reflect the dynamics of each of the scenarios. 

Rapid growth is projected in the numbers of U.S. precision multiple-frequency L2C users. In the moderate benefits scenario, the number of L2C users reaches 64,000 by 2017, of which 35,000 are dual frequency users and 29,000 use three or more frequencies. The numbers of L2C users vary widely among scenarios.

Average Net Benefits per User
The study defines average incremental net value of benefits per L2C user as the incremental value of benefits per L2C user above the incremental user cost of equipment and services. Benefits largely reflect productivity gains and/or cost savings. Estimates reflect a review of available evidence ranging from formal studies to case histories and expert opinion across a wide range of applications.

Our research suggests that average annual incremental benefit per precision L2C user net of costs could reach the range of $8,000–$16,000 per year. This includes benefits across systems that are not attributable to specific numbers of users and non-market benefits, such as safety and environmental advantages, as well as market benefits associated with the value of goods and services transactions. Market benefits attributable to numbers of users are estimated at 60 percent of all incremental net benefits.

These are peak values after benefits have had an opportunity to rise with experience using the new signal. The values decline from their peaks as new users with lower benefits are attracted by declining costs and some high benefit users move to alternatives.

To gauge the plausibility of these figures, consider the following examples (a short back-of-the-envelope check appears after the list):

  • If a worker saved one hour a week by avoiding rescheduling due to signal unavailability, slow signal acquisition, loss of lock, and additional work due to phase ambiguities, and further assuming labor costs of $80 per hour (including salary, fringe benefits, equipment, support staff, and other overheads), the saving would total $4,000 per year. Improvements in the organization’s processes with better work flow could make the savings even greater.
  • If the telecommunications, electricity generation, and financial industries together had system benefits that together were valued at $20 per customer over 20 million customers, the benefits would be $400 million per year. Market benefits of $400 million per year, if divided by 100,000 dual frequency users, for example, would amount to an average of $4,000 per user per year.
  • $400 million in non-market benefits over 100,000 precision users would equal an additional $4,000 per user per year.

(This could result, for example, from avoiding 100 deaths due to industrial accidents or environmental impacts at a value of $4 million per incident.)
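
Here is the back-of-the-envelope check referred to above, using only the illustrative figures from the list; the 50-week working year is our added assumption.

```python
# Per-user plausibility check for the figures listed above.

labor_saving = 1 * 80 * 50          # 1 hour/week saved at $80/hour over ~50 working weeks (our assumption)
print(f"Labor saving per worker: ${labor_saving:,} per year")                              # $4,000

system_benefit = 20 * 20_000_000    # $20 per customer across 20 million customers
print(f"System benefits per precision user: ${system_benefit / 100_000:,.0f} per year")    # $4,000

nonmarket_total = 100 * 4_000_000   # 100 avoided incidents valued at $4 million each
print(f"Non-market benefits per precision user: ${nonmarket_total / 100_000:,.0f} per year")  # $4,000
```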

The present values of incremental user costs range among scenarios from $175 million to $514 million in year 2005 purchasing power.

Costs represent one eighth or less of the total value of benefits in each scenario.

Value of Benefits
Civilian net benefits per user are incremental, net of incremental costs, and derive from prospects for major areas of application. The patterns incorporate some high-value initial use, assume that higher benefit users switch earlier to newer signals, factor in a buildup of productivity gains with experience, and project lower values for late-entry users attracted by lower equipment prices as well as later increases in higher benefit users switching to alternative signals.

We calculate the value of civilian net benefits of L2C by multiplying civilian net benefits per user by the number of L2C users for each user type and scenario. Higher net benefit scenarios result from higher benefits per user and larger numbers of users.

At a 7 percent real (above inflation) discount rate, present values of total net civilian market benefits range from $1.4 billion to $9.6 billion. Benefits under the moderate benefits scenario have a present value of $5.8 billion and those under the high opportunity scenario $9.6 billion. (Values are discounted using annual data to calendar year 2006. That essentially places the values at the middle of 2006.)
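
The present-value mechanics are standard discounting at the stated 7 percent real rate back to 2006. The sketch below is ours, and the annual benefit stream in it is purely hypothetical rather than study data; only the arithmetic mirrors the study.

```python
def present_value(net_benefits_by_year, rate=0.07, base_year=2006):
    """Sum of annual net benefits discounted to the base year."""
    return sum(b / (1 + rate) ** (year - base_year)
               for year, b in net_benefits_by_year.items())

# Hypothetical stream: $0.5 billion per year from 2010 through 2030 (not study data).
hypothetical = {year: 0.5e9 for year in range(2010, 2031)}
print(f"Present value in 2006 dollars: ${present_value(hypothetical) / 1e9:.1f} billion")
```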

Nearly all of the incremental benefits of L2C stem from precision use of two or more frequencies. That is because of both the moderate numbers of other types of users in these scenarios and their low benefits per user.

The timeframe in which other signals become available after L2C plays an important role in the size of estimated benefits. In the high opportunity scenario, for example, dual-frequency net benefits appear higher than benefits from use of three or more frequencies because the latter applications start later as additional frequencies become available.

In the other scenarios, benefits from applications using three or more signals are higher than dual-frequency benefits because the benefits of dual frequency do not remain as strong once competing frequencies become available.

New spending can encourage greater long run economic growth, especially when it is associated with new technology for widely usable infrastructure. The spending may induce others to innovate, invest in greater capacity, take risks and/or provide financing. While direct estimates of the size of long run economic multipliers are not readily available, analyses of determinants of growth suggest that effects are modest, perhaps adding 20% to market benefits. Because of the uncertainty surrounding such estimates, no allowance is made for growth multiplier effects in the estimates shown.

Cost-Benefit Analysis
The ratio of incremental civilian benefits to user costs is calculated by dividing the present discounted value of total incremental benefits (including net benefits and costs) by the present value of incremental costs. These are shown with a 7% real (above inflation) discount rate.

The ratios of benefits to costs range from a multiple of 20 in the high opportunity scenario to 9 in the opportunity lost scenario. It would be surprising if benefit/cost ratios were not high because only direct user expenses (and not system costs) are included to get a picture of incremental costs of each set of outcomes. 

The moderate benefits scenario, which has a ratio of 20, is considered more likely than the others. Because the interest is in obtaining the greatest benefits, policy should focus on the present value of net benefits rather than on the benefit/cost ratio when all ratios are high.

As mentioned, changes in various factors could substantially affect the outcomes of L2C benefits and produce either an overstatement or an understatement of these. See the “Benefit Variables” sidebar at the end of this article for a listing of the most important factors.

Conclusions
Rapid growth is projected in the numbers of U.S. precision GPS users and in most scenarios for the numbers of high-precision multiple frequency L2C users. Substantial L2C benefits can occur along with availability of other signals and constellations, augmentations, and alternative technologies. While Galileo will compete with L2C, Galileo signals also can increase precision L2C use in multiple frequency applications, an alternative that will become increasingly affordable.

The economic productivity approach offers a means of considering benefits in a comprehensive way. Benefits and costs are incremental. They are defined to include all changes that occur as a result of the existence of L2C.

Defined comprehensively, benefits can encompass results from more extensive changes in equipment and systems and include both benefits that are attributable to specific numbers of users and those that may be incorporated in systems and spread over a broad population. They include both market and non-market benefits — those that are not bought and sold in markets, such as benefits to life, health, security and the environment.

User costs also are incremental, including all changes that occur with the availability of L2C, and are net of savings from moving to less sophisticated and less proprietary equipment.

Sidebar: The L1C Studies
Before the L2C study, important progress had already been made in understanding the benefits of additional GPS signals. These activities included the discussion of civilian applications in the report of the Defense Science Board Task Force on GPS, released last December, and the L1C Study undertaken by the Interagency GPS Executive Board in 2004. (See the “Additional Resources” section at the end of this article to find out how to obtain these studies on line.)

Upper limits of total benefits of L1C for the single year 2005 — including those obtained by single- and multiple-frequency users in private households, businesses, governments — were estimated at approximately $2 billion: $640 million for mobile and wireless location services, $62.5 million for information/data services, $990 million for “commercial GPS,” and $490 million for in-vehicle information and navigation services (telematics).

The L1C study approximated a “rough order of magnitude” dollar value of L1C applications based on 2005 spending by applying a “team consensus” for an assumed incremental benefit as a percentage of market value (revenue) for each of 13 user categories. Spending in user group categories was based on a compilation of trade estimates.

Sidebar: L2C Benefit Scenarios
The four scenarios developed to support the L2C benefits study, along with the assumptions underlying each, are the following:

High Opportunity

  • Timely signal availability
  • Larger than expected markets
  • High complementarity with L5
  • Success of High-Accuracy Nationwide Differential GPS augmentation
  • Full Galileo deployment in 2012 with less than complete technical performance

Moderate Benefits

  • Timely L2C availability
  • Large potential markets
  • Benefits moderated by competition from other signals and augmentations
  • Full Galileo deployment in 2011

Diluted Benefits

  • Large potential markets
  • Gradual L2C deployment and uncertainty about schedules slow investment in innovation and market development
  • Many users wait for L5 and for Galileo, which is expected in 2010
  • Improvements in public and private augmentations make single signal use more attractive

Opportunity Lost

  • Late signal initiation and protracted pace of L2C deployment
  • Slow introduction and adoption of user equipment
  • Some users wait for Galileo
  • Moderately large potential market size, moderate effects of availability of other signals and delay in Galileo FOC to 2011
  • Attractiveness of augmentations

Sidebar: Benefit Variables
Overstatement could result from greater-than-anticipated competition from other signals, from augmentations, and from other technologies. For example,

  • Greater attractiveness of other signals because of the availability of satellites from Galileo in addition to those from GPS at the L1 and L5 frequencies
  • Advances in augmentations that make single frequency use more attractive
  • Slower price declines for L2C user equipment
  • Less triple frequency use when additional satellites are available from Galileo and/or greater use of Galileo signals at frequencies that do not correspond with L1 and L5
  • More users waiting for L5 for non-aviation civilian dual frequency use than allowed for in the study.

Understatement could result from

  • More important and/or numerous applications than were allowed for in the calculations
  • Faster price declines for multiple frequency user equipment (e.g. if competition squeezes high end margins even more) and/or larger price sensitivity of demand
  • Non-market benefits greater than the 25% of market benefits assumed
  • Impacts of L2C on long-run economic growth, which were not included in the calculations and could add perhaps 20% to benefits.

For figures, graphs, and images, please download the PDF of the article, above.

Acknowledgments
Steve Bayless, Tyler Duval, Jason Kim, Scott Pace, Mike Shaw, Tom Stansell, Dave Turner, Jack Wells, Rodney Weiher, and Avery Sen offered comments, guidance and assistance to the study. Many others contributed expertise through interviews.

Additional Resources
Kenneth W. Hudnut, and Bryan Titus, GPS L1 Civil Signal Modernization (L1C), Interagency GPS Executive Board, July 30, 2004, <http://www.navcen.uscg.gov/gps/modernization/L1/L1C-report-short.pdf>

U.S. Defense Science Board, The Future of the Global Positioning System, Washington, D.C.: Office of the Under Secretary of Defense For Acquisition, Technology, and Logistics, October 2005, <http://www.acq.osd.mil/dsb/reports/2005-10-GPS_Report_Final.pdf>

Uh-Oh, It’s UXO!: A Global Differential GPS Solution for Clearing Unexploded Ordnance

At any given time along a large swath of rural, northern Texas you might witness a loud, dirty ritual. A handful of men standing still in the middle of a field, their expectant eyes fixed on the same point. Just about the time your gaze sets on the same point, it happens: a deep sound like the forceful downbeat of a drum cracks through the air and the dirt-caked ground explodes in a dusty plume of metal and sand. The cloud dissipates and the men, satisfied that the World War II–era munition has been successfully destroyed, move on to their next pin-flagged target.

It’s an almost daily exercise for the survey and “dig” teams of Parsons Corporation and USA Environmental, who together for the past four years have been steadily clearing the once-active infantry and artillery training facility Camp Howze and returning the 59,000 acres of land to its former cattle-grazing condition.

Officially designated as the Former Camp Howze Removal Project (FCHRP), the Texas effort is part of a long-standing U.S. Army Corps of Engineers (USACE) program to clean up unexploded ordnance (UXO) remnants at former military training bases around the globe. And it’s one in which Parsons, an engineering and construction firm based in Pasadena, California, has been heavily involved for the past 15 years. During that time, Parsons survey and explosives teams have located, unearthed, and recycled or destroyed more than a billion pounds of munitions, fragments, and other range-related items.

Unlike the typical short-term UXO removal projects of the past, the FCHRP has already required four years of dedicated labor and doesn’t as yet have a fixed end point. The USACE funding approach enables the teams to stay on site until either the money runs out or all the ordnance is found and cleared, says Terry Willis, Parsons field data manager for FCHRP.

Such unusual circumstances make budgeting for operational costs and developing highly efficient and productive work methods that much more critical, says Willis, because any substantial funding cuts by Congress could mean the end of the project.

An additional motivation for taking an open-ended approach to the FCHRP could be that all of the former Camp Howze land is privately owned and is home to families who have lived and worked there for more than 50 years. Each family must grant consent to the teams to clear individual parcels.

Despite the fact that their house or barn could be sitting on a land mine or other ordnance, many people are rather complacent and don’t see the urgency in having the munitions removed, Willis says. Even though field teams have found artillery rounds eight feet from people’s doorsteps, acquiring the necessary consent to access the property has been a time-consuming process.

The Basis for Going Baseless
The rather atypical FCHRP presented Parsons with the opportunity to arm the project teams with their own atypical survey “weapon” — a global satellite-based augmentation system (GSBAS) that provides corrected GPS positioning without the use of base stations.

Although Parsons had never before employed the GSBAS technology in its numerous UXO removal projects, given the way the GSBAS has performed so far on the Texas project, Willis predicts that similar systems will become as commonplace in the field as the shovels and explosives used to remove munitions.

Previously, Parsons’ UXO-removal teams employed real-time kinematic GPS (RTK-GPS) systems to create search grids, find and stake out anomalies for investigation, and record the position of munitions found. RTK techniques require the broadcast of differential corrections to the GPS signals’ carrier phase measurements. These corrections are transmitted via a high-speed data modem from a base station to roving GPS receivers.

“Although RTK-GPS is extremely accurate,” says Willis, “its complexity, bulk, and expense make it less than ideal for Parsons’ purposes.” Since putting the GSBAS system to use in the field, the Camp Howze team has virtually eliminated its need for RTK-GPS in the majority of the fieldwork, obtaining decimeter accuracy for one-third the cost of an RTK-GPS unit.

In a departure from local real-time differential GPS systems, the GSBAS relies on a global network of reference stations to calculate and compensate for clock and orbit errors in the satellite transmissions. Broadcast of the corrections, available globally in real time, eliminates the need for local base stations, which in turn eliminates the struggle to maintain communication links to a source of local corrections. In short, users are no longer tethered to a base station for precise positioning.
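
As a rough illustration of what those broadcast corrections do, the sketch below applies hypothetical satellite clock and orbit corrections directly to a single pseudorange measurement. The function name, sign conventions, and numbers are assumptions made for the example and do not describe any particular GSBAS product.

# Rough sketch of applying globally broadcast corrections to one raw
# pseudorange; structure and values are illustrative only.

C = 299_792_458.0  # speed of light, m/s

def corrected_pseudorange(raw_range_m, sat_clock_err_s, orbit_err_along_los_m):
    """Remove satellite clock and orbit errors from a raw pseudorange.

    raw_range_m           -- measured pseudorange, meters
    sat_clock_err_s       -- broadcast satellite clock error estimate, seconds
    orbit_err_along_los_m -- orbit error projected onto the line of sight, meters
    """
    return raw_range_m + C * sat_clock_err_s - orbit_err_along_los_m

# Hypothetical measurement: ~22,000 km range, 10 ns clock error, 0.8 m orbit error.
print(corrected_pseudorange(22_000_000.0, 10e-9, 0.8))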

Recycling a Metallic Past
For four years, from 1942 to 1946, Camp Howze was the temporary home for thousands of soldiers as they prepared for battle overseas. Located along the Texas-Oklahoma border about 55 miles north of Dallas, the camp offered an immense training area and artillery ranges, along with libraries, chapels, theaters, banking facilities, and even a camp newspaper.

For the last four years, however, former Camp Howze has been the temporary home of the Parsons and USA Environmental teams as they continually search for the telltale metallic signs of the camp’s previous incarnation.

When their FCHRP activities began in 2002, the Parsons team started out with very scant historical and practical knowledge of the area, having only a few sheets of county property maps and background information on the camp itself provided by the U.S. Army. This information included engineering maps with the approximate locations of artillery ranges, aerial photos of the facility from 1943, and written records from units that trained there.

Parsons then obtained updated aerial photos of the site taken in 1995 and records of what project managers call “phase one properties,” occupied properties or buildings believed to be near or on former range or training areas. All of these data sets were incorporated as layers into a geographical information system (GIS) to begin to identify logical areas to investigate for UXOs. Of critical importance for prioritizing their efforts was the identification of current high-traffic areas where private citizens live, work, and play.
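
A minimal sketch of that kind of overlay analysis, assuming the open-source geopandas library and hypothetical layer file names, might look like the following; it simply intersects suspected range polygons with occupied-parcel polygons to flag candidate high-priority areas.

# Minimal sketch of prioritizing areas by overlaying GIS layers.
# Assumes geopandas is installed; the file names are hypothetical.
import geopandas as gpd

ranges = gpd.read_file("suspected_ranges.shp")    # digitized from 1943 maps and photos
parcels = gpd.read_file("occupied_parcels.shp")   # current homes, barns, and workplaces

# Parcels where people live or work on top of former range land get first priority.
priority = gpd.overlay(parcels, ranges, how="intersection")
priority.to_file("priority_areas.shp")
print(f"{len(priority)} parcel/range intersections flagged for investigation")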

“Once we identify the areas to investigate, we determine which methods for removing ordnance will be the most effective based on many factors such as accessibility, terrain, vegetative cover, time of year and land-owner consent,” says Willis. In the case of Camp Howze, all of those factors led to the decision to couple standard search tools with new technology to improve efficiency.

The two most common investigative tools are what Willis calls “magnetometer (Mag) and Dig” — a thorough, yet costly and time-consuming process of manually clearing smaller areas with the aid of shovels and handheld metal detectors — and “digital geophysics,” a survey technique that uses large, highly sophisticated electromagnetic sensors to detect the presence of buried metal objects. As the latter method can rapidly cover a much larger territory, Parsons first applied the technique to perform a geophysical survey in combination with RTK-GPS to pinpoint suspected unexploded munitions.

To perform the geophysical survey, three computer-controlled electromagnetic sensors are connected together to create a three-meter wide sensing array. The sensors are then physically pulled by an all-terrain vehicle over the area of interest to detect the presence of metal items in the ground and record their positions.

The readings from the electromagnetic sensors coupled with the continuous GPS readings are postprocessed to generate coordinates of anomalies, that is, possible UXOs, which are then added to the GIS. All uploaded position readings are tied to Texas North Central State Plane coordinates.
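
The sketch below illustrates the coordinate-tying step with the pyproj library; the EPSG code used (2276, assumed here to be NAD83 / Texas North Central in U.S. survey feet) and the sample latitude/longitude are assumptions for the example, not project data.

# Sketch of converting a GPS latitude/longitude to Texas North Central
# State Plane coordinates. The EPSG code and sample point are assumptions.
from pyproj import Transformer

to_state_plane = Transformer.from_crs("EPSG:4326", "EPSG:2276", always_xy=True)

lon, lat = -97.14, 33.65   # hypothetical point in the former Camp Howze area
easting, northing = to_state_plane.transform(lon, lat)
print(f"Easting: {easting:,.1f} ftUS   Northing: {northing:,.1f} ftUS")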

Through considerable postprocessing of the geophysical survey data, the Parsons’ geophysicists plot positions of the anomalies on digital maps and “flag” them through color-coded points to signify the level of probability of being UXOs.

Following their usual practice, at this stage the field teams would have used RTK-GPS to reacquire and flag the real-world points of potential targets detected by the geophysical survey. At the former Camp Howze, however, Willis chose to depart from tradition.

Given the terrain extremes in this region of rural Texas, the large study area, high-accuracy requirements, limited labor resources, and indefinite work schedule, Parsons needed a cost-effective and user-friendly survey solution that would enable UXO technicians to efficiently locate anomalies and precisely position them.

“We opted for the satellite-based system approach for a number of reasons, one of which was the considerable cost savings over an RTK-GPS rental,” he says. “Because it doesn’t require a base station, we don’t have line-of-sight issues nor need to spend considerable time troubleshooting communications and power supply issues. So, we can be much more productive in the field. And the simplicity of the system makes it much easier for the teams — who are not trained surveyors — to set up and use.”

Unearthing UXO
On any given day, Willis and his teams use the GIS as a logistical planning tool to map out clearing strategies based on the digitally flagged hot spots indicating the highest probability of buried munitions. The team imports those coordinates into the controller software of the receiver, and the Mag/Dig teams head to the site.

Once on site, the two-person survey team uses a “quick-start” feature built into the GSBAS receiver software that enables the system to reach full position accuracy immediately by using a previously surveyed position to initialize the navigation function. This setup process takes “less than five minutes,” says Willis, after which the survey team uses the satellite-based system to navigate to the predetermined points on the ground, where they stake the targets with pin flags.
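
That navigate-to-point step comes down to repeatedly computing the offset from the receiver's current fix to the next staked target. A minimal sketch of the arithmetic in local grid (state plane) coordinates follows; the coordinate values are hypothetical.

import math

# Minimal sketch of guiding a field technician from the current GPS fix to a
# flagged anomaly, working in local grid (state plane) coordinates.
# All coordinate values are hypothetical.

def range_and_bearing(curr_e, curr_n, tgt_e, tgt_n):
    """Return distance and grid bearing (degrees clockwise from grid north)."""
    de, dn = tgt_e - curr_e, tgt_n - curr_n
    distance = math.hypot(de, dn)
    bearing = math.degrees(math.atan2(de, dn)) % 360.0
    return distance, bearing

dist, brg = range_and_bearing(2_350_112.4, 7_201_933.8, 2_350_140.0, 7_201_950.0)
print(f"Target is {dist:.1f} ft away at grid bearing {brg:.0f} degrees")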

Following relatively closely behind the survey team, the dig team of three to seven UXO technicians, armed with shovels and handheld magnetometers, investigates each flagged point. They use the metal detector to verify the presence of metal, and, if the indications are affirmative, they gingerly dig up the object.

Should they unearth any munitions, they carefully inspect the UXO to determine whether it needs to be destroyed. If so, they set explosives and destroy it on the spot. All discovered ordnance is properly and precisely recorded — type, position, and confirmed destruction — and the collected field data uploaded into the GIS.
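
A minimal sketch of the kind of record that flows from the field into the GIS follows; the field names and values are hypothetical, not the project's actual schema.

from dataclasses import dataclass
from datetime import date

# Hypothetical per-item field record; not the project's actual schema.
@dataclass
class OrdnanceRecord:
    item_type: str        # e.g., "81 mm mortar"
    easting_ft: float     # Texas North Central State Plane, US survey feet
    northing_ft: float
    found_on: date
    destroyed: bool

record = OrdnanceRecord("81 mm mortar", 2_350_140.0, 7_201_950.0, date(2006, 5, 12), True)
print(record)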

“Although the GIS was not originally a requirement, it has become the information backbone of the project,” says Willis. “It’s a planning tool for UXO searches and the main repository for what we find in the field. It maintains all of the data layers that we have accumulated and created over the past four years, including aerial photos, topographic maps, scans of annotated response maps, parcel boundaries, and pipeline data to provide us with a comprehensive graphical resource. And because it’s tied into the field database, data from daily operations can be displayed geographically in various ways.” 

To date, with the combination of the GSBAS, the GIS, and their conventional Mag/Dig tools, the survey and UXO-removal teams have cleared more than 1,800 acres of Camp Howze’s most hazardous areas. Along the way, they have destroyed more than 860 live ordnance items, including mortars, artillery shells, anti-tank rockets, hand grenades, and land mines.

Stars in Their Eyes
Although Camp Howze stretches across nearly 59,000 acres, the FCHRP mandate is not to sweep 100 percent of the land but rather to investigate and clear the zones posing the greatest hazard to the public. Willis says that adding the satellite-based system to the fieldwork is helping Parsons to fulfill that charter more efficiently than with their previous RTK-GPS solution, predominantly because they can achieve near RTK-GPS accuracy without a base station.

“We work four 10-hour days per week,” says Willis. “If you spend an hour setting up a base station and another half hour to tear it down, you’ve lost at least an hour and a half of operational time, provided you don’t have any trouble with it during the day. In rural, rough terrain, radio line of sight is a problem, and it can be a long trip back to the base if we lose the radio signal.”

Also, powering the base for an entire day can be a challenge, he says. Parsons teams have used marine deep-cycle batteries to power the equipment, and sometimes the power supply still wouldn’t last an entire day. Cellular RTK was considered for its convenience, but the existence of many cellular “dead zones” in the area precluded its use.

“Because our survey team directly supports our dig team, both teams will normally have to shut down operations if something happens to the base station,” Willis adds. “It is costly to keep a dig team in the field. If they’re forced to stop work because the survey equipment is down, it’s very expensive.”

FCHRP requirements dictate that the teams position any ordnance they discover to within one foot. Willis says the satellite-based unit performs well enough for them to meet this standard. “The decimeter accuracy provided by the system is actually more than we require for this project,” he says. (A decimeter is about 0.33 feet, comfortably inside the one-foot specification.)

Because heavy thunderstorms and tornadoes are the only weather-related phenomena that will force the crews inside, the Mag/Dig teams need rugged equipment that’s also portable.

“The [GSBAS] system fits into a single carry case; so, much of the weight and bulk is reduced to a manageable size,” says Willis. “That simplicity and compactness makes it very reliable because it’s easy to transport into the field, set up and to use.”

Willis feels confident that the cost-effectiveness of the system will help Parsons to win similar projects in the future. “When bidding on these projects, it helps to be able to shave thousands of dollars off the cost by simply changing a piece of equipment,” he says.

In the meantime, people in this rural part of Texas can still count on witnessing a handful of men, staring at a fixed point in the distance, waiting for the inevitable explosion of dirt and metal.

For figures, graphs, and images, please download the PDF of the article, above.
