
Combining Positioning with Perception to Enable Autonomy

Positioning is the fundamental prerequisite for autonomy, and both absolute (global) and relative (local) information is required. While a critical piece, GNSS and inertial sensors need to be combined with other sensors, such as cameras, that perceive the surrounding environment to enable autonomous navigation.

Sandy Kennedy, Vice President of Innovation for Hexagon’s Autonomy & Positioning division

Because there’s no one sensor that works in every condition, sensor fusion is key to unlocking autonomy.

“A camera is a really powerful sensor, but it’s not a perfect sensor that works everywhere,” Hexagon Vice President of Innovation Sandy Kennedy said. “But where cameras are likely to have gaps, GNSS is unlikely to have gaps. Inertial is the heartbeat that will coast you through any gap, but it needs to have some kind of regular, external information to keep it on track because it has accumulating error over time.”
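
As a rough illustration of that complementarity, the short Python sketch below simulates a single-axis inertial solution coasting on a slightly biased accelerometer, with and without a once-per-second absolute position fix. The sensor rate, bias value, noise level and correction gains are all assumed for illustration; this is not Hexagon's implementation, only a minimal picture of why unaided inertial error accumulates while an aided solution stays bounded.

    import numpy as np

    rng = np.random.default_rng(0)
    dt = 0.01                      # 100 Hz inertial rate (assumed)
    bias = 0.05                    # m/s^2 uncompensated accelerometer bias (assumed)
    true_pos, true_vel = 0.0, 1.0
    aided_pos, aided_vel = 0.0, 1.0
    coast_pos, coast_vel = 0.0, 1.0
    alpha, beta = 0.2, 0.1         # fixed correction gains (assumed)

    for k in range(60_000):        # ten simulated minutes
        true_pos += true_vel * dt

        # Inertial mechanization: integrating a biased acceleration drifts quadratically.
        aided_vel += bias * dt
        aided_pos += aided_vel * dt
        coast_vel += bias * dt
        coast_pos += coast_vel * dt

        # Once per second, an absolute fix (GNSS here, or a camera-derived
        # position) corrects the aided solution and bounds its error.
        if k % 100 == 0:
            fix = true_pos + rng.normal(0.0, 0.5)   # 0.5 m measurement noise
            innovation = fix - aided_pos
            aided_pos += alpha * innovation
            aided_vel += beta * innovation

    print(f"coasting-only drift after 10 minutes: {abs(coast_pos - true_pos):.0f} m")
    print(f"aided position error after 10 minutes: {abs(aided_pos - true_pos):.2f} m")

Running the sketch shows the coasting-only error reaching the kilometre level after ten minutes, while the aided estimate stays within roughly a metre.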

Kennedy will focus on complementary sensors, including cameras, LiDAR and radar, and their integration with GNSS during her HxGN Live Global 2023 presentation. This year’s event is slated for June 12 to 15 at the CAESARS FORUM conference center in Las Vegas. Kennedy’s June 15 talk, Perception and Positioning: Enabling Autonomy with Vision, will explore how combining visual odometry measurements with GNSS and inertial data leads to a more reliable solution that supports a wider range of applications than any of these sensors can provide on their own.

An elegant combination

Kennedy will talk a bit about what odometry is and how it differs from a SLAM solution, and she'll also cover ways cameras are deployed today that aren't being used for precise positioning and navigation. For example, a drone can be flown with cameras purely for image collection and machine learning-based real-time classification. It's important to recognize that those cameras can, and should, also be leveraged for more reliable positioning.

“The part that’s interesting is that things can exist in their own data stream by themselves,” Kennedy said, “but being able to do an elegant combination of them that’s seamless, that’s easy and reliable for the user, is actually the problem that’s not well solved yet.”

She’ll also cover calibration requirements, an area that doesn’t get a lot of attention.

“You need to know how your sensors are affixed with respect to each other,” she said, “and you need to know pretty precisely if you’re trying to navigate precisely.”
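
To make that concrete, here is a small numerical sketch of an inter-sensor calibration: a hypothetical lever arm and mounting rotation relate a GNSS antenna, the vehicle body and a camera, and a one-degree boresight error is then applied to a point observed 20 m away. Every frame, offset and angle in it is invented for illustration.

    import numpy as np

    def rot_z(deg):
        """Rotation about the vertical axis by the given angle in degrees."""
        c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    lever_arm_body = np.array([0.8, 0.0, -1.2])       # antenna-to-camera offset, metres (assumed)
    R_body_to_cam = rot_z(90.0)                       # nominal camera mounting angle (assumed)

    antenna_pos_nav = np.array([100.0, 200.0, 30.0])  # GNSS/INS antenna position, nav frame
    R_nav_to_body = rot_z(15.0)                       # vehicle heading, 15 degrees (assumed)

    # Camera position in the navigation frame: antenna position plus the rotated lever arm.
    camera_pos_nav = antenna_pos_nav + R_nav_to_body.T @ lever_arm_body

    # A point observed 20 m along the camera boresight, mapped into the navigation
    # frame with the correct mounting angle and with a 1-degree boresight error.
    point_cam = np.array([20.0, 0.0, 0.0])
    with_good_cal = camera_pos_nav + R_nav_to_body.T @ R_body_to_cam.T @ point_cam
    with_1deg_err = camera_pos_nav + R_nav_to_body.T @ rot_z(91.0).T @ point_cam
    print(f"1 degree of boresight error moves the point by "
          f"{np.linalg.norm(with_good_cal - with_1deg_err):.2f} m")

Even a single degree of boresight error displaces that 20 m point by roughly 0.35 m, which is why the mounting geometry has to be known to a level commensurate with the positioning accuracy being targeted.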

The foundation

GNSS positioning, navigation and timing is the foundational technology that defines time and a common reference system, and it's also Hexagon's foundation. Hexagon is the leader in this area, and much of what applies to GNSS also applies to other types of positioning technology and to perception sensors.

“Everything that goes into doing something that’s operationally reliable from a GNSS sense actually applies equally when you start moving into more sensors,” Kennedy said. “Even more so because you have interactions to manage. We started with GNSS, then we added inertial to it and now we’re adding other sensors as well. There’s a large body of experience and knowledge that supports that across many different applications and use cases. And that’s really different than focusing on one aspect in one specific application. That’s not the same as understanding what it is to use it in a lot of different applications in a way that’s very reliable.”

The framework behind GNSS is estimation expertise, Kennedy said. GNSS involves a remarkable number of signal processing and estimation techniques, from signal acquisition and tracking through carrier phase ambiguity estimation, and then on to estimating inertial errors on the fly. All of that estimation and signal processing expertise can be applied to perception sensors like cameras and LiDAR, and the GNSS estimation framework has a great deal in common with the framework required for sensor fusion.
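
One compact way to see that shared framework is a generic Kalman-style predict and update cycle, in which a GNSS position fix and a visual-odometry measurement differ only in their measurement model and noise. The two-state Python example below is a deliberately simplified sketch with assumed models and values, not a description of Hexagon's filters.

    import numpy as np

    # State: one-axis [position, velocity], purely for brevity.
    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition model
    Q = np.diag([0.01, 0.01])               # process noise (assumed)
    x = np.array([0.0, 0.0])
    P = np.diag([10.0, 1.0])                # initial uncertainty (assumed)

    def predict(x, P):
        """Propagate the state and its covariance one step forward."""
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z, H, R):
        """Generic measurement update; GNSS and vision differ only in z, H and R."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # GNSS-style update: an absolute position measurement.
    x, P = predict(x, P)
    x, P = update(x, P, z=np.array([1.02]), H=np.array([[1.0, 0.0]]), R=np.array([[0.25]]))

    # Visual-odometry-style update: a frame-to-frame velocity observation.
    x, P = predict(x, P)
    x, P = update(x, P, z=np.array([0.48]), H=np.array([[0.0, 1.0]]), R=np.array([[0.04]]))

    print("fused position and velocity:", x)

The update step never needs to know which sensor produced the measurement; the design matrix H and covariance R carry that information, which is what makes the same framework extensible to cameras, LiDAR and radar.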

You can augment and supplement with some machine learning as well, she said, but you would “never approach a problem that has a good estimation framework purely by machine learning because it’s literally trial and error, and not efficient.”

Bringing it all together

GNSS gives you a coordinate, Kennedy said, but that coordinate doesn’t have any meaning until you put it in the context of a map. You need the other sensors to perceive what’s around you, because that’s the part GNSS is missing.

“GNSS gives you your anchor point to be able to put all your context together, but you still need that context,” she said. “Your perception sensors help you sense your immediate surroundings to figure out your context. You really have then the anchoring part of how do I fit all of these things together both in terms of assembling a scene or a map and also in terms of how do I calibrate all of these sensors to each other.”

The technologies Kennedy will describe are advancing, but they of course have their operating requirements and constraints. And they aren’t new; they’re building on decades of electronic navigation experience.

“It isn’t magic. It’s mostly deterministic. It’s science and engineering,” she said. “There’s actually been many more years put into this than most people are aware of.”

Kennedy is just one of the many speakers who will present during this year’s HxGN Live. Attendees will have the opportunity to take part in various breakout sessions, hands-on workshops and user forums. Visit the HxGN Live website to learn more and to register.
