



January 18, 2022


> Naturally, large range uncertainty increases the ambiguity of position, but the relative position of the satellites also matters. If they aren’t well spread, the exactness of calculated location also suffers.

(see the excellent example in OP)

Fun tidbit, the resulting error is known for the system in closed form as Geometric Dilution of Precision, and is a 3x3 (edit: or 4+x4+ if you are estimating bias or quantities like time, thx brandmeyer) matrix that depends on all the locations of the visible sats, and your position relative to them.

GDOP is a general relationship for any estimator, based only on the equations used to derive something from remote sensor measurements. It's possible to derive GDOP for any sensing system using the Fisher Information Matrix (which is the inverse of the GDOP matrix). Some minor caveats apply, but in general this is a useful trick.
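As a rough sketch of that relationship (satellite and receiver positions below are made up for illustration), the DOP matrix can be computed from a geometry matrix of unit line-of-sight vectors plus a clock-bias column; for unit-variance range errors the FIM is G^T G and the DOP matrix is its inverse:

```python
import numpy as np

# Hypothetical satellite positions (ECEF, metres) and a receiver position.
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])
rx = np.array([0.0, 0.0, 6371e3])  # roughly on the Earth's surface

# Geometry matrix: unit line-of-sight vectors plus a column for clock bias.
los = sats - rx
unit = los / np.linalg.norm(los, axis=1, keepdims=True)
G = np.hstack([unit, np.ones((len(sats), 1))])

# For unit-variance range errors, the FIM is G^T G; its inverse is the
# 4x4 DOP matrix (3 position dimensions plus time).
Q = np.linalg.inv(G.T @ G)
gdop = np.sqrt(np.trace(Q))        # overall dilution of precision
pdop = np.sqrt(np.trace(Q[:3, :3]))  # position-only component
tdop = np.sqrt(Q[3, 3])              # time component
print(gdop, pdop, tdop)
```

Note that GDOP² = PDOP² + TDOP² by construction, since the trace splits over the diagonal blocks.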

FIM is worth learning if you want to get into sensing & estimation.

Another fun thing: the FIM can be derived a number of ways, and it appears if you simply ask (mathematically) "What is the most likely position of the GPS sensor given the satellite locations?": it shows up as the Hessian matrix of the system you use while answering that question with, e.g., convex minimization.

All of sensing & estimation is mostly just convex optimization.


The natural expression of the DOP matrix is 4x4, since the receiver is computing a solution in 4D space-time. It's pretty common for the dominant eigenvector to be along the time-vertical axis for a terrestrial receiver.


Great point, thanks. I edited.


You and I definitely have a different notion of what is fun ;)


Geometric quality is easy to see if you think of using trig and measured angles to solve for a (roughly) equilateral triangle versus a triangle with a very small measured internal angle.

Also, it's easier to understand the variables vs unknowns of GPS if you consider that the direct measurement is of velocity and/or acceleration, and position is the resulting integral, after taking into account the probabilities of various solutions.

(Velocity and acceleration can be measured directly without making as many assumptions about various starting conditions.)


Do you know a good source to learn about the FIM?

(Postgraduate level stats/maths, mostly applied, tiny bit pure.)


For these kinds of problems, literature has multiple derivations of FIM for the purposes of tracking & estimation (and path planning for sensing -- my former specialty). Shameless plug: chapter 3.

Most of that was distilled from literature or basic math (and probably contains errors -- thanks grad school).

Bishop was always a good reference for me,

as was

B. Grocholsky, "Information-theoretic control of multiple sensor platforms," Ph.D. dissertation, University of Sydney, School of Aerospace, Mechanical and Mechatronic Engineering, 2006.

And here's a tutorial that might help:


Interferometry isn't convex, even the kernel trick won't save you. I don't think...


In my experience, the usual trick is to do a few iterations of "linearize and solve the new convex problem". Sometimes, you can get super clever and use LM:

Look there at equation 6: that's the FIM being left-multiplied when solving these types of problems (under a standard Gauss-Newton step, which is also common).
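A minimal sketch of that "linearize and solve" loop for the GPS pseudorange fix (satellite positions are synthetic and measurements are assumed equally weighted); the H^T H on the left of the normal equations is the FIM mentioned above:

```python
import numpy as np

def solve_fix(sats, pseudoranges, iters=10):
    """Gauss-Newton for [x, y, z, clock_bias] from pseudoranges (metres)."""
    x = np.zeros(4)
    for _ in range(iters):
        los = sats - x[:3]
        dist = np.linalg.norm(los, axis=1)
        predicted = dist + x[3]                       # range plus clock bias
        H = np.hstack([-los / dist[:, None], np.ones((len(sats), 1))])
        # Normal equations: H^T H is the FIM for unit-variance noise.
        dx = np.linalg.solve(H.T @ H, H.T @ (pseudoranges - predicted))
        x += dx
        if np.linalg.norm(dx) < 1e-4:
            break
    return x

# Synthetic check: build pseudoranges from a known state and recover it.
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
truth = np.array([1000e3, 2000e3, 6000e3, 300.0])   # position (m) + bias (m)
pr = np.linalg.norm(sats - truth[:3], axis=1) + truth[3]
print(solve_fix(sats, pr))  # recovers truth to well under a metre
```

Starting from the Earth's center is the usual cold-start trick; the iteration converges in a handful of steps for sane geometries.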

Another tidbit: If you apply matrix inversion lemma to eq 6, you can get the (Extended)Kalman Filter update steps. Somewhat related:


Why is it called "Geometric Dilution of Precision" and not "covariance matrix"? In what ways is the former not the latter?


GDOP is sometimes taken to mean the largest eigenvalue or trace of the covariance matrix. It's a metric for the badness of the estimate.

Often, GDOP is broken into components HDOP, VDOP, etc for the values corresponding to some earth-fixed coordinate frame. That starts to look more like statistics about the covariance matrix.

Here's a derivation:

Here, it ends up being (usually) the sqrt(trace(covariance))


This is very nice. Very clear, and the 3D interactives are excellent at guiding the explanation.

One thing that I thought was a little confusing was right at the beginning, when we were estimating the position of the figurine and there was an area of uncertainty shown by the yellow circle.

It isn't clear how you're estimating position, and why we have an area of uncertainty. At first I figured it was going to explain it using triangulation (measurement of angles) but there's no reason triangulation wouldn't be exactly as accurate as the tape measure method on a 2D surface, so wouldn't explain the area of uncertainty.

The description merely says:

> Just by using these three reference points we can relate the figurine’s position in the environment to an approximate placement on the map as show with the yellow shape on the right.

I worry that having this ambiguity so early on might put some people off from reading the rest, because they figure they don't understand that and so won't understand the rest of it.


Yes, that section is poorly written.

More generally, the assumptions about what things were uncertain and what could be taken as exact were poorly motivated. The author should either justify them with real-world constraints ("satellites can host atomic clocks but destroyers can't" -- not obvious!) or explicitly announce the assumptions as unjustified, but he shouldn't make it seem like the assumptions could have been reasoned to by the reader.

I also was disappointed the author used circles/spheres and guesses for the timing offset rather than the much more edifying choice of hyperbolas. Just as you can think of a sphere as points reachable by the end of a rope of fixed length tied to a post, you can think about a hyperbola as the points reachable by tying separate ropes to two posts and spooling out equal amounts of rope.
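The two-ropes picture can be checked numerically: points with a fixed difference of distances to two posts trace out one branch of a hyperbola. (Post positions and the range difference below are arbitrary.)

```python
import numpy as np

p1, p2 = np.array([-5.0, 0.0]), np.array([5.0, 0.0])  # the two "posts" (foci)
diff = 6.0  # fixed difference of distances

# Parametrize the branch nearer to p2:
# a = diff/2, c = half the focal separation, b^2 = c^2 - a^2.
a, c = diff / 2, 5.0
b = np.sqrt(c**2 - a**2)
t = np.linspace(-2, 2, 9)
pts = np.stack([a * np.cosh(t), b * np.sinh(t)], axis=1)

# Every point on the curve keeps the same range difference to the posts.
d1 = np.linalg.norm(pts - p1, axis=1)
d2 = np.linalg.norm(pts - p2, axis=1)
print(np.max(np.abs((d1 - d2) - diff)))  # ~0 up to floating point
```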


I think it's as though you were the yellow figure standing in the landscape, and you were trying to position yourself on the map by going "Okay, the red landmark is quite close to me to the east, the blue landmark is a bit farther away to the north-west, and the green landmark is way over to the south-west. Now, where am I on the map?"


I was taught in college that if you get stuck, just keep moving forward; things ahead can clear it up. And it works in this case for me. That part really doesn't matter much. At the end of the day, our brain is not a linear programming interpreter.


I stopped reading there because frankly, if there’s such a flaw in the article that early, I was worried about the accuracy of the rest of it.

The concept is just not explained at all. I spent too long making sure I didn’t miss some sentence somewhere explaining what was happening in that diagram. It felt like the article was wasting my time, and that maybe the author himself didn’t really understand what was happening.


You are assuming instead of knowing because you are talking about something you didn't read. LOL


I actually stopped reading the guide when I got to this point, and I have a lot of experience working with GPS.

My thought was, "If the simple parts are this unclear, I don't want to spend time getting to the more complex portions".


You should consider reading the rest of it, it's excellent after this.


I had the same question. I moved on in the article, and it's great, but this still puzzled me. If the author cleared this up, this article would be an absolute masterclass.


Literally left the article to come here to see if anyone addressed this yet. I can’t figure out what it means.


It's about literally looking at it (in 3D) and guessing the exact position on the 2D map, which is easier closer to the landmarks.

At least that's my interpretation.


Yeah, I suppose it’s like “you are a human standing on a flat plain looking at these 3 landmark towers. Where on the map are you?”

Intuitively you would be able to eyeball the angles between the landmarks, and eyeball the relative distances. I don’t think I’d draw a circle though. I’d probably have way less confidence for the landmark farther away and balloon out my estimate.


Think about it as if you are standing somewhere in relation to the monuments and are trying to figure out where you are on the map. You don't have precise measurements of distances or angles, you only have the estimated distance / angle you are from each of them that you get by looking at them. And without precise measurements there is uncertainty in where you are located.

You probably know exactly where you are on the map if you are within a meter of a monument. As you move farther away from the monuments, your estimation of the distance from each one becomes less precise. At least when I am estimating a distance, things get rough really quickly. At 10m I might be off by 1m. By 50m I am off by 10m, and so on. Now translate that into an exact position on the map. It's not possible; there is always some level of uncertainty.

I didn't realize it at first, but all of the examples are interactive. You can move the figure around, I found that pretty helpful and fun as well. In the very first example: place the figure somewhere, and then try and point to where it is on the map. I found myself circling areas naturally, even though the scale is relatively small. Especially when viewing it from an angle. The second example is quite exaggerated as far as the circles go but it is representative of the idea.


After reading all the replies, I still have no idea what that part means.


It's so depressing that – at some point in the future – I'm going to want a really clear and precise explanation of how GPS works and the likelihood that an internet search directs me to this excellent, clean, and ad-free blog is essentially nil.


My own recommendation is NIST Technical Note 1385 "Global Positioning System Receivers and Relativity" by Ashby and Weiss[0], and the GPS ICD for the L1 and L2C signals IS-GPS-200[1].

They are aimed at the practitioner who actually needs to solve problems. TFA is entertaining and pretty, but insufficient to actually get work done.




I searched for an excellent, clear and precise explanation of GPS that is clean and ad-free, and landed on your comment.


I know not by what tools Web 3.0 content will be found, but Web 4.0 will be searched using hn comment breadcrumbs.


this is why i always add "thanks"||"it worked" in my queries


You may be on to something here. With GPT-3 you could improve the quality of the responses by starting with a prompt like, "You're a well-educated and very helpful individual".

Maybe internet searches can be improved by changing the search from "how gps works" to "excellent, clear, and precise explanation of gps"?


Searching for "TechA vs TechB" was also a great trick. Until the SEO people noticed. They also have access to GPT-3 now to generate their auxiliary "content".

I think we need web search to move towards something more reputation-based. (Based on sites like HN, Reddit, StackOverflow, etc. that don't allow to easily fake high reputation. Or some kind of trust network starting from all your social accounts' subscriptions/follows and spreading your trust far outwards.)


It's not all that bad, for example googling 'GPS FPGA receiver' will give you equally excellent


IMO the best place to find in-depth documentation of existing technologies is the patent office. A while ago I programmed a way to locate the source of gunfire with microphones and drew a lot of inspiration from GPS patents because I couldn't find anything in-depth on the subject on Google (finding the time and position of the source of a shockwave is basically just inverse GPS multilateration)
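A hedged sketch of that "inverse GPS" idea: with known microphone positions (made up here) and an assumed constant speed of sound, arrival times pin down the source position and emission time via the same Gauss-Newton machinery GPS uses, just with transmitter and receivers swapped:

```python
import numpy as np

V = 343.0  # speed of sound in air, m/s (assumed constant)

def locate_shot(mics, arrivals, guess, iters=20):
    """Solve [x, y, t0] from arrival times at known microphones (2-D sketch)."""
    x = np.array(guess, float)
    for _ in range(iters):
        los = mics - x[:2]
        dist = np.linalg.norm(los, axis=1)
        predicted = x[2] + dist / V          # emission time + travel time
        H = np.hstack([-los / (V * dist[:, None]), np.ones((len(mics), 1))])
        x += np.linalg.lstsq(H, arrivals - predicted, rcond=None)[0]
    return x

# Synthetic check with four microphones at known positions (metres).
mics = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
truth = np.array([30.0, 70.0, 0.25])  # source position (m) and time (s)
arrivals = truth[2] + np.linalg.norm(mics - truth[:2], axis=1) / V
print(locate_shot(mics, arrivals, guess=[50, 50, 0]))
```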


The HN search will probably quickly direct you to this post, and you'll remember where you saw it.


There's an article similar to this but about Sound.... I wonder if I'll ever find it again.


Make a bookmark for it, let it sync to all your devices.


Cool, so now we are back to 1996 with Internet as curated lists.


Amazing work.

I work on the Android location team at Google, and I sent this article out to my team. GPS/GNSS is critical for accurate location/context, and there's still plenty of innovations happening in this field.

One of our directors is Frank Van Diggelen - none other than the professor who taught the Stanford GPS MOOC referenced by the article :) I'm sure he is going to appreciate seeing the course called out there!


How many satellites are tracked by a typical Android phone? 4? Or more? Can it track both GPS and other systems like GLONASS at the same time?


It depends on the GNSS chipset in the phone, but usually it'll be all the GNSS satellites visible in the sky, from all major constellations. The current Android platform supports GPS, SBAS, GLONASS, QZSS, BEIDOU, GALILEO, and IRNSS[0].

Each satellite adds another constraint, which helps, and with more satellites to choose from, you can drop the weakest or worst measurements.



Side note:

> The current Android platform supports GPS, SBAS, GLONASS, QZSS, BEIDOU, GALILEO, and IRNSS[0].

But it depends (and varies wildly) on the hardware of said Android. I buy 3 to 5 cheap (sub 100 euro/US$) Androids per year and it always surprises me which GNSS constellations each device supports, and how accurate or not they are.

PS: I use them mainly to Wigle, and the WiFi reception is exactly as diverse as the GNSS.


Mine's detecting 20 at the moment, and using 14 of those. This varies slightly from minute to minute. Many GPS apps display this data.




There are certain streets in downtown SF where Android (and iOS) location always suddenly jumps to a block away. Navigation apps then direct drivers to make incorrect turns or even dangerous turns like turning the wrong way onto a one-way street.

It seems to me that your software could prevent this error.


Yup! This issue, which is commonly described as the "wrong-side-of-the-street" or "wrong-city-block" error, is one of the biggest ones, and is where a lot of the innovation I mentioned is happening today.

This occurs because in "urban canyons", meaning streets with tall high-rises or skyscrapers, there is little or no line of sight to GNSS satellites (GNSS being the generic term for all satellite positioning systems, not just the American GPS). Consequently, what your phone picks up are signals reflected off of buildings, which exaggerates the distance between you and the satellites and pushes the positioning solution away, onto the other side of the street or another city block.

One way Google/Android is tackling this is by using Google's trove of 3D building data, the same that is rendered in Google Earth when you use it in 3D. Your phone uses the building data to correct for reflections. Read on here (and note the author!):

And, the device can use various filters and smoothers to minimize sudden jumps, and normally does, but there are edge-cases (for example, an app may be requesting pure "unfiltered" GNSS location returned directly from the GNSS chipset, hence the jumps). But rest assured, we are working on this issue.

Anyway, thanks for the feedback. We're always doing our best to improve location accuracy and reliability for billions of users in all scenarios and environments, and it's no trivial feat!


I wonder how the inventors of GPS would have responded if, back in the early nineties, you had suggested just adding a map of all the world's tall buildings to the GPS receiver.


Fun fact: when I was looking into patenting this concept in c. 2007, the prior art search revealed that it had already been patented by someone in Japan in c. 1997.

Amazing to think that ideas people had back when Selective Availability was on for the foreseeable future are just now becoming available to consumers. Congrats on getting it done!


Trying to predict reflections by processing building geometry sounds very complicated and error-prone.

Wouldn't it be easier to query Google Maps & Waze telemetry for impossible position jumps? Then you could make geofences where Google Play Services ignores position jumps and falls back to Wifi-based location and integrating the accelerometer.


Japan solved this problem in Tokyo by adding some extra satellites:


Uber has an interesting approach to solving this issue:


I experienced this as a pedestrian in downtown Chicago (on iOS in my case). It made me curious about what was going on — were the buildings reflecting the signal or obscuring it somehow? It's a fascinating phenomenon.


That's exactly what's happening! Please see my other comment (along with the contained blog link)


Wonderful write up, as always from Bartosz!

Here are some fun GPS projects I've found in the past, maybe others can add to this list.

GPS/Galileo/Beidou/Glonass status and error monitoring, open-source community-ran project:

DIY GPS receiver using minimal signal frontend, FPGA Forth CPU for real-time processing and RPi running position solvers:


How are these interactive visualizations made? As a senior machine learning engineer (with only rudimentary JS skills) it would be fantastically fun to make something like these.


It's WebGL in a <canvas>, written by hand by the looks of the source -



Beautiful. I also checked the archives on that blog and every article is a work of art. I would love to work with this dev!


The author wrote his own WebGL library. If you don't have much knowledge about 3D, then is a fantastic library to learn. It abstracts away much of the tedious part.

Not sure what's the best starting point to learn, but there's lots of videos on YT to help you get started.


Of all the reasons, I was stunned with how much detail went into the work at seeing the little globe/Earth in the satellite orbits section -- the Earth has the weather patterns and clouds running in animation as you spin the globe around!


not what he/she uses, but if you are interested in these kind of things, check out .

it provides you with an in-browser, graphical, node based interface where you can just connect boxes together and it will output js-code ready to implement in your website.

(disclosure: i know the dev plus am a huge fan!)

rathish_g is excellent.


Someone told me that apparently they aren't made, they are discovered.


There's a lot of really great info in here. One random thing I learned from this:

> As that angle increases, the signal from a satellite travels more sideways and its larger portion gets affected by the atmosphere. To account for this, GPS receivers ignore ranges measured from satellites at very low elevation angles. ... atmospheric effects are primary source of GPS inaccuracies.

(I know GPS has inaccuracies, but I didn't really know what caused them, but if I had to guess, the atmosphere wouldn't have been on my list of guesses for the top causes)


Something not mentioned: the "new" L5 signal at 1176 MHz, combined with the existing L1 signal at 1575 MHz, allows the receiver to estimate the atmospheric effects and reduce the uncertainty, allowing for a much better position fix. Think centimeters instead of meters.
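The dual-frequency trick works because ionospheric delay scales as 1/f². A sketch of the standard "ionosphere-free" combination, with synthetic numbers (the 5 m delay is made up):

```python
# Ionospheric delay scales as 1/f^2, so two frequencies let you cancel it.
F_L1 = 1575.42e6  # Hz
F_L5 = 1176.45e6  # Hz

def iono_free(p_l1, p_l5):
    """Classic ionosphere-free pseudorange combination (metres)."""
    g = (F_L1 / F_L5) ** 2
    return (g * p_l1 - p_l5) / (g - 1)

# Synthetic check: same geometric range, frequency-dependent iono delay.
true_range = 21_000_000.0
iono_l1 = 5.0                          # metres of delay on L1 (made up)
iono_l5 = iono_l1 * (F_L1 / F_L5)**2   # delay scales with 1/f^2
print(iono_free(true_range + iono_l1, true_range + iono_l5))  # ≈ true_range
```

Substituting the delay model shows the iono terms cancel exactly, leaving only the geometric range (plus any delays that don't follow the 1/f² law).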

One more thing I've wondered: the system depends on the satellites knowing and broadcasting their exact positions, but how is that position determined? From ground stations, sure, but how exactly? What's the margin of error on that?

And to add to this, how do you bootstrap this?

Galileo had an outage from 2019-07-11 to 2019-07-18 [0]. I've not read much about the details what caused the outage, or why it took an entire week to get back up & running.



A satellite's orbit is determined by taking the prior orbit parameters, predicting where the satellite will be, pointing a combination of telescopes (for precise angular measurements) and radars (for precise distance measurements) at that location, and measuring the error between where the satellite is and where it was expected to be. A set of these observations is then used to update the "known" orbit.

This known orbit is then provided back to the satellite so that it can be broadcast. If this system of updates stopped working, the quality of GPS position estimates would degrade pretty quickly (think weeks, not years).

This also means that if a GPS satellite were to need to maneuver for some reason -- either periodically boosting back into its assigned orbit or for debris avoidance -- the normal system of updates will catch this and users will never have to know or care that the satellite moved.


You're spot-on, the bootstrapping is exactly why the outage took so long to recover:


That was a very interesting read, thanks for the link!

From the article:

The outage in the ephemeris provisioning happened because simultaneously:

* The backup system was not available

* New equipment was being deployed and mishandled during an upgrade exercise

* There was an anomaly in the Galileo system reference time system

* Which was then also in a non-normal configuration

So they had to do a cold boot, which is by design slow because it focuses on high accuracy/certainty. Disappointing to read that the collaboration between the involved companies is downright bad in case of emergencies such as this. And the communication is also terrible, there's no public/official report of what exactly went wrong, why it took so long to recover, and what lessons were learned. It sounds to me that GPS being under military control is an advantage over Galileo.


Look up GPS operation control segment (OCX). Currently it's mostly Airforce and JPL, transitioning to Space Force. Lots of details published.


It's basically just a huge square root extended Kalman filter tracking all GPS satellite states.


This is also true for celestial navigation with a sextant and the light refracting in the atmosphere: the "Altitude Correction Tables" give the combined correction for refraction, semidiameter, and parallax under standard atmosphere conditions.





Ionospheric distortions are the largest source of errors in single-frequency solutions! And it's why the WAAS birds transmit a correction model, which all modern (post-2004 or so) receivers can apply.

Multi-frequency receivers can derive the corrections directly because the distortions affect the different frequencies in predictable ways, and they can work back to "ionosphere-free" pseudoranges, and base the rest of the solution on those.

To your quoted comment, nicer receivers also tend to have a configurable "horizon mask" aka "elevation mask", so you can tune this rejection behavior. I could swear I've heard of some that let you configure the mask height _per azimuth_ but I can't find an explicit reference right now.

Elevation masking is tricky because if you crank it up too high, you force yourself into poor-DOP geometries. But if you relax it too low, not only do you get heaping piles of ionospheric distortion, you also invite ground-clutter multipath. I think it's primarily used by stationary timing receivers, because they know their position is fixed, they're less susceptible to GDOP.
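A toy version of such an elevation mask (the line-of-sight vectors are assumed to be already rotated into a local east/north/up frame, and the 10° threshold is just a common default):

```python
import numpy as np

def apply_elevation_mask(sat_enu, mask_deg=10.0):
    """Keep satellites above a minimum elevation angle.

    sat_enu: N x 3 line-of-sight vectors in a local east/north/up frame.
    Returns a boolean mask of satellites to keep.
    """
    east, north, up = sat_enu[:, 0], sat_enu[:, 1], sat_enu[:, 2]
    elev = np.degrees(np.arctan2(up, np.hypot(east, north)))
    return elev >= mask_deg

# Made-up line-of-sight vectors: one high, one moderate, one on the horizon.
los = np.array([[0.1, 0.1, 0.9], [0.7, 0.0, 0.3], [0.9, 0.4, 0.02]])
print(apply_elevation_mask(los, mask_deg=10.0))  # [ True  True False]
```

A per-azimuth mask would just replace the scalar `mask_deg` with a lookup keyed on the satellite's azimuth.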


Author doesn't mention refraction from varying atmospheric density introducing a non-straight path. Maybe that is negligible for air? It's extremely important for sonar ranging. Lots of things affect water density.


Atmospheric density isn't as relevant as electron density:


For as long as this blog post is, one thing that's missing is a discussion of multipath errors. Multipath errors occur when the GPS signal reflects off of buildings or mountains, giving the illusion that the satellite is farther away than it is. This is why it can sometimes be hard to get a precise location in cities.


Not only that. You can use a ground-based fixed station to listen to GPS signals and work out how much they have been affected by the atmosphere. This then gets fed back into the weather prediction models used by many weather forecasting services.


Even better, use a receiver in low-earth orbit. See also: COSMIC, GeoOptics, Spire, and PlanetiQ.


There’s an interesting chicken-and-egg problem there. You don’t know what angle the signal is coming from (unless you have some kind of sophisticated multi receiver setup) - so first you need to estimate your position, then figure out whether the satellite is low in the sky, then you can determine whether to trust the timing of the signal from that satellite.


I'm weirdly impressed by the "switch to metric/imperial" button that updates the article text. It's just so helpful.


I wish this was standard in recipe webpages. It really makes a difference, and shows the author is thinking about their audience.


Adam Ragusea has a good video on why recipes aren't so easy to translate between imperial and metric:

It comes down to the fact that the ingredients we buy from the stores near us tend to come in nice round numbers in our local measuring system, and recipes tend to be tailored to that. For example, "1 cup of shredded cheese and 1lb sausage" may translate precisely to "236.6 mL shredded cheese and 0.45kg sausage", but your nearby store selling metric ingredients may have shredded cheese in a 300mL bag and sausage in 0.5kg packages. So you either measure very precisely and waste food (which makes the recipe a pain to get right), or try to use the local equivalent (and the recipe might not end up tasting the same as a result). So there ends up being some "art" to doing a proper translation.


For baking, precision is important, but consequently most recipes are precisely specified, so the conversion can be followed pretty exactly. For most non-baking recipes, precision is much less important, so a 'recipe translator' could either perform some rounding (maybe there should be a 'level of precision' metadata element for recipes), or I could just do the rounding in my head and use my critical thinking faculties rather than following the recipe blindly.


One of the things I love about GPS is: since you know your exact position, you can pick a good GPS satellite (one of the satellites you're using to calculate your position), look at the timestamp from that satellite, and use it as a highly-accurate time source!

Purpose-built GPS time servers (like those from Meinberg) give you an option to enter the length of the coax cable connecting the receiver and the antenna, so that it can correct for the extra time it takes for the signal to travel over the cable (for example, see page 19).
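The cable correction is just length divided by the propagation speed in the cable. A sketch; the velocity factor here is a typical value for common coax, not anything specific to Meinberg's hardware:

```python
C = 299_792_458.0          # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66     # typical for RG-58 coax (check your cable's spec)

def cable_delay_ns(length_m, vf=VELOCITY_FACTOR):
    """One-way signal delay introduced by an antenna cable, in nanoseconds."""
    return length_m / (vf * C) * 1e9

print(round(cable_delay_ns(30.0), 1))  # 30 m of RG-58 is roughly 151.6 ns
```

That's why the correction matters: 30 m of coax is already ~150 ns, an order of magnitude above the timing accuracy these servers advertise.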


My car does this. Unfortunately, it's a Honda/Acura, and there's a downstream bug in the way the receiver sends the info to the clock display, so this year almost all older Hondas/Acuras are reporting the wrong time:

> Honda’s head unit receives a GPS signal for date and time including a number representing a week, coded in binary. These digits count from 0-1024 and rollover to 0 after the completion of week 1024. Honda’s head unit supplier did not code their head units to account for the rollover and, on January 1, 2022, reverted to a date and time 1024 weeks in the past [1024/52 = 19.7, so 20 years in the past or 2002]

So despite all the almost-magic level of engineering that has gone into the GPS system that has stayed consistent for 40-some-odd years, a classic integer overflow has ruined it all because some subcomponent test engineer didn't think to check the inputs against the expected lifetime duration of the car's equipment.
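One common way to resolve the 10-bit week number is to lean on a coarse external notion of the current date (a build date, an RTC, user input); a sketch, and note that this coarse-date assumption is exactly what the Honda units lacked:

```python
from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)

def resolve_week(raw_week, approx_now):
    """Disambiguate a 10-bit GPS week number (0-1023) from the broadcast.

    approx_now only needs to be right to within about +/- 10 years,
    since each rollover period is 1024 weeks (~19.6 years).
    """
    elapsed_weeks = (approx_now - GPS_EPOCH).days // 7
    rollovers = (elapsed_weeks - raw_week + 512) // 1024
    return raw_week + 1024 * rollovers

# A raw week of 145 seen in January 2022 resolves to 145 + 2*1024 = 2193.
print(resolve_week(145, datetime(2022, 1, 18, tzinfo=timezone.utc)))  # 2193
```

Newer signals (CNAV) widen the field to 13 bits, pushing the next rollover past the year 2137.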

Another fun issue with these is DST databases. The satellites will tell you the time, but it's up to you know how your location translates into a DST zone. And if you have long-running offline equipment (say, a car), and the DST dates change, well, your smarts are only as smart as the update procedure.


It's been said that week number rollover occupies a "sweet spot of awfulness" where it happens infrequently enough that it doesn't get much testing, but often enough to impact equipment deployed in the real world.

The designers of GPS either should've made it use like 64 weeks so WNRO would happen constantly and we'd have to get good at handling it, or 32768 weeks so we could ignore it for the entire life of the system and any successors.


> It's been said that week number rollover occupies a "sweet spot of awfulness"

It's even worse than that: The traditional way of handling week number rollover is a rollover count in nonvolatile memory, incremented every time a rollover is seen (for a standalone receiver there's no other source for how many rollovers there have been)

So a griefer with a software-defined radio can radio out repeated week number rollovers and GPS receivers will increment their rollover counters. In 99% of cases there's no way to decrease them, and now your GPS receiver is convinced it's year 2100.


Your car does it somewhat differently than those Meinbergs. Yes, you can get the date and time from GPS, but what you really want is a clock signal with a 1 PPS trigger.


It's a bit more complex than this, the entire GPS fix is 4D since position depends on time and vice versa. The time reported by a GPS receiver, once fix is attained, is not just the time from one of the satellites but the time resulting from the 4D fix in space and time. This eliminates (to within a certain precision) the latency.

A lot of discrete GPS receivers have some nonvolatile storage where they "cache" fixes to reduce fix time. This has the amusing result that when you buy a GPS receiver and monitor its output immediately you usually find out the time and location where QA was performed, as the first fixes emitted without the quality flag.


Or a test location emitted by QA's satellite simulator.

I have a small list of funny locations I like to pipe into gps-sdr-sim, including Null Island, the north pole, 500 feet above the Kremlin, the middle of Lake Erie, and a quiet beach in the Bahamas.

Not that I expect anyone to look at the first few sentences of output after I hand them hardware, but if they _do_...


I have a hand-held GPS receiver that was last used in Chicago in November. I just turned it on again in another part of the world but inside a reinforced concrete building, where it gets no satellite signals. It still thinks it's at the Chicago airport.


Yup, that's covered by the functions on Pages 20 and 21, which let you either completely wipe all stored state, or update it to account for being moved a long distance (while still retaining satellite data).


Then you’d have a Stratum 0 source for a stratum 1 NTP server!


Which is what Google did to have a quality time-source for synchronized time for their global database:


> ... quality time-source ...

Except they decided to "smear" leap seconds [0], instead of handling them appropriately.

For that reason, if you "require correct timestamps for legal purposes" [1], you may want to make sure that you aren't using Google's NTP servers.

(N.B.: Amazon (AWS) too, FWIW.)
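For reference, Google's published scheme is a 24-hour linear smear around the leap second: clocks run slightly slow so the extra second is absorbed gradually. A toy model of the fraction absorbed at each point in the window (window placement simplified):

```python
def smear_offset(seconds_from_window_start, window=86400.0):
    """Fraction of the leap second absorbed at a point in the smear window.

    Models a linear 24-hour smear: by the end of the window the clock
    has quietly soaked up the entire extra second.
    """
    t = min(max(seconds_from_window_start, 0.0), window)
    return t / window

print(smear_offset(0))      # 0.0 - smear not started
print(smear_offset(43200))  # 0.5 - halfway through, half a second absorbed
print(smear_offset(86400))  # 1.0 - full leap second absorbed
```

The trade-off is exactly the one raised above: no clock discontinuity, but during the window the smeared time deliberately disagrees with UTC by up to a second.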





Dumb question, but how does this deal with security? Can't anyone broadcast valid but malicious data on 1575.42 MHz? (e.g. to crash planes/missiles etc)

EDIT: found wiki from some quick googling


Yes, and there are moves in the next generation to add cryptographic signatures to the satellite streams. Someone could still jam it, but they couldn't spoof it.

Tons of info starting here:

Continued here:

And finally here:


One security measure effective against a simple class of GPS spoofing is to check against the satellites' ephemerides.

For example, if your RX tells you that bird #7 is part of your location fix, but it knows from prior valid ephemeris data that that bird is currently below your horizon, the bogosity indicator will flash red. Ditto with certain pull-off spoofing methods.


Military aviation systems often have their GPS antenna designed to receive signals only from above.


It's illegal in most countries to jam or transmit on that frequency.


it's already illegal to cause a plane to crash, regardless of the mechanism used, and the legality of the mechanism generally has no bearing on its effectiveness...!

but, gps (or other gnss) spoofing or jamming is effective, and i think that commercially available jammers simply broadcast reasonably broad-spectrum noise around that frequency, which can overwhelm nearby receivers; although the article describes how noise rejection is performed, it has its limits. spoofing is more difficult, but still possible, including simple re-broadcast attacks using data received at another location. however, i believe these things are non-trivial for military receivers due to countermeasures like beam steering?


But common in military activity.


If only all teaching materials were this good... I first encountered GPS devices back in 1997. I remember my colleagues explaining them to me. At that time you wouldn't get precise measurements right away; you had to wait for correction factors or something like that. The GPS signal was scrambled back then.


In order to determine where you are you need to know where all of the satellites are. For a standalone receiver this involves downloading an almanac of the satellites from the signal, but GPS receivers have small antennas and the satellites don't blast out at tremendous power, so the available bandwidth is very low. The effective bitrate of a GPS signal is only 50 bits per second, so it takes twelve and a half minutes to transmit the entire list.

Cell phones get around this by downloading the almanac from the internet. Standalone receivers also keep the almanac in nonvolatile storage, but the almanacs eventually go stale if you leave the receiver off for too long.
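The twelve-and-a-half-minute figure falls straight out of the legacy nav-message framing. A back-of-the-envelope check:

```python
BIT_RATE = 50            # bits per second, GPS L1 C/A legacy nav message
SUBFRAME_BITS = 300      # one subframe = 300 bits = 6 seconds on air
FRAME_SUBFRAMES = 5      # five subframes per frame
ALMANAC_PAGES = 25       # the almanac is paged across 25 consecutive frames

frame_seconds = SUBFRAME_BITS * FRAME_SUBFRAMES / BIT_RATE   # 30 s per frame
almanac_seconds = ALMANAC_PAGES * frame_seconds              # 750 s total

print(f"one frame: {frame_seconds:.0f} s")
print(f"full almanac: {almanac_seconds / 60:.1f} min")       # 12.5 min
```

(The tracked satellite's own ephemeris repeats in every 30-second frame, which is why a warm receiver can fix far faster than it can rebuild the whole almanac.)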


surely you need to know where you are not, to know where you are. if the difference between where you are not and where you were, or vice versa, is correct, then you are being targeted by the missile.


That's post-processing, and it's still done. Here's why:

The satellites only know their own position to a certain precision, and there are only so many bits to express it in the data packet. More bits wouldn't make sense because the measurements aren't that good in the first place.

So what you get "live" is naturally limited by both of those things. Single-frequency unassisted solutions are usually good to a few meters, dual-frequency to a meter or so.
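The dual-frequency improvement works because the ionospheric delay is dispersive (to first order it scales as 1/f²), so measurements on two frequencies let you cancel it. A toy sketch of the standard ionosphere-free pseudorange combination, with made-up range and delay values:

```python
F1 = 1575.42e6   # GPS L1 carrier frequency, Hz
F2 = 1227.60e6   # GPS L2 carrier frequency, Hz

def iono_free(p1, p2):
    """Ionosphere-free pseudorange combination (cancels the 1/f^2 term)."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

rho = 21_000_000.0   # true geometric range, metres (invented for the demo)
i1 = 5.0             # ionospheric delay on L1, metres (invented)

p1 = rho + i1                   # L1 pseudorange
p2 = rho + i1 * (F1 / F2)**2    # L2 sees a larger delay: scales as 1/f^2

print(iono_free(p1, p2) - rho)  # ~0: the ionospheric term cancels
```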

But ground stations can determine, after observing the satellites for a long time, where they _were_ to a much higher accuracy. It's a complicated process involving a whole network of ground stations, whose own positions are precisely surveyed, etc.

The product of that network is known as "precise ephemeris", and it's available in an "ultra-rapid" (3-9 hours later), "rapid" (24 hours later), and "final" (13 days later) version. With these data, the initial observation can be post-processed to get very good solutions. Down into the millimeters.

The RTKLIB manual has a lot more detail if you're curious.


Thanks. Yes, I remember that it took about two weeks to get the final results. I always thought that the data on how much "interference" was added was released with some offset.


The article explains the delay. The satellites transmit their ephemeris data and other important data very slowly, 50 bits per second, so you have to listen to the signals for a long time to get all of it. Not explained in the article is that modern GPS receivers in phones download this data separately from the internet, so they can calculate positions instantly without waiting for the data to finish transmitting.


I think you're talking about receiving a full almanac and ephemeris, and the parent is talking about post-processing. See my parallel comment about PP.

Even sidestepping the internet just handing you a full alm+eph dump, modern standalone receivers can perform a cold-start much faster than their predecessors, because they have huge numbers of receiver channels available. The system operators cleverly offset the almanac being transmitted by each satellite, so if you can receive several satellites at once, you can start writing your almanac with several pencils on the page writing different paragraphs, as it were. Finish the page very quickly.
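The several-pencils effect is easy to see in a toy model. This invents a simple round-robin page schedule (the real constellation's scheduling differs) just to show why staggered offsets plus more channels fill the table faster:

```python
PAGES = 25   # almanac pages, one per 30-second frame

def frames_to_full_almanac(offsets):
    """Toy model: each tracked satellite broadcasts page (offset + t) % PAGES
    during frame t.  Count frames until every page has been heard once."""
    have = set()
    for t in range(10 * PAGES):
        for off in offsets:
            have.add((off + t) % PAGES)
        if len(have) == PAGES:
            return t + 1
    return None

print(frames_to_full_almanac([0]))          # 1 satellite: 25 frames (12.5 min)
print(frames_to_full_almanac([0, 8, 16]))   # 3 staggered satellites: 9 frames
```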

In the early 90s, it was common for a GPS receiver to have just 4 channels. So a blind search through all the satellite PRNs could take quite a while, and since the receiver didn't know where anything was yet, Murphy's law guaranteed that any satellite it did get a lock on would soon disappear over the horizon anyway. It took agonizingly long to get lucky and hit a bird just coming into view, so you could get whole messages from it and start filling in that table.

And of course any obstructions that limited your sky-view just made it worse.

By the late 90s, 12-channel receivers were fairly common, my first was one of these. This greatly increased the odds of getting useful satellites in a reasonable period of time, and on cold-start it would get a fix pretty reliably in 15 minutes, sometimes less.

In all cases, if the user could give the receiver a hint of the current time (within a few minutes) and location (within a few degrees), as soon as it got part of the almanac it could start figuring out which satellites must be behind the Earth right now, versus which ones would likely be overhead, and make much better use of its receiver channels to shorten the TTFF. Additionally, being able to estimate the Doppler shift greatly shortens the lock-on period.
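The Doppler hint matters because the carrier arrives offset by a few kilohertz. A rough first-order estimate (the ±800 m/s line-of-sight velocity is a rule-of-thumb figure for a GPS satellite seen from the ground):

```python
C = 299_792_458.0   # speed of light, m/s
F_L1 = 1575.42e6    # L1 carrier frequency, Hz

def doppler_hz(radial_velocity_mps):
    """First-order Doppler shift for a given line-of-sight velocity."""
    return radial_velocity_mps / C * F_L1

# Worst-case line-of-sight velocity gives the width of the frequency
# search space the receiver must sweep when it has no hint:
print(f"{doppler_hz(800):.0f} Hz")   # ~4200 Hz either side of the carrier
```

With an almanac, a rough position, and the time, the receiver can predict that shift per satellite and skip most of the sweep, which is a big part of the faster lock-on.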

Today's receivers don't even have discrete radio channels in the old sense, they just have a wide RF front end and then slice the data into digital correlator pipelines, achieving hundreds of virtual channels. True "all-in-view" reception is possible even with four full constellations aloft, and it's nearly magical how good they are. Cold-start times under a minute in some cases.


A survey receiver, whose data is being post-processed, need not even calculate its own position. (It probably does, since that costs nothing once the data has been received, but it's not strictly necessary.) It just records carrier-phase measurements and pseudoranges, along with clock and doppler info, in (or later converted to) a format called RINEX. The surveyor just keeps it in one place for a while, marks down "3:32pm-3:38pm, marker C", and then moves to the next point. Later back at the office (once the precise ephemeris comes out), the RINEX is crunched with that better data, and solutions are derived which allow the surveyor to say exactly where Marker C actually is.

This is better than doing it in real time, because the ephemerides available in real time just aren't that good. Only by measuring with a network of ground stations, can the better ephemerides be calculated, and then applied to the observations.

There's also RTK and correction networks, which deserve mention:

Real-Time Kinematic is called that because it tells you about distance and motion, the kinematics, _relative to a nearby base station_. If the base doesn't know where it is, the rover doesn't either. So the base is usually surveyed first, using the techniques outlined above, and then that surveyed position is combined with the kinematic differences, to derive the rover's precise position. It requires a data link between the base and rover, though that's gotten dramatically easier in the last few decades...
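That base-station dependence is easy to state in code. A sketch with invented coordinates, showing that any error in the base's surveyed position passes straight through to the rover:

```python
def rover_position(base_ecef, baseline_ecef):
    """RTK solves the base->rover baseline vector to centimetre level, but the
    rover's absolute coordinates are only as good as the base's surveyed ones."""
    return tuple(b + d for b, d in zip(base_ecef, baseline_ecef))

# Illustrative numbers only -- not real survey data.
base_surveyed = (4027893.0, 307045.0, 4919475.0)   # ECEF metres
baseline = (120.0, -45.0, 3.5)                     # precise RTK baseline

rover = rover_position(base_surveyed, baseline)

# If the base position is wrong by 2 m, the rover inherits the full 2 m:
base_wrong = (base_surveyed[0] + 2.0,) + base_surveyed[1:]
rover_wrong = rover_position(base_wrong, baseline)
print(rover_wrong[0] - rover[0])   # 2.0
```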

Correction networks do all of that, over a wide area, providing a "virtual reference station" nearby to wherever you need it to be. The corrections are transmitted typically over a separate data channel (often on leased L-band satellite time), and applied by the receiver. Some are available over the internet as well, if cellular signal is easy to come by wherever you happen to be. I don't know as much about these as I'd like to.


Could be that we were using Trimble survey receivers. They stood on something like camera tripods and would stay in the same position for a longer time.


Modern cell phones use A-GPS (Assisted GPS). I also remember back in the early 2000s we were using a GPS-equipped PDA as a turn-by-turn navigation device, and for the first ten minutes the device simply asked us to wait.

At that time the signal was intentionally degraded in a process called Selective Availability. I didn't have any experience with that.


So who else spent five minutes playing with the flexible rope?


Gosh even the little drones are so adorable


I was totally immersed in that animation. I only wish it was longer; it must've accounted for half of my total reading time. The author is genuinely great, and every single one of their posts is terrific.