Even accounting for tectonic drift, there is a concept of positioning reproducibility that is separate from precision. In general, the precision of the measurements is much higher than their reproducibility. That is, you may be able to measure a fixed point on the Earth with an instrument with 1 cm precision at a specific point in time, but if you measure that same point every hour for a year with the same instrument, the disagreement across measurements will often be >10 cm (sometimes much greater), which is much larger than e.g. tectonic drift effects.
For this reason, many people use the reproducibility rather than instrument precision as the noise floor. It doesn’t matter how precise an instrument you use if the “fixed point” you are measuring doesn’t sit still relative to any spatial reference system you care to use.
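A minimal sketch of that distinction, with made-up position fixes: the instrument's quoted precision is a spec-sheet number, while reproducibility is simply the spread you observe when you re-measure the "same" point over time.

```python
import statistics

# Hypothetical repeated position fixes (metres, east/north offsets from a
# reference) for the same benchmark, taken hours apart. Values are
# illustrative only.
fixes_east  = [0.012, 0.048, -0.031, 0.095, 0.007, -0.062, 0.110, 0.021]
fixes_north = [0.004, -0.055, 0.038, -0.090, 0.061, 0.012, -0.047, 0.083]

instrument_precision_m = 0.01  # 1 cm, per the spec sheet

# Reproducibility: spread of repeated measurements of the same point.
repro_east  = statistics.stdev(fixes_east)
repro_north = statistics.stdev(fixes_north)

print(f"instrument precision: {instrument_precision_m:.3f} m")
print(f"reproducibility (E):  {repro_east:.3f} m")
print(f"reproducibility (N):  {repro_north:.3f} m")
# The spread across measurements, not the instrument spec, is the
# practical noise floor for the position of the point.
```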
> if the “fixed point” you are measuring doesn’t sit still relative to any spatial reference system you care to use.
But do those points actually move, or does the air medium change the measurements?
I ask because I once saw a very interesting documentary about how accurate mapping started in England, with fixed points and measuring the angles between those points to a high degree of precision.
My mental model has always been that those points are all fixed, but now that you mention it, why should they be fixed?
After all, my 7th grade teacher clearly demonstrated the thermal deformation of copper rods, and all bridges have gaps that allow for thermal expansion, so indeed, wouldn't this also apply to soil on the scale of tens of km?
Fixed points actually move relative to each other. This is measurable even locally if you are doing high-precision localization, e.g. with LIDAR. The geometry of relationships between objects is in constant motion, but below the threshold of what a human can sense. There are many identifiable causes of this motion that vary with locality (tidal, thermal, hydrodynamic, tectonic, geophysical, etc.). Additionally, there are local time dilation effects, both static and transient, that influence measurement but aren't actually motion.
This comes up concretely when doing long-baseline interferometry. Lasers are used to precisely measure the distance between receivers in adjacent structures for use in time-of-flight calculations. Over the course of a day, the distance between those structures as measured may vary by multiple centimeters, which is why they measure it.
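As a rough illustration (not the facility's actual procedure, and the numbers are assumed), here is what centimetre-level baseline motion does to a one-way light travel time:

```python
# Rough sensitivity check: how much does a small change in the measured
# baseline between two receivers shift a one-way time-of-flight?
C = 299_792_458.0  # speed of light in vacuum, m/s

for delta_m in (0.01, 0.03, 0.05):  # 1, 3, 5 cm of baseline drift
    delta_t = delta_m / C           # seconds
    print(f"{delta_m*100:4.1f} cm  ->  {delta_t*1e12:6.1f} ps")
# Centimetre-level motion of the structures maps to tens of picoseconds,
# which matters when the timing budget is itself at that scale.
```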
The air medium does add noise to the measurement depending on wavelength, but it's also small things adding up, like the repeatability of the angle the satellite is at when it measures that same point. An arc-second of error at 400 km is nearly two meters, so even a fraction of an arc-second is enough to introduce a lot of noise between measurements.
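A quick back-of-the-envelope for that arc-second figure, using the 400 km range from the comment above:

```python
import math

# One arc-second in radians, and the cross-track error it subtends at
# a 400 km slant range (roughly low-Earth-orbit altitude).
ARCSEC_RAD = math.radians(1.0 / 3600.0)   # ~4.85e-6 rad
range_m = 400_000.0

error_full  = range_m * ARCSEC_RAD        # ~1.94 m for 1 arc-second
error_tenth = range_m * ARCSEC_RAD * 0.1  # ~0.19 m for 0.1 arc-second

print(f"1 arc-second at 400 km:   {error_full:.2f} m")
print(f"0.1 arc-second at 400 km: {error_tenth:.2f} m")
```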
A typical domestic GPS receiver will give you a worst-case accuracy of about 5 m, but a good one will be sub-metre, and by taking enough measurements over time, especially with DGPS or RTK, you'll get to less than 10 cm.
After 20 years at 7 cm per year that's 1.4 m. That's the same order of magnitude of error as a domestic receiver.
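A quick sanity check of that arithmetic, treating the ~7 cm/year plate-motion rate and the receiver accuracies above as assumed round numbers:

```python
# Back-of-the-envelope: secular plate motion accumulated over 20 years
# at an assumed ~7 cm/year, compared with typical receiver accuracies.
drift_rate_m_per_year = 0.07
years = 20

accumulated_drift = drift_rate_m_per_year * years   # 1.4 m
print(f"accumulated drift: {accumulated_drift:.1f} m")

accuracies = {"basic handheld": 5.0, "good receiver": 1.0, "DGPS/RTK": 0.10}
for name, acc in accuracies.items():
    verdict = "exceeds" if accumulated_drift > acc else "is within"
    print(f"{name:>15}: {acc:5.2f} m  (drift {verdict} this)")
```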
Related but slightly different. The accuracy is real but it is only valid at a point in time. Consequently, you can have both high precision and high accuracy that nonetheless give different measurements depending on when the measurements were made.
In most scientific and engineering domains, a high-precision, high-accuracy measurement is assumed to be reproducible.
I think this is a charitable interpretation of the remark which deprives GP of learning something (sorry if this comes across as condescending; I'm genuinely trying to point out an imo relevant difference).
No, it's not at all accuracy vs precision. That statement is about a property of the measurement tool, which can have a systematic offset [0] (think of an analog clock where the manufacturer glued the hand on with a slight shift) vs simply being imprecise (think of a clock that has a minute hand but nothing for seconds).
The thing pointed out by the original comment is about a change in the _measured_ system, which is something fundamentally different. No improvement in the measurement tool [1] can help here, as it's reality that changes. Even writing down the measurement time only helps so much, since typically you aren't interested in the precise time of measurement and will implicitly assume the real world is static (the toy sketch after the footnotes tries to make this concrete).
[0] The real reason for those is that it is _much_ simpler to build a precise relative measurement tool (i.e. it's easier to say "bigger than that other thing" than "this large"). One example is CO2 concentration measurement: readings are often relative to outdoor CO2, which is - unfortunately - not stable.
[1] Assuming the tool is only allowed to work at one point in time. If you include e.g. a weather-modelling supercomputer in your definition of tools, that would again work.
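To make the distinction concrete, here is a toy simulation (all numbers and the `measure` helper are made up for illustration): a biased tool is inaccurate, a noisy tool is imprecise, but even a perfect tool reports a spread of values when the measured quantity itself drifts between readings.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 100.0  # the quantity at the moment of measurement

def measure(true_value, bias=0.0, noise=0.0):
    """One reading from a tool with a systematic offset and random scatter."""
    return true_value + bias + random.gauss(0.0, noise)

# Tool problems: bias hurts accuracy, scatter hurts precision.
biased    = [measure(TRUE_VALUE, bias=0.5, noise=0.01) for _ in range(100)]
scattered = [measure(TRUE_VALUE, bias=0.0, noise=0.5)  for _ in range(100)]

# The thread's point: the *measurand* itself drifts between readings,
# so even a perfect tool reports different values at different times.
drifting_truth = [TRUE_VALUE + 0.02 * t for t in range(100)]  # slow drift
perfect_tool   = [measure(v) for v in drifting_truth]

for name, data in [("biased tool", biased), ("imprecise tool", scattered),
                   ("perfect tool, drifting point", perfect_tool)]:
    print(f"{name:30s} mean={statistics.mean(data):7.3f} "
          f"spread={statistics.stdev(data):6.3f}")
```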