eBird App GPS Accuracy

Motivation

In the summer of 2017, eBird introduced the ability to record tracks — GPS points recorded at regular intervals over the course of collecting observations for a checklist — in the Android version of the eBird app.  GPS units, especially consumer-grade GPS units like those found in smartphones, are not perfectly accurate.  As a result, the records of distances travelled will not be entirely accurate.  I wanted to understand whether there are systematic sources of error in GPS travel distances, in order to be able to better use the GPS information being recorded during the analyses of eBird data.
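For concreteness, the distance travelled along such a track is presumably obtained by summing the great-circle lengths of the legs between successive GPS fixes.  Here is a minimal Python sketch of that computation using the haversine formula; the function names are mine, and this illustrates the general technique, not the eBird app's actual code:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def track_distance_m(points):
    """Sum of leg lengths over an ordered list of (lat, lon) fixes."""
    return sum(
        haversine_m(a[0], a[1], b[0], b[1])
        for a, b in zip(points, points[1:])
    )
```

Because each fix carries random error, each leg length is noisy, and summing many noisy legs is exactly where the distance errors discussed below come from.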

The Take-Home Messages

Based on analyses of a set of data that I gathered (over 1000 data points from 2 Android smartphones), here is what I have learned about sources of error in the GPS travel distances being recorded.  First, here are some general conclusions:

  • Some phones are consistently more accurate than others. Even with data from only 2 phones, the only systematic effect on the magnitude of the error in estimated distance travelled was the difference between phones.  In this specific case the newer phone, even though from the same manufacturer, was less accurate than the older phone.
  • The magnitudes of the errors hardly varied at all with the duration of the observation period, or with the total distance travelled.  There was a slight tendency for errors to be larger for longer-duration counts, and a slight tendency for errors to be smaller for counts on which longer distances were travelled, but neither of these patterns came anywhere close to statistical significance.
  • There is also a tendency for the errors in estimated distances to be higher for stationary counts than for travelling counts (by about 5 meters).  I’m guessing that this is because some errors during travelling counts are cancelled out when they happen to be in the same direction as the actual travel.
  • The speed of travel does not consistently affect accuracy of estimated distances.
  • About 10% of the “random noise” is due to consistent site-to-site differences in the accuracy of recorded locations.
  • Roughly 2% of the “random noise” is the result of consistent day-to-day differences in the average accuracy of recorded locations.

Now, here are some specific findings for the two phones that I have been using:

  • The average error in the estimated distance travelled for my specific phones was about a 5 meter overestimate.
  • The most accurate 50% of estimated distances had errors of 0 – 10 meters for my specific phones.
  • The most accurate 95% of estimated distances had errors of -20 – +50 meters for my specific phones.
  • The most egregious errors were an underestimate of 90 meters, and an overestimate of 150 meters for my specific phones.
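The kinds of summary statistics reported above (mean error, central 50% and 95% intervals, extremes) can be computed from a list of per-checklist errors along the following lines.  This is a generic sketch using a made-up list of errors, not the actual dataset:

```python
import statistics

def error_summary(errors_m):
    """Summarize per-checklist distance errors (estimated minus true, meters)."""
    s = sorted(errors_m)
    n = len(s)

    def q(p):
        # Simple nearest-rank quantile; fine for a sketch like this.
        return s[min(n - 1, max(0, round(p * (n - 1))))]

    return {
        "mean": statistics.mean(s),
        "central_50": (q(0.25), q(0.75)),   # most accurate 50%
        "central_95": (q(0.025), q(0.975)), # most accurate 95%
        "extremes": (s[0], s[-1]),          # worst under/overestimates
    }

# Made-up example: six checklist errors in meters.
summary = error_summary([-10, 0, 5, 5, 10, 20])
```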

There are also a few other observations that I have, based on paying attention to the GPS data, and looking at the tracks that I have produced:

  • My gut feeling is that site-to-site variation in accuracy is largely attributable to how flat the surrounding terrain is (big bumpy things like mountains are bad), and the type of vegetation surrounding the phone (meadows good, thick forests bad).
  • The phenomenon of a phone having a “bad day” (i.e. producing consistently less accurate location information, and taking longer to achieve a specific estimated level of accuracy when starting offline checklists) seems to be worse for a phone that is already predisposed to less accurate location estimates even on the best of days.  My speculation (well, actually “wild guess” would be more accurate…) is that an important cause of this issue is day-to-day variation in the severity of interference in GPS signals caused by solar flare activity.
  • If one moves slowly enough, it is possible to travel at least 30 meters without the app recording any distance travelled.  My guess is that any attempt to change the app’s criteria for smoothing out erroneous movements, so as to lower the incidence of errors on less-accurate devices, would result in even greater underestimation of distances for slow travelling counts.
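The eBird app’s actual smoothing criteria are not public, but the behaviour described above is consistent with a filter that discards legs shorter than some noise threshold.  A hypothetical sketch (the 15 m threshold is invented purely for illustration):

```python
def filtered_track_distance_m(leg_lengths_m, min_leg_m=15.0):
    """Sum leg lengths, discarding legs shorter than a noise threshold.

    min_leg_m is a made-up value; the eBird app's real smoothing
    criteria are not published.
    """
    return sum(d for d in leg_lengths_m if d >= min_leg_m)

# Slow travel: 30 m covered in ten 3 m legs is recorded as 0 m,
# while the same 30 m covered in two 15 m legs is recorded in full.
slow_legs = [3.0] * 10
fast_legs = [15.0, 15.0]
```

Raising the threshold would suppress more GPS jitter on noisy devices, but would also push more slow-travel legs below the cutoff, which is why I expect slow travelling counts to be underestimated even more.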

While I think that these conclusions are generally valid, I think that it would be useful to look at the details regarding the data and analyses on which I am basing the above statements.  So, keep reading…

Collecting the Data

The specific, quantitative details of my findings depend on the range of conditions under which I collected the information on which I am basing my conclusions.  Conclusions about accuracy outside of those conditions can be extrapolated, but there’s no way of knowing how appropriate those extrapolated conclusions are.  Probably the most relevant limitations are that the information comes from relatively short-duration counts on which I neither travelled fast nor far.  Here are what I think are the most relevant details about how I collected the information:

  • I only worked with 2 phones, both reasonably good smartphones at the times of their purchases, but not top-end phones.  The phones were a Nexus 5, and a Nexus 5X; LG was the manufacturer (and I presume designer) of both of these phones.
  • A substantial proportion (48%) of the time, I recorded the estimated distances travelled from both phones simultaneously, and when I did so I was carrying the two phones next to each other.
  • The count durations were mostly 5 minutes, but varied from 2 – 52 minutes.
  • The distances travelled were mostly 0 meters (i.e. eBird stationary counts), but ranged up to 1.38 km.
  • For the travelling counts, I could measure the distances travelled to within 10 m or better, either with Google Earth’s ruler tool, or by directly measuring the travelled distances with a 50 m tape measure (when paths through forests were not clearly visible in Google Earth).  I repeatedly remeasured distances using Google Earth to ensure that the distances could be repeatedly and consistently measured, and I spot-checked a very small number of these distances using my tape measure.
  • All travelling counts were made while travelling relatively slowly, mostly on foot, but at times on a bicycle.  I made frequent stops when travelling by bicycle because my primary purpose was to watch birds, not collect GPS track data.
  • I always tried to ensure the highest possible accuracy for each initial location during a count period.  To do this, I always created checklists in the eBird app using the “Create Offline Checklist” option for selecting a location, and I always waited until the estimated accuracy of the GPS location was displayed as being less than 10 m, and almost always 5 m or less.  Under some conditions (e.g., a period of high solar flare activity, or while standing in a forest) this could require waiting one or more minutes.
  • The majority of the information was collected during repeated visits to a small set of locations at which I regularly go birding.  There is information from 176 separate locations, with the median location visited 2 times. Seventy-five percent of locations were visited between 1 and 6 times, and 95% of locations were visited between 1 and 29 times.

These statements are based on the data that I collected up to the end of 12 July 2017.

Analysing the Data

The previous section describes the raw materials that went into my analyses. Now, here are the gory details of the statistical analyses whose output I used to make my conclusions:

  • Statistical analyses were run using R statistical software, and the lmer function in package lme4 specifically.
  • I discarded all information collected using the very first beta version of the tracking functionality (beta “1.5.3-tracks1”).  In preliminary analyses I found that all of the other versions of the app to date had statistically identical levels of accuracy in estimated distances.
  • The response variable in all analyses was the error in estimated distance, calculated by subtracting the true distance travelled from the app-estimated distance for each checklist.
  • I experimented with the following predictor variables as fixed effects:
    • phone model: categorical variable with 2 categories
    • count stationary?: boolean variable, coded 0/1 for “no” and “yes”
    • count duration: in minutes (continuous variable)
    • true distance travelled: in kilometers (continuous variable)
    • travel speed: continuous (km/h)
    • app version: categorical variable only used in preliminary analyses, prior to removing data from the initial beta version of the app
    • duration × distance interaction: only used in preliminary analyses

The variables marked above as only used in preliminary analyses were dropped before fitting the final model.  All other predictor variables were used in the final model on which I based the conclusions listed further up on this page.  Here are the estimated effects of the fixed-effect predictor variables in the final model, along with the probability (P-value) of observing an effect at least that large by chance alone:

Predictor Variable       Parameter Estimate   Std. Error   P-value
Intercept                           -0.0023       0.0028      0.40
Count Duration (min)                 0.0002       0.0002      0.19
Distance (km)                       -0.0043       0.0086      0.62
Speed (km/hr)                        0.0010       0.0012      0.43
Stationary Count (yes)               0.0038       0.0025      0.13
Phone (Nexus 5X)                     0.0050       0.0009      0.00000003
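For readers who prefer Python to R, roughly the same fixed-effects structure can be sketched with statsmodels.  Note that mixedlm handles a single grouping factor, so the crossed site and day random effects from the original lmer analysis are simplified here to a site-level random intercept, and the data frame below is random stand-in data, not the actual dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Random stand-in data with the same shape as the real variables;
# column names are mine, chosen to mirror the predictors listed above.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "error_km": rng.normal(0.005, 0.01, n),   # estimated minus true distance
    "duration_min": rng.uniform(2, 52, n),
    "distance_km": rng.uniform(0, 1.4, n),
    "speed_kmh": rng.uniform(0, 10, n),
    "stationary": rng.integers(0, 2, n),      # 0 = travelling, 1 = stationary
    "phone": rng.choice(["Nexus5", "Nexus5X"], n),
    "site": rng.choice([f"s{i}" for i in range(20)], n),
})

# Mixed-effects model: same fixed effects as the final model above,
# with a random intercept per site (a simplification of lmer's
# crossed site + day random effects).
model = smf.mixedlm(
    "error_km ~ duration_min + distance_km + speed_kmh + stationary + phone",
    data=df,
    groups=df["site"],
)
fit = model.fit()
print(fit.summary())
```

With the real data, the coefficient table printed by fit.summary() would correspond to the parameter estimates shown above.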