There's a concept in mathematics where you can improve the accuracy of your results via averaging. Take a wide enough sample, average it out, and even if each individual measurement was imprecise you can still end up with a reasonably accurate result.
Take my cheap kitchen scales. They're digital, they're not precision-perfect, and they only work in gradations of 2 grams. (So, if you put a twenty-five gram weight on them, the scales will display either 24g or 26g.) For baking purposes, they work fine, but if you want to know the mass of something very light, such as a single Skittle, then you're out of luck.
But if you weigh multiple Skittles, and then average the result, you'll end up with an average that's far more accurate and precise.
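The idea can be sketched in a few lines of code: a scale that only reports multiples of 2 g still yields a good per-Skittle estimate once enough Skittles are weighed together, because the fixed rounding error gets divided across the whole batch. The per-Skittle mass of 1.062 g here is an assumed figure purely for illustration.

```python
TRUE_MASS = 1.062  # grams per Skittle (an assumed figure for illustration)

def scale_reading(true_weight):
    """Simulate a cheap digital scale: rounds the true weight to the nearest 2 g."""
    return 2 * round(true_weight / 2)

# Weigh batches of increasing size; divide the reading by the batch size.
for n in (1, 10, 100, 1000):
    reading = scale_reading(n * TRUE_MASS)
    print(f"{n:4d} Skittles: reads {reading:6.0f} g -> {reading / n:.4f} g each")
```

The worst-case rounding error is 1 g per reading regardless of batch size, so the per-Skittle error shrinks in proportion to the number of Skittles on the scale.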
Here I weighed Skittles in increments of ten and then plotted the results. As you can see, they build towards a better value as you average the mass of more and more Skittles.
There are limitations to this method. In the above graph there appear to be two converging lines: one forms a flat ceiling which the other curves up to meet. The flatness of the higher line indicates that the scales tend to round upwards a little bit aggressively, which means there's a consistent bias in the scales which will never be completely erased by averaging. Regardless, the result I ended up with is probably not far off the mark, and I would only get a better result if I were to repeat the exercise with more packets of Skittles.
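That limitation can be seen in a quick simulation: if the scale has a constant upward bias, no amount of averaging over repeated weighings will remove it. The 0.8 g bias, the noise level, and the per-Skittle mass below are all assumed figures for illustration.

```python
import random

random.seed(0)

TRUE_MASS = 1.062  # grams per Skittle (assumed for illustration)
BIAS = 0.8         # assumed constant upward bias of the scale, in grams

def biased_reading(true_weight):
    """A scale with a constant upward bias, then 2 g quantisation."""
    return 2 * round((true_weight + BIAS) / 2)

# Average many independent weighings of a 10-Skittle batch; small random
# variation stands in for handling noise between weighings.
n = 10
readings = [biased_reading(n * TRUE_MASS + random.gauss(0, 0.5))
            for _ in range(10_000)]
estimate = sum(readings) / len(readings) / n
print(f"estimate: {estimate:.3f} g each (true value {TRUE_MASS} g)")
```

The estimate settles well above the true value: averaging cancels random error, but a systematic bias survives it intact.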
Relevance to Ancient Astronomy
Ancient astronomers worked within a number of constraints. Their instruments were imprecise, they had a limited ability to compare information with astronomers in other locations, and they were working with centuries- or even millennia-old data sets with no way of verifying their accuracy. Consequently, they were forced to admit a lot of uncertainty in their determinations.
We know from Ptolemy, for example, that Hipparchus was concerned that the observation of the summer solstice by Aristarchus may have been off by as much as a quarter of a day. But, as Ptolemy also noted, this observational error, spread over a large number of years, is comparatively small. Thus both Hipparchus and Ptolemy were able to determine the length of the tropical year to be less than 365 days and 6 hours by about one day in three hundred years. (We now know the average length of the tropical year is about 365 days, 5 hours, 48 minutes and 45 seconds.)
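Ptolemy's reasoning can be checked with a little arithmetic. The 145-year interval below is the span usually cited between the two solstice observations, assumed here for illustration; the year lengths are the figures given above.

```python
# A quarter-day error, spread over the ~145 years separating Aristarchus's
# solstice from Hipparchus's, amounts to only a couple of minutes per year.
interval_years = 145  # assumed interval between the two observations
error_minutes_per_year = 0.25 / interval_years * 24 * 60
print(f"worst-case error: {error_minutes_per_year:.1f} minutes per year")

# Hipparchus's year length: 365 days 6 hours, minus one day in 300 years.
hipparchus_year = 365 + 6 / 24 - 1 / 300
modern_year = 365 + 5 / 24 + 48 / 1440 + 45 / 86400
diff_minutes = (hipparchus_year - modern_year) * 24 * 60
print(f"Hipparchus: {hipparchus_year:.5f} d, modern: {modern_year:.5f} d, "
      f"off by about {diff_minutes:.0f} minutes per year")
```

A quarter-day blunder shrinks to a few minutes of annual error once divided over the interval, which is why comparing old and new observations was so powerful.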
The technique also allowed for other discoveries. Hipparchus was able to detect a discrepancy between the tropical and sidereal year by averaging out the errors between his observations of the position of Spica and those of Timocharis around 150 years earlier. Thus he is usually credited as the first to identify the Precession of the Equinoxes, a roughly twenty-six-thousand-year cycle in which the Zodiac completes one full revolution of the cosmos with respect to the tropical year.
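Using modern values (assumed here for illustration, not Hipparchus's own figures), the gap between the sidereal and tropical year directly implies a precession period of roughly that length:

```python
# Modern mean year lengths in days (assumed for illustration).
tropical_year = 365.24219  # equinox to equinox
sidereal_year = 365.25636  # fixed star to fixed star

# The sidereal year runs about 20 minutes longer; the equinox therefore
# drifts through the full Zodiac once per tropical/(sidereal - tropical) years.
gap_minutes = (sidereal_year - tropical_year) * 24 * 60
precession_years = tropical_year / (sidereal_year - tropical_year)
print(f"gap: about {gap_minutes:.0f} minutes per year")
print(f"full precession cycle: about {precession_years:,.0f} years")
```

A twenty-minute-per-year drift is far below what ancient instruments could see directly, which is exactly why observations separated by 150 years were needed to reveal it.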
This gives us a few hints as to why astronomy developed so slowly as a scientific discipline. Without the precise instruments that were developed in the early modern era, astronomers were reliant on data sets, some of which were ancient even at the time these astronomers were working, to uncover the delicate order of the heavens. That is why Ptolemy's planetary model was considered such an incredible achievement for its time, and why it was appreciated as such well into the Sixteenth Century.
Ptolemaeus, Claudius, G. J. Toomer, and Owen Gingerich. Ptolemy's Almagest (Princeton, NJ: Princeton University Press, 1998), 137. ↩︎
The tropical year is found by measuring the amount of time that transpires between one equinox or solstice and the next. For example, from vernal equinox to vernal equinox. There are other ways to measure the length of the year, such as the sidereal year, but the tropical year has the most fidelity with the seasons. ↩︎
Swerdlow, 301. ↩︎
Swerdlow has misgivings about this, though, noting that Hipparchus's observations were inadequate to confirm any suspicions of a Precession. ↩︎