Traditionally in optical astronomy the brightness of stars is measured in magnitudes. This system has its origins in classical antiquity. In 120 BC Hipparchus classified naked-eye stars into six groups or magnitudes, with the first class comprising the brightest stars and the sixth the faintest. The scale was based on the progressive visibility of stars during the onset of twilight. The duration of twilight was divided into six equal parts and the stars that became visible during the first part were assigned the first magnitude, those that became visible during the second part the second magnitude and so on.
The response of the human eye to the brightness of light is not linear but more nearly logarithmic. In 1856 Norman Pogson defined the modern magnitude scale in a way which corresponded closely to the historical subjective classifications. He defined the ratio between two adjacent brightness classes $n$ and $n+1$ as $100^{1/5} \approx 2.512$. If we define an arbitrary 'standard' flux density $S_0$, then the apparent magnitude, $m$, of any source with an observed flux density $S$ is defined by:

$$m = -2.5 \log_{10}\left(\frac{S}{S_0}\right).$$
So the magnitudes² of any two stars with observed flux densities of $S_1$ and $S_2$ are related by:

$$m_1 - m_2 = -2.5 \log_{10}\left(\frac{S_1}{S_2}\right).$$
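As a quick numerical check of the Pogson relation, the short sketch below (the function name is my own, not from the text) converts a flux-density ratio into a magnitude difference:

```python
import math

def magnitude_difference(flux_ratio):
    """Magnitude difference m1 - m2 for a flux-density ratio S1/S2,
    following m1 - m2 = -2.5 log10(S1/S2)."""
    return -2.5 * math.log10(flux_ratio)

# A source 100 times brighter is five magnitudes brighter
# (magnitudes decrease with increasing brightness):
print(magnitude_difference(100.0))  # -5.0
```

Note that one Pogson step, a flux ratio of $100^{1/5}$, comes out as exactly one magnitude, which is how the scale reproduces Hipparchus's six classes.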
The system discussed so far relies on flux density, which is a function of distance from the star, and so says nothing of the intrinsic brightness of the star itself. The absolute magnitude, $M$, is defined as the apparent magnitude a star would have if observed at a distance of 10 parsec³. Considering the flux density $S(10)$ at 10 parsec and the flux density $S(D)$ observed at a distance of $D$ parsec, the inverse-square law gives $S(D)/S(10) = (10/D)^2$, so we can say:

$$m - M = -2.5 \log_{10}\left(\frac{S(D)}{S(10)}\right) = -2.5 \log_{10}\left(\frac{10}{D}\right)^2.$$
So the relationship between apparent and absolute magnitudes is given by:

$$m - M = 5 \log_{10} D - 5,$$
which is more usually written as:

$$M = m + 5 - 5 \log_{10} D.$$
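The distance modulus is straightforward to apply in code. The sketch below (my own illustration; the conversion 1 AU ≈ 4.848 × 10⁻⁶ pc is assumed, and the Sun's apparent magnitude is taken from the footnote) recovers the Sun's absolute magnitude:

```python
import math

def absolute_magnitude(m, distance_pc):
    """Absolute magnitude M from apparent magnitude m and distance in parsec,
    using M = m + 5 - 5 log10(D)."""
    return m + 5.0 - 5.0 * math.log10(distance_pc)

# Sun: m = -26.8, observed from 1 AU ~ 4.848e-6 pc
M_sun = absolute_magnitude(-26.8, 4.848e-6)
print(round(M_sun, 1))  # roughly +4.8
```

Seen from 10 parsec, the Sun would thus be a faint naked-eye star of about fifth magnitude, which illustrates how strongly apparent brightness depends on distance.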
Though the use of the magnitude scale is ubiquitous in optical astronomy, it is worth bearing in mind that it has three major drawbacks (see Hearnshaw):
²Some magnitudes: Sirius = −1.5, full Moon = −12.5, Sun = −26.8