Traditionally in optical astronomy the brightness of stars is measured in **magnitudes**. This system has
its origins in classical antiquity. In 120 BC Hipparchus classified naked-eye stars into six groups or
magnitudes, with the first class comprising the brightest stars and the sixth the faintest. The scale was
based on the progressive visibility of stars during the onset of twilight. The duration of twilight was
divided into six equal parts and the stars that became visible during the first part were assigned the
first magnitude, those that became visible during the second part the second magnitude and so
on.

The response of the human eye to the brightness of light is not linear but more nearly logarithmic. In
1856 Norman Pogson defined the modern magnitude scale in a way which corresponded closely to
the historical subjective classifications. He defined the ratio between two brightness classes $n$ and $n+1$ as $\sqrt[5]{100}\simeq 2.512$. If we define an arbitrary ‘standard’ flux density $F_{0}$, then the **apparent magnitude**, $m$, of any source with an observed flux density $F$ is defined by:

$$m=-2.5\log\frac{F}{F_{0}}$$ | (4) |

So the magnitudes^{2} of any two stars with observed flux densities $F_{1}$ and $F_{2}$ are related by:

$$m_{1}-m_{2}=-2.5\log\frac{F_{1}}{F_{2}}$$ | (5) |
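Equation (5) is easy to check numerically; this minimal Python sketch (the function name is my own) recovers Pogson's factor of 100 in flux across five magnitudes:

```python
import math

def magnitude_difference(f1, f2):
    """Eq. (5): m1 - m2 = -2.5 log10(F1 / F2)."""
    return -2.5 * math.log10(f1 / f2)

# A star 100 times brighter has a magnitude 5 smaller (i.e. brighter):
print(magnitude_difference(100.0, 1.0))  # -> -5.0
```

Note the sign: the brighter star comes out with the *smaller* magnitude, the inversion noted at the end of this section.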

The system discussed so far relies on flux density, which is a function of the distance to the star, and so says nothing about the star's intrinsic brightness. The **absolute magnitude**, $M$, is defined as the apparent magnitude that a star would have if observed from a distance of 10 parsec^{3}. Since flux density falls off as the inverse square of distance, comparing the flux density at 10 parsec with that at the observed distance $r$ parsec gives:

$$\frac{F\left(r\right)}{F\left(10\right)}={\left(\frac{10}{r}\right)}^{2}$$ | (6) |

So the relationship between apparent and absolute magnitudes is given by:

$$m-M=-2.5\log\frac{F\left(r\right)}{F\left(10\right)}=-2.5\log{\left(\frac{10}{r}\right)}^{2}$$ | (7) |

or

$$m-M=5\log\frac{r}{10}$$ | (8) |

which is more usually written as:

$$m-M=5\log r-5$$ | (9) |
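The chain from equation (7) to equation (9) can be verified numerically. A small sketch (function names are my own), which also inverts equation (9) to recover a distance from the distance modulus $m-M$:

```python
import math

def distance_modulus(r_parsec):
    """Eq. (9): m - M = 5 log10(r) - 5, with r in parsec."""
    return 5.0 * math.log10(r_parsec) - 5.0

def distance_from_modulus(m, M):
    """Eq. (9) inverted: r = 10 ** ((m - M + 5) / 5) parsec."""
    return 10.0 ** ((m - M + 5.0) / 5.0)

# Consistency with eq. (7), -2.5 log10((10/r)^2), for an arbitrary r:
r = 250.0
via_eq7 = -2.5 * math.log10((10.0 / r) ** 2)
print(abs(via_eq7 - distance_modulus(r)) < 1e-12)  # -> True

# At 10 parsec, m = M by definition, so the modulus vanishes:
print(distance_modulus(10.0))  # -> 0.0
# A distance modulus of 5 corresponds to 100 parsec:
print(distance_from_modulus(5.0, 0.0))  # -> 100.0
```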

Though the use of the magnitude scale is ubiquitous in optical astronomy, it is worth bearing in mind that it has three major drawbacks (see Hearnshaw [37]):

- it is an inverse scale, with fainter stars having larger magnitudes,
- it is a logarithmic scale,
- the base of the logarithm is 2.512.

^{2}Some magnitudes: Sirius = -1.5, full Moon = -12.5, Sun = -26.8

^{3}Strictly speaking this is the apparent magnitude which would be observed in the absence of interstellar extinction (see
Appendix A).

Copyright © 2001 Council for the Central Laboratory of the Research Councils