The Nyquist Theorem states that a signal must be sampled at more than twice its bandwidth to accurately reconstruct the waveform; otherwise, high-frequency content aliases to a frequency inside the spectrum of interest (the passband). The minimum sampling frequency required by the Nyquist Theorem is called the Nyquist frequency.

Figure 1. The Nyquist Frequency

*f_{nyquist}* > 2 × *f_{signal}*

where *f_{signal}* is the highest frequency of interest in the input signal.

Sampling at frequencies above *f_{nyquist}* is called oversampling. The Nyquist frequency is, however, only a theoretical absolute minimum. In practice, the user usually wants the highest possible sampling frequency, to give the best possible time-domain representation of the measured signal. In most cases, the input signal is therefore already oversampled.

The sampling frequency is derived by prescaling the CPU clock; a lower prescaling factor gives a higher ADC clock frequency. Beyond a certain point, however, a higher ADC clock decreases the accuracy of the conversion, as the Effective Number of Bits (ENOB) decreases.