Overview

Video cameras acquire images by sampling "pixels" (a contraction of
"picture elements") at periodic
time intervals. Most cameras utilize self-contained
crystal-controlled oscillators for timing the acquisition of pixels,
resulting in very stable pixel sampling rates.
Frame grabbers convert analog video signals into
digital form by digitizing pixels at the same rate they were
acquired by the camera. Unfortunately, cameras typically do not
output the associated pixel acquisition clock along with the video
signal. Consequently, frame grabbers must attempt to reconstruct the
camera's pixel clock by generating a stable reference clock which is
synchronized to the horizontal sync pulses of the camera video
signal.
Any timing difference between a frame grabber's
sampling clock and camera's pixel clock has the potential to cause
measurement error. Since useful video signals are seldom constant
over time, such measurement errors result in a form of noise. The
magnitude of this noise is affected by two components: sampling
clock error and video slew rate.
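
To make the relationship between these two components concrete, the amplitude error introduced at a given pixel can be approximated as the product of the sampling clock error and the local video slew rate. The small Python sketch below illustrates this relationship; the function name and the numbers are illustrative assumptions, not values from any particular camera or frame grabber.

    def jitter_noise_fraction(clock_error_ns, slew_fs_per_ns):
        """Approximate amplitude error (fraction of full scale) caused by a
        sampling clock error, in ns, on a signal slewing at the given rate
        (fraction of full scale per ns). Illustrative model only."""
        return clock_error_ns * slew_fs_per_ns

    # Illustrative values: a 5 ns clock error on a signal slewing 0.2% of
    # full scale per nanosecond produces a 1% amplitude error.
    print(jitter_noise_fraction(5.0, 0.002))   # -> 0.01
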
What is Pixel Jitter?

In order to
quantify the noise caused by pixel timing errors, it is useful to
isolate and focus on the sampling clock error since this
characteristic is independent of the nature of the video image being
captured. Typically, sampling clock errors are principally due to
fluctuations in a frame grabber's reconstructed pixel clock. "Pixel
jitter" is the measure of a frame grabber's time variation in
pixel-to-pixel sampling interval.
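
One reasonable way to express this definition numerically, sketched below in Python, is to take a series of measured pixel sample times, compute the pixel-to-pixel intervals, and report their peak deviation from the nominal pixel period. The sample times and nominal period shown are hypothetical.

    def pixel_jitter_ns(sample_times_ns, nominal_period_ns):
        """Peak deviation of the pixel-to-pixel sampling intervals from the
        nominal pixel period, in nanoseconds."""
        intervals = [b - a for a, b in zip(sample_times_ns, sample_times_ns[1:])]
        return max(abs(t - nominal_period_ns) for t in intervals)

    # Hypothetical sample times (ns) against a nominal 80 ns pixel period:
    times = [0.0, 80.0, 161.0, 239.0, 320.0]
    print(pixel_jitter_ns(times, 80.0))   # -> 2.0 (ns)
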
Example

To visualize how pixel jitter
affects a digitized video signal, let us consider an example.
Suppose we are monitoring an NTSC video signal at a particular
pixel, located at an image area where the luminance amplitude is
changing by 30 percent of full scale per pixel. This corresponds
approximately to a complete black-to-white transition over three
adjacent pixels. Furthermore, let's assume that the frame grabber
employs an eight-bit A/D converter for digitizing the incoming video
signal.
If pixel jitter is specified to be ±5 nanoseconds,
the relative sample time--the sample time with respect to the most
recent horizontal sync pulse--for the monitored pixel can vary by as
much as ten nanoseconds from frame to frame. In NTSC standard
resolution mode, the period of one pixel is close to eighty
nanoseconds, so the resulting pixel amplitude variations will be:
30% * 10ns / 80ns = 3.75%
Therefore, the monitored pixel may
vary by as much as 3.75 percent of full scale, which corresponds to
an error of roughly 9.6 LSBs from the eight-bit A/D converter (3.75% of 255 counts).
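
For reference, the arithmetic of this example can be reproduced in a few lines of Python. The figures are taken directly from the example above; the 255-count full scale is the usual assumption for an eight-bit converter.

    # Worked example: +/-5 ns pixel jitter (10 ns frame-to-frame), an ~80 ns
    # NTSC pixel period, and a signal slewing 30% of full scale per pixel.
    jitter_pp_ns    = 10.0      # peak-to-peak sample-time variation
    pixel_period_ns = 80.0      # approximate NTSC pixel period
    slew_per_pixel  = 0.30      # fraction of full scale per pixel

    error_fraction = slew_per_pixel * jitter_pp_ns / pixel_period_ns
    error_lsb = error_fraction * 255        # eight-bit A/D full scale

    print(f"{error_fraction:.2%} of full scale, about {error_lsb:.1f} LSBs")
    # -> 3.75% of full scale, about 9.6 LSBs
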
This example demonstrates that the effect of pixel
jitter depends greatly on the rate of change of the video signal
amplitude. Where the signal amplitude varies rapidly along the time
axis, the noise resulting from pixel jitter can be substantial. At
lower rates of change, the effects of pixel jitter are reduced and
eventually become negligible.
Measuring Pixel Jitter

The block diagram of a pixel jitter measurement circuit is shown
below. This circuit produces a monochrome video test signal
consisting of the standard NTSC sync signals and a synchronized
video sawtooth waveform.
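
Although the rest of the measurement procedure is not described here, the value of a sawtooth test signal is that its slew rate is constant and known, so any frame-to-frame amplitude variation observed at a fixed pixel can be converted directly back into a timing variation. The Python sketch below shows that conversion; the function name and the numbers are assumptions used only for illustration.

    def amplitude_to_jitter_ns(amplitude_spread_fs, sawtooth_slew_fs_per_ns):
        """Convert a frame-to-frame amplitude spread (fraction of full scale)
        at a fixed pixel into an equivalent timing spread, assuming a sawtooth
        test signal with a known, constant slew rate."""
        return amplitude_spread_fs / sawtooth_slew_fs_per_ns

    # Hypothetical numbers: a 1.25% full-scale amplitude spread on a sawtooth
    # slewing 0.125% of full scale per nanosecond implies ~10 ns of jitter.
    print(amplitude_to_jitter_ns(0.0125, 0.00125))   # -> 10.0
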