
Image Capture and Display

Areascan

An area scan camera takes images of a fixed area (like a TV image) and then returns to re-scan the same area to detect any changes in that picture. This is easily served by having a digital framestore, normally made of VRAM, accessible by a 'C4x, into which images are captured. Subsequent frames of data can simply overwrite the previous ones, just as in the real scene, or can be double buffered in memory so that a limited history of the image is stored. As an area is re-scanned the time interval between each frame can be quite large, so a technique known as interlacing is often used. Here the area is scanned twice per frame, with an offset of half a line between the two scans. Building up a whole image therefore needs two scans, known as fields, but the fields are timed half a frame apart, minimising the changes that occur between successive scans and making movement appear less jerky.
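The following C sketch is only an illustration of the double-buffering idea, not a description of any Sundance product's API: one buffer is overwritten by the incoming frame while the other holds the previous frame. The capture_frame() and process_frame() routines, and the image dimensions, are hypothetical placeholders for whatever a real system provides.

    #include <stdint.h>

    #define WIDTH  768
    #define HEIGHT 576

    /* Two frame buffers: one being captured into, one holding the     */
    /* previous frame.  capture_frame() and process_frame() stand in   */
    /* for the real acquisition and processing routines of a system.   */
    static uint8_t buffer_a[HEIGHT][WIDTH];
    static uint8_t buffer_b[HEIGHT][WIDTH];

    void capture_frame(uint8_t frame[HEIGHT][WIDTH]);
    void process_frame(uint8_t frame[HEIGHT][WIDTH]);

    void grab_loop(void)
    {
        uint8_t (*capture)[WIDTH]  = buffer_a;  /* frame being filled     */
        uint8_t (*previous)[WIDTH] = buffer_b;  /* last complete frame    */

        for (;;) {
            uint8_t (*tmp)[WIDTH];

            capture_frame(capture);         /* overwrite the older frame  */
            tmp = capture;                  /* swap roles: the frame just */
            capture = previous;             /* captured becomes the one   */
            previous = tmp;                 /* that is processed next     */
            process_frame(previous);
        }
    }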

Sometimes one field contains enough information to perform the processing required, and the extra time between fields can be used to perform that processing. This technique can run into problems if the field used in the processing is not always the same field, as there is a physical offset between the two. In this type of field processing system it can be useful either to specify which field will be captured, or at least to have an indication of which field has been captured, so that corrections can be applied in the processing. Area scan cameras usually generate a standard TV format such as PAL, NTSC or SECAM. These are timing specifications which define the time taken to scan a line, the timing of the synchronisation within that line, and the number of lines in a frame (field).
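Where both fields are captured, they can be woven back into a single frame in software. The sketch below is illustrative only: the field length is an example figure, and the assumption that the first field holds the even-numbered frame lines must be checked against which field the grabber actually delivered.

    #include <string.h>

    #define WIDTH        768
    #define FIELD_LINES  288                 /* lines per field            */

    /* Interleave two fields into one frame: even-numbered frame lines     */
    /* are assumed to come from field 0 and odd-numbered lines from        */
    /* field 1.  If the grabber delivered the fields the other way round,  */
    /* the two arguments must simply be swapped.                           */
    void weave_fields(unsigned char field0[FIELD_LINES][WIDTH],
                      unsigned char field1[FIELD_LINES][WIDTH],
                      unsigned char frame[2 * FIELD_LINES][WIDTH])
    {
        int line;

        for (line = 0; line < FIELD_LINES; line++) {
            memcpy(frame[2 * line],     field0[line], WIDTH);
            memcpy(frame[2 * line + 1], field1[line], WIDTH);
        }
    }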

There is usually a number of pixels per line associated with the way the camera is made, as cameras are usually made up of discrete areas of detector, known as pixels. PAL/NTSC/SECAM formats usually have about 700 pixels per line.

Area scan cameras normally provide either:


Linescan

A linescan camera just scans from side to side, and relies on the "scene" passing by. Subsequent lines are thus used to build up an image; that image is continuous and is not based on "frames". It is less sensible to store the data in a dedicated framestore, as the number of lines that need to be stored as "history" varies with each application. Here it is more sensible to provide the linescan data (through a minimal buffer) to a comm-port based interface. This allows a 'C4x to store as much or as little as necessary for the application, in whatever type of memory is appropriate.
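A minimal sketch of such a "history" buffer is given below, assuming that lines arrive one at a time; read_line_from_commport() is a hypothetical placeholder for the real comm-port transfer, and the line length and history depth are example figures only.

    #include <string.h>

    #define LINE_PIXELS  2048
    #define HISTORY      64      /* lines of history this application keeps */

    static unsigned char history[HISTORY][LINE_PIXELS];
    static int newest = -1;      /* index of the most recently stored line  */

    void read_line_from_commport(unsigned char line[LINE_PIXELS]);

    /* Store the next incoming line, overwriting the oldest one so that    */
    /* only the most recent HISTORY lines are ever held in memory.         */
    void store_next_line(void)
    {
        unsigned char line[LINE_PIXELS];

        read_line_from_commport(line);
        newest = (newest + 1) % HISTORY;
        memcpy(history[newest], line, LINE_PIXELS);
    }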

The nature of linescan imaging makes interlacing techniques impossible. There is no standard linescan format equivalent to PAL/NTSC/SECAM for area scan, but there are usually separate line-valid and data signals. The number of pixels per line can vary greatly, sometimes being as many as 4096. Linescan systems are usually chosen for applications where the item to be imaged is naturally moving, such as a production line. In these applications an area scan camera can be used, but camera frame timing control or strobe lighting must usually be employed to ensure that a complete workpiece falls into one frame.


Both Area and Linescan cameras can provide their outputs in Analogue or Digital form.

Analogue

Analogue cameras normally combine the synchronisation pulses with the video data onto one analogue signal, making cabling very simple, but requiring a sync-stripper circuit and a digitiser before the data can be stored.

In a monochrome system (or on each channel of an RGB system) the signal comprises a linesync pulse, two porch areas defining the blank level, and the video pixel data. Timing is defined by the format (such as PAL/NTSC/SECAM) and voltage levels are defined by the standard (such as RS170).

The digitisation frequency of a video line governs the number of pixels of information that will be gained from it. Usually this is chosen to match the number of pixels in the camera array, but it can be varied. One problem that should be considered when varying the pixel frequency is that the geometry of each pixel becomes rectangular, which can cause problems in systems where dimensional processing is to take place.
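As a rough worked example (the figures are representative only, not taken from any particular standard), the ratio of samples taken per line to sensor pixels per line shows how far the stored pixel geometry departs from the camera array:

    #include <stdio.h>

    int main(void)
    {
        double active_line_us  = 52.0;    /* assumed active line period    */
        double sample_rate_mhz = 10.0;    /* chosen digitisation frequency */
        double sensor_pixels   = 700.0;   /* pixels in the camera array    */

        /* Samples per line = active time (us) x sample rate (MHz).        */
        double samples_per_line = active_line_us * sample_rate_mhz;
        double horizontal_scale = samples_per_line / sensor_pixels;

        printf("samples per line  : %.0f\n", samples_per_line);
        printf("pixel width scale : %.2f (1.00 matches the array)\n",
               horizontal_scale);
        return 0;
    }

With these figures only about 520 samples are taken from a 700-pixel line, so each stored pixel covers roughly 1.35 sensor pixels horizontally, and any measurements made in pixels would need a corresponding scale factor applied.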

[Figure: Mono/RGB Video Signal]

A composite colour signal contains the same information as the monochrome signal, but has a colour modulation added to it. There is a colour burst in the back porch which defines the colour modulation's frequency and phase. A similar modulation is added to the whole of the video data area, where the phase defines the hue and the amplitude of the modulation defines the colour saturation. Special circuitry is needed to decode this complex combination of information, and typically the resulting colours are not as good as those from an RGB system.

It should be noted that digitising a composite signal with a digitiser intended only for a monochrome (or RGB) signal will result in artefacts appearing in the image, caused by the digitisation being performed on the peaks and troughs of the modulation. If a composite colour signal is to be digitised for intensity only, then a filter should be used on the video to remove this modulation before digitising.

It is conventional (but not always true) that monochrome images will be digitised with an 8-bit resolution, and that colour images stored as RGB will have an 8-bit resolution per colour channel, i.e. 24 bits per pixel.

[Figure: Composite Colour Video Signal]

Analogue Frame Grabber Products

To support lower cost systems, and until digital camera technology reaches price parity with its analogue equivalents, Sundance can offer analogue frame grabber products.


Digital

Many cameras are available with a digital output. This is most common for linescan cameras, but area scan cameras are also available. In these cases the pixel clock, synchronisation and digital pixel data are provided separately. Here the function of a grabber is made simpler, as the data is sampled with the pixel clock only when the synchronisation signals indicate it is valid.
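The following C fragment is purely an illustrative model of that behaviour: in a real grabber the sampling is done in hardware or programmable logic, not by the 'C4x itself, and the signal-access functions and image size shown here are hypothetical placeholders.

    #define WIDTH  1024
    #define HEIGHT 1024

    extern void wait_pixel_clock(void);     /* placeholder: next clock edge */
    extern int  frame_valid(void);          /* placeholder: frame sync      */
    extern int  line_valid(void);           /* placeholder: line sync       */
    extern unsigned char pixel_data(void);  /* placeholder: pixel data bus  */

    /* Latch pixel data on each pixel clock, but only while both           */
    /* synchronisation signals indicate that the data is valid.            */
    void grab_digital_frame(unsigned char frame[HEIGHT][WIDTH])
    {
        long count = 0;

        while (count < (long)WIDTH * HEIGHT) {
            wait_pixel_clock();
            if (frame_valid() && line_valid()) {
                frame[count / WIDTH][count % WIDTH] = pixel_data();
                count++;
            }
        }
    }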

Digital Frame Grabber Products

Sundance believes very strongly that digital cameras, whilst being higher in price than the equivalent analogue product, produce a higher performance and higher quality system at a lower overall cost. We are committed to designing state-of-the-art digital frame grabbers utilising flexible programmable logic, offering quick and efficient re-configuration of standard products to almost any digital camera requiring support.


Storage Formats

Pixel storage in 'C4x memory space can take two main forms: packed and unpacked pixel format. It is conventional to store 24-bit pixel data as one pixel per 'C4x word, but 8-bit component data can be stored either as one pixel per 'C4x word (unpacked) or as four pixels per 'C4x word (packed).

Packed data is more storage efficient, and can be copied faster, e.g. across comm-ports, but when it comes to pixel processing the unpacked format is better, as no masking and shifting operations are necessary. Most systems are better suited to one format or the other, but quite a number really need both.
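To make the trade-off concrete, a small sketch of packing and unpacking 8-bit pixels is shown below. It uses uint32_t purely for clarity; on the 'C4x itself every word is 32 bits wide, so the C types differ, but the masking and shifting involved is the same.

    #include <stdint.h>

    /* Pack four 8-bit pixels into one 32-bit word, pixel 0 in the low byte. */
    uint32_t pack4(const uint8_t px[4])
    {
        return  (uint32_t)px[0]
             | ((uint32_t)px[1] << 8)
             | ((uint32_t)px[2] << 16)
             | ((uint32_t)px[3] << 24);
    }

    /* Recover one pixel again: this mask-and-shift is exactly the          */
    /* overhead that unpacked storage avoids during pixel processing.       */
    uint8_t unpack1(uint32_t word, int index)
    {
        return (uint8_t)((word >> (8 * index)) & 0xFF);
    }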


Image Processing

A digitised image often needs to be processed in order to allow features to be detected in software. This can be performed in software by the 'C4x that is gathering the image data, but most image processing algorithms involve matrix operations on the data, requiring a large number of mathematical operations to be performed on each and every pixel. This means that even with the processing power of the 'C4x you cannot expect to process large images in "real time" (i.e. complete the processing of one image before the next one is received). Image processing libraries can make the software execute faster, but often still not fast enough.
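As an indication of the cost involved, the sketch below shows a plain 3x3 convolution, not taken from any particular library: nine multiply-accumulates per output pixel before any border handling or other refinements, which for a 512 x 512 image is well over two million operations per frame.

    #define WIDTH  512
    #define HEIGHT 512

    /* 3x3 convolution with a smoothing-type kernel (e.g. all ones with a  */
    /* divisor of 9).  Border pixels are left untouched for simplicity.    */
    void convolve3x3(const unsigned char in[HEIGHT][WIDTH],
                     unsigned char out[HEIGHT][WIDTH],
                     const int kernel[3][3], int divisor)
    {
        int x, y, i, j;

        for (y = 1; y < HEIGHT - 1; y++) {
            for (x = 1; x < WIDTH - 1; x++) {
                int sum = 0;

                for (j = -1; j <= 1; j++)
                    for (i = -1; i <= 1; i++)
                        sum += kernel[j + 1][i + 1] * in[y + j][x + i];
                out[y][x] = (unsigned char)(sum / divisor);
            }
        }
    }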

A popular method of accelerating this processing is to parallelise the operation: perform the processing on subsets of the image that can be handled in "real time", and distribute those subsets over a number of 'C4xs. The image acquiring module must then distribute the data to the various modules involved in the processing, remembering that the edge effects of matrix-type operations mean that some of the data is required by more than one processor. This type of application usually requires only a small amount of stored data, a small code segment that is repeated many times, and good communications to receive the data and transmit the results. Multiple-'C4x TIM-40s are ideal for this type of application, having the 'C4x cache for code, a small amount of memory for the data and several comm-ports for data reception and transmission.
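The bookkeeping for such a distribution might look like the sketch below, which splits an image into horizontal strips with a one-line overlap ("halo") either side, so that a 3x3 operation near a strip boundary still has the neighbouring lines it needs. The processor count, image size and overlap are illustrative only, and the comm-port transfers themselves are not shown.

    #include <stdio.h>

    #define HEIGHT      512
    #define PROCESSORS  4
    #define HALO        1      /* extra lines needed each side for a 3x3    */

    int main(void)
    {
        int p;

        for (p = 0; p < PROCESSORS; p++) {
            int first = p * HEIGHT / PROCESSORS;           /* lines this    */
            int last  = (p + 1) * HEIGHT / PROCESSORS - 1; /* strip owns    */

            int send_first = first - HALO;                 /* lines that    */
            int send_last  = last + HALO;                  /* must be sent  */

            if (send_first < 0)          send_first = 0;           /* clamp */
            if (send_last  > HEIGHT - 1) send_last  = HEIGHT - 1;  /* edges */

            printf("processor %d: owns lines %3d-%3d, receives %3d-%3d\n",
                   p, first, last, send_first, send_last);
        }
        return 0;
    }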

Specialist hardware TIM-40s can also be used to process pixel streams received and re-transmitted over 'C4x comm-ports. Framegrabbers usually provide features to grab single or continuous images, to generate interrupts or flags to track the progress of capturing, etc. Once the acquisition is initialised by the 'C4x, its full power is available for image processing or distribution tasks.


Display

Display of image data is very similar to the acquisition of images, but is normally an analogue output, and almost always RGB output. This is largely driven by the easy availability of computer monitors which can be used for the display.

The availability and cost-effectiveness of these monitors make it usual to display in a high resolution mode, using colour even when monochrome images are used.

Display of PAL/NTSC/SECAM formats is only used in special circumstances, e.g. where the output is to be recorded using a VCR, or where the image is to be fed to a computer's existing monitor through an intelligent computer display card that can accept video inputs and display them in a window on the host computer's screen.

A typical display can have 'C4x generated graphics, which can be held in the framestore, and hence use 8 or 24 bits per pixel, or in the overlay planes.

Overlay planes usually have only a small number of bits per pixel, but can override the data planes to provide markers, text and other graphics which are not affected by changes to the data store (i.e. grabbing or processing). The use of monitors with a higher resolution than the images being manipulated by the system allows both raw and processed images to be displayed at the same time. Cursors are useful for user interaction, and can be implemented in software, but this adds a significant software overhead. Hardware cursors can move user-defined shapes while the software only supplies the x and y co-ordinates to be used.
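A minimal model of that override, assuming a hypothetical 2-bit overlay plane and a small table of overlay colours, is sketched below; real display hardware performs this selection per pixel as the screen is refreshed.

    /* For each displayed pixel a non-zero overlay value selects a fixed    */
    /* overlay colour instead of the framestore data, so markers and text   */
    /* survive grabbing into, or processing of, the framestore.             */
    unsigned long displayed_pixel(unsigned long framestore_pixel,
                                  unsigned int  overlay_pixel,
                                  const unsigned long overlay_colours[4])
    {
        if (overlay_pixel != 0)             /* 2-bit overlay: values 1 to 3 */
            return overlay_colours[overlay_pixel];
        return framestore_pixel;            /* otherwise show the image     */
    }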

Interlaced and non-interlaced output formats can be used to reduce display flicker. Typically the higher resolution formats are interlaced, but it is the specification of the monitor used which governs this.

It is possible to imagine systems where both display and grabbing are useful within the same TIM-40, and also systems, such as pipeline processing systems, where it is more sensible to have the framegrabbing at one end of the pipeline and the display at the other.


Related Information and Products



© 1995-2000, Sundance Multiprocessor Technology Ltd. & Sundance DSP Inc., E&OE