Industrial Camera Wiki

This list defines the most commonly used terms relating to machine vision cameras.
 
Bandwidth
Bandwidth is the amount of data that can be transmitted over a given connection in a certain period of time. The higher the bandwidth of a connection, the more data can be sent or received per second. Bandwidth is specified in megabits or megabytes per second. Typical values are roughly 40 megabytes/second (USB2 connection), 100 megabytes/second (GigE connection) and 400 megabytes/second (USB3 connection).
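The available bandwidth directly limits the achievable frame rate. A rough sketch of this calculation in Python (the camera values are illustrative assumptions, not taken from this page):

    # Estimate the maximum frame rate a connection allows, assuming
    # the camera sends raw 8-bit monochrome images.
    bandwidth_mb_s = 100          # GigE connection, in megabytes/second
    width, height = 1280, 1024    # assumed sensor resolution
    bytes_per_pixel = 1           # 8-bit monochrome

    frame_size_mb = width * height * bytes_per_pixel / 1e6
    max_fps = bandwidth_mb_s / frame_size_mb
    print(f"{max_fps:.0f} frames/second")   # -> 76 frames/second
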
Bayer pattern
A Bayer pattern, also called a Bayer filter, is a regular square grid of red, green and blue (RGB) color filters that is used on the image sensors of machine vision cameras and consumer cameras. The pattern is named after its inventor, Bryce E. Bayer of Eastman Kodak. Each pixel captures only one of the three colors; the full RGB value of a pixel is reconstructed by interpolating the values of the surrounding pixels. This decoding of the Bayer pattern is called de-Bayering.
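A minimal de-Bayering sketch in Python/NumPy, assuming an RGGB filter layout and producing a half-resolution RGB image (real de-Bayering interpolates to full resolution):

    import numpy as np

    def debayer_rggb_halfres(raw):
        # raw: 2-D sensor array with an RGGB Bayer layout.
        # Each 2x2 block (R G / G B) becomes one RGB pixel, so the
        # output has half the resolution of the raw image in each axis.
        r = raw[0::2, 0::2]
        g = (raw[0::2, 1::2].astype(np.uint16) + raw[1::2, 0::2]) // 2
        b = raw[1::2, 1::2]
        return np.dstack([r, g.astype(raw.dtype), b])
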
Bit Depth
Bit depth indicates the number of bits used to encode the color or gray value of a pixel. The higher the bit depth, the more values can be encoded. In theory a monochrome image only needs a bit depth of 1 bit, but it can then only contain pure black and white, with no gray values. A typical monochrome image therefore uses 8 bits and can contain 256 different shades. A color image needs between 8 and 24 bits, depending on the quality of the image. At higher bit depths the image information is often stored directly as RGB (red, green and blue) values, so the individual colors do not have to be stored in a separate palette table.
Bits
A pixel contains information about color and brightness. The amount of information a pixel holds varies and is expressed in bits. A 1-bit pixel is the minimum; it has only two states, on and off, resulting in a black-and-white image. An 8-bit pixel allows 256 different shades, a 16-bit pixel 65,536 shades and a 24-bit pixel 16.7 million shades.
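The number of shades follows directly from the bit depth as 2 to the power of the number of bits:

    >>> for bits in (1, 8, 16, 24):
    ...     print(bits, "bits:", 2 ** bits, "shades")
    1 bits: 2 shades
    8 bits: 256 shades
    16 bits: 65536 shades
    24 bits: 16777216 shades
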
CCD
A charge-coupled device (CCD) is a chip that converts electromagnetic radiation into an electrical signal. The chip consists of rows of capacitors connected by electronic switches. By opening and closing alternately, these switches transport charge from one side of the chip to the other. The CCD chip is placed behind the lens of a machine vision camera, where the incoming light is transformed into an electrical signal. This signal is then converted by a separate chip into a digital signal consisting only of ones and zeros, which a computer can process. CCD sensors were among the most widely used sensors in machine vision cameras, but they are increasingly being replaced by CMOS sensors.
CMOS
Complementary metal oxide semiconductor (CMOS) is a semiconductor technology that uses metal oxide field effect transistors with both n- and p-type conduction. In this complementary way of switching, a circuit is connected to either the positive or the negative supply voltage while the opposite transistor does not conduct, so the circuit draws almost no current when it is not switching. Because of this low power consumption, CMOS technology is often used for image sensors, where it is the biggest competitor of CCD. Many of the operations that have to be performed off-chip for a CCD, such as amplification, noise reduction and interpolation, can be done on the chip itself with CMOS. A CCD converts the accumulated charge into a voltage at the end of an entire row of pixels, whereas with CMOS this conversion happens at every pixel individually. The latest generation of machine vision cameras uses only CMOS sensors and no longer CCD sensors.
Dynamic Range
The dynamic range is the ratio between the brightest and the weakest light that a camera can perceive. The value is measured in dB; the higher the value, the larger the difference between the brightest and the weakest light can be. Dynamic range is important when you want to capture an image of a high-contrast object, where both the dark and the bright parts have to be captured well.
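Dynamic range in dB is commonly computed as 20·log10 of the ratio between the saturation level and the noise floor. A sketch with illustrative values (not specific to any camera on this page):

    import math

    full_well_e = 10000    # assumed saturation capacity in electrons
    noise_floor_e = 2.5    # assumed read noise in electrons

    dynamic_range_db = 20 * math.log10(full_well_e / noise_floor_e)
    print(f"{dynamic_range_db:.1f} dB")   # -> 72.0 dB
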
Fill Factor
The fill factor of an image sensor is the ratio of the light-sensitive area of a pixel to its total area. For pixels without micro lenses, the fill factor is the ratio of the photodiode area to the total pixel surface. The use of micro lenses increases the effective fill factor, often to nearly 100%, by focusing the light falling on the total pixel area onto the photodiode.
Frame Grabber
A frame grabber is a card plugged into the PCI bus of a computer that provides the connection between the machine vision camera and the computer, transferring the camera's data to the computer's CPU. Newer interfaces such as GigE, USB2 and USB3 can be used without a frame grabber, since they use the standard network or USB ports of the computer. Interfaces like Camera Link and CoaXPress still require a frame grabber, which makes them considerably more expensive.
Frame Rate
The frame rate indicates how fast a device captures or processes frames. The term is used for films, computer graphics, cameras and displays. The frame rate is expressed in frames per second (fps) or in hertz (Hz).
LUT, Gamma Correction
Gamma correction is a non-linear operation used to correct the light intensity, illumination or brightness of an image. Gamma correction does not only change the brightness but also the RGB ratio. Gamma describes the non-linear relationship between the video signal and the brightness of the display. In machine vision cameras gamma correction is typically applied through a look-up table (LUT) that maps each input pixel value to a corrected output value.
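A sketch of how such a look-up table could be built for 8-bit pixel values:

    import numpy as np

    def gamma_lut(gamma, bits=8):
        # Map every possible input value to its gamma-corrected
        # output: out = in ** (1 / gamma), scaled back to the full range.
        levels = 2 ** bits
        x = np.arange(levels) / (levels - 1)
        return np.round((x ** (1.0 / gamma)) * (levels - 1)).astype(np.uint8)

    lut = gamma_lut(2.2)
    # corrected = lut[image]   # apply to an 8-bit image array
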
Global Shutter
With a global shutter image sensor, every pixel starts and ends its exposure at the same time. This requires a relatively large amount of on-sensor memory: after the exposure ends, the complete image is stored in memory so the data can be read out gradually. Manufacturing global shutter sensors is a complex process and is more expensive than making rolling shutter sensors. The main benefit of global shutter sensors is that they can capture fast-moving objects or products without distortion, which also makes them usable in a wider range of applications.
Image Sensor
An image sensor is the general term for an electronic component that contains many light-sensitive elements and can capture images electronically: it converts incoming light, an optical image, into an electronic signal. The most widely used image sensors are CCD and CMOS chips. Image sensors are used in a variety of cameras, both for video and for digital photography.
Image Sensor Format
Before purchasing a vision camera, it is important to know which image sensor sizes are available, since the image sensor greatly influences the quality of your images. The image sensor format is indicated in inches, but note that these inch designations do not correspond to the real sensor dimensions; the convention descends from the diameters of old television camera tubes. The sensor format is needed to select a matching lens. The most common sensor formats in machine vision are 1/4 inch, 1/3 inch, 1/2 inch, 2/3 inch and 1 inch. For C-mount cameras the sensor format varies from 1/4 inch up to 1.1 inch.
Image Sensor Sensitivity
The image sensor sensitivity is determined by the amount of incident light the sensor can convert into electrons. It depends on the pixel size and on the technology used to make the image sensor. Traditionally CCD sensors were more sensitive to light than CMOS sensors, but over the last years this has reversed. The Sony IMX sensors are very light sensitive, and we can highly recommend the Sony Pregius image sensors, such as the IMX265.
Interface
An interface is a method by which two systems communicate with each other; it converts information from one system into information that is understandable and recognizable to the other. A display, for example, is an interface between user and computer: it converts digital information from the computer into textual or graphical form. For machine vision cameras, the interface is the type of connection between the camera and the PC, such as Gigabit Ethernet, USB2 or USB3.
Interlacing
Interlacing or interlaced scanning is a technique for capturing and displaying moving images in which the image quality is improved without using more bandwidth. With interlaced scanning, an image is divided into two fields: one field consists of all the even lines (scanlines) and the other of all the odd lines. The two fields are refreshed alternately, halving the amount of image information per refresh. When the whole image is drawn in one pass it is called progressive scanning; when it is built up in two separate passes it is called interlacing. Combining two interlaced fields into one frame can result in a "comb effect": with moving images the two fields differ by 1/50th of a second, so they contain two slightly different snapshots of the same scene. A display has to compensate for this, which is called deinterlacing.
I/O
I/O stands for input/output. Signals being received are input; signals being sent are output. A vision camera has multiple I/O ports for communication, and each signal is either high or low. The output signal of a vision camera can, for example, be used to trigger a light source, to send a trigger signal to another vision camera in order to synchronize both cameras, or to send a signal to a PLC. The input ports are used, for example, to trigger the camera to capture an image, or to read the status of a button connected to the input port.
Machine vision
Machine vision is the ability of a computer to see. Machine vision is comparable to computer vision, but applied in an industrial or practical setting. To see, a computer requires a machine vision camera, which collects data by capturing images of a certain product, process, etc. The data that must be collected is specified beforehand in the software of the vision system. After the data collection phase, the data is sent to a robot controller or computer, which then executes a certain function.
Motion Blur
Motion blur is the phenomenon in which objects in a photo or video appear blurry as a result of movement of the object and/or the camera. Motion blur typically occurs when the exposure time is too long: the projected image of the object should not move more than half a pixel on the image sensor during the exposure. As an example, suppose the field of view is 1000x600 mm and the machine vision camera has a resolution of 1000x600 pixels, i.e. 1 pixel per mm. If an object moves at 1 m/second (1000 mm/second), motion blur becomes noticeable once the object moves more than half a pixel, i.e. 0.5 x 1 mm = 0.5 mm, during the exposure. The maximum exposure time to eliminate motion blur is therefore (max object movement 0.5 mm) / (object speed 1000 mm/second) = 0.0005 seconds = 0.5 ms = 500 µs.
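The same worked example as a small Python calculation:

    field_of_view_mm = 1000.0    # horizontal field of view
    resolution_px = 1000         # horizontal resolution
    object_speed_mm_s = 1000.0   # 1 m/second

    mm_per_pixel = field_of_view_mm / resolution_px
    max_movement_mm = 0.5 * mm_per_pixel             # half a pixel
    max_exposure_s = max_movement_mm / object_speed_mm_s
    print(f"{max_exposure_s * 1e6:.0f} us")          # -> 500 us
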
Pixel Binning
In image sensors, pixel binning is the process of combining the electric charge of neighbouring pixels into one "super-pixel", thereby reducing the number of pixels and increasing the signal to noise ratio (SNR). There are three kinds of pixel binning: horizontal, vertical and full binning. Pixel binning usually combines 4 pixels (2x2) at a time, although some image sensors can combine up to 16 pixels (4x4). With 2x2 binning the collected signal per output pixel increases by a factor of 4, improving the SNR, while the sample density (and therefore the resolution) is reduced by the same factor.
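A sketch of 2x2 full binning in Python/NumPy (on-sensor binning combines the charge before read-out; this is the software equivalent):

    import numpy as np

    def bin_2x2(img):
        # Sum each 2x2 block of pixels into one super-pixel,
        # halving the resolution in both axes.
        h, w = img.shape
        img = img[:h // 2 * 2, :w // 2 * 2].astype(np.uint32)
        return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
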
Pixels
Pixels are the picture points from which an image is built up; a pixel is the smallest possible element of an image. Every pixel can have an arbitrary gray or color value, and together the pixels form the desired image. The term pixel is derived from the words "picture" and "element" and is often abbreviated as "px". The number of pixels of an image is called its resolution: the higher the resolution, the more pixels per millimeter and the sharper the image. Resolution is often expressed in pixels per inch (ppi). The image sensor of a machine vision camera also consists of pixels; how many depends on the sensor. A sensor with an image of 6000x4000 pixels contains 24 million pixels, expressed as 24 MP or 24 Mpx (megapixels). The resolution of our cameras is always mentioned in the product description.
Progressive Scanning
Progressive scanning is the technique of displaying, storing or transmitting moving images in which a frame does not consist of multiple fields, but all rows are refreshed in order. It is the opposite of the interlaced scanning method used by older CCD sensors. All CCD sensors used in our machine vision cameras are progressive scan.
Quantum Efficiency
Quantum efficiency (QE) is the incident-photon-to-converted-electron (IPCE) ratio of a photosensitive device such as the image sensor of a machine vision camera: it indicates which fraction of the incoming photons is converted into electrons, and is usually specified per wavelength.
Region of Interest
The region of interest (ROI) of a machine vision camera is the area or part of the image sensor that is read out. For example, suppose a vision camera has an image sensor with a resolution of 1280x1024 pixels and you are only interested in the center part of the image. You can then set a ROI of 640x480 pixels inside the camera, so that only that part of the image sensor is captured and transmitted. Setting a ROI increases the frame rate, because only part of the image sensor is read out, reducing the amount of data to transmit per captured image and allowing the camera to make more images per second.
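A rough sketch of the frame rate gain, assuming the read-out time scales with the number of sensor rows that are read (the full-frame rate is an illustrative assumption):

    full_height_px = 1024
    roi_height_px = 480
    full_frame_fps = 60.0    # assumed frame rate at full resolution

    roi_fps = full_frame_fps * full_height_px / roi_height_px
    print(f"{roi_fps:.0f} frames/second")   # -> 128 frames/second
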
Resolution
In digital image processing, resolution describes the number of pixels of an image: the more pixels, the higher the resolution. Resolution is expressed either as the number of pixels horizontally and vertically, or as the total number of pixels of a sensor in megapixels. An image with a resolution of 1280x1024 pixels can also be described as having a 1.3 megapixel resolution.
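The conversion from pixel dimensions to megapixels is a simple multiplication:

    >>> 1280 * 1024          # pixels horizontal x vertical
    1310720
    >>> 1280 * 1024 / 1e6    # total pixels in megapixels
    1.31072
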
Rolling Shutter
A rolling shutter sensor captures images differently from a global shutter sensor: it exposes different lines at different times as they are read out. The lines are exposed one after another, and each line is fully read out before the next line starts. A rolling shutter pixel needs only two transistors to transport its charge, reducing heat and noise. Compared to a global shutter sensor, the structure of a rolling shutter sensor is simpler and therefore cheaper. The downside of the rolling shutter, however, is that not every line is exposed simultaneously, which causes distortion when capturing moving objects.
Shading Correction
Shading correction or flat field correction is used to correct for vignetting of the lens or dust particles on the image sensor. Vignetting is the darkening of the image corners compared to the center of the image. Shading correction / flat field correction requires the same optical setup that was used to capture the calibration image: the same lens, aperture, filter and positioning, and also the same focus setting.
Shutter Time / Exposure Time
The shutter time or exposure time is the amount of time during which light falls on the sensor of the camera. Lighting and illumination are very important elements in imaging. The exposure time determines how much light falls on the sensor and therefore how many details are visible in the image: with too much or too little light, details are lost in parts of the image that are too bright or too dark. Exposure time is therefore one of the three elements of the exposure triangle. The exposure time not only controls the exposure of your image, but also how sharp or blurred movement looks: with a short exposure time you can freeze fast movements, with a long exposure time you can make movements look fluent. The exposure time can be adjusted manually on your machine vision camera and can vary from 5 µs up to a second; depending on the type of application you will need a long or a short exposure time.
Signal to Noise Ratio
The signal to noise ratio (SNR) measures the quality of a signal in the presence of disturbing noise: it is the power of the desired signal relative to the power of the noise. The higher the value, the larger the difference between signal and noise, and the better weak signals can be recovered. As a result, a sensor with a high SNR value is better able to capture images in low light situations.
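Like dynamic range, SNR is usually expressed in dB, computed as 20·log10 of signal over noise when both are given in electrons. A sketch with illustrative values:

    import math

    signal_e = 5000                  # assumed mean signal in electrons
    noise_e = math.sqrt(signal_e)    # shot-noise-limited case

    snr_db = 20 * math.log10(signal_e / noise_e)
    print(f"{snr_db:.1f} dB")        # -> 37.0 dB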