Computational Imaging

Since smartphones gained night and portrait modes, the term computational imaging has become more familiar to end users. The image is no longer created with just a single lens, a single light source and a single shot: the final image is the result of a series of shots, combined in steps that can be extremely computationally intensive.

In industrial inspection, the "shape-from-shading" method is frequently used to detect scratches and minor topographical changes. In addition, many 3D processes use other computational imaging techniques to generate 3D images.

Shape from Shading

The “Shape from Shading” principle is a method for precisely determining the shape and structure of object surfaces. The component is illuminated from different directions and the resulting shading images are captured, as a rule sequentially, using an illumination controller and a four-segment illumination.

A special algorithm then combines the four individual images into one resulting image. The image processing software analyzes shadows and reflections in order to calculate the inclination and curvature of the surface. With most area scan camera-based systems, the object must not move while the images are being captured.
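
A minimal sketch of such a four-image evaluation is shown below, using NumPy and OpenCV. The file names, the direction labels and the simple gradient formulas are assumptions for illustration, not the exact algorithm of any particular software package.

```python
import numpy as np
import cv2  # OpenCV, used here only for image I/O

# Load the four directional shots (hypothetical file names) as float arrays.
imgs = {d: cv2.imread(f"{d}.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
        for d in ("north", "east", "south", "west")}

eps = 1e-6  # avoid division by zero in dark regions

# Surface slope in x/y: the normalised difference of opposing illumination
# directions responds to local tilt, the sum to overall reflectance.
gx = (imgs["east"] - imgs["west"]) / (imgs["east"] + imgs["west"] + eps)
gy = (imgs["north"] - imgs["south"]) / (imgs["north"] + imgs["south"] + eps)

# Albedo/texture image: mean of all four shots, largely free of shading.
albedo = sum(imgs.values()) / 4.0

# Simple curvature-like result image: magnitude of the local gradient field,
# which highlights scratches, dents and chipped edges.
curvature = np.hypot(gx, gy)
cv2.imwrite("result_curvature.png",
            cv2.normalize(curvature, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8))
```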


The method is particularly suitable for quality control, as even small deviations and defects on surfaces that would be difficult to detect with the naked eye are recognized. Scratches, flaking, chipped edges or clear impact marks that can be felt with the finger are easily detected.

HDR Imaging

High Dynamic Range (HDR) refers to the capability of an image processing system to capture and display a comprehensive range of brightness levels. In comparison to traditional cameras, which are limited in their ability to display a range of brightness levels, HDR allows for the capture of details in both the brightest and darkest areas of an image.

To achieve this, an image sequence is created with different exposure times and then combined into a single image. The number of images required will depend on the intended application. Typically, this will be two, three, four or more images.
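
As a sketch of how such an exposure series could be merged, the example below uses OpenCV's Mertens exposure fusion; the three file names and the choice of three exposures are assumptions for illustration.

```python
import cv2
import numpy as np

# Load a short, a medium and a long exposure of the static scene
# (hypothetical file names).
exposures = [cv2.imread(f) for f in ("exp_short.png", "exp_mid.png", "exp_long.png")]

# Mertens exposure fusion combines the stack without needing the exact
# exposure times; the result is a float image in the range 0..1.
merge = cv2.createMergeMertens()
fused = merge.process(exposures)

cv2.imwrite("hdr_fused.png", np.clip(fused * 255, 0, 255).astype(np.uint8))
```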

It is essential that the images are captured while the object is at a standstill, as this makes the exposures easier to combine. Small offsets between exposures can be compensated relatively easily, but changes in perspective cannot.

EDOF - Extended Depth of Field

Extended Depth of Field (EDOF) is an advanced technology in industrial image processing that enables the creation of images with an extended depth of field.
For small fields of view, the required depth of field cannot be achieved with conventional optics, even when the aperture is stopped down.

The EDOF process involves the combination of multiple images captured with varying focus settings. In order to capture the scene with different focus settings, motorised lenses or liquid lenses are typically required.

The combination of these images results in a single, sharp image across all depth planes. This is especially beneficial for inspecting objects with intricate geometries or significant variations in surface height.
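
A simple focus-stacking sketch along these lines is shown below; the stack size, the file names and the Laplacian-based sharpness measure are assumptions for illustration, not a specific vendor algorithm.

```python
import cv2
import numpy as np

# Focus stack captured with different focus settings (hypothetical file names).
stack = [cv2.imread(f"focus_{i:02d}.png") for i in range(10)]

# Sharpness per pixel: absolute Laplacian response on the grayscale image,
# slightly blurred to suppress noise.
def sharpness(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)
    return cv2.GaussianBlur(np.abs(lap), (9, 9), 0)

scores = np.stack([sharpness(img) for img in stack])   # shape: (N, H, W)
best = np.argmax(scores, axis=0)                        # index of sharpest slice

# Assemble the all-in-focus image by picking each pixel from its sharpest slice.
images = np.stack(stack)                                # shape: (N, H, W, 3)
h, w = best.shape
edof = images[best, np.arange(h)[:, None], np.arange(w)[None, :]]
cv2.imwrite("edof_result.png", edof)
```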

Depth from Focus

Depth from focus is a method used in industrial image processing to determine the depth information of an object by analysing images with different focus positions.
This process results in a three-dimensional scan of the object in question.

To achieve this, a number of images of the object are captured at different focus positions. In contrast to the EDOF method, this approach focuses on achieving a shallow depth of field.

By analysing the sharpness in a number of images, the depth information can be reconstructed. The areas of the object that are in focus lie within the focal plane, while the blurred areas lie outside it.
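
An illustrative sketch of this reconstruction is given below: the depth (height) of each pixel is taken from the focus position at which it appears sharpest. The slice count, focus step and focus measure are assumptions for illustration.

```python
import cv2
import numpy as np

# Focus stack: one image per focus position, acquired with a known step size
# between focal planes (values are assumptions for illustration).
n_slices = 40
step_mm = 0.05                       # focus increment between two slices in mm
stack = [cv2.imread(f"slice_{i:03d}.png", cv2.IMREAD_GRAYSCALE)
         for i in range(n_slices)]

# Local contrast as a focus measure: squared Laplacian averaged over a window.
def focus_measure(gray):
    lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
    return cv2.boxFilter(lap * lap, -1, (7, 7))

scores = np.stack([focus_measure(g) for g in stack])    # (N, H, W)

# The slice index with the highest focus measure gives the height of that pixel.
best_slice = np.argmax(scores, axis=0)
depth_map_mm = best_slice.astype(np.float64) * step_mm  # height map in mm

cv2.imwrite("depth_map.png",
            cv2.normalize(depth_map_mm, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8))
```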

To capture images in different depth planes, lenses with variable focus options and homogeneous, diffuse light sources are required. This keeps the individual recordings consistent so that they can subsequently be combined with one another.

The process is time-consuming and typically only feasible for small fields of view. For comparison, an image taken with laser triangulation shows clear shadowing on the right-hand side of the components; this does not occur with a depth from focus scan.

Brightfield-darkfield combination

Bright field and dark field are two central lighting techniques used in industrial image processing to make details and features of objects visible.

Both methods have different areas of application and require special components:

In bright field illumination, the light is aimed directly at the object and the reflected light is picked up by the camera. This results in uniform illumination of the surface, making surface structures clearly visible.

With dark-field illumination, the light is directed at an angle onto the object so that only the scattered light is captured by the camera. This method is particularly useful for highlighting fine surface structures and edges that would not be visible in bright field illumination.

With the help of an illumination controller and two different light sources, a resulting image can be created that shows certain surface defects even more clearly.
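
One possible way to combine the two shots in software is sketched below; the file names and the weighting factors are assumptions for illustration, and other combinations (for example a weighted sum) may suit a given defect type better.

```python
import cv2
import numpy as np

# One shot under bright field and one under dark field illumination of the
# same, stationary part (hypothetical file names).
bright = cv2.imread("brightfield.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
dark   = cv2.imread("darkfield.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# One possible combination: subtract the dark field response (edges, scratches)
# from the evenly lit bright field image, so surface defects appear as dark
# marks on a bright background. The weights are tuning parameters.
alpha, beta = 1.0, 1.5
combined = np.clip(alpha * bright - beta * dark, 0, 255).astype(np.uint8)
cv2.imwrite("bf_df_combined.png", combined)
```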

Give it a try!

RGB Super Resolution

The majority of industrial colour cameras are based on a monochrome sensor with a colour filter mosaic (Bayer pattern). In each 2×2 block of this mosaic, one pixel is red, one is blue and two are green, so the missing colour values have to be interpolated from the surrounding pixels. This works well in uniform image areas. At object edges, however, the interpolation algorithms cause inaccurate colour reproduction and blurring, which ultimately results in a significantly blurrier image than that produced by a monochrome camera.

One potential alternative is the sequential recording of three images in succession. To achieve this, diffuse red, green and blue lighting is (again) controlled by a lighting controller, generating the R, G and B information for every image pixel in sequence. Here too, the object must be stationary during image capture.
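
A minimal sketch of assembling the three monochrome shots into a full-resolution colour image is shown below; the file names are assumptions for illustration.

```python
import cv2

# Three monochrome shots of the stationary object, one per illumination colour
# (hypothetical file names).
r = cv2.imread("mono_red.png",   cv2.IMREAD_GRAYSCALE)
g = cv2.imread("mono_green.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("mono_blue.png",  cv2.IMREAD_GRAYSCALE)

# Every pixel now has a true R, G and B measurement, so no Bayer interpolation
# is needed; simply stack the channels (OpenCV stores images as BGR).
rgb = cv2.merge([b, g, r])
cv2.imwrite("rgb_full_resolution.png", rgb)
```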

Another method that has proven effective in avoiding interpolation artefacts with colour cameras: the image is captured at a significantly higher sensor resolution and then interpolated down, in the camera, to a sharp image at the lower target resolution.
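
A sketch of this downscaling step (here in software rather than in the camera) using OpenCV area interpolation; the target resolution is an assumption for illustration.

```python
import cv2

# Colour image captured at the full (high) sensor resolution.
high_res = cv2.imread("color_full_res.png")

# Scaling down to the lower target resolution averages several sensor pixels
# per output pixel, which largely removes the demosaicing artefacts at edges.
target_size = (1024, 768)   # assumed target resolution (width, height)
low_res_sharp = cv2.resize(high_res, target_size, interpolation=cv2.INTER_AREA)
cv2.imwrite("color_target_res.png", low_res_sharp)
```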

The result of this process is an RGB image with optimal sharpness.
