Calculating the resolution & measuring accuracy for machine vision based camera systems
The question of what accuracy the camera, or the image processing system as a whole, can achieve is not easy to answer.
The simple equation "accuracy = object field / camera resolution" is not entirely correct.
- From a physical point of view, a considerably higher resolution is required to clearly image and digitise an object structure. According to the Nyquist-Shannon sampling theorem, at least twice the frequency (pixel count) is required. This applies in particular when the finest object structures, patterns, lines, small details, etc. must be detected. To calculate for a monochrome camera, set the Nyquist divisor = 2 and software interpolation = none. For a Bayer colour camera, the effective resolution is reduced further still (divisor = 3 or 4). You can determine the resolving power of your machine vision system with the help of test charts such as the USAF 1951 target.
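The sampling rule above can be sketched in a few lines. This is a minimal illustration, not part of the original calculator; the function name and units (mm) are assumptions:

```python
import math

def required_pixels(object_field_mm, smallest_feature_mm, nyquist_divisor=2):
    """Pixels needed along one axis so that the smallest feature to be
    detected spans at least `nyquist_divisor` pixels (Nyquist-Shannon).
    Monochrome sensor: divisor = 2; Bayer colour sensor: divisor = 3 or 4."""
    return math.ceil(object_field_mm / smallest_feature_mm * nyquist_divisor)

# Example: a 100 mm object field in which 0.1 mm structures must be resolved.
print(required_pixels(100.0, 0.1, nyquist_divisor=2))  # monochrome: 2000 px
print(required_pixels(100.0, 0.1, nyquist_divisor=4))  # Bayer colour: 4000 px
```

Note how the colour case roughly doubles the pixel count again: the Bayer mosaic interpolates colour from neighbouring pixels, so the true spatial resolution per channel is lower.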
- State-of-the-art software methods, however, can also increase the accuracy of the system: measuring algorithms and object finding can enhance the accuracy tenfold or more by means of complex interpolation, evaluation of grey-level gradations, etc. (sub-pixeling: measuring more accurately than the actual pixel resolution allows). Such "mathematical promises", however, require a lot of know-how, optimisation of all components, good algorithms and, finally, detailed test series on the measuring system. When using ideal components, for instance, set the software interpolation = 0.3 for the calculation.
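The effect of sub-pixel interpolation on the attainable measuring step can be sketched as follows. Again a hedged illustration with assumed names and mm units, not the tool's actual formula:

```python
def measuring_accuracy_mm(object_field_mm, camera_pixels, subpixel_factor=1.0):
    """Smallest distinguishable measuring step: one pixel projected onto
    the object field, scaled by the software interpolation factor
    (1.0 = no sub-pixeling; 0.3 = ideal components, as suggested above)."""
    pixel_size_on_object = object_field_mm / camera_pixels
    return pixel_size_on_object * subpixel_factor

# Example: 100 mm object field imaged onto 2000 pixels.
print(measuring_accuracy_mm(100.0, 2000))       # plain pixel grid: 0.05 mm
print(measuring_accuracy_mm(100.0, 2000, 0.3))  # with sub-pixeling: ~0.015 mm
```

The second result shows why sub-pixeling is attractive, and also why it must be validated: the tripling of accuracy exists only if optics, illumination and algorithm actually deliver clean grey-level transitions.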
Note: Custom values can also be entered in the drop-down lists. To do so, please use the first entry, "user def."!
Please also read the chapter "Calculating the theoretical optic limiting resolution".
Attention: These values are rough guide values only. Only a machine vision expert can take further influencing factors of the hardware, software and system environment into account.