Multidimensional Distance Sensors

Line sensors (2-D) and area sensors (3-D) function similarly to the spot-shaped distance sensors (1-D) mentioned above. In the laser light section technique (Fig. 16a), the conventional laser triangulation technique is extended to two-dimensional measurement by deflecting the laser beam with a moving mirror (for example, a rotating polygon mirror). A matrix camera then evaluates the result, yielding measured values for multiple points by means of triangulation. A (light) section across the surface of the object is thus measured. A three-dimensional surface can also be measured simply by moving the coordinate measuring machine perpendicular to the section plane.
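The evaluation of a light section image can be sketched as follows. This is a simplified illustration only: the function name, the centroid-based line detection, and the single-angle triangulation model are assumptions for the sketch, not the evaluation actually implemented in such sensors.

```python
import math

def light_section_profile(image, pixel_size_mm, triangulation_angle_deg):
    """Convert a camera image of a projected laser line into a height profile.

    image: 2-D list of gray values; the bright laser line appears as an
    intensity peak in each column.  Simplified model: a shift of the line
    by one pixel row corresponds to a height change of
    pixel_size / tan(triangulation angle).
    """
    scale = pixel_size_mm / math.tan(math.radians(triangulation_angle_deg))
    profile = []
    for col in range(len(image[0])):
        column = [image[row][col] for row in range(len(image))]
        # gray-value centroid gives a subpixel estimate of the line position
        total = sum(column)
        row_pos = sum(i * v for i, v in enumerate(column)) / total
        profile.append(row_pos * scale)
    return profile
```

Each image column yields one point of the section; scanning perpendicular to the section plane then builds up the surface, as described above.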

Stripe sensors (Fig. 16b) also function according to the triangulation principle. A striped pattern is projected onto the material surface and evaluated in a manner analogous to the light section technique. If the entire three-dimensional surface is located inside the measuring range, no movement along the coordinate axes is required. In order to achieve a higher resolution with unique allocation of the points to their spatial coordinates, different patterns are usually projected and evaluated in succession. A form of subpixeling can be realized via the “phase shift technique”: the pattern is shifted in small increments and evaluated at each step.
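The phase shift evaluation at a single pixel can be illustrated with the standard four-step formula, in which the fringe pattern is shifted by a quarter period between exposures. The function name is illustrative; real sensors use refined variants of this scheme.

```python
import math

def phase_shift_4step(i0, i1, i2, i3):
    """Recover the fringe phase at one pixel from four intensity values
    recorded with the projected pattern shifted by 90 degrees each time.

    For I_k = A + B*cos(phi + k*pi/2), the offset A and modulation B
    cancel out, leaving the phase phi = atan2(I3 - I1, I0 - I2).
    """
    return math.atan2(i3 - i1, i0 - i2)
```

The recovered phase locates each point within one fringe period with subpixel precision; the coarser patterns projected beforehand resolve the period ambiguity.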

Fig. 16: Examples of multidimensional distance sensors; a) Laser light section, b) Fringe projection, c) Photogrammetry, d) Werth 3D-Patch.

Photogrammetric techniques (Fig. 16c) are based on acquiring the object surface from two different directions, using one image sensor for each direction. According to the triangulation principle, the spatial coordinates of each recognized object feature are calculated from the angular relationships. Since the object surface usually lacks sufficient structure of its own, a two-dimensional grid is projected onto it. The resulting pattern is captured by the two cameras and then evaluated. In contrast to a stripe sensor, the accuracy of the projection does not influence the measured result here.
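The angular relationship behind two-camera triangulation reduces, for the idealized case of parallel cameras, to the classic disparity formula. This parallel-geometry model and the function name are simplifying assumptions for illustration; practical photogrammetry handles arbitrary camera orientations.

```python
def stereo_depth(x_left_px, x_right_px, focal_px, baseline_mm):
    """Depth of a feature from its image positions in two parallel cameras.

    The same object point appears shifted (disparity) between the two
    images; with focal length f (in pixels) and camera baseline B:
        Z = f * B / disparity
    A larger disparity means the point is closer to the cameras.
    """
    disparity = x_left_px - x_right_px
    return focal_px * baseline_mm / disparity
```

Because only the feature positions in the two camera images enter the calculation, the projected grid merely has to be detectable, not geometrically accurate — the point made above about the projection accuracy.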

The Werth 3D-Patch (Fig. 16d) enables the exceptionally convenient and fast three-dimensional acquisition of surfaces. It performs the autofocusing process described above simultaneously for all pixels with a moving camera. Via a single run through the required measuring range along the optical axis, a large number of measured points can be captured in just a few seconds. The chief advantage of this technique is that it requires no special hardware except for a standard image processing sensor.
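The per-pixel autofocus idea can be sketched as a depth-from-focus evaluation of an image stack recorded during the run along the optical axis. The focus measure (squared gray-value difference to the neighboring pixel) and the function name are simplified assumptions; the actual 3D-Patch evaluation is not published in this form.

```python
def depth_from_focus(stack, z_positions):
    """For each pixel, select the z position at which a simple local
    contrast measure is largest -- the per-pixel autofocus principle
    behind depth-from-focus evaluation (simplified sketch).

    stack: list of images (2-D lists of gray values), one per z position.
    Returns a height map with one z value per evaluated pixel.
    """
    rows, cols = len(stack[0]), len(stack[0][0])
    height_map = [[0.0] * (cols - 1) for _ in range(rows)]
    for r in range(rows):
        for c in range(cols - 1):
            # contrast to the right-hand neighbour, maximized over the stack
            best = max(range(len(stack)),
                       key=lambda k: (stack[k][r][c] - stack[k][r][c + 1]) ** 2)
            height_map[r][c] = z_positions[best]
    return height_map
```

A single run through the measuring range thus yields a measured point for every pixel, which is why no hardware beyond the standard image processing sensor is needed.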

A white light interferometer moved along the optical axis also enables three-dimensional measurements. For each position of the sensor, a special interference technique determines which object points are located at a predefined distance from the sensor. While the interferometer is being moved, point clouds are thus determined for various section planes and then combined, as with the Werth 3D-Patch.
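The evaluation at a single pixel can be caricatured as finding the scan position of maximum interference contrast. This crude envelope-peak sketch (and the function name) is an assumption for illustration; real instruments use far more refined envelope and phase evaluation.

```python
def coherence_peak(intensities, z_positions):
    """In white light interferometry, interference contrast at a pixel is
    highest where the object point matches the reference path length.
    This sketch takes the surface height as the scan position with the
    strongest deviation from the mean intensity.

    intensities: gray values of one pixel recorded during the scan.
    """
    dc = sum(intensities) / len(intensities)
    best = max(range(len(intensities)),
               key=lambda k: abs(intensities[k] - dc))
    return z_positions[best]
```

Repeating this for every pixel produces the point clouds for the various section planes mentioned above.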

The sensor principles described above measure multidimensional point clouds. This is comparable to the “measurement in the image” performed with an image processing sensor. The measuring uncertainty attainable over the given measuring range is, however, limited, due especially to the finite resolution of the sensors. A distinction must be made between structural resolution and spatial resolution.

The structural resolution defines the size of the smallest detectable or resolvable structure (Shannon’s theorem). For commonly used CCD sensors, the relative structural resolution results from the ratio of the pixel size to the size of the measuring range and is thus determined by the number of pixels in the given direction (a ratio of about 1:1000). For example, in a measuring field with a dimension of 100 mm, only structures larger than 0.1 mm can be resolved.

In contrast, the spatial resolution determines the step width with which the position of a structure can be measured. It is determined, for example, by the digital scale system of the coordinate measuring machine as well as by the pixel size of the sensor. Via gray value interpolation (subpixeling), a sensor spatial resolution of approx. 1:10,000 can be attained. This ratio can be improved only by using higher-resolution sensor chips.
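The two resolution figures given above follow from simple ratios, which can be made explicit as follows; the function names and the assumed subpixeling factor of 10 are illustrative.

```python
def structural_resolution(field_mm, pixels):
    """Smallest resolvable structure: measuring field divided by the
    number of pixels (about 1:1000 for a 1000-pixel sensor)."""
    return field_mm / pixels

def structural_example():
    # 100 mm measuring field, 1000-pixel sensor -> 0.1 mm structures
    return structural_resolution(100.0, 1000)

def spatial_resolution(field_mm, pixels, subpixel_factor=10):
    """Position step width with gray-value interpolation: subpixeling
    improves on the pixel quantization by roughly a factor of ten
    (assumed here), giving approx. 1:10,000."""
    return field_mm / (pixels * subpixel_factor)
```

With the same 100 mm field, the spatial resolution after subpixeling is about 0.01 mm — still far coarser than the sub-micrometer step width the next paragraph shows is needed for micrometer-level measuring uncertainty.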

It should be noted here that a much better spatial resolution is required to attain the targeted measuring uncertainty. This means, for example, that a sensor spatial resolution of much less than 1 µm would be necessary to attain a measuring uncertainty of several micrometers. Moreover, only sensor measuring ranges less than 10 mm would be attainable. Therefore, the measurement of complex parts with larger measuring ranges makes it necessary to position the sensors with the coordinate measuring machine. This corresponds to the “measuring on the image” technique described earlier. In practice, 3-D sensors of this type are provided with larger measuring ranges (several tens of millimeters in length) and used to measure free-form surfaces and other features with large tolerances.