Microscopic vision has been widely applied in precision assembly. When the size of the parts involved exceeds the field of view of the vision system, an image mosaic technique must be used to achieve sufficiently high measurement resolution for precision assembly. In this paper, a method for efficiently constructing an image mosaic from non-overlapping sub-images is proposed. First, an image mosaic model for the part is created using a geometric model of the measurement system, which is installed on X-Y-Z precision stages with high repeatability, and a path for image acquisition is established. Second, images of a specified calibration plate are captured along the same path, and an entire image is formed based on the given model. The measurement results obtained from the calibration plate are used to identify mosaic errors and to apply compensation to the part being measured. Experimental results show that the maximum error is less than 4 μm for a camera with a pixel equivalent of 2.46 μm, demonstrating the accuracy of the proposed method. This image mosaic technique with non-overlapping regions simplifies image acquisition and reduces the workload involved in constructing an image mosaic.

HIGHLIGHTS

  • A method is proposed for a microscopic vision image mosaic without overlapping regions in precision assembly.

  • The method has high accuracy relative to the camera resolution.

  • Mosaic errors can be compensated using a specified calibration plate to improve measurement precision.

In precision assembly, a microscopic vision system is typically used for the alignment of parts and is very important for ensuring product quality and improving production efficiency.1–4 At the same time, it can also be used for measurement of parts during inspection and for selection during assembly.5,6

In visual measurement, if the part being measured is large and the vision system must cover it with a wide field of view, it is difficult to obtain images of sufficiently high resolution to enable precise measurements. Therefore, the field of view of the vision system should be small in order to acquire an accurate image of the part being tested. Consequently, it is common to capture different regions of the part in several images and then use an image mosaic method to merge these partial images into the entire image.7,8 Thus, the image mosaic technique can be used to balance accuracy and field of view for precise visual measurements.

Generally, image mosaic algorithms use image registration to detect overlapping areas between two partial images, and the entire image is then acquired via image fusion. Common image registration algorithms include the scale-invariant feature transform (SIFT) algorithm, the speeded up robust features (SURF) algorithm, and the features from accelerated segment test (FAST) algorithm. Image fusion is typically achieved using the weighted average method.9 
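For reference, this conventional registration-and-fusion pipeline can be sketched with standard OpenCV building blocks, as in the illustration below. This is not the method proposed in this paper; the file names, matcher settings, and 50/50 blending weights are placeholder assumptions.

```python
import cv2
import numpy as np

# Illustrative sketch of a conventional overlap-based mosaic:
# SIFT features are matched across the overlap, a homography is estimated
# with RANSAC to reject mismatches, and the images are fused by weighted averaging.
img1 = cv2.imread("sub_image_1.png", cv2.IMREAD_GRAYSCALE)  # placeholder file names
img2 = cv2.imread("sub_image_2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# Homography mapping img2 into the coordinate frame of img1.
H, _ = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)

h, w = img1.shape
warped = cv2.warpPerspective(img2, H, (w, h))
mosaic = cv2.addWeighted(img1, 0.5, warped, 0.5, 0)  # simple weighted-average fusion
```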

Image mosaic methods have been used for high-precision industrial visual detection. Feng et al.10 improved the efficiency of image registration by narrowing the matching region and using a local signal-intensity ratio algorithm to perform feature matching. They then applied the random sample consensus (RANSAC) method to eliminate mismatching points. Finally, they used Poisson fusion to fuse images and implement online detection of inertial confinement fusion terminal optical elements. Cai et al.11 used edge detection to segment the subregions with the most obvious gray gradient change in overlapping regions and then applied the SIFT algorithm for image registration and conducted image fusion based on a sigmoid function weight. They applied this approach to obtain an image mosaic of the inner wall of a gun barrel. Song et al.12 used an image mosaic method to detect an antenna circuit. The angle between the X and Y axes of a two-dimensional (2D) moving platform was calibrated using a calibration plate. A camera collected images with overlapping areas through a 2D moving platform, and the grating ruler recorded the position information of the moving platform simultaneously. The pixels of the mosaic image were then indexed to a specific local image to obtain the gray value of the pixel by bilinear interpolation. Wang et al.13 used the SIFT algorithm for image registration and realized an image mosaic of an entire groove structure during femtosecond laser processing. Zhou et al.14 used the SIFT algorithm for image registration, utilizing slope probability measurement and the RANSAC algorithm to eliminate error-matching points. The weighted average method was used for image fusion in the process of inspecting surface defects of steel rotary parts. Cai et al.15 used the histogram of oriented gradient (HOG) algorithm for image registration of tire molds and an averaging image fusion method to realize image fusion.

Image mosaic methods have also been used for microscopic visual measurements. Lei et al.16 applied white-light interferometry to measure the microstructure of a microelectromechanical system (MEMS), using the features of gradient and corner points to establish an evaluation algorithm in an overlapping area of two adjacent images, register the two images, and establish a transformation model for them to realize the image mosaic. Liu et al.17 proposed a feature-based multicycle image stitching algorithm for the evaluation of surface defects in large fine optics, categorizing the overlapping areas by the features they contained, stitching different types of overlapping areas in different ways, and finally obtaining a full aperture image. In addition, an image mosaic method has been used to splice the 3D data obtained from different scenes in large-scale 3D topographic measurements using the overlapping areas of local data point clouds for 3D point cloud registration, converting local data point clouds to the same coordinate system, and then obtaining a stereo image of a part.18–20 

However, most algorithms require overlapping areas between the sub-images and rely on image registration and fusion to obtain the complete image. Despite their usefulness, these processes are prone to false matches and carry a heavy computational load, leading to lower efficiency and potential mosaic gaps. Meanwhile, precision stages with high repeatability, a high-resolution camera, and a telecentric lens are commonly used in precision assembly systems. To take advantage of these properties, an image mosaic method with non-overlapping regions suitable for precision assembly systems is proposed here, relying mainly on hardware performance rather than algorithms. Because the errors in this method arise mainly from the system rather than from the algorithm and the quality of overlapping regions, it avoids errors caused by uncertain factors such as overlapping-region quality. Thus, a specific calibration plate can be used to achieve high-quality image mosaic results. By contrast, traditional image mosaic methods rely on algorithms and on sufficient quality of the overlapping regions. Consequently, multiple uncertain factors affect their accuracy, making it difficult to compensate for mosaic errors. At the same time, they require a greater number of sub-images with overlapping regions, leading to lower efficiency in precision assembly. Therefore, the proposed method provides an appropriate image mosaic approach for precision assembly systems. It obtains sub-images with non-overlapping regions using precision stages, simplifying the image acquisition workload and improving efficiency, and it uses a specified calibration plate to reduce mosaic error.

To illustrate this method, we took the measurement of the inner diameter of a part composed of eight-section arc structures as an example. First, a visual measurement system was set up using precision stages with three degrees of freedom, which had high repeatability and whose installation errors could be calibrated using an angle calibration plate. An image mosaic model without overlapping areas was then developed, based on the geometric characteristics of the visual system and the image acquisition path for the part. Second, the visual system was employed along the same image acquisition path to capture partial images of the specified calibration plate, and the complete image was then obtained using the proposed image mosaic model. Subsequently, the measurement result was compared with the standard value to determine the errors of the mosaic method and to compensate the parts to be measured.

The remainder of this paper is organized as follows. Section II illustrates the image mosaic model and the error compensation method for the measurement system installed on the X-Y-Z precision stages, using a part with eight-section arcs as an example. Section III describes the experimental equipment used to obtain images of the specified calibration plate, followed by calculation and discussion of the possible causes of mosaic errors. Section IV gives the experimental and compensated results for an actual part. Section V presents the conclusions of this study.

The shape of the part to be measured, which was composed of eight-section arc structures, is shown in Fig. 1(a). Its designed inner diameter was 22 mm, and the length of each structure was 3 mm. Its measurement accuracy was required to be 10 μm. The part was assembled with a shaft to form a component, and the eight-section arc structures were required to be in uniform contact with the shaft. Otherwise, an asymmetric stress would be generated, causing excessive deformation of the part and negatively affecting the performance of the electromechanical product. Therefore, measurements had to be performed to determine whether the eight-section arc structures did indeed meet the assembly requirements.

FIG. 1. Part to be measured: (a) designed diameter; (b) diameters to be measured.

It is obvious from Fig. 1(a) that only a line connecting points on a pair of arcs in opposite positions can pass through the design center of the part. The diameters calculated from such pairs of arcs can be used to inspect and select parts for assembly, because two points whose connecting line passes through the center determine a unique circle. The eight-section arc structures are numbered 1, …, 8 in the clockwise direction for convenience of description. They can be divided into four pairs of arcs in opposite positions, and we need only calculate the inner diameter of each pair, i.e., the values of ΦD1, ΦD2, ΦD3, and ΦD4 in Fig. 1(b). Thus, the diameters of these four pairs of arcs are the basic information for checking the part.

In precision assembly, machine vision is generally used to measure the size and pose of parts. However, commercial visual measurement instruments cannot be easily integrated into an assembly system. Therefore, a microscopic visual measurement system was designed, with the structure shown in Fig. 2(a). It consisted of a vision module, an X-Y-Z motion module, and a material table. The vision module, which included a camera, a telecentric lens, a coaxial light source, and a ring light source, was mounted on the X-Y-Z motion module. The X and Y axes of the motion module used KXL series precision stages (SURUGA SEIKI) to realize motion in the plane, and the Z axis used a KZL series stage to adjust the focal length. The repeatability of each precision stage was 0.5 μm. The vision module used a camera (MER-630-60U3M-L, Da Heng) with a resolution of 3088 × 2064 pixels and a pixel size of 2.46 × 2.46 μm2, together with a telecentric lens (MML1-HR65D, Moritex) with a magnification of 1 and a working distance of 65 mm. Thus, the field of view of the visual measurement system was ∼7.5 × 5 mm2, which was smaller than the size of the part. Therefore, the camera had to capture different partial arc images, with the image mosaic method then being used to obtain the complete information about the part.
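As a quick check, the quoted field of view follows directly from the sensor resolution and the pixel equivalent at the 1× telecentric magnification; a minimal sketch:

```python
# Field of view = sensor resolution x pixel equivalent (1x telecentric magnification).
pixels_x, pixels_y = 3088, 2064      # camera resolution
pixel_equiv_um = 2.46                # pixel equivalent, μm per pixel

fov_x_mm = pixels_x * pixel_equiv_um / 1000.0   # ≈ 7.6 mm
fov_y_mm = pixels_y * pixel_equiv_um / 1000.0   # ≈ 5.1 mm
print(f"Field of view ≈ {fov_x_mm:.1f} mm x {fov_y_mm:.1f} mm")
```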

FIG. 2. Microscopic visual device and installation angles: (a) coordinate system of visual device; (b) angles between visual system and world coordinate system.

To reduce computational cost and simplify the workload involved in acquiring partial images, we propose an image mosaic method with non-overlapping regions. The principle underlying this method is as follows.

As shown in Fig. 2(a), the coordinate system of the camera is denoted by xwoyw, and that of the X-Y-Z motion platform by xvoyv. In real applications, both the motion platform and the camera exhibit installation errors, as a result of which the X and Y precision stages are not perpendicular to each other. Consequently, there exist angular deviations between the xv and xw axes, as well as between the yv and yw axes. (In practice, there is also a deviation in the optical axis owing to installation errors of the camera. However, this is a high-order error, and the displacement in the Z direction is unchanged in the measurements. Therefore, the deviation of the optical axis can be neglected.) As shown in Fig. 2(b), the angle between the xv and xw axes is denoted by α, and that between the yv and yw axes by β, with α being positive when xv rotates clockwise relative to xw, and β being positive when yv rotates counterclockwise relative to yw.

The angles were calibrated using an angle calibration plate with an accuracy of 1 μm. The calibration principle for α is shown in Fig. 3. First, the center axes of the field of view, similar to crosshairs, are displayed onscreen by the program, as shown in Fig. 3(a). Next, the angle calibration plate is carefully positioned within the field of view so that both the horizontal and vertical edges of a square on the plate are precisely aligned with the crosshairs, as shown by position 1 in Fig. 3(b); this alignment is crucial for the accuracy of the calibration. Then, the X axis of the visual system is moved a distance ε so that the vertical axis of the crosshair overlaps the vertical line of another square on the plate, shifting the field of view from position 1 to position 2 in Fig. 3(b). The centerline offset δ in the yw direction is obtained from the images at the two positions. Finally, α is calculated as arcsin(δ/ε).
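As an illustration of this calculation, the following sketch converts a commanded travel ε and a measured centerline offset δ into the installation angle α; the numerical values are placeholders chosen only to give an angle of the same order as that reported later.

```python
import math

# Angle calibration: move the X stage by epsilon and measure the offset delta
# of the crosshair centerline along the image y_w direction; alpha = arcsin(delta / epsilon).
epsilon_um = 5000.0   # commanded travel along the stage X axis (placeholder value)
delta_um = 19.8       # measured centerline offset in the y_w direction (placeholder value)

alpha_deg = math.degrees(math.asin(delta_um / epsilon_um))
print(f"alpha ≈ {alpha_deg:.3f} deg")   # ≈ 0.227 deg for these placeholder values
```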

FIG. 3. Calibration principle for α: (a) crosshairs of field of view; (b) calibration principle.

An image mosaic model can be established after obtaining the installation angles of the precision stages. The transformation between two images captured at different positions in the system can be described as follows:
x1 = x0 + dx + dy sin β,
y1 = y0 + dy + dx sin α,          (1)
where (x0, y0) represents the coordinate point of the image acquired at the initial position and (x1, y1) the coordinate point of the image acquired when the visual device moves distances dx and dy in the X and Y directions, respectively, with dx and dy being positive when the direction of motion is along the positive direction of the respective coordinate axis. α and β denote the installation angles of the visual device in the X and Y directions, respectively.
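A minimal sketch of Eq. (1) as code is given below; the helper name is ours, and the displacements and coordinates are assumed to be expressed in the same units (e.g., converted to pixels via the pixel equivalent).

```python
import math

def mosaic_origin(x0, y0, dx, dy, alpha_deg, beta_deg):
    """Coordinate of a point (or sub-image origin) after a stage move, per Eq. (1).

    Motion along the stage X axis contributes a small component along the image
    y axis through alpha, and motion along the stage Y axis contributes a small
    component along the image x axis through beta.
    """
    x1 = x0 + dx + dy * math.sin(math.radians(beta_deg))
    y1 = y0 + dy + dx * math.sin(math.radians(alpha_deg))
    return x1, y1
```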

Taking into account the fact that the camera can capture only one arc structure at a time, together with the structure of the part and the hysteresis errors of the precision stages, the image acquisition path is designed as shown in Fig. 4. Hysteresis error is eliminated by returning to the initial position several times. The initial position of the measurement is located near the center of the part and is denoted by A0. The image acquisition path is A0 → A1 → A2 → A0 → A3 → A4 → A0 → A5 → A6 → A7 → A8, where the positions at which images are captured are denoted by A1, A2, …, A8 in order. When the visual system collects the images, we denote by dy1, dy2, dy5, dy6, and dy8 the displacements due to motion in the Y direction and by dx3, dx4, dx5, and dx7 those due to motion in the X direction.

FIG. 4. Image acquisition paths: (a) from A1 to A4; (b) from A5 to A8.

As shown in Fig. 5, the image mosaic coordinates are established according to the image acquisition path, where uov denotes the coordinate system of the mosaic image, u0o0v0 the initial position of the camera, and u1o1v1, …, u8o8v8 the coordinates of images collected by the camera for the eight-section arc structures.

FIG. 5. Image mosaic coordinates for the part.

We denote by (x0, y0) the origin of the coordinate system at the initial position of the visual device, which is a constant independent of the size of the part, and by (x1, y1), …, (x8, y8) the origins of the coordinate systems of the eight images collected by the visual system along the image acquisition path. Using Eq. (1) and the image acquisition path, the origins of the image mosaic coordinates are calculated for the eight images collected by the camera, as shown in Table I. A complete image can be obtained using these origins of the image mosaic coordinates.

TABLE I.

Origins of coordinates for image mosaic.

No.     x                          y
A1      x0 + dy1 sin β             y0 + dy1
A2      x1 + dy2 sin β             y1 + dy2
A3      x0 + dx3                   y0 + dx3 sin α
A4      x3 + dx4                   y3 + dx4 sin α
A5      x0 + dx5 + dy5 sin β       y0 + dy5 + dx5 sin α
A6      x5 + dy6 sin β             y5 + dy6
A7      x6 + dx7                   y6 + dx7 sin α
A8      x7 + dy8 sin β             y7 + dy8

The image mosaic model with non-overlapping regions can be summarized as follows. Taking advantage of the high repeatability of the precision stages, the vision module uses them to move definite distances and capture local images of the part. In practice, however, owing to installation errors, the precision stages are not in their ideal positions, and there are angular deviations between their axes and those of the camera. We therefore used an angle calibration plate to obtain the installation angles of the precision stages. Subsequently, based on the displacements due to motion and the geometric angles of the installation, an image mosaic model without overlapping regions was established to reduce the workload involved in image acquisition and mosaic construction. Thus, an image containing all the information about the part can be acquired using this image mosaic model.
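As an illustration, the origins in Table I can be reproduced by chaining Eq. (1) along the acquisition path of Fig. 4; in the sketch below, the helper function is ours and the displacement values are placeholders standing in for the commanded stage moves.

```python
import math

def mosaic_origin(x0, y0, dx, dy, alpha_deg, beta_deg):
    # Eq. (1): stage motion couples into both image axes via alpha and beta.
    return (x0 + dx + dy * math.sin(math.radians(beta_deg)),
            y0 + dy + dx * math.sin(math.radians(alpha_deg)))

alpha, beta = 0.227, 0.007    # calibrated installation angles (deg)
x0, y0 = 0.0, 0.0             # origin of the initial position A0

# Placeholder stage displacements; signs follow the positive axis convention.
dy1 = dy2 = dy5 = dy6 = dy8 = 3000.0
dx3 = dx4 = dx5 = dx7 = 3000.0

x1, y1 = mosaic_origin(x0, y0, 0.0, dy1, alpha, beta)   # A1: Y move from A0
x2, y2 = mosaic_origin(x1, y1, 0.0, dy2, alpha, beta)   # A2: Y move from A1
x3, y3 = mosaic_origin(x0, y0, dx3, 0.0, alpha, beta)   # A3: X move from A0
x4, y4 = mosaic_origin(x3, y3, dx4, 0.0, alpha, beta)   # A4: X move from A3
x5, y5 = mosaic_origin(x0, y0, dx5, dy5, alpha, beta)   # A5: X and Y moves from A0
x6, y6 = mosaic_origin(x5, y5, 0.0, dy6, alpha, beta)   # A6: Y move from A5
x7, y7 = mosaic_origin(x6, y6, dx7, 0.0, alpha, beta)   # A7: X move from A6
x8, y8 = mosaic_origin(x7, y7, 0.0, dy8, alpha, beta)   # A8: Y move from A7
```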

Figure 6 shows the experimental equipment. The microscopic vision system can be combined with the manipulator arm and other part loading tables to form a precision system for automatic assembly of parts. The manipulator arm can be used for operations on parts, such as their picking, transportation and assembly. In this case, the visual system can be used not only to inspect and select parts, but also to achieve alignment during the assembly process. The experimental equipment incorporated an industrial personal computer (IPC) with associated software to implement image acquisition, control of the brightness of the light source, and control of the motion of the precision stages to obtain the experimental images.

FIG. 6. Experimental equipment.

On the basis of the angle calibration principle illustrated in Fig. 3, the installation angles of the precision stages were obtained as α = 0.227° and β = 0.007°, as defined in Fig. 2(b).

The high repeatability of the stages is beneficial for the accuracy of the image mosaic. However, the systematic error caused by the movement of the precision stages cannot be ignored. Because the motion platform has high repeatability (0.5 μm) and, in applications, the precision stages perform the same motion tasks for the same kinds of parts, the error generated by the precision stages is relatively stable and small for each measurement. This error can be mitigated by averaging multiple measurements, leading to a more precise value.

To determine the mosaic errors of this system, we designed a specified calibration plate whose structure and inner diameter dimensions approximated those of the part. As shown in Fig. 7(a), its inner diameter was 21 980 μm and its dimensional accuracy was 1 μm. The visual system followed the same motion path to measure both the calibration plate and the actual part. The mosaic of the specified calibration plate was constructed using the data presented in Table I. Thus, the inner diameters of the calibration plate were computed, and by comparing them with the standard values, it was possible to determine the errors of the image mosaic.

FIG. 7. (a) Specified calibration plate. (b) Diameters to be measured.

The experimental results were described by assigning numbers to the arc structures of the calibration plate in a clockwise manner, mirroring the approach used for the actual part, and dividing them into four pairs with opposite positions. The respective inner diameters are denoted by ΦD1, ΦD2, ΦD3, and ΦD4 in (I), (II), (III), and (IV) of Fig. 7(b).

For the specified calibration plate, partial images were captured by the experimental device using the image acquisition paths described in Fig. 4, as shown in Fig. 8(a). By applying the proposed image mosaic method, the entire image could be acquired, as shown in Fig. 8(b).

FIG. 8. Images of calibration plate: (a) partial images; (b) image mosaic result for the calibration plate.

The calibration plate was measured 10 times with the visual system, and after processing of the entire mosaic image, the inner diameters were calculated using the least-squares method. The results are presented in Table II.
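The exact least-squares formulation is not detailed above; as an illustration, an algebraic (Kåsa-type) least-squares circle fit over the edge points extracted from a pair of opposite arcs could be implemented as in the following sketch, where the function name and input format are our own assumptions.

```python
import numpy as np

def fit_circle_least_squares(points):
    """Algebraic least-squares circle fit (Kasa method).

    points: (N, 2) array of edge-point coordinates from a pair of opposite arcs,
    already placed in the mosaic coordinate system (pixel units).
    Returns (xc, yc, diameter) in the same units.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Fit x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    xc, yc = -D / 2.0, -E / 2.0
    diameter = 2.0 * np.sqrt(xc**2 + yc**2 - F)
    return xc, yc, diameter

# A fitted diameter in pixels is converted to μm with the pixel equivalent (2.46 μm/pixel).
```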

TABLE II.

Results for the calibration plate.

No.        Diameter (μm)
           ΦD1          ΦD2          ΦD3          ΦD4
1          21 977.91    21 979.12    21 982.81    21 976.18
2          21 977.89    21 978.18    21 982.99    21 976.58
3          21 979.69    21 978.14    21 982.78    21 976.30
4          21 977.58    21 978.24    21 983.13    21 976.81
5          21 977.94    21 979.38    21 983.14    21 977.97
6          21 979.57    21 979.44    21 983.62    21 977.48
7          21 977.78    21 978.44    21 983.52    21 976.64
8          21 978.26    21 978.12    21 983.32    21 976.82
9          21 978.48    21 978.23    21 983.52    21 976.83
10         21 978.08    21 978.80    21 983.93    21 977.06
Average    21 978.32    21 978.61    21 983.28    21 976.87
Standard   21 980       21 980       21 980       21 980
Error      −1.68        −1.39        3.28         −3.13

According to the results in Table II, the error in ΦD3 is larger than that in ΦD1. Because the value of ΦD3 is determined by the positions of A3 and A4 in Table I, the error in ΦD3 is mainly caused by the calibration error in α. Meanwhile, the error in ΦD1 is primarily caused by the calibration error in β, which is based on the positions of A1 and A2 in Table I. For α = 0.227° and β = 0.007°, i.e., for α relatively large and β very small, the deviation of the calibration angles for α is larger than that for β. Thus, the error in ΦD3 is larger than that in ΦD1. At the same time, the errors in ΦD3 and ΦD1 are positive and negative, respectively. It is likely that the deviations of α and β have the opposite effects. The value of ΦD2, which is affected by both α and β, is determined by the accuracy of positions A5 and A7 in Table I. Perhaps this is why the error in ΦD2 is small. The value of ΦD4 is determined by the accuracy of the positions of A6 and A8 in Table I. It is likely that the hysteresis error in A8 is the main reason for a relatively large error in ΦD4. It can be seen from Table II that the maximum error of the image mosaic is less than 4 μm, whereas the pixel equivalent of the camera is 2.46 μm, indicating the accuracy of the image mosaic method.

The experimental apparatus similarly collected partial images of the part, as shown in Fig. 9(a). The entire image was produced using the image mosaic method, as shown in Fig. 9(b). The values of ΦD1, ΦD2, ΦD3, and ΦD4 were calculated by processing the image mosaic result.

FIG. 9. Images of the part: (a) partial images; (b) image mosaic result for the part.

The actual part was measured ten times. Using the errors in Table II, we applied compensation to the measurement results for the part to improve accuracy. The results are presented in Table III.

TABLE III.

Results for the part with compensation applied.

No.        Diameter (μm)
           ΦD1          ΦD2          ΦD3          ΦD4
1          21 835.61    21 960.69    21 959.56    21 901.01
2          21 836.04    21 960.98    21 959.79    21 901.30
3          21 836.07    21 960.81    21 959.66    21 901.06
4          21 837.64    21 960.86    21 959.90    21 901.46
5          21 836.09    21 961.06    21 959.68    21 901.80
6          21 836.13    21 960.68    21 959.59    21 901.63
7          21 836.91    21 961.66    21 961.68    21 901.73
8          21 837.88    21 964.06    21 960.73    21 901.96
9          21 838.25    21 964.83    21 963.98    21 904.00
10         21 837.62    21 962.71    21 962.19    21 902.13
Average    21 836.82    21 961.83    21 960.68    21 901.81
Results    21 838.50    21 963.22    21 957.40    21 904.94

The corrected average values are the measurement results for the part. If the results are within the tolerance range of the design size, then the part can be assembled. Otherwise, we must replace the part and conduct another measurement.
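As an illustrative summary of the compensation and acceptance check, the following sketch subtracts the errors of Table II from the measured averages and compares the corrected values with the design size; the tolerance band is a placeholder, since the actual design tolerance of the part is not restated here.

```python
# Mosaic errors identified with the specified calibration plate (Table II, μm).
mosaic_error = {"D1": -1.68, "D2": -1.39, "D3": 3.28, "D4": -3.13}

# Averages of the ten measurements of the part (Table III, μm).
measured_avg = {"D1": 21836.82, "D2": 21961.83, "D3": 21960.68, "D4": 21901.81}

# Compensation: subtract the mosaic error from each measured average,
# giving the corrected values listed as "Results" in Table III.
corrected = {k: round(measured_avg[k] - mosaic_error[k], 2) for k in measured_avg}

# Acceptance check against the design size (22 mm) and a tolerance band.
design_um = 22000.0
tolerance_um = 150.0   # placeholder; use the actual design tolerance of the part
acceptable = all(abs(v - design_um) <= tolerance_um for v in corrected.values())
```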

An image mosaic method with non-overlapping regions has been proposed. This method relies on the use of microscopic vision in precision assembly and on compensation using a specified calibration plate. The repeatability of each precision stage was 0.5 μm, the pixel equivalent of the camera was 2.46 μm, and the dimensional accuracy of the specified calibration plate was 1 μm. Experimental results demonstrated a maximum deviation of less than 4 μm, which meets the assembly requirements of the parts being measured. The time needed for a single actual measurement was less than 120 s, which is adequate for practical applications. In summary, the proposed image mosaic method not only guarantees accuracy, but also reduces the workload involved in collecting partial images and the computational expense of image mosaic construction.

Thus, the image mosaic method with non-overlapping regions described in this paper provides a simple and precise technique for measurements in precision assembly.

This work was supported by the Liaoning Revitalization Talents Program (Grant No. XLYC2002020) and the Major Project of Basic Scientific Research of Chinese Ministry (Grant No. JCYK2016205A003).

The authors have no conflicts to disclose.

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. Xiao S, Li Y. Visual servo feedback control of a novel large working range micro manipulation system for microassembly. J Microelectromech Syst 2014;23:181–190.
2. Liu S, Xu D, Zhang D, Zhang Z. High precision automatic assembly based on microscopic vision and force information. IEEE Trans Autom Sci Eng 2016;13:382–393.
3. Ye X, Liu P, Zhang Z, Shao C, Li Y. Error sensitivity analysis of a microassembly system with coaxial alignment function. Assem Autom 2016;36:25–33.
4. Xu D. Measurement and control based on microscopic vision. Beijing: National Defense Industry Press; 2014.
5. Rejc J. Robust, cheap and efficient vision system for mechanical thermostat switch sub-assembly inspection. Int J Precis Eng Manuf 2019;20:67–78.
6. Liu Y, Li S, Wang J, Zeng H, Lu J. A computer vision-based assistant system for the assembly of narrow cabin products. Int J Adv Des Manuf Technol 2014;76:281–293.
7. Li T, Qiu Z, Tang J. Research on measurement method of grinding wheel profile based on image mosaic. Meas Sci Technol 2020;31:035402.
8. Tang Y, Zhang J, Yue M, Fang X, Feng X. Temperature and deformation measurement for large-scale flat specimens based on image mosaic algorithms. Appl Opt 2020;59:3145–3155.
9. Pandey A, Pati U. Image mosaicing: A deeper insight. Image Vis Comput 2019;89:236–257.
10. Feng B, Chen F-d, Zhang J-l, Sun H-y, Peng Z-t, Liu G-d. Online inspection of final optics based on image mosaic for ICF system. Opt Precis Eng 2014;22:555–561.
11. Cai H, Wu X, Zhuo L, Huang Z, Wang X. Fast SIFT image stitching algorithm combining edge detection. Infrared Laser Eng 2018;47:1126003.
12. Song Y, Chen W, Ye Y, Liu J, Huang B, Guo T, Zhao J. Sub-pixel stitching method for large object with duplicate texture of automatic visual inspection. Acta Opt Sin 2014;34:0315002.
13. Wang F, Tu P, Wu C, Chen L, Feng D. Multi-image mosaic with SIFT and vision measurement for microscale structures processed by femtosecond laser. Opt Lasers Eng 2018;100:124–130.
14. Zhou A, Shao W, Guo J. An image mosaic method for defect inspection of steel rotary parts. J Nondestruct Eval 2016;35:60.
15. Cai N, Chen Y, Liu G, Cen G, Wang H, Chen X. A vision-based character inspection system for tire mold. Assem Autom 2017;37:230–237.
16. Lei Z, Liu X, Zhao L, Chen L, Li Q, Yuan T, Lu W. A novel 3D stitching method for WLI based large range surface topography measurement. Opt Commun 2016;359:435–447.
17. Liu D, Wang S, Cao P, Li L, Cheng Z, Gao X, Yang Y. Dark-field microscopic image stitching method for surface defects evaluation of large fine optics. Opt Express 2013;21:5974–5987.
18. He W, Zhong K, Li Z, Meng X, Cheng X, Liu X, Shi Y. Accurate calibration method for blade 3D shape metrology system integrated by fringe projection profilometry and conoscopic holography. Opt Lasers Eng 2018;110:253–261.
19. Yin S, Ren Y, Guo Y, Zhu J, Yang S, Ye S. Development and calibration of an integrated 3D scanning system for high-accuracy large-scale metrology. Measurement 2014;54:65–76.
20. Sabino D, Pereira J, Poozesh P. Digital image-stitching techniques applied to dynamic measurement of large structures. J Braz Soc Mech Sci Eng 2018;40:236.

Yawei Li received B.E. and M.S. degrees in mechanical engineering from the North University of China, Taiyuan, China, in 2013 and 2016, respectively, and he is currently pursuing a Ph.D. degree in mechatronic engineering with the Dalian University of Technology, Dalian, China. His current research interests include precision measurement and assembly.

Xiaodong Wang received a B.E. degree in process and equipment of machinery manufacturing from the Nanjing University of Aeronautics and Astronautics, Nanjing, China, in 1989, an M.S. degree in mechanics from Harbin Engineering University, Harbin, China, in 1992, and a Ph.D. degree in mechanical and electrical control and automation from the Harbin Institute of Technology, Harbin, in 1995. He is currently a Professor with the Dalian University of Technology. His current research interests include automatic precision assembly and measurement.

Tao Wang received a B.E. degree in mechanical engineering from Shandong University, Weihai, China, in 2017, and he is currently pursuing an M.S. degree in precision instrument and machinery with the Dalian University of Technology, Dalian, China. His current research interest is automatic precision assembly.

Yi Luo received a B.E. degree in chemical mechanical engineering from the Dalian University of Technology, Dalian, China, in 1994, and a Ph.D. degree in mechanical design and theory from Shanghai University, Shanghai, China, in 2001. She is currently a Professor with the Dalian University of Technology. Her current research interests include automatic precision assembly and polymer MEMS device fabrication.