Development of Machine Vision System for Pen Parts Identification under Various Illumination Conditions in an Industry 4.0 Environment

Abstract: The Fourth Industrial Revolution, widely known as "Industry 4.0" and based on the integration of information and communication technologies, has introduced significant improvements in manufacturing. However, vision systems still face various practical difficulties in dealing with the effect of complex lighting on the system platform. Therefore, a machine vision system for the automatic identification of pen parts under varying lighting conditions at a digital learning factory is proposed. The developed vision system follows a straightforward approach that effectively minimizes the effect of environmental lighting on the identification process. First, the image acquired by the designed vision framework is exported to a program, where non-uniform illumination is reduced through the implementation of Retinex image enhancement techniques. Then, the color-based Fuzzy C-means (FCM) algorithm, including improved mark watershed segmentation, is employed for pen part classification. Finally, the position features of the selected pen part are reported. The process was applied to a total of 210 images of upper pen parts (caps) and 241 images of lower pen parts (tubes) under different lighting scenarios. Results indicate that the average part identification precision differs between cap and tube parts, at 98.64% and 95.26%, respectively. The present methodology provides a promising scheme that can be feasibly adapted to other industrial color-based object recognition applications.


Introduction
Visual computing provides intelligent features and enabling technologies for the realization of industrial manufacturing systems [1]. Increased requirements in terms of autonomy and flexibility in industrial production in the upcoming era of Industry 4.0 have brought an increasing demand for visual computing. Among the technologies of Industry 4.0 most advantaged by visual computing, computer vision techniques have proved to be instrumental [2]. Applications such as visual inspection, part identification, and automation are mainly achieved through a carefully designed computer vision system. Regarding the general concept of smart design and production, providing precise visual feedback about the environment is a critical stage in enhancing manufacturing quality and speed [3]. Therefore, computer vision implemented through machine vision methods can be integrated with robotic processes to provide efficient and intelligent decision-making abilities for smart manufacturing [4]. A machine vision system, consisting of vision sensors, illumination, and computers, is a technology that enables machines to obtain a visual representation of the environment. However, despite the general advances in machine vision systems, driven mainly by enhanced processing power, better sensors, and better lighting components, further enhancements addressing the current challenges are still demanded [5]. Because the performance of automatic applications and quality assurance generally depends on the machine vision system's surroundings, promoting the adaptability of the vision system to environmental variability is the primary prerequisite. Accordingly, lighting conditions (day/night illumination, sunlight, shadows, and reflections) and random part occlusion patterns are the main factors that can influence the efficiency of a machine vision system, as they have a direct impact on its accuracy, robustness, computing cost, and real-time capability.
Hence, the development of a vision system compatible with the environmental conditions of the platform plays a critical role in enabling the realization of flexible, intelligent production systems. Therefore, an appropriate machine learning technology combined with a suitable technical configuration will allow the development of reliable part identification systems integrated into an industrial assembly line.
In the past few decades, in different fields such as agriculture and industrial manufacturing, computer vision methods have been widely researched for object recognition and inspection processes [6][7][8][9]. Within computer vision, many supervised and unsupervised machine learning techniques have been developed, including fuzzy logic theory [10,11], Artificial Neural Networks (ANN) [12], Support Vector Machines (SVM) [13], and deep learning [14]. Generally, convolutional neural network (CNN) based methods are powerful techniques for modeling complex recognition processes in applications with large and representative amounts of data. Ghazi et al. proposed a CNN system for identifying plant species by applying data augmentation techniques to plant task datasets [15]. Grinblat et al. developed a neural network for the successful identification of three different legume species by visualizing the relevant vein patterns based on the morphology of the leaves' veins [16]. However, a common challenge facing industrial research experiments is the lack of an adequate amount of data covering the various object features and environmental variations. Moreover, the excessive datasets required by current advanced algorithms (e.g., deep learning) increase the processing time and computational complexity [17]. Therefore, some recent works have addressed the requirement for a large dataset by pre-training a network on general open-access data [18] or by generating synthetic data [19].
Several studies have also been conducted using combinations of machine vision methods. For instance, Razmjooy et al. developed a machine vision method for inspecting potatoes using color-based classifiers and mathematical binarization [20]. An effort at fruit detection using a combination of color and texture features was carried out by applying color-based K-means clustering and the Circular Hough Transform (CHT) [21]. In a performance evaluation of fuzzy clustering techniques, John et al. developed a machine vision method using a far-infrared (FIR) camera to detect pedestrians based on adaptive Fuzzy C-means clustering and a CNN [22]. A Fuzzy Color Difference Histogram (FCDH) based background subtraction approach was proposed by Panda et al. for the detection of moving objects; the use of the FCDH in background subtraction reduced the number of false errors due to illumination variation and showed an efficient improvement in moving object detection [23]. Other recent studies have employed Fuzzy C-means algorithms for automated classification and segmentation [24,25]. Ouma and Hahn developed a vision-based detection method using morphological reconstruction and Fuzzy C-means clustering, showing that wavelet-FCM clustering is a suitable method for recognizing incipient potholes in 2D vision images [26]. In the present study, due to the lack of comprehensive datasets covering the diversity of pen part colors and part distribution conditions, an unsupervised Fuzzy C-means clustering algorithm is considered. Among the various fuzzy clustering algorithms developed in recent years, Fuzzy C-means clustering is one of the most widely used and successful classical techniques for color-based segmentation [27]. It obtains each feature point's membership by optimizing an objective function; then, by assigning the feature points to clusters according to their membership grades, the goal of automatic classification can be achieved.
In this paper, a generalized pen part identification approach under various illumination conditions is proposed by developing a robust machine vision system. The initial step consists of reducing inhomogeneous illumination by implementing a hardware framework accompanied by image preprocessing techniques. In this step, the traditional Retinex theory with a camera response model is applied to the acquired image to adjust each pixel's exposure ratio. These procedures provide the same amount of illumination for all images collected under varying lighting conditions and free the system from most ambient lighting effects such as shadows and glare. Then, the image post-processing approach, including morphological reconstruction combined with mark watershed segmentation, is performed. As the final step, the color-based FCM algorithm is applied to the image obtained from the previous step, resulting in pen part feature recognition. Finally, the robustness and effectiveness of the proposed approach are demonstrated through successful pen part recognition under various challenging conditions (inhomogeneous illumination and severe random part occlusion). The present study focuses on color-based part identification and is, to the best of the authors' knowledge, the first study to date to examine the combination of a color-based fuzzy classification method with color enhancement strategies.
The main contributions of the present work are as follows: (1) The developed approach provides a straightforward vision system mechanism adaptable to varying laboratory lighting conditions with minimal computational complexity. (2) This study innovatively combines low-light image enhancement techniques with a designed hardware-based vision system to obtain pen part images robust to environmental illumination variations. (3) An effective color-based FCM part classification is derived by adding overlapping object recognition through concavity point detection to the mark watershed segmentation algorithm. (4) The method is validated under different challenging laboratory lighting conditions for cap and tube part feature identification. (5) Finally, the current work introduces an efficient solution for color-based object recognition that can be feasibly adapted to other industrial object detection applications.
The remainder of the paper is organized as follows. Section 2 presents the methodology, including the problem description, the proposed hardware-based and software-based methods based on low-light image enhancement and morphological reconstruction, and the improved FCM algorithm based on the combination of overlapping object recognition and mark watershed segmentation. Section 3 discusses the experimental results of automatic pen part identification under varying lighting conditions. Section 4 provides the conclusions.

Materials and Methods
The rise of smart manufacturing systems has introduced significant advances in the autonomy and flexibility of Industry 4.0 applications [28]. With the introduction of Cyber-Physical Production Systems (CPPS), smart product design based on highly customized devices has elicited much interest from industry and academia. The application of CPPS on manufacturing shop floors, together with machine learning applications, provides intelligent production units [29]. Accordingly, in the present platform, the focus of the CPPS is on the utilization of an efficient and customer-oriented order processing system, employing practical hardware- and software-based approaches [30]. The present research is carried out at the Digital Learning Factory of the Institute of Mechatronic Systems (IMS) at the Zurich University of Applied Sciences (ZHAW), which is deployed as a research platform for CPPS. The digital learning factory is an Industry 4.0 production system equipped with industrial components, and it is used as a learning laboratory and demonstrator for research, development, and teaching. Using the example of an assembly plant, all essential aspects of an Industry 4.0 production and its networking within the value chain can be simulated, tested, and investigated in detail. The digital learning factory represents a complete, industry-oriented production system for customizable consumer products (in this case multi-colored pens) that can be ordered via a web or smartphone app. The production unit includes everything that makes up an Industry 4.0 production line, i.e., material handling and transportation, RFID technology for product recognition, human-robot collaboration, artificial intelligence algorithms for data analysis, cloud manufacturing strategies, service and maintenance strategies, state-of-the-art user interfaces (such as Microsoft HoloLens AR glasses), and extensive product customization capabilities. All production units are designed as cyber-physical systems (CPS).
Figure 1a shows the overall framework of the present Industry 4.0 research laboratory. Figure 1b shows the pen tube and pen cap workstations where the developed machine vision system is implemented and tested. The accurate identification of the pen parts at each of these stations is therefore a key step in developing a successful smart production platform.

Description of the Problem
In the pen part identification process, the color of the pen parts is the main recognition factor for the vision system's final decision. Random changes in image brightness are caused by natural and artificial highlights and shadows, and lead to significant regional color dispersion in images. Therefore, with unpredictable changes in lighting conditions during a working day, a smart vision system capable of accurate color detection robust to environmental variation is a primary requirement. Here, the experimental vision system is mainly composed of an industrial camera with an adjustable ring-shaped LED illuminator located above the parts plate, as shown in Figure 2.
In the current setup, based on the configuration design, there are two main challenges. First, the designed framework of the vision system leaves the station completely exposed to environmental illumination. Moreover, the location of the camera and the surrounding flash polarizer increases the influence of surrounding lighting by inducing reflections, glare, and shadows on the parts. Considering the problem described, the performance of the initial version of the vision system is assessed under different uncontrolled illuminations. As shown in Figure 3, lighting conditions such as diffuse artificial light (an LED light projector was selected to generate an uneven distribution of luminous intensity on the pen parts), natural sunny daylight, artificial lighting, and dark conditions are evaluated. In all conditions, the exposure time of the camera is set to 300 ms. As illustrated in Figure 3, two identical connected silver caps are marked in the different illumination scenarios. As an additional challenge, the connected caps appear in multiple color ranges depending on the situation: lighting conditions cause gold and silver parts to have overlapping color thresholds, which increases the difficulty of the color recognition algorithms. Furthermore, illumination variation makes semitransparent parts appear similar in color to opaque ones, and causes a part's color range to vary with its distance from the external lighting. Secondly, in each station the caps and tubes are dispersed randomly, which leads to a high probability of occlusion. Thus, real-time recognition of part features such as the cap's clip side, exact center of gravity, and orientation under occlusion introduces an additional difficulty for the machine vision system.
Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 21 April 2020 doi:10.20944/preprints202004.0387.v1
To address these problems, a fast, reliable recognition method, together with a vision system platform adaptable to environmental variation, is introduced in the following sections.

Hardware-based Approach
To overcome the mentioned problems, in the hardware approach a modification of the vision system adaptable to lighting variation is introduced. As a first step, color constancy in the captured images is achieved by adding diffuse backlight illumination to the vision setting. Adding a backlight to the setup reduces reflections and shadows and provides high-contrast images for the edge detection algorithm. A further change is made by placing a transparent, non-reflective plate a short distance above the backlight. An overview of the proposed vision system hardware configuration is given in Figure 4. To further improve the performance of the proposed hardware setup, the camera exposure time is manually set to 80 ms. The exposure time is the time during which the camera sensor is exposed to light; a low exposure time yields low-light images with minimal influence from the environmental lighting. The obtained images are then enhanced through a camera-based method in a subsequent preprocessing step. As already mentioned, given the deviation of the perceived part color from its actual color as well as the conflicting color tone ranges under varying illumination, a reference color bar is included in the hardware setup, as shown in Figure 4. Finally, based on the designed hardware configuration, the following sections present an image processing algorithm along with a machine learning method for the development of the complete part identification vision system.

Software-based Approach
The proposed software-based method consists of pre-processing and post-processing steps. In the pre-processing step, illumination estimation methods are employed to calculate the exposure ratio maps, and the low-light input image is enhanced using the camera response model and the calculated exposure maps. The main idea of this step is to remove reflections while maintaining color constancy. Low-light image enhancement is achieved using the Retinex image enhancement algorithm based on a camera response model (CRM). The processed image is then used as input to the post-processing step. Figure 5 shows an overview of the software-based approach. Accordingly, in the post-processing step, the enhanced image is processed with mathematical morphology and concavity point detection algorithms to highlight the individual parts. Then, the color-based Fuzzy C-means algorithm, including improved mark watershed segmentation, is used for pen part color recognition. As a final step, the pen part of the user-selected color closest to the robot Tool Center Point (TCP) is extracted.

Retinex Camera-based Image Enhancement
In this work, based on a preliminary comparison of several state-of-the-art low-light image enhancement algorithms [31][32][33], the method proposed by Ying et al. is selected [34]. This approach employs the traditional Retinex model to obtain the illumination map, where the exposure ratio is estimated for each pixel through an illumination estimation step. Then, the Beta-Gamma camera response model is used to adjust each pixel to the desired exposure according to the estimated exposure ratio map. Based on the Retinex theory [35], the illumination and reflectance components respectively represent the illumination dynamic range and the intrinsic property of objects. The Retinex theory is formulated as:

P = I ∘ R, (1)

where P denotes the input image, I and R respectively denote the illumination and reflectance components, and ∘ represents element-wise multiplication. Accordingly, the acquired image P is decomposed into illumination I and reflectance R. Then, the CRM brightness transform function g and the output image P′ are derived from the input image as:

P′ = g(P, K) = e^{b(1 − K^a)} P^{K^a}, (2)

where the exposure ratio map K is obtained from the estimated illumination map as:

K = 1 / (I + ε), (3)

where ε denotes a small constant preventing division by zero, and the camera model parameters a and b are set to −0.3293 and 1.1258, respectively, as accurately determined by Ying et al. by fitting the selected camera model to 201 real-world camera response curves [34].
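As a minimal illustration of this adjustment step, the sketch below applies the Beta-Gamma brightness transform to a toy list of pixel values, assuming pixels normalized to [0, 1] and a per-pixel illumination estimate already available. The function names `btf` and `enhance` and the toy data are illustrative, not taken from Ying et al.'s implementation:

```python
import math

# Camera model parameters reported by Ying et al. for the Beta-Gamma CRM;
# pixel values are assumed normalized to [0, 1].
A, B = -0.3293, 1.1258

def btf(p, k):
    """Brightness transform function g(P, k) = e^{b(1 - k^a)} * P^{k^a}."""
    gamma = k ** A
    beta = math.exp(B * (1.0 - gamma))
    return beta * (p ** gamma)

def enhance(pixels, illumination, eps=1e-3):
    """Adjust each pixel by its estimated exposure ratio K = 1/(I + eps)."""
    return [btf(p, 1.0 / (i + eps)) for p, i in zip(pixels, illumination)]

# A dark pixel (low estimated illumination) is brightened,
# while a well-exposed pixel (illumination close to 1) is nearly unchanged.
out = enhance([0.1, 0.8], [0.2, 1.0])
```

Note that g(P, 1) = P, so regions already at the desired exposure pass through unchanged, which is what preserves the color constancy of well-lit areas.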
Representative results of the combination of the preprocessing step and the modified hardware configuration are presented in Figure 6. The tiles of captured images in Figure 6a and Figure 6b respectively show the performance of the enhancement method under artificial light and sunny daylight, before and after the preprocessing step. Although the input images show a considerable color difference between the two lighting conditions, the stated approach was able to make the images robust to tone variation and lighting glare effects. Furthermore, it is also observed that the dark and highly reflective areas within the image, which are attributed to noise and background information, are significantly improved, and the samples' color constancy is preserved.

Morphological Reconstruction
Morphological reconstruction is a powerful operation in mathematical morphology, mainly used to simplify images while preserving their main features [36]. Therefore, employing morphological operations to fix characteristics and eliminate invalid features of the image is the key to maintaining the integrity of the information in the image segmentation process. Meanwhile, in the pen part identification process, specifying the features of individual objects is a precedential step for the color-based classification. Since the effect of the background on the pen parts is already eliminated, what remains is to distinguish separate parts in overlapping positions. Thus, the morphological operations are used as the basic operations to provide high contrast for representing the objects' information. Erosion, dilation, opening, and closing are the basic operations of mathematical morphology used to remove disturbances of varying extent. Typically, in a traditional morphological opening, erosion removes small objects and the subsequent dilation attempts to restore the shape of the remaining objects. In this way, the morphological operation not only improves the performance of the subsequent edge detection by preserving color constancy, but the morphologically reconstructed image is also used as a marker image in the watershed segmentation step to reduce over-segmentation caused by image noise and details. For a grayscale image F(x, y) and structuring element B(m, n), the erosion operator F_e and dilation operator F_d are defined as follows:

F_e(x, y) = (F Θ B)(x, y) = min_{(m,n)} {F(x + m, y + n) − B(m, n)}, (4)
F_d(x, y) = (F ⊕ B)(x, y) = max_{(m,n)} {F(x − m, y − n) + B(m, n)}, (5)

where Θ is the erosion operation and ⊕ is the dilation operation. The morphological opening and closing operators F_o and F_c are then expressed as:

F_o = (F Θ B) ⊕ B, (6)
F_c = (F ⊕ B) Θ B. (7)

To focus the description of the proposed approach, Figure 7 provides an overview of the post-processing steps for the final pen part recognition task.
As can be seen in Figure 7a, the dispersion of color values, reflections, and glare on the tube parts is reduced in the pre-processing step. The improved image is subjected to the following post-processing steps. As shown in Figure 7b, the enhanced image is converted to a filtered image using the morphological operations; after the opening and closing operations, the tubes in the resulting image become more prominent and better separated.
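The opening and closing operations used for this filtering step can be sketched with flat (zero-valued) structuring elements as follows. This is a simplified pure-Python illustration, not the production implementation; border pixels are handled by clipping the neighborhood, which is one of several possible conventions:

```python
def erode(img, size=3):
    """Grayscale erosion with a flat size x size structuring element."""
    h, w, r = len(img), len(img[0]), size // 2
    return [[min(img[i + di][j + dj]
                 for di in range(-r, r + 1) for dj in range(-r, r + 1)
                 if 0 <= i + di < h and 0 <= j + dj < w)
             for j in range(w)] for i in range(h)]

def dilate(img, size=3):
    """Grayscale dilation with a flat size x size structuring element."""
    h, w, r = len(img), len(img[0]), size // 2
    return [[max(img[i + di][j + dj]
                 for di in range(-r, r + 1) for dj in range(-r, r + 1)
                 if 0 <= i + di < h and 0 <= j + dj < w)
             for j in range(w)] for i in range(h)]

def opening(img, size=3):
    # Erosion followed by dilation: removes bright specks smaller than the SE.
    return dilate(erode(img, size), size)

def closing(img, size=3):
    # Dilation followed by erosion: fills dark gaps smaller than the SE.
    return erode(dilate(img, size), size)

# A single bright pixel (noise) on a dark background is removed by opening.
noisy = [[0] * 5 for _ in range(5)]
noisy[2][2] = 255
cleaned = opening(noisy)
```

Running `opening` on the toy image removes the isolated bright pixel entirely, while `closing` would preserve it, which is the behavior exploited above to suppress small disturbances without eroding the parts themselves.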

Mark Watershed Segmentation using Overlapping Objects Recognition
In this step, the watershed segmentation method is used to segment the pen part images into superpixel areas based on the topology and gradient magnitude of the image. Compared with other image segmentation algorithms, the watershed algorithm performs notably well at producing closed boundaries, which matters greatly for subsequent image processing tasks such as calculating areas and analyzing the characteristics of regions. The watershed algorithm was originally proposed by Digabel and Lantuejoul in 1977 [37] and was later improved by Grau et al. [38] in 2004 using adaptive information. This method can be considered a morphology-based technique built on a flooding idea: starting from the global minimum, the pixels are processed in increasing order of gradient value, and the algorithm floods basins in the gradient image until basins attributed to different minima meet on watershed lines. Here, the gradient of the image is derived from the grayscale image, resulting in Figure 7c. From the extracted contour it can be seen that the obtained boundaries are not closed, so the image cannot be accurately segmented into individual objects; consequently, the subsequent watershed segmentation merges overlapping objects, as shown in Figure 7d. To address this problem, the marker-based watershed method is required, which gives better results by using external markers. Therefore, to derive external markers for the pen parts, a two-layer watershed segmentation is proposed that incorporates adaptive overlapping object segmentation.
In the first step, connected components are recognized by applying the Matlab regionprops operation to the obtained watershed image, which excludes parts based on their extracted features, as shown in Figure 7e. Then, overlapping object segmentation is performed on the excluded objects based on the method presented by Zafari et al. [39]. This approach mainly relies on extracting edge information of approximately elliptical overlapping shapes. As shown in Figure 7f, the partial overlap of the tube parts produces shapes with concave edge points, termed seed points, which correspond to the intersections of the object boundaries. Therefore, in the second step, these concave seed points are used to segment the contours of the overlapping objects and separate the connected components. For seed point extraction, bounded erosion and the fast radial symmetry transform with an ellipse fitting process, described in detail by Zafari et al., are performed. As a result, in Figure 7f, the overlapping components are divided into separate parts by the overlapping segmentation method. Subsequently, the separated components are integrated with the primary segmented parts to be used as the external marker for the second watershed segmentation. After performing an erosion operation on them, the combination of the external marker with the image gradient is used for the subsequent mark watershed segmentation, as shown in Figure 7g. Finally, a marker-based watershed transform is performed, yielding accurate tube part segmentation. The identified tube parts are illustrated by a color labeling operation in Figure 7h. At this point, a color histogram of each segmented tube part is generated from the original RGB images, as shown in Figure 7i.
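The role of the concave seed points can be illustrated with a minimal sketch: on a counter-clockwise contour, vertices where consecutive edge vectors turn clockwise are concave, and on a blob formed by partially overlapping convex parts these are exactly the points where the object boundaries intersect. This is a simplified stand-in for the bounded erosion and fast radial symmetry transform of Zafari et al., and the polygon below is purely illustrative:

```python
def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def concave_points(contour):
    """Indices of concave vertices of a counter-clockwise polygon.

    A vertex is concave when the boundary turns clockwise there, i.e.
    when the cross product of the adjacent edge vectors is negative.
    """
    n = len(contour)
    return [i for i in range(n)
            if cross(contour[i - 1], contour[i], contour[(i + 1) % n]) < 0]

# CCW outline of a blob with two notches where overlapping parts meet;
# those two notch vertices are the concave seed points.
blob = [(0, 0), (2, 0), (2, 1), (4, 1), (4, 3), (2, 3), (2, 2), (0, 2)]
seeds = concave_points(blob)
```

Each detected index marks a boundary intersection candidate; pairs of such seed points are then used to cut the contour of the connected component into the individual objects.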

Superpixel-based Fuzzy C-means Clustering
Fuzzy C-means clustering is an unsupervised soft segmentation method that has been used extensively for its competitive image segmentation performance and simple implementation [40][41][42]. FCM clustering is based on the idea of uncertainty of belonging, expressed through membership grades, and can be more intuitive than hard clustering [43]. Lei et al. [44] presented a Superpixel-based Fast FCM algorithm (SFFCM) for color image segmentation. In that algorithm, a watershed transformation based on multi-scale morphological reconstruction integrates the color histogram of the superpixel result into the objective function of FCM to improve the clustering result and decrease the computation time. Although this approach provides acceptable results for color image segmentation, the watershed transformation suffers from the absence of marker initialization and may have difficulty segmenting highly overlapped objects for which a strong gradient is not present. Hence, based on the two-layered mark watershed segmentation of the previous steps, the Superpixel-based Fast FCM algorithm is applied for color-based pen part classification.
Generally, fuzzy superpixel-based clustering [45,46] has been adopted in recent research to improve the execution efficiency of color segmentation algorithms. A superpixel is defined as one of a large number of small, independent areas of different sizes and shapes derived from an image [47]. Fuzzy cluster analysis allows gradual memberships of data points to clusters, measured as degrees in [0, 1]. This provides the flexibility to express the fact that data points can belong to more than one cluster, and these membership degrees offer a much finer level of model detail. In the previous step, mark watershed segmentation using overlapping object recognition is adopted to obtain a preliminary over-segmentation that provides the FCM technique with individualized local object information. Then, the color histogram of the generated superpixels is computed. By integrating the color histogram into the clustering procedure, a low computational complexity is achieved, since the number of distinct color values to be processed is reduced. Finally, the obtained superpixel spatial information is incorporated into the objective function of the FCM algorithm to realize the color-based classification step. The objective function of the superpixel-based FCM is:

J = Σ_{i=1}^{n} Σ_{j=1}^{m} S_i d_{ji}^l ‖ (1/S_i) Σ_{p∈∂_i} x_p − c_j ‖², subject to Σ_{j=1}^{m} d_{ji} = 1, (8)

where n is the total number of superpixel areas, m is the number of clusters, d_ji is the membership between the ith superpixel and the jth cluster centroid, l is the weighting exponent, x_p denotes a pixel in the input color image, S_i represents the number of pixels in the ith superpixel area ∂_i (1 ≤ i ≤ n), and c_j is the jth cluster centroid.
The above constrained optimization problem can be solved by converting it to an unconstrained problem using the Lagrange multiplier method, which minimizes the following objective function:

J_λ = Σ_{i=1}^{n} Σ_{j=1}^{m} S_i d_{ji}^l ‖ x̄_i − c_j ‖² − Σ_{i=1}^{n} λ_i ( Σ_{j=1}^{m} d_{ji} − 1 ), (9)

where x̄_i = (1/S_i) Σ_{p∈∂_i} x_p is the mean color of the ith superpixel and λ_i is a Lagrange multiplier. Setting the partial derivatives of J_λ with respect to d_ji and c_j to zero gives:

∂J_λ/∂d_{ji} = l S_i d_{ji}^{l−1} ‖ x̄_i − c_j ‖² − λ_i = 0, (10)
∂J_λ/∂c_j = −2 Σ_{i=1}^{n} S_i d_{ji}^l ( x̄_i − c_j ) = 0. (11)

According to Equations (10) and (11), the membership function d_ji and the cluster center c_j are derived as follows:

d_{ji} = ‖ x̄_i − c_j ‖^{−2/(l−1)} / Σ_{k=1}^{m} ‖ x̄_i − c_k ‖^{−2/(l−1)}, (12)
c_j = Σ_{i=1}^{n} S_i d_{ji}^l x̄_i / Σ_{i=1}^{n} S_i d_{ji}^l. (13)

Therefore, to identify the demanded part, the derived FCM framework is applied to classify the parts into individual groups based on their obtained color histograms. The result of the color-based fuzzy clustering is shown in Figure 7j.
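The membership and center updates above can be sketched at the superpixel level as follows. For brevity a one-dimensional toy "color" per superpixel is used instead of an RGB histogram, and the even-spread center initialization is an illustrative choice, not part of the original algorithm:

```python
def sffcm(means, sizes, n_clusters=2, l=2, iters=50, eps=1e-9):
    """Superpixel-level fuzzy C-means sketch (1-D 'colors' for brevity).

    means -- mean color of each superpixel (the x-bar values)
    sizes -- pixel count S_i of each superpixel
    Returns (memberships d[j][i], cluster centers c[j]).
    """
    lo, hi = min(means), max(means)
    # Spread the initial centers evenly over the observed color range.
    c = [lo + (hi - lo) * j / (n_clusters - 1) for j in range(n_clusters)]
    d = []
    for _ in range(iters):
        # Membership update: d_ji proportional to ||x_i - c_j||^(-2/(l-1)).
        d = [[(abs(x - cj) + eps) ** (-2.0 / (l - 1)) for x in means] for cj in c]
        for i in range(len(means)):
            tot = sum(row[i] for row in d)
            for row in d:
                row[i] /= tot
        # Center update: a size- and membership-weighted mean of the colors,
        # c_j = sum_i S_i d_ji^l x_i / sum_i S_i d_ji^l.
        c = [sum(s * dj ** l * x for x, s, dj in zip(means, sizes, d[j]))
             / sum(s * dj ** l for s, dj in zip(sizes, d[j]))
             for j in range(n_clusters)]
    return d, c

# Four superpixels: two dark (~0.1) and two bright (~0.9) mean colors.
d, c = sffcm([0.1, 0.12, 0.9, 0.88], [10, 20, 15, 5])
```

Weighting each term by the superpixel size S_i is what lets the algorithm cluster a handful of superpixels instead of every pixel, which is the source of the speed-up reported by Lei et al.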

Shape Features Recognition
In this step, the position features of the part with the demanded color and the minimum distance to the robot TCP, which is usually located near the top right corner of the plate, should be detected. As mentioned in the previous sections, to overcome the difficulty of defining color thresholds that can cover all illumination changes, a reference color bar is established on the right side of the pen part plates. In this experiment, ten different colors are defined on the reference color bar, and a number between 1 and 10 is assigned to each color for pen part color classification. Accordingly, based on the fuzzy classification result, each part is assigned to one of the reference color bar groups, as shown in Figure 7k. Subsequently, the target part is extracted from the specified color cluster as the one with the minimum distance to the robot TCP, as shown in Figure 4. Then, based on Matlab region properties, each part's center of gravity and direction and, in the case of caps, the clip side are derived from the binary image of the corresponding part.
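As a small illustration of this selection step, the sketch below picks, among the parts classified into the requested reference-bar color number, the one closest to the TCP. The part tuples, coordinates, and TCP position are hypothetical:

```python
import math

def select_target(parts, target_color, tcp=(0.0, 0.0)):
    """Pick the part of the requested color class closest to the robot TCP.

    parts -- list of (center_x, center_y, color_number) tuples, with the
             color number taken from the 10-entry reference color bar.
    Returns the chosen part, or None if no part matches the color.
    """
    candidates = [p for p in parts if p[2] == target_color]
    if not candidates:
        return None
    return min(candidates, key=lambda p: math.hypot(p[0] - tcp[0], p[1] - tcp[1]))

# Three detected parts; two share color number 3, one has color number 7.
parts = [(120.0, 40.0, 3), (35.0, 22.0, 3), (60.0, 80.0, 7)]
nearest = select_target(parts, target_color=3)
```

Returning `None` for an unmatched color lets the caller report a missing part rather than hand the robot an arbitrary target.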

Experimental Results and Discussion
In this section, the proposed approach is evaluated on pen part images with different degrees of intensity inhomogeneity over 24 hours. All experiments were carried out in MATLAB R2018b on a laptop with an Intel(R) Core(TM) 2.70 GHz CPU, 8 GB RAM, and 64-bit Windows 10. The implementation of the proposed methodology and the cap classification result are presented schematically in Figure 8. The first row shows the evolutionary process of morphological filtering and the marker-based watershed transform on the reconstructed gradient image. The second row demonstrates the color labeling of the mark watershed segmentation result and the Fuzzy C-means clustering. Finally, in the third row, the cap part centers, directions, clip sides, and color numbers based on the reference color bar are identified. As shown in Figures 8 and 7k, the presented method achieves promising part identification accuracy under non-uniform illumination. Due to the illumination conditions and the high degree of occlusion, three caps were missed in Figure 8. Similarly, in the segmentation result, one tube position was mistakenly recognized in Figure 7k. Overall, the proposed pen part identification approach offers a good compromise between accuracy, cost, and applicability for the current digital factory.
For further evaluation of the identification rate provided by the algorithms, precision and recall were calculated for both cap and tube parts based on Equations (14) and (15):

Precision = N_TP / (N_TP + N_FP) (14)

Recall = N_TP / (N_TP + N_FN) (15)

where N_TP is the number of correctly detected parts (a pen part is considered correctly detected if its position features and the demanded color number with respect to the reference color bar are extracted correctly); N_FP is the number of incorrectly detected pen parts (a detection is produced, but the given color number or position features do not satisfy the demand); and N_FN is the number of parts that are tagged but not detected (the detection is either missed, or produced but fails to satisfy the demanded position and color criteria). To report the ability of the pen parts identification method based on Equations (14) and (15), four of the critical illumination conditions are summarized in Tables 1 and 2. A total of 210 caps and 241 tubes were used in the validation dataset, which also included various challenging occlusion situations. The overall detection results indicate that cap parts had higher accuracy than tube parts in all conditions. This can be explained by several factors. First, owing to their geometry, fewer caps remain overlapped after the morphological reconstruction compared to tubes, which increases the segmentation accuracy. Second, the cap parts have a larger surface area than the tubes, so their color histograms are more accurate and a smaller fraction of their surface is affected by glares and reflections. The presented results also show a notable difference between the recall and precision values, indicating that the error percentage generally comes from missing parts.
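Equations (14) and (15) map directly to code; the counts in the example below are illustrative placeholders, not the values behind Tables 1 and 2.

```python
def precision(n_tp, n_fp):
    """Eq. (14): share of produced detections whose color number
    and position features satisfy the demand."""
    return n_tp / (n_tp + n_fp)

def recall(n_tp, n_fn):
    """Eq. (15): share of tagged parts that were correctly detected."""
    return n_tp / (n_tp + n_fn)

# Illustrative counts only: 95 correct detections, 2 false detections,
# 5 missed parts under one hypothetical lighting condition.
print(f"precision = {precision(95, 2):.4f}, recall = {recall(95, 5):.4f}")
```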
Moreover, missed parts are more likely to occur in overlapping situations or demanding lighting conditions. Meanwhile, following the study hypothesis of developing a robust vision system with a minimum false recognition percentage, the presented approach is observed to be biased toward detecting correct parts by minimizing false detections rather than missed parts. This is because missed parts do not affect the overall pen assembly procedure, whereas incorrectly detected pen parts produce a wrong manufacturing result. Thus, the present method is mainly evaluated based on the precision value, which is shown to be a suitable criterion for assembly line part detection. The average CPU run-times for cap and tube detection are 2.186 s and 3.056 s, respectively, of which about 1.025 s is allocated to the preprocessing step. Although all the illumination conditions showed acceptable results, tube parts identification under a dark natural environment still requires further development, and improvement of the computational processing speed for real-time processing remains a topic for future research. Nevertheless, the proposed approach not only addressed the overlapping recognition challenge but also reduced the interference of varying illumination. Finally, by combining the image enhancement methods and the Fuzzy C-means algorithm with the improved marker-based watershed segmentation, the pen parts identification approach achieved a satisfactory overall precision for cap and tube parts of 98.64% and 95.26%, respectively. A novel approach for pen parts identification is thus presented, which simplifies the challenges of complex, inhomogeneous illumination and shaded pen parts through a combined software- and hardware-based methodology.
In this context, color constancy and the elimination of reflections and glare in an uncontrolled environment can be obtained by minimizing the camera exposure time together with Retinex low-light image enhancement in the pre-processing step. Effective color recognition is a critical advantage under a non-uniform lighting environment when the desired parts are identical in geometry and other features. The proposed method successfully identified pen parts under various illumination conditions, with artificial light and diffuse artificial light applied individually and in combination with a natural sunny environment. To further illustrate the performance of the proposed method, qualitative evaluations of the cap and tube parts identification results in terms of color and position feature recognition are shown in Figures 9 and 10. It can be seen that, while the method presents visually promising results under different lighting conditions, it also preserves the pen parts' color constancy in various conditions. In general, the results are quite reasonable considering that the algorithm requires neither high computational cost and time nor any prior hardware or software assumptions, such as controlled environmental lighting conditions. The proposed strategy is particularly useful for near real-time object detection based on color recognition, where object positions and features (in this case, the caps' clip side) can be precisely estimated through the post-processing algorithm, which segments and labels the pen parts separately using the concave points of overlapping parts and the two-layered marker-based watershed segmentation. Generally, supervised classification approaches, such as hybrid-based approaches for object detection, outperform unsupervised methods.
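The Retinex enhancement referred to above was implemented in MATLAB; as a rough illustration of the principle only, a minimal single-scale Retinex (log image minus log of its Gaussian blur) can be sketched in Python with NumPy. The kernel radius and sigma below are arbitrary choices, and practical implementations typically combine several scales (multi-scale Retinex) with a color restoration step.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def single_scale_retinex(channel, sigma=15.0):
    """log(I) - log(I * G): suppresses slowly varying illumination,
    keeping reflectance detail (single channel, float image in (0, 1])."""
    k = gaussian_kernel(sigma)
    # Separable Gaussian blur: along rows, then along columns.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, channel)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    eps = 1e-6  # avoid log(0)
    return np.log(channel + eps) - np.log(blurred + eps)
```

Applied per color channel, this leaves a nearly flat output wherever the image varies only as smoothly as the illumination itself, which is exactly the behavior that helps keep part colors stable across lighting conditions.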
However, these methods require an extensive and representative training data set when various uncontrolled lighting and parts distributions are involved. Hence, the main advantages of the proposed color-based FCM classification approach are that, on the one hand, the pen parts identification methodology is independent of image resolution, training data, and feature selection, in contrast to some state-of-the-art object detection approaches, and, on the other hand, it requires significantly lower computational time and cost and remains efficient even in the presence of severe variations in illumination conditions. The application of the proposed methods is not limited to effective object identification in indoor environments but can be extended to object recognition in outdoor environments, such as fruit plant detection in agricultural fields [48,49].
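For readers unfamiliar with the classifier, the standard Fuzzy C-means iteration (weighted centroid update followed by the membership update with fuzzifier m) can be sketched in Python with NumPy as below. This is the textbook algorithm only; the paper's specific color features, cluster count, and coupling with the marker-based watershed segmentation are not reproduced here.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """Plain Fuzzy C-means on an N x d array of feature vectors
    (e.g. part colors). Returns (centers, membership matrix U, N x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        # Centroid update: membership-weighted mean of the data.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # guard against zero distance
        # Membership update: u_ik = d_ik^(-2/(m-1)) / sum_j d_ij^(-2/(m-1))
        p = 2.0 / (m - 1.0)
        new_U = (d ** -p) / np.sum(d ** -p, axis=1, keepdims=True)
        if np.abs(new_U - U).max() < tol:
            U = new_U
            break
        U = new_U
    return centers, U
```

A hard label per part is then obtained by taking the cluster of maximum membership for each row of U, which is what a color-based classification step would feed into the reference color bar matching.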

Conclusions
In this work, a novel machine vision approach for color-based pen parts identification under various lighting conditions is proposed. The main focus of this study was to develop a machine vision system that achieves accurate decisions for a robot parts recognition task in an Industry 4.0 research environment. In the primary step, a hardware vision configuration followed by image enhancement techniques was selected to restrict the environmental lighting effect on pen parts colors. Applying this procedure improved the recognition algorithm's performance by eliminating the influence of shadows, glare, and color reflection on the background of the parts and also provided high contrast for subsequent parts feature recognition. In the parts detection phase, a color-based Fuzzy C-means classification method based on the combination of marker-based watershed segmentation and morphological reconstruction was applied, in which the overlapping objects were segmented through a mathematical morphological procedure and concavity point recognition to improve the subsequent segmentation process. Color and position recognition results over all pen parts samples, including 241 tube images and 210 cap images with a dominant number of randomly disordered parts, indicate that the proposed methodology achieved excellent overall precision of 98.64% and 95.26% for cap and tube parts under different illuminations, respectively. Nevertheless, although highly satisfactory detection accuracy with low computational complexity was obtained, the computation time still needs improvement to meet the demands of industrial real-time procedures.
Therefore, while the proposed two-layered marker-based watershed segmentation with FCM clustering performs well in classifying the pen parts in general, the calculation time can be further improved by membership filtering of the included color features in future work. Finally, the proposed methodology is not necessarily limited to pen parts identification but is expected to be generally applicable to many other industrial color-based parts recognition tasks, particularly those with challenging environmental and/or artificial illumination. Moreover, if made real-time capable, it can be embedded in object recognition research within the scope of Industry 4.0 applications, in areas such as unsupervised surveillance or semi-autonomous control.