Preprint
Article

An Ergonomic Risk Optimization System Based on 3D Human Pose Assessment and Collaborative Robot

Submitted: 27 October 2023

Posted: 30 October 2023


Abstract
The present work addresses the human pose estimation problem, developing a method that enables automated ergonomic risk assessment. A research methodology is developed to calculate joint angles from digital snapshots or videos using computer vision and machine learning techniques, yielding a more accurate ergonomic risk assessment. Starting from an ergonomic analysis, the study explores the use of a semi-supervised training method to detect the skeletons of workers. The methodology infers the positions and angles of the joints and calculates a criticality index based on RULA scores and fuzzy rules. Then, to prevent work-related musculoskeletal disorders (WMSD), improve production capacity and decrease ergonomic risk, the system is used in conjunction with a collaborative robot that supports the worker in carrying out the most critical operations. The method has been tested on a real industrial case in which manual assembly activities of electrical components are conducted. The approach developed can overcome the limitations of recent developments based on computer vision or wearable measurement sensors by performing an assessment with an objective and flexible approach to postural analysis.

1. Introduction and Literature Review

A wide range of workplace health problems may be due to Work-related Musculoskeletal Disorders (WMSD). Possible causes of WMSD include specific aspects related to i) the work environment, ii) the type of activities and iii) the occupational postures of the human body [1]. Such disorders may cause inflammatory or degenerative conditions of functional body structures, such as nerves, tendons, ligaments and muscles [2]. Several authors have shown that WMSDs are a major cause of injury in modern industries, producing an overall loss of productivity in developed countries [3,4]. At the core of ergonomics, occupational health and safety (OHS) programmes identify these sources of injury as ergonomic risk factors [5]. In many industrialized countries, mechanical overload, repetitive work, and prolonged non-ergonomic postures are widely recognized as risk factors for upper limb and lumbar spine injuries [6,7]. Therefore, a robust tool for estimating and monitoring workers' posture may be critical for the prevention of musculoskeletal disorders. Such a tool may highlight and assess all imbalances between workplace requirements and workers' capabilities, so as to prevent WMSD. Through observational techniques, one can roughly determine the posture of workers. Due to its simplicity and efficiency, the Rapid Upper Limb Assessment (RULA) is one of the observational procedures most widely used by safety and ergonomics professionals in industrial settings [6,8,9]. The main reason lies in the possibility of quick and reliable assessments of the upper body [10,11]. Based on the angles of the worker's joints, the so-called grand score, or overall numerical score [1], reflects the degree of postural load on the musculoskeletal system, resulting from the relative locations of body parts. A final score is determined using specific algorithms and may be used to appraise any potential hazards associated with an activity. However, RULA and other observational techniques show two significant drawbacks. First, experienced assessors are required, which may not be the most cost-effective option. Secondly, the final score may be subject to inconsistency brought about by the subjectivity of the evaluators [12].
Moreover, since these observation techniques require an ergonomic analyst to observe the work in real time or from recorded video [13], they can be affected by human error, producing results with low consistency and repeatability. These limits can be reduced or eliminated using advanced technologies [14,15]. Specifically, automated data collection and analysis may enable a new class of data-driven applications. In this context, technological advancements in hardware sensors and machine learning (ML) offer new opportunities for ergonomics. Along this line, [16] and [17] used inclinometers and accelerometers in their studies, while [18] made clever use of simple RGB colour cameras. Yet, intrusive direct measurement and wearable technology may limit or impact the free development of work activities [18,19].
Despite these potential limits, modern technologies based on Computer Vision (CV) and Machine Learning (ML) are able to provide accurate recognition and analysis of human posture. This analysis can support observation-based ergonomic risk evaluation and the related assessment methods. Along this research stream, a number of authors [15,20,21,22,23] use CV systems such as colour and depth devices (RGB-D) to analyze ergonomics. Yet, up-to-date CV-based approaches do not yet meet the requirements for correctly managing the complexities of real-world environments, such as uneven lighting and occlusion [24,25]. Several authors also required workers to be confined to controlled situations, free from changes in outside light, and constrained to a limited range of movements. In addition, they reported that changes in the camera's angle of view affected the accuracy and precision of the results [12,15,24,26]. These studies on the automation of the posture assessment task require additional equipment and are difficult to adapt to general industrial settings [15,27,28,29].
A number of approaches use computer vision algorithms to assess postural risk and to forecast RULA scores. In this context, Convolutional Neural Networks (CNNs) were used in a study by [27] to predict kinematic data from images, with the network output also classified in terms of RULA scores. In WMSD risk prediction, [14] compared the most widely used supervised machine learning classifiers, such as the Random Forest algorithm, the Naive Bayes classifier, the Decision Tree algorithm, the K-Nearest Neighbors algorithm, and Neural Networks (NNs).
As previously stated, in posture assessment problems CV approaches may grant better measurement accuracy, as well as a smaller impact on workers and the working environment [16,17,18]. The CV approach makes it possible to appraise posture, retrieving measures of body joints more accurately than wearable sensors. This is due to the possibility of accurately identifying body part sizes and proportions and of locating the body in the environment. With wearable sensors, the distance between the joints would have to be estimated, leading to greater error. The CV approach also has a smaller impact on workers in case of technology upgrades or environment changes.
In CV, the task of human posture assessment is to identify human body joints (knees, elbows, shoulders, etc.) in a digital image and then search the space of possible joint postures for a specific pose that matches the observed joints. The use of artificial intelligence tools such as CNNs has increased the robustness of these methods. In this research stream there have been several advances in human pose estimation, especially on 2D images, based on large-scale data collection and deep learning techniques. Yet, the performance of 3D human pose estimation remains unsatisfactory due to the lack of sufficient in-the-wild 3D data sets. [30,31] proposed algorithms for fusing multi-viewpoint video (MVV) with inertial measurement unit (IMU) sensor data to accurately estimate 3D human pose.
Modern open source software tools, such as OpenPose [9], allow for real-time joint and limb detection from digital images and videos. OpenPose is a bottom-up approach to estimating the pose of multiple people that takes an entire image as input to a two-branch CNN, jointly predicting confidence maps for body part detection and part affinity fields for their association. Given an input image, the network returns a list of detected bodies, each with its own skeleton of previously defined joints. Several works, such as VideoPose3D [32], enriched the research area of posture assessment by focusing on 3D estimation models for a more realistic and complete skeleton keypoint representation, enabling the application of such models in many domains, including industrial production.
In recent years, collaborative robots have become more and more popular in the manufacturing industry and represent a solution to reduce these risks [33].
Within a production system implementing collaborative robots, human operators can be supported in the physical workload with flexibility and across different tasks. The agility and cognitive abilities of human operators, combined with the repeatability and load capacity of robots, have a positive impact on productivity, flexibility, safety and costs [34].
This study aims to explore the use of artificial intelligence for 3D human pose estimation using individual snapshots or video sequences. A new methodology is developed for the assessment of body joint angles through Computer Vision, which produces a 3D representation of the human skeleton with 17 keypoints. The criticality index is appraised considering all the operations that the human operator conducts. Then, a collaborative robot is implemented to reduce ergonomic stress and to increase production capacity.
The paper is organized as follows. After this introduction and literature review, Section 2 describes the methodology used. Section 3 details the case study conducted; in this section, the criticality indices are evaluated and possible solutions are defined. Section 4 discusses the results obtained, while Section 5 concludes the work with its main limitations and future directions of the study.

2. Methods

2.1. New research framework

This study focuses on postural optimization based on AI combined with collaborative robots (cobots). Starting from a 3D reproduction of the human skeleton with 17 keypoints, the joint angles are computed. Then, a criticality index (Ic) is defined to i) measure the ergonomic stress for each operation conducted by the worker and ii) define the impact of each upper-body region on the ergonomic assessment. The second part of the research explores the use of a collaborative robot to reduce and/or eliminate those Elementary Operations (EOi) with the highest values of the criticality indices.
The research methodology is divided into four steps (Figure 1). The first one identifies all the EOi and defines the domains of the ergonomic analysis. In the second phase, a novel method is explored to assess ergonomic risk, in which the 3D human pose is analysed through computer vision and machine learning techniques.
In the third step, the criticality analysis for the EOi is assessed through RULA combined with a fuzzy inference engine (FIE). This tool computes the total criticality index (IcTOT) for each EOi.
In the fourth step, the criticality classes are calculated, and the use of the cobot is explored for those EOi whose indices fall into the highest criticality classes. The last portion of the research provides a new risk assessment that considers the operations carried out with the collaborative robot, to appraise the reduction of the criticality classes for those EOi implemented by the cobot.

2.2. Detailed phases

The main steps are the following:
  • Worker activities are divided into EOi [35]. These EOi define the input of the ergonomic analysis.
  • Introduction of a computer vision-based system for 3D human pose estimation that overcomes the partial findings of [1,36] and the limitations of 2D pose estimation models, which result in less accurate joint angle computation.
  • Total criticality indices for the EOi are aggregated through the fuzzy interface. These indices summarize the worker's ergonomic stress during manufacturing operations.
  • Cobot implementation for elementary operations with higher criticality class.
Step 1. EOi Identification
In this phase, elementary operations [37] are identified through a video recording of production activities during a work shift [35].
The main issue is to analyze the EOi to appraise the joint angles, also considering the cyclic operations conducted within the cycle time and the non-cyclic operations. In this phase repetitive operations are also considered, as they can cause ergonomic problems [38].
Step 2. Human Pose Assessment
3D CV is explored to assess the ergonomic risk for each EOi. Through the application of the approach presented in VideoPose3D [32], the keypoints of the human body are detected from digital images or videos. VideoPose3D uses a system based on dilated temporal convolutions over 2D keypoint trajectories to estimate 3D keypoint coordinates. Given an input image or video, the network provides a list of detected body keypoints, referenced as in Table 1 and Figure 2.
The model used in this study is a 17-joint skeleton, in which the skeletal keypoints are listed as shown in Figure 2. For each joint, VideoPose3D provides i) a vector with its relative position in the image and ii) the confidence of the estimation, ranging from 0 (null) to 1 (complete). From this information, we calculate the overall confidence of the skeletal detection as the average of the confidences of the joint estimates, which is used for filtering out noisy or spurious detections.
An example of this step is given in Table 1. The left elbow angle (EL), for instance, is calculated from the positions of the left shoulder, elbow, and wrist, which correspond to skeleton joints #11, #12, and #13.
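As a minimal sketch of this computation, assuming the pose estimator returns a (17, 3) array of 3D keypoint coordinates and a 17-element confidence vector (the names, shapes and the 0.5 confidence cut-off below are our assumptions, not the paper's interface), the joint angle at the middle joint can be obtained with basic vector algebra:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b, formed by the segments b->a and b->c."""
    v1 = np.asarray(a) - np.asarray(b)
    v2 = np.asarray(c) - np.asarray(b)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical detector output: 17 keypoints with 3D coordinates and confidences.
keypoints = np.random.rand(17, 3)   # placeholder coordinates
confidence = np.random.rand(17)     # per-joint confidence in [0, 1]

# Left elbow angle EL from joints #11 (shoulder), #12 (elbow), #13 (wrist).
el = joint_angle(keypoints[11], keypoints[12], keypoints[13])

# Overall detection confidence as the average of per-joint confidences;
# the 0.5 threshold for discarding noisy detections is an assumption.
if confidence.mean() >= 0.5:
    print(f"EL = {el:.1f} deg")
```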
To compute the ergonomic risk value, the threshold values of the joint angles of the skeleton must be defined. These thresholds are explicit for some joint angles in the RULA method (e.g. elbows and neck), but not for others. Thus, to define the missing threshold values, the approach of [14,38] has been used. The results are shown in Table 2.
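The thresholds of Table 2 can then be applied as a simple banding step. The sketch below hard-codes a few of the angle bands; the dictionary keys and the treatment of the 40°-45° gap in the elbow row of Table 2 are our assumptions:

```python
# Low/medium upper bounds (degrees) for a subset of the indicators in Table 2.
# Table 2 lists (15-40) as medium and (>45) as high for the elbow; here we
# assume values above 40 already count as "high".
THRESHOLDS = {
    "trunk_bending": (15.0, 30.0),
    "elbow":         (15.0, 40.0),
    "shoulder":      (45.0, 90.0),
    "neck":          (10.0, 20.0),
    "spine":         (20.0, 60.0),
}

def score_band(indicator: str, angle_deg: float) -> str:
    """Classify a measured joint angle into the Low/Medium/High band of Table 2."""
    low_max, med_max = THRESHOLDS[indicator]
    if angle_deg <= low_max:
        return "low"
    return "medium" if angle_deg <= med_max else "high"

print(score_band("neck", 26.2))  # "high", as for the EO1 neck angle in Table 3
```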
Step 3. The total criticality indices
This portion of the research explores the use of a FIE to calculate the total criticality indices, considering several ergonomic indicators.
From the literature, fuzzy logic allows simulating complicated processes and addressing problems with qualitative, vague, or uncertain information [40]. Recently, there have been several applications of this methodology in the field of safety and risk analysis, such as system reliability and risk assessment [41,42,43] and human reliability analysis [44,45,46,47]. [48] use this methodology to assess the risk of human error. [49] propose a framework based on fuzzy logic to address the inaccuracy of input data in ergonomic evaluation due to human subjectivity in field observation. A further ergonomic assessment based on fuzzy logic is proposed by [50], with the aim of assessing and defining the level of risk for the manual handling of loads and the severity of the impact on workers' health. Therefore, in this study we use this methodology to determine the total criticality index.
A fuzzy system consists of four basic units: a knowledge base and three computational units (fuzzification, inference, and defuzzification).
  • The knowledge base contains all information about a system and allows the other units to process input data and produce output. This information can be divided into i) the database and ii) the rule base. The first contains the descriptions of all variables, including membership functions, while the second contains the inference rules.
  • Since the input data is almost always crisp and the fuzzy system works on fuzzy sets, a conversion is required to translate standard numeric data into fuzzy data. The operation that implements this transformation is called fuzzification. It is conducted using the membership functions of the variables being processed. To fuzzify an input value, the membership degree is set for each linguistic term of the variable.
  • The phase in which the resulting fuzzy values are converted into usable numbers is called defuzzification. This phase starts from the particular fuzzy set obtained by inference, which may often be irregularly shaped due to the union of the results of various rules [51], and seeks a single crisp value that best represents this set. The resulting values represent the final output of the system and are used as control actions or decision parameters.
The fuzzy engine (FE) has been implemented with the Fuzzy Logic Toolbox of MATLAB. The FE processes five variables, according to [39] and Table 2. These variables measure the postural stress of the upper limb, in particular of the elbow, shoulder, neck and trunk, while a further index refers to the high repetition of the same movements.
After the identification of the EOi, the human pose for each EOi is analysed and the joint angles are automatically computed, considering the RULA method and the data of Table 2. These joint angles are the input values for the criticality index calculation. The ergonomic indicators in this study have been evaluated through VideoPose3D, which uses temporal convolutions over 2D keypoint trajectories to estimate the 3D keypoint coordinates. The main advantage of this phase is that, unlike the RULA or REBA methods, no training is required to obtain the final result, because all calculation steps are performed by the FIE.
The fuzzy interface translates numerical values into linguistic values that are associated with fuzzy sets.
For each ergonomic input parameter, a membership function with five labels is defined [39]. Thus, the input domain is partitioned into regions where crisp values can be associated with a fuzzy number using the membership function.
This fuzzy number is expressed using three triangular central membership functions and two trapezoidal functions at the domain's boundaries. The membership functions are symmetrical with respect to the value 0.5 [52], and they describe how each variable belongs to the various fuzzy sets as its membership degree changes.
Since the input values have different ranges, they are normalized to the maximum value of the [0,1] range before the fuzzy conversion. The value of each ergonomic index is defined over the range [low, medium, high]. Hence, if the value of the ergonomic index is beyond the high range, the normalization routine returns the value 1. The fuzzy interface uses the knowledge base to process the input variables.
The input values are interpreted in terms of fuzzy sets (Figure 3).
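A minimal sketch of such a five-label partition is given below. The exact breakpoints are assumptions for illustration (the paper takes its membership functions from [39] and Figure 3, so this sketch will not reproduce the exact degrees read there):

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function with feet a, d and shoulders b, c."""
    return max(min(min((x - a) / (b - a), 1.0), (d - x) / (d - c)), 0.0)

def memberships(x):
    """Five labels on the normalized [0, 1] domain, symmetric about 0.5:
    trapezoids at the boundaries, triangles in the centre (breakpoints assumed)."""
    return {
        "VL": trapmf(x, -0.1, 0.0, 0.1, 0.3),
        "L":  trimf(x, 0.1, 0.3, 0.5),
        "M":  trimf(x, 0.3, 0.5, 0.7),
        "H":  trimf(x, 0.5, 0.7, 0.9),
        "VH": trapmf(x, 0.7, 0.9, 1.0, 1.1),
    }

print(memberships(0.702))  # the right-shoulder example discussed below
```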
For example, consider the ergonomic indicator for the right shoulder. Since the measured angle is 63.151°, the normalized value is 0.702. Through the membership function in Figure 3, the fuzzy interface associates the ergonomic indicator with the VH set with degree 0.23 and with the H set with degree 0.8.
In the second step, all the rules in which the variable is associated with VH or H are considered, as shown in the following examples:
Rule #1: if (neck is VL) and (right_elbow is VL) and (right_shoulder is VH) and (spine is VL) and (repetition of the same movements is VL) then (IcTOT is L)
Rule #2: if (neck is VH) and (right_elbow is VH) and (right_shoulder is H) and (spine is VH) and (repetition of the same movements is VH) then (IcTOT is VH)
Considering rule #2, the input parameters are:
Ic neck = 1
Ic right_elbow = 1
Ic spine = 1
Ic repetition of the same movements = 1
Ic right_shoulder = 0.702
Based on the output membership function, a numerical value is obtained for each index by reading the value on the x-axis corresponding to the assigned membership degree and label.
To get a single IcTOT output value, a weighted average is performed with respect to the membership degrees:
IcTOT = (1·1 + 1·1 + 1·1 + 1·1 + 0.702·0.8) / (1 + 1 + 1 + 1 + 0.8) = 0.95
The rules are built to express in linguistic terms the requirement that the output be strongly flagged if even just one domain is critical. The defuzzification interface translates the result of the inference process, expressed in terms of degrees of membership of a fuzzy set, into numeric form.
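Restating the worked example in code (a sketch of the weighted-average aggregation only, not of the full MATLAB inference engine used in the paper):

```python
# Weighted average of the fired outputs: four indices fully in VH
# (value 1, degree 1) and the right shoulder in H (value 0.702, degree 0.8).
values  = [1.0, 1.0, 1.0, 1.0, 0.702]
degrees = [1.0, 1.0, 1.0, 1.0, 0.8]

ic_tot = sum(v * d for v, d in zip(values, degrees)) / sum(degrees)
print(round(ic_tot, 2))  # 0.95, matching the IcTOT computed above
```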
The operative fuzzy implementation consists of the following steps:
  • loading the fuzzy inference file .fis (the .fis file contains all the system settings saved through the Fuzzy Logic Toolbox) into the Matlab workspace
  • reading and normalizing the input array from the workspace (collecting the ergonomic parameters measured for each elementary operation i.e. neck bending angle and shoulder angle)
  • computing IcTOT through the fuzzy inference engine for each variable analysed.
Step 4. Criticality classes and Cobot implementation
The outputs of this step are the EOi falling into the highest criticality classes.
In this portion of the study the cobot is implemented for those operations with the greatest criticality classes.
The total criticality index associated with each criticality class is defined, and the EOi in which the workers assume a critical posture are identified. Three criticality classes are defined through triangle-shaped and/or trapezoidal-shaped membership functions with three labels, each associated with one colour: green corresponds to a low criticality index, yellow to a medium criticality index and red to a high criticality index.
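For illustration, a crisp approximation of this three-class mapping is sketched below. The cut-points 0.35 and 0.75 are our assumptions, chosen to be consistent with the classes later reported in Table 5; the paper itself defines the classes through fuzzy membership functions, not crisp cuts:

```python
def criticality_class(ic_tot: float) -> str:
    """Crisp stand-in for the three-label output partition (cut-points assumed)."""
    if ic_tot < 0.35:
        return "green"
    return "yellow" if ic_tot < 0.75 else "red"

print(criticality_class(0.33), criticality_class(0.69), criticality_class(0.77))
# green yellow red -- as for EO10, EO1 and EO12 in Table 5
```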
For the improvement of working conditions, [53,54,55] proposed human-robot collaboration, with promising results in reducing the workload and the risks associated with WMSD. [54] highlights the importance of industrial collaborative robots, known as cobots, for the reduction of ergonomic problems in the workplace deriving from physical and cognitive stress, and for the improvement of safety, productivity and quality. Thanks to close interaction between the machine and the operator, these tools allow high precision, speed and repeatability, which have a positive impact on productivity and flexibility.
Therefore, in this step, after the implementation of the cobot, the values of the criticality indices are recalculated, along with certain production parameters, including the impact on production capacity and ergonomic stress. The methodology is iterated if there are other EOi for which the cobot can be implemented. Automated ergonomic assessments in various working environments will aid in the prevention of occupational injuries. Furthermore, rather than reviewing the entire operation period, high-risk elementary operations can be identified immediately and forwarded to the inspector for further evaluation.

3. Industrial application

The test bed of the research methodology is a shop floor producing low-voltage breakers. These products (Figure 4) are composed of an electrical set (coil and electronic switch) enclosed in an iron case. The experimental campaign has been conducted on an assembly activity for trip coil production (Figure 4).
The ergonomic assessment is conducted considering the following criticality indices, as ergonomic parameters in input:
  • trunk bending angle
  • neck bending/rotation angle
  • left or right elbow angle
  • spine angle
  • number of repetitions of the same movements
The entire working activity of the assembly process is described in Figure 5, where the frames of the EOi of the worker are shown. As a first step, the analysis regarded the set-up activity, which is divided into fourteen EOi. This section of the research provides an accurate ergonomic assessment method by introducing posture estimation algorithms based on bi-dimensional (2D) video. To monitor the postures of the employees, we have developed an ergonomic evaluation algorithm able to output the values of the joint angles.
Starting from the video of the whole working cycle, each EOi has been isolated by identifying the portion of the video containing a single repetition of the operation. This process generates a set of frames (images) for each elementary operation of the work cycle. Each frame set has been used as input for the detection of the 3D human pose, computed for each EOi. Figure 6 shows an example of human pose computation, including the predicted 2D keypoints for EO12. The same Figure 6 shows the key frames of the acquired images; from these frames, the joint angles are obtained from the human skeleton reconstructed from the keypoint predictions. These predictions are the coordinates (see Figure 6) of the 17 keypoints. Starting from these coordinates, an algorithm processes the predictions for each frame of each EOi. The outputs of this phase are the joint angles computed in the worst case for each EOi. Table 3 reports the values arising from the ergonomic assessment for each EOi.
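A sketch of this worst-case extraction, assuming the angle-computation step yields an (n_frames, n_indicators) array per elementary operation (the array name and shape are our assumptions):

```python
import numpy as np

# Per-frame joint angles for one EOi, e.g. 120 frames x 5 angle indicators
# (placeholder values; in practice these come from the 3D pose estimates).
angles_per_frame = np.random.rand(120, 5) * 120.0

# Worst case per indicator across the frame set, as reported in Table 3.
worst_case = angles_per_frame.max(axis=0)
```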
Before running the fuzzy inference engine, it is necessary to normalize the inputs with respect to the maximum values of their respective ranges. Table 4 shows the normalized values of the five ergonomic parameters.
As an example, let's consider the elementary operation EO1, for which the input parameters of the fuzzy inference engine are the following:
Right elbow = 0.571
Right shoulder = 0.761
Bend neck = 0.655
Spine = 0.137
Repetition of the same movements = 0.630
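These values can be reproduced from the raw angles of Table 3. The range maxima below (180°, 90°, 40°, 60° and 420 repetitions) are inferred by us from the ratios between Tables 3 and 4; they are not stated explicitly in the paper:

```python
# Inferred range maxima: e.g. EO1 right elbow, 102.695 deg / 0.571 ~ 180 deg.
RANGE_MAX = {"right_elbow": 180.0, "right_shoulder": 90.0,
             "bend_neck": 40.0, "spine": 60.0, "repetition": 420.0}

def normalize(raw: dict) -> dict:
    """Scale each indicator to [0, 1], saturating at the range maximum."""
    return {k: round(min(v / RANGE_MAX[k], 1.0), 3) for k, v in raw.items()}

eo1 = {"right_elbow": 102.695, "right_shoulder": 68.531,
       "bend_neck": 26.200, "spine": 8.224, "repetition": 264.6}
print(normalize(eo1))
# {'right_elbow': 0.571, 'right_shoulder': 0.761, 'bend_neck': 0.655,
#  'spine': 0.137, 'repetition': 0.63}
```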
According to the membership functions, the fuzzy system assigns a label and a membership degree to each input. All of the rules in which the variables have these labels are considered by the decisional logic.
The system infers a numeric value for each index based on the output membership function by reading the value on the x-axis corresponding to the assigned membership degree and label.
In the example of Figure 7, only rule #21 is completely satisfied. Furthermore, some of the rules are satisfied at least partially; this occurs when the input's red line intersects the trapezoidal function along one of its slopes. The individual outputs must then be combined to produce a single value.
The centroid calculation method is used for the defuzzification step.
The methodology makes it possible to appraise the highest ergonomic risk value for each EOi. Table 5 shows the IcTOT postural criticality index, calculated through the integration of computer vision and the FIE, for all the EOi, with the respective criticality class. With the results of the system, the ergonomics manager can define the high-priority corrective actions required for EO12 and EO14.
To reduce the ergonomic risk and to move towards intelligent production and Industry 4.0, in recent years cobots have been spreading, in particular for manual operations in production. In production systems that implement cobots, human operators and robots collaborate safely on different work activities. Hence, the dexterity and cognitive abilities of human operators and the repeatability and payload capacity of robots can be effectively integrated to achieve high productivity, flexibility, lower ergonomic risk, greater safety and lower costs.
Regarding EO12, we can act on the machine by replacing it with an automatically operated press, to reduce the criticality index and improve the ergonomic level. Then, EO14 has been improved with the implementation of a collaborative robot that helps the operator position the assembled component in the box, as shown in Table 6.
To verify the benefits of implementing the cobot, the new values of the total criticality index were calculated. Table 7 summarizes the results obtained: the cobot reduces the criticality class of the EO14 operation to the green class, for which immediate actions are not necessary.

4. Discussion

After the determination of the criticality class for each elementary operation, some production parameters have been considered within a work shift. These parameters also allow measuring the impact of the implementation of the cobot on the stress deriving from the EOi conducted by the operator. The overall ergonomic score (OES), considering all the EOi, has been appraised through the approach of [39], as follows:
OES = (Σi PESi) / (3 · NUM_DOM · NUM_ELEM_OPS)
Where:
  • PESi is the score obtained for the single elementary operation i (summed over all EOi)
  • NUM_DOM is the number of selected ergonomic domains
  • NUM_ELEM_OPS is the number of EOi.
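A direct transcription of this formula is sketched below, with placeholder per-operation scores; reading the factor 3 as the maximum per-domain score is our interpretation of [39]:

```python
# OES = sum_i PES_i / (3 * NUM_DOM * NUM_ELEM_OPS)
pes = [2.1, 1.8, 2.4, 2.0]      # placeholder per-operation scores, not paper data
NUM_DOM = 5                     # five ergonomic domains used in this study
NUM_ELEM_OPS = len(pes)

oes = sum(pes) / (3 * NUM_DOM * NUM_ELEM_OPS)
print(round(oes, 3))            # 0.138 for the placeholder scores
```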
Figure 8 reports the values of OES normalized with respect to its maximum value, before and after the implementation of the cobot.
Another interesting result regards the variation in production capacity for each work shift. The implementation of the cobot allows the operator to start assembling a new component while the cobot performs the positioning of the component in the "finished products" box. Therefore, Figure 9 reports the production capacity trend before and after the cobot implementation.
From Figure 8 and Figure 9 it is possible to see that the cobot affects both the total ergonomic score and the production capacity. The OES is reduced by 13%, while production capacity increases from 673 to 900 pieces produced during a work shift. In addition, Figure 9 shows the difference in production for each work slot. At the beginning of the work shift, the cobot provides a strong contribution, but as the operator's stress increases, the difference in pieces produced drops considerably. The production difference in the first slot is 34.3%, while in the last work slot it falls to 20%. This difference occurs because EO12 has not yet been modified and the operator is subjected to a stress that decreases the production capacity over the time slots. Therefore, future work may consider corrective actions on EO12, such as a semi-automatic press, to further reduce the OES and increase production capacity.
Unlike previous vision-based methods [21], the method can provide ergonomic assessments at the joint level, which can result in more accurate assessments that are more useful for specific corrective actions. The reason lies in the new method developed for the analysis of postural data. This study used a temporal convolutional model that takes 2D keypoint sequences as input and produces 3D human pose estimates as output. This convolutional model allows the parallel processing of multiple frames, unlike recurrent networks. The benefits of this work include: i) the collection of posture data and ergonomic risk assessment in a non-intrusive way, without interfering with the normal activities of the workers; ii) the provision of 3D posture data instead of 2D posture data, so that joint angles can be measured accurately; and iii) the adaptability of the model to different 2D keypoint detectors and the effective management of large temporal contexts through dilated convolutions.

5. Conclusions and limitations

The topic of human-robot collaboration and its possibilities in industrial scenarios is of great interest and at the centre of the debate within the scientific community. This is mainly due to the fact that collaborative robotics is unanimously considered one of the most promising technologies for manufacturing companies that aspire to more flexible, adaptive and efficient production systems. In this research, the use of a neural model integrated with a cobot has been explored to appraise 3D human poses and to optimize them. The architecture created is based on temporal information with dilated convolutions over trajectories of 2D keypoints for the automatic calculation of joint angles. The ergonomic analysis has produced a dataset capable of providing objective values for postures, with the acquisition and processing of data completely independent from the evaluation phase. To reduce human stress and improve some production parameters, the collaborative robot was implemented, which can be considered one of the main enabling technologies of adaptive systems based on flexibility, reconfigurability and production efficiency. The main contribution of this work lies in the calculation of the joint angles based on 3D human pose estimation in the workplace. A second contribution is the IcTOT calculation, which considers various ergonomic indicators through a fuzzy inference system. The IcTOT identifies the most critical elementary operations on which to implement the collaborative robot, so as to evaluate ergonomic stress and production capacity. These results go beyond the studies of [1,36,56], in which posture scores are appraised by detecting 2D coordinates of body joints. The automation of observation-based techniques for posture assessment using Computer Vision has been able to eliminate errors due to human subjectivity and to reduce the time needed for posture evaluation in workplaces.
Our research results have been validated by comparing them with the risk classes computed with the classical method, based on tabular values and nominal angle values for each elementary operation.
Although this research has shown significant results, it has certain limitations.
The first limitation regards the workplace analysis when there are interactions between several human operators, or if there are objects that may partially occlude the area to be analysed.
The second limitation regards the analysis of workplaces where several people are present, while the model developed in this study estimates the human pose of a single operator. Future work may include 3D pose estimation for multiple people.
The third limitation stems from the fact that it is first necessary to record the video of all operations and then feed it as input to our model. Future research could process video sequences in real-time systems, with an automatic alerting system for the most critical operations.
In addition, our research methodology could be tested with different personal protective equipment worn by the operator, or under different environmental conditions, to evaluate whether these factors can influence the results obtained.

Author Contributions

Conceptualization, M.M., C.R., and V.B.; methodology, M.M., C.R., F.G. and V.B.; software, F.G. and V.B.; validation, M.M., F.G. and V.B.; formal analysis, M.M., F.G. and V.B.; investigation, M.M. and C.R.; resources, M.M., F.G. and V.B.; data curation, M.M., C.R., F.G. and V.B.; writing—original draft preparation, M.M., F.G. and V.B.; writing—review and editing, M.S. and L.T.; visualization, M.S., L.T., M.M., F.G. and V.B.; supervision, M.S. and L.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Massiris Fernández, M., Fernández, J. Á., Bajo, J. M., & Delrieux, C. A. Ergonomic risk assessment based on computer vision and machine learning. Computers & Industrial Engineering, 2020, 149, 106816. [CrossRef]
  2. Middlesworth, M. How to Prevent Sprains and Strains in the Workplace, 2015.
  3. Van Der Beek, A. J., Dennerlein, J. T., Huysmans, M. A., Mathiassen, S. E., Burdorf, A., Van Mechelen, W., & Coenen, P. A research framework for the development and implementation of interventions preventing work-related musculoskeletal disorders. Scandinavian Journal of Work, Environment & Health, 2017, 526-539. [CrossRef]
  4. Ng, A., Hayes, M. J., & Polster, A. Musculoskeletal disorders and working posture among dental and oral health students. In Healthcare, MDPI, 2016, 4 (1), p. 13. [CrossRef]
  5. Luttmann, A., Jager, M., Griefahn, B., Caffier, G., Liebers, F. Preventing musculoskeletal disorders in the workplace, World Health Organization, 2003.
  6. Roman-Liu, D. Comparison of concepts in easy-to-use methods for MSD risk assessment. Applied ergonomics, 2014, 45(3), 420-427. [CrossRef]
  7. Ha, C., Roquelaure, Y., Leclerc, A., Touranchet, A., Goldberg, M., & Imbernon, E. The French musculoskeletal disorders surveillance program: pays de la Loire network. Occupational and environmental medicine, 2009, 66(7), 471-479. [CrossRef]
  8. Jeong, S. O., & Kook, J. CREBAS: Computer-Based REBA Evaluation System for Wood Manufacturers Using MediaPipe. Applied Sciences, 2023, 13(2), 938. [CrossRef]
  9. Cao, Z., Simon, T., Wei, S. E., & Sheikh, Y. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Hawaii, July 21-26, 2017, pp. 7291-7299.
  10. Kee, D. An empirical comparison of OWAS, RULA and REBA based on self-reported discomfort. International journal of occupational safety and ergonomics, 2020, 26(2), 285-295.
  11. Kong, Y. K., Lee, S. Y., Lee, K. S., & Kim, D. M. Comparisons of ergonomic evaluation tools (ALLA, RULA, REBA and OWAS) for farm work. International journal of occupational safety and ergonomics, 2018, 24(2), 218-223.
  12. Li, X., Han, S., Gül, M., Al-Hussein, M., & El-Rich, M. 3D visualization-based ergonomic risk assessment and work modification framework and its validation for a lifting task. Journal of Construction Engineering and Management, 2018, 144(1), 04017093. [CrossRef]
  13. Andrews, D. M., Fiedler, K. M., Weir, P. L., & Callaghan, J. P. The effect of posture category salience on decision times and errors when using observation-based posture assessment methods. Ergonomics, 2012, 55(12), 1548-1558. [CrossRef]
  14. Sasikumar, V. A model for predicting the risk of musculoskeletal disorders among computer professionals. International Journal of Occupational Safety and Ergonomics, 2018, 26(2), 384-396. [CrossRef]
  15. Plantard, P., Shum, H. P., Le Pierres, A. S., & Multon, F. Validation of an ergonomic assessment method using Kinect data in real workplace conditions. Applied ergonomics, 2017, 65, 562-569. [CrossRef]
  16. Nath, N. D., Akhavian, R., & Behzadan, A. H. Ergonomic analysis of construction worker's body postures using wearable mobile sensors. Applied ergonomics, 2017, 62, 107-117.
  17. Jayaram, U., Jayaram, S., Shaikh, I., Kim, Y., & Palmer, C. Introducing quantitative analysis methods into virtual environments for real-time and continuous ergonomic evaluations. Computers in industry, 2006, 57(3), 283-296. [CrossRef]
  18. Zhang, H., Yan, X., & Li, H. Ergonomic posture recognition using 3D view-invariant features from single ordinary camera. Automation in Construction, 2018, 94, 1-10. [CrossRef]
  19. Yu, Y., Yang, X., Li, H., Luo, X., Guo, H., & Fang, Q. Joint-level vision-based ergonomic assessment tool for construction workers. Journal of construction engineering and management, 2019, 145(5), 04019025. [CrossRef]
  20. Vignais, N., Bernard, F., Touvenot, G., & Sagot, J. C. Physical risk factors identification based on body sensor network combined to videotaping. Applied ergonomics, 2017, 65, 410-417. [CrossRef]
  21. Yan, X., Li, H., Wang, C., Seo, J., Zhang, H., & Wang, H. Development of ergonomic posture recognition technique based on 2D ordinary camera for construction hazard prevention through view-invariant features in 2D skeleton motion. Advanced Engineering Informatics, 2017, 34, 152-163. [CrossRef]
  22. Battini, D., Persona, A., & Sgarbossa, F. Innovative real-time system to integrate ergonomic evaluations into warehouse design and management. Computers & Industrial Engineering, 2014, 77, 1-10. [CrossRef]
  23. Xu, X., Robertson, M., Chen, K. B., Lin, J. H., & McGorry, R. W. Using the Microsoft Kinect™ to assess 3-D shoulder kinematics during computer use. Applied ergonomics, 2017, 65, 418-423. [CrossRef]
  24. Fang, W., Love, P. E., Luo, H., & Ding, L. Computer vision for behaviour-based safety in construction: A review and future directions. Advanced Engineering Informatics, 2020, 43, 100980. [CrossRef]
  25. Liu, M., Han, S., & Lee, S. Tracking-based 3D human skeleton extraction from stereo video camera toward an on-site safety and ergonomic analysis. Construction Innovation, 2016, 16(3), 348-367.
  26. Seo, J., Alwasel, A., Lee, S., Abdel-Rahman, E. M., & Haas, C. A comparative study of in-field motion capture approaches for body kinematics measurement in construction. Robotica, 2019, 37(5), 928-946. [CrossRef]
  27. Li, L., Martin, T., & Xu, X. A novel vision-based real-time method for evaluating postural risk factors associated with musculoskeletal disorders. Applied Ergonomics, 2020, 87, 103138. [CrossRef]
  28. Peppoloni, L., Filippeschi, A., Ruffaldi, E., & Avizzano, C. A. A novel wearable system for the online assessment of risk for biomechanical load in repetitive efforts. International Journal of Industrial Ergonomics, 2016, 52, 1-11. [CrossRef]
  29. Clark, R. A., Pua, Y. H., Fortin, K., Ritchie, C., Webster, K. E., Denehy, L., & Bryant, A. L. Validity of the Microsoft Kinect for assessment of postural control. Gait & posture, 2012, 36(3), 372-377. [CrossRef]
  30. Trumble, M., Gilbert, A., Hilton, A., & Collomosse, J. Deep autoencoder for combined human pose estimation and body model upscaling. In Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 784-800.
  31. Von Marcard, T., Henschel, R., Black, M. J., Rosenhahn, B., & Pons-Moll, G. Recovering accurate 3d human pose in the wild using imus and a moving camera. In Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 601-617.
  32. Pavllo, D., Feichtenhofer, C., Grangier, D., & Auli, M. 3d human pose estimation in video with temporal convolutions and semi-supervised training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 7753-7762.
  33. El Makrini, I., Merckaert, K., De Winter, J., Lefeber, D., & Vanderborght, B. Task allocation for improved ergonomics in Human-Robot Collaborative Assembly. Interaction Studies, 2019, 20(1), 102-133. [CrossRef]
  34. Parra, P. S., Calleros, O. L., & Ramirez-Serrano, A. Human-robot collaboration systems: Components and applications. In Int. Conf. Control. Dyn. Syst. Robot, 2020, Vol. 150, pp. 1-9. [CrossRef]
  35. Battini, D., Faccio, M., Persona, A., & Sgarbossa, F. New methodological framework to improve productivity and ergonomics in assembly system design. International Journal of industrial ergonomics, 2011, 41(1), 30-42. [CrossRef]
  36. Nayak, G. K., & Kim, E. Development of a fully automated RULA assessment system based on computer vision. International Journal of Industrial Ergonomics, 2021, 86, 103218. [CrossRef]
  37. Shuval, K., & Donchin, M. Prevalence of upper extremity musculoskeletal symptoms and ergonomic risk factors at a Hi-Tech company in Israel. International Journal of Industrial Ergonomics, 2005, 35(6), 569-581. [CrossRef]
  38. Das, B., Shikdar, A. A., & Winters, T. Workstation redesign for a repetitive drill press operation: a combined work design and ergonomics approach. Human Factors and Ergonomics in Manufacturing & Service Industries, 2007, 17(4), 395-410. [CrossRef]
  39. Savino, M. M., Battini, D., & Riccio, C. Visual management and artificial intelligence integrated in a new fuzzy-based full body postural assessment. Computers & Industrial Engineering, 2017, 111, 596-608. [CrossRef]
  40. Klir, J., Yuan, B. Fuzzy Sets and Fuzzy Logic: Theory and Applications. 1995, Prentice Hall, New Jersey.
  41. Li, H.X., Al-Hussein, M., Lei, Z., and Ajweh, Z. Risk identification and assessment of modular construction utilizing fuzzy analytic hierarchy process (AHP) and simulation. Canadian Journal of Civil Engineering, 2013, 40(12): 1184-1195. [CrossRef]
  42. Markowski, A.S., Mannan, M.S., Bigoszewska, A., Fuzzy logic for process safety analysis. Journal of Loss Prevention in the Process Industries, 2009, 22 (6), 695–702.
  43. Nasirzadeh, F., Afshar, A., Khanzadi, M., and Howick, S. Integrating system dynamics and fuzzy logic modelling for construction risk management. Construction Management and Economics, 2008, 26(11): 1197-1212. [CrossRef]
  44. Zio, E., Baraldi, P., Librizzi, M. A fuzzy set-based approach for modeling dependence among human errors. Fuzzy Sets and Systems, 2008, 160, 1947-1964. [CrossRef]
  45. Marseguerra, M., Zio, E., & Librizzi, M. Human reliability analysis by fuzzy "CREAM". Risk Analysis, 2007, 27(1), 137-154.
  46. Kim, B. J., & Bishu, R. R. Uncertainty of human error and fuzzy approach to human reliability analysis. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 2006, 14(01), 111-129. [CrossRef]
  47. Konstandinidou, M., Nivolianitou, Z., Kiranoudis, C., Markatos, N. A fuzzy modeling application of CREAM methodology for human reliability analysis. Reliability Engineering and System Safety 2006, 91, 706–716. [CrossRef]
  48. Li, P. C., Chen, G. H., Dai, L. C., & Li, Z. Fuzzy logic-based approach for identifying the risk importance of human error. Safety science, 2010, 48(7), 902-913. [CrossRef]
  49. Golabchi, A., Han, S., & Fayek, A. R. A fuzzy logic approach to posture-based ergonomic analysis for field observation and assessment of construction manual operations. Canadian Journal of Civil Engineering, 2016, 43(4), 294-303. [CrossRef]
  50. Contreras-Valenzuela, M. R., Seuret-Jiménez, D., Hdz-Jasso, A. M., León Hernández, V. A., Abundes-Recilla, A. N., & Trutié-Carrero, E. Design of a Fuzzy Logic Evaluation to Determine the Ergonomic Risk Level of Manual Material Handling Tasks. International Journal of Environmental Research and Public Health, 2022, 19(11), 6511. [CrossRef]
  51. Campanella, P. Neuro-Fuzzy Learning in Context Educative. In 2021 19th International Conference on Emerging eLearning Technologies and Applications (ICETA), 2021, pp. 58-69. IEEE.
  52. Bukowski, L., & Feliks, J. Application of fuzzy sets in evaluation of failure likelihood. In 18th International Conference on Systems Engineering (ICSEng'05), 2005, pp. 170-175. IEEE.
  53. Villani, V., Sabattini, L., Czerniaki, J. N., Mertens, A., Vogel-Heuser, B., & Fantuzzi, C. Towards modern inclusive factories: A methodology for the development of smart adaptive human-machine interfaces. In 2017 22nd IEEE international conference on emerging technologies and factory automation (ETFA), 2017, pp. 1-7. IEEE.
  54. Cherubini, A., Passama, R., Crosnier, A., Lasnier, A., & Fraisse, P. Collaborative manufacturing with physical human–robot interaction. Robotics and Computer-Integrated Manufacturing, 2016, 40, 1-13. [CrossRef]
  55. Tsarouchi, P., Makris, S., & Chryssolouris, G. Human–robot interaction review and challenges on task planning and programming. International Journal of Computer Integrated Manufacturing, 2016, 29(8), 916-931. [CrossRef]
  56. Li, L., & Xu, X. A deep learning-based RULA method for working posture assessment. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting 2019, Vol. 63, No. 1, pp. 1090-1094. Sage CA: Los Angeles, CA: SAGE Publications.
Figure 1. Research Methodology.
Figure 2. Skeleton and body joints.
Figure 3. Input membership function of right shoulder.
Figure 4. Trip coil assembly.
Figure 5. Frames of Elementary Operations.
Figure 6. Video frames with 2D pose overlay and 3D reconstruction.
Figure 7. Rule Viewer of the MATLAB Fuzzy Logic Toolbox for EO14.
Figure 8. OES before and after cobot implementation.
Figure 9. Production Capacity for one work shift before and after cobot implementation.
Table 1. Joint angles from the 17 skeleton keypoints.
Angle name Acronym Involved joints
Left elbow EL ∠13, 12, 11
Right elbow ER ∠14, 15, 16
Left shoulder SL ∠12, 11, 08
Right shoulder SR ∠15, 16, 08
Left shoulder 2 SL2 ∠11, 08, 07
Right shoulder 2 SR2 ∠14, 08, 07
Left knee KL ∠04, 05, 06
Right knee KR ∠01, 02, 03
Neck twisting NT ∠10, 09, 08
Neck bending left NB ∠09, 08, 11
Neck bending right NBR ∠09, 08, 14
Neck flexion NF ∠09, 08, 00
Trunk twisting right TT ∠11, 00, 04
Trunk twisting left TTL ∠14, 00, 04
Trunk Bending TB ∠04, 00, 07
Table 2. The ergonomic domains and criticality index.
Domain group Ergonomic indicator Score
Low Medium High
Upper Limb (UL) Trunk bending angle (degree) (0° - 15°) (15°- 30°) (> 30°)
Left or Right elbow angle (degree) (0° - 15°) (15°- 40°) (> 45°)
Left or Right shoulder (degree) (20° - 45°) (45° – 90°) (> 90°)
Neck bending or rotation angle (degree) (0° - 10°) (10°- 20°) (> 20°)
Forearm rotation angle (degree) (0° - 90°) (> 90°) (> 90° and crossed)
Spine (degree) (0° - 20°) (20° – 60°) (> 60°)
Wrist bending angle (degree) The value is calculated considering the ulnar or radial deviation (inward or outward rotation) according to RULA and Health Safety Executive guidelines (2014) (0°) (+15°; -15°) (> 15°)
Stereotypy, loads, typical actions (TA) Arm position for material withdrawal (Without extending an arm) (Extending an arm) (Two hands needed)
Trunk Rotation (degree) (0° - 45°) (45°- 90°) (> 90°)
Repetition of the same movements (RM) The parameter refers to the high repetition of the same movements (From 25% to 50% of the cycle time) (From 51% to 80% of the cycle time) (>80%)
Table 3. Input values of step 3.
Right elbow Left elbow Right shoulder Left shoulder Bend neck Spine Repetition of the same movements
EO1 102.695 97.214 68.531 69.910 26.200 8.224 264.6
EO2 107.364 90.895 68.257 69.259 29.533 7.857 260.4
EO3 112.442 106.222 68.574 69.125 30.267 6.867 281.4
EO4 110.278 102.848 64.148 58.482 27.388 3.154 327.6
EO5 111.060 97.371 63.036 60.891 24.281 2.560 218.4
EO6 111.554 90.522 63.151 59.894 25.081 4.731 243.6
EO7 104.628 96.774 63.587 61.163 27.969 5.033 327.6
EO8 104.122 99.810 63.661 61.779 27.365 6.354 277.2
EO9 109.308 113.187 64.225 60.825 22.259 2.860 294.0
EO10 84.007 95.473 67.993 59.761 15.216 5.207 259.4
EO11 90.000 100.099 69.199 14.468 14.468 4.822 278.8
EO12 125.132 92.035 83.918 64.064 15.285 9.599 336.0
EO13 108.646 92.460 68.662 61.729 13.165 6.608 285.6
EO14 38.817 77.647 63.000 69.242 21.789 26.694 315.0
Table 4. Results of ergonomics assessment.
Elementary operation Ergonomic parameters (normalized)
Right elbow Right shoulder Bend neck Spine Repetition of the same movements
EO1 0.571 0.761 0.655 0.137 0.630
EO2 0.596 0.758 0.738 0.131 0.620
EO3 0.625 0.762 0.757 0.114 0.670
EO4 0.613 0.713 0.685 0.052 0.780
EO5 0.617 0.700 0.607 0.0427 0.520
EO6 0.620 0.702 0.627 0.0789 0.580
EO7 0.581 0.707 0.699 0.0839 0.780
EO8 0.578 0.707 0.684 0.1059 0.660
EO9 0.607 0.714 0.556 0.0477 0.700
EO10 0.467 0.755 0.380 0.0868 0.618
EO11 0.500 0.769 0.362 0.0804 0.664
EO12 0.695 0.932 0.382 0.1600 0.800
EO13 0.604 0.763 0.329 0.1101 0.680
EO14 0.882 0.700 0.595 0.4449 0.750
Table 5. IcTOT values.
Elementary operation IcTOT Criticality class
EO1 0.69 Yellow
EO2 0.69 Yellow
EO3 0.70 Yellow
EO4 0.65 Yellow
EO5 0.41 Yellow
EO6 0.44 Yellow
EO7 0.70 Yellow
EO8 0.63 Yellow
EO9 0.60 Yellow
EO10 0.33 Green
EO11 0.39 Yellow
EO12 0.77 Red
EO13 0.44 Yellow
EO14 0.833 Red
Table 6. Cobot implementation.
Elementary operation Corrective action proposed
EO14 Implementation of a collaborative robot
Table 7. New criticality indices after corrective action.
Elementary operation IcTOT Criticality class
EO1 0.69 Yellow
EO2 0.69 Yellow
EO3 0.70 Yellow
EO4 0.65 Yellow
EO5 0.41 Yellow
EO6 0.44 Yellow
EO7 0.70 Yellow
EO8 0.63 Yellow
EO9 0.60 Yellow
EO10 0.33 Green
EO11 0.39 Yellow
EO12 0.77 Red
EO13 0.44 Yellow
EO14 0 Green