A new hybrid dynamic FMECA with decision-making methodology: a case study in an agri-food company

The Failure Mode and Effect Analysis (FMEA) is often used to improve a system's reliability. This paper proposes a new approach that aims to overcome the most critical defects of the traditional FMEA. This new methodology combines the Entropy and BWM methods with EDAS and System Dynamics in the FMECA: the EN-B-ED Dynamic FMECA. The main innovative points of the proposed work are: the presence of a new factor (Cost) to take the economic aspect into consideration; the evaluation of the four factors through both an objective method (the Entropy method) and a subjective method (BWM); the ranking method used (the EDAS method), much more accurate than the RPN; and the development of a dynamic criticality analysis to take the dynamic aspect of the system into consideration. This work aims to give manufacturing companies an easy and replicable method to analyze the possible failure modes and prevent faults.

The most cited shortcomings concern the absence of weights on the O, S, and D factors, the absence of economic factors, the absence of a scientific basis for the RPN calculation formula, and the many duplicates among RPN results.
In the literature, FMEA has often been supported with Multi-Criteria Decision Methods (MCDM) to overcome these shortcomings.
In this paper, an innovative method called "EN-B-ED Dynamic FMECA" is presented.
The innovative points of the presented work are:
• the addition of a factor related to the cost of the failure;
• the combination of two multi-criteria decision methods (the Entropy method and the Best Worst Method) to calculate the weights of the criteria, in order to take into consideration both objective and subjective data;
• the addition of System Dynamics, which gives the model dynamism and evaluates the system as a complex set of elements rather than, as in the traditional FMEA, as a set of distinct and separate components.
A case study carried out on a machine of an important Italian company in the agri-food sector is presented to evaluate the proposed method's robustness.
The paper is organized as follows. First, a brief report on the state of the art of the FMEA is proposed, with particular attention to the developments proposed in conjunction with MCDM. Then, the problem is defined generically and the methodology is proposed. The case study is then described. Finally, the last section focuses on conclusions and proposals for future work developments.

Literature review
A complete FMEA analysis consists of 4 steps (Stamatis, 2003):
• Identify all failure modes that have occurred or could potentially occur in a system.
• Identify the causes and effects of the faults.
• Rank the identified failure modes through the RPN (Risk Priority Number).
• Take corrective action.
In the third step, the RPN makes the FMEA a quantitative method.
In order to carry out a correct FMEA analysis and identify all possible failure modes, a diversified team of people with different backgrounds (e.g., mechanical design, software, production, maintenance) is usually involved in doing this, as this increases the probability that all failures will be identified and the effects correctly estimated (Cristea & Constantinescu, 2017).
The introduction of Criticality Analysis (CA) extends the FMEA into the FMECA. This CA analysis in the original methodology is based on the calculation of the RPN risk priority index. The RPN index is the product of three factors (Ciani et al., 2019):
• Occurrence (O) is the probability that a failure mode will occur. It is, therefore, strongly linked to the failure rate of the component.
• Severity (S) is related to the effect/impact of the failure mode.
• Detectability (D) indicates the ability to diagnose the failure mode before its effects occur on the system.
The conventional method of RPN calculation has been widely analyzed in literature for several reasons.
The most significant shortcomings identified in the RPN method are:
1. The factors O, S, and D have the same importance, i.e., the same weight (Carmignani G., 2009) (Liu et al., 2011).
2. Different values of O, S, and D can produce the same RPN value even if they hide a different risk.
3. It is difficult to assess the three risk factors. The information present in an FMEA analysis is often uncertain and could be expressed through linguistic variables, making it difficult or even impossible to evaluate O, S, and D with certainty and directness (Xu et al., 2002).
4. The mathematical formula of the RPN calculation has no scientific basis.
5. The conversion of scores is different for the three factors; in some cases it is linear, and in others it is non-linear.
6. The RPN method completely ignores the importance of corrective actions; it is calculated only from a risk point of view (Pillay & Wang, 2003) (Carmignani G., 2009).
7. The range of numbers covered by the RPN formula is limited, with many holes not covered: in the range from 1 to 1000, only 120 numbers can be produced by the RPN.
8. The interdependencies between the various failure modes and their effects are not taken into account.
9. The mathematical form adopted by the RPN is highly sensitive to variations in the valuation of individual factors (Braglia, 2020).
• TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method: its first use was in 2003 (Braglia et al., 2003). To overcome some limitations of the conventional FMECA analysis method, the authors proposed a new way to calculate the RPN based on the TOPSIS method with fuzzy logic. The uses of the TOPSIS method in real cases are not few: we find the TOPSIS methodology applied in a food sector company to reduce and stabilize maintenance costs (Selim, Yunusoglu, & Yilmaz Balaman, 2016), and in 2017 it was used to monitor possible failures of a submarine control module (Kolios et al., 2017).
• PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) method: what distinguishes PROMETHEE from pairwise comparison methods is that there is a specific preference function for each comparison rather than a single global utility function (Lolli, Ishizaka, Gamberini, Rimini, & Messori, 2015).
• VIKOR (VIseKriterijumska Optimizacija I Kompromisno Resenje) method: it is a compromise method that has been used alongside the FMEA several times. In 2012, Liu used the method, under fuzzy logic, to rank the failure modes (Liu H.-C. et al., 2012).
• BWM (Best Worst Method): it has often been used in conjunction with other methods, such as with VIKOR (Tian, Wang, & Zhang, 2018).
• Fusion FMEA method: an information fusion FMEA method based on 2-tuple linguistic information and interval probability. The 2-tuple linguistic set theory is adopted to change the heterogeneous information into interval numbers, while the interval probability comparison method is applied to analyze failure modes. The uniqueness of this method is that it takes account of heterogeneous information rather than a single type of information (Ouyang et al., 2021).
In general, the authors focused on three main shortcomings of the traditional FMEA.
• The first is how the RPN index is calculated; many researchers have focused on determining a replicable method to identify different weights to be assigned to the various criteria.
• Secondly, many scholars have focused on finding a method that would allow the proper evaluation of alternatives in the case of linguistic variables and, therefore, in cases of uncertainty.
• Finally, in recent years, the academic world has tried to solve another critical shortcoming of the traditional FMEA: the lack of some factors, first of all the cost. Therefore, many methods involve the use of several additional factors.

Problem definition
The EN-B-ED Dynamic FMECA can be used to prevent a failure in a process or machinery. In this study, an EN-B-ED Dynamic FMECA applied to machinery is presented.
The machinery is broken down into its functional units and main components. In Table 2, a list of the nomenclature used in this methodology is presented.

Proposed methodology
Each step of the proposed method is explained in detail in this section. The first steps are the same as in a traditional FMEA analysis:
• Form a multidisciplinary team of specialists.
• Identify the critical areas of the system and analyze them.
• Fill in a table with the possible failure modes, their possible causes, and their possible effects.
• Define the criteria that best describe the risk associated with the FMs.
• Evaluate each criterion for each possible cause of failure.

Evaluation of the factors O, S, D and C
The first innovation point of EN-B-ED Dynamic FMECA analysis concerns the criteria used.
In addition to the three traditional FMEA criteria, O, S and D, a fourth one has been added, the cost C, to consider the costs arising from the fault occurrence.

This term considers:
• Costs of non-production:
C_np = (h_l · n_w · t_m) + (q_h · p · t_m) + (h_ot · n_w · t_m)
where h_l is the hourly cost of the production labor, n_w is the number of workers, t_m is the time of the maintenance intervention to restart the machine, q_h is the average quantity produced per hour, p is the average price of the finished product, and h_ot is the hourly overtime cost of the production labor. The first term takes into account the cost of the production staff unable to work during the downtime, the second term takes into account the hidden costs resulting from the lack of production, and the last term takes into account the cost to be incurred if it is decided to pay overtime to recover the lost production.
• Labour costs:
C_l = h_m · n_m · t_m
where h_m is the hourly cost of the maintenance workers, n_m is the number of maintenance workers, and t_m is the maintenance intervention time.
• Costs of spare parts used:
C_sp = Σ_{i=1..n} c_i · q_i
where c_i is the cost of the i-th spare part and q_i is the quantity of spare part i used.

C = C_np + C_l + C_sp
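The three cost components above can be sketched in code. This is a minimal illustration: all the figures and symbol names below are hypothetical, not taken from the case study.

```python
# Sketch of the cost factor C. Notation follows the text: h_l = hourly cost of
# production labour, n_w = number of production workers, t_m = maintenance
# intervention time (h), q_h = average quantity produced per hour, p = average
# price of the finished product, h_ot = hourly overtime cost of production
# labour, h_m / n_m = maintenance labour hourly cost and headcount,
# spare_parts = (unit cost, quantity) pairs. All values are illustrative.

def non_production_cost(h_l, n_w, t_m, q_h, p, h_ot):
    idle_labour = h_l * n_w * t_m    # production labour idle during downtime
    lost_output = q_h * p * t_m      # hidden cost of the missed production
    overtime = h_ot * n_w * t_m      # overtime needed to recover production
    return idle_labour + lost_output + overtime

def labour_cost(h_m, n_m, t_m):
    return h_m * n_m * t_m           # maintenance crew cost

def spare_parts_cost(spare_parts):
    return sum(c_i * q_i for c_i, q_i in spare_parts)

def total_failure_cost(h_l, n_w, t_m, q_h, p, h_ot, h_m, n_m, spare_parts):
    return (non_production_cost(h_l, n_w, t_m, q_h, p, h_ot)
            + labour_cost(h_m, n_m, t_m)
            + spare_parts_cost(spare_parts))

C = total_failure_cost(h_l=20, n_w=4, t_m=2, q_h=500, p=3, h_ot=30,
                       h_m=25, n_m=2, spare_parts=[(120, 1), (15, 4)])
# For these hypothetical inputs, C = 3400 + 100 + 180 = 3680
```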
The three scales used for O, S, and D are those exposed in (Nuchpho, Nansaarng, & Pongpullponsak, 2019).

Entropy method and Best Worst Method (BWM)
Once the starting matrix has been obtained, in which the possible alternatives are on the rows, the criteria are on the columns, and the evaluations constitute the heart of the matrix, each criterion's weight must be identified.
In the traditional FMEA, a severe weakness, much discussed over the years, is the lack of different weights for the criteria; one factor may be predominant compared to the others.
This problem has been solved using a combination of the Entropy method and the BWM to obtain the weights. This choice was made to avoid methods that use linguistic variables, because these are difficult to manage and require considerable experience to be used at their best.
There is a risk of incorrectly considering some subjective values, as the evaluation is very much related to individual skills (Wang et al., 2013). Although the experts' subjective opinion is intrinsically present in the data structure, the combination with the BWM has been made to take more account of business ideas (Lo & Liou, 2018). Moreover, using the two methods makes the approach replicable in any company, even those where no objective maintenance data is available.
After calculating the weights with the two methods, a simple average or a weighted average of the values can be done; this depends mainly on how much the business ideas influence the maintenance aspects.
The Entropy method, according to (Trinkūnienė, Podvezko, & Zavadskas, 2017), will be applied as follows:
• The data of the matrix are normalized to ensure a homogeneous and direct comparison between the criteria: p_ij = x_ij / Σ_i x_ij.
• The entropy of each criterion is calculated: E_j = -(1/ln m) Σ_i p_ij ln p_ij, where m is the number of alternatives.
• The degrees of diversification are calculated: d_j = 1 - E_j.
• Finally, the weights are calculated: w_j = d_j / Σ_j d_j.
The greater the weight of criterion j, the more critical the criterion j will be. If the values of criterion j are almost equal, then it will be assigned a small weight, because the entropy method is an objective method based solely and exclusively on the data structure and is in no way influenced by managerial policies.
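The entropy weighting steps above can be sketched on a toy decision matrix. The scores below (4 causes of failure × 4 criteria, standing in for O, S, D, C) are illustrative; the third column is deliberately constant to show that a criterion with equal values receives a null weight. NumPy is assumed to be available.

```python
import numpy as np

# Toy decision matrix: 4 alternatives (rows) x 4 criteria (columns).
# Column 2 is constant, so the entropy method should give it weight ~0.
X = np.array([[7., 8., 5., 5.],
              [2., 6., 5., 9.],
              [5., 5., 5., 5.],
              [9., 2., 5., 1.]])
m, n = X.shape

P = X / X.sum(axis=0)                          # normalize each criterion column
E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy of each criterion
d = 1.0 - E                                    # degree of diversification
w_entropy = d / d.sum()                        # entropy weights, summing to 1
```

Note that the logarithm requires strictly positive entries; zero scores would need the usual convention 0·ln 0 = 0.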
The BWM will be applied as follows:
• The most important (best) criterion and the least important (worst) criterion are identified.
• The preferences of the best criterion over the others are expressed by giving a number from 1 to 9, obtaining the row vector A_B = (a_B1, ..., a_Bn).
• The preferences of the other criteria over the worst criterion are expressed by giving a number from 1 to 9, obtaining the column vector A_W = (a_1W, ..., a_nW).
• Finally, an optimisation problem of the type
min ξ subject to |w_B − a_Bj · w_j| ≤ ξ, |w_j − a_jW · w_W| ≤ ξ, Σ_j w_j = 1, w_j ≥ 0
is solved to obtain the weights of the criteria.
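The BWM optimisation can be sketched as a small linear program (this follows Rezaei's linear BWM variant). The preference vectors below are illustrative and deliberately consistent, so the optimal ξ is zero; NumPy and SciPy are assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative best-to-others and others-to-worst preference vectors
# (criterion 0 is best, criterion 3 is worst; fully consistent data).
a_B = np.array([1., 2., 4., 8.])
a_W = np.array([8., 4., 2., 1.])
n = len(a_B)
best, worst = int(np.argmin(a_B)), int(np.argmax(a_B))

def abs_constraints(base):
    # |base . w| <= xi  ->  base.w - xi <= 0  and  -base.w - xi <= 0
    return [np.append(base, -1.0), np.append(-base, -1.0)]

A_ub, b_ub = [], []
for j in range(n):
    r = np.zeros(n); r[best] += 1.0; r[j] -= a_B[j]     # w_B - a_Bj * w_j
    A_ub += abs_constraints(r); b_ub += [0.0, 0.0]
    r = np.zeros(n); r[j] += 1.0; r[worst] -= a_W[j]    # w_j - a_jW * w_W
    A_ub += abs_constraints(r); b_ub += [0.0, 0.0]

c = np.zeros(n + 1); c[-1] = 1.0                        # minimize xi
A_eq = [np.append(np.ones(n), 0.0)]                     # weights sum to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + 1))
w_bwm, xi = res.x[:n], res.x[-1]
# With these consistent preferences, w_bwm = [8/15, 4/15, 2/15, 1/15], xi = 0
```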

Calculation of final weights
Once the entropy weights w_E(j) and the BWM weights w_BW(j) have been calculated, the final weights are determined in the following way:
w(j) = E · w_E(j) + BW · w_BW(j), with E + BW = 1,
where E is the weight you want to give to the objective data from maintenance and BW is the weight you want to give to the subjective data from the experts.
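The weighted average described above is a one-liner; the sketch below uses illustrative weight vectors, not the case-study values.

```python
# Combine entropy and BWM weights. E and BW are the (non-negative)
# importances given to the two methods, with E + BW = 1.
def combine_weights(w_entropy, w_bwm, E=0.5, BW=0.5):
    assert abs(E + BW - 1.0) < 1e-9
    return [E * we + BW * wb for we, wb in zip(w_entropy, w_bwm)]

w_final = combine_weights([0.40, 0.30, 0.20, 0.10],
                          [0.20, 0.30, 0.30, 0.20])
# With E = BW = 0.5 (as chosen later in the case study),
# w_final is approximately [0.30, 0.30, 0.25, 0.15]
```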

Application of the EDAS method to rank alternatives
The EDAS method (Evaluation based on Distance from Average Solution) is a relatively recent MCDM problem-solving technique. It derives from considerations made on two methods widely used in applications: TOPSIS (which has already been extensively discussed) and WSM (Weighted Sum Method). It consists of calculating AV_j (the average value) for each j-th criterion and evaluating each alternative's distance from this value.
The steps to apply the method are illustrated below. According to (Trinkūnienė, Podvezko, & Zavadskas, 2017), 7 steps must be followed:
• Calculate the average solution for each criterion: AV_j = (1/m) Σ_i x_ij.
• Calculate the positive distance from average for the benefit and cost criteria: PDA_ij = max(0, x_ij − AV_j)/AV_j for benefit criteria and PDA_ij = max(0, AV_j − x_ij)/AV_j for cost criteria.
• Calculate the negative distance from average for the benefit and cost criteria: NDA_ij = max(0, AV_j − x_ij)/AV_j for benefit criteria and NDA_ij = max(0, x_ij − AV_j)/AV_j for cost criteria.
• Using the weights of the previously calculated criteria, the weighted positive sums SP_i = Σ_j w_j · PDA_ij are calculated.
• Likewise, the weighted negative sums SN_i = Σ_j w_j · NDA_ij are calculated.
• The weighted sums are normalized: NSP_i = SP_i / max_i SP_i and NSN_i = 1 − SN_i / max_i SN_i.
• The appraisal score AS_i = (NSP_i + NSN_i)/2 is calculated.
The ranking of the possible causes of failure is evaluated following the descending order of the AS index.
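The EDAS steps above can be sketched on a toy matrix. In this risk context a higher score means higher criticality, so both criteria are treated as benefit-type; the data and weights below are illustrative, and NumPy is assumed.

```python
import numpy as np

# Toy decision matrix: 3 causes of failure (rows) x 2 criteria (columns),
# both treated as benefit-type (higher score = more critical).
X = np.array([[4., 2.],
              [2., 4.],
              [3., 3.]])
w = np.array([0.7, 0.3])

AV = X.mean(axis=0)                   # average solution per criterion
PDA = np.maximum(0, X - AV) / AV      # positive distance from average
NDA = np.maximum(0, AV - X) / AV      # negative distance from average
# (for cost-type criteria, PDA and NDA would swap their signs)
SP = (w * PDA).sum(axis=1)            # weighted positive sums
SN = (w * NDA).sum(axis=1)            # weighted negative sums
NSP = SP / SP.max()                   # normalized weighted sums
NSN = 1.0 - SN / SN.max()
AS = 0.5 * (NSP + NSN)                # appraisal score
ranking = np.argsort(-AS)             # descending AS = most critical first
```

For this toy data the ranking is alternative 0, then 2, then 1; note that `SP.max()` and `SN.max()` must be non-zero, which holds whenever the alternatives are not all identical.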

Criticality Analysis (CA)
The CA consists of a qualitative, quantitative, or semi-quantitative analysis used to identify the critical causes of a system failure or those where it is convenient to intervene most urgently. There are several methods to perform this analysis. One of these is to use the Hazard Score Matrix, in which only Severity and Occurrence are considered and whose product is compared either with threshold values or through Pareto analysis (Vala, Chemweno, Pintelon, & Muchiri, 2018), according to which 20% of the causes of faults cause 80% of total faults.
The Pareto principle of 80-20 is applied to the EDAS method's ranking to identify the most important critical issues.
In traditional FMECA, once the causes of critical failures have been identified, they are analyzed individually and in a static way, i.e., without seeing how they evolve.
Furthermore, in doing so, the interdependencies between the different failure modes are not taken into consideration (Carmignani G., 2009) (Xu, Tang, Xie, Ho, & Zhu, 2002). This is a weakness strongly discussed by scholars, the lack of dynamism. In doing so, one never looks at the machine's totality and how its conditions change over time.
In order to overcome this problem, a criticality analysis with a System Dynamics model is carried out.
System Dynamics is an approach oriented to consider a system not as a set of single independent components but as a single complex system in which there are causal relationships that feed over time. Usually, simulation software is used to capture the system's relational aspects and study its behavior over time. The concept of time is essential because, in classic FMECA, the causes of failure are considered one at a time, without considering how one influences the other and how the sum of their contributions accumulates over time, increasing the system's criticality.
In order to use System Dynamics correctly, it is first of all necessary to understand and adequately represent the system's behavior, finding and highlighting how the elements are reciprocally connected.
Then the criticality analysis will be conducted in a dynamic way in order to examine how the simultaneity of the various causes of failure and their interactions influence the total probability of failure.

Definition and impact assessment of corrective actions
Once the system's criticalities have been identified, technicians and experts must be brought together to identify the corrective actions to be taken to lower the risk associated with the system. In this case, thanks to the use of System Dynamics, the technicians do not look only at the critical component but look at the totality of the system, questioning the influence that the various components have on each other and thus identifying corrective actions capable of lowering the entire failure risk associated with the machine.
Once corrective actions are identified, they are implemented and evaluated. To do this, the cycle illustrated in Figure is repeated.
This generates an infinite cycle aimed at continuously improving the efficiency of the system.

Case study
Our case study focuses on a particular machine called TR-CS re-coupling in a manufacturing company in the agri-food sector. A top view of it is shown in Figure 2.

Figure 2 TR-CS top view
This machine allows the transfer from one chain to another automatically.  It is of fundamental importance that the chain hook and the release station's trolley are correctly aligned. If this does not happen, the machine's movement will risk damaging or even breaking the chicken knuckles.
From the hook to the trolley, the transfer is allowed by an extraordinary guide positioned near the point of contact between the hook and trolley.
In this way, the chicken is positioned on the transfer trolley.
When the trolley comes out of the drag disc, it is accelerated so that the distance between the products becomes greater. The trolley is driven by a toothed belt to the weighing unit. On the right side, the calibration line side, the process mirrors the one just described. The trolley is then aligned with the calibration chain's hook (indicated with number 7 in the figure), and a guide, positioned appropriately, allows the chicken to be hooked to the calibration chain. Here, the guide's work is less onerous compared to that of the release guide because the chicken does not have time to "weld" to the trolley, and therefore it is easier to detach it.
The process is equipped with a series of sensors that identify the "0" hook of the chains that allow counting the chickens, weighing, and therefore matching all the product data.
Other sensors instead detect the presence of the chicken.
The coordination of the two signals makes it possible to know in the i-th hook of the tunnel chain or the calibration hook if a chicken is present and exactly which chicken is present.

Multidisciplinary team creation, machine breakdown into functional units, identification of FMs and CFs
The first step of our analysis is to form an interdisciplinary team. It is essential to bring together people with different technical backgrounds and several years of experience to identify all possible ways of system failure.
The team, during a couple of meetings, and with the help of the machine manuals, broke the machine down into six functional units.

Evaluation of the factors O, S, D, and C
After this first work of breaking down the machinery and identifying the faults and their causes, the team focused on evaluating the four factors O, S, D, and C. To do this, they relied on their experience and on a series of data from the maintenance management software.
In Appendix A - FMECA table, the starting matrix of the case study is reported.

Entropy method and BWM
The next step of the proposed method involves applying the Entropy method to calculate the criteria weights.
In Figure 5, a screenshot of the Excel sheet used is shown. For a complete analysis, refer to Appendix B - ENTROPY.

Figure 5 Excel worksheet extract by entropy method
Once the criteria weights have been calculated with the Entropy method, you can proceed to calculate the weights with the BWM.
In Figure 6, there is a screenshot of the Excel sheet used. For a complete analysis, refer to Appendix C - BWM.

Figure 6 Extract Excel worksheet for BWM
Once you have obtained the weights with the BWM, you can move on to the next step.

Calculation of final weights
Having the weights of the two criteria, now all that remains is to choose their relative importance to proceed with the final calculation of the weights.
In our case, following the opinion of the team, it was decided to give the same weight to the two methods, so E = BW = 0.5. Therefore, in our case study, the two methods have the same importance, and both the subjective data expressed a priori by the experts and the objective maintenance data were considered equally important.
Once the definitive weights of the criteria have been calculated, you can move on to the application phase of the EDAS method to calculate the ranking of the alternatives.

Application of the EDAS method to rank alternatives
Before reaching the criticality analysis, the last step of our methodology involves the use of the EDAS method to calculate the ranking of alternatives.
To apply the EDAS method, Excel was used, resulting in a worksheet full of information.
For simplicity of representation, an extract of the fundamental part, that relating to the calculation of the appraisal score and the ranking, is reported in Figure 7.
For a complete analysis, you can refer to Appendix D -EDAS.

Criticality analysis (CA)
In our case study, the criticality analysis was carried out using software widely used in System Dynamics models' simulations. The software is Vensim PLE x64.
Before moving on to the construction of a CLD and then to the simulation, it is necessary to identify the critical events using the Pareto principle. This famous principle has often been used in criticality analysis (Lipol & Haq, 2011).
The Pareto principle, also called the law of 80-20, states that 20% of possible causes are responsible for 80% of system failures.
Therefore, in our case, having identified 99 causes of failure, the first 20 of the ranking made in paragraph 5.7 are our critical events.
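The Pareto cut on the EDAS ranking can be sketched as below. The appraisal scores and cause labels are illustrative (in the case study, the cut on 99 causes yields the top 20).

```python
import math

# Sort causes of failure by appraisal score (descending) and keep the top 20%.
# The scores below are illustrative toy data, not from the case study.
scores = {f"CF{i}": s for i, s in enumerate([0.9, 0.2, 0.7, 0.4, 0.8,
                                             0.1, 0.6, 0.3, 0.5, 0.95],
                                            start=1)}
ranked = sorted(scores, key=scores.get, reverse=True)
k = math.ceil(0.2 * len(ranked))      # 20% of the causes (99 -> 20 in the paper)
critical_events = ranked[:k]          # ['CF10', 'CF1'] for this toy data
```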
The critical events found with the analysis carried out are those that the experts had a priori indicated as critical in the machine's operation. This confirms that the model proposed in this study leads to realistic results.
For ease of representation and analysis, it has been decided to exclude the causes CF55/CF57/CF58/CF70/CF71/CF72/CF73/CF80 from the analysis. This choice was made because these causes coincide with those already considered: all the actions and analyses that will be carried out on the causes related to the tunnel chain and the chicken release station can be proposed again exactly for the causes CF55/CF57/CF58/CF70/CF71/CF72/CF73/CF80 related to the calibration chain and the chicken release station.
Besides, the causes related to the chicken release station and the tunnel chain are the most serious for the following reasons:
• The tunnel chain is considerably longer than the calibration chain; if it breaks, it causes more damage at the economic level.
• The release guide, unlike the hooking guide, is subject to greater stress and is therefore more prone to failure.
As already mentioned in this work, a dynamic simulation program is used to study the causes of failure simultaneously rather than individually as in the traditional FMEA (Lipol & Haq, 2011).
The first step is to represent the causal relationships between the variables present graphically. This diagram is called the Causal Loop Diagram (CLD). Thanks to CLD, it is possible to identify possible strengthening or balancing cycles (C., DR, B., & WS, 2018).
These cycles are significant as they tell us if two or more failures increase each other over time or eliminate each other.
In Figure , an illustration of our Causal Loop Diagram is presented.
To better understand the proposed CLD, the representation conventions are listed:
• The causes of failure have been entered without a box;
• The failure modes have been entered in circles;
• The effects have been placed in rectangles;
• The maintenance actions have been placed in hexagons.
Since each cause has its own stochastic properties, the probability distribution that best suits the different causes of failure must be identified:
• For the causes CF11 / CF86 / CF91 (Blackout), CF2 (Incorrect feed speed), CF3 (Incorrect product life forecast), and CF20 (Inadequate tolerances), being purely random events, the model that best interprets their behavior is the exponential one, f(t) = λ e^(−λt),
Where λ is the frequency with which the event occurs.
• For the causes CF18 / CF21 (Wrong adjustment), CF19 (Excessive vibrations), CF38 (Over-stressing), CF1 (Insufficient lubrication), and CF4 (Over-stressing), being causes of failure of mechanical components, the model that best represents their behavior is Weibull's, with hazard rate λ(t) = (β/η)(t/η)^(β−1). The model under analysis with the relationships is shown in Figure 9. As shown in Figure 9, no maintenance variables have been inserted in the model, not because the maintenance activities have not been taken into account, but because they have been incorporated within the failure rate functions of the various failure modes.
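The two failure models above can be sketched as hazard-rate functions. The parameter values below are illustrative, not taken from the case study.

```python
# Exponential model: purely random events with a constant hazard rate lambda.
# Weibull model: wearing mechanical components, whose hazard grows with time
# when the shape parameter beta > 1. Parameter values are illustrative.

def exponential_hazard(lam):
    return lambda t: lam                           # constant failure rate

def weibull_hazard(beta, eta):
    # h(t) = (beta/eta) * (t/eta)**(beta - 1)
    return lambda t: (beta / eta) * (t / eta) ** (beta - 1.0)

blackout = exponential_hazard(lam=0.01)            # e.g. a blackout-type cause
guide_wear = weibull_hazard(beta=2.5, eta=12.0)    # e.g. a mechanical-wear cause

# The exponential hazard is flat; the Weibull hazard increases over time.
rates = [guide_wear(t) for t in (1.0, 6.0, 12.0)]
```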
In particular, for the FM9 failure mode (misalignment of the guide), an IF THEN ELSE cycle with a cycle time of two months has been set. This means that, thanks to the adjustment operations carried out every two months, the alignment conditions of the guide are returned to the 0 state, as shown in Figure 1.

Figure 1 AS-IS FM9 Guide misalignment
For the failure mode FM10, shown in Figure 2, on the other hand, a cycle time of 2 years has been set. This is because the guide is changed every two years, so its wear conditions are reset.  Once the trends of the various failure rates have been identified, all that remains is to identify that of the machine.
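The periodic-reset idea implemented in Vensim with IF THEN ELSE can be sketched as follows: the component's age, and hence its Weibull hazard, is reset to zero at each maintenance cycle. Cycle lengths and Weibull parameters below are illustrative (shape beta must be > 1 for a wear-out component).

```python
# Hazard of a wearing component whose age is reset at every maintenance cycle,
# mimicking the IF THEN ELSE reset used in the Vensim model. Illustrative values.
def hazard_with_reset(t, cycle, beta, eta):
    age = t % cycle                   # age since the last maintenance reset
    return (beta / eta) * (age / eta) ** (beta - 1.0)

# FM9-like mode: adjusted every 2 time units -> the hazard never grows large.
fm9 = [hazard_with_reset(t, cycle=2.0, beta=2.0, eta=6.0) for t in range(25)]
# FM10-like mode: component replaced every 24 time units -> hazard grows longer.
fm10 = [hazard_with_reset(t, cycle=24.0, beta=2.0, eta=6.0) for t in range(25)]
```

Comparing the two lists shows why the frequently adjusted mode dominates less: the short cycle caps its hazard at a much lower peak than the long replacement cycle allows.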
To do this, the total failure rate is obtained from the probability that all events can occur.
The events can be considered as s-independent.
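Combining s-independent failure modes can be sketched as follows: the machine survives an interval only if every mode survives it, so the total failure probability is the complement of the product of the individual survival probabilities. The per-mode probabilities below are illustrative.

```python
# Total failure probability of s-independent failure modes over an interval:
# P(system fails) = 1 - prod(1 - p_i). Probabilities below are illustrative.
def system_failure_probability(mode_probs):
    survival = 1.0
    for p in mode_probs:
        survival *= (1.0 - p)         # independent survival probabilities
    return 1.0 - survival

p_total = system_failure_probability([0.05, 0.10, 0.02])
# 1 - 0.95 * 0.90 * 0.98 = 0.1621 for these hypothetical inputs
```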
The complete system status is presented in the following graph, from which it can be seen how the maintenance actions greatly influence the system's failure rate.
Looking at all the graphs, it can also be seen that the failure rate of the entire machine is greatly influenced by the failure rate of FM9, guide misalignment. This is because it is the most frequent failure.
The simulation just made concerns the AS-IS status of the system, therefore considering the maintenance policies currently adopted in the company.
In the following, some corrective actions and their possible results are presented.

Definition and impact assessment of corrective actions
As seen, our machine's critical events concern the tunnel chain, the transfer chain, and the release guide.
After simulating our system's performance with the Vensim software's help, it was found that the component that has the greatest impact on the trend of the machine failure rate is the chicken release guide.
This component performs a fundamental task and must always be aligned in the right position. Unlike the chicken hooking guide, which plays a similar role, its task is made more complicated by the fact that the chickens arrive from the cooling tunnel, where they have stayed for at least 3 hours. This period of time causes the chicken to fasten to the hook of the tunnel chain; on arriving at the chicken release area, the guide must undergo strong stress to detach the chicken from the hook and must always be aligned correctly, otherwise it runs the risk of spoiling or dropping the chicken.
To lower the chicken guide failure rate, and thus have fewer failures, the guide adjustment frequency could be increased. This first action drastically lowers the probability of the failure occurring: as shown in Table 3, the failure rate goes from a maximum value of 6.4 to a maximum of 0.8.
With this first corrective action, an 87.5% improvement in the guide failure rate is therefore obtained. Looking at how the situation changes for the failure rate of the entire system, a significant improvement is also noted here, as can be seen in Table 4. Other actions that can be implemented to lower the entire system's failure rate are to try to increase the detectability of some failure modes before they occur. In particular, two scenarios can be evaluated to reduce the failure rate of the chains. The first scenario concerns the breakage of the electrical parts due to blackouts.
Obviously, nothing can be done about it because it depends on external causes. The only action that could be implemented to eliminate those failure modes is to install some emergency generators. However, this is a very expensive action that is not worth the cost given the very low frequency of blackouts. The second scenario to reduce the failure rate of the chains concerns their wear.
Either a greater number of checks by the maintenance technicians to identify signs of wear of the chains could be established, or automatic checks with sensors capable of identifying chain length variations could be installed.
The hypothesis is that it is possible to detect 90% of the faults thanks to one of these two choices.
In Table 5, it can be seen how the situation changes. The improvements are barely perceptible: there is a minimum improvement of about 5% on the frequency of the chains' breakages, which is already minimal.
Indeed, it is advisable to increase the inspections by the maintenance technicians in order to intervene promptly at every slightest deviation of the chain from its initial conditions and thus avoid accelerated wear, but it is not advisable to install automatic detection systems such as sensors because they are still expensive.
The first safe step on which to intervene is the chicken release guide.
With a few corrective actions, great results are achieved in machine operation, thus limiting production stops to a minimum.

Conclusion
Risk management within companies is increasingly fundamental, even more in manufacturing companies with an almost saturated production cycle.
In fact, it is fundamental to prevent the occurrence of a failure as much as possible because it leads in most cases to a loss of production and, therefore, to severe economic losses.
In the presented work, a development of the traditional Failure Mode and Effect Analysis is proposed where some innovative aspects have been added to try to eliminate some deficiencies present in the traditional FMEA.
A fourth factor, the cost, has been added to consider the economic aspects and take into account production aspects, which are absent in the traditional FMEA. Besides adding the cost factor, the four factors are weighted thanks to the use of two MCDM: the objective Entropy method to derive the weights directly from the data structure; the BWM method to derive the weights of the factors from the subjective evaluations of the experts.
Then, to obtain the final weights, a weighted average has been used to allow an easy transition from objective to a more subjective evaluation.
Once the weights were obtained, the classic RPN formula was not used to rank the alternatives, in order to overcome some of its shortcomings: first of all, the mathematical formula to calculate the RPN lacks scientific support (Gargama & Chaturvedi, 2011); moreover, the traditional RPN calculation generates non-continuous numbers with many gaps in the scale of obtainable values (Carmignani G., 2009), and the traditional formula makes the index very sensitive to minor variations of single factors (Ekmekçioǧlu & Kutlu, 2012). The ranking was carried out thanks to the EDAS method, which allows for a more continuous risk index with a much lower probability of duplicates than the traditional method.
Furthermore, the method used is widely used in the academic field, so it also has scientific support. After calculating our alternatives' ranking, the famous Pareto law of 80-20 is used to choose the critical events.
The criticality analysis has been carried out using software (Vensim PLE x64) to simulate System Dynamics models. This last very innovative step allowed us to evaluate the causes of failure considering their dynamic aspect and the relationships existing between the various faults, which is impossible in a standard FMEA analysis where the causes are analyzed individually and statically.
The last step of the work consists of identifying corrective actions and evaluating the implementation of these actions.
It has been seen that an improvement in the machine failure rate of about 65% can be achieved. This improvement mainly concerns the chicken release guide: being subject to high stress, at a certain point it undergoes a considerable increase in the probability of failure, and by increasing the frequency of adjustment it is possible to intervene before the probability of failure rises too high.
As a final step, a sensitivity analysis has been carried out to provide greater support for the work carried out.
The corrective actions proposed in this work have not been implemented.
Future development of this work could be to carry out new analysis on the machine after implementing the corrective actions mentioned above.
Other developments could be to broaden the scope of the analysis to other causes of failure, identify further corrective actions, and then repeat the cycle.