Preprint
Article

This version is not peer-reviewed.

Adaptive OEE: A FUCOM-TOPSIS Framework for Context-Driven Equipment Effectiveness

Submitted: 10 April 2026
Posted: 13 April 2026


Abstract
Overall Equipment Effectiveness (OEE) is the dominant metric for manufacturing productivity, computed as the multiplicative product of Availability (A), Performance (P), and Quality (Q). Despite its widespread adoption, the classical OEE formula embeds a structural limitation: the three components are treated as equally important regardless of operational context, a fixed-weight assumption that systematically distorts maintenance prioritisation in environments with asymmetric operational priorities. No published framework has formally addressed this limitation through a structured, auditable multi-criteria weighting model. This paper proposes Adaptive OEE, a FUCOM-TOPSIS framework that replaces the fixed A×P×Q product with a context-driven weighting model. FUCOM elicits context-specific weights for A, P and Q from expert judgement using only n−1 pairwise comparisons with guaranteed consistency, while TOPSIS ranks equipment assets under the weighted criteria, producing a closeness coefficient comparable across assets and contexts. Three illustrative case studies covering availability-dominant, performance-dominant, and quality-dominant contexts demonstrate that the classical OEE ranking is not preserved under any weight configuration, with Divergence Index values ranging from 0.667 to 1.333. Divergence is most severe when one component carries strongly asymmetric weight, precisely the condition equal weighting cannot accommodate. The principal contributions are the formalisation of the equal-weighting assumption as a measurement-theoretic deficiency, the replacement of multiplicative aggregation with a weighted distance measure preserving the A/P/Q decomposition, and the introduction of the Divergence Index as a quantitative measure of context-insensitive rank displacement.

1. Introduction

Overall Equipment Effectiveness (OEE) is widely recognised as the gold standard for measuring manufacturing productivity. Introduced by Nakajima [1] within the Total Productive Maintenance (TPM) framework, OEE decomposes equipment losses into three multiplicative components, Availability (A), Performance (P), and Quality (Q), yielding the composite score OEE = A × P × Q. Since its formalisation in the late 1980s, OEE has achieved broad adoption across discrete and process manufacturing, serving as a foundational key performance indicator in lean manufacturing, Six Sigma, World Class Manufacturing, and Industry 4.0 digital transformation initiatives [2,3]. Academic interest in OEE has increased substantially over the past decade, with associated research keywords evolving from maintenance and production to lean manufacturing and optimisation [3]. An OEE score of 85% is widely cited as the world-class benchmark for discrete manufacturing, while the global average across most industrial sectors lies between 55% and 65% [4,5], indicating persistent and measurable gaps between actual and potential equipment performance.
Despite its widespread use, the classical OEE formula embeds a structural assumption that has received insufficient critical attention in the literature, specifically that the three components A, P and Q are treated as equally important and combined with implicit fixed weights of 1/3 each, irrespective of the operational context in which the equipment operates. This assumption is operationally unjustifiable across different industrial environments.
To illustrate this structural issue, three representative industrial contexts are examined, each characterised by a distinct dominant loss factor.
In pharmaceutical manufacturing, Quality constitutes the operationally critical component. Regulatory frameworks enforced by agencies such as the Food and Drug Administration impose strict conformance requirements on every production unit; non-conforming batches are subject to mandatory quarantine, investigation, and disposal [6]. Reported industry data indicate that the average quality score in pharmaceutical manufacturing is approximately 94%, implying a 6% non-conformance rate that may trigger regulatory inspections, warning letters, or product recalls [5]. The financial and reputational consequences of quality failures in this sector are disproportionate relative to equivalent losses in Availability or Performance, which result in recoverable throughput reductions rather than regulatory liability. The classical OEE formula assigns equal arithmetic weight to all three components, thereby failing to reflect this asymmetry.
In continuous-flow refinery operations, Availability constitutes the operationally critical component. Process shutdowns in continuous production systems propagate simultaneously across interconnected upstream and downstream units, generating economic losses at rates that are structurally incomparable to equivalent performance or quality degradations [7]. Restart procedures following unplanned shutdowns incur additional costs associated with energy consumption, material losses, and equipment thermal cycling [7]. Performance and quality variations in refinery operations are typically absorbed within operational tolerances and do not produce losses of comparable magnitude. The equal-weight OEE formulation does not capture this difference in loss severity between components.
In high-speed automotive assembly, Performance constitutes the operationally critical component. These production systems are engineered around a defined cycle time, and minor speed reductions or micro-stoppages that fall below the threshold for classification as availability losses nonetheless generate cumulative throughput deficits across tightly synchronised stations [8]. Automated inspection systems maintain quality losses at negligible levels, and predictive maintenance programmes limit unplanned downtime [2,8]. Assigning equal weight to A, P and Q in this production environment systematically underestimates the contribution of performance losses to total production inefficiency.
In each context, the component that dominates operational loss is different, yet the classical OEE formula assigns identical weight to all three. This fixed-weight assumption is not a neutral modelling choice; it constitutes a structural limitation with direct and measurable consequences for maintenance prioritisation decisions.
Significant efforts have been made to extend the original OEE framework. Extensions such as Total Equipment Effectiveness Performance (TEEP), Production Equipment Efficiency (PEE), Overall Asset Effectiveness (OAE), Overall Factory Effectiveness (OFE), and Overall Production Effectiveness (OPE) have emerged to address broader perspectives on equipment and factory-level performance [2]. While PEE introduces weights to the sub-indicators so that A, P and Q are not assigned equal importance as in classical OEE, no structured, formal method is prescribed for deriving those weights from the operational context, and the assignment of PEE weights remains ad hoc, expert-dependent, and unsupported by any consistency validation mechanism. No published work has formally proposed replacing the fixed A×P×Q product with a structured Multi-Criteria Decision Making (MCDM) weighting model that derives context-specific weights with guaranteed consistency and minimum expert burden. Existing extensions modify the scope of measurement but none provides a systematic, auditable methodology for determining how the three components should be weighted relative to each other in a given operational context. This methodological gap motivates the present work.
This paper addresses three research questions. The first concerns the formal characterisation of the structural limitations of the classical OEE formula and the theoretical basis upon which a multi-criteria weighting approach is justified. The second examines how the Full Consistency Method (FUCOM) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) can be integrated to derive context-sensitive weights for the OEE components with minimum expert elicitation burden and mathematically guaranteed consistency. The third investigates the conditions under which the proposed Adaptive OEE framework produces rankings that differ materially from those obtained under classical OEE and analyses the implications of such divergence for maintenance prioritisation decisions.
This paper makes three original contributions. The equal-weight assumption embedded in the classical OEE formula is formally characterised as a structural limitation, and its quantifiable consequences for maintenance prioritisation decisions are systematically identified. The Adaptive OEE framework is proposed as a FUCOM–TOPSIS model for context-driven equipment effectiveness measurement, designed to be applicable across heterogeneous industrial environments. Three illustrative case studies involving equipment assets evaluated across availability-dominant, performance-dominant, and quality-dominant operational contexts are presented, demonstrating the specific conditions under which Adaptive OEE produces rankings that diverge from those of classical OEE and quantifying the associated decision impact through the Divergence Index.
The remainder of this paper is organised as follows. Section 2 reviews the classical OEE framework and its documented limitations. Section 3 presents the Adaptive OEE framework and introduces the FUCOM and TOPSIS methods that form the methodological basis of the proposed approach, detailing its architecture, weighting procedure, and scoring mechanism. Section 4 describes illustrative case studies, reports the results obtained across the three operational contexts, and compares Adaptive OEE rankings against those produced by the classical formula. Section 5 discusses the findings and addresses the practical implications for industrial asset management. Section 6 concludes the paper and identifies directions for future research.

2. Literature Review

2.1. Documented Limitations of Classical OEE

Despite the widespread adoption of OEE across manufacturing sectors, a substantial body of critical literature has documented structural deficiencies that constrain its validity as a standalone decision-support instrument, particularly when the aim is to reflect operational priorities with sufficient fidelity to support maintenance and resource-allocation decisions.
The first and most fundamental limitation concerns false precision through equal weighting. The classical OEE formula assigns equal implicit weight to availability, performance and quality, contrary to the well-established operational reality that these three dimensions of loss are qualitatively and strategically distinct [9,10]. There is no theoretical or empirical basis for asserting that a one-percentage-point deterioration in availability is equivalent in strategic impact to a one-percentage-point deterioration in quality. Several authors have noted that the rationale underlying the multiplication of the three sub-indicators remains unspecified in the original formulation and that the relative importance of each dimension is simply not considered [11]. Baykasoğlu demonstrated that, when all three OEE variables individually exceed 0.70, the multiplicative formula can yield a composite score that does not faithfully represent actual equipment performance, and systematically understates it relative to any weighted-sum formulation [11]. Muchiri and Pintelon [2], whose review in the International Journal of Production Research remains the most comprehensive treatment of the metric's genealogy, showed both where the measurement framework captures production losses and where it diverges from actual industrial practice. The absence of a principled weighting mechanism means that OEE produces a single composite score that may simultaneously conceal and misrepresent the operational criticality of individual loss categories, a problem not resolved by any algebraic manipulation of the basic A × P × Q structure.
The second class of limitations involves systematic blind spots in the measurement scope. Muchiri and Pintelon [2] explicitly noted that the OEE framework does not capture losses attributable to external causes such as logistics failures, supplier disruptions, utility shortages or lack of commercial demand. Planning delays, which render equipment idle for intervals not classifiable under any of the six canonical losses, are similarly invisible to the OEE lens. Stefana et al. [12], in their 2024 development of the Resource Overall Equipment Cost Loss (ROECL) indicator published in the International Journal of Productivity and Performance Management, confirmed that costs and organisational losses outside the A, P, Q triplet remain invisible to OEE-based assessments, and demonstrated that the lowest-OEE machine is not necessarily responsible for the largest economic losses. Raju et al. [13] further identified that organisational and planning causes of sub-optimal performance fall entirely outside OEE’s measurement perimeter in multi-machine environments.
A third limitation is the aggregation distortion arising from the multiplicative structure of the formula. Because OEE is computed as a product rather than a weighted sum, a marginal deterioration in one factor is amplified non-linearly in the composite score. Different combinations of A, P and Q values can yield identical OEE scores, meaning that the metric is not injective: the same output can correspond to fundamentally different operational states, thereby undermining its diagnostic utility [11]. Tsarouhas [14], in a detailed eight-month case study on an automated production line published in the International Journal of Productivity and Performance Management, confirmed that speed losses and equipment breakdowns collectively account for more than 90% of total OEE losses in food manufacturing contexts, illustrating precisely the kind of structural dominance that the multiplicative formula amplifies rather than contextualises.
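The non-injectivity described above can be made concrete with a short numerical sketch. The asset values below are hypothetical: swapping the availability and quality losses leaves the multiplicative score unchanged, so the same composite value describes two fundamentally different operational states.

```python
# Hypothetical demonstration: classical OEE is not injective. Swapping the
# availability and quality losses leaves the multiplicative score unchanged,
# so the same composite value describes two different operational states.

def classical_oee(a: float, p: float, q: float) -> float:
    """Classical OEE as the product A x P x Q."""
    return a * p * q

asset_availability_problem = (0.75, 0.95, 0.90)  # downtime-driven losses
asset_quality_problem = (0.90, 0.95, 0.75)       # scrap-driven losses

oee_1 = classical_oee(*asset_availability_problem)
oee_2 = classical_oee(*asset_quality_problem)

# Identical composite scores, fundamentally different maintenance implications.
assert abs(oee_1 - oee_2) < 1e-12
print(f"both assets score {oee_1:.3f}")  # both assets score 0.641
```

A maintenance planner reading only the composite score cannot tell whether the 0.641 reflects a downtime problem or a scrap problem, which is precisely the diagnostic loss the text describes.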
The fourth structural limitation is context insensitivity. The classical 85% world-class benchmark was developed for high-volume, dedicated discrete manufacturing lines in the Japanese automotive sector. Applied across industries, this threshold is widely acknowledged to be inapplicable without contextual adjustment [15,16]. Ng Corrales et al. [3], in their systematic review of 186 peer-reviewed articles published in Applied Sciences, confirmed that OEE benchmarks vary significantly across manufacturing verticals and that the same numerical value carries different operational meaning depending on product mix, batch complexity, automation level and regulatory regime. Process industries such as pharmaceutical manufacturing, for instance, exhibit inherently lower availability due to mandatory cleaning cycles and batch validation, circumstances under which an OEE of 70% may represent excellent performance. Lucantoni et al. [17], in their 2024 machine-learning-based OEE improvement framework published in the International Journal of Quality and Reliability Management, further confirmed that the inability to contextualise OEE values across different process types is a persistent barrier to its effective deployment as a decision-support tool.
Taken collectively, these four documented limitations establish that classical OEE, while operationally useful as a first-pass diagnostic tool, is insufficient as the sole basis for informed strategic decisions on maintenance prioritisation, resource allocation or continuous improvement.

2.2. Extensions of OEE in the Literature

The documented inadequacies of classical OEE have stimulated a substantial body of work producing alternative metrics, each addressing a specific perceived deficiency while leaving the fundamental equal-weighting problem substantially intact. The systematic review by Ng Corrales et al. [3], analysing 186 peer-reviewed articles from Web of Science and Scopus, confirmed that academic interest in OEE-derived models has grown substantially, but identified no established consensus on an inter-component weighting methodology that is simultaneously theoretically grounded, empirically validated and practically applicable.
The earliest and most widely adopted extension is Total Effective Equipment Performance (TEEP), which modifies the denominator of the formula to encompass total calendar time rather than merely planned production time [2]. TEEP captures planned downtime as an additional loss category, giving visibility to utilisation losses that OEE by design excludes, and is routinely employed as a strategic capacity-planning metric. While TEEP expands the measurement scope, it retains the equal-weight multiplicative structure within the operational time window it shares with OEE.
Production Equipment Effectiveness (PEE), proposed by Raouf [18] in the International Journal of Operations and Production Management, represents the first formal recognition of the weighting problem within the OEE family. Raouf argued that quality, performance and availability should not be treated as equally important and proposed the use of the Analytic Hierarchy Process (AHP) to assign differential weights, yielding the weighted exponentiation formula PEE = A^w1 x P^w2 x Q^w3. Critically, however, while PEE acknowledges the unequal-weight problem, Raouf did not specify a rigorous computational procedure for deriving the weights, leaving the method dependent on expert judgement without a structured elicitation framework [11].
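The weighted-exponentiation structure of PEE can be sketched as follows. The asset values and both weight vectors are illustrative assumptions, not weights prescribed by Raouf; the point is only that classical OEE is the special case in which every exponent equals one, and that unequal exponents change how component losses propagate to the score.

```python
# Sketch of the PEE weighted-exponentiation structure, PEE = A^w1 * P^w2 * Q^w3.
# The asset values and weight vectors below are illustrative assumptions only.

def classical_oee(a: float, p: float, q: float) -> float:
    return a * p * q

def pee(a: float, p: float, q: float, weights) -> float:
    """Weighted exponentiation; classical OEE is recovered at weights (1, 1, 1)."""
    wa, wp, wq = weights
    return (a ** wa) * (p ** wp) * (q ** wq)

asset = (0.95, 0.80, 0.99)  # strong availability and quality, weak performance

equal = pee(*asset, (1 / 3, 1 / 3, 1 / 3))  # equal exponents (geometric mean)
perf_heavy = pee(*asset, (0.2, 0.6, 0.2))   # performance-dominant exponents

# The performance-dominant weighting penalises the weak P component harder.
assert perf_heavy < equal
```

Without a structured elicitation procedure, however, the choice between exponent vectors such as these remains exactly the ad hoc judgement the text criticises.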
Overall Asset Effectiveness (OAE) and Overall Production Effectiveness (OPE) were developed to extend the measurement scope from individual equipment to the asset or production-system level, explicitly incorporating commercial and logistical losses that OEE ignores [2]. These metrics represent a change in measurement scope but preserve the multiplicative structure and do not revisit the equal-weighting assumption applied within each sub-category.
Overall Equipment Effectiveness of a Manufacturing Line (OEEML), introduced by Braglia, Frosolini and Zammori [19] in the Journal of Manufacturing Technology Management, extends OEE to jointly operated machine sets by proposing an alternative losses classification that separates losses directly attributable to individual equipment from those distributed across the line. OEEML successfully highlights bottleneck inefficiencies and supports root-cause identification at the line level. However, as the authors themselves acknowledged, the metric fails to account for the extent to which effectiveness is sustained by in-process inventories and requires complementary metrics to estimate associated costs [19]. The inter-component weighting issue is not addressed in this work.
Overall Resource Effectiveness (ORE), proposed by Garza-Reyes [20] in the Journal of Quality in Maintenance Engineering, recognised that factors such as raw material efficiency and the wider production environment may contribute significantly to process performance in ways not captured by OEE’s three conventional factors. Empirical and simulation-based investigation confirmed that ORE provides more complete diagnostic information for some processes [20]. Stefana et al. [12] extended this cost-based direction through ROECL, but similarly did not address the weighting of A, P and Q relative to one another.
Overall Weighting Equipment Effectiveness (OWEE), introduced by Wudhikarn [21] at the IEEE International Conference on Industrial Engineering and Engineering Management, applied the Rank-Order Centroid (ROC) method to assign differential weights to OEE elements. While OWEE represents a meaningful advance in recognising the need for weighting, the ROC method produces weights that are insensitive to tie-breaking when factors share the same rank, and its approximation error increases with the number of criteria ranked [11]. Extensions incorporating fuzzy logic for weighted OEE computation have more recently been proposed precisely because neither PEE nor OWEE adequately handles the uncertainty and context-dependence inherent in expert weight elicitation [11].
The Tang [22] review and compatible weighting approach, published in the Lecture Notes in Production Engineering series in 2024, provides the most recent systematic treatment of OEE weighting methods and concludes that FUCOM-based approaches represent a theoretically superior alternative to both AHP-based PEE and ROC-based OWEE, given FUCOM's ability to guarantee mathematical consistency with only n−1 pairwise comparisons. Pamucar, Stevic and Sremac [23] formally established FUCOM in 2018 in the journal Symmetry as a method that simultaneously minimises the expert elicitation burden and the deviation from full consistency, properties directly relevant to the OEE weighting problem.
The critical observation that unifies all of these extensions is that, while each addresses real and important deficiencies of classical OEE, none resolves the foundational equal-weight structural limitation from a multi-criteria decision-making perspective. Extensions such as TEEP, OAE and ORE modify the scope or the numerator of the measurement framework but retain the implicit equal-weight assumption within the core multiplicative structure. PEE and OWEE acknowledge the weighting problem but rely on methods that are either insufficiently specified or theoretically limited in their ability to represent the relative importance of operational dimensions in a structured, replicable and context-sensitive manner [3,11,22].

2.3. Critical Analysis

The literature review identifies three gaps that directly motivate the proposed framework.
Although the structural limitations of classical OEE have been documented across multiple contributions [2,11,12,13,14,17,18], no prior work has formalised the conditions under which the equal-weight assumption is statistically and operationally inadmissible. The present work provides that formal argument, establishing the theoretical basis for replacing the multiplicative A × P × Q composite with a weighted formulation.
Moreover, existing weighting approaches are individually deficient: PEE [18] introduced AHP-based weights but without a rigorous elicitation procedure; ROC-based OWEE [21] captures only rank ordering and cannot represent cardinal preference intensities; and fuzzy extensions [11] address uncertainty without guaranteeing consistency. None satisfies simultaneously the three requirements of minimum expert elicitation burden, mathematical consistency guarantee, and context-sensitivity. Pamucar et al. [23] demonstrated that FUCOM meets all three conditions; its integration with TOPSIS for equipment ranking has no precedent in the OEE literature.
Also, divergence between weighted and unweighted OEE rankings has been observed empirically [17,19,20], but no study has characterised the threshold conditions under which this divergence becomes materially consequential for maintenance prioritisation decisions. The present work fills this gap, bridging the theoretical justification of weighting and the practical specification of the FUCOM-TOPSIS procedure.

3. Adaptive OEE Framework Proposal

This section presents the Adaptive OEE framework, a structured decision-support model designed to address the three structural limitations of classical Overall Equipment Effectiveness identified in Section 2: the implicit equal weighting of availability, performance and quality; the aggregation distortion arising from the multiplicative formula; and the context insensitivity of the universal 85% benchmark. The framework integrates two established methodologies from the multi-criteria decision-making literature, the Full Consistency Method (FUCOM) for weight elicitation and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) for equipment effectiveness scoring, into a coherent approach that preserves the diagnostic structure of classical OEE while correcting its measurement-theoretic deficiencies.

3.1. Framework Architecture

The Adaptive OEE framework proposes a new equipment effectiveness indicator, the Adaptive OEE score, defined as the weighted proximity of an equipment asset to a theoretically perfect performance state. The framework consists of two sequential modules. The first module applies the Full Consistency Method to derive a context-specific weight vector w = (w_A, w_P, w_Q) reflecting the relative strategic importance of the three OEE components in the operational context under analysis. The second module applies TOPSIS with fixed ideal poles to compute a closeness coefficient C_i that constitutes the Adaptive OEE score for the asset. The weight vector produced by the first module enters the second module as a required input; no scoring is performed prior to weight elicitation. The framework may be re-executed whenever the operational context changes, generating a revised weight vector and revised Adaptive OEE scores. Figure 1 illustrates the proposed framework.
The proposed framework addresses three of the four structural limitations identified in Section 2 as follows.
The equal weighting limitation is resolved by replacing the implicit uniform weight structure of classical OEE with a context-specific weight vector w = (w_A, w_P, w_Q) derived through FUCOM. The weight vector satisfies a mathematical consistency condition and reflects the operational priorities of the context under analysis.
The aggregation distortion limitation is resolved by replacing the multiplicative formula with a weighted Euclidean distance measure. The Adaptive OEE score C_i is injective by construction: distinct component profiles yield distinct closeness coefficients, preserving the full diagnostic information of the A, P, Q decomposition.
The context insensitivity limitation is resolved by grounding the scoring model in a context-specific weight structure rather than a universal performance threshold. The score C_i is interpretable in absolute terms as the weighted proximity of an asset to the theoretically perfect performance state {1, 1, 1}, without reference to any industry-derived benchmark.

3.2. Module 1 - FUCOM Weight Elicitation

Let C = {A, P, Q} denote the set of OEE components. The decision-maker first establishes a priority ranking of the components consistent with the operational context, expressed as an ordered sequence:
C_j(1) > C_j(2) > C_j(3)    (1)
where C_j(k) denotes the component ranked k-th and > denotes strict preference. Where two components are judged equally important, they occupy the same rank position. For each pair of consecutively ranked components, the decision-maker assigns a comparative priority ratio:
Φ = (φ_1, φ_2)    (2)
where φ_k ∈ [1,9] expresses how many times more important component C_j(k) is relative to C_j(k+1). The full set of ratios {φ_1, φ_2} constitutes the comparative priority vector. The FUCOM optimisation model derives the weight vector w = (w_A, w_P, w_Q) by minimising the degree of full consistency χ subject to the following constraints:
min χ    (3)
| w_j(k) / w_j(k+1) − φ_k | ≤ χ,    k = 1, 2    (4)
| w_j(k) / w_j(k+2) − φ_k · φ_(k+1) | ≤ χ,    k = 1    (5)
w_A + w_P + w_Q = 1    (6)
w_j ≥ 0,    j ∈ {A, P, Q}    (7)
The optimal solution w* = (w_A*, w_P*, w_Q*) is obtained at χ* = 0 when the elicited ratios satisfy exact transitivity, and at χ* > 0 otherwise. The scalar χ* constitutes the degree of full consistency (DFC) of the elicited judgements. To illustrate the applicability of FUCOM, the following example is used. Consider a context in which availability is judged most important, followed by performance, followed by quality. The decision-maker elicits:
φ_1 = w_A / w_P = 2,    φ_2 = w_P / w_Q = 3
The transitivity condition requires w_A / w_Q = φ_1 · φ_2 = 6. Substituting into the normalisation constraint w_A + w_P + w_Q = 1 and imposing exact consistency (χ = 0):
w_A = 2 w_P,  w_P = 3 w_Q  ⇒  w_A = 6 w_Q;    6 w_Q + 3 w_Q + w_Q = 1  ⇒  w_Q = 0.1
The resulting weight vector is:
w* = (w_A*, w_P*, w_Q*) = (0.600, 0.300, 0.100)
This solution is unique and obtained without iteration. The DFC of zero confirms that the elicited ratios {φ_1 = 2, φ_2 = 3} are mutually consistent and no approximation is required.
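The worked example can be sketched in a few lines of code. This is a minimal sketch valid only under exact consistency (χ* = 0); the function name and the three-criteria closed form are choices made here, not part of FUCOM's general optimisation model.

```python
# Closed-form FUCOM weights for a ranked triple of criteria, valid only
# when the elicited ratios are exactly transitive (chi* = 0).
# The function name and three-criteria restriction are illustrative choices.

def fucom_exact(phi_1: float, phi_2: float):
    """Weights for ranking C(1) > C(2) > C(3), given
    phi_1 = w(1)/w(2) and phi_2 = w(2)/w(3)."""
    w3 = 1.0 / (phi_1 * phi_2 + phi_2 + 1.0)  # lowest-ranked weight
    w2 = phi_2 * w3
    w1 = phi_1 * w2
    return w1, w2, w3  # sums to 1 by construction

w_a, w_p, w_q = fucom_exact(phi_1=2.0, phi_2=3.0)
print(round(w_a, 3), round(w_p, 3), round(w_q, 3))  # 0.6 0.3 0.1
```

The same closed form reproduces the availability-dominant vector used later in Section 4.1: fucom_exact(3.0, 2.0) gives approximately (0.667, 0.222, 0.111). When the elicited ratios are not exactly transitive (χ* > 0), the full FUCOM optimisation model must be solved instead.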

3.3. Module 2 - TOPSIS Scoring

Let I = {1, …, m} denote the set of equipment assets under evaluation. Each asset i is characterised by its observed OEE component values (A_i, P_i, Q_i), where each component is defined on [0, 1] in accordance with the standard OEE measurement framework. The decision matrix X is constructed as:
X = [x_ij]_(m×3),    row i = (A_i, P_i, Q_i),    i = 1, …, m    (8)
Step T1 - Augmented decision matrix. Two reference rows are appended to X: a positive ideal solution A^+ = (1, 1, 1) and a negative ideal solution A^− = (0, 0, 0). The augmented matrix X̃ of dimension (m + 2) × 3 is:
X̃ = [ X ; A^+ ; A^− ]    (9)
Step T2 - Normalisation. Each element of X̃ is normalised by the Euclidean norm of its column:
r_ij = x̃_ij / ( Σ_{k=1}^{m+2} x̃_kj² )^(1/2),    j ∈ {A, P, Q}    (10)
Step T3 - Weighted normalised matrix. The FUCOM weight vector w* = (w_A*, w_P*, w_Q*) from Module 1 is applied uniformly across all rows of R, including the ideal poles:
v_ij = w_j* · r_ij    (11)
Step T4 - Distances to ideal solutions. For each asset i:
D_i^+ = ( Σ_j (v_ij − v_j^+)² )^(1/2)    (12)
D_i^− = ( Σ_j (v_ij − v_j^−)² )^(1/2)    (13)
where v_j^+ and v_j^− denote the weighted normalised values of A^+ and A^− respectively.
Step T5 - Adaptive OEE score.
C_i = D_i^− / (D_i^+ + D_i^−),    C_i ∈ [0, 1]    (14)
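Steps T1 to T5 can be sketched in a compact implementation. The asset data and the availability-dominant weight vector below are illustrative assumptions chosen for demonstration, not values drawn from a real installation.

```python
import math

# Compact sketch of TOPSIS scoring with fixed ideal poles (Steps T1-T5).
# Asset data and the weight vector are illustrative assumptions.

def adaptive_oee(assets, weights):
    """Return the closeness coefficient C_i for each (A_i, P_i, Q_i) asset."""
    # Step T1: append the positive (1,1,1) and negative (0,0,0) ideal poles.
    rows = [list(a) for a in assets] + [[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]]
    # Step T2: column-wise Euclidean normalisation.
    norms = [math.sqrt(sum(r[j] ** 2 for r in rows)) for j in range(3)]
    normed = [[r[j] / norms[j] for j in range(3)] for r in rows]
    # Step T3: apply the FUCOM weights to every row, including the poles.
    v = [[weights[j] * r[j] for j in range(3)] for r in normed]
    v_pos, v_neg = v[-2], v[-1]
    # Steps T4-T5: distances to the poles, then the closeness coefficient.
    scores = []
    for row in v[:-2]:
        d_pos = math.sqrt(sum((row[j] - v_pos[j]) ** 2 for j in range(3)))
        d_neg = math.sqrt(sum((row[j] - v_neg[j]) ** 2 for j in range(3)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Two hypothetical assets under an availability-dominant weight vector.
scores = adaptive_oee([(0.70, 0.95, 0.99), (0.90, 0.80, 0.92)],
                      (0.667, 0.222, 0.111))
```

Under these weights the higher-availability asset clearly dominates the ranking (scores of roughly 0.72 and 0.89), even though its classical OEE advantage over the other asset is marginal (about 0.662 versus 0.658), anticipating the divergence behaviour examined in Section 4.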

4. Adaptive OEE Framework Validation

To evaluate the performance of the proposed Adaptive OEE framework and demonstrate its behaviour relative to classical OEE, three illustrative case studies are presented. Each case study represents a distinct operational context characterised by a different strategic priority, yielding a different FUCOM weight vector. The three contexts are designed to span the space of plausible weight configurations and to illustrate how identical equipment data can lead to materially different effectiveness assessments depending on the operational priorities of the decision-maker.

4.1. Case Study 1 - Availability-Dominant Context

The first context represents a capital-intensive production environment in which unplanned equipment downtime constitutes the primary source of operational loss. In such settings, typical of continuous process industries, heavy manufacturing, or asset-critical infrastructure, the financial and operational consequences of availability failures are disproportionately severe relative to marginal losses in performance rate or product quality. Maintenance strategy is oriented towards maximising equipment uptime, and the organisation’s key performance targets are expressed primarily in terms of availability metrics.
Applying the FUCOM elicitation procedure, the decision-maker establishes the priority ranking A > P > Q and provides comparative priority ratios reflecting a strong dominance of availability over the remaining components. The resulting weight vector assigns availability a weight of 0.667, acknowledging its dominant role as the primary driver of operational effectiveness. A unit reduction in availability produces a loss contribution six times greater than an equivalent reduction in quality and three times greater than an equivalent reduction in performance. Performance is assigned a weight of 0.222, acknowledging its secondary relevance in environments where throughput is constrained by equipment uptime rather than speed. Quality is assigned the lowest weight of 0.111, consistent with settings in which product conformance is maintained through process controls that operate largely independently of equipment scheduling decisions.

4.2. Case Study 2 - Performance-Dominant Context

The second context represents a high-throughput discrete manufacturing environment operating under tight delivery schedules and sustained customer demand pressure. In such settings, characteristic of automotive supply chains, electronics assembly, or fast-moving consumer goods production, the primary operational concern is the rate at which equipment converts available running time into finished output. Equipment availability is managed through structured preventive maintenance programmes and, while necessary, does not represent the binding constraint on operational performance. Product quality is maintained through embedded process controls and is not the principal source of competitive differentiation.
Applying the FUCOM elicitation procedure, the decision-maker establishes the priority ranking P ≻ A ≻ Q and provides comparative priority ratios reflecting the centrality of throughput in the organisation’s operational objectives. The resulting weight vector assigns performance rate a weight of 0.600, reflecting its role as the dominant determinant of operational output. A unit reduction in performance produces a loss contribution twice as large as an equivalent reduction in availability and six times larger than an equivalent reduction in quality. Availability is assigned a weight of 0.300, acknowledging its necessary but non-dominant role in throughput-oriented environments where downtime is largely planned and predictable. Quality is assigned the lowest weight of 0.100, consistent with settings in which first-pass yield is structurally high and quality losses represent a residual rather than a principal source of effectiveness deterioration.

4.3. Case Study 3 - Quality-Dominant Context

The third context represents a regulated manufacturing environment in which product conformance is the overriding operational requirement. In such settings, exemplified by pharmaceutical manufacturing, aerospace component production, or medical device assembly, quality failures carry regulatory, safety, and reputational consequences that far outweigh the costs associated with reduced throughput or episodic downtime. Equipment availability and performance rate are maintained at satisfactory levels through established maintenance and operational programmes, but any deterioration in quality rate triggers immediate corrective action, potential batch rejection, and regulatory notification obligations.
Applying the FUCOM elicitation procedure, the decision-maker establishes the priority ranking Q ≻ A ≻ P and provides comparative priority ratios reflecting the non-negotiable primacy of product conformance in the organisation’s operational context. The resulting weight vector assigns quality a weight of 0.600, reflecting its dominant role in regulated production environments. A unit reduction in quality produces a loss contribution twice as large as an equivalent reduction in availability and six times larger than an equivalent reduction in performance. Availability is assigned a weight of 0.300, acknowledging that equipment uptime remains operationally necessary but is subordinate to conformance requirements; in regulated environments, equipment is frequently taken offline voluntarily to preserve quality rather than run at the risk of non-conforming output. Performance is assigned the lowest weight of 0.100, consistent with settings in which operating speed is deliberately conservative to minimise process variability and protect product quality.
The weight vectors derived through the FUCOM elicitation procedure for each of the three operational contexts are summarised in Table 1. The classical OEE reference row is included to make explicit the uniform weight assumption implicit in the multiplicative formula, against which the context-specific weight vectors are contrasted.
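Under full consistency (DFC = 0), the FUCOM weights above follow in closed form from the priority ranking and the adjacent comparative priority ratios: each criterion's weight relative to the last-ranked one is the cumulative product of the ratios, normalised to sum to one. The sketch below illustrates this derivation; the function name `fucom_weights` and its list-based interface are illustrative assumptions, not the paper's implementation.

```python
# FUCOM weight derivation under full consistency (DFC = 0).
# ratios[j] is the comparative priority of the j-th ranked criterion
# over the (j+1)-th; fully consistent weights follow from cumulative
# products of these ratios, normalised to sum to one.
def fucom_weights(ratios):
    rel = [1.0]                      # relative weight of the last-ranked criterion
    for r in reversed(ratios):
        rel.append(rel[-1] * r)      # accumulate relative weights upwards
    rel.reverse()                    # restore priority order
    total = sum(rel)
    return [w / total for w in rel]

# CS1: A > P > Q with w_A/w_P = 3 and w_P/w_Q = 2 (hence w_A/w_Q = 6).
print([round(w, 3) for w in fucom_weights([3, 2])])  # [0.667, 0.222, 0.111]
# CS2: P > A > Q with w_P/w_A = 2 and w_A/w_Q = 3 (hence w_P/w_Q = 6).
print([round(w, 3) for w in fucom_weights([2, 3])])  # [0.6, 0.3, 0.1]
```

The CS3 vector (0.600, 0.300, 0.100) over (Q, A, P) arises from the same ratios as CS2 applied to the ranking Q ≻ A ≻ P; only n − 1 = 2 comparisons per context are required, which is the elicitation economy the framework attributes to FUCOM.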

4.4. Decision Matrix Definition and Classical OEE Baseline

The three assets are evaluated on the same observed OEE component values across all case studies. The raw decision matrix X is common to the three contexts; what varies across case studies is exclusively the FUCOM weight vector applied in Module 2.
The observed component values are defined as follows. Asset 1 presents high availability but moderate performance and quality, representing equipment that is consistently available but not fully optimised in speed or output conformance. Asset 2 presents moderate availability, high performance, and high quality, representing equipment that runs efficiently when available but is subject to more frequent downtime. Asset 3 presents moderate values across all three components, representing a balanced but unexceptional operational profile. The decision matrix X is:
X =
  [ 0.95   0.62   0.72 ]
  [ 0.62   0.95   0.74 ]
  [ 0.74   0.65   0.95 ]
(rows: Assets 1–3; columns: A, P, Q)
The corresponding classical OEE scores, computed as OEE_i = A_i × P_i × Q_i, are:
OEE_1 = 0.95 × 0.62 × 0.72 = 0.424
OEE_2 = 0.62 × 0.95 × 0.74 = 0.436
OEE_3 = 0.74 × 0.65 × 0.95 = 0.457
The ranking OEE_3 > OEE_2 > OEE_1, i.e. Asset 3 ≻ Asset 2 ≻ Asset 1, will serve as the reference against which the Adaptive OEE rankings under the three context-specific weight vectors are compared in Section 4.5. The asset profiles are deliberately constructed so that the classical ranking is not preserved under any of the three weight configurations, thereby illustrating the diagnostic value of the proposed framework. The augmented decision matrix X̃, including the fixed ideal poles A⁺ = (1, 1, 1) and A⁻ = (0, 0, 0), is identical across the three case studies:
X̃ =
  [ 0.95   0.62   0.72 ]
  [ 0.62   0.95   0.74 ]
  [ 0.74   0.65   0.95 ]
  [ 1.00   1.00   1.00 ]
  [ 0.00   0.00   0.00 ]
(rows: Assets 1–3, A⁺, A⁻; columns: A, P, Q)
The TOPSIS scoring procedure is applied to X̃ independently for each case study using the respective FUCOM weight vector w_k^*, k = 1, 2, 3. The full computation is developed in Section 4.5.
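As a cross-check, the classical multiplicative baseline can be reproduced in a few lines. Only some component values are quoted verbatim in the text; the remaining entries, in particular Asset 1's full row, are inferred here to match the narrative profiles and the published scores, and should be read as assumptions flagged in the comments.

```python
# Classical OEE baseline: multiplicative aggregation OEE_i = A_i * P_i * Q_i.
# Asset 2's row is quoted in the text; Asset 3's P and Q are quoted and its
# A is implied by OEE_3 = 0.457; Asset 1's row is inferred (assumption) to
# match its "high A, moderate P and Q" profile and the published 0.424.
X = {
    "Asset 1": (0.95, 0.62, 0.72),
    "Asset 2": (0.62, 0.95, 0.74),
    "Asset 3": (0.74, 0.65, 0.95),
}

oee = {name: round(a * p * q, 3) for name, (a, p, q) in X.items()}
print(oee)  # {'Asset 1': 0.424, 'Asset 2': 0.436, 'Asset 3': 0.457}

ranking = sorted(oee, key=oee.get, reverse=True)
print(" > ".join(ranking))  # Asset 3 > Asset 2 > Asset 1
```

These values reproduce the reference ranking Asset 3 ≻ Asset 2 ≻ Asset 1 against which the Adaptive rankings are compared.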

4.5. Technique for Order of Preference by Similarity to Ideal Solution

The Euclidean column norms of X ˜ are:
‖x_A‖ = 1.684, ‖x_P‖ = 1.646, ‖x_Q‖ = 1.723
The normalised matrix R, with entries r_ij = x̃_ij / ‖x_j‖, is shared across all case studies; what varies is the weighted matrix V, with entries v_ij = w_j · r_ij, applied in each context.
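Module 2 can be sketched end-to-end as follows: the routine augments the asset rows with the fixed poles, normalises each column by its Euclidean norm, applies the FUCOM weights, and scores each asset by its relative closeness to the positive pole. The asset rows are the component values used throughout Section 4 (Asset 1's row is an inference from the published scores, not a value quoted in the text), and the helper name `topsis_scores` is this sketch's assumption.

```python
import math

# TOPSIS scoring against fixed ideal poles: augment with A+ = (1, 1, 1)
# and A- = (0, 0, 0), column-normalise by Euclidean norm, weight, then
# score each asset by relative closeness to the positive pole.
# Asset 1's row is inferred from the published scores (assumption).
ASSETS = [(0.95, 0.62, 0.72), (0.62, 0.95, 0.74), (0.74, 0.65, 0.95)]
POLES = [(1.0, 1.0, 1.0), (0.0, 0.0, 0.0)]

def topsis_scores(assets, weights):
    rows = assets + POLES                        # augmented matrix X~
    norms = [math.sqrt(sum(r[j] ** 2 for r in rows)) for j in range(len(weights))]
    v = [[w * x / n for x, w, n in zip(r, weights, norms)] for r in rows]
    v_pos, v_neg = v[-2], v[-1]                  # weighted ideal poles
    cc = []
    for row in v[:-2]:
        d_pos = math.dist(row, v_pos)            # distance to positive ideal
        d_neg = math.dist(row, v_neg)            # distance to negative ideal
        cc.append(d_neg / (d_pos + d_neg))       # closeness coefficient C_i
    return cc

# CS1 weight vector (availability-dominant context).
print(["%.3f" % c for c in topsis_scores(ASSETS, (0.667, 0.222, 0.111))])
# ['0.870', '0.650', '0.734']
```

Running the same function with (0.300, 0.600, 0.100) and (0.300, 0.100, 0.600) reproduces the CS2 and CS3 columns of Table 2 to three decimals, including the near-tie between Assets 1 and 3 under CS2.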

4.5.1. Case Study 1 - Availability-Dominant Context

Applying w_1^* = (0.667, 0.222, 0.111), the weighted ideal poles and asset distances are:
V⁺ = (0.396, 0.135, 0.064), V⁻ = (0.000, 0.000, 0.000)
d_1⁺ = 0.058, d_1⁻ = 0.388
d_2⁺ = 0.152, d_2⁻ = 0.281
d_3⁺ = 0.113, d_3⁻ = 0.312
Adaptive OEE ranking under CS1:
C_1 = 0.870 > C_3 = 0.734 > C_2 = 0.650, i.e. Asset 1 ≻ Asset 3 ≻ Asset 2
The high availability weight (0.667) elevates Asset 1 from last position in the classical ranking to first in the Adaptive ranking. Asset 2, despite its high performance, is penalised by its low availability (0.62), which constitutes the dominant loss dimension in this context. The Divergence Index is:
DI_CS1 = (|3 − 1| + |2 − 3| + |1 − 2|) / 3 = 4/3 ≈ 1.333
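The Divergence Index arithmetic can be verified with a short helper. The mean-absolute-rank-displacement form below is inferred from the reported DI values (4/3 and 2/3 over three assets); the paper's formal definition in Section 3 is assumed to coincide, and the helper name is illustrative.

```python
# Divergence Index (assumed form): mean absolute displacement between
# each asset's rank under classical OEE and under Adaptive OEE.
def divergence_index(classical_ranks, adaptive_ranks):
    n = len(classical_ranks)
    return sum(abs(c - a) for c, a in zip(classical_ranks, adaptive_ranks)) / n

# CS1: classical ranks (3, 2, 1) for Assets 1-3 versus Adaptive ranks (1, 3, 2).
print(round(divergence_index((3, 2, 1), (1, 3, 2)), 3))  # 1.333
```

The same helper yields 1.333 for CS2 (Adaptive ranks 2, 1, 3) and 0.667 for CS3 (Adaptive ranks 2, 3, 1), matching the values reported in Sections 4.5.2 and 4.5.3.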

4.5.2. Case Study 2 - Performance-Dominant Context

Applying w_2^* = (0.300, 0.600, 0.100), the weighted ideal poles and asset distances are:
V⁺ = (0.178, 0.365, 0.058), V⁻ = (0.000, 0.000, 0.000)
d_1⁺ = 0.140, d_1⁻ = 0.285
d_2⁺ = 0.072, d_2⁻ = 0.366
d_3⁺ = 0.136, d_3⁻ = 0.277
Adaptive OEE ranking under CS2:
C_2 = 0.836 > C_1 = 0.671 > C_3 = 0.671, i.e. Asset 2 ≻ Asset 1 ≻ Asset 3 (C_1 and C_3 separate beyond the third decimal place)
The high performance weight (0.600) promotes Asset 2 to first position, consistent with its very high performance rate (0.95). Asset 3, despite leading the classical ranking, falls to last position because its low performance rate (0.65) is the dominant loss dimension in this context. The Divergence Index is:
DI_CS2 = (|3 − 2| + |2 − 1| + |1 − 3|) / 3 = 4/3 ≈ 1.333

4.5.3. Case Study 3 - Quality-Dominant Context

Applying w_3^* = (0.300, 0.100, 0.600), the weighted ideal poles and asset distances are:
V⁺ = (0.178, 0.061, 0.348), V⁻ = (0.000, 0.000, 0.000)
d_1⁺ = 0.101, d_1⁻ = 0.305
d_2⁺ = 0.113, d_2⁻ = 0.286
d_3⁺ = 0.054, d_3⁻ = 0.358
Adaptive OEE ranking under CS3:
C_3 = 0.869 > C_1 = 0.752 > C_2 = 0.717, i.e. Asset 3 ≻ Asset 1 ≻ Asset 2
The high quality weight (0.600) confirms Asset 3 in first position, consistent with its very high quality rate (0.95). However, Asset 2 falls to last position despite its high performance, because its moderate quality (0.74) is penalised in a quality-dominant context. The Divergence Index is:
DI_CS3 = (|3 − 2| + |2 − 3| + |1 − 1|) / 3 = 2/3 ≈ 0.667
The Adaptive OEE scores and rankings across the three case studies, together with the classical OEE reference, are summarised in Table 2.
The results demonstrate that the classical OEE ranking (Asset 3 ≻ Asset 2 ≻ Asset 1) is not preserved under any of the three context-specific weight configurations. In CS1, availability dominance inverts the ranking completely. In CS2, performance dominance promotes Asset 2 and demotes Asset 3. In CS3, quality dominance preserves Asset 3 in first position but inverts the relative ordering of Assets 1 and 2. The non-zero Divergence Index in all three cases confirms that equal weighting, as implicit in the classical OEE formula, produces maintenance prioritisation decisions that are inconsistent with the operational context in each of the three scenarios examined.

5. Discussion

The results obtained across the three case studies provide empirical support for the propositions advanced in Section 2 and directly address the three research questions formulated in Section 1.
Regarding RQ1, the divergence between Adaptive OEE rankings and the classical OEE ranking in all three case studies confirms that equal weighting constitutes a structurally consequential assumption rather than a neutral default. The classical OEE ranking (Asset 3 ≻ Asset 2 ≻ Asset 1) is not preserved under any of the three FUCOM weight configurations. This result is not attributable to data variability (the input matrix X is identical across all cases) but exclusively to the weight structure applied. The equal-weighting assumption implicit in the classical formula thus produces maintenance prioritisation decisions that are inconsistent with operational context in all three scenarios examined, confirming the analytical argument formalised in Section 2.1.
Regarding RQ2, the FUCOM elicitation procedure produced fully consistent weight vectors (DFC = 0) in all three case studies from only two pairwise comparisons per context, satisfying simultaneously the minimum-comparison, guaranteed-consistency, and context-sensitivity requirements identified as absent from existing OEE extensions in Section 2.2. The integration of FUCOM with TOPSIS yields a scoring model that is both formally rigorous and operationally tractable, addressing the gap identified in RQ2.
Regarding RQ3, rank inversions are observed in all three case studies, with Divergence Index values of 1.333, 1.333, and 0.667 for CS1, CS2, and CS3 respectively. The highest divergence occurs in the availability-dominant and performance-dominant contexts, where a single component carries a weight of 0.600 or above, amplifying rank displacements relative to the equal-weight baseline. In the quality-dominant context, the first-ranked asset under classical OEE retains its position under Adaptive OEE, but the relative ordering of the remaining assets is inverted, yielding a non-zero DI. These results characterise the threshold conditions under which classical OEE produces materially misleading prioritisation: divergence is most severe when the operational context assigns strongly asymmetric importance to one OEE component, precisely the condition that the equal-weighting assumption is structurally unable to accommodate.

6. Conclusions

This paper proposed the Adaptive OEE framework, a novel equipment effectiveness indicator that addresses three structural limitations of classical OEE: the implicit equal weighting of availability, performance and quality; the aggregation distortion arising from the multiplicative formula; and the context insensitivity of the universal 85% benchmark. The framework integrates the Full Consistency Method for context-specific weight elicitation with TOPSIS scoring against fixed ideal poles, producing a closeness coefficient C_i ∈ [0, 1] that is computable for a single asset, comparable across time and contexts, and sensitive to operational priorities.
The three illustrative case studies demonstrated that the classical OEE ranking is not preserved under any of the three FUCOM weight configurations derived for availability-dominant, performance-dominant, and quality-dominant contexts, with Divergence Index values ranging from 0.667 to 1.333. These results confirm that equal weighting is a structurally consequential assumption that produces maintenance prioritisation decisions inconsistent with operational context, and that the proposed framework resolves this limitation in a formally rigorous and operationally tractable manner.
The contributions of this work encompass the formalisation of the equal-weighting limitation as a measurement-theoretic deficiency with demonstrable consequences for maintenance prioritisation, the development of an indicator that replaces multiplicative aggregation with a weighted distance measure while preserving the diagnostic A/P/Q decomposition of classical OEE, and the introduction of the Divergence Index as a quantitative measure of the rank displacement induced by context-insensitive weighting.
Two directions for future research are identified as priorities. Empirical validation of the framework using longitudinal operational data from industrial assets would enable the calibration of DI threshold values and the assessment of FUCOM weight stability across elicitation sessions and decision-makers. The extension of the weight elicitation module to group decision-making settings, in which weights are derived from multiple experts with potentially divergent operational priorities, would further require the integration of consensus mechanisms compatible with the FUCOM consistency framework.

Author Contributions

Conceptualization, V.A., P.M. and A.A.; methodology, V.A.; software, V.A. and P.M.; validation, V.A. and A.A.; formal analysis, A.A. and P.M.; investigation, A.A., P.M. and V.A.; resources, V.A. and P.M.; data curation, A.A.; writing-original draft preparation, V.A.; writing-review and editing, A.A. and P.M.; visualization, V.A. and P.M.; supervision, V.A.; project administration, A.A. and P.M.; funding acquisition, A.A. and V.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Fundação para a Ciência e a Tecnologia in the scope of the project UID/00667/2025 (https://doi.org/10.54499/UID/00667/2025) (UNIDEMI).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All data analysed in this study are contained within the article.

Acknowledgments

The authors acknowledge financial support via national funds from FCT, I.P. (Fundação para a Ciência e a Tecnologia), in the scope of the project UID/00667/2025 (https://doi.org/10.54499/UID/00667/2025) (UNIDEMI).

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Adaptive OEE framework architecture. OEE component measurements (A, P, Q) feed both modules. Module 1 (FUCOM) derives the context-specific weight vector w from expert judgement. Module 2 (TOPSIS) computes the Adaptive OEE score C_i using the weight vector and fixed ideal poles A⁺ = (1, 1, 1) and A⁻ = (0, 0, 0). The two modules are strictly sequential.
Table 1. FUCOM weight vectors by operational context.
Context                        w_A     w_P     w_Q     DFC
CS1: Availability-dominant     0.667   0.222   0.111   0.000
CS2: Performance-dominant      0.300   0.600   0.100   0.000
CS3: Quality-dominant          0.300   0.100   0.600   0.000
Classical OEE (reference)      0.333   0.333   0.333   n.a.
Table 2. Adaptive OEE scores and rankings versus classical OEE across the three case studies.
Asset     Classical OEE     CS1: Availability-dominant   CS2: Performance-dominant   CS3: Quality-dominant
          Score    Rank     C_i      Rank                C_i      Rank               C_i      Rank
Asset 1   0.424    3        0.870    1                   0.671    2                  0.752    2
Asset 2   0.436    2        0.650    3                   0.836    1                  0.717    3
Asset 3   0.457    1        0.734    2                   0.671    3                  0.869    1
DI                          1.333                        1.333                       0.667