Preprint (this version is not peer-reviewed)

PCAT: A Software System for Cross-Product Commonality Analysis in Engineer-to-Order Manufacturing

A peer-reviewed version of this preprint was published in:
Applied Sciences 2026, 16(8), 3771. https://doi.org/10.3390/app16083771

Submitted: 11 March 2026
Posted: 17 March 2026


Abstract
Engineer-to-Order (ETO) manufacturers face persistent cost and complexity challenges driven by product variety, including duplicate components, redundant variants, and inconsistent procurement setups. Although enterprise resource planning (ERP) and product lifecycle management (PLM) systems contain detailed Bills of Materials (BOMs) and procurement records, they typically lack portfolio-wide support for systematic cross-product commonality analysis without substantial manual effort. Prior approaches are either conceptual (e.g., indices and modularity frameworks) or ad hoc in practice, often relying on one-off spreadsheet analyses. This paper introduces the concept of Product Commonality Analysis Tools (PCATs) and develops and evaluates a lightweight PCAT in an action-research collaboration with a European ETO laser manufacturer. The PCAT operates on exported enterprise data to provide interactive portfolio-level views of component reuse and cross-product consistency. Its usefulness is evaluated through scenario-based think-aloud usability sessions and a functional comparison against Excel workarounds, standard ERP/PLM reporting, and vendor customizations. The results indicate that a lightweight PCAT can be integrated into existing ERP/PLM workflows with minimal disruption and can reduce the effort required to prepare reusable portfolio views for engineering and procurement reviews.
Subject: Engineering – Other

1. Introduction

Engineer-to-Order (ETO) companies face persistent challenges in managing costs and complexity arising from product variety. While variety is often driven by customer-specific requirements, it is also reinforced by limited reuse governance and fragmented part libraries. When reuse is not actively governed across projects, redundant variants and near-duplicate components tend to accumulate over time, increasing inventory, handling, and engineering effort. In practice, this can be observed in duplicate components that reduce commonality and drive inventory and handling overhead (Trattner et al., 2019), redundant variants that exacerbate product proliferation and raise production and inventory costs (Menezes et al., 2021), and inconsistent procurement setups that add coordination effort through supplier and sourcing diversity (Myrodia et al., 2021).
Research has long shown that higher part commonality can improve operating efficiency and reduce cost (Collier, 1981; Hillier, 2002). Product-architecture choices, particularly the degree of modularity, also influence supplier selection and sourcing configuration, with downstream effects on responsiveness and cost (Chiu and Okudan, 2014; Saeed et al., 2019). Yet, despite decades of conceptual and analytical work, these ideas remain difficult to operationalize in day-to-day engineering and procurement work. In many ETO contexts, analysis of component reuse across products is still performed manually and reactively (Iakymenko et al., 2020).
These issues are typically detected in production, service, or maintenance, for example when an obsolete part is identified during a repair, a supplier issues an end-of-life notice, or an engineer discovers that nominally “equivalent” components differ in interfaces or procurement setup (Romero Rojo et al., 2010). This reactive mode of detection increases cost and exposes companies to life-cycle risks, especially for long-lived products and systems (Collier and Lambert, 2020).
Although enterprise systems such as ERP and PLM contain detailed, structured Bills of Materials (BOMs) and procurement data (Chen et al., 2012; MacCarthy and Pasley, 2021), and standard reporting can at times support identifying obsolescence candidates or duplicate part numbers, these systems often lack dedicated, portfolio-wide functionality for systematic cross-product commonality analysis without substantial data preparation or bespoke reporting (MacCarthy and Pasley, 2021). As a result, companies rely on ad hoc database queries, spreadsheet analyses, or vendor-specific customizations (Shih, 2014; Kristjansdottir et al., 2017), which may yield partial insights but do not provide repeatable portfolio-level views that support engineering and procurement decisions at scale.
The academic literature on product families and modularity provides extensive frameworks for managing variety. Studies on product-family and platform development emphasize commonality as a lever for balancing efficiency with customization (Jiao et al., 2007); indices and metrics quantify component sharing (Thevenot and Simpson, 2007); and architectural models evaluate modularity and its effects on sourcing and production (Fixson, 2005). However, embedded, day-to-day decision tools integrated into PLM/engineering workflows remain limited (Mertens et al., 2023). The gap lies not in understanding the value of commonality, but in operationalizing it using available enterprise data in complex engineering environments. To address this gap, we use the literature to conceptualize Product Commonality Analysis Tools (PCATs) and investigate their value through a case study.
From a research standpoint, the paper makes two contributions. First, it defines the concept of PCATs, which is not yet formally defined in research or practice. Second, it demonstrates through the case study how PCATs can support more data-informed discussions in engineering and procurement by making portfolio-level reuse and consistency information accessible from exported enterprise data.

2. Literature Study

Concepts of Product Commonality

The concept of product commonality has long been central to managing variety in manufacturing. Early studies introduced formal indices to measure component reuse across products and showed that higher commonality can reduce inventory, safety stock, and overall system cost (Jeet and Kutanoglu, 2018). Subsequent research indicates that commonality and standardization influence more than cost: they affect perceived differentiation (Wong et al., 2019), can shorten development time when implemented via platforms (Zhang et al., 2025), and may improve supply resilience by reducing dependence on specialized suppliers (Gebhardt et al., 2022).
A large body of work develops metrics for evaluating commonality, from aggregate indices of shared parts to comparative surveys of index dimensions and valuation/optimization approaches that integrate cost, performance, and lifecycle objectives (Zhang et al., 2019; Hapuwatte et al., 2022). At the same time, several studies highlight trade-offs between commonality and differentiation: while standardization reduces costs, it can limit flexibility in meeting diverse customer requirements (Jonnalagedda and Saranga, 2017). Overall, the literature establishes the theoretical importance of commonality, but much of it remains centered on indices and design frameworks rather than practical, day-to-day tools.

Product Families and Modularity

Another research stream emphasizes modularization and product-family design as practical means to manage commonality. Architectural choices determine the degree of modularity, which affects how easily components can be shared across variants (Lee et al., 2025). Modular design enables reuse of components and subsystems across products, reducing engineering effort and procurement complexity (Roy and Abdul-Nour, 2024). Platform studies show that using common subsystems and interfaces can yield production and inventory efficiencies while reducing lead time and improving responsiveness (Galizia et al., 2020).
Recent reviews underline the persistent variety–commonality trade-off and note that fragmented, uncoordinated methods inhibit adoption, making the balance difficult to operationalize (Gauss et al., 2021). Frameworks linking modularity, commonality, and platform strategies address this tension (Bortolini et al., 2023; Hossain et al., 2023) but they largely remain conceptual and do not automatically translate into operational tools for day-to-day engineering decisions. Complementary work observes that while commonality indices are useful for ex post assessment and benchmarking, they do not by themselves guide in-process design decisions; integrated approaches are needed to translate requirements into actionable commonality specifications during ongoing projects (Simpson et al., 2012; AlGeddawy and ElMaraghy, 2013). In the ETO context, Sohrt et al. (2025) argue that existing architecture methods do not adequately capture the cross-functional and cross-value-chain nature of decision making, while Grønvald et al. (2025) highlight the need for data-driven approaches that connect modularization theory with practical industrial evaluation. This motivates practical, interactive tooling that makes commonality and modularity insights usable in industrial settings.

Industrial Practice and Tool Support

In practice, routine portfolio-level commonality analytics used by engineering, PMO/business process development, and procurement stakeholders remain uncommon, with reviews noting abundant methods but sparse evidence of sustained industrial deployment (Gauss et al., 2021). Enterprise systems (ERP/PLM) store detailed product structures and procurement data, but they are often implemented and used primarily for operational, transactional purposes and tend to produce per-system, per-item, or per-product views (Haapasalo et al., 2019). Portfolio-level cross-product analytics that link technical and commercial structures are therefore not typically available out of the box, particularly when data must be consolidated across systems and product variants (Hannila et al., 2020).
Consequently, firms often rely on business intelligence (BI) and analytics layers (e.g., data warehouses, reporting and online analytical processing (OLAP) tools) to conduct fact-based cross-product analyses such as product-level profitability or portfolio management (Chen et al., 2012; Hannila et al., 2020). In engineering practice, workarounds are common: engineers or procurement managers export BOMs to spreadsheets and perform manual overlap checks (Iakymenko et al., 2020; Kurniadi and Ryu, 2021), or they commission customized reports from system providers, which can be costly and inflexible when requirements change (Al-Assaf et al., 2024). In ETO contexts, where orders may include unique configurations, this limited tool support reinforces reactive handling of commonality issues that surface during production, service, or maintenance rather than at design time (Iakymenko et al., 2020).
A recurring challenge is data fragmentation across PLM/ERP and adjacent systems, which complicates consolidation for cross-product commonality analysis (Terzi et al., 2010). In addition, many methods require non-trivial database or analytics skills, limiting accessibility for engineers who need quick, actionable insights (Hannila et al., 2019). As a result, potential benefits (reducing unnecessary variation, optimizing sourcing, and managing obsolescence) are only partially realized; systematic reviews highlight fragmentation, weak methodological integration, and challenges in transferring commonality and modularity methods into routine industrial practice (Gauss et al., 2021; Mertens et al., 2023).

Synthesis of Literature Streams and Operationalization Gaps

Table 1 synthesizes five literature streams (commonality indices and valuation; trade-off studies; product architecture, modularity, and platforms; industrial practice and enterprise-system tool support; and lifecycle risk and obsolescence-triggered rework) by mapping typical outputs to a concrete operationalization gap observed in day-to-day engineering. The final column links these gaps to PCAT views and functions (e.g., Compare & Filter; Commonality Matrix) that make abstract insights actionable in portfolio-level work.

Literature Discussion and Implications for PCATs

Taken together, the literature offers a strong theoretical foundation for understanding product commonality, supported by decades of work on indices, metrics, and modularization frameworks. However, three gaps remain:
  • Limited operationalization. Much of the work is conceptual (indices/frameworks), and recent studies call for IT-enabled decision-support tooling that engineers can apply directly in practice (e.g., model-based or analytics-enabled decision support) (Greve et al., 2022).
  • Lack of integration with enterprise data. Although ERP/PLM systems hold BOMs and procurement attributes, studies report siloed data and manual reporting; firms often turn to application-level analytics outside ERP/PLM rather than built-in cross-product functionality (Koskinen et al., 2020; Ji and Abdoli, 2023).
  • Reactive rather than proactive usage. In practice, commonality is frequently addressed only after problems arise, not as part of systematic design and procurement processes (Iakymenko et al., 2020; Sierra-Fontalvo et al., 2023).
This gap suggests the need for PCATs that can combine structured data from existing enterprise systems with simple, repeatable visualizations focused on two operations that engineering, PMO/business process development, and procurement stakeholders routinely need: (i) Compare & Filter (multi-product sharedness with attribute slicing and BOM-level/version control) and (ii) Commonality Matrix (presence/quantity consistency across products). In doing so, PCATs can support more proactive, data-informed engineering and procurement discussions.
Figure 1 depicts three persistent gaps that hinder translation of commonality theory into practice: (i) operationalization – indices/frameworks vs. everyday tools; (ii) data integration – fragmented ERP/PLM data vs. portfolio analytics; and (iii) proactivity – reactive issue discovery vs. proactive reuse management. PCATs are positioned as a lightweight bridge addressing each gap with targeted views and routines.
Definition. Drawing on these gaps, we define a Product Commonality Analysis Tool (PCAT) as:
“A PCAT is a task-focused software system that operates on exported enterprise data (e.g., BOMs and item-master records) to provide interactive, repeatable portfolio-level views of component reuse across multiple products and to support engineering and procurement decision-making.”
Terminology note. In this paper, an item refers to a BOM line item/part number. Exact duplicates refer to identical part numbers (optionally including revision) appearing across multiple products; near-duplicates refer to functionally similar components represented by different part numbers. Sharedness denotes how widely an item is reused across a selected product set (e.g., “Used in X of Y products”).
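The sharedness notion defined above can be made concrete with a minimal sketch. The data layout, column order, and names below are illustrative assumptions, not the case company's schema, and the paper's PCAT is a Power BI application rather than a script:

```python
# Minimal sketch of the "Used in X of Y products" sharedness metric.
# BOM lines are (product, item) pairs; items are part numbers.
from collections import defaultdict

def sharedness(bom_lines, selected_products):
    """Map each item to (X, Y): the item is used in X of the Y selected products."""
    products_per_item = defaultdict(set)
    for product, item in bom_lines:
        if product in selected_products:
            products_per_item[item].add(product)
    y = len(selected_products)
    return {item: (len(prods), y) for item, prods in products_per_item.items()}

# Three products; item "A" is an exact duplicate shared by all of them.
bom = [("P1", "A"), ("P1", "B"), ("P2", "A"), ("P2", "C"), ("P3", "A")]
shared = sharedness(bom, {"P1", "P2", "P3"})
# shared["A"] == (3, 3): "Used in 3 of 3 products"
```

Near-duplicates (functionally similar parts under different part numbers) would not be caught by this exact-match grouping; identifying them requires attribute-based comparison on top of such counts.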
Core functions of a PCAT
The core functions of a PCAT include:
  • Portfolio selection and data scoping – selecting a product set, choosing BOM depth, and controlling version sets to define a consistent analytical basis.
  • Cross-product reuse quantification – computing and displaying sharedness metrics such as “Used in X of Y products,” usage percentages, and sharedness bands.
  • Commonality Matrix visualization – presenting item presence and quantity consistency across products in a matrix view and highlighting inconsistencies.
  • Attribute-based slicing and filtering – filtering/segmenting by procurement attributes (e.g., Make-Buy, procurement type, class, lifecycle flags where available).
  • Drill-through to BOM line detail – linking aggregate views to underlying BOM occurrences to support verification and follow-up actions.
  • Export and reporting – exporting tables (CSV/XLSX) and generating printable reports for engineering and procurement reviews.
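To illustrate the Commonality Matrix function listed above, the following sketch records, per item, presence and quantity per product and flags quantity inconsistencies. Data and field names are hypothetical; the actual PCAT renders this as an interactive matrix view, not code:

```python
# Sketch of a Commonality Matrix: per item, record presence and quantity per
# product, then flag items whose quantities differ across the products that
# contain them.

def commonality_matrix(bom_lines, products):
    """bom_lines: (product, item, qty) tuples.
    Returns ({item: {product: qty}}, {items with inconsistent quantities})."""
    matrix = {}
    for product, item, qty in bom_lines:
        if product in products:
            matrix.setdefault(item, {})[product] = qty
    inconsistent = {item for item, row in matrix.items()
                    if len(set(row.values())) > 1}
    return matrix, inconsistent

bom = [("P1", "A", 2), ("P2", "A", 2), ("P3", "A", 4),  # quantity mismatch in P3
       ("P1", "B", 1), ("P2", "B", 1)]                  # "B" absent from P3
matrix, flagged = commonality_matrix(bom, {"P1", "P2", "P3"})
# flagged == {"A"}; the missing "P3" entry in matrix["B"] signals a missing component
```

An item absent from a product's row (like "B" for P3 here) corresponds to the missing-component case the matrix view highlights alongside quantity mismatches.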

3. Methodology

Study Design and Rationale

The value of a PCAT is investigated through an action-research study with a medium-sized European laser manufacturer, co-developing a lightweight Product Commonality Analysis Tool (PCAT) and evaluating it in day-to-day engineering and procurement work. We adopt an action-research (AR) approach because it suits collaborative, in-situ interventions with practitioners: AR structures the study into iterative cycles of diagnosis, planning, action, and evaluation/reflection (Ollila and Yström, 2020; Kantola et al., 2022), carried out with participating engineering and related stakeholders (including PMO/business process development and procurement). The AR study includes two main cycles:
  • AR cycle 1: Development of PCAT – focusing on understanding current challenges, translating them into requirements, and developing a workable prototype.
  • AR cycle 2: Implementation and test of PCAT – focusing on deploying the prototype in the company, comparing it to existing practices, and evaluating its usefulness in realistic scenarios.
Practitioners involved across the cycles included R&D/Engineering, PMO, and procurement participants, who contributed by describing current practices (diagnosis), co-defining tasks/requirements (planning), validating prototype behaviour through walkthroughs and scenario-based think-aloud usability sessions (action), and providing feedback on outputs and usability (evaluation/reflection).

Empirical Setting

The firm was selected as an ETO manufacturer of complex laser systems with long product lifecycles and high variety, where duplicate/near-duplicate items and heterogeneous procurement setups were repeatedly flagged by PMO and procurement participants as pain points. The company sought portfolio-level visibility across multiple released products to support reviews and to reduce manual spreadsheet work. Table 2 summarizes the empirical setting and disclosure constraints.

Action-Research Cycles and Stages

The study followed an action-research (AR) process with two cycles: (1) development of the PCAT prototype and (2) implementation and test in the company. Both cycles followed the canonical AR stages of diagnosis, planning, action, and evaluation/reflection. Table 3 summarizes Cycle 1 and Table 4 summarizes Cycle 2, including participant activities, evidence captured, and outputs used in later stages.
Whereas Table 3 summarizes prototype development (Cycle 1), Table 4 summarizes implementation and evaluation (Cycle 2), combining scenario-based user sessions with a functional comparison to Excel and standard ERP/PLM reporting and collecting verifiable artefacts (bookmarked views, exports, and PDFs).

Data Collection

Data were collected across the two AR cycles using interviews/workshops, work-practice artefacts, enterprise exports, and evaluation session outputs.
AR cycle 1 (development). We conducted exploratory semi-structured interviews and informal workshops with practitioners involved in cross-product analysis in the case company (R&D/Engineering, PMO/business process development and a procurement engineer). Cycle 1 involved exploratory sessions with three R&D/Engineering participants, one PMO/business process development participant and one procurement engineer (counts reported at role level for confidentiality). Sessions focused on (i) current cross-product analysis practices, (ii) recurring pain points (duplicates, inconsistent procurement setups, and manual BOM comparison effort), and (iii) requirements for a lightweight, export-based analysis tool. Notes were taken during all sessions; where permitted by the company, sessions were audio-recorded to support accurate reconstruction of requirements and task descriptions. In addition, we collected work-practice artefacts including the spreadsheets, BOM printouts, and ad-hoc query outputs used in current practice. During iterative prototype walkthroughs, practitioners executed the emerging task catalogue (T1–T9) and provided feedback on terminology, layout, and missing functionality.
AR cycle 2 (implementation and test). We conducted five scenario-based usability sessions (≈30 minutes each) using a guided walkthrough and a task-based think-aloud protocol. Each session consisted of approximately 10 minutes of tool introduction and orientation, followed by approximately 20 minutes of task execution and verbalized reasoning while completing tasks T1–T7 (and T9 for export/print). Observation notes were taken during each session, and session artefacts were collected as evidence of task completion and usability issues (bookmarked views, CSV/XLSX exports, and PDF printouts).
Data sources collected. Data were collected in the following forms:
  • Enterprise data: multi-level BOM exports (first level and full structure), item-master attributes (Class, Make-Buy, Procurement Type), and version flags (Newest / Newest-approved).
  • Work-practice materials: representative spreadsheets and ERP/PLM printouts used for cross-product checks.
  • Session artefacts: bookmarked PCAT views, CSV/XLSX exports, and PDF printouts produced during evaluation sessions.
Enterprise exports provided the basis for computing sharedness metrics and building the two PCAT views; legacy spreadsheets/printouts established the baseline workflows for comparison; and session artefacts documented how users executed tasks T1–T7 and which outputs (tables, PDFs) were produced for follow-up action.
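The version flags noted above (Newest / Newest-approved) determine which revision of each item enters the analysis. A minimal sketch of such version-set scoping follows; the field names, the boolean approval flag, and sortable integer revisions are assumptions about the export format, not the company's actual schema:

```python
# Sketch of version-set scoping: per item, keep either the newest revision
# or the newest approved revision.

def select_version_set(item_versions, approved_only=False):
    """item_versions: (item, revision, approved) tuples with sortable revisions.
    Returns {item: chosen_revision} for the selected version set."""
    chosen = {}
    for item, rev, approved in item_versions:
        if approved_only and not approved:
            continue  # "Newest-approved" ignores unapproved revisions
        if item not in chosen or rev > chosen[item]:
            chosen[item] = rev
    return chosen

versions = [("A", 1, True), ("A", 2, False), ("B", 1, True)]
newest = select_version_set(versions)                        # {"A": 2, "B": 1}
approved = select_version_set(versions, approved_only=True)  # {"A": 1, "B": 1}
```

Fixing the version set before comparison keeps the analytical basis consistent across products, which is why the paper treats it as part of data scoping rather than filtering.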
The R&D/Engineering participants included two Senior Optical Engineers and one Engineering Manager (Products & ATE). The PMO/Business Process Development participant was a Senior R&D Process Engineering Manager, and the fifth participant was a Procurement Engineer. To preserve confidentiality, participants are reported as P1–P5 by role group.
Table 5 summarizes the participant roles and session characteristics.

Data Analysis

We organized the evaluation around four evaluation foci (EF1–EF4), which Table 6 maps to the corresponding data sources, analysis operations, and outputs, and analyzed the study materials in three steps. First, we derived and refined the task catalogue (T1–T9) from work artefacts (Excel sheets, ERP/PLM printouts, and ad-hoc query outputs) through iterative coding and practitioner review (Ollila and Yström, 2020; Kantola et al., 2022). Second, we performed deductive comparative functionality coding by assessing how each task (T1–T9) is supported in PCAT versus Excel versus standard ERP/PLM reporting, using a Yes/Partly/No scheme with short justification notes. Third, we applied thematic content analysis to session artefacts and observation notes, combining deductive categories aligned with EF1–EF4 (Table 6) and inductive sub-themes derived from observed examples.

4. AR Cycle 1: Development of PCAT

Diagnosis

In the diagnosis stage, we documented how cross-product analysis was performed in practice and where it broke down; the main workflow steps, actors, and artefacts identified are summarized in Table 3 (Action-research cycle 1: Development of PCAT). Engineering, PMO/business process development and procurement participants described a workflow dominated by ad-hoc Excel exports and manual comparisons of printed BOMs. Duplicate parts and inconsistent procurement setups were hard to detect systematically, and existing ERP/PLM reports were largely per-product and not designed for portfolio-level comparison. According to all interviewees, the firm lacked a simple, repeatable way to obtain portfolio-level commonality analytics from existing enterprise data, resulting in reactive, experience-driven decisions.

Planning

The resulting requirements were captured as (i) a task catalogue (T1–T9) operationalized in the prototype (Table 7) and (ii) a consolidated requirements and scope summary in the Cycle 1 planning record (Table 3). We agreed that the tool should be read-only, operate on exported data, and avoid modifying the ERP/PLM systems.
Power BI was selected to enable rapid iteration. Core requirements included (i) portfolio selection and data scoping (BOM depth and version set), (ii) a sortable sharedness list with metrics such as “Used in X of Y”, (iii) a Commonality Matrix that highlights presence and quantity consistency across products, (iv) drill-through to BOM line detail, (v) slicing by procurement attributes (Procurement Type, Make-Buy, Class), and (vi) export/print for follow-up work.

Action

Based on the planning stage, a PCAT was developed: a lightweight software system for cross-product analysis, implemented in this study as a read-only Power BI application. The application refreshes from ERP/PLM exports (multi-level BOMs and item-master attributes) and allows engineers, PMO/business process development, and procurement participants to compare a selected set of released products in two complementary views: Compare & Filter and Commonality Matrix.
A typical analysis starts by (i) selecting the products to compare, (ii) choosing the BOM depth (first-level items or full BOM structure), (iii) selecting the version set (Newest or Newest-approved), and (iv) optionally slicing the results by procurement-related attributes (Procurement Type, Make-Buy, and Class). Results can be sorted, exported (CSV/XLSX), or printed to PDF for follow-up work and review meetings.
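The scoping steps just described can be sketched as a single filtering-and-ranking pass. This is a hypothetical illustration: all names and the data layout are assumptions, the version-set step (iii) is omitted for brevity, and the deployed tool implements these operations as interactive Power BI slicers rather than code:

```python
# Sketch of the analysis flow: scope by product set and BOM depth, slice by a
# procurement attribute, then rank items by sharedness.

def analyze(bom_lines, item_attrs, products, max_level=None, make_buy=None):
    """bom_lines: (product, item, bom_level) tuples; item_attrs: {item: {...}}.
    Returns (item, X, Y) triples sorted by descending sharedness."""
    counts = {}
    for product, item, level in bom_lines:
        if product not in products:
            continue                      # (i) product selection
        if max_level is not None and level > max_level:
            continue                      # (ii) BOM depth: first level vs full
        if make_buy is not None and item_attrs.get(item, {}).get("MakeBuy") != make_buy:
            continue                      # (iv) attribute slicing
        counts.setdefault(item, set()).add(product)
    ranked = sorted(counts.items(), key=lambda kv: -len(kv[1]))
    return [(item, len(prods), len(products)) for item, prods in ranked]

bom = [("P1", "A", 1), ("P2", "A", 1), ("P1", "B", 2), ("P2", "C", 1)]
attrs = {"A": {"MakeBuy": "Buy"}, "B": {"MakeBuy": "Make"}, "C": {"MakeBuy": "Buy"}}
result = analyze(bom, attrs, {"P1", "P2"}, max_level=1, make_buy="Buy")
# result == [("A", 2, 2), ("C", 1, 2)]: first-level Buy items, ranked by reuse
```

Sorting by descending sharedness corresponds to the ranked list in the Compare & Filter view, from which users drill through to BOM line detail.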
Figure 2, Figure 3 and Figure 4 illustrate the two views and an example item-level drill-through.
In the Compare & Filter view (Figure 2), items are ranked by how often they occur across the selected products and can be narrowed down using procurement-related slicers. The Commonality Matrix (Figure 3) complements this by showing, per item, whether it is present across products and whether quantities are consistent. Drill-through (Figure 4) links aggregated metrics to the underlying product occurrences to support verification and follow-up actions.

Evaluation and Reflection

To evaluate the developed PCAT in Cycle 1, we conducted a structured walkthrough of the prototype with five practitioners; the same five practitioners later participated in Cycle 2 (Table 5). Participants executed the task catalogue (T1–T9) using realistic product selections and commented on terminology, layout, and interpretability of the visual encodings (especially the matrix legend). The walkthrough produced concrete improvement actions (e.g., clearer sharedness labels such as “Used in X of Y”, explicit BOM-level and version toggles, and standard filter presets for recurring review scenarios). The resulting refinements and the short user manual were then used as inputs to cycle 2, where the tool was tested through scenario-based usability sessions.
Table 7 summarizes the task catalogue implemented in PCAT, the concrete user operations used to execute each task in Power BI, and the resulting outputs; it also serves as the backbone for the evaluation scenarios. In the user tests, we focused on tasks T1–T7 (and T9 for export/print), because together they cover the end-to-end workflow from selecting a product set to producing a documented comparison result (exported tables and PDF printouts).
Table 8 summarizes the operational definitions used in the prototype and the corresponding data fields and outputs.

5. AR Cycle 2: Implementation and Test of PCAT

Diagnosis

We define pain scenarios as concrete, recurring situations in which current portfolio/BOM handling causes avoidable effort, errors, or delays. We elicited these scenarios in short meetings with practitioners by asking them to describe recent cross-product checks that were difficult or time-consuming, and we translated them into the tasks used in the sessions (T1–T7).
In cycle 2, the diagnosis stage focused on defining realistic evaluation scenarios using the deployed prototype. We elicited concrete pain scenarios from practitioners (e.g., comparing closely related released products, detecting missing components, and reviewing heterogeneous sourcing setups for reused parts) and collected representative BOM exports and the baseline materials (spreadsheets and ERP/PLM printouts) currently used for cross-product checks. This step confirmed the relevance of the task catalogue (T1–T9) and led to the selection of T1–T7 plus T9 for detailed testing. The evaluated product sets and datasets were selected to reflect the scenarios raised by participants and to include closely related released variants with comparable BOM structures and available procurement attributes in the exports.

Planning

In the planning stage, we translated the pain scenarios into a structured session protocol and defined what evidence to collect. Scenario scripts mapped the practitioners’ questions to specific tasks (T1–T7), and acceptance criteria were agreed at a high level (task completion without manual joins, clarity of outputs for review meetings, and effort relative to the Excel/ERP baseline). Appendices A1 and A2 summarize the detailed acceptance criteria and session guide. We planned to capture session artefacts (bookmarked PCAT views, exports, and PDF printouts) alongside observation notes.

Action

Cycle 2 action consisted of deploying the refreshed Power BI prototype and running five scenario-based usability sessions using a think-aloud protocol. Each session lasted approximately 30 minutes and followed the same structure: a 10-minute guided walkthrough of the interface and terminology, followed by ~20 minutes of task-based interaction (product selection, filtering, search, quantity mismatch detection, use of the summary chart, and export/print).
The moderator guide and task set are provided in Appendices A1 and A2. In brief, the walkthrough introduced the tool purpose, key definitions (product, component, BOM level, Make-Buy, Class, and version sets), and the meaning of the visual encodings in the sharedness list and matrix. Participants were instructed to think aloud and to verbalize expectations before interacting. The task set progressed from basic orientation and filtering to targeted search, quantity mismatch detection, interpretation of the summary chart, and finally a decision-oriented prompt (“make a recommendation”). Outputs created during the session (exported tables and PDF printouts) were retained as artefacts for analysis.

Evaluation / Reflection

Evidence from the sessions was analyzed together with the functional comparison against common alternatives (Excel-based workarounds and standard ERP/PLM reports). Across sessions, participants were able to complete the core workflow from selecting a product set to producing shareable outputs (exports/PDFs) without manual copy-paste or multi-sheet joins. This is shown in Table 9, which compares the tool’s support for the task catalogue with (i) spreadsheet-based analysis and (ii) standard ERP/PLM reporting. Commissioned vendor reports and bespoke queries can also be used in practice, but stakeholders described them as costly, slow to obtain, and hard to iterate; they are therefore discussed separately rather than treated as a baseline alternative.
Overall, Table 9 indicates that PCAT provides portfolio-level views that are difficult to obtain with standard ERP/PLM reporting and time-consuming to assemble in spreadsheets. This was most evident for (a) sharedness ranking (“Used in X of Y”), (b) matrix-style checks of component presence and quantity consistency across products, and (c) interactive slicing by procurement-related attributes.
Across the five scenario-based think-aloud usability sessions (~30 minutes each), three R&D/Engineering participants (P1–P3), one PMO/business process development participant (P4), and one procurement engineer (P5) found the matrix view particularly useful for spotting missing components and quantity inconsistencies, while the sharedness ranking (“Used in X of Y”) helped focus discussions on high-reuse parts. The main usability issues related to the visibility of the legend and the discoverability of multi-select behavior in slicers; these were addressed through minor UI clarifications (labels, legend placement, and filter-reset guidance).
Participants also noted that, while portfolio-level views can sometimes be produced within ERP/PLM environments depending on the specific system configuration, more complex or recurring cross-product analyses in the case company typically required either (i) manual spreadsheet work or (ii) commissioned queries/reports from the system provider. They emphasized that commissioned reports typically address a fixed question and involve higher cost, longer lead time, and limited adaptability; for recurring analysis needs that evolve over time, this approach was perceived as less flexible than maintaining a small set of lightweight, refreshable in-house Power BI views.
Participants reported that, by reducing manual copying, joining, and reformatting of BOM extracts, PCAT shortened preparation time and lowered the risk of comparison errors when preparing engineering and procurement reviews.
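The matrix-style presence and quantity-consistency check can likewise be sketched in plain Python. The product/component IDs and the tuple layout are illustrative assumptions; PCAT renders the equivalent logic as a Power BI matrix visual with status encodings rather than as code.

```python
# Hypothetical flattened BOM lines: (product, component, quantity).
lines = [
    ("PROD-A", "C-100", 2), ("PROD-A", "C-300", 4),
    ("PROD-B", "C-100", 2), ("PROD-B", "C-300", 5),  # C-300 quantity differs
    ("PROD-C", "C-100", 2),                          # C-300 missing here
]

def commonality_matrix(bom):
    """Pivot BOM lines into a component-by-product grid.

    Each cell holds the quantity if the component occurs in that product,
    or None if it is absent; '_inconsistent' flags components whose
    quantities differ across the products where they are present.
    """
    products = sorted({p for p, _, _ in bom})
    qty = {}
    for product, component, q in bom:
        # Sum repeated occurrences of the same component in one product.
        qty[(component, product)] = qty.get((component, product), 0) + q
    matrix = {}
    for comp in sorted({c for _, c, _ in bom}):
        row = {p: qty.get((comp, p)) for p in products}
        present = [v for v in row.values() if v is not None]
        row["_inconsistent"] = len(set(present)) > 1
        matrix[comp] = row
    return matrix

m = commonality_matrix(lines)
```

In this sample, the grid flags C-300 as both missing from one product and quantity-inconsistent across the other two, which mirrors the two error types participants checked for in the matrix view.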

6. Discussion and Conclusions

ETO manufacturers frequently encounter challenges because enterprise systems often do not provide adequate support for structured cross-product analysis. To address this gap, this paper introduced the concept of Product Commonality Analysis Tools (PCATs), and developed and tested such a system through an action-research collaboration with a European ETO laser manufacturer. The study indicates that a lightweight PCAT operating on exported enterprise data can make cross-product reuse patterns visible in ways that are difficult to achieve with standard ERP/PLM reporting and time-consuming to build in spreadsheets. In the evaluation, PCAT reduced manual preparation work and improved transparency in cross-product comparisons, enabling more repeatable, data-informed review routines. Specifically, the Compare & Filter view supports reuse ranking (“Used in X of Y”) and attribute slicing that links technical reuse to sourcing discussions, while the Commonality Matrix supports detection of missing components and quantity inconsistencies with drill-through to BOM line detail. Taken together, these capabilities move portfolio-level commonality checks from ad hoc and reactive handling toward more proactive, evidence-based routines.

Implications for Research

This study contributes by defining and empirically instantiating PCATs as a distinct class of lightweight, task-focused analytics overlays that make established commonality and platforming ideas usable with exported enterprise data. The contribution is not a new commonality metric; rather, it is a practical operationalization pathway that connects well-known theory to repeatable day-to-day analysis routines. This matters because prior research on modularity, commonality, and platforming highlights the performance impact of complexity reduction, but the translation of these concepts into everyday, enterprise-data–based decision support remains challenging (Fixson, 2007; Zhang, 2015).
More specifically, PCAT operationalizes several methods for reducing product variance that are prominent in the literature:
  • Standardization and part rationalization as a life-cycle cost lever. Prior work argues that increasing commonality and modularity can reduce cost through mechanisms such as reduced process complexity and economies of scale, and can improve manageability of product complexity (Fixson, 2007). PCAT supports this principle by making reuse patterns explicit across multiple products (e.g., reuse ranking and cross-product presence/quantity checks), enabling stakeholders to identify where commonality is strong and where reuse breaks down.
  • Platforming and modularization as mechanisms to manage variety. Platforming research emphasizes reuse of components/subsystems and interfaces across a product family as a means to offer variety efficiently, and reviews report benefits such as reduced development time, reduced system complexity, and reduced costs (Zhang, 2015). PCAT contributes a practical support layer for such discussions by providing portfolio-level views that help inspect reuse footprints and deviations at BOM-line detail, which can inform platform and modularization decisions.
  • Variety reduction procedures that link internal and external variety. In ETO settings, where high variety is expected, literature discusses modular configurations and the use of standard items as strategies to manage variety and improve cost/lead-time (Gosling and Naim, 2009). Related variety-management work also emphasizes the value of transparently linking internal (component/module) and external (finished-goods) variety using enterprise data such as BOMs and master data, although applicability is noted to be strongest outside pure ETO contexts (Staśkiewicz et al., 2022). PCAT complements these strategies by enabling repeatable cross-product analyses that support structured reviews of reuse, inconsistencies, and comparison outputs without requiring intrusive ERP/PLM customization.
Positioning PCAT as a tool concept therefore complements the literature by specifying a minimal set of portfolio views and operations needed to make commonality and platforming concepts actionable within typical enterprise-data constraints, and by providing empirical evidence from an ETO setting where variety is expected and costly (Fixson, 2007; Gosling and Naim, 2009; Zhang, 2015).

Implications for Practice

For ETO manufacturers, the main practical implication is that cross-product analysis does not necessarily require intrusive ERP/PLM customization. A read-only analytics layer can provide actionable portfolio views while preserving existing transactional workflows. The export/print features further support organizational handoff by turning analysis into shareable artefacts for engineering and procurement-aligned reviews (e.g., reuse rationalization, sourcing harmonization discussions). In practice, this suggests a pragmatic progression: start with export-based visibility (PCAT), stabilize review routines, and only then consider deeper integration if recurring decisions justify it.

Limitations and Future Research

Findings are based on one company and on export-based data integration. Tool usefulness depends on the quality and completeness of exports and the availability of procurement-related attributes in the item master. The prototype is read-only; therefore, it supports identification and decision preparation rather than implementing changes inside enterprise systems. The evaluation was based on a limited set of scenario-based sessions; while sufficient to demonstrate feasibility and perceived usefulness for the selected tasks, it does not establish long-term effects on portfolio outcomes.
Future work should test PCAT deployment across additional ETO contexts, incorporate more lifecycle-risk indicators when available (e.g., end-of-life flags), and examine longitudinal effects on duplicate reduction, reuse governance, and procurement harmonization. In addition, further studies could compare different integration strategies (export-based overlays versus deeper system integration) under varying data-quality and organizational constraints.

Author Contributions

Conceptualization, G.K.K.; Methodology, G.K.K.; Software, G.K.K.; Validation, G.K.K., L.H., A.H., S.H.M.J., and M.F.C.; Formal analysis, G.K.K.; Investigation, G.K.K.; Resources, S.H.M.J. and M.F.C.; Data curation, G.K.K., L.H. and A.H.; Writing - original draft preparation, G.K.K.; Writing - review and editing, L.H. and A.H.; Visualization, G.K.K.; Supervision, L.H. and A.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was waived.

Institutional Review Board Statement

Ethical review and approval were waived for this study because it involved non-invasive usability evaluation with adult participants in a workplace setting and did not involve collection of sensitive personal data beyond research notes.

Data Availability Statement

Restrictions apply to the availability of the data. Enterprise data used in this study were obtained from an industrial partner and are not publicly available due to confidentiality. Derived and aggregated data supporting the findings may be made available from the corresponding author upon reasonable request, subject to partner approval.

Conflicts of Interest

S.H.M.J. and M.F.C. are employees of NKT Photonics A/S, the industrial partner in the study. The remaining authors declare no conflicts of interest.

Appendix A. Usability Testing Materials

A1. Moderator Guide (Walkthrough)

The session began with a short introduction to the tool purpose (cross-product component sharing analysis), ground rules (think-aloud; we test the tool, not the participant), and key definitions (product, component, BOM level, Make-Buy, Class, Newest vs Newest-approved). The interface tour covered (i) slicers and multi-select behavior, (ii) the sharedness list and matrix and how to read the legend, (iii) the summary chart and cross-filtering, and (iv) component search. A brief end-to-end example demonstrated selecting two products, switching to Full BOM, filtering by a component class, and interpreting presence/quantity status.

A2. Task Set (Think-Aloud)

Task 1 – Get oriented: Without clicking, describe what the page shows and what the main output is (rows/columns and purpose).
Task 2 – Filter to a product set: Select a specified set of products and show the components used across them; confirm the matrix updates.
Task 3 – Find a component: Locate a specified component ID and explain where it is used across the selected products.
Task 4 – Use key filters: Filter by procurement attribute, switch between First-level and Full BOM, and clear all filters.
Task 5 – Detect quantity mismatches: Identify a partially shared component with different quantities and specify which products differ and by how much.
Task 6 – Interpret the summary chart: Explain what the summary pie/donut communicates and how it can be used to segment the results.
Task 7 – Make a recommendation: Based on the outputs, propose one concrete follow-up action (e.g., standardize a component or investigate a mismatch).

References

  1. Al-Assaf, K., Alzahmi, W., Alshaikh, R., Bahroun, Z. & Ahmed, V. (2024) ‘The Relative Importance of Key Factors for Integrating Enterprise Resource Planning (ERP) Systems and Performance Management Practices in the UAE Healthcare Sector’, Big Data and Cognitive Computing, Multidisciplinary Digital Publishing Institute (MDPI), 8.
  2. AlGeddawy, T. & ElMaraghy, H. (2013) ‘Reactive design methodology for product family platforms, modularity and parts integration’, CIRP Journal of Manufacturing Science and Technology, 6, 34–43.
  3. Bortolini, M., Calabrese, F., Galizia, F.G. & Regattieri, A. (2023) ‘A two-step methodology for product platform design and assessment in high-variety manufacturing’, International Journal of Advanced Manufacturing Technology, Springer Science and Business Media Deutschland GmbH, 126, 3923–3948.
  4. Chen, H., Chiang, R.H.L. & Storey, V.C. (2012) ‘Business Intelligence and Analytics: From Big Data to Big Impact’, MIS Quarterly, 36, 1165–1188.
  5. Chiu, M.C. & Okudan, G. (2014) ‘An investigation on the impact of product modularity level on supply chain performance metrics: An industrial case study’, Journal of Intelligent Manufacturing, 25, 129–145.
  6. Collier, D.A. (1981) ‘The Measurement and Operating Benefits of Component Part Commonality’, Decision Sciences, 12, 85–96.
  7. Collier, Z.A. & Lambert, J.H. (2020) ‘Managing obsolescence of embedded hardware and software in secure and trusted systems’, Frontiers of Engineering Management, Higher Education Press Limited Company, 7, 172–181.
  8. Fixson, S.K. (2005) ‘Product architecture assessment: A tool to link product, process, and supply chain design decisions’, Journal of Operations Management, 23, 345–369.
  9. Fixson, S.K. (2007) ‘Modularity and Commonality Research: Past Developments and Future Opportunities’, Concurrent Engineering, 15, 85–111.
  10. Galizia, F.G., ElMaraghy, H., Bortolini, M. & Mora, C. (2020) ‘Product platforms design, selection and customisation in high-variety manufacturing’, International Journal of Production Research, Taylor and Francis Ltd., 58, 893–911.
  11. Gauss, L., Lacerda, D.P. & Cauchick Miguel, P.A. (2021) ‘Module-based product family design: systematic literature review and meta-synthesis’, Journal of Intelligent Manufacturing, Springer, 265–312.
  12. Gebhardt, M., Spieske, A., Kopyto, M. & Birkel, H. (2022) ‘Increasing global supply chains’ resilience after the COVID-19 pandemic: Empirical results from a Delphi study’, Journal of Business Research, Elsevier Inc., 150, 59–72.
  13. Gosling, J. & Naim, M.M. (2009) ‘Engineer-to-order supply chain management: A literature review and research agenda’, International Journal of Production Economics, 122, 741–754.
  14. Greve, E., Fuchs, C., Hamraz, B., Windheim, M., Rennpferdt, C., Schwede, L.N. & Krause, D. (2022) ‘Knowledge-Based Decision Support for Concept Evaluation Using the Extended Impact Model of Modular Product Families’, Applied Sciences (Switzerland), MDPI, 12.
  15. Haapasalo, H., Harkonen, J., Tolonen, A. & Hannila, H. (2019) ‘Product and supply chain related data, processes and information systems for product portfolio management’, International Journal of Product Lifecycle Management, 12, 1.
  16. Hannila, H., Koskinen, J., Harkonen, J. & Haapasalo, H. (2019) ‘Product-level profitability’, Journal of Enterprise Information Management, 33, 214–237.
  17. Hannila, H., Koskinen, J., Harkonen, J. & Haapasalo, H. (2020) ‘Product-level profitability: Current challenges and preconditions for data-driven, fact-based product portfolio management’, Journal of Enterprise Information Management, Emerald Group Holdings Ltd., 33, 214–237.
  18. Hapuwatte, B.M., Badurdeen, F., Bagh, A. & Jawahir, I.S. (2022) ‘Optimizing sustainability performance through component commonality for multi-generational products’, Resources, Conservation and Recycling, Elsevier B.V., 180.
  19. Hillier, M.S. (2002) ‘Using commonality as backup safety stock’, European Journal of Operational Research, 136, 353–365.
  20. Hossain, M.S., Chakrabortty, R.K., Elsawah, S. & Ryan, M.J. (2023) ‘Modelling and application of hierarchical joint optimisation for modular product family and supply chain architecture’, International Journal of Advanced Manufacturing Technology, Springer Science and Business Media Deutschland GmbH, 126, 947–971.
  21. Iakymenko, N., Romsdal, A., Alfnes, E., Semini, M. & Strandhagen, J.O. (2020) ‘Status of engineering change management in the engineer-to-order production environment: insights from a multiple case study’, International Journal of Production Research, Taylor and Francis Ltd., 58, 4506–4528.
  22. Jeet, V. & Kutanoglu, E. (2018) ‘Part commonality effects on integrated network design and inventory models for low-demand service parts logistics systems’, International Journal of Production Economics, Elsevier B.V., 206, 46–58.
  23. Ji, X. & Abdoli, S. (2023) ‘Challenges and Opportunities in Product Life Cycle Management in the Context of Industry 4.0’, in Procedia CIRP, Elsevier B.V., 29–34.
  24. Jiao, J., Simpson, T.W. & Siddique, Z. (2007) ‘Product family design and platform-based product development: A state-of-the-art review’, Journal of Intelligent Manufacturing, 18, 5–29.
  25. Jonnalagedda, S. & Saranga, H. (2017) ‘Commonality decisions when designing for multiple markets’, European Journal of Operational Research, Elsevier B.V., 258, 902–911.
  26. Kantola, K., Vanhanen, J. & Tolvanen, J. (2022) ‘Mind the product owner: An action research project into agile release planning’, Information and Software Technology, Elsevier B.V., 147.
  27. Koskinen, J., Mustonen, E., Harkonen, J. & Haapasalo, H. (2020) ‘Product-level profitability analysis and decision-making: opportunities of IT application-based approach’, International Journal of Product Lifecycle Management, 12, 210.
  28. Kristjansdottir, K., Shafiee, S. & Hvam, L. (2017) ‘How to Identify Possible Applications of Product Configuration Systems in Engineer-to-Order Companies’, International Journal of Industrial Engineering and Management (IJIEM).
  29. Kurniadi, K.A. & Ryu, K. (2021) ‘Development of multi-disciplinary green-bom to maintain sustainability in reconfigurable manufacturing systems’, Sustainability (Switzerland), MDPI AG, 13.
  30. Lee, J., Lim, J., Hong, Y.S. & Kang, C. (2025) ‘Variant mode and effects analysis for effective product family expansion under modular architecture’, Journal of Engineering Design, Taylor and Francis Ltd.
  31. MacCarthy, B.L. & Pasley, R.C. (2021) ‘Group decision support for product lifecycle management’, International Journal of Production Research, Taylor and Francis Ltd., 59, 5050–5067.
  32. Menezes, M.B.C., Jalali, H. & Lamas, A. (2021) ‘One too many: Product proliferation and the financial performance in manufacturing’, International Journal of Production Economics, Elsevier B.V., 242.
  33. Mertens, K.G., Rennpferdt, C., Greve, E., Krause, D. & Meyer, M. (2023) ‘Reviewing the intellectual structure of product modularization: Toward a common view and future research agenda’, Journal of Product Innovation Management, John Wiley and Sons Inc, 40, 86–119.
  34. Myrodia, A., Hvam, L., Sandrin, E., Forza, C. & Haug, A. (2021) ‘Identifying variety-induced complexity cost factors in manufacturing companies and their impact on product profitability’, Journal of Manufacturing Systems, Elsevier B.V., 60, 373–391.
  35. Ollila, S. & Yström, A. (2020) ‘Action research for innovation management: three benefits, three challenges, and three spaces’, R&D Management.
  36. Romero Rojo, F.J., Roy, R. & Shehab, E. (2010) ‘Obsolescence management for long-life contracts: State of the art and future trends’, International Journal of Advanced Manufacturing Technology, 49, 1235–1250.
  37. Roy, M.A. & Abdul-Nour, G. (2024) ‘Integrating Modular Design Concepts for Enhanced Efficiency in Digital and Sustainable Manufacturing: A Literature Review’, Applied Sciences (Switzerland), Multidisciplinary Digital Publishing Institute (MDPI), 14.
  38. Saeed, K.A., Malhotra, M.K. & Abdinnour, S. (2019) ‘How supply chain architecture and product architecture impact firm performance: An empirical examination’, Journal of Purchasing and Supply Management, Elsevier Ltd., 25, 40–52.
  39. Shih, H.M. (2014) ‘Migrating product structure bill of materials Excel files to STEP PDM implementation’, International Journal of Information Management, Elsevier Ltd., 34, 489–516.
  40. Sierra-Fontalvo, L., Gonzalez-Quiroga, A. & Mesa, J.A. (2023) ‘A deep dive into addressing obsolescence in product design: A review’, Heliyon, Elsevier Ltd., 9.
  41. Simpson, T.W., Bobuk, A., Slingerland, L.A., Brennan, S., Logan, D. & Reichard, K. (2012) ‘From user requirements to commonality specifications: An integrated approach to product family design’, Research in Engineering Design, 23, 141–153.
  42. Staśkiewicz, A.M., Hvam, L. & Haug, A. (2022) ‘A procedure for reducing stock–keeping unit variety by linking internal and external product variety’, CIRP Journal of Manufacturing Science and Technology, Elsevier Ltd., 37, 344–358.
  43. Terzi, S., Bouras, A., Dutta, D., Garetti, M. & Kiritsis, D. (2010) ‘Product lifecycle management: from its history to its new role’, International Journal of Product Lifecycle Management.
  44. Thevenot, H.J. & Simpson, T.W. (2007) ‘A comprehensive metric for evaluating component commonality in a product family’, Journal of Engineering Design, 18, 577–598.
  45. Trattner, A., Hvam, L., Forza, C. & Herbert-Hansen, Z.N.L. (2019) ‘Product complexity and operational performance: A systematic literature review’, CIRP Journal of Manufacturing Science and Technology, Elsevier Ltd., 69–83.
  46. Wong, H., Lesmono, D., Chhajed, D. & Kim, K. (2019) ‘On the evaluation of commonality strategy in product line design: The effect of valuation change and distribution channel structure’, Omega (United Kingdom), Elsevier Ltd., 83, 14–25.
  47. Zhang, L.L. (2015) ‘A literature review on multitype platforming and framework for future research’, International Journal of Production Economics, Elsevier B.V., 1–12.
  48. Zhang, L.L., Jiao, R.J., Huang, G. & MacCarthy, B.L. (2025) ‘Extended guest editorial: Smart product platforming in the industry 4.0 era and beyond’, International Journal of Production Economics, Elsevier B.V., 280.
  49. Zhang, Y., Yang, Z., Ma, X., Dong, W., Dong, D., Tan, Z. & Zhang, S. (2019) ‘Exploration and implementation of commonality valuation method in commercial aircraft family design’, Chinese Journal of Aeronautics, Chinese Journal of Aeronautics, 32, 1828–1846.
Figure 1. Three-Gap Bridge.
Figure 2. PCAT user interface. Compare & Filter view with product selection and attribute slicers, ranked component list with sharedness (“Used in X of Y”), and a sharedness summary chart.
Figure 3. PCAT user interface. Commonality Matrix view showing item presence and quantity consistency across selected products.
Figure 4. PCAT user interface. Example drill-through from a component to the underlying product occurrences and quantities.
Table 1. Literature streams → typical findings → operationalization gap → implication for PCAT.
Literature stream What the stream focuses on (definition) Typical outputs in the literature Operationalization gap (what’s missing in day-to-day work) What PCAT operationalizes (view/feature) Representative sources (examples)
Commonality indices & valuation Quantifying component reuse/commonality and linking it to performance outcomes (cost, inventory, efficiency). Indices/metrics, cost or inventory models, portfolio-level assessments Metrics require clean aggregation + joins across many BOMs; rarely embedded in engineers’ routine workflow/tools. Compare & Filter: “Used in X of Y”, usage %, sharedness bands; sortable portfolio list. Collier (1981); Hillier (2002); Jeet & Kutanoglu (2018); Zhang et al. (2019); Hapuwatte et al. (2022).
Trade-off studies (variety–standardization–differentiation) The benefits and limits of standardization/commonality versus product variety and differentiation. Conceptual trade-offs, managerial implications, sometimes empirical relationships. Hard to translate trade-offs into concrete “what to check in BOM/procurement data” during decisions. Attribute slicing in Compare & Filter (Make-Buy, Class, Procurement Type) + BOM-level focus to inspect variety vs reuse. Jonnalagedda & Saranga (2017)
Product architecture, modularity & platforms How modular product architectures/platforms enable reuse, faster development, and responsiveness. Frameworks, architectural models, platform design methods; often early-phase guidance. Frameworks stay conceptual; rarely turned into simple portfolio analytics that highlight “where reuse breaks” across products. Commonality Matrix: presence/absence + quantity-consistency status across products; highlights gaps undermining modular reuse. Fixson (2005); Simpson et al. (2012); AlGeddawy & ElMaraghy (2013); Chiu & Okudan (2014); Galizia et al. (2020).
Industrial practice & enterprise-system tool support What ERP/PLM do well (transactional, per-product views) vs what they don’t (repeatable cross-product analytics). BOM printouts/reports, per-product views; workarounds via Excel/structured query language (SQL); bespoke vendor reports. Data fragmentation + high manual effort; results are ad hoc, non-repeatable, and dependent on specialist skills. Read-only tool over exports + two portfolio views; drill-through + export/print for handoff; positioned vs Excel and ERP/PLM standard reports. Haapasalo et al. (2019); Hannila et al. (2019); Iakymenko et al. (2020); Kurniadi & Ryu (2021); Al-Assaf et al. (2024).
Lifecycle risk / obsolescence-triggered rework Commonality risks that surface late (service/production) when parts become obsolete or inconsistent. Reactive problem descriptions; risk/obsolescence framing; lifecycle management discussions. Signals exist in item master/procurement fields but aren’t routinely connected to “how widely reused is this part?” across products. (If data exists) add slicers/flags (end-of-life (EOL)/lifecycle) + reuse ranking to surface “high-reuse, high-risk” items. Romero Rojo et al. (2010); Collier & Lambert (2020)
Table 2. Empirical setting (neutral facts).
Attribute Neutral fact Disclosure note
Company size (headcount) Medium-sized enterprise (50–249 employees) EU small and medium-sized enterprise (SME) size class; exact headcount withheld
Geography Europe Country withheld for confidentiality
Industry Laser manufacturing / photonics High-mix, complex engineered products
Business model ETO with some Configure-to-Order (CTO) components Typical for customized instruments
Product-line scope Supercontinuum sources (single product line) Focused to avoid cross-line identification
Portfolio within line Several released variants Exact count withheld; analysis at aggregate level
Lifecycle traits Long service life; obsolescence risk relevant Motivation for portfolio-level reuse visibility
Data used ERP/PLM BOM exports + procurement attributes Masked IDs, no customer/order data
User roles involved R&D/Engineering, PMO/Business Process Development, procurement Roles only; no names or teams disclosed
Study role of PCAT Read-only analytics layer (no write-back) Built from exported, de-identified data
Table 3. Action-research cycle 1: Development of PCAT.
AR stage Key activities with participants Evidence / artefacts captured Outputs used later
Diagnosis Mapped current cross-product analysis practice and pain points. Collected examples of how engineers/PMO/business process development participants check reuse, duplicates, and procurement consistency (Excel exports, printed BOMs). Confirmed that existing ERP/PLM reports are largely product-specific and do not provide portfolio-level reuse views. Interview/workshop notes; sample Excel workbooks; ERP/PLM BOM printouts; example ad-hoc query outputs; initial problem statement draft. Problem formulation; initial task catalogue draft (T1–T9); initial requirements themes.
Planning Co-defined PCAT scope and requirements. Derived preliminary task list (T1–T9) from artefact analysis of existing Excel sheets and ERP/PLM printouts. Agreed tool constraints: read-only, operates on exports, no ERP/PLM write-back. Selected Power BI for rapid iteration. Specified core functions and views: portfolio comparison across released products; Compare & Filter and Commonality Matrix; drill-through to BOM line detail; slicing by procurement attributes (Procurement Type, Make-Buy, Class); export/print for review use. Consolidated requirements list; refined task list T1–T9; workshop notes; solution outline (views, fields, filters); data input specification (BOM + item-master attributes + version flags). PCAT requirements specification; development plan; initial view mock-ups/configuration plan.
Action Implemented first PCAT prototype as a lightweight analytics layer. Built an Extract–Transform–Load (ETL) routine from ERP/PLM exports (multi-level BOM + item-master attributes) into a Power BI data model. Implemented Compare & Filter: product selection; BOM depth (first-level vs full structure); version sets (Newest vs Newest-approved); sharedness metrics (“Used in X of Y”, usage%); attribute slicers (Procurement Type, Make-Buy, Class). Implemented Commonality Matrix: items as rows/products as columns; presence/absence + quantity-consistency status; drill-through to occurrences; enabled export (CSV/XLSX) and print (PDF). Power BI data model and report; ETL/transformation notes; screenshots of views; test exports (CSV/XLSX) and PDF prints; prototype configuration notes. Working PCAT prototype; short user manual (core functions and usage steps).
Evaluation / reflection Internal walkthrough of tasks T1–T9 in the prototype with practitioners. Collected feedback on terminology, layout, missing functions, and usability issues (e.g., legend clarity for matrix statuses). Agreed minor refinements: sharedness labels (“Used in X of Y”); BOM-level toggles; filter presets for typical review scenarios; legend placement and reset guidance. Session notes; annotated UI screenshots; consolidated change list; updated view labels and presets. Updated prototype baseline for Cycle 2; refined task definitions; updated user manual.
Table 4. Action-research cycle 2: Implementation and test of PCAT (expanded).
AR stage Key activities with participants Evidence / artefacts captured Outputs used later
Diagnosis. Focus: Revisited work practices with the prototype available to define realistic evaluation scenarios. Activities: Conducted meetings to elicit concrete “pain scenarios” (e.g., compare a new variant vs. existing products, check for missing items/quantity inconsistencies, review sourcing setups for highly reused components); collected representative ERP/PLM exports (incl. versions) and the spreadsheets/printouts used for current cross-product checks; confirmed nine recurring tasks T1–T9 and selected T1–T7 and T9 for detailed user testing. Artefacts: Diagnosis meeting notes; representative ERP/PLM BOM exports and item-master extracts; legacy Excel files and ERP/PLM printouts; finalized task catalogue T1–T9; scenario candidates list. Outputs: Baseline description of current practice; evaluation scenario set; data specification for evaluation.
Planning. Focus: Structured evaluation and evidence collection. Activities: Co-designed scenario scripts mapping typical questions to tasks T1–T7 (and export/print task T9); defined acceptance criteria (ability to complete tasks without manual joins; clarity of outputs for review meetings; effort relative to Excel/ERP/PLM); agreed data-collection artefacts (bookmarked views, exported CSV/XLSX, PDF printouts, observation notes); linked evaluation foci EF1–EF4 to data sources and analysis methods. Artefacts: Scenario scripts; acceptance criteria (EF1–EF4); data-collection protocol; planned outputs list (bookmarks, exports, PDFs); evaluation plan (Table 6 mapping question→data→analysis→evidence→output). Outputs: Implementation and evaluation plan; explicit EF1–EF4 traceability to data and analysis.
Action. Focus: Deployed the refreshed prototype for evaluation sessions. Activities: Refreshed the Power BI model with up-to-date exports; ran scenario-based think-aloud sessions in which participants executed the tasks: product set selection/sharedness lists (T1), attribute slicing (T2), BOM depth and version toggles (T3–T4), reuse ranking focus (T5), matrix checks for missing items/quantity inconsistencies (T6–T7), and export/print for follow-up (T9); in parallel, replicated the same tasks using existing Excel workbooks and standard ERP/PLM reports to document feasibility and effort for the functional comparison. Artefacts: Session observation notes; bookmarked PCAT views; CSV/XLSX exports; PDF printouts; screenshots; time/step counts (PCAT vs. Excel/ERP/PLM); comparison notes from the parallel replications. Outputs: Empirical artefact set for analysis; documented basis for the functional comparison (comparison matrix in Section 5/Table 9).
Evaluation / reflection. Focus: Combined functional comparison with qualitative analysis. Activities: Coded task support for T1–T9 as Yes/Partly/No for PCAT, Excel, and standard ERP/PLM based on actual attempts and user sessions (comparison table); analyzed session artefacts for evidence of (i) error detection (missing components/quantity inconsistencies), (ii) procurement insight (slicing by attributes revealing heterogeneous sourcing), and (iii) reuse visibility (sharedness rankings shaping review focus); collected participant reflections on preparation effort and confidence vs. Excel-based workarounds. Artefacts: Completed Yes/Partly/No coding matrix with justification notes; annotated exports/PDFs; session notes with quotations/paraphrases; short vignettes/examples for Section 5. Outputs: Findings on usefulness and limitations (Section 5); implications for practice and future work (Section 6); refinement backlog (if any).
Table 5. Usability testing participants and session overview.
Participant Role / Title Function group Session format Approx. duration
P1 Senior Optical Engineer R&D / Engineering Walkthrough + think-aloud tasks (T1–T7, export/print as needed) ~30 min
P2 Engineering Manager - Products & ATE R&D / Engineering Walkthrough + think-aloud tasks (T1–T7, export/print as needed) ~30 min
P3 Senior Optical Engineer R&D / Engineering Walkthrough + think-aloud tasks (T1–T7, export/print as needed) ~30 min
P4 Senior R&D Process Engineering Manager PMO / Business Process Development Walkthrough + think-aloud tasks (T1–T7, export/print as needed) ~30 min
P5 Procurement Engineer Procurement Walkthrough + think-aloud tasks (T1–T7, export/print as needed) ~30 min
Table 6. Data-analysis map: question → data → analysis → evidence → output.
Evaluation focus Data used Analysis operations Evidence captured Output
EF1: Does PCAT provide portfolio-level visibility that Excel/ERP cannot readily offer? PCAT views; Excel workbooks; ERP/PLM reports Comparative functionality coding of T1–T9 (Yes/Partly/No + note) Completed comparison matrix; screenshots Table 9 in Section 5 (comparison); narrative summary
EF2: Does PCAT help detect duplicates/missing items/qty inconsistencies? Session artefacts; PCAT exports Thematic content analysis of session artefacts for “Error detection,” “Procurement insight,” “Reuse visibility” Annotated exports; bookmarked filters Result examples in Section 5; printed PDFs
EF3: Can users slice by procurement attributes to support sourcing discussions? PCAT slicers; session notes Usage tracing of slicers (Procurement Type/Make-Buy/Class) Screenshot sequences; timestamps Short vignettes in Section 5
EF4: Preparation effort vs baseline Time estimates; steps Task decomposition (steps, manual joins) Step counts; example formulas Effort commentary in Section 5
Table 7. PCAT tasks, functionality used, specific user operations, and outputs.
Task ID & name PCAT functionality used Specific user operations (Power BI) Output / artefact
T1. Cross-product sharing list Compare & Filter view Multi-select products → adjust slicers (Procurement Type / Make-Buy / Class / BOM level) → sort “Used in” Item table with Used in X of Y and usage %
T2. Attribute-sliced comparison Compare & Filter view Slice by Procurement Type / Make-Buy / Class → view affected items Attribute-filtered item list for engineering/procurement
T3. BOM-level focus Compare & Filter view Toggle First-level vs Full BOM structure Narrowed list (e.g., top-level only)
T4. Version selection Compare & Filter view Choose Newest or Newest approved Version-consistent comparison basis
T5. Sharedness summary Compare & Filter view (pie) After selecting products/filters, read pie bands Share-of-BOM summary: in all / 50–<100% / <50% / unique
T6. Commonality Matrix Commonality Matrix view Select products → (optional) paste list of Item IDs → read cell statuses Matrix with cell legends: all/same qty, all/different qty, partial/same, partial/different
T7. Missing-component detection Commonality Matrix view Scan for blanks/partials per row → filter to gaps Gap list: items missing from specific products
T8. Quantity consistency check Commonality Matrix view Sort/filter by “different qty” status Irregular-quantity list per product
T9. Export / print Both views Export → CSV/XLSX; Print → PDF Shareable files / meeting pack
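To make the matrix semantics of T6–T8 concrete, the following minimal sketch (illustrative only, not the PCAT implementation; product names and quantities are hypothetical) classifies one item into the four Commonality Matrix cell statuses from its per-product quantities:

```python
# Illustrative sketch of the Commonality Matrix statuses used in T6-T8.
# An item is "all" if it appears in every selected product, "partial"
# otherwise; quantities are "same qty" if all occurrences agree.

def matrix_status(qty_by_product, selected_products):
    """Classify one item across the selected product set."""
    present = {p: q for p, q in qty_by_product.items() if p in selected_products}
    in_all = set(present) == set(selected_products)
    same_qty = len(set(present.values())) <= 1
    if in_all:
        return "all/same qty" if same_qty else "all/different qty"
    return "partial/same qty" if same_qty else "partial/different qty"

products = {"ProdA", "ProdB"}
print(matrix_status({"ProdA": 2, "ProdB": 2}, products))  # all/same qty
print(matrix_status({"ProdA": 2, "ProdB": 3}, products))  # all/different qty
print(matrix_status({"ProdA": 1}, products))              # partial/same qty
```

The missing-component check of T7 then reduces to listing the products absent from an item's quantity map, and the T8 quantity check to filtering for the two "different qty" statuses.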
Table 8. Operational definitions used in PCAT (terms, definitions, data required, and example output).
Term Operational definition Data required Example output in PCAT
Exact duplicate Same part number (optionally including revision) appears in multiple selected products. Part number (and revision if used), product set, BOM occurrences Appears multiple times in “Used in X of Y” list / drill-through occurrences
Sharedness (“Used in X of Y”) X = number of selected products containing the item; Y = size of selected product set. Product selection; BOM occurrences “Used in 3 of 5”
Item usage % X/Y × 100 for the current selection. Same as above “60%”
Commonality Matrix status Present in all/same qty; present in all/different qty; partial/same qty; partial/different qty. Per-product quantities per item Matrix cell status/legend class
BOM level First-level restricts the view to top-level items; full structure includes all BOM levels. BOM depth tagging / multilevel BOM “First-level” vs “Full BOM” toggle
Version set Newest vs Newest-approved controls which revision of each product/BOM is included. Version flags in export Version toggle
Attribute slicing Procurement Type, Make-Buy, Class used as filters (additional attributes if present). Item-master attributes Slicers + filtered item list
Lifecycle/EOL Not included in current prototype; if an EOL flag exists in exports, it can be added as a slicer (out of scope here). EOL/lifecycle field (if available) Lifecycle slicer (future work)
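The sharedness and usage-% definitions above can be sketched in a few lines. This is an illustrative sketch only (not the PCAT implementation); the product names, item IDs, and quantities are hypothetical:

```python
# Illustrative sketch of the "Used in X of Y" sharedness metric and item
# usage % (Table 8), computed from flat (product, item, quantity) BOM rows.

from collections import defaultdict

# Each row: (product, item_id, quantity) - e.g., from an ERP/PLM BOM export.
bom_rows = [
    ("ProdA", "ITM-100", 2),
    ("ProdA", "ITM-200", 1),
    ("ProdB", "ITM-100", 2),
    ("ProdB", "ITM-300", 4),
    ("ProdC", "ITM-100", 1),
]

selected_products = {"ProdA", "ProdB", "ProdC"}  # the product set (Y = 3)

def sharedness(rows, products):
    """Return {item_id: (X, usage_pct)} for the selected product set."""
    used_in = defaultdict(set)
    for product, item, _qty in rows:
        if product in products:
            used_in[item].add(product)
    y = len(products)
    return {item: (len(p), round(len(p) / y * 100)) for item, p in used_in.items()}

print(sharedness(bom_rows, selected_products))
# ITM-100 is used in 3 of 3 products (100%); ITM-200 and ITM-300 in 1 of 3 (33%).
```

Exact-duplicate detection in this framing is the special case where the same part number (optionally with revision) occurs for more than one selected product.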
Table 9. Functional comparison (modeling-related functionality; Yes/Partly/No with brief note).
Modeling-related task PCAT (Power BI prototype)* Excel / spreadsheets* Standard ERP/PLM reports*
T1. Select multiple products and list components shared across them (with “used in N of M” counts) Yes - Compare & Filter view lists items and “Used in X of Y” Partly - pivots/joins; heavy manual setup No - reports are per-product
T2. Filter by Procurement Type / Make-Buy / Class Yes - slicers on those attributes Partly - VLOOKUP/PowerQuery; fragile No - typically split across modules
T3. Limit to BOM level (first-level vs full structure) Yes - BOM-level slicer Partly - needs level tagging & formulas No - standard BOM printouts only
T4. Choose version set (Newest vs Newest-approved) Yes - version toggle Yes (with limitations) - only if version flags present No - usually one active view
T5. Summarize sharedness across selected products (pie: in all / 50–<100% / <50% / unique) Yes - pie chart on sharedness bands Yes (with limitations) - build buckets manually No - not a standard portfolio view
T6. Per-item Commonality Matrix (products in columns; presence/qty status by cell) Yes - matrix shows present in all/same qty, present in all/different qty, partial/same, partial/different Yes (with limitations) - complex pivot; formatting overhead No - per-product only
T7. Missing components between compared products Yes - empty/partial cells highlight gaps Yes (with limitations) - manual comparison No - not available
T8. Quantity consistency check across products (same vs different qty) Yes - cell status encodes qty consistency Yes (with limitations) - manual rules No - not available
T9. Export/print selected views for handoff Yes - Export CSV/XLSX; Print to PDF Yes - native Yes (with limitations) - fixed layouts
* Entries are Yes / Partly / No; a short note clarifies boundary conditions (e.g., portfolio vs. per-product views).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.