Preprint
Article

This version is not peer-reviewed.

Perceptual Haptic Spectrum Modeling for Fine Texture Rendering on Virtual Object Surfaces in Virtual Reality

Submitted: 19 April 2026
Posted: 27 April 2026


Abstract
To enhance immersion in virtual reality (VR) environments and improve the fidelity of virtual tactile interaction, this study presents a perceptually grounded haptic-rendering framework for fine surface-texture simulation. The framework is centred on a Perceptual Haptic Spectrum Model (PHSM), which maps virtual surface attributes (e.g., hardness, elasticity, roughness, and friction) to multi-band tactile targets defined in perceptual frequency space. A parameterisation strategy inspired by the Just Noticeable Difference (JND) principle from psychophysics is introduced to keep generated tactile cues within perceptually meaningful intervals. To account for anatomical heterogeneity across the fingertip, region-specific response functions are defined for the fingertip centre, finger pad, and lateral edge. The system further integrates flexible strain sensors for contact-state detection, rule-based channel allocation for multimodal feedback scheduling, and a short-horizon predictive feedforward module for anticipatory actuation. A dual-actuation prototype is described in which a glove-based primary actuation layer provides macroscopic force support and a finger-sleeve secondary actuation layer provides local texture cues. A representative virtual-fabric exploration scenario is reported to illustrate how the proposed framework handles concentrated, distributed, and slip contact states. The present manuscript therefore reports a prototype framework and proof-of-concept system operation rather than a completed large-scale psychophysical study. The results demonstrate internal consistency between perceptual modelling, sensing, and actuation, and suggest that the proposed approach is a promising basis for future quantitative evaluation of fine VR haptic interaction.

1. Introduction

Recent advances in haptic gloves have led to meaningful progress in force-feedback actuation, electrical stimulation, vibrotactile output, and pneumatic feedback. Despite these developments, many current haptic-rendering systems still struggle to reproduce high-fidelity tactile sensations in dynamic and immersive VR environments. Existing interfaces often fail to match the spatial precision, adaptive responsiveness, and temporal sensitivity of human tactile perception, which reduces realism and limits user acceptance (Culbertson et al., 2018; Pacchierotti et al., 2017).
A first limitation concerns spatial resolution. Many commercially available gloves rely on low-density linear or circular actuator layouts that cannot produce localised, region-specific responses consistent with the heterogeneous physiology of the fingertip. As a result, tactile output tends to be homogenised across the finger surface: touching an edge with the fingertip centre and sliding over a rough boundary with the lateral finger surface may produce similar feedback patterns, even though these interactions should evoke different tactile sensations (Loomis and Lederman, 1986; Klatzky and Lederman, 2010).
A second limitation concerns dynamic interaction recognition. In many systems, feedback is triggered only after sensor signals exceed preset thresholds. Such reactive control is often sufficient for basic contact notification, but it is less effective for representing light touch, stroking, pressing, or rapid slip transitions during complex manipulation. Without richer contact-state interpretation, feedback may become delayed, abrupt, or semantically ambiguous, thereby disrupting immersion (Razzaque et al., 2002).
A third limitation concerns the restricted bandwidth of single-modality feedback. Human tactile perception relies on coordinated responses from multiple mechanoreceptive channels with distinct frequency sensitivities (Johnson, 2002; Saal and Bensmaia, 2014). However, many wearable systems provide only one dominant physical feedback mode, such as a single-frequency vibration motor or simple pneumatic pressure modulation. This makes it difficult to present macroscopic force cues and fine textural cues simultaneously at the same contact site.
Finally, temporal latency remains critical. Tactile perception is highly time-sensitive, and delays on the order of a few tens of milliseconds can reduce perceived realism in VR interaction (Di Luca and Mahnan, 2019). Because sensing, classification, and actuation each introduce delay, haptic rendering benefits from anticipatory control strategies that can prepare tactile output before contact fully develops.
Motivated by these challenges, this paper presents a perceptually grounded framework for fine texture rendering on virtual object surfaces. The main contributions are as follows. First, a Perceptual Haptic Spectrum Model (PHSM) is proposed to map physical surface properties onto multi-band perceptual targets. Second, region-specific response functions are introduced to adapt tactile rendering to anatomical sub-regions of the fingertip. Third, a contact-aware control strategy combines contact-state classification, rule-based channel allocation, and short-horizon predictive feedforward control. Fourth, a dual-actuation prototype architecture is described to coordinate glove-based force support with local high-frequency texture augmentation. The manuscript reports a proof-of-concept system and scenario-based demonstration, while large-scale psychophysical validation is left for future work.

2. Materials and Methods

2.1. Perceptual Haptic Spectrum Model

The proposed framework is grounded in human tactile-perceptual mechanisms and in the physical properties governing surface-finger interaction. Virtual surface attributes, including hardness, elasticity, roughness, friction coefficient, viscoelasticity, and microtexture periodicity, are treated as the physical input space. These attributes are transformed into a structured perceptual representation, namely the Perceptual Haptic Spectrum Model (PHSM), in which tactile events are described by temporal, spatial, and spectral components.
Within the PHSM, each contact event is represented not as a single vibration or force value but as a composite tactile signal distributed across perceptual frequency bands. In the present work, three bands are used for practical rendering: 0-20 Hz for macroscopic force cues such as pressing resistance and deformation; 20-200 Hz for texture-related microvibrations and frictional variation; and 200-400 Hz for fine microtexture and sliding-induced tremor cues. This banded representation is consistent with the general frequency-dependent behaviour of cutaneous mechanoreception reported in tactile-perception research (Bensmaia and Hollins, 2003; Johnson, 2002).
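To make the banded representation concrete, the following minimal Python sketch encodes a contact event as amplitude and frequency targets in the three bands. The attribute weights, the f = v / spatial-period rule for sliding-induced vibration frequency, and all function and field names are illustrative assumptions rather than the exact mapping used in the prototype.

```python
from dataclasses import dataclass

# Perceptual bands used for rendering, as defined in the text (Hz)
BANDS_HZ = {"force": (0, 20), "texture": (20, 200), "microtexture": (200, 400)}

@dataclass
class SurfaceAttributes:
    hardness: float          # 0..1
    roughness: float         # 0..1
    friction: float          # 0..1
    micro_period_mm: float   # spatial period of the microtexture

def phsm_targets(attr: SurfaceAttributes, slide_speed_mm_s: float) -> dict:
    """Map physical surface attributes to per-band amplitude/frequency targets."""
    # Sliding over a periodic texture excites vibration at f = v / spatial period.
    texture_hz = slide_speed_mm_s / max(attr.micro_period_mm, 1e-3)
    low = {"band_hz": BANDS_HZ["force"], "amplitude": attr.hardness}
    mid = {"band_hz": BANDS_HZ["texture"],
           "amplitude": 0.6 * attr.roughness + 0.4 * attr.friction,
           "dominant_hz": min(200.0, max(20.0, texture_hz))}
    high = {"band_hz": BANDS_HZ["microtexture"],
            "amplitude": attr.roughness if texture_hz > 200.0 else 0.0,
            "dominant_hz": min(400.0, texture_hz)}
    return {"low": low, "mid": mid, "high": high}
```

The resulting per-band dictionary corresponds to what Section 2.2 refers to as the multi-band perceptual profile of a contact event.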
By computing the spectral position and amplitude of each contact event within the PHSM, a multilayered haptic-feedback target set is generated. These targets are then mapped to physically actuated components in the haptic glove, such as pneumatic microbladders, piezoelectric patches, and vibrotactile motors, allowing the intended tactile sensations to be reconstructed on the skin. This process establishes an end-to-end mapping from perceptual intention to physical actuation, shifting the system design philosophy from “how actuators are driven” to “what perceptual response should be evoked”, consistent with perception-first frameworks emphasised in recent tactile research (Saal and Bensmaia, 2014; Perrone et al., 2024). An important conceptual feature of the PHSM is reverse encoding: instead of beginning from actuator capabilities and asking what sensations they might produce, the framework begins from the desired perceptual outcome and infers the physical stimulation profile needed to approximate that outcome. This perception-first strategy is intended to improve perceptual consistency, enable cross-band integration, and support multimodal actuator coordination.
To support real-time interaction in VR, the PHSM incorporates a psychophysically grounded parameterisation mechanism using the Just Noticeable Difference (JND) principle (Weber, 1834; Fechner, 1860). Each frequency band is assigned a sensitivity interval within which variations in amplitude or frequency are guaranteed to be perceptually discriminable. This prevents unnecessary actuation, limits redundancy, and ensures efficient energy distribution across modalities. As a result, small but meaningful tactile variations remain distinguishable to the user, enabling high-fidelity surface detail rendering even under limited actuation bandwidth.
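Although the manuscript does not state band-specific Weber fractions, the JND guideline can be summarised by the classical Weber relation, under which successive discriminable amplitude levels form a geometric ladder:

\[
\frac{\Delta I}{I} = k_b, \qquad I_{m+1} = (1 + k_b)\, I_m,
\]

where \(k_b\) is the (assumed, band-specific) Weber fraction and \(I_m\) is the m-th discriminable stimulus level.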
Building on the division of the perceptual spectrum, feedback-sensitive intervals were further defined for each frequency band in accordance with the JND principle in perceptual psychology (Gescheider et al., 1997). This ensures that subtle variations in haptic output remain detectable to users and produce perceptually distinct differences, thereby preventing unnecessary consumption of actuation resources or the generation of redundant feedback signals. At the same time, the proposed method no longer derives haptic logic from the perspective of device actuation; instead, it proceeds from the perceptual endpoint, namely what humans are intended to feel, and infers the corresponding physical frequency ranges and output intensities required for their realisation. In other words, the physical implementation is reverse-engineered from the perceptual objective, emphasising a perceptually centred modelling approach. Finally, based on this reverse mapping, a complete signal-encoding structure was developed (see Figure 1). During the encoding process, each virtual interaction event is first located within the perceptual spectrum and subsequently translated into a composite set of multi-band actuation commands, enabling a precise mapping of drive signals from the perceptual space to the execution space.

2.2. Perceptual Spectrum Parameterisation

To operationalise the PHSM, the physical attributes of each virtual contact event are converted into a set of spectral parameters describing amplitude, dominant frequency, bandwidth, and temporal envelope. For example, gross compliance and normal resistance are mapped mainly to the low-frequency band, whereas fine roughness, friction modulation, and boundary-induced oscillation are mapped to the mid- and high-frequency bands.
A JND-inspired quantisation strategy is used to prevent imperceptible or redundant output. In the present manuscript, the JND principle is used as a perceptual design guideline rather than as a completed user-specific psychophysical calibration (Gescheider et al., 1997). Accordingly, each frequency band is assigned a sensitivity interval intended to keep tactile variations within perceptually meaningful ranges while avoiding unnecessary actuation.
This parameterisation process produces a multi-band perceptual profile for each contact event. The profile is then passed to the actuation layer, where separate physical channels are assigned to force support, texture modulation, and sliding-related transient cues. Normalisation is applied across channels so that the combined output remains perceptually coherent despite the different dynamics of the underlying actuators.
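A minimal sketch of this quantisation and normalisation step is given below, assuming the geometric JND ladder noted in Section 2.1; the detection floor (0.05) and Weber fraction (0.15) are placeholder values rather than calibrated constants from the prototype, and the profile format follows the illustrative band dictionary sketched earlier.

```python
import math

def jnd_quantise(amplitude: float, floor: float = 0.05, weber_k: float = 0.15) -> float:
    """Snap an amplitude to the nearest perceptually discriminable level.
    Levels form a geometric ladder floor*(1+k)^m, following Weber's law."""
    if amplitude <= floor:
        return 0.0  # below the assumed detection floor: do not actuate
    m = round(math.log(amplitude / floor) / math.log(1.0 + weber_k))
    return floor * (1.0 + weber_k) ** m

def normalise_profile(profile: dict, headroom: float = 1.0) -> dict:
    """Scale per-band amplitudes so their sum stays within actuator headroom,
    then quantise each band to its nearest JND level."""
    total = sum(band["amplitude"] for band in profile.values())
    scale = min(1.0, headroom / total) if total > 0 else 1.0
    return {name: {**band, "amplitude": jnd_quantise(band["amplitude"] * scale)}
            for name, band in profile.items()}
```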

2.3. Region-Specific Response Functions

Human tactile sensitivity is strongly heterogeneous across the finger surface. The fingertip centre, finger pad, and lateral edge differ in skin thickness, deformation behaviour, receptor density, and sensitivity to fine spatial and temporal cues. To account for these differences, the proposed framework subdivides the index-finger contact surface into three perceptual sub-regions: R1, the fingertip centre; R2, the main finger pad; and R3, the lateral edge used during sliding and boundary confirmation.
A flexible thin-film strain-sensor array is integrated into a finger sleeve to detect contact location, pressure magnitude, contact area, and temporal change. These sensors are used exclusively for sensing and classification. Local tactile rendering is produced by separate vibration-based or piezoelectric elements integrated into the sleeve. This distinction between sensing and actuation is important for the hardware architecture and is maintained throughout the manuscript.
Each sub-region is assigned an independent response profile controlling maximum feedback intensity, spatial attenuation, dominant frequency, and phase offset. Thus, the same high-level perceptual intention may lead to different physical stimulation patterns depending on which part of the finger is in contact with the virtual surface.
Figure 2. Finger-sleeve sensing module, regional division of the index finger, and local tactile-actuation layout.
To enhance fine-grained virtual tactile perception, a two-dimensional flexible strain-sensor array embedded within the finger sleeve provides contact sensing across all regions Ri, while the co-located local tactile actuators in the sleeve serve as the secondary actuation unit. Based on physiological differences in tactile perception across the finger pad, the contact surface of the glove’s finger sleeve is subdivided into multiple sub-perceptual regions R1, R2, …, Rn, including the fingertip centre, main finger pad, and lateral edge. Each region Ri is associated with a distinct set of haptic-feedback characteristics, including:
  • Force sensitivity, defined as the magnitude of response to variations in applied pressure;
  • Texture frequency sensitivity, referring to the ability to perceive high-frequency vibratory stimuli;
  • Perceptual latency tolerance, reflecting subjective tolerance to response delays.
For each sub-perceptual region, a corresponding set of regional gain parameters is assigned to regulate both the intensity and modal bias of the haptic feedback output. By integrating the gain parameter sets across all sub-regions, a region-based response function is constructed, exhibiting spatial normalisation and adaptive dynamic response characteristics, as defined below.
\[
\Psi(x,t) \;=\; \sum_{i=1}^{n} \alpha_i \, e^{-\lambda_i \lvert x - \mu_i \rvert} \, \cos\!\left( 2\pi \beta_i t + \phi_i \right) \, \gamma_i(t)
\]
Ψ(x, t) denotes the integrated feedback intensity to be output by the system at a given time and spatial coordinate, and i indexes the sub-perceptual regions (n = 3 in the present prototype). αi is the static feedback-intensity factor associated with region i, which determines the maximum output amplitude. λi is the spatial attenuation parameter centred on each region, reflecting the degree of tactile concentration. μi is the central coordinate of region i, specifying its spatial location. βi is the feedback frequency factor, corresponding to the frequency of vibration or texture patterns. φi is the initial phase of the feedback signal, which controls synchronisation or phase offset across multiple regions. γi(t) is a time-varying gain function, dynamically adjusted from real-time sensor data and from the classification of contact behaviour (e.g., slip events increase the dynamic weighting); it is treated as a dynamic control factor shared with the contact-state classifier.
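A direct implementation of this response function, assuming the exponential spatial-decay form given above, can be sketched as follows; parameter names mirror the symbols in the equation, and the data layout is an assumption for illustration.

```python
import math

def region_response(x, t, regions, gamma):
    """Integrated feedback intensity Psi(x, t) summed over sub-regions.
    Each region dict holds alpha (max intensity), lam (spatial attenuation),
    mu (region centre, cm), beta (frequency, Hz) and phi (phase, rad);
    gamma[i] is the current value of the dynamic gain gamma_i(t)."""
    psi = 0.0
    for r, g in zip(regions, gamma):
        spatial = math.exp(-r["lam"] * abs(x - r["mu"]))            # attenuation around the region centre
        temporal = math.cos(2.0 * math.pi * r["beta"] * t + r["phi"])
        psi += r["alpha"] * spatial * temporal * g
    return psi
```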

2.4. Contact-State Classification and Predictive Feedforward Control

Three contact states are considered in the present prototype: concentrated contact, distributed contact, and slip contact. Concentrated contact is characterised by a relatively small contact area with a high local pressure peak; distributed contact is characterised by a broader and more stable pressure distribution; and slip contact is characterised by rapid centroid displacement and short pressure pulses associated with sliding or boundary traversal.
The control logic uses these state estimates to allocate haptic channels according to task relevance. When concentrated contact is detected, the controller prioritises local high-frequency texture rendering at the dominant sub-region while preserving moderate force support. When distributed contact is detected, the controller shifts emphasis toward broader low-frequency support with reduced high-frequency intensity. When slip contact is detected, normal support is reduced and a more localised phase-modulated texture signal is generated to represent rapid lateral motion.
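The following sketch expresses this rule-based classification and channel-allocation logic in Python; the slip thresholds follow the values quoted in Section 3.1 (15 cm/s, 100 ms), while the area and force thresholds for concentrated contact and the channel weights are illustrative assumptions rather than measured settings.

```python
def classify_contact(area_cm2, peak_force_n, centroid_speed_cm_s, pulse_ms):
    """Rule-based contact-state classification (illustrative thresholds)."""
    if centroid_speed_cm_s > 15.0 and pulse_ms < 100.0:
        return "slip"            # rapid centroid displacement with short pulses
    if area_cm2 < 1.0 and peak_force_n > 0.5:
        return "concentrated"    # small area, high local pressure peak
    return "distributed"         # broader, more stable pressure pattern

def allocate_channels(state):
    """Relative emphasis (0..1) of the force-support and local-texture channels."""
    if state == "concentrated":
        return {"force_support": 0.5, "local_texture": 1.0}
    if state == "distributed":
        return {"force_support": 1.0, "local_texture": 0.3}
    return {"force_support": 0.2, "local_texture": 1.0}     # slip: reduce normal support
```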
A short-horizon predictive feedforward mechanism is also incorporated. In the present prototype description, prediction is implemented as rule-based kinematic estimation using recent motion trajectory, velocity, and acceleration to anticipate the next contact state within an approximately 5-10 ms horizon. The objective is to pre-arm relevant channels before contact fully develops. A learning-based predictor, such as a lightweight neural network, is considered a future extension rather than a fully validated element of the present manuscript.
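As a sketch of the rule-based kinematic prediction, a constant-acceleration extrapolation over an 8 ms horizon (within the stated 5-10 ms range) might look as follows; the pre-arming criterion and variable names are assumptions for illustration, not the prototype's exact logic.

```python
def predict_next_position(p, v, a, horizon_s=0.008):
    """Constant-acceleration extrapolation of fingertip position over a short horizon."""
    return p + v * horizon_s + 0.5 * a * horizon_s ** 2

def should_pre_arm(gap_mm, closing_speed_mm_s, horizon_s=0.008):
    """Pre-arm the relevant channels if contact is expected within the horizon."""
    return closing_speed_mm_s > 0.0 and gap_mm <= closing_speed_mm_s * horizon_s
```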
In addition to motion prediction, the controller applies perceptual continuity constraints so that transitions between surfaces or contact states do not create abrupt amplitude jumps or implausible temporal discontinuities. This is particularly important for high-frequency texture cues, which are more sensitive to latency and onset irregularities.

2.5. Dual-Actuation Hardware Architecture

To address the bandwidth limitations of a single actuator type, the system adopts a dual-actuation cooperative architecture (see Table 1). The primary actuation layer is the glove-based force-feedback unit, represented in this prototype by the HaptX Gloves G1. This layer is responsible for macroscopic, low-frequency force cues such as support, resistance, and compliance-related deformation.
The secondary actuation layer is embedded in the finger sleeve and consists of local texture actuators placed near the sensing region. These actuators provide higher-frequency cues associated with fine roughness, granularity, edge transitions, and slip-related pulses. The sensing layer and the secondary actuation layer are mechanically integrated but functionally distinct.
Control dominance is dynamically reassigned according to the current interaction semantics. Broad pressing and stable exploration favour the primary force-feedback channel, whereas fast local sliding and boundary confirmation favour the secondary texture channel. When appropriate, both layers are driven simultaneously so that macroscopic force cues and microscopic texture cues can be perceived together.

3. Results: Prototype Demonstration in a Virtual Fabric Task

This section reports a proof-of-concept operating scenario (see Figure 3) for the proposed system. It is intended to illustrate how the framework behaves during representative texture exploration in VR. It should therefore be read as a prototype demonstration rather than as a completed controlled user study with inferential statistical analysis.
In the demonstration scenario, the user explores a virtual fabric sample in VR and attempts to identify tactile characteristics such as roughness, boundary variation, and brushing resistance. The right index finger is used as the main exploratory finger. Three typical actions are considered: pressing with the fingertip centre, stroking with the finger pad, and rapid lateral sliding with the finger edge.

3.1. Prototype Platform and Demonstration Conditions

The haptic glove serves as the primary actuation layer and provides stable macroscopic force cues during contact with the virtual fabric. The finger sleeve hosts both the flexible strain-sensor array and the local tactile actuators. In the present prototype, the strain sensors measure contact pressure, contact area, and centroid movement, while the local actuators provide higher-frequency texture signals at the index finger.
During operation, the dual-actuation system is governed by a logical control module that continuously monitors variations in the user’s haptic interaction context, such as sliding interruptions, abrupt material transitions, rapid tapping events, or complex motion trajectories. When pronounced changes in the contact environment are detected, or when the current haptic task exhibits high spatial or temporal complexity, the system dynamically reconfigures its control dominance. Specifically, the actuation modality most aligned with the prevailing perceptual objective is prioritised, while the complementary unit is retained either as an auxiliary actuator or transitioned into a synchronous cooperative mode. Under higher-level control policies, coordinated activation of the primary and secondary units is achieved, enabling the simultaneous delivery of macroscopic force cues and microscopic tactile details during critical interaction phases. This coordinated strategy ensures continuity, stability, and perceptual realism of haptic feedback even under abrupt contact transitions or highly dynamic interaction conditions, thereby preventing perceptual discontinuities or incongruent sensations caused by actuator bandwidth mismatches or feedback latency.
The index finger is pre-segmented into three perceptual sub-regions (see Figure 2): R1, the fingertip centre for fine tactile inspection; R2, the main finger pad for broad-area stroking; and R3, the lateral edge for sliding-based boundary confirmation. Representative parameter ranges are selected for these regions as follows: αi in [0.2, 1.0] for intensity scaling, βi in [20, 400] Hz for frequency allocation, λi in [1, 10] for spatial concentration, φi in [0, π] for phase control, and γi(t) in [0.0, 2.0] for dynamic gain modulation.
Contact-state classification is updated from the sensor stream. In the demonstration described below, concentrated contact is associated with small-area, high-intensity pressure; distributed contact is associated with wider, more stable pressure patterns; and slip contact is associated with rapid centroid displacement, here represented by a displacement speed above 15 cm/s and local pulse duration below 100 ms.

3.2. Demonstration of Concentrated Contact

A representative concentrated-contact case occurs when the user presses the virtual fabric with the central fingertip. The detected pressure peak is located near x = μ1 = 3.5 cm and the measured force is approximately 0.65 N at t = 1.2 s. A representative parameter set is then assigned as follows: R1 = {α1 = 0.9, λ1 = 7, μ1 = 3.5, β1 = 320 Hz, φ1 = 0.2π}, γ1(1.2) = 1.8; R2 = {α2 = 0.5, λ2 = 4, μ2 = 5.0, β2 = 160 Hz, φ2 = 0.4π}, γ2(1.2) = 0.7; and R3 = {α3 = 0.3, λ3 = 6, μ3 = 6.5, β3 = 260 Hz, φ3 = 0.6π}, γ3(1.2) = 0.4.
Under these conditions, the controller places primary emphasis on R1. The local texture actuator near the fingertip is driven at 320 Hz to represent fine roughness cues, while the glove-based force-feedback channel is simultaneously engaged to provide stable resistive support. The outputs of R2 and R3 are reduced toward their minimum thresholds to avoid unnecessary actuation. This operating mode is intended to increase local texture salience during high-acuity fingertip inspection.
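As a worked example under the exponential-decay form of Ψ assumed in Section 2.3, the contribution of each region at the detected contact point can be evaluated directly; the numbers below reproduce the representative parameter set listed above.

```python
from math import pi, exp, cos

# (alpha, lambda, mu [cm], beta [Hz], phi [rad], gamma at t = 1.2 s) for R1-R3
regions = [
    (0.9, 7.0, 3.5, 320.0, 0.2 * pi, 1.8),   # R1, fingertip centre
    (0.5, 4.0, 5.0, 160.0, 0.4 * pi, 0.7),   # R2, main finger pad
    (0.3, 6.0, 6.5, 260.0, 0.6 * pi, 0.4),   # R3, lateral edge
]
x, t = 3.5, 1.2   # detected pressure peak [cm] and time [s]
psi = sum(a * exp(-lam * abs(x - mu)) * cos(2 * pi * b * t + phi) * g
          for a, lam, mu, b, phi, g in regions)
# At x = 3.5 cm the R2 and R3 terms are suppressed by factors exp(-6) and exp(-18),
# so the output is dominated by R1, whose envelope is alpha1 * gamma1 = 1.62.
```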

3.3. Demonstration of Distributed Contact

When the user transitions from fingertip pressing to broader stroking over the fabric, the contact centroid shifts toward μ2 = 5.0 cm and the activated area expands. In the representative scenario, the activated pressure area grows from approximately 0.5 cm² to 1.6 cm², the number of engaged sensor elements increases from 8 to 19, and the mean contact force decreases from 0.65 N to 0.48 N while remaining relatively stable for more than 250 ms.
This state is classified as distributed contact. The control logic increases γ2(t) to 1.5 and reduces the effective texture emphasis by lowering α2 and β2, with β2 reduced from 160 Hz to 120 Hz in the representative example. The primary force-support channel remains active so that the user perceives a broader, smoother brushing sensation rather than a sharply localised roughness cue. Actuation is sequenced so that low-frequency support develops first, followed by lower-amplitude texture modulation.

3.4. Demonstration of Slip Contact

A representative slip-contact case occurs when the user rapidly sweeps the lateral finger edge across a boundary of the virtual fabric to confirm edge precision. In the described demonstration, the pressure centroid shifts from approximately μ = 6.5 cm to μ = 3.9 cm within 0.12 s, corresponding to an average displacement velocity of about 22 cm/s. This motion is accompanied by short local pressure pulses and an oblique sliding trajectory across the sensor array.
The control system classifies this state as slip contact and shifts priority toward R3. In the representative parameter setting, γ3(t) is increased to 1.7 and β3 is set to 280 Hz. The local texture channel is phase-modulated with a delay of 0.3π to represent lateral movement across boundary features, while the normal-support channel is reduced to decrease apparent sliding resistance. This state is intended to enhance the perceptibility of boundary transitions and rapid local motion.

3.5. Summary of Demonstration Outcomes

Across these three representative interaction states, the prototype demonstrates a coherent mapping from sensed contact conditions to region-specific haptic output. Concentrated contact leads to high-acuity local texture emphasis, distributed contact leads to broader and smoother force-supported rendering, and slip contact leads to phase-modulated local texture signalling with reduced normal support.
Although these outcomes are reported here as representative system behaviours rather than statistically validated perceptual results, they show that the PHSM, the regional response model, the contact classifier, and the dual-actuation architecture can operate as a single closed-loop control framework.

4. Discussion

The present study proposed a perceptually grounded haptic-rendering framework for VR texture interaction, centred on the Perceptual Haptic Spectrum Model (PHSM). In contrast to conventional actuator-driven strategies, the proposed approach begins from the perceptual endpoint and reconstructs the corresponding physical stimulation requirements through reverse encoding. This design logic is important because the primary challenge in VR haptics is not merely the generation of force or vibration, but the generation of tactile signals that are meaningful to human perception. By linking virtual surface properties to frequency-dependent perceptual bands, and by integrating region-specific response modelling, predictive feedforward control, and a dual-actuation cooperative architecture, the framework provides a coherent route for reproducing subtle textural cues that are often lost in existing haptic-glove systems.
A key contribution of this work lies in shifting the control objective from hardware output to perceptual fidelity. Many current glove-based systems treat haptic rendering as a problem of triggering available actuators once a pressure threshold is exceeded. Although such strategies can provide basic contact awareness, they are generally insufficient for simulating the layered tactile qualities involved in surface exploration, especially for fine textures, friction transitions, and sliding-induced microvibrations. In the proposed model, however, tactile events are encoded as structured multi-band signals containing temporal, spatial, and spectral information. This allows low-frequency force cues and high-frequency textural cues to coexist within the same interaction, thereby better reflecting the way human mechanoreceptive channels jointly contribute to texture perception.
The region-specific response mechanism further strengthens the physiological plausibility of the framework. Human tactile sensitivity is not uniform across the finger surface, and a major weakness of many wearable haptic interfaces is their tendency to apply homogeneous feedback over anatomically heterogeneous skin regions. By dividing the fingertip into perceptual sub-regions and assigning each region distinct gain, attenuation, frequency, and phase parameters, the proposed system adapts tactile output to local sensory characteristics. In the prototype scenario, this strategy enabled the fingertip centre, finger pad, and lateral edge to support different perceptual roles during pressing, stroking, and sliding, respectively. Such behaviour is consistent with natural exploratory touch and is particularly relevant for applications involving virtual material discrimination, rehabilitation training, skill simulation, and virtual prototyping.
Another important aspect of the framework is its use of JND-inspired parameterisation. In practical haptic systems, not every physical variation contributes meaningfully to perception; some variations are too small to be reliably detected, whereas others are redundant or energetically inefficient. By constraining spectral output to perceptually meaningful intervals, the PHSM is intended to improve the efficiency of feedback generation while maintaining tactile salience. The present manuscript uses JND as a design principle rather than as a completed user-specific psychophysical calibration; however, this principle still provides a valuable basis for later personalised tuning.
The predictive feedforward mechanism addresses another central limitation in VR haptics, namely temporal lag. In immersive interaction, delays between motion onset and tactile feedback can reduce the sense of simultaneity, particularly during rapid manipulations. The present framework incorporates short-horizon motion-based prediction and pre-activation logic to estimate the upcoming contact state before it fully develops. Conceptually, this extends haptic rendering from a purely reactive paradigm toward an anticipatory one. In the present prototype description, prediction is implemented as rule-based kinematic estimation, while learning-based prediction remains a future extension that should be benchmarked under more complex motion conditions.
The dual-actuation cooperative architecture also represents a pragmatic system contribution. The primary glove-based channel reproduces stable macroscopic force sensations, whereas the secondary finger-sleeve channel provides rapid local texture augmentation. These two layers complement each other functionally: one maintains gross contact realism and resistance, while the other enriches perceived surface detail. Importantly, the rule-based channel-allocation strategy allows feedback emphasis to change with the interaction context. Broad pressing can favour force support, whereas rapid sliding and boundary tracing can favour local high-frequency texture cues. This adaptive role allocation suggests that future advances in VR haptics may depend not only on stronger actuators, but also on more intelligent multimodal coordination.
The prototype demonstration reported in this paper illustrates the internal consistency of the framework. The concentrated-contact, distributed-contact, and slip-contact examples show that the PHSM, regional response functions, dynamic gain control, and actuation scheduling can be organised within a single closed-loop architecture. From a design perspective, this is a strength of the work because perceptual modelling, sensing, and actuation are not treated as isolated modules.
At the same time, several limitations should be acknowledged. First, the current manuscript is primarily model-driven and scenario-based. Although the prototype description demonstrates feasibility and logical coherence, it does not yet provide a large-scale psychophysical study showing statistically significant improvements in perceptual realism, discrimination accuracy, or task performance relative to baseline methods. Second, the regional parameters used here are representative rather than fully personalised. Because tactile sensitivity varies across users, subject-specific calibration will likely be necessary to fully exploit the framework. Third, the predictive control module should be evaluated more rigorously under noisy motion conditions, rapid task switching, and more complex multi-finger interactions. Finally, the present implementation is focused on fingertip-level texture interaction and does not yet address whole-hand coordination or bimanual manipulation.
These limitations suggest several directions for future work. A first priority is to conduct controlled psychophysical experiments and comparative user studies to evaluate the proposed framework against conventional threshold-trigger and single-modality rendering approaches. Useful outcome measures would include texture discrimination accuracy, perceived realism, subjective synchrony, immersion, and workload. A second direction is the development of adaptive calibration procedures that personalise JND intervals, regional sensitivity weights, and actuation gains. A third direction is the refinement of the predictive module using lightweight learning-based models trained on larger datasets of hand motion and contact transitions. Further work may also extend the PHSM to other material attributes, including anisotropic friction, stick-slip behaviour, graded softness, and more complex material transitions.

5. Conclusions

This paper presented a perceptually grounded haptic-rendering framework for fine texture interaction in VR. The proposed PHSM maps virtual surface properties to multi-band perceptual targets, while region-specific response functions, contact-aware channel allocation, short-horizon predictive control, and dual-actuation coordination together support context-sensitive tactile rendering.
The prototype demonstration in a virtual fabric task showed how the framework can distinguish among concentrated, distributed, and slip contact states and reconfigure feedback emphasis accordingly. These results should be interpreted as proof-of-concept system operation rather than as a completed large-scale user study. Nevertheless, the manuscript establishes a coherent foundation for future quantitative evaluation of perceptually guided VR haptic interaction.
Overall, the proposed framework suggests that higher-fidelity virtual tactile rendering may be achieved by organising haptic control around perceptual structure rather than actuator behaviour alone. This perspective may be useful for the future design of adaptive, multimodal, and low-latency haptic interfaces in VR.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Use of Artificial Intelligence

During the preparation of this work, the first author used ChatGPT to lightly edit transitions between paragraphs in the early draft. The first author then edited this output and used ChatGPT to improve the grammar, and subsequent drafts were reviewed and edited by the entire authorship team. The authors take full responsibility for the content of the published article.

Data Availability Statement

The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

No potential conflict of interest was reported by the author(s).

References

  1. Bensmaïa, S. J.; Hollins, M. The vibrations of texture. Somatosensory & Motor Research 2003, 20(1), 33–43.
  2. Culbertson, H.; Schorr, S. B.; Okamura, A. M. Haptics: The present and future of artificial touch sensation. Annual Review of Control, Robotics, and Autonomous Systems 2018, 1, 385–409.
  3. Di Luca, M.; Mahnan, A. Perceptual limits of visual-haptic simultaneity in virtual reality interactions. In 2019 IEEE World Haptics Conference (WHC); IEEE, 2019; pp. 67–72.
  4. Fechner, G. T. Elemente der Psychophysik; Breitkopf u. Härtel, 1860; Vol. 2.
  5. Gescheider, G. A.; Thorpe, J. M.; Goodarz, J.; Bolanowski, S. J. The effects of skin temperature on the detection and discrimination of tactile stimulation. Somatosensory & Motor Research 1997, 14(3), 181–188.
  6. Johnson, K. O. Neural basis of haptic perception. In Stevens' Handbook of Experimental Psychology: Sensation and Perception; Pashler, H., Wixted, J., Eds.; Wiley, 2002.
  7. Klatzky, R. L.; Lederman, S. J. Multisensory texture perception. In Multisensory Object Perception in the Primate Brain; Springer, 2010; pp. 211–230.
  8. Loomis, J. M.; Lederman, S. J. Tactual perception. In Handbook of Perception and Human Performance; Wiley, 1986; Vol. 2, pp. 31.1–31.41.
  9. Pacchierotti, C.; Sinclair, S.; Solazzi, M.; Frisoli, A.; Hayward, V.; Prattichizzo, D. Wearable haptic systems for the fingertip and the hand: Taxonomy, review, and perspectives. IEEE Transactions on Haptics 2017, 10(4), 580–600.
  10. Perrone, K. H.; Abdelaal, A. E.; Pugh, C. M.; Okamura, A. M. Haptics: The science of touch as a foundational pathway to precision education and assessment. Academic Medicine 2024, 99(4 Suppl), S84–S88.
  11. Razzaque, S.; Swapp, D.; Slater, M.; Whitton, M. C.; Steed, A. Redirected walking in place. In Proceedings of the Eurographics Workshop on Virtual Environments (EGVE); 2002; Vol. 2, pp. 123–130.
  12. Saal, H. P.; Bensmaia, S. J. Touch is a team effort: Interplay of submodalities in cutaneous sensibility. Trends in Neurosciences 2014, 37(12), 689–697.
  13. Weber, E. H. De Pulsu, Resorptione, Auditu et Tactu: Annotationes Anatomicae et Physiologicae; C. F. Koehler, 1834.
Figure 1. Workflow of the proposed PHSM-based perceptual-to-actuation pipeline.
Figure 3. Prototype setup showing the primary haptic glove and the secondary finger-sleeve module integrated with sensing and local tactile actuation.
Table 1. Functional roles of the main sensing and actuation components.
Component | Physical role | Signal domain | Main function in the framework
HaptX Gloves G1 | Primary actuation | Low-frequency force support | Provides gross pressure, resistance, and compliance-related cues
Flexible strain-sensor array | Sensing | Pressure/deformation measurement | Detects contact location, area, pressure distribution, and temporal change
Local sleeve actuators | Secondary actuation | Mid-/high-frequency texture output | Delivers local roughness, boundary, and slip-related tactile cues
Motion-capture/VR tracking | Kinematic sensing | Pose, velocity, acceleration | Supports short-horizon predictive feedforward control