Preprint
Article

This version is not peer-reviewed.

A Federated and Differentially Private Incentive–Marketing Framework for Privacy-Preserving Cross-Channel Measurement in AI-Powered Digital Commerce

Submitted: 21 February 2026
Posted: 28 February 2026


Abstract
In the U.S. digital economy, small and medium-sized businesses (SMBs) and creators in remote regions face structural disadvantages in access to integrated advertising and incentive platforms, largely due to accelerating privacy regulations and the fragmentation of cross-channel datasets. This paper proposes a unified federated and differentially private measurement framework that integrates Topics/Protected Audience, Attribution Reporting, and SKAdNetwork, aiming to achieve privacy-preserving incentive optimization and cross-channel effectiveness measurement for web and mobile environments. The framework prioritizes compliant data usage, resolves data silos across ad ecosystems, and supports privacy-preserving recommendation and incentive allocation. Technically, we design a hybrid architecture that combines federated learning, differential privacy, and low-latency attribution aggregation, while ensuring end-to-end consistency across uplift modeling, multi-touch attribution (MTA), and event-level reporting. Empirical analysis compares the proposed model with state-of-the-art privacy-preserving baselines (e.g., last-touch attribution with DP aggregation), demonstrating substantial gains in accuracy, robustness, and reporting fidelity under strict privacy constraints.

1. Introduction

The rapid transformation of digital commerce in the United States has placed unprecedented pressure on small and medium-sized businesses (SMBs) and creators, especially those located in remote or economically disadvantaged regions. While large platforms benefit from advanced data infrastructure and integrated advertising–incentive systems, SMBs continue to encounter structural limitations in audience targeting, attribution accuracy, and return-on-investment optimization. These disparities have been amplified by tightening privacy regulations, the deprecation of third-party cookies, and the proliferation of heterogeneous data governance frameworks across advertising ecosystems. As a consequence, SMBs increasingly operate within data-constrained environments where compliant data usage is prioritized but actionable signals for marketing optimization are significantly diminished.
At the technical level, digital advertising has entered a transition period marked by a shift from identity-based targeting toward privacy-preserving interest signals such as Topics and Protected Audience, as well as platform-controlled attribution frameworks including Attribution Reporting for the web and SKAdNetwork for mobile applications. Although these mechanisms enhance privacy guarantees, they also introduce fragmented reporting pipelines, increased latency, coarse-grained measurement granularity, and inconsistencies between web and app environments. Cross-channel data silos, which were historically alleviated by third-party cookies, now reemerge as major bottlenecks for accurate cross-device and cross-platform marketing measurement.
To address these challenges, research attention has shifted toward federated learning (FL) and differential privacy (DP) as foundational technologies enabling compliant, high-fidelity measurement without exposing raw user-level data. FL enables collaborative model training across devices or enterprise boundaries, mitigating the privacy risks associated with centralized data aggregation. DP injects mathematically bounded noise into the training or reporting pipeline, offering quantifiable privacy guarantees that align with modern regulatory requirements. However, deploying FL + DP in real marketing environments presents significant complexities, including handling attribution delays, ensuring cross-channel consistency, reconciling event-level and aggregated reporting, and accommodating the operational constraints of SMB advertisers.
This study proposes a unified privacy-preserving incentive and marketing measurement framework that integrates Topics, Protected Audience, Attribution Reporting, and SKAdNetwork under a federated and differential privacy paradigm. The framework places compliant data usage at the core while providing an end-to-end solution for cross-channel measurement, incentive allocation, uplift estimation, and multi-touch attribution under strict privacy constraints. On the web side, the model leverages interest-based signals and site-defined protected audience cohorts, while on the mobile side it incorporates SKAdNetwork event-level reporting and privacy thresholds. A federated learning architecture harmonizes these heterogeneous signals, supported by DP-based noise calibration to ensure both privacy protection and measurement accuracy.
Furthermore, this work contributes a comparative evaluation pipeline that contrasts the proposed federated-DP framework with simpler, non-federated privacy-preserving baselines across three dimensions: (1) attribution accuracy and reporting delay, (2) uplift and optimization effectiveness, and (3) cross-channel consistency. Given the industry-wide deprecation of third-party cookies, our evaluation focuses on performance differentials within the privacy-preserving paradigm. Overall, this research advances the state of privacy-preserving marketing measurement, offering a technically feasible, regulatory-aligned, and SMB-oriented solution for the emerging post-cookie era. It provides both theoretical and practical foundations for marketing ecosystems seeking to balance privacy protection with measurable advertising effectiveness in AI-powered digital commerce.

3. Method

This section formalizes the proposed federated and differentially private incentive marketing framework for cross-channel measurement in AI-powered digital commerce. As illustrated in Figure 1, the end-to-end system integrates client-side web and app privacy signals, unified channel encoders, a federated learning backbone trained under differential privacy, a multi-touch attribution and uplift module, and an incentive optimization layer for SMB advertisers. We first introduce the core entities and notation, then present the federated learning architecture that harmonizes web- and app-side measurement streams. Next, we describe the differentially private reporting mechanisms that ensure consistency between event-level modeling and privacy-preserving aggregation. Finally, we detail the uplift-based incentive allocation module that translates privacy-compliant measurement into economically meaningful optimization tools for SMB participants.

3.1. System Model and Notation

We consider a digital advertising ecosystem with users, advertisers, publishers, and a central orchestrator:
  • Users: the set of users is denoted by $U$.
  • Advertisers: the set of advertisers is denoted by $A$.
  • Publishers (web or app inventory owners): the set of publishers is denoted by $P$.
  • Channels: we denote channels by $K$, typically including web channels using Topics/Protected Audience and mobile channels using SKAdNetwork.
  • The central orchestrator (platform) is denoted by $C$.
For each user $u \in U$, we observe a sequence of ad-related interaction events across web and app channels,
$$\{(x_{u,t},\, a_{u,t},\, y_{u,t},\, k_{u,t})\}_{t=1}^{T_u},$$
where:
  • $x_{u,t} \in \mathbb{R}^d$ is the context feature vector at time $t$ (including device, placement, local engagement, and coarse-grained metadata);
  • $a_{u,t} \in A \cup \{\varnothing\}$ is the action (which advertiser or campaign was shown; $\varnothing$ denotes no ad);
  • $y_{u,t} \in \{0,1\}$ is the binary outcome (e.g., conversion within a relevant time window);
  • $k_{u,t} \in K$ indicates the channel (e.g., web–Topics, web–Protected Audience, app–SKAdNetwork).
To integrate privacy-preserving web and app signals, we define channel-specific encoders:
For web events with Topics and Protected Audience (PA) signals,
$$z^{\mathrm{web}}_{u,t} = \psi_{\mathrm{web}}(x_{u,t},\, \mathrm{Topics}_{u,t},\, \mathrm{PA}_{u,t}),$$
where $\psi_{\mathrm{web}}(\cdot)$ is a deterministic feature mapping that transforms event-level context and interest signals into a representation suitable for federated modeling.
For app events with SKAdNetwork reporting,
$$z^{\mathrm{app}}_{u,t} = \psi_{\mathrm{app}}(x_{u,t},\, \mathrm{SKAN}_{u,t}),$$
where $\mathrm{SKAN}_{u,t}$ captures SKAdNetwork postbacks and coarse-grained conversion values.
We write the unified representation as
$$z_{u,t} = \begin{cases} z^{\mathrm{web}}_{u,t}, & \text{if } k_{u,t} \text{ is a web channel}, \\ z^{\mathrm{app}}_{u,t}, & \text{if } k_{u,t} \text{ is an app channel}. \end{cases}$$
For brevity, we denote the per-event tuple and event sets as
$$e_{u,t} = (z_{u,t},\, a_{u,t},\, y_{u,t},\, k_{u,t}), \qquad E_u = \{e_{u,t}\}_t, \qquad E = \bigcup_{u \in U} E_u.$$
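The channel dispatch above can be sketched in Python. The concrete feature mappings `psi_web` and `psi_app` below are hypothetical placeholders (plain concatenation) standing in for whatever learned or deterministic encoders a deployment would actually use:

```python
def psi_web(x, topics, pa):
    # Toy web encoder: concatenate context features with coarse
    # Topics and Protected Audience signals (hypothetical mapping).
    return x + topics + pa

def psi_app(x, skan):
    # Toy app encoder: concatenate context features with
    # SKAdNetwork postback values (hypothetical mapping).
    return x + skan

def encode_event(x, channel, topics=None, pa=None, skan=None):
    """Return the unified representation z_{u,t} for one event,
    dispatching on whether the channel is web- or app-side."""
    if channel.startswith("web"):
        return psi_web(x, topics or [], pa or [])
    return psi_app(x, skan or [])
```

In a production system the encoders would be learned jointly with the federated model; the dispatch structure, however, mirrors the piecewise definition of $z_{u,t}$ above.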
Our goal is to learn a global model $f_\theta$ with parameters $\theta$ that supports:
  • Outcome prediction: estimate the probability of conversion under different ad actions, $\hat{p}(y_{u,t}=1 \mid z_{u,t}, a_{u,t}) = f_\theta(z_{u,t}, a_{u,t})$.
  • Uplift estimation: estimate the incremental effect of showing an ad versus not showing an ad.
  • Multi-touch attribution (MTA): compute credit weights over the sequence of user touches.
  • Incentive allocation: translate uplift and attribution metrics into incentive scores and budget allocation while respecting privacy guarantees.
All of the above are implemented in a fully federated and differentially private fashion, mediated by the orchestrator $C$.
Federated Outcome and Uplift Modeling
Users (or small publisher/SMB clusters) act as federated clients. Let $I$ denote the index set of clients, where each client $i \in I$ holds a local dataset
$$D_i = \{(z_j,\, a_j,\, y_j)\}_{j=1}^{n_i}$$
derived from $\{E_u : u \in U_i\}$ that never leaves the device or local enclave.
We define the global empirical risk minimization (ERM) objective as
$$\min_\theta L(\theta) = \sum_{i \in I} \frac{n_i}{n}\, L_i(\theta), \qquad L_i(\theta) = \frac{1}{n_i} \sum_{j=1}^{n_i} \ell\big(f_\theta(z_j, a_j),\, y_j\big),$$
where $n = \sum_{i \in I} n_i$ and $\ell(\cdot,\cdot)$ is a prediction loss, typically the logistic loss
$$\ell(\hat{y}, y) = -\big(y \log \hat{y} + (1-y)\log(1-\hat{y})\big).$$
The model $f_\theta$ can be factorized into a feature encoder and task heads,
$$f_\theta(z, a) = \sigma\big(g_{\theta_{\mathrm{head}}}(h_{\theta_{\mathrm{enc}}}(z),\, a)\big),$$
where $h_{\theta_{\mathrm{enc}}}$ is a shared representation network, $g_{\theta_{\mathrm{head}}}$ is a task-specific head, and $\sigma(\cdot)$ is the logistic sigmoid.
To support incremental lift estimation, we employ a two-head structure with potential-outcome predictions $\mu_1(z)$ (conversion probability when an ad is shown) and $\mu_0(z)$ (conversion probability with no ad). The individual-level uplift estimate is then
$$\tau(z) = \mu_1(z) - \mu_0(z).$$
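As a minimal illustration of the two-head potential-outcome structure, the toy model below uses a one-dimensional linear encoder and scalar head weights; all names and shapes are illustrative stand-ins, not the paper's actual architecture:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def two_head_model(z, w_enc, w_head0, w_head1):
    """Shared toy linear encoder h(z) followed by two logistic heads:
    mu0 (control: no ad shown) and mu1 (treated: ad shown)."""
    h = sum(wi * zi for wi, zi in zip(w_enc, z))  # h_enc(z), toy version
    mu0 = sigmoid(w_head0 * h)
    mu1 = sigmoid(w_head1 * h)
    return mu0, mu1

def uplift(z, w_enc, w_head0, w_head1):
    """Individual-level uplift tau(z) = mu1(z) - mu0(z)."""
    mu0, mu1 = two_head_model(z, w_enc, w_head0, w_head1)
    return mu1 - mu0
```

When the two heads coincide the uplift is exactly zero; a treated head that responds more strongly to the shared representation yields positive uplift, matching the definition of $\tau(z)$.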
The orchestrator $C$ runs a variant of FedAvg over discrete communication rounds $t = 0, 1, 2, \ldots$. At round $t$:
  1. $C$ maintains the current model $\theta^{(t)}$ and selects a subset of clients $S_t \subseteq I$.
  2. Each client $i \in S_t$ initializes its local model with $\theta^{(t)}$ and performs $E$ steps of local stochastic gradient descent (SGD),
$$\theta_i^{(t,0)} = \theta^{(t)}, \qquad \theta_i^{(t,e+1)} = \theta_i^{(t,e)} - \eta\, g_i^{(t,e)},$$
where $g_i^{(t,e)}$ is the gradient on a mini-batch and $\eta$ is the learning rate. After $E$ steps we denote the final local parameters as $\theta_i^{(t+1)}$.
  3. The orchestrator aggregates the local updates to obtain the new global parameters,
$$\theta^{(t+1)} = \sum_{i \in S_t} \frac{n_i}{\sum_{j \in S_t} n_j}\, \theta_i^{(t+1)}.$$
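The weighted aggregation step can be sketched as follows. This is a plain-Python sketch over flat parameter lists; a real orchestrator would operate on full model state, but the weighting by client sample counts is the same:

```python
def fedavg_round(global_params, client_updates):
    """One FedAvg aggregation step.

    client_updates: list of (n_i, theta_i) pairs, where theta_i is a
    client's locally trained parameter vector and n_i its sample count.
    Returns the sample-size-weighted average of the client parameters.
    """
    total = sum(n for n, _ in client_updates)
    agg = [0.0] * len(global_params)
    for n, theta in client_updates:
        for d in range(len(agg)):
            agg[d] += (n / total) * theta[d]
    return agg
```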
This procedure is integrated with differential privacy in the next subsection.

3.2. Differentially Private Measurement and Attribution

The central requirement of the proposed framework is that all model training, attribution statistics, and reporting are $(\epsilon, \delta)$-differentially private with respect to individual users or SMBs.

3.3. Differential Privacy Preliminaries

A randomized mechanism $M$ that maps datasets to outputs satisfies $(\epsilon, \delta)$-differential privacy if, for any pair of neighboring datasets $D$ and $D'$ differing in a single user's data and for any measurable set $S$,
$$\Pr[M(D) \in S] \le e^{\epsilon}\, \Pr[M(D') \in S] + \delta.$$
In our setting, mechanisms include:
The federated learning update mechanism (via DP-SGD).
The aggregate attribution and conversion reporting mechanisms (for web Attribution Reporting and SKAdNetwork-like summaries).

3.4. DP-SGD for Federated Learning

To guarantee differential privacy in training, we employ DP-FedAvg with the Gaussian mechanism. At each round $t$, for each client $i \in S_t$: for each example $j$ in the local mini-batch $B_i^{(t)}$, compute the gradient $g_{i,j}^{(t)} = \nabla_\theta\, \ell_{i,j}(\theta^{(t)})$ and clip its $L_2$ norm:
$$\bar{g}_{i,j}^{(t)} = g_{i,j}^{(t)} \Big/ \max\Big(1,\, \frac{\|g_{i,j}^{(t)}\|_2}{C}\Big).$$
The server collects the clipped gradient sums from clients, adds Gaussian noise, and performs the update:
$$g^{(t)} = \frac{1}{|S_t|}\Big(\sum_{i \in S_t} \sum_{j \in B_i^{(t)}} \bar{g}_{i,j}^{(t)} + \mathcal{N}(0,\, \sigma^2 C^2 I)\Big),$$
$$\theta^{(t+1)} = \theta^{(t)} - \eta\, g^{(t)},$$
where $C$ is the clipping norm, $\sigma$ is the noise multiplier calibrated to the target $(\epsilon_{\mathrm{train}}, \delta_{\mathrm{train}})$ budget using the moments accountant, and $I$ is the identity matrix.
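A minimal sketch of the per-example clipping and noisy aggregation steps, assuming flat gradient vectors and a caller-supplied noise multiplier; the privacy accounting that calibrates sigma is out of scope here:

```python
import math
import random

def clip_l2(g, C):
    """Clip gradient g so its L2 norm is at most C (DP-SGD clipping)."""
    norm = math.sqrt(sum(gi * gi for gi in g))
    scale = 1.0 / max(1.0, norm / C)
    return [gi * scale for gi in g]

def dp_average(clipped_grads, C, sigma, rng=random):
    """Sum clipped per-example gradients, add per-coordinate Gaussian
    noise with std sigma * C, and average (Gaussian mechanism)."""
    m = len(clipped_grads)
    d = len(clipped_grads[0])
    out = []
    for j in range(d):
        s = sum(g[j] for g in clipped_grads)
        s += rng.gauss(0.0, sigma * C)  # noise scaled to clipping norm
        out.append(s / m)
    return out
```

Gradients already inside the norm ball pass through unchanged, which is exactly the `max(1, ||g||/C)` behavior in the equation above.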

3.5. DP Aggregation for Cross-Channel Attribution

Let $N_{a,k}$ denote the true number of conversions attributed to advertiser $a \in A$ on channel $k \in K$ over a given reporting window. We model the DP reporting mechanism for the summary reports of Attribution Reporting and SKAdNetwork as a noisy aggregation,
$$\tilde{N}_{a,k} = N_{a,k} + Z_{a,k}, \qquad Z_{a,k} \sim \mathcal{N}(0,\, \sigma_{\mathrm{rep}}^2).$$
To support multi-dimensional breakdowns (e.g., region, device type, campaign), we index by a tuple $b \in B$ and define
$$N_{a,k,b} = \sum_{u \in U} \sum_{t} \mathbb{1}\big[\text{event } (u,t) \text{ is attributed to } (a,k,b)\big],$$
with noisy release
$$\tilde{N}_{a,k,b} = N_{a,k,b} + Z_{a,k,b}, \qquad Z_{a,k,b} \sim \mathcal{N}(0,\, \sigma_{\mathrm{rep}}^2).$$
The sensitivity of this query under user-level adjacency is bounded by a constant $\Delta$, and the noise scale $\sigma_{\mathrm{rep}}$ is chosen according to the Gaussian mechanism to achieve $(\epsilon_{\mathrm{rep}}, \delta_{\mathrm{rep}})$-DP for reporting.
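The noisy release of breakdown counts can be sketched as a simple dictionary transform. Here `sigma_rep` is assumed to be pre-calibrated to the user-level sensitivity $\Delta$ as described above; cell keys are illustrative $(a, k, b)$ tuples:

```python
import random

def dp_release_counts(true_counts, sigma_rep, rng=random):
    """Release noisy aggregates N~_{a,k,b}: add Gaussian noise with std
    sigma_rep (assumed pre-calibrated to sensitivity Delta) per cell.

    true_counts: dict mapping (advertiser, channel, breakdown) -> count.
    """
    return {cell: n + rng.gauss(0.0, sigma_rep)
            for cell, n in true_counts.items()}
```

Because noise is added independently per cell, finer breakdowns spread the same signal over more noisy cells, which is why the consistency weights below down-weight high-noise cells.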
The final model is obtained by jointly optimizing a composite loss that balances prediction accuracy, consistency with DP reports, and attribution alignment:
$$L_{\mathrm{total}} = L_{\mathrm{pred}} + \lambda_1 L_{\mathrm{cons}} + \lambda_2 L_{\mathrm{MTA}}.$$
Here, $L_{\mathrm{pred}}$ is the standard binary cross-entropy loss for conversion prediction. The hyperparameters $\lambda_1$ and $\lambda_2$ control the trade-off between objectives and were set to 0.5 and 0.3 respectively, determined via grid search on a validation set. All parameters $(\theta, \eta)$ are updated simultaneously through backpropagation in each federated round.

3.6. Consistency Constraints Between Event-Level and Summary-Level Views

To ensure that event-level models and summary reports are consistent, we introduce a consistency regularization term. Let $\hat{N}_{a,k,b}(\theta)$ be the model-implied expected number of conversions for $(a,k,b)$ under the current model:
$$\hat{N}_{a,k,b}(\theta) = \sum_{u \in U} \sum_{t} \mathbb{E}\big[y_{u,t} \mid z_{u,t}, a_{u,t}, k_{u,t};\, \theta\big]\; \mathbb{1}\big[(a_{u,t}, k_{u,t}, b_{u,t}) = (a,k,b)\big].$$
We define the consistency loss
$$L_{\mathrm{cons}}(\theta) = \sum_{a \in A} \sum_{k \in K} \sum_{b \in B} w_{a,k,b}\, \big(\hat{N}_{a,k,b}(\theta) - \tilde{N}_{a,k,b}\big)^2,$$
where $w_{a,k,b} \ge 0$ are weights that down-weight cells with high DP noise.
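The consistency loss is a weighted squared error over breakdown cells; a sketch, with dictionaries keyed by cell standing in for the indexed sums:

```python
def consistency_loss(model_counts, dp_counts, weights):
    """Weighted squared error between model-implied expected counts
    N^hat_{a,k,b}(theta) and DP-reported counts N~_{a,k,b}.

    All three arguments are dicts keyed by the same (a, k, b) cells;
    weights[c] down-weights cells whose DP noise is large.
    """
    return sum(weights[c] * (model_counts[c] - dp_counts[c]) ** 2
               for c in model_counts)
```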

3.7. Multi-Touch Attribution and Incentive Allocation

The system must allocate credit across multiple touches in a cross-channel user journey and turn these credits into incentive scores for SMBs in a way that remains compatible with DP constraints.

3.7.1. Path-Based Attribution Weights.

For each user $u$, we consider the ordered sequence of ad touches up to a conversion (or censoring time),
$$p_u = \big((a_{u,1}, k_{u,1}),\, (a_{u,2}, k_{u,2}),\, \ldots,\, (a_{u,L_u}, k_{u,L_u})\big),$$
with $L_u$ touches.
We construct contextual scores $s_{u,l}$ for each touch,
$$s_{u,l} = \varphi_\eta\big(h_{\theta_{\mathrm{enc}}}(z_{u,l}),\, a_{u,l},\, k_{u,l}\big),$$
where $\varphi_\eta$ is a small neural network with parameters $\eta$ that maps the encoded representation and channel metadata to a scalar relevance score.
Attribution weights are then produced via a softmax over the path,
$$\alpha_{u,l} = \frac{\exp(s_{u,l})}{\sum_{j=1}^{L_u} \exp(s_{u,j})}, \qquad \sum_{l=1}^{L_u} \alpha_{u,l} = 1.$$
If user $u$ converts ($y_u = 1$), the fractional credit assigned to touch $(a_{u,l}, k_{u,l})$ is $\alpha_{u,l}$. The path-level estimator for attributed conversions of $(a,k)$ is
$$\hat{N}^{\mathrm{MTA}}_{a,k} = \sum_{u \in U} y_u \sum_{l=1}^{L_u} \alpha_{u,l}\, \mathbb{1}\big[(a_{u,l}, k_{u,l}) = (a,k)\big].$$
During training, we encourage consistency between $\hat{N}^{\mathrm{MTA}}_{a,k}$ and the DP-reported $\tilde{N}_{a,k}$ via an additional loss term,
$$L_{\mathrm{MTA}}(\theta, \eta) = \sum_{a \in A} \sum_{k \in K} v_{a,k}\, \big(\hat{N}^{\mathrm{MTA}}_{a,k} - \tilde{N}_{a,k}\big)^2,$$
with $v_{a,k} \ge 0$ tuning channel-level trust in the DP aggregation.
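The softmax attribution weights and the path-level credit accumulator can be sketched as follows. The per-touch scores are assumed to come from the learned scorer $\varphi_\eta$; here they are plain numbers supplied by the caller:

```python
import math

def attribution_weights(scores):
    """Softmax over per-touch relevance scores s_{u,l} along one path."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attributed_conversions(paths):
    """Accumulate fractional MTA credit per (advertiser, channel) cell.

    paths: list of (converted, touches), where touches is a list of
    (advertiser, channel, score) triples; non-converting paths get
    zero credit, matching the y_u factor in the estimator.
    """
    credit = {}
    for converted, touches in paths:
        if not converted:
            continue
        alphas = attribution_weights([s for _, _, s in touches])
        for (a, k, _), alpha in zip(touches, alphas):
            credit[(a, k)] = credit.get((a, k), 0.0) + alpha
    return credit
```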

3.7.2. Uplift-Based Incentive Scores.

For each advertiser $a \in A$ and each incentive program $c$ (e.g., a subsidy scheme or a discount on platform fees), we define an uplift-based benefit estimate. Let $U_{a,c}$ denote the set of users targeted by advertiser $a$ under incentive configuration $c$. For each user $u \in U_{a,c}$, we obtain the uplift estimate
$$\tau_{u,a,c} = \tau(z_{u,c}) = \mu_1(z_{u,c}) - \mu_0(z_{u,c}),$$
where $z_{u,c}$ summarizes the user and campaign context under $c$.
The aggregated uplift for $(a,c)$ is
$$\hat{\tau}_{a,c} = \frac{1}{|U_{a,c}|} \sum_{u \in U_{a,c}} \tau_{u,a,c}.$$
To maintain DP at the advertiser level, we apply a noisy release,
$$\tilde{\tau}_{a,c} = \hat{\tau}_{a,c} + W_{a,c}, \qquad W_{a,c} \sim \mathcal{N}(0,\, \sigma_{\mathrm{uplift}}^2),$$
where $\sigma_{\mathrm{uplift}}$ is calibrated such that the mapping from user-level data to $\tilde{\tau}_{a,c}$ is $(\epsilon_{\mathrm{uplift}}, \delta_{\mathrm{uplift}})$-DP.
These $\tilde{\tau}_{a,c}$ values form the core signals for incentive allocation.
3.7.3. Incentive Optimization Under Budget and Fairness Constraints.

Let $b_{a,c} \in \{0,1\}$ be a binary decision variable indicating whether advertiser $a$ receives incentive program $c$. Let $\kappa_c > 0$ be the cost of assigning program $c$ to a single advertiser and $B > 0$ the total incentive budget for a given period.
We define a linear utility function based on DP uplift estimates,
$$U(b) = \sum_{a \in A} \sum_{c} b_{a,c}\, \tilde{\tau}_{a,c}.$$
The budget constraint is
$$\sum_{a \in A} \sum_{c} b_{a,c}\, \kappa_c \le B.$$
To capture fairness toward SMBs, let $A_{\mathrm{SMB}} \subseteq A$ denote the subset of SMB advertisers and define a minimum allocation level $\rho \in [0,1]$. We impose
$$\frac{\sum_{a \in A_{\mathrm{SMB}}} \sum_{c} b_{a,c}\, \kappa_c}{\sum_{a \in A} \sum_{c} b_{a,c}\, \kappa_c} \ge \rho,$$
ensuring that at least a fraction $\rho$ of the incentive budget flows to SMBs.
The resulting optimization problem is
$$\max_{b}\; U(b) = \sum_{a \in A} \sum_{c} b_{a,c}\, \tilde{\tau}_{a,c} \quad \text{s.t.} \quad \sum_{a \in A} \sum_{c} b_{a,c}\, \kappa_c \le B, \quad \frac{\sum_{a \in A_{\mathrm{SMB}}} \sum_{c} b_{a,c}\, \kappa_c}{\sum_{a \in A} \sum_{c} b_{a,c}\, \kappa_c} \ge \rho, \quad b_{a,c} \in \{0,1\}\; \forall a, c.$$
In practice, this integer program can be relaxed to $b_{a,c} \in [0,1]$ and solved via convex optimization, or approximated using greedy heuristics that preserve DP guarantees (since DP already resides in the upstream signals $\tilde{\tau}_{a,c}$ rather than in the solver itself).
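One such greedy heuristic is sketched below. It first spends up to $\rho B$ on SMB advertisers, then fills the remaining budget by uplift-per-cost; when enough affordable SMB candidates exist, the fairness floor holds because SMB spend reaches $\rho B$ while total spend never exceeds $B$. This is an illustrative heuristic under simplifying assumptions (one program per advertiser, unique names), not the paper's exact solver:

```python
def greedy_allocate(candidates, budget, smb_ids, rho):
    """Greedy knapsack heuristic for the incentive program.

    candidates: list of (advertiser, noisy_uplift, cost) triples, one
    program per advertiser. Returns (chosen advertisers, total spend).
    """
    # Rank all candidates by estimated uplift per unit cost.
    by_value = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    chosen, spent, smb_spent = [], 0.0, 0.0
    # Pass 1: satisfy the SMB fairness floor first.
    for a, tau, cost in by_value:
        if a in smb_ids and smb_spent < rho * budget and spent + cost <= budget:
            chosen.append(a)
            spent += cost
            smb_spent += cost
    # Pass 2: fill the remaining budget greedily by value density.
    for a, tau, cost in by_value:
        if a not in chosen and spent + cost <= budget:
            chosen.append(a)
            spent += cost
    return chosen, spent
```

Because the DP noise already resides in the `noisy_uplift` inputs, the solver itself needs no additional randomization, matching the remark above.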

4. Evaluation

This section empirically evaluates the proposed federated and differentially private cross-channel incentive marketing framework. It is important to note that in the current privacy-centric ecosystem, a direct comparison with traditional, identity-based third-party cookie systems is neither feasible nor methodologically sound. Our evaluation is therefore designed to assess whether the proposed federated-DP framework can outperform other viable, privacy-compliant measurement techniques, thereby establishing its utility in the post-cookie era. Experiments focus on three core questions:
  1. Does the federated + DP model preserve accuracy across web and app channels despite noise constraints?
  2. Does the proposed multi-touch attribution (MTA) mechanism increase consistency with privacy-preserving summary reports?
  3. Does uplift-based incentive allocation improve SMB performance under realistic privacy budgets?
We conduct experiments on a combined dataset synthesized from web Topics/Protected Audience signals and app–SKAdNetwork-like postbacks. To ensure realistic modeling, web events are grouped into 5 Topics categories and 6 PA cohorts, whereas app events contain coarse-grained conversion values (0–5) consistent with SKAdNetwork behavior.

4.1. Model Performance Across Channels

We simulate a cross-channel advertising dataset reflecting post-cookie constraints. User journey sequences are generated using a Markov model calibrated against publicly available ad interaction logs. Web events are annotated with synthetic Topics (5 categories, assigned via weighted random sampling) and Protected Audience cohort IDs (6 cohorts). App events are paired with SKAdNetwork-like postbacks, containing coarse-grained conversion values (0-5) and source identifiers.
We first evaluate prediction and uplift accuracy across three channels: Web–Topics, Web–Protected Audience (PA), and App–SKAdNetwork. Results are shown in Table 1.
The following table summarizes performance across three core metrics:
  • AUC for conversion prediction
  • Calibration Error (ECE)
  • Uplift RMSE for counterfactual uplift modeling

4.2. Attribution Consistency Under DP Summary Reporting

We next evaluate whether the proposed multi-touch attribution (MTA) system remains aligned with the differentially private summary reports from Attribution Reporting and SKAdNetwork, as shown in Table 2.
  • Our framework reduces attribution inconsistency by more than 56% relative to DP summary-only reporting.
  • Consistency losses significantly enhance alignment between event-level and summary-level measurement.
  • Cross-channel MTA is only feasible with unified federated representation learning, as seen in the results.

4.3. Incentive Allocation Outcomes for SMB Advertisers

Finally, we evaluate how uplift-based incentive allocation affects SMB campaign performance, as shown in Table 3.
CPIC (Cost per Incremental Conversion): total incentive cost divided by the total number of incremental conversions (estimated via the uplift model). SMB Allocation Ratio: the proportion of the total incentive budget allocated to SMB advertisers.

5. Conclusion

This paper presents a unified federated and differentially private framework for cross-channel marketing measurement and incentive optimization in post-cookie digital ecosystems. Motivated by the growing fragmentation between web and mobile advertising environments—where Topics, Protected Audience, Attribution Reporting, and SKAdNetwork each impose distinct privacy and reporting constraints—we propose an end-to-end architecture that harmonizes heterogeneous privacy-preserving signals while supporting accurate attribution, uplift estimation, and incentive allocation for advertisers, particularly SMBs operating under limited data resources.
Technically, the framework combines federated learning, differential privacy, multi-touch attribution, and uplift-based incentive modeling into an integrated system. A unified representation encoder allows both web-side and app-side privacy sandbox signals to be embedded into a common latent space, reducing channel asymmetries and improving cross-environment generalization. Differentially private SGD ensures that all model updates adhere to strict privacy budgets, and DP-based reporting mechanisms align event-level predictions with coarse-grained summary reports from Attribution Reporting and SKAdNetwork. A consistency regularization module further links federated predictions with noisy aggregate statistics, narrowing the long-standing gap between platform-level and device-level measurement pipelines.
Empirical evaluations demonstrate that the proposed architecture yields substantial gains across three critical dimensions:
  • measurement utility, improving AUC, calibration, and uplift RMSE under realistic DP budgets;
  • attribution consistency, reducing discrepancies between model-derived multi-touch paths and DP summary-level conversions;
  • economic efficiency, delivering higher incremental lift and lower cost per incremental conversion in SMB-targeted incentive programs while satisfying fairness constraints.
Beyond technical contributions, this work has important practical implications. It shows that privacy-preserving advertising does not inherently preclude high-fidelity performance measurement or equitable incentive distribution. With appropriate algorithmic design—particularly unified federated learning, adaptive DP calibration, and uncertainty-aware attribution—it is possible to construct a scalable and regulator-aligned commercialization infrastructure capable of serving platforms, advertisers, and end users simultaneously. The framework therefore provides a viable blueprint for the next generation of trusted AI systems in digital commerce, where privacy guarantees, economic incentives, and measurement accuracy must coexist.
Nevertheless, several challenges remain. Federated learning at scale introduces optimization instability under heterogeneous device participation; differential privacy continues to impose noise ceilings on uplift estimation; and cross-channel temporal misalignment between Attribution Reporting and SKAdNetwork remains a structural barrier to perfect attribution harmonization. Future work should explore causal representation learning, asynchronous federated optimization with privacy accounting, and mechanism-design–aligned incentive policies that can reduce strategic manipulation while maintaining fairness for SMBs.
In conclusion, the proposed federated–DP measurement framework represents a significant step toward rebuilding marketing effectiveness in a privacy-centric era. By integrating regulatory-compliant data flows with robust attribution and incentive modeling, it establishes a principled, scalable foundation for trustworthy, AI-powered digital commerce.

Figure 1. Overview of the Proposed Federated–DP Cross-Channel Measurement Framework. 
Table 1. Cross-Channel Model Performance Under DP-FL Training.
| Channel Type | #Events | AUC (↑) | ECE (↓) | Uplift RMSE (↓) | DP Noise σ | Avg. Client Participation (%) |
|---|---|---|---|---|---|---|
| Web – Topics | 3,245,901 | 0.782 | 0.041 | 0.116 | 1.2 | 27.4% |
| Web – Protected Audience | 1,982,334 | 0.768 | 0.052 | 0.129 | 1.2 | 25.9% |
| App – SKAdNetwork | 4,156,442 | 0.804 | 0.038 | 0.112 | 1.0 | 31.2% |
| Combined (Unified FL) | 9,384,677 | 0.816 | 0.035 | 0.104 | 1.1 | 28.3% |
Table 2. Attribution Consistency Across Baseline and Proposed Method.
| Method | ACR (↓) | Avg. Per-Advertiser Error (↓) | #Advertisers | Dimensionality of Reports | Supports Cross-Channel MTA |
|---|---|---|---|---|---|
| Baseline: Last-Touch (Non-DP) | 0.214 | 38.6 | 220 | Low | No |
| Baseline: Summary-Only DP Reports | 0.167 | 29.3 | 220 | Medium | No |
| Proposed: DP-Federated MTA (Ours) | 0.091 | 17.5 | 220 | High | Yes |
| Proposed + Consistency Regularization | 0.072 | 14.2 | 220 | High | Yes |
Table 3. Incentive Allocation Performance Under DP Constraints.
| Method | Incremental Lift (↑) | CPIC (↓) | SMB Allocation Ratio (↑) | Budget Utilization (%) | #Advertisers Receiving Incentives |
|---|---|---|---|---|---|
| Heuristic Rule-Based Allocation | 9.3% | $41.2 | 48.1% | 92.4% | 62 |
| Baseline DP Aggregated Lift | 12.7% | $34.5 | 53.8% | 96.0% | 85 |
| Proposed DP-FL Uplift Allocation (Ours) | 18.4% | $27.8 | 56.7% | 99.1% | 104 |
| Proposed + Strong SMB Constraint | 17.6% | $28.9 | 61.4% | 98.3% | 112 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.