Preprint
Article

This version is not peer-reviewed.

Regulating Digital Insurance Platforms in the EU: Legal Frameworks and Future Directions

Submitted: 18 February 2025
Posted: 19 February 2025


Abstract
The digital transformation of the insurance industry in the European Union (EU) creates both opportunities and regulatory challenges, requiring a balance between innovation, consumer protection, and market stability. This paper examines the evolving regulatory landscape for digital insurance platforms, focusing on the intersection of EU digital and insurance-specific regulations. The study explores how digital insurance platforms operate within the broader EU regulatory ecosystem by analysing key legislative milestones and legal frameworks. It assesses the implications of these frameworks for various stakeholders, including platform owners, policyholders, distributors, and emerging business models such as peer-to-peer insurance. The research further investigates the regulatory boundaries defining different digital insurance models and the complexities of ensuring compliance with business conduct rules in a digital environment. Findings highlight the tensions between fostering technological innovation and maintaining regulatory oversight, particularly in platform governance, robo-advisory services, and digital distribution. The paper concludes that while EU regulations seek to address these challenges, uncertainties persist in the classification and supervision of digital insurance platforms. It offers recommendations to enhance regulatory clarity, ensuring a framework supporting consumer protection and market development in the digital insurance landscape.
Keywords: 
Subject: Social Sciences – Law

1. Introduction

The digital transformation of the European insurance sector has progressed unevenly across various markets, reflecting differing levels of adoption and innovation. Although digital distribution channels play a secondary role in the insurance distribution mix, particularly for life insurance products, their significance increases as customers rely on online tools to gather information and make comparisons (EIOPA, 2024, pp. 11–12). This shift underscores the transformative potential of digitalisation for customer engagement and future business opportunities, especially as young, educated, and high-income customers prefer purchasing insurance through digital channels (EIOPA, 2024, p. 13).
The academic literature identifies three key areas of transformation in the insurance sector driven by digitalisation: (i) customer interaction, facilitated by social media, chatbots, and robo-advisors; (ii) process automation, which enhances efficiency in sales and claims settlement; and (iii) product innovation, enabling the development of offerings such as telematics and cyber insurance (Eling and Lehmann, 2018, pp. 366–370). Furthermore, research highlights that while the rise of the platform economy—an extension of broader digitalisation—will not eliminate the need for insurance intermediation, it will fundamentally reshape how those needs are addressed in the future (Stricker, Wagner, and Zeier Röschman, 2023, p. 19).
The adoption of artificial intelligence (AI) and other digital tools within the insurance sector is accelerating rapidly. Insurers anticipate significant growth in the use of chatbots, mobile applications, and online forms, with Generative AI expected to play a crucial role in customer service (EIOPA, 2024, p. 16). AI is increasingly integrated into digital platforms, fostering a symbiotic relationship in which these platforms create an environment for AI to function effectively. Meanwhile, AI enhances the capabilities, efficiency, and overall value of these platforms (Alt, 2021, pp. 233–237). This interplay emphasises the role of digital platforms as dynamic infrastructures facilitating interactions and transactions, not only within customer relationships but also across the insurance value chain (Nicoletti, 2021, pp. 225–230; Braun and Jia, 2025).
To fully understand the regulatory landscape for digital insurance platforms, it is essential to view them within the wider context of digital platforms. The EU has adopted a horizontal, cross-sector strategy to regulate digital transformation, encompassing digital platforms. EIOPA has expressed concerns regarding the challenges posed by this dual-layered framework, which generates regulatory complexity, especially in relation to the AI Act. In its communication with EU co-legislators, EIOPA emphasised that the AI Act should complement, rather than replace, sector-specific insurance legislation, thereby ensuring alignment with the industry’s unique needs (EIOPA, 2022a, p. 2).
Viewed from a broader perspective, EIOPA’s concerns are echoed in the Draghi report on EU competitiveness, submitted in September 2024. Although it does not explicitly focus on insurance, the report critiques the EU’s cautious regulatory approach, noting the existence of over 100 digital regulations and 270 regulatory authorities, which may impede technological development (Draghi Report, Part A, p. 30). It emphasises the need for a balanced regulatory framework to promote innovation in digital platforms, warning against the potential inhibiting effects of applying the General Data Protection Regulation (GDPR) to AI (Draghi Report, Part B, p. 79). Furthermore, the report highlights the necessity of investing in advanced technologies, such as AI, to enhance competitiveness and capitalise on future innovations (Draghi Report, Part B, p. 249).
The relationship between the newly established regulatory framework for digital platforms and existing sector-specific regulations remains insufficiently explored in academic literature. The introduction of this general framework, marked by ambition and complexity, has primarily occupied scholarly discourse, often at the expense of examining its intersections with pre-existing sectoral rules. However, this dual-layered regulatory approach has already raised significant concerns.
This study investigates the interplay between the platform regulatory framework and sector-specific insurance rules, highlighting how the insurance sector—a key industry in the EU—relies increasingly on digital platforms while navigating complex regulatory requirements. Accordingly, Section 2 reviews both general and insurance-specific regulatory sources that shape digital insurance platforms. Section 3 examines the legal status of these platforms in distributing insurance products, focusing mainly on cases where they benefit from exemptions under sectoral regulations. Section 4 analyses the application of business conduct rules derived from digital platform regulations and insurance-specific standards, focusing on robo-advice, comparison websites, and influencers, and assessing how the dual regulatory framework tackles emerging challenges in digital insurance distribution. Finally, Section 5 presents the study’s conclusions.

4. Compliance Challenges in Digital Insurance Platforms: Business Conduct Rules

This section examines the regulatory and legal implications of digital insurance platforms, focusing on the conduct rules under EU law that govern the relationship between distributors and customers on the platform, with the aim of providing a comprehensive understanding of how the regulatory framework serves its protective objectives.
The IDD establishes rules of conduct to ensure that insurance intermediaries and undertakings act in the best interests of their customers (Article 17 of the IDD). These rules also apply to digital insurance platforms involved in distribution, requiring that they prioritise customer needs over commercial interests. The IDD further introduces specific regulations regarding information disclosure, including pre-contractual information and product suitability assessments. Digital platforms must ensure these requirements are fulfilled, even when employing automated tools or AI-driven algorithms to interact with customers.
While primarily focused on prudential regulation, Solvency II indirectly affects business conduct by imposing governance and risk management requirements on insurers. Articles 41 to 49 highlight the significance of internal controls, including digital platforms for product distribution and policy administration. These rules are linked to those regarding product oversight and governance (POG), which mandate insurers, acting as manufacturers, to ensure that products are targeted towards the appropriate market, as stipulated by the POG provisions in Article 25 of the IDD and the implementing Commission Delegated Regulation (EU) 2017/2358 of 21 September 2017 (Marano, 2021c, p. 61). Moreover, platforms must provide suitability or appropriateness assessments for specific insurance products, particularly those classified as insurance-based investment products (IBIPs).
However, applying this regulatory framework to digital platforms can be challenging. Three areas will be explored: (i) distribution through platforms deemed exempt ancillary intermediaries, (ii) sales accompanied by advice, and (iii) the role of comparison websites and insurance influencers. The following three subsections discuss these issues in the order presented.

4.1. Distribution by Exempted Ancillary Intermediaries

Concerns have been raised regarding exempt ancillary intermediaries (see para. 3.1.3), as the boundaries between insurance distribution and referral activities can become blurred (see para. 3.1.4). Justifying the exemption for digital platforms poses a challenge due to their vast scale and role as distribution tools. The distinction between insurance distribution and referral activity can also become ambiguous, especially when platforms provide interactive tools or recommendations, which may subject them to insurance distribution regulations.
EIOPA has sought to address this regulatory gap to prevent regulatory arbitrage by establishing supervisory expectations regarding product oversight and governance. These expectations include ensuring that distribution activities are adequately monitored across specific channels (e.g., ancillary intermediaries or distance selling) to verify that products align with the needs of their target markets (EIOPA, 2020, p. 15). Notably, these expectations should extend to exempted intermediaries, as the duty falls on insurers-manufacturers that are subject to the IDD. However, the indirect application of IDD rules remains constrained by insurers’ limited ability to enforce compliance on entities that are neither obligated nor supervised. This challenge is further compounded when digital platforms serve as key distribution channels for insurers, weakening their bargaining position and making it difficult to impose or effectively oversee regulatory requirements.
The potential shortcomings of insurance regulation are not effectively addressed by other rules applicable to digital platforms, as these frameworks primarily focus on the flow of information to customers rather than from customers.
The IDD establishes a fundamental mechanism for consumer protection by requiring distributors to assess customers’ demands and needs, thus ensuring that insurance products meet their specific requirements (Article 20). This process commences with customers providing relevant information regarding their financial situation, personal circumstances, and insurance needs, which can be gathered through direct interactions, questionnaires, or online forms.
In contrast, the DSA, DMA, and GDPR emphasise transparency and fairness in digital markets. However, they do not require distributors to collect information from customers. The DSA enforces transparency in product rankings and algorithmic decisions but regulates only the flow of information to customers, not from them. Similarly, the DMA addresses anti-competitive practices among gatekeeper platforms. Yet, it applies to a limited number of entities and does not impose any obligation to evaluate customer demands and needs. The GDPR guarantees clear data processing disclosures but does not regulate the information that distributors must gather. Consequently, these regulatory frameworks fail to ensure that digital insurance platforms actively seek and assess customer needs, a gap that the IDD primarily addresses.
In conclusion, regulatory gaps persist in digital insurance distribution provided by exempt ancillary intermediaries. EIOPA’s efforts to impose oversight remain limited as insurers struggle to enforce IDD requirements on unsupervised platforms. While the DSA, DMA, and GDPR address transparency and competition, ensuring that platforms evaluate customers’ demands and needs is outside their scope.

4.2. Regulating Sales with (Robo)Advice in Digital Platforms

Advice provided through digital platforms is another critical area requiring scrutiny. While these platforms can enhance accessibility and customer experience, the quality of advice must align with IDD standards. Automated advice tools, such as robo-advisors, must ensure that their algorithms do not prioritise products based on commercial interests at the expense of customer needs (EIOPA’s Consultative Expert Group on Digital Ethics in insurance, 2021).
The IDD requires insurance intermediaries or undertakings to disclose to customers, in good time before the conclusion of an insurance contract, whether they provide advice on the insurance product sold (Article 18). For all insurance products, that advice can be either “basic” or based on a fair and personal analysis.
Where “basic” advice is offered, the insurance distributor must provide the customer with a personalised recommendation, outlining why a specific product would best satisfy the customer’s demands and needs (Article 20).
When an insurance intermediary informs the customer that it provides its advice based on a fair and personal analysis, it must base that advice on an evaluation of a sufficiently large number of insurance contracts available in the market to enable it to make a personalised recommendation, using professional criteria, regarding which insurance contract would be suitable to meet the customer’s needs (Article 20).
The reference to a “personal/personalised” recommendation does not imply that the distributor must provide it in person.
Rather, the reference to a “personal/personalised” recommendation requires the advice to focus on the specific demands and needs of the customer to whom it is directed and to follow the “likelihood to need” approach instead of “likelihood to buy” (EIOPA’s Consultative Expert Group on Digital Ethics in insurance, 2021, p. 28). A robo-advisor can therefore provide advice without necessarily being supported by humans.
In general terms, the level of human oversight in AI should be proportionate to the risks, scale, and complexity of its use case, considering existing governance measures. When firms deploy automated models with minimal oversight, they should enhance explainability, data management, and system robustness, particularly for high-impact applications. Conversely, limited explainability can be offset by stronger human oversight and data management throughout the AI model lifecycle (EIOPA’s Consultative Expert Group on Digital Ethics in insurance, 2021, p. 49).
The literal content of the IDD supports this statement. The IDD focuses on analysing customers’ demands and needs. Recital 44 expressly states that, to avoid mis-selling cases, a demands-and-needs test should always accompany the sale of insurance products based on information obtained from the customer. Thus, the distributor must provide the customer with an output (the proposed product) derived from the analysis of the customer’s inputs as investigated by the distributor. Indeed, Recital 44 further states that any insurance product proposed to the customer should always be consistent with the customer’s demands and needs and presented in a comprehensible form to enable that customer to make an informed decision.
The sale with advice must offer added value compared to a sale without advice. Both depend on the flow of information from the customer. However, Recital 45 of the IDD explicitly links the duty to specify customers’ demands and needs to the “personalised” recommendation, which clarifies why a particular product is most suitable for the customer’s insurance requirements. Consequently, the added value of the advice lies in explaining why the product “best meets” the customer’s needs and demands. Meanwhile, a sale without advice is merely consistent with the customer’s demands and needs (Article 20(1)).
This conclusion aligns with the principle of technological neutrality, which advocates that laws, regulations, and policies should neither favour nor discriminate against particular technologies. This principle fosters innovation by allowing market participants to develop and adopt new technologies without encountering regulatory obstacles favouring established companies or outdated methods. It ensures that businesses compete based on efficiency, security, and customer benefits rather than advantages based on specific regulatory preferences.
The European Commission has consistently emphasised this principle to prevent legislation from stifling innovation while maintaining customer protection and market integrity. Consequently, multiple regulatory frameworks embody technological neutrality, including the DSA, GDPR, and DORA. The AI Act is a notable example of this neutral approach, focusing on risk-based assessments rather than banning specific AI applications outright.
However, it is essential to acknowledge that technology may not always be neutral. Regulators must balance neutrality and implementing targeted interventions, especially when specific technologies pose unique risks.
Article 14 of the AI Act highlights the necessity of human oversight in high-risk AI applications. It stipulates that these applications cannot operate without appropriate human control and accountability. This requirement ensures that human operators clearly understand how the AI system works, including its capabilities, limitations, and potential risks, enabling informed decision-making. Operators should be able to interrupt, disable, or override the system if it produces incorrect, harmful, or unlawful outputs (Staszczyk, 2024, pp. 54–55). As a result, AI systems must integrate robust monitoring tools that allow for real-time human intervention when necessary (Mahler, 2024, p. 15; Enqvist, 2023, pp. 520–528).
It is important to note that the AI Act does not mandate that every piece of advice given by AI be verified and approved by a human prior to delivery. AI tools are not required to undergo real-time human verification for each output, nor is a human operator expected to decide every outcome. Instead, human oversight ensures the system operates within legal and ethical boundaries. Rather than replicating or replacing the AI’s functionality, these oversight mechanisms are designed to validate compliance with legal standards and mitigate risks, ensuring that AI-generated outputs meet regulatory and ethical requirements.
The AI Act underscores the importance of transparency and consumer awareness. It requires individuals to be informed when they receive advice from an AI system rather than a human. Article 50 mandates that providers of AI systems engaging directly with individuals must disclose this interaction unless it is already evident to a knowledgeable, attentive, and prudent person based on the context. This obligation is particularly pertinent to the two categories of advice regulated by the IDD for insurance-based investment products: ongoing advice and independent advice.
All advice on insurance-based investment products must comply with Article 30(5)(2) of the IDD, which requires distributors to issue a clear pre-contractual statement explaining how their recommendation aligns with the customer’s preferences, objectives and other relevant characteristics.
EIOPA has explored ways to streamline advice in the context of digitalisation, aiming to provide a well-designed, low-cost solution for customers with straightforward needs and small investments, avoiding time-consuming fact-finding. Streamlined advice could integrate automated and traditional models (e.g., semi-automated or robo-advice alongside face-to-face or telephone-based services) while leveraging AI and open insurance systems to enhance personalisation and portability of suitability assessments. However, as digital selling methods (e.g., AI and algorithms) heighten risks related to pre-contractual information and the demands-and-needs process, introducing “streamlined advice” poses challenges. In AI-driven models, ensuring transparency requires disclosing the algorithm’s reasoning, selection criteria, and potential conflicts of interest to enhance consumer protection (EIOPA, 2022c, pp. 83–84).
Under Article 29 of the IDD, distributors advising on insurance-based investment products must inform customers whether they will conduct regular suitability assessments to ensure that recommended investments remain appropriate over time. Given the high costs associated with human advice, AI-driven tools—particularly robo-advisors—seek to lower barriers to accessing portfolio management services. These assessments consider changes in the customer’s financial situation, investment objectives, and market conditions, enabling clients to maintain long-term relationships with AI-based advisory models. This ongoing relationship underscores the need for a regulatory framework that ensures transparency and explainability in AI-driven decision-making. While entities managing robo-advice are not required to disclose their algorithm’s code to customers, they must provide clear explanations of the parameters used in decision-making, including their relative weights. This disclosure should clarify how the algorithm evaluates a customer’s personal needs, forming the foundation for the advice provided.
Alongside or as an alternative to offering ongoing advice, the IDD imposes strict conditions on independent advice. When an intermediary claims to provide independent advice, they must evaluate a sufficiently broad range of products from diverse providers to ensure the client’s needs are adequately met (Article 29(3)(3)). They cannot limit their assessment to products from entities with which they have close affiliations. Unlike impartial advice, which suggests objectivity, independent advice requires a genuine market-wide comparison. Algorithmic models must be programmed to meet this requirement, supervisory authorities must verify compliance, and customers should receive a statement demonstrating the effectiveness of the products’ selection.
Finally, in all cases of robo-advice, the provisions of Article 5(1)(c) of the AI Act—which prohibits unacceptable AI-enabled social scoring practices—must be carefully considered. A potentially relevant case concerning robo-advice is referenced in the Guidelines on Prohibited Artificial Intelligence Practices established under the AI Act, which the Commission adopted on 4 February 2025. In its guidance, the Commission explicitly cited the example of an insurance company that collects spending and other financial data from a bank, even when such information is unrelated to assessing candidates’ eligibility for life insurance. The AI system then analyses this data to determine premium pricing or recommend whether to refuse coverage altogether, potentially leading to discriminatory or unfair outcomes.
Although this prohibition is expressly set out in the AI Act, the IDD already establishes the overarching principle that insurance distributors must act honestly, fairly, and professionally in the best interest of their customers. Consequently, a robo-advice system designed primarily to maximise the insurer/distributor’s profitability at the expense of customers’ interests would be inconsistent with this fundamental principle.

4.3. The Role of Comparison Websites, Fin-Influencers and Virtual Influencers

Comparison websites have long been an effective digital tool in the insurance sector (Marano, 2016). EIOPA has discussed the role of comparison websites in promoting fair competition (EIOPA, 2014). The authority issued a set of “good practices” that, while not legally binding, should be considered as complementary guidance alongside the relevant EU and national legislation or regulations (EIOPA, 2014). These good practices align with the Insurance Mediation Directive (IMD) framework. The subsequent IDD explicitly includes comparison websites if they meet the criteria to be classified as insurance distributors. However, it does not provide specific guidelines for these comparison websites. Nevertheless, the “good practices” issued under the IMD framework remain valuable for addressing the issues arising from the increasing use of AI tools by comparison websites.
The suggested “good practices” for presenting information and the criteria applied to determine the rankings underscore the significance of transparency in how algorithms prioritise or rank products and disclose any financial incentives or partnerships that may influence these rankings, thereby preventing conflicts of interest that could mislead customers.
Websites should not rely solely on price for comparisons. Instead, they should enable users to select and prioritise various product features, such as guarantees, exclusions, or limitation clauses, to ensure a balanced comparison tailored to individual preferences. If a comparison website does not provide all available quotes, it should clearly explain the criteria used to select the displayed products. This transparency helps users understand the basis of the comparison and fosters trust in the impartiality of the information provided. Additionally, comparison websites should disclose any commercial, contractual, or ownership relationships with insurance providers, including whether insurance companies pay for their display or inclusion on the site. This level of transparency allows users to assess the potential influence of these relationships on the comparison results (EIOPA, 2014).
These provisions align with those of the DSA and DMA, although they cover different areas. The DSA broadens the scope further, regulating all digital platforms connecting customers with goods, services, or content. Platforms must disclose key parameters behind ranking algorithms, particularly for large platforms that face additional obligations, such as conducting regular risk assessments. Similarly, the DMA emphasises the importance of fairness and transparency in ranking systems but applies these principles to gatekeepers across all digital markets, such as search engines or app stores. Gatekeepers must avoid self-preferencing, where their products or services are unfairly prioritised over competitors.
All three frameworks converge on the necessity of transparency, which fosters customer trust in digital services. Whether the service is a niche insurance comparison tool (EIOPA) or a global e-commerce platform (DMA/DSA), customers must comprehend how rankings are generated and whether commercial relationships impact them. They also seek to ensure fairness by preventing rankings from misleading or disadvantaging users. This alignment establishes a consistent expectation of transparency and fairness across digital services.
The AI Act does not explicitly address AI systems employed by comparison websites. However, the Commission’s Guidelines on Prohibited Artificial Intelligence Practices, recalled above, emphasise that the prohibitions set out in Article 5(1)(a) and (b) of the AI Act complement Article 25(1) of the DSA, which prohibits the use of dark patterns in user interfaces. This provision aims to ensure that online platform providers do not mislead or manipulate users into actions that do not align with their genuine intentions. Dark patterns, when likely to cause significant harm, should be regarded as an example of manipulative or deceptive techniques within the meaning of Article 5(1)(a) of the AI Act.
Furthermore, AI systems used in life and health insurance for risk assessment and pricing are classified as high-risk. Consequently, insurance comparison websites that utilise AI to analyse and present insurance products must determine whether their AI systems fall into this high-risk category. This assessment is also necessary if the AI system on an insurance comparison website influences customer decisions or personalises recommendations. Due to its potential impact on customer rights and financial choices, such a system may be deemed high-risk.
In these contexts, the website must comply with the AI Act’s requirements for high-risk AI systems. These obligations include implementing a risk management framework, establishing data governance protocols, ensuring transparency, maintaining accurate records, providing essential information to users, and facilitating human oversight (Articles 9, 10, 12, 13, 14, and 29).
Digital transformation is continuously driving the evolution of distribution models, reshaping how insurance and financial products are offered and accessed. Comparison websites explicitly provide side-by-side evaluations of multiple insurance products, making it essential to ensure transparency in how they select and present the options that best align with customer needs. However, product selection and recommendation are not confined to these platforms alone. An alternative and increasingly influential model has emerged through financial influencers, or “fin-influencers,” who have, for some time now, played a growing role in shaping customer choices in the insurance and financial sectors (Hayes and Ben-Shmuel, 2024; Hamamci and Aren, 2024) and influence the financial market performance of firms (Keasey, Lambrinoudakis, Masciahang, 2024).
Insurance comparison websites and fin-influencers may influence customer choices within the insurance market and be interconnected. Insurance influencers might guide their followers to comparison websites through affiliate marketing, referral links, or sponsorship agreements. Furthermore, they can produce content that explains insurance concepts and recommends tools (including comparison platforms) to assist customers in finding the best offers. However, it is essential to recognise that influencers can shape customer perceptions of insurance products, potentially biasing behaviour before visiting a comparison site. If fin-influencers misrepresent products or exaggerate certain providers based on partnerships, they could compromise the objectivity that comparison websites assert, leading to potentially distorted customer decisions.
Recognising the growing role of these influencers, the European Commission’s Retail Investment Strategy (RIS), proposed in May 2023, aims to enhance retail investor protection and ensure fair treatment. The strategy emphasises that marketing communications, including content disseminated by these influencers, must be fair, clear, and not misleading. The RIS also seeks to modernise disclosure rules, develop benchmarks for evaluating financial products, and address potential conflicts of interest. To this end, the proposal requires the investment firm to provide the fin-influencers’ identity and contact information to competent authorities. These measures empower customers to make informed investment decisions aligned with their needs and preferences.
The future regulation of influencers in the insurance industry should align with current insurance distribution regulations. It has been previously observed that straightforward referral activities fall outside the scope of insurance distribution. Therefore, if influencers do not overstep the (narrow) boundary into distribution, they will only need to comply with new rules concerning communication and transparency in their messages aimed at customers.
Technological advancements have led to the emergence of virtual influencers: fully computer-generated digital characters designed to engage with the public via social media, chatbots, and various digital platforms. These characters develop distinct identities and personalities through AI, advanced graphics, and motion capture technology, which allows a human actor’s movements to be digitally transferred to the character, enhancing realism (Tranholm Mouritzen, Penttinen and Pedersen, 2023). Despite lacking a physical presence, virtual influencers can effectively communicate with followers, influence customer behaviour, and drive trends across various industries, including insurance and financial services (Mertens and Goetghebuer, 2024, pp. 7-10). In this context, the provision of the AI Act recalled above, which mandates that individuals interacting with chatbots be informed that they are engaging with artificial intelligence, is commendable.
However, the use of virtual influencers to promote insurance and financial products raises significant regulatory challenges that cannot be fully addressed under the EU Unfair Commercial Practices Directive (Mertens & Goetghebuer, 2024, pp. 19–34). One key challenge concerns the effectiveness of penalties for non-compliance with the rules expected to be introduced under the Retail Investment Strategy (RIS). Since these digital entities do not possess legal personality or personal liability, enforcing sanctions against them can be nearly impossible, especially if the legal entities behind these influencers are situated outside the EU or lack the financial resources to withstand penalties.
To address this issue, the RIS proposal to disclose the identities of fin-influencers should also encompass the identities of the companies or individuals responsible for creating or deploying virtual fin-influencers. This information would help shift regulatory responsibility to the entities that benefit from their promotional activities, such as insurance companies and intermediaries, ensuring that they remain accountable for any misleading or non-compliant practices associated with virtual fin-influencers. This approach would align with the customer protection principles applicable in the “real” world, reinforcing transparency, fairness, and accountability in digital promotions.

5. Conclusions

The regulatory landscape for digital insurance platforms in the EU is rapidly evolving due to technological advancements and the necessity for strong customer protection and market stability. While significant progress has been made in regulating digital platforms, challenges remain in aligning horizontal digital regulations with sector-specific insurance rules. This misalignment creates legal ambiguities and compliance uncertainties.
The current landscape, shaped by a complex interplay of laws, reveals ongoing ambiguity regarding the legal status and compliance obligations of digital insurance platforms. Moreover, insurance regulation should reconsider the exemptions currently in place in light of the size and scale of digital platforms. As digital business models, such as peer-to-peer insurance and AI-driven platforms, continue to reshape the sector, regulators must balance cross-sector rules and tailored insurance regulations to ensure fairness, transparency, and customer protection.
Compliance with business conduct rules is becoming increasingly important, particularly in areas like distribution practices, robo-advice, and the role of influencers. A forward-looking regulatory approach should prioritise proportionality and technological neutrality, integrating horizontal regulations without imposing unnecessary constraints on insurance-specific rules. This perspective aligns with the emphasis of the Draghi report on fostering a competitive and innovation-friendly regulatory framework within the EU.
On 29 January 2025, the European Commission presented the Competitiveness Compass, building on the Draghi report’s analysis to revitalise Europe’s economic dynamism and foster growth. The Compass outlines three key pillars, supported by five horizontal enablers, including regulatory and administrative simplification. Ideally, this enabler should serve as a guiding principle in addressing most of the challenges identified in this study.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analysed in this study. Data sharing is not applicable to this article.

Acknowledgments

The author would like to express appreciation for the valuable comments from XXX anonymous reviewers and for the assistance provided by the editorial team.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. AFM - Autoriteit Financiële Markten, 2024. Available online: https://www.afm.nl/~/prof+media/files/wet-regelgeving/beleidigungen/interpretaties/eng-interpretatie-groepsverzekering.pdf (Accessed on 17 February 2025).
  2. Alt, Rainer, 2021, Electronic Markets on digital platforms and AI, Electronic Markets, 31:233-241. https://doi.org/10.1007/s12525-021-00489-w. [CrossRef]
  3. Ayadi, Rym, and O’Brien, Christopher, The Future of Insurance Regulation and Supervision in the EU, Report of a CEPS Task Force.
  5. BaFin - Bundesanstalt für Finanzdienstleistungsaufsicht, 2023. Available online: https://www.bafin.de/SharedDocs/Veroeffentlichungen/DE/Aufsichtsmitteilung/2023/dl_2023_07_04_Aufsichtsmitteilung_Gruppenversicherungen.pdf?__blob=publicationFile&v=2. (Accessed on 17 February 2025).
  6. Braun, Alexander, Jia, Ruo, 2025, InsurTech: Digital technologies in insurance, The Geneva Papers on Risk and Insurance - Issues and Practice, https://doi.org/10.1057/s41288-024-00344-x. [CrossRef]
  7. Busch, Danny, 2024, The Future of Equivalence in the EU Financial Sector, European Business Organization Law Review, (2024) 25:3-23 https://doi.org/10.1007/s40804-023-00306-1. [CrossRef]
  8. Buttigieg, Christopher, Zimmermann, Beatriz Brunelli, 2024, The digital operational resilience act: challenges and some reflections on the adequacy of Europe’s architecture for financial supervision, ERA Forum (2024) 25:11–28, https://doi.org/10.1007/s12027-024-00793-w. [CrossRef]
  9. Chalmers, Damian, Davies, Gareth, and Monti, Giorgio, 2019, European Union Law, 4th edition, Cambridge University Press.
  10. Clausmeier, Dirk, 2023, Regulation of the European Parliament and the Council on digital operational resilience for the financial sector (DORA), International Cybersecurity Law Review (2023) 4:79–90, https://doi.org/10.1365/s43439-022-00076-5. [CrossRef]
  11. Communication from the Commission to the European Parliament, the Council, the European Central Bank, the European Economic and Social Committee and the Committee of the Regions, Equivalence in the area of financial services, 29 July 2019, COM (2019) 349 final. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52019DC0349 (Accessed on 17 February 2025).
  12. Denuit, Michel, Dhaene, Jan, and Robert, Christian Y., 2022, Risk-sharing rules and their properties, with applications to peer-to-peer insurance, Journal of Risk and Insurance, 2022; 89:615-667.
  13. Denuit, Michel, and Robert, Christian Y., 2021, Risk sharing under the dominant peer-to-peer property and casualty insurance business models, Risk Management Insurance Review, 2021; 24:181-205.
  14. Draghi report on EU competitiveness, 2024. Available online: https://commission.europa.eu/topics/eu-competitiveness/draghi-report_en (Accessed on 17 February 2025).
  15. EIOPA 2024. Report on the Digitalisation of the European Insurance Sector. Available online: https://www.eiopa.europa.eu/document/download/6ca9e171-42b9-44d7-a2e6-beaf0134ecb8_en?filename=Report%20on%20the%20digitalisation%20of%20the%20European%20insurance%20sector.pdf (Accessed on 17 February 2025).
  16. EIOPA 2022a. EIOPA letter to co-legislators on the Artificial Intelligence Act. Available online: https://www.eiopa.europa.eu/system/files/2022-07/letter_to_co-legislators_on_the_ai_act.pdf (Accessed on 17 February 2025).
  17. EIOPA 2022b, Report on the application of the Insurance Distribution Directive (IDD). Available online: https://www.eiopa.europa.eu/system/files/2022-01/eiopa-bos-21-581_report_on_the_application_of_the_idd.pdf (Accessed on 17 February 2025).
  18. EIOPA 2022c, Final report on technical advice to the European Commission regarding certain aspects relating to retail investor protection. Available online: https://www.eiopa.europa.eu/document/download/94eb7964-9dbd-41cb-a04d-10b907ba9a89_en?filename=Final%20Report%20-%20Technical%20advice%20on%20Retail%20Investor%20Protection.pdf (Accessed on 17 February 2025).
  19. EIOPA 2020, EIOPA’s approach to the supervision of product oversight and governance. Available online: https://www.eiopa.europa.eu/publications/eiopas-approach-supervision-product-oversight-and-governance_en (Accessed on 17 February 2025).
  20. EIOPA 2019, Report on best practices on licensing, requirements, peer-to-peer insurance and principle of proportionality in an Insurtech context. Available online: https://register.eiopa.europa.eu/Publications/EIOPA%20Best%20practices%20on%20licencing%20March%202019.pdf (Accessed on 17 February 2025).
  21. EIOPA 2014, Report on Good Practices on Comparison Websites. Available online: https://register.eiopa.europa.eu/Publications/Reports/Report_on_Good_Practices_on_Comparison_Websites.pdf (Accessed on 17 February 2025).
  22. EIOPA’s Consultative Expert Group on Digital Ethics in insurance, 2021, Artificial Intelligence Governance Principles: Towards Ethical and Trustworthy Artificial Intelligence in the European Insurance Sector. Available online: https://www.eiopa.europa.eu/document/download/30f4502b-3fe9-4fad-b2a3-aa66ea41e863_en?filename=Artificial%20intelligence%20governance%20principles.pdf (Accessed on 17 February 2025).
  23. Eling, Martin, and Pankoke, David, 2016, Costs and Benefits of Financial Regulation: An Empirical Assessment for Insurance Companies, The Geneva Papers on Risk and Insurance – Issues and Practice, 2016, 41, 529-544.
  24. Eling, Martin, and Lehmann, Martin, 2018, The Impact of Digitalization on the Insurance Value Chain and the Insurability of Risks, The Geneva Papers on Risk and Insurance – Issues and Practice, 2018, 43, 359-396.
  25. Enqvist, Lena, 2023, “Human oversight” in the EU artificial intelligence act: what, when and by whom?, Law Innovation and Technology, 15:2, 508-535, DOI: 10.1080/17579961.2023.2245683. [CrossRef]
  26. Hayes, Adam S., and Ben-Shmuel, Ambreen T., 2024, Under the finfluence: Financial influencers, economic meaning-making and the financialization of digital life, Economy and Society, 53:3, 478-503, https://doi.org/10.1080/03085147.2024.2381980. [CrossRef]
  27. Hamamci, Hatice Nayman, and Aren, Selim, 2024, The direct and indirect effects of financial influencer credibility on investment intention, Croatian Review of Economic, Business and Social Statistics 10(1): 57-69.
  28. Hurk, Arthur van den, 2024, Equivalence and Insurance, European Business Organization Law Review (2024) 25:209–228 https://doi.org/10.1007/s40804-023-00308-z. [CrossRef]
  29. Keasey, Kevin, Lambrinoudakis, Costas, Mascia, Danilo V., and Zhang, Zhengfa, The impact of social media influencers on the financial market performance of firms, European Financial Management, 1-41. https://doi.org/10.1111/eufm.12513. [CrossRef]
  30. Kourmpetis, Stavros, 2023, Management of ICT Third Party Risk Under the Digital Operational Resilience Act, in L. Böffel and J. Schürger (eds.), Digitalisation, Sustainability, and the Banking and Capital Markets Union, EBI Studies in Banking and Capital Markets Law, https://doi.org/10.1007/978-3-031-17077-5_7, pp. 211-226. [CrossRef]
  31. Levantesi, Susanna, and Piscopo, Gabriella, 2021, Mutual peer-to-peer insurance: The allocation of risk, Journal of Co-operative Organization and Management, https://doi.org/10.1016/j.jcom.2021.100154. [CrossRef]
  32. Lima Rego, Margarida, 2025, The boundaries of the insurance contract: group insurance through the lens of the ECJ, in F. Petrosino (ed.), Insurance based investment products, between the market and policyholder protection. What responses from European Union law? Freie Universität Berlin 2025.
  33. Lima Rego, Margarida, and Campos Carvalho Joana, 2020, Insurance in today’s sharing economy: new challenges ahead or a return to the origins of insurance?, in P. Marano/ K. Noussia (eds.), InsurTech: a legal and regulatory view, AIDA Europe Research Series on Insurance Law and Regulation 1, Springer, pp. 27-47.
  34. Mahler, Tobias, 2024, Smart Robotics in the EU Legal Framework: The Role of the Machinery Regulation, Oslo Law Review, Vol. 11, Issue 1: Special issue: AI and Robotics in Healthcare. Available online: https://www.idunn.no/doi/epdf/10.18261/olr.11.1.5 (Accessed on 17 February 2025).
  35. Marano, Pierpaolo, 2021a, The Global Relevance of the EU Single Market on Insurance After the Insurance Distribution Directive, Journal of International Business and Law, Vol. 21:31-67.
  36. Marano, Pierpaolo, 2021b, Management of Distribution Risks and Digital Transformation of Insurance Distribution—A Regulatory Gap in the IDD, Risks 2021, 9(8), 143; https://doi.org/10.3390/risks9080143. [CrossRef]
  37. Marano, Pierpaolo, 2021c, The Contribution of Product Oversight and Governance (POG) to the Single Market: A Set of Organisational Rules for Business Conduct, in P. Marano – K. Noussia (eds.), Insurance Distribution Directive: A Legal Analysis, Springer, 55-74.
  38. Marano, Pierpaolo, 2019, Navigating Insurtech: The digital intermediaries of insurance products and customer protection in the EU, Maastricht journal of European and comparative law, 26:2, 294-315. DOI: 10.1177/1023263x19830345. [CrossRef]
  39. Marano, Pierpaolo, 2016, The EU Regulation on Comparison Websites of Insurance Products, in P. Marano – I. Rokas - P. Kochenburger (eds.), The “Dematerialized” Insurance. Distance Selling and Cyber Risks from an International Perspective, Springer, 59-84.
  40. McGee, Andrew, The Single Market in Insurance. Breaking Down the Barriers, Ashgate.
  41. Mertens, Floris and Goetghebuer, Julie, 2024, Virtual Reality, Real Responsibility: The Regulatory Landscape for Virtual Influencers, Financial Law Institute Working Paper Series 2024-02, Available at SSRN: https://ssrn.com/abstract=4718820 or http://dx.doi.org/10.2139/ssrn.4718820. [CrossRef]
  42. Nemeczek, Heinrich, 2024, Third-Country Regime and Equivalence: FinTechs, European Business Organization Law Review (2024) 25:145–165. https://doi.org/10.1007/s40804-024-00310-z. [CrossRef]
  43. Nicoletti, Bernardo, 2021, Insurance 4.0., Palgrave Studies in Financial Services Technology, https://doi.org/10.1007/978-3-030-58426-9_8. [CrossRef]
  44. Ostrowska, Marta, 2021, Regulation of InsurTech: Is the Principle of Proportionality an Answer?, Risks, 9: 185. https://doi.org/10.3390/risks9100185. [CrossRef]
  45. Sharma, Paul, and Cadoni, Paolo, 2010, Solvency II: A New Regulatory Frontier, in C. Kempler, M. Flamée, C. Yang and P. Windels (eds), Global Perspectives on Insurance Today, Palgrave Macmillan, pp. 53-66.
  46. Staszcyk, Piotr, 2024, Navigating the AI landscape in the EU: fostering innovation while upholding ethical principles, in M. Balcerzak and J. Kapelanska-Pregowska (eds.), Artificial Intelligence and International Human Rights Law. Developing Standards for a Changing World, Edward Elgar. Available online: https://www.elgaronline.com/edcollbook-oa/book/9781035337934/9781035337934.xml (Accessed on 17 February 2025).
  47. Stricker, Lukas, Wagner, Joël, and Zeier Roschmann, Angela, 2023, The Future of Insurance Intermediation in the Age of the Digital Platform Economy, Journal of Risk and Financial Management 16:381. https://doi.org/10.3390/jrfm16090381. [CrossRef]
  48. Tranholm Mouritzen, Simone Lykke, Penttinen, Valeria, and Pedersen, Susanne, 2023, Virtual influencer marketing: the good, the bad and the unreal, European Journal of Marketing 58(4). DOI: 10.1108/EJM-12-2022-0915. [CrossRef]
  49. Van Hulle, Karel, 2019, Solvency II Requirements for EU Insurers. Solvency II is good for you, Intersentia.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.