Preprint Article
This version is not peer-reviewed.

The Ownership of AI Art: Cultural Sustainability, Ethical Authorship, and Museum Governance in AI-Assisted Artistic Practices

Submitted: 29 January 2026; Posted: 30 January 2026


Abstract
This study examines how AI-assisted artistic practices reshape authorship, cultural ownership, and museum governance through the lens of cultural sustainability. Drawing on qualitative methods including literature analysis, expert interviews, and exhibition case studies, it explores emerging ethical challenges related to data provenance, creative agency, and institutional responsibility. The findings reveal hybrid forms of authorship that disrupt conventional intellectual property frameworks and highlight museums’ growing role as mediators between technological innovation and cultural preservation. While AI-driven exhibitions expand accessibility and engagement, they also risk cultural homogenization. The study offers strategic insights for policymakers and cultural institutions on fostering ethical, inclusive, and sustainable AI integration in artistic practice.

1. Introduction

The rapid maturation of Generative Artificial Intelligence (GenAI) has catalyzed a significant transformation in artistic production, reshaping creative processes as well as the conceptual and institutional frameworks that support cultural practice. Recent advances in generative models, including diffusion-based image systems, generative adversarial networks, and multimodal text-to-image architectures, have enabled machines to produce visually and conceptually sophisticated artworks that rival human-created outputs [1,2]. Unlike earlier forms of computational assistance, which primarily functioned as extensions of human intention, contemporary Generative AI systems increasingly participate in decision-making processes that influence aesthetic form, style, and meaning. This shift challenges established distinctions between tool and creator, originality and derivation, and human agency and machinic autonomy, thereby unsettling foundational assumptions in art history, intellectual property law, and museum governance [3].
From a legal perspective, the emergence of AI-assisted and AI-generated art exposes structural limitations within existing copyright regimes. Copyright law in most jurisdictions remains grounded in anthropocentric notions of authorship, originality, and creative intent, presupposing that protected works must originate from human creative agency [4]. Courts in the United States, the United Kingdom, and the European Union have largely maintained this position, reaffirming that works generated without meaningful human input are ineligible for copyright protection. The 2025 decision by the United States Court of Appeals, which denied copyright protection for fully AI-generated artworks, exemplifies the persistence of this doctrinal approach. While such rulings aim to safeguard the centrality of human creativity, they also reveal an increasing misalignment between legal frameworks and contemporary creative practices, where authorship is often distributed across complex human and technical assemblages.
Scholarly research in human-computer interaction and computational creativity has long questioned the assumption that creativity is an exclusively human attribute. Early studies emphasized the role of algorithms in extending human creative capacity, yet more recent work conceptualizes creativity as a relational and emergent phenomenon that arises through interaction between humans, machines, and data infrastructures [2,5]. It has been argued that Generative AI systems introduce forms of novelty that cannot be fully reduced to human intention, particularly when outputs are produced through probabilistic inference rather than deterministic rules [1]. This reconceptualization complicates linear models of authorship and ownership, raising questions about attribution, responsibility, and accountability in AI-assisted artistic practice [6].
Beyond authorship, Generative AI introduces broader socio-cultural implications that bear directly on questions of cultural sustainability. On one hand, AI technologies are widely celebrated for democratizing access to creative tools, lowering barriers to artistic production, and supporting the preservation and revitalization of cultural heritage through digitization and algorithmic reconstruction [7]. On the other hand, critical scholarship highlights the risks associated with cultural homogenization, extractive data practices, and algorithmic bias, particularly when training datasets disproportionately reflect dominant cultural norms [8,9]. These dynamics raise pressing ethical concerns regarding whose cultures are represented, whose aesthetic traditions are reproduced, and whose creative labor remains unacknowledged within AI-generated outputs.
Museums and cultural institutions occupy a pivotal role in mediating these tensions between technological innovation and cultural stewardship. As custodians of cultural memory and arbiters of artistic legitimacy, museums increasingly function as governance actors that shape how AI-assisted artworks are interpreted, legitimized, and valued. The integration of AI-generated or AI-assisted works into exhibitions and collections compels institutions to address complex questions of authenticity, provenance, curatorial authority, and public trust [10,11]. International policy discourse reinforces these concerns. UNESCO emphasizes that while AI can enhance curatorial practice, audience engagement, and collection management, it also risks blurring distinctions between human-created heritage and algorithmically generated cultural expressions, potentially undermining institutional credibility and shared cultural memory [12].
Despite the growing body of conceptual and normative literature on AI and creativity, empirical research examining how cultural institutions operationalize governance frameworks in response to AI-assisted art remains limited. There is insufficient understanding of how museums translate abstract ethical principles into concrete institutional policies, how ownership and authorship claims are negotiated in curatorial practice, and how cultural sustainability is enacted under algorithmically mediated conditions. Addressing this gap, the present study examines AI-Creative 2025 [13] as a case study to investigate the intersection of cultural sustainability, ethical authorship, and museum governance in a real-world AI art ecosystem. Through document analysis and qualitative case study methods, this research explores how institutional norms, legal discourses, and cultural values are articulated and negotiated in contemporary AI-assisted artistic production.
The study also draws on AI-assisted heritage digitization initiatives conducted under the Digital Dunhuang Project, led by the Dunhuang Academy, which integrates machine learning techniques into mural imaging, color simulation, and virtual cave reconstruction [14,15].

2. Literature Review

2.1. Artificial Intelligence and Creativity

Academic discussions of Artificial Intelligence (AI) and creativity initially framed computational systems as tools that augment human artistic production [16]. Early work in generative art and computational creativity emphasized algorithmic systems as extensions of human intention, with creativity ultimately residing in the human designer who defined rules, parameters, and aesthetic goals [2,17]. Within this framework, AI systems were understood as facilitators of variation and efficiency rather than independent creative agents. However, developments in Generative AI have produced digital artworks that look increasingly “creative” [18].
Advances in machine learning, particularly in deep neural networks and generative models, have challenged this instrumental understanding of creativity. Elgammal [1] suggests that contemporary generative systems produce outputs that cannot be fully anticipated or directly traced to human intention, particularly when models learn aesthetic patterns from large-scale datasets. Creativity, in this sense, becomes an emergent property of interactions between human input, algorithmic inference, and training data rather than a solely human attribute.
This shift is supported by research in human-computer interaction, which conceptualizes creativity as a relational and processual phenomenon. Rezwana and Ford [5] demonstrate that co-creative AI systems actively shape artistic decision-making by introducing constraints, affordances, and generative possibilities that influence both process and outcome. These findings complicate linear models of authorship and support hybrid frameworks in which creative agency is distributed across human and technical actors.

2.2. Authorship, Ownership, and Copyright in AI-Generated Art

The increasing autonomy of Generative AI systems has created substantial tension within existing intellectual property regimes [19,20]. Copyright law in most jurisdictions remains grounded in anthropocentric assumptions that equate authorship with human originality and intentionality. Legal scholars consistently note that current frameworks struggle to accommodate creative works produced through human–machine collaboration [21].
Samuelson [4] documents how recent judicial decisions in the United States reaffirm that works generated without meaningful human creative input are ineligible for copyright protection. These rulings reflect institutional efforts to preserve doctrinal stability rather than assessments of AI’s creative capacity. Similar debates are present in the United Kingdom and the European Union, where policymakers continue to grapple with the legal status of AI-generated outputs.
Ethical scholarship expands this discussion beyond legal attribution to address data governance and responsibility. Crawford and Paglen [9] demonstrate that Generative AI systems rely on extensive datasets composed of cultural materials that are frequently collected without consent, raising concerns about invisible labor, cultural appropriation, and unequal value extraction. Gunkel [3] argues that an exclusive focus on legal authorship obscures the broader sociotechnical infrastructures that shape creative production, including platform governance and institutional power.
Together, these studies suggest that authorship in AI-assisted art should be understood as a governance issue involving ethical accountability, transparency, and cultural justice rather than a purely legal question.

2.3. Cultural Sustainability and Algorithmic Mediation

Cultural sustainability refers to the maintenance, promotion, and protection of cultural knowledge systems, heritage, identity, and traditions so that they are preserved for the future [22]. The relationship between culture and sustainability has been examined through dedicated conceptual frameworks [23]. As its priority has grown, cultural sustainability has also become an important strategy enabling libraries and museums to withstand external change [24].
With rapid developments in Generative AI technology, cultural sustainability scholarship provides a critical framework for evaluating the long-term implications of AI-assisted creativity. Scholars emphasize that sustainable cultural development requires balancing innovation with the preservation of diversity, contextual meaning, and community agency [7]. Within this perspective, technology is not inherently beneficial but must be assessed in relation to its cultural and social consequences.
AI technologies have been widely promoted for their potential to support cultural heritage preservation through digitization, restoration, and reconstruction. However, critical research warns that algorithmic systems may also contribute to cultural homogenization. O’Neil [8] demonstrates that data-driven systems tend to reproduce existing inequalities, particularly when datasets reflect historical imbalances in representation. In cultural contexts, this leads to concerns that AI-generated outputs may privilege dominant aesthetics while marginalizing less documented traditions. It can be argued that without deliberate governance interventions, AI-mediated cultural production risks standardizing aesthetic expression in ways that undermine cultural plurality. These concerns highlight the need for participatory and inclusive approaches to AI deployment that prioritize ethical data practices and community involvement.

2.4. Museums, Institutional Governance, and AI

Museums and cultural institutions occupy a central role in mediating the relationship between AI innovation and cultural sustainability [25]. Museological scholarship emphasizes that museums actively construct cultural meaning through curatorial narratives and institutional policies rather than merely preserving objects [10]. Digital technologies have already transformed museums into sites of knowledge production and public engagement [26], and Generative AI intensifies these transformations [27].
Cameron [11] argues that the integration of AI-generated or AI-assisted works challenges museums’ authority over authenticity, provenance, and interpretation. When institutions exhibit AI-generated art, they implicitly endorse specific narratives about creativity and technology, shaping public trust and institutional legitimacy.
Some research highlights that AI adoption also reshapes internal governance structures. Harrison et al. [28] note that museums increasingly require policies addressing data ethics, intellectual property, and collaborative authorship. Simon [29] emphasizes participatory governance as essential for maintaining public trust in technologically mediated cultural environments. International policy frameworks reinforce these concerns, with UNESCO calling for cultural institutions to align AI use with principles of transparency, cultural diversity, and human rights [12].
Despite this growing body of normative guidance, empirical research examining how museums operationalize these governance principles in AI art contexts remains limited.

2.5. Research Gap

The literature identifies three intersecting dynamics. Firstly, Generative AI reshapes creativity through hybrid and distributed authorship. Secondly, legal and ethical frameworks struggle to address ownership and responsibility in AI-mediated cultural production. Thirdly, museums assume expanded governance roles as mediators between technological innovation and cultural sustainability. While these debates are well theorized, there remains a lack of empirical research examining how these dynamics intersect in practice. This study addresses this gap through a case-based analysis of AI-Creative 2025 [13], contributing empirical insight into how authorship, governance, and cultural sustainability are negotiated within an institutional AI art ecosystem.

3. Materials and Methods

3.1. Research Design and Epistemological Orientation

This study adopts a qualitative, interpretive research design to examine how AI-generated and AI-assisted artistic practices challenge existing copyright frameworks and how cultural institutions respond through governance mechanisms in pursuit of cultural sustainability. The research is grounded in the assumption that authorship, ownership, and responsibility in AI art are not purely technical or legal categories, but socially and institutionally constructed concepts that emerge through practice. As such, the study does not seek to measure the aesthetic quality or technical performance of AI systems, but to understand how meaning, legitimacy, and accountability are produced and stabilized within specific institutional contexts.
Qualitative inquiry is particularly suited to this research because the core issues under investigation—hybrid authorship, ethical responsibility, and sustainable cultural governance—are normative and context-dependent. These phenomena cannot be adequately captured through quantitative indicators alone, but require close attention to discourse, institutional decision-making, and interpretive practices. The research, therefore, prioritizes depth over breadth and analytical generalization over statistical representativeness, consistent with established qualitative research traditions in cultural studies and museum research [30,31].

3.2. Analytical Framework

The analysis is guided by an Ethical-Cultural-Institutional (ECI) framework developed through engagement with literature on AI ethics, cultural sustainability, and museum governance. This framework provides an integrated lens for examining how AI-assisted artistic practices operate simultaneously at ethical, cultural, and institutional levels.
Ethically, the ECI framework covers how authorship and responsibility are attributed in situations where creative agency is distributed across humans, algorithms, and datasets. Culturally, it examines how AI-generated outputs affect the sustainability of heritage representation, particularly in relation to contextual integrity and the risk of aesthetic homogenization. Institutionally, it focuses on the governance mechanisms through which museums, research centers, and cultural organizations mediate legal uncertainty, regulate data use, and maintain public trust.
Rather than treating these dimensions as discrete, the ECI framework conceptualizes them as mutually constitutive. Ethical attribution practices are shaped by institutional norms; cultural sustainability depends on governance structures; and institutional authority is legitimized through ethical and cultural claims. This framework, therefore, enables a holistic analysis of AI-assisted art as a socio-technical and socio-cultural practice.
This study addresses three research questions:
  • how ethical authorship and responsibility are articulated in AI-assisted artistic practices characterized by hybrid human–machine creativity;
  • how such practices influence cultural sustainability, particularly regarding contextual integrity and representational diversity; and
  • how museums and cultural institutions operationalize governance frameworks to manage legal, ethical, and cultural uncertainties associated with AI-assisted art.
Together, these questions structure the analysis within an ECI framework.

3.3. Case Study Strategy and Empirical Scope

The study employs a comparative case study strategy to examine how the ECI dimensions are enacted in different institutional settings. Case studies are particularly appropriate for investigating emerging phenomena characterized by legal ambiguity and rapid technological change, as they allow for close analysis of practice within real-world contexts.
Two primary empirical cases were selected. The first focuses on AI-assisted projects that draw upon digitized Dunhuang cultural heritage materials, including mural imagery and manuscript archives. Dunhuang represents a critical case due to the non-renewable nature of its cultural resources and the heightened ethical and institutional sensitivity surrounding their reuse. The increasing application of AI techniques—such as image reconstruction, color simulation, and generative visual extrapolation—to Dunhuang datasets raises acute questions about authorship, ownership, and cultural consent. These questions are intensified by the transnational circulation of Dunhuang materials and the long-standing governance structures developed to protect their integrity.
The second case examines AI-Creative 2025, an experimental AI art initiative organized by the Digital Humanities Laboratory at Peking University [13]. This case provides a contrasting context in which AI-generated and AI-assisted artworks are explicitly framed as research-led creative experiments. Unlike heritage-based projects, AI-Creative 2025 operates within a university governance structure that requires disclosure of dataset provenance, degrees of human intervention, and algorithmic processes. This makes it possible to analyze how hybrid authorship and ethical accountability are articulated in the absence of clear legal recognition for AI-generated works.
Together, these two cases enable comparative analysis between a heritage-protection-oriented context and an experimental academic context, illuminating how different institutional mandates shape responses to AI-assisted creativity.

3.4. Data Collection and Analytical Procedures

Empirical material was collected between 2020 and 2025 through a combination of document analysis, project and exhibition analysis, and semi-structured expert interviews. Document analysis focused on copyright rulings, international policy guidelines on AI and culture, museum and institutional governance documents, and project-level policies governing data use and attribution. These documents were selected for their direct relevance to AI-generated art, cultural heritage reuse, and institutional responsibility.
In addition, curatorial texts, exhibition narratives, and public-facing project descriptions associated with the selected cases were analyzed to examine how AI involvement was communicated to audiences and how authorship and responsibility were framed. This material provided insight into institutional strategies for managing public understanding and trust.
In addition to document and project analysis, this study employed semi-structured interviews with heritage researchers from the Dunhuang Academy, digital heritage technology specialists, and museum curators involved in AI-assisted cultural projects. The interviews explored institutional decision-making processes around attribution and disclosure, perceptions of authorship, responsibility, and legal uncertainty, and institutional approaches to balancing technological innovation with cultural responsibility in heritage and artistic contexts. To ensure ethical compliance and protect participant confidentiality, individual identities were not disclosed, and all findings are reported in aggregated form. All qualitative material was analyzed using thematic analysis following Braun and Clarke’s approach [32]. An initial coding structure followed the ECI framework, while additional themes were identified inductively through close reading of the data.
Cross-case comparison was employed to identify recurring patterns and points of divergence between the Dunhuang and AI-Creative 2025 cases, allowing the analysis to move beyond description towards conceptual insight.

3.5. Validity, Limitations, and Ethical Considerations

Analytical validity was enhanced through triangulation across multiple data sources, including policy documents, institutional texts, and interview material. Comparing findings across two distinct institutional contexts further strengthened analytical robustness by revealing how similar challenges are addressed through different governance arrangements.
The study has several limitations. As a qualitative inquiry, it does not aim to produce statistically generalizable findings. Access to proprietary datasets and internal institutional deliberations was necessarily limited, meaning that the analysis relies on publicly available documentation and participant testimony. Nevertheless, this limitation reflects broader conditions of opacity surrounding AI systems and is itself indicative of the governance challenges under investigation.
Ethical considerations were central to the research design, particularly related to the use of culturally sensitive heritage materials. The study does not reproduce or evaluate AI-generated visual outputs but focuses on governance practices and interpretive frameworks. Care was taken to respect institutional confidentiality and to avoid misrepresentation of culturally significant materials, especially in the Dunhuang case.

4. Results: AI-Assisted Artistic Practices Using the Ethical-Cultural-Institutional (ECI) Framework

This study examines AI-assisted artistic practices through an Ethical-Cultural-Institutional (ECI) framework to analyze their implications for authorship, cultural sustainability, and museum governance. Drawing on document analysis and multiple case studies, including AI-Creative 2025 and AI-assisted Dunhuang heritage initiatives, the framework enables a structured comparison between policy discourse, curatorial decision-making, and artistic practice. The analysis is organized around three dimensions: ethics, culture, and institutions.

4.1. Ethical Dimension: Authorship and Creative Agency

From an ethical perspective, AI-assisted art and other creative outputs raise unresolved questions concerning authorship and creative responsibility [33]. Document analysis confirms that existing copyright frameworks and institutional policies continue to privilege human authorship, with AI-generated outputs remaining outside formal legal protection. These frameworks assume that creativity is grounded in human intention and accountability.
The Dunhuang-related case studies complicate this assumption. In AI-assisted projects drawing on Dunhuang mural imagery, artists and researchers employ generative models trained on digitized frescoes to explore stylistic reconstruction, color restoration, and speculative visual interpretation. While human experts determine research questions, training parameters, and interpretive boundaries, AI systems contribute materially to the generation of visual outcomes. This distributed process makes singular attribution difficult.
Institutional responses to this challenge resemble those observed in AI-Creative 2025. Curatorial and project documentation typically attribute authorship to human creators or research teams while explicitly disclosing the role of AI systems and datasets. Interviews indicate that this approach is intended to maintain ethical accountability, particularly given the cultural and religious significance of Dunhuang heritage. Ethical authorship in this context is therefore framed not as exclusive creative ownership but as responsible stewardship, emphasizing transparency, consent, and respect for cultural source material.

4.2. Cultural Dimension: Sustainability and Representation

The cultural dimension examines how AI-assisted practices affect the sustainability and representation of heritage. Policy documents and heritage research literature often highlight AI’s potential to support preservation through high-resolution digitization, damage simulation, and virtual reconstruction, particularly for fragile sites such as the Mogao Caves in Dunhuang [34]. In this context, AI is positioned as a technical means of extending access while reducing physical intervention.
However, the Dunhuang case studies also reveal cultural risks. Generative models trained on mural datasets may reproduce visual motifs without fully capturing their historical, religious, or ritual meanings. When outputs are decontextualized or aesthetically stylized, there is a risk of reducing Dunhuang imagery to decorative or generic cultural symbols. This concern was frequently raised in curatorial interviews, particularly regarding public-facing exhibitions and creative reinterpretations.
To address these risks, institutions involved in Dunhuang-related AI projects adopt specific cultural safeguards. These include limiting training datasets to curated or scholarly-validated materials, embedding historical interpretation within exhibition narratives, and consulting domain experts in Buddhist art and Silk Road history. Such practices demonstrate that cultural sustainability in AI-assisted heritage work depends on institutional mediation that prioritizes contextual integrity over technical novelty.

4.3. Institutional Dimension: Governance and Mediation

The institutional dimension focuses on how museums and heritage organizations govern AI-assisted practices. Document analysis identifies an increasing emphasis on ethical AI governance, data responsibility, and transparency in international heritage guidelines [35]. These principles are particularly salient in heritage contexts where institutions act as custodians of irreplaceable cultural resources.
In the case of Dunhuang, governance mechanisms are especially pronounced. Institutions overseeing AI-assisted Dunhuang projects establish clear protocols regarding dataset ownership, permissible uses of digitized heritage materials, and review processes for AI-generated outputs. These protocols are designed to prevent misuse, commercial exploitation without consent, and misrepresentation of culturally sensitive content.
Exhibitions and public programs related to Dunhuang further reflect institutional mediation. Curatorial strategies explicitly explain how AI systems were used, what decisions were made by human experts, and where interpretive limits were imposed. Educational materials encourage visitors to view AI-generated outputs as interpretive tools rather than authoritative reconstructions. Through these practices, institutions translate abstract ethical and legal principles into concrete governance arrangements, reinforcing public trust and institutional legitimacy.
Taken together, the Dunhuang case demonstrates how ethical authorship, cultural sustainability, and institutional governance are closely intertwined in AI-assisted heritage practice. Rather than serving as a neutral application of technology, AI becomes embedded within existing cultural responsibilities, requiring careful institutional oversight to balance innovation with preservation.
Comparable AI-assisted virtual heritage practices can also be observed in international initiatives such as CyArk [36], where machine learning supports digital documentation and preservation rather than autonomous cultural production. At Tate Modern in London, for example, AI-informed artworks presented in algorithmic art exhibitions demonstrate institutional governance through the clarification of authorship, dataset provenance, and curatorial responsibility.

4.4. Cross-Case Integration: Analysis Within the ECI Framework

Table 1 summarizes how selected AI-assisted art initiatives engage with ethical, cultural, and institutional considerations. A comparative analysis, however, reveals significant variations in how these dimensions are prioritized and enacted across different contexts. Rather than exemplifying a single model of sustainable AI practice, the cases reflect various context-specific governance approaches shaped by institutional mandates, cultural sensitivities, and modes of public engagement.
Across all cases, ethical authorship is expressed through hybrid attribution that acknowledges both human and algorithmic contributions. Yet the normative framing of this hybridity differs. In research-oriented and experimental settings such as AI-Creative 2025, hybrid authorship is conceptualized as a condition of co-creative practice. Ethical responsibility is maintained primarily through procedural transparency, including disclosure of dataset provenance, algorithmic processes, and degrees of human intervention. By contrast, heritage-focused initiatives such as the Digital Dunhuang Project and CyArk adopt a more conservative attribution model, positioning AI explicitly as an assistive tool rather than a creative agent. Here, ethical responsibility resides predominantly at the institutional level, reflecting custodial obligations associated with cultural and historically sensitive materials.
Variations are particularly evident in approaches to cultural sustainability. Heritage-based projects emphasize contextual integrity, prioritizing historical accuracy, scholarly validation, and cultural sensitivity over generative experimentation. In the Digital Dunhuang Project, for instance, AI-assisted reconstruction and visualization operate within established heritage governance frameworks that regulate dataset selection, interpretive scope, and modes of representation. These constraints act as sustainability mechanisms, mitigating risks of cultural homogenization or decontextualization. Conversely, gallery-based AI experiments and academic exhibitions allow greater aesthetic and conceptual flexibility, pursuing cultural sustainability through curatorial framing, interpretive mediation, and audience education rather than procedural restrictions.
Institutional governance further differentiates the cases. Heritage institutions formalize governance through explicit policies on data ownership, ethical reuse, and review of AI-generated outputs, reflecting the non-renewable nature of cultural heritage and the ethical stakes of algorithmic transformation. Experimental and academic contexts are generally less prescriptive, relying instead on transparency, reflexivity, and participatory engagement. While these approaches afford greater creative latitude, they nonetheless function as sustainability-oriented mechanisms by reinforcing accountability and institutional legitimacy.
This comparative analysis indicates that ethical authorship, cultural sustainability, and institutional governance are interdependent rather than discrete dimensions. Practices prioritizing cultural sustainability tend to adopt structured governance and restrained attribution, whereas experimental practices rely more on interpretive and procedural safeguards to manage cultural risk. This highlights the analytical utility of the ECI framework in demonstrating that sustainable AI-assisted artistic practice emerges from context-sensitive alignments among technological innovation, cultural responsibility, and institutional authority, rather than from universally applicable technical solutions.

5. Discussion and Conclusion

This study investigates how AI-assisted artistic practices are altering established understandings of authorship, cultural sustainability, and museum governance. Adopting a qualitative research design, the study combines document analysis of legal texts, policy frameworks, and curatorial guidelines with comparative case studies of AI-assisted exhibitions and heritage-oriented initiatives. This approach allows for close examination of how abstract ethical principles and legal norms are interpreted and enacted within concrete institutional settings. The analysis is structured around an Ethical-Cultural-Institutional (ECI) framework, which highlights three interrelated dynamics.
Firstly, at the ethical level, AI-assisted creation complicates conventional notions of authorship and creative responsibility. Rather than producing a simple displacement of human authorship, the cases examined reveal negotiated forms of hybrid authorship in which human artists, curators, and institutions retain responsibility while acknowledging the generative contribution of algorithmic systems. This negotiated authorship attribution exposes the limits of existing copyright regimes and underscores the role of institutional practice in shaping ethical recognition beyond formal legal definitions.
Secondly, at the cultural level, AI demonstrates both preservative and disruptive potential. While AI technologies support digitization, virtual reconstruction, and expanded public access to cultural heritage, they also risk flattening cultural differences when outputs are detached from historical, religious, or social contexts. The findings show that cultural sustainability is not an automatic outcome of technological adoption but depends on curatorial mediation, contextual interpretation, and selective control over training data and representational frameworks.
Thirdly, at the institutional level, museums and cultural organizations emerge as key sites where ethical and cultural tensions are managed. Through acquisition criteria, exhibition narratives, and audience-facing interpretation, institutions translate legal and ethical principles into practical governance mechanisms. These practices shape how AI-assisted works are legitimized, explained, and evaluated, reinforcing the museum’s role as a mediator between technological innovation and public trust.
Across cases, the study demonstrates that the successful integration of AI into artistic and heritage practices relies on coordinated attention to ethical responsibility, cultural specificity, and institutional governance. Rather than treating AI as a neutral creative instrument, the findings suggest that sustainable adoption requires explicit recognition of hybrid authorship, culturally grounded curatorial strategies, and transparent institutional decision-making. The study contributes practical insights for policymakers and cultural institutions seeking to integrate AI in ways that support ethical accountability, cultural integrity, and long-term sustainability in the arts.

Author Contributions

Conceptualization, H.B. and J.B.; methodology, H.B.; formal analysis, H.B.; investigation, H.B.; resources, writing—original draft preparation, H.B. and J.B.; writing—review and editing, J.B. and H.B.; project administration, J.B.; funding acquisition, H.B. and J.B. Both authors have read and agreed to the published version of the manuscript.

Funding

Jonathan Bowen is funded by the UK Universities Superannuation Scheme (USS2381362).

Data Availability Statement

Data available on request due to privacy and ethical reasons.

Acknowledgments

The Dunhuang Academy provided indirect travel support and funding that initiated this research. Jonathan Bowen received additional support and funding from Museophile Limited. All figures in this paper are by the authors.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Elgammal, A. (2019). AI is blurring the definition of artist. American Scientist, 107(1), 18–21. [CrossRef]
  2. McCormack, J., Gifford, T., & Hutchings, P. (2019). Autonomy, authenticity, authorship and intention in computer-generated art. In A. Ekárt, A. Liapis, & M. L. Castro Peña (Eds.), Computational Intelligence in Music, Sound, Art and Design (Lecture Notes in Computer Science, Vol. 11453, pp. 35–50). Springer. [CrossRef]
  3. Gunkel, D. J. (2020). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology, 22, 307–320. [CrossRef]
  4. Samuelson, P. (2023). Generative AI meets copyright. Science, 381(6654), 158–161. [CrossRef]
  5. Rezwana, J. & Ford, C. (2025). Human-Centered AI Communication in Co-Creativity: An initial framework and insights. In: C&C ‘25: Proceedings of the 2025 Conference on Creativity and Cognition, pp. 651–665. [CrossRef]
  6. Dignum, V. (2022). Responsible Artificial Intelligence – from Principles to Practice. arXiv:2205.10785, Computers and Society. [CrossRef]
  7. Loach, K. & Rowley, J. (2021). Cultural sustainability: A perspective from independent libraries in the United Kingdom and the United States. Journal of Librarianship and Information Science, 54(1). [CrossRef]
  8. O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group. [CrossRef]
  9. Crawford, K. & Paglen, T. (2021). Excavating AI: The politics of images in machine learning training sets. AI & Society: Journal of Knowledge, Culture and Communication, 36, 1105–1116. [CrossRef]
  10. Parry, R. (2010). Museums in a Digital Age. Routledge, Taylor & Francis Group. [CrossRef]
  11. Cameron, F. R. (2021). The Future of Digital Data, Heritage and Curation in a More-than-Human World. Routledge, Taylor & Francis Group. [CrossRef]
  12. UNESCO (2025). Report of the Independent Expert Group on Artificial Intelligence and Culture. Independent Expert Group, UNESCO. Available online: https://www.unesco.org/en/mondiacult/themes/artificial-intelligence-and-culture (accessed on 26 January 2026).
  13. AI-Creative (2025). 4th Annual International Conference on Digital Humanities for East Asia Classics. Peking University Center for Digital Humanities Research (PKUDH), Beijing, China. Available online: http://ai-creative-2025.pkudh.net (accessed on 26 January 2026).
  14. Dunhuang Academy. Available online: https://www.dha.ac.cn (accessed on 26 January 2026).
  15. Bao, H. & Bowen, J. P. (2025). From Material Conservation to Digital Presence: Reconstructing Visitors’ Heritage Experience and Meaning-Making through Digital Dunhuang. Heritage, 8(12), 534. [CrossRef]
  16. Mazzone, M. & Elgammal, A. (2019). Art, Creativity, and the Potential of Artificial Intelligence. Arts, 8(1), 26. [CrossRef]
  17. Giannini, T. & Bowen, J. P. (Eds.) (2024). The Arts and Computational Culture: Real and Virtual Worlds. Springer Series on Cultural Computing. Springer, Cham. [CrossRef]
  18. Zhou, E. & Lee, D. (2024). Generative artificial intelligence, human creativity, and art. PNAS Nexus, 3(3), pgae052. [CrossRef]
  19. Smits, J. & Borghuis, T. (2022). Generative AI and Intellectual Property Rights. In: Custers, B., Fosch-Villaronga, E. (Eds.), Law and Artificial Intelligence: Regulating AI and Applying AI in Legal Practice, pp. 323–344. Information Technology and Law Series, vol 35. T.M.C. Asser Press, The Hague. [CrossRef]
  20. Thongmeensuk, S. (2024). Rethinking copyright exceptions in the era of generative AI: Balancing innovation and intellectual property protection. The Journal of World Intellectual Property, 27(2), 278–295. [CrossRef]
  21. Mizrahi, S. (2024). Following Generative AI Down the Rabbit Hole: Redefining Copyright’s Boundaries in the Age of Human-Machine Collaborations. University of Ottawa, Canada. [CrossRef]
  22. Meireis, T. & Rippl, G. (Eds.) (2019). Cultural Sustainability: Perspectives from the Humanities and Social Sciences. Routledge, Taylor & Francis Group.
  23. Soini, K. & Dessein, J. (2016). Culture-Sustainability Relation: Towards a Conceptual Framework. Sustainability, 8(2), 167. [CrossRef]
  24. Loach, K., Rowley, J., & Griffiths, J. (2017). Cultural sustainability as a strategy for the survival of museums and libraries. International Journal of Cultural Policy, 23(2), 186–198. [CrossRef]
  25. Corsini, F., Annesi, N., & Frey, M. (2025). The role of AI in museums’ journey towards sustainable development: Socio-technical imaginaries of a cultural and organizational transformation. Technovation, 149, 103280. [CrossRef]
  26. Giannini, T. & Bowen, J. P. (2025). Global Cultural Conflict and Digital Identity: Transforming Museums. Heritage, 6(2), 1986–2005. [CrossRef]
  27. Giannini, T. & Bowen, J. P. (Eds.) (2019). Museums and Digital Culture: New Perspectives and Research. Springer Series on Cultural Computing. Springer, Cham. [CrossRef]
  28. Harrison, R., et al. (2020). Heritage Futures: Comparative Approaches to Natural and Cultural Heritage Practices. UCL Press. [CrossRef]
  29. Simon, N. (2016). The Art of Relevance. Museum 2.0. Available online: https://artofrelevance.org (accessed on 26 January 2026).
  30. Flyvbjerg, B. (2006). Five misunderstandings about case-study research. Qualitative Inquiry, 12(2), 219–245. [CrossRef]
  31. Yin, R. K. (2018). Case Study Research and Applications: Design and Methods (6th ed.). Sage Publications.
  32. Braun, V. & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. [CrossRef]
  33. Adebiyi, O. I. & Adeusi, O. C. (2025). Examining legal and ethical frameworks for protecting intellectual property rights in AI-generated content across creative industries. World Journal of Advanced Research and Reviews, 26(3), 1553–1561. [CrossRef]
  34. Liu, Y. et al. (2026). Bridging Ancient Art and Modern Technology: AI-Driven Storytelling of Dunhuang Mogao Grottoes. In: Intelligent Human Systems Integration (IHSI 2026): Disruptive and Innovative Technologies. AHFE International. [CrossRef]
  35. Pansini, S. et al. (2023). Design of an Ethical Framework for Artificial Intelligence in Cultural Heritage. In: IEEE International Symposium on Ethics in Science, Technology and Engineering (ETHICS). IEEE. [CrossRef]
  36. CyArk. Available online: https://www.cyark.org (accessed on 26 January 2026).
Table 1. Comparison of Ethical, Cultural, and Institutional dimensions.

AI-Creative 2025
  Ethical Authorship: Hybrid attribution (“human-guided AI creations”)
  Cultural Sustainability: Use of culturally specific datasets, contextual explanations
  Institutional Governance: Curatorial protocols for the AI role, visitor engagement

Digital Dunhuang Project
  Ethical Authorship: Attribution clarifies AI-assisted reconstruction
  Cultural Sustainability: Digital reconstruction of endangered motifs, virtual tours
  Institutional Governance: Policy guidelines on AI use, ethical data sourcing

Gallery AI Experiments
  Ethical Authorship: Artists retain primary authorship, AI as a tool
  Cultural Sustainability: Interactive workshops promoting local cultural narratives
  Institutional Governance: Exhibition guidelines, acquisition review, audience feedback loops

CyArk (AI-assisted digital heritage documentation)
  Ethical Authorship: AI positioned as an assistive documentation tool rather than a creative agent
  Cultural Sustainability: Machine learning supports digital documentation and preservation of heritage sites
  Institutional Governance: Institutional custodial responsibility for heritage data and its ethical reuse
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.