1. Introduction
The global transition toward smart cities has profoundly reshaped contemporary urban governance by infusing digital intelligence into planning, management, and decision-making systems [
1,
2]. This transformation is driven not only by technological modernization, but also by the pursuit of sustainability, institutional effectiveness, and inclusive governance. Within this evolving landscape, artificial intelligence (AI) has emerged as a strategic resource for urban systems, enabling advanced analytics, predictive modeling, and decision-support mechanisms to address the growing complexity of metropolitan development [
3,
4]. Recent reviews highlight that AI is becoming a foundational enabler of sustainable smart city development, particularly through data-intensive planning, optimization of urban services, and integrated urban management [
5,
6]. Nevertheless, despite rapid technological diffusion, the role of AI in strengthening participatory urban planning and meaningful citizen engagement remains insufficiently theorized and empirically examined, particularly outside experimental or highly resourced urban contexts [
7,
8,
9].
A substantial body of smart city scholarship continues to conceptualize AI through a predominantly technocentric lens, emphasizing operational efficiency, performance optimization, and administrative control [
3,
6]. While such approaches have delivered tangible gains in municipal service delivery and urban management, they also risk reinforcing top-down governance structures and marginalizing citizens’ roles in shaping urban futures. When AI systems are incorporated within administrative workflows without transparent participatory mechanisms, citizen engagement risks being reduced to passive data provision or narrowly defined consultation [
15,
18]. As a result, participation may function symbolically rather than substantively, thereby weakening democratic accountability and diminishing the legitimacy of planning outcomes [
9,
10].
Participatory urban planning, by contrast, is grounded in the normative assumption that inclusive and deliberative decision-making enhances policy legitimacy, responsiveness to local needs, and institutional accountability [
11,
12]. Decades of planning theory and practice demonstrate that participation quality—defined by influence, feedback, and institutional responsiveness—is as critical as participation access. Without clear mechanisms linking public input to planning outcomes, engagement risks becoming procedural rather than transformative [
10,
12]. In smart city environments, digital participation platforms can broaden reach and diversify voices, but only when they are intentionally designed to support deliberation, co-design, and iterative feedback, rather than one-way information flow [
3,
9,
13].
Within this context, AI occupies a dual and contested position in urban governance. On the positive side, it offers significant potential to enhance participatory planning by structuring and synthesizing large-scale citizen input, supporting interactive visualization of planning trade-offs, and enabling more adaptive and responsive planning cycles aligned with sustainability objectives [
3,
4]. On the negative side, however, AI-assisted planning raises critical ethical and governance challenges, including algorithmic opacity, accountability gaps, data privacy concerns, and the risk of excluding digitally vulnerable groups [
14,
15]. These challenges are particularly acute in metropolitan regions characterized by socio-economic diversity and uneven digital access, where AI-enabled participation may inadvertently privilege technologically confident populations and undermine representativeness [
16,
17]. Moreover, emerging evidence indicates that public trust in AI-enabled governance is not an automatic outcome of technological adoption, but is shaped by perceived transparency, fairness, usefulness, and credibility of the institutions deploying AI systems [
18,
19].
Despite growing recognition of these issues, there remains a notable lack of empirically grounded research examining how AI-enabled participatory planning is experienced by citizens and stakeholders, and how institutional readiness shapes whether AI supports inclusive governance or reinforces technocratic planning models [
7,
8,
9]. Existing studies frequently emphasize adoption trajectories and system performance, with limited attention to how AI reshapes participation mechanisms, power relations, and trust in everyday planning practice [
9,
14].
In response to this gap, the present study examines AI-enabled participatory urban planning for sustainable smart cities, using the empirical case of the Dammam Metropolitan Area, Saudi Arabia. As a rapidly urbanizing region undergoing significant digital transformation within a national smart city and sustainability agenda, Dammam provides a critical context for assessing AI integration into participatory planning processes. The study investigates stakeholder perceptions of AI awareness and use, institutional and technical readiness, participation quality, and trust-related concerns. By conceptualizing AI as a governance instrument embedded within institutional arrangements and shaped by social acceptance, this research contributes to debates on smart urban governance, participatory planning, and sustainability. It also offers transferable insights for metropolitan regions seeking to align AI-driven innovation with inclusive, accountable, and trust-oriented urban planning practices [
3,
13,
19].
2. Literature Review
The rapid diffusion of smart city initiatives has significantly reshaped contemporary urban governance by prioritizing data-driven management, interconnected urban systems, and technology-enabled decision-making [
1,
2,
3,
20,
21]. Smart city frameworks increasingly characterize urban development as a systemic process in which digital technologies support coordination, efficiency, and evidence-informed policymaking across sectors. Within this transformation, AI has emerged as a pivotal capability, enabling predictive analytics, pattern recognition, automation, and advanced decision-support across planning and service domains [
3,
4,
5,
22,
23]. Recent reviews highlight AI’s growing relevance in scenario evaluation, spatial analysis, and the management of complex urban externalities, while also identifying persistent challenges related to governance capacity, organizational readiness, and institutional coordination [
6,
7,
8,
9,
24].
Despite these advances, a substantial share of smart city scholarship continues to conceptualize AI through a technocentric lens that emphasizes efficiency, optimization, and control [
3,
8,
19], often underplaying democratic governance, civic inclusion, and the social consequences of algorithmic decision-making. Nevertheless, a growing body of literature conceptualizes urban AI as a socio-technical governance phenomenon shaped by institutional arrangements, values, and political choices rather than technological capability alone [
25,
26,
27]. This shift reflects broader concerns that AI-driven urban systems, if left unexamined, may reinforce existing power asymmetries and elevate managerial rationalities over participatory governance.
Studies of smart urban management often portray AI as an enabler of administrative automation, service optimization, and accelerated policy execution [
18,
28]. Empirical case studies indicate that municipalities increasingly adopt AI tools to improve operational performance and coordination in response to urban complexity and fiscal constraints [
39]. However, adoption-oriented research consistently highlights significant implementation barriers—including institutional fragmentation, skill shortages, data governance challenges, and managerial uncertainty—within public-sector organizations constrained by accountability, procurement, and regulatory requirements [
7,
29,
30,
31]. These findings reinforce the argument that AI’s urban value cannot be assessed solely through technical performance indicators but must also be evaluated in terms of governance outcomes such as transparency, legitimacy, accountability, and public value creation [
27,
32].
Urban planning theory has long emphasized citizen participation as a cornerstone of legitimate and effective governance. Foundational studies stress deliberation, inclusion, and shared responsibility as essential to shaping sustainable and socially responsive urban futures [
11,
12,
13]. Subsequent scholarship demonstrates that participation quality—measured by influence, institutional responsiveness, and feedback—is as critical as participation access itself [
10,
33]. Within smart city contexts, digital participation tools have expanded engagement channels and increased the scalability of public input [
9,
34]. Nonetheless, empirical evidence consistently indicates that many digital participation mechanisms remain consultative or symbolic, often collecting citizen views without meaningful integration into decision-making processes and thereby reinforcing top-down governance structures [
13,
35]. This disconnect highlights a persistent participation paradox in smart cities, whereby technological capacity to engage citizens advances faster than institutional willingness or ability to share decision-making authority.
Emerging literature is now exploring how AI could reshape participatory urban planning by enabling new forms of interaction between citizens and planning institutions. Reviews suggest that AI can synthesize large volumes of public input, enhance visualization of planning alternatives, and enable more adaptive and responsive planning processes through predictive and pattern-recognition capabilities [
3,
4,
6]. Furthermore, empirical studies indicate that citizens tend to value AI-enabled tools—such as interactive dashboards, participatory mapping, and scenario visualization—that enhance transparency, communication, and collective understanding, rather than those that replace human judgment in planning decisions [
36,
37]. Nonetheless, critical scholarship cautions that AI may amplify exclusion, reproduce inequalities, and consolidate institutional power unless accompanied by explainability, accountability, and procedural safeguards [
14,
15,
38].
These concerns are particularly relevant in urban contexts characterized by extensive data and asymmetric institutional power, where algorithmic decision-making can become opaque and difficult to contest [
39,
40,
41]. Thus, the effectiveness of AI-enabled participation is closely associated with institutional capacity and governance arrangements. Research across digital governance and smart city studies emphasizes that regulatory clarity, interdepartmental coordination, organizational culture, and skilled human capital shape the design, implementation, and embedding of AI in planning practice [
7,
29,
30,
31]. Weak institutional readiness often leaves AI initiatives fragmented or confined to technical units, limiting their relevance for participatory planning and democratic accountability [
28,
31]. Evidence from digital government research further shows that citizens’ willingness to engage with AI-enabled platforms is influenced by perceived value, credibility, and institutional performance rather than technological novelty alone [
42,
43].
Public trust emerges as an additional critical dimension shaping AI-enabled participatory governance. Studies consistently demonstrate that citizens are more likely to engage with digital planning platforms when decision-making processes are perceived as transparent, fair, and responsive [
19,
36]. Conversely, concerns related to surveillance, data privacy, opaque algorithms, and potential misuse of information can undermine trust and discourage participation, particularly among marginalized and digitally vulnerable groups [
14,
15,
16]. Thus, trust operates both as a precondition for engagement and as an outcome of meaningful participation, creating a reinforcing dynamic wherein inclusive processes strengthen institutional legitimacy over time [
33,
34,
35].
Synthesizing these strands, existing scholarship supports an integrated view of AI-enabled participatory urban planning as a socio-technical governance system rather than a technology adoption exercise. Smart city research highlights AI’s potential to enhance planning intelligence and sustainable urban management [
1,
2,
3]. Participatory planning theory emphasizes deliberation, responsiveness, and accountability as foundations of legitimacy [
11,
12,
13]. Critical AI governance scholarship underscores ethical risks related to opacity, exclusion, and rights-based concerns [
14,
15,
16,
17,
18,
19]. Institutional studies further demonstrate that governance capacity mediates whether AI fosters inclusive participation or reinforces technocratic models [
7,
8,
29,
30,
31,
32]. Accordingly, this study adopts an integrated conceptual framework (
Figure 1) linking AI capabilities, institutional capacity, participatory mechanisms, and public trust, providing an analytical foundation for examining how AI-enabled participatory planning can contribute to more inclusive, accountable, and sustainability-oriented smart urban governance.
3. Methodology
This study adopts a mixed-methods research design to examine how AI-enabled participatory urban planning functions as a socio-technical and institutionally mediated governance process in the context of sustainable smart cities. Mixed-methods approaches are particularly suitable for investigating complex governance phenomena that integrate technological systems, institutional capacity, and citizen perceptions, as they allow for the combination of quantitative generalization with qualitative depth and interpretation [
44,
45,
46]. Consistent with the conceptual framing developed in the literature section, the methodological design reflects the premise that AI is not a neutral technological tool but a governance instrument whose participatory impacts are shaped by institutional readiness and public trust [
32,
41]. By integrating quantitative and qualitative evidence, the study enables triangulation across data sources, thereby strengthening the robustness and interpretive validity of the findings [
47,
48].
The empirical focus of the research is the Dammam Metropolitan Area (DMA), comprising the cities of Dammam, Al Khobar, and Dhahran (
Figure 2). DMA is one of Saudi Arabia’s most rapidly urbanizing and economically strategic metropolitan regions, prioritized for smart city development under Saudi Vision 2030 [
49,
50]. The region has already begun deploying AI- and IoT-enabled initiatives, including smart traffic management, GIS-based zoning systems, digital municipal services, and environmental monitoring platforms [
51,
52]. Thus, it offers a particularly relevant setting for examining how AI adoption intersects with participatory planning processes in a metropolitan environment characterized by rapid growth, institutional transformation, and expanding digital infrastructure.
Data collection relied primarily on a cross-sectional survey administered to a purposively selected group of stakeholders with direct or indirect experience in urban planning, governance, or smart city initiatives. Survey-based approaches are widely used in smart city and participatory governance research to assess perceptions, readiness, and behavioral intentions across heterogeneous actor groups [
9,
15]. A purposive sampling strategy was employed to ensure representation from municipal planners and engineers, government officials, private-sector professionals, academics, and residents with prior engagement in municipal or digital participation initiatives. This approach is appropriate for governance research requiring specialized institutional knowledge rather than population-wide representativeness [
53,
54]. Out of 400 invitations distributed, 260 valid responses were obtained, yielding a 65% response rate consistent with similar smart city and governance studies in the Gulf region [
50,
55,
56,
57].
The survey instrument was explicitly structured around the study’s conceptual framework, linking AI capabilities, institutional readiness, citizen participation, and public trust. Questionnaire items were organized into five interrelated constructs: citizen participation and engagement, awareness and use of AI tools, institutional readiness, perceived benefits of AI, and perceived risks and ethical concerns. These constructs reflect core themes in participatory planning theory, AI governance, and smart city research [
11,
12,
41,
58]. All items were measured using five-point Likert scales—a format well established in governance and digital transformation research for capturing attitudes, perceptions, and institutional conditions [
16,
42]. To ensure content validity and conceptual coherence, the instrument was developed based on established literature, refined through expert review with practitioners and academics, and pilot tested to improve clarity and reliability [
59,
60].
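Since the paper does not publish item-level data or analysis scripts, the following minimal Python sketch illustrates how the internal consistency of one pilot-tested construct could be checked with Cronbach's alpha; the construct name, item labels, and simulated responses are illustrative assumptions rather than the study's actual instrument.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents, cols = items)."""
    items = items.dropna()
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative pilot data: three five-point Likert items for one construct
# (hypothetical item names, not the questionnaire's wording).
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=40)
pilot = pd.DataFrame({
    "readiness_1": base,
    "readiness_2": np.clip(base + rng.integers(-1, 2, size=40), 1, 5),
    "readiness_3": np.clip(base + rng.integers(-1, 2, size=40), 1, 5),
})
print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")
```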
Quantitative analysis focused on examining relationships among the framework’s core dimensions. Descriptive statistics were used to assess the respondents’ levels of AI awareness, participation, institutional readiness, and trust. Reliability analysis using Cronbach’s alpha evaluated internal consistency across constructs. Correlation and regression analyses were employed to explore associations between institutional readiness, AI awareness, participation quality, and trust, and to assess the predictive influence of institutional and technological factors on support for AI-enabled participatory planning. This analytical strategy aligns with prior studies examining readiness gaps and governance dynamics in smart city contexts [
7,
31] and directly reflects the hypothesized relationships articulated in the conceptual framework [
27,
32].
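As an illustration of this analytical strategy, the sketch below runs Pearson correlations and an ordinary least squares regression on synthetic respondent-level construct scores; all variable names and simulated values are assumptions introduced for demonstration and do not reproduce the study's data or coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

# Synthetic respondent-level construct scores (means of each construct's items);
# the column names are illustrative, not the study's variables.
rng = np.random.default_rng(1)
n = 260
readiness = rng.normal(3.0, 0.8, n)
awareness = rng.normal(3.5, 0.7, n)
trust = 0.5 * readiness + rng.normal(1.5, 0.6, n)
support = 0.4 * readiness + 0.3 * trust + rng.normal(1.0, 0.5, n)
df = pd.DataFrame({"institutional_readiness": readiness,
                   "ai_awareness": awareness,
                   "trust": trust,
                   "support_ai_participation": support})

# Bivariate associations among framework dimensions
for x, y in [("institutional_readiness", "trust"),
             ("ai_awareness", "support_ai_participation")]:
    r, p = pearsonr(df[x], df[y])
    print(f"{x} ~ {y}: r = {r:.2f}, p = {p:.3f}")

# Regression: institutional and technological predictors of support
# for AI-enabled participatory planning
X = sm.add_constant(df[["institutional_readiness", "ai_awareness", "trust"]])
print(sm.OLS(df["support_ai_participation"], X).fit().summary())
```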
To complement the quantitative findings, the study incorporated a qualitative component through thematic analysis of open-ended survey responses. This approach allowed the respondents to articulate institutional barriers, participation challenges, and trust-related concerns in their own terms, providing contextual nuance and depth beyond closed-ended methods. A reflexive thematic analysis was conducted following Braun and Clarke’s [
61] six-stage framework, supported by qualitative analysis software. Coding was performed independently by three researchers, achieving strong inter-coder agreement (κ = 0.81), which enhanced analytical rigor. Emergent themes were systematically mapped onto the conceptual framework’s dimensions (AI-enabled governance, institutional capacity, citizen participation, and public trust), echoing challenges identified in smart city and AI ethics scholarship [
7,
14,
15,
16].
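The paper reports a single κ for three coders without specifying how it was computed; the hedged sketch below shows one common approach, averaging pairwise Cohen's κ across coder pairs, using invented theme labels purely for illustration.

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Theme labels assigned by each coder to the same coded segments (illustrative).
coder_a = ["governance", "digital", "capacity", "participation", "governance"]
coder_b = ["governance", "digital", "capacity", "governance", "governance"]
coder_c = ["governance", "digital", "capacity", "participation", "digital"]

# Cohen's kappa for each coder pair, then averaged across pairs.
pairwise = [cohen_kappa_score(x, y)
            for x, y in combinations([coder_a, coder_b, coder_c], 2)]
print(f"mean pairwise Cohen's kappa: {sum(pairwise) / len(pairwise):.2f}")
```

An alternative for three or more coders is Fleiss' κ; the text does not state which variant was used.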
Ethical considerations were observed throughout the research process. Participation was voluntary, informed consent was obtained from all the respondents, and no personally identifiable information was collected. Data were stored securely in accordance with established standards for social research ethics [
62]. The study’s cross-sectional design limits causal inference, and the purposive sampling approach constrains statistical generalizability. While these limitations are common in empirical research on smart city governance and participatory planning, they do not undermine the study’s contribution to understanding how AI-enabled participatory urban planning functions as an institutionally mediated and trust-dependent governance process in rapidly transforming metropolitan contexts [
31,
32,
33].
4. Results
4.1. Demographic and Professional Background
This subsection presents the demographic and professional characteristics of the respondents, as summarized in
Table 1. The selected variables—professional role, years of experience, sectoral affiliation, educational attainment, and familiarity with smart city concepts—reflect the multi-actor governance environment emphasized in the smart city and participatory planning literature and are analytically relevant for interpreting engagement with AI-enabled participatory urban planning [
1,
11,
32]. Prior research indicates that perceptions of AI adoption, participation quality, and institutional trust vary significantly across institutional positions, professional backgrounds, and levels of exposure to digital governance initiatives [
9,
13].
In terms of professional roles, the respondent pool represented a balanced mix of key stakeholder groups involved in urban governance and planning. Academics and researchers constituted the largest share of the respondents (27.31%), followed closely by municipal planners (26.54%) and local government officials (25.00%). Citizens and residents accounted for 14.23% of the sample, while respondents from other professional backgrounds represented 6.92%. This composition aligns with the multi-actor governance perspective emphasized in smart city research, which stresses that AI-enabled urban planning outcomes are shaped by interactions between institutional actors, technical experts, and civic stakeholders rather than by technology alone [
13,
32]. The inclusion of both decision-makers and citizens provides a robust basis for assessing participation quality, institutional mediation, and trust dynamics.
The distribution of professional experience underscores the sample’s ability to evaluate institutional readiness and governance capacity. Among the respondents, 31.92% reported 5–10 years of experience, 25.38% reported 11–15 years, 24.62% reported more than 15 years, and 18.08% reported fewer than five years. The mean experience score (M = 2.57, SD = 1.05) indicates a predominance of mid- to senior-level professionals. This level of experience is particularly relevant for assessing AI-enabled planning systems, as institutional learning, organizational routines, and governance capacity shape how AI technologies are adopted and operationalized in public-sector contexts [
7,
29,
31].
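For readers wishing to trace these summary statistics, the reported mean and standard deviation are consistent with coding the four experience bands from 1 (less than five years) to 4 (more than 15 years) and weighting by the reported shares; the short check below makes that assumed coding explicit.

```python
import numpy as np

# Experience bands coded 1-4 with the shares reported in Table 1.
codes = np.array([1, 2, 3, 4])                      # <5, 5-10, 11-15, >15 years
shares = np.array([0.1808, 0.3192, 0.2538, 0.2462])

mean = np.sum(codes * shares)
sd = np.sqrt(np.sum(shares * (codes - mean) ** 2))  # SD of the coded distribution
print(f"M = {mean:.2f}, SD = {sd:.2f}")             # M = 2.57, SD = 1.05
```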
Sectoral affiliation highlights the interdisciplinary scope of smart city governance represented in the sample. The respondents were primarily affiliated with urban planning (26.15%), waste management (25.38%), and oil and gas (18.08%), followed by academia (16.54%) and construction (10.00%), with a smaller share (3.85%) representing other sectors. This cross-sectoral composition reflects the integrated nature of AI-enabled urban governance, where planning decisions intersect with infrastructure systems, service provision, and economic activities [
1,
2,
3]. Such diversity supports a more holistic assessment of AI adoption and participatory practices beyond a single departmental or sectoral perspective.
With respect to educational attainment, the respondent pool demonstrated relatively high levels of formal education. More than one quarter of the respondents held a master’s degree (26.15%), 21.54% held a doctorate, and 14.23% reported professional certification, while 22.69% held a bachelor’s degree. Only 15.38% reported high school education or lower. The mean educational level (M = 2.97, SD = 1.28) indicates strong analytical and digital capacity among the respondents. This factor is particularly relevant given evidence that digital literacy and analytical competence facilitate meaningful engagement with AI-supported governance tools and participatory platforms [
12,
16].
The respondents’ familiarity with smart city concepts and awareness of local smart city initiatives provide essential contextual grounding for subsequent analyses. A majority of the respondents (62.16%) reported familiarity with the concept of smart cities, while 37.84% reported no familiarity. Awareness of ongoing smart city initiatives was more uneven: 28.85% reported being aware, 43.08% indicated partial awareness, and 28.08% reported no awareness. This variation is consistent with findings in the smart city literature showing that exposure to digital transformation initiatives differs across stakeholder groups and shapes perceptions of AI use, participation opportunities, and trust in urban governance processes [
9,
19,
36].
Overall, the demographic and professional profile of the respondents demonstrates that the sample was diverse, experienced, and analytically well-equipped to evaluate AI-enabled participatory urban planning from institutional, technical, and civic perspectives. This strengthens the interpretive validity of the empirical findings presented in the subsequent sections and supports the study’s broader framing of AI as a governance tool within multi-actor and institutionally mediated urban planning processes.
4.2. Citizen Engagement in Urban Planning
This subsection examines patterns of citizen engagement in urban planning using the indicators summarized in
Table 2. Consistent with participatory planning theory and smart governance research, the selected variables capture both the extent of engagement—through participation history and frequency—and the quality of engagement, including perceived influence, outcome effectiveness, accessibility, and trust. In addition, the analysis incorporates the respondents’ digital participation capacity and willingness to engage under improved conditions, reflecting the participatory dimension of the conceptual framework, which emphasizes that participation quality and institutional responsiveness are central to meaningful AI-enabled urban governance [
11,
12,
13].
The first indicator, prior participation in municipal planning processes, shows that most respondents (74.52%) have engaged in formal planning or governance activities, while 25.48% reported no prior involvement. This finding suggests that formal participation channels exist within the planning system, consistent with smart city literature on expanded opportunities for civic engagement through institutional and digital mechanisms [
9,
34]. However, as emphasized in participatory planning research, participation exposure alone does not guarantee meaningful engagement—a distinction that becomes evident when examining participation frequency and perceived influence [
10].
Patterns of participation frequency provide further insight into how engagement is institutionalized. Nearly half of the respondents (48.26%) reported occasional participation (one to two times per year), while 25.48% indicated frequent engagement (quarterly or more). By contrast, 26.25% reported never participating. The predominance of occasional engagement suggests that participation mechanisms may be episodic rather than continuous, limiting opportunities for sustained dialogue and collaborative learning between citizens and planning authorities. This pattern mirrors findings in smart city participation studies, which note that engagement is often project-based or consultative rather than embedded within routine planning cycles [
13,
35].
Perceptions of participatory effectiveness provide a clearer insight into engagement outcomes. A clear majority of the respondents (64.23%) either strongly disagreed or disagreed that planners genuinely consider citizen input, while only 23.46% agreed or strongly agreed. The low mean score (M = 2.17, SD = 1.42) reflects limited perceived responsiveness and reinforces long-standing critiques of tokenistic participation, where engagement processes exist but have minimal influence on planning outcomes [
11,
57]. Such perceptions are particularly consequential in AI-enabled planning contexts, where opaque decision-making processes may further weaken perceived accountability.
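To make the reporting convention concrete, the sketch below shows how a single five-point Likert item can be collapsed into the disagree (bottom-two-box), neutral, and agree (top-two-box) shares and the mean and standard deviation used throughout this section; the item name and response values are invented for illustration only.

```python
import pandas as pd

# One five-point Likert item (1 = strongly disagree ... 5 = strongly agree);
# the column name is hypothetical, not the questionnaire's actual wording.
item = pd.Series([1, 2, 2, 4, 1, 3, 2, 5, 1, 2], name="planners_consider_input")

summary = {
    "disagree_share": (item <= 2).mean(),  # strongly disagree + disagree
    "neutral_share": (item == 3).mean(),
    "agree_share": (item >= 4).mean(),     # agree + strongly agree
    "mean": item.mean(),
    "sd": item.std(ddof=1),
}
print({k: round(v, 2) for k, v in summary.items()})
```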
Closely related to perceived influence is trust in participation outcomes. A substantial proportion of the respondents (62.95%) disagreed or strongly disagreed that participation leads to meaningful change, compared with only 24.70% who expressed agreement. The mean score (M = 2.63, SD = 1.12) indicates limited confidence in the tangible impact of engagement, highlighting a trust deficit that hinders sustained participation. This finding is consistent with governance research indicating that trust erodes when participatory processes lack feedback mechanisms and clear links between public input and final decisions [
9,
33].
Perceptions of the availability of participation opportunities were more ambivalent. While 34.80% of the respondents agreed or strongly agreed that engagement opportunities are sufficient, a large share (44.80%) remained neutral and 20.40% expressed disagreement. This distribution suggests uneven awareness, accessibility, or reach of participatory mechanisms—an issue frequently identified in smart city research, where participation infrastructure exists but is not evenly experienced across communities or stakeholder groups [
10,
19].
In contrast, the respondents demonstrated relatively strong digital participation capacity. More than half (53.20%) expressed confidence in using digital tools to participate in urban planning, reflected in a relatively high mean score (M = 3.53, SD = 0.98). This finding aligns with studies highlighting increasing digital literacy among urban stakeholders and suggests a favorable foundation for AI-enabled participatory platforms [
1,
2,
3,
42]. This capacity is reinforced by the respondents’ willingness to engage more actively if easier digital tools are introduced, with 53.60% expressing agreement and a similarly high mean score (M = 3.55, SD = 1.09). Together, these findings indicate latent participatory demand that could be activated through improved platform design, usability, and institutional responsiveness.
Taken together, the results reveal a participation paradox in which relatively high exposure to participation mechanisms and strong digital readiness coexist with low perceived influence, limited outcome effectiveness, and weak trust. This misalignment underscores a core insight of the conceptual framework articulated in the literature: participation frequency and technological access alone do not yield meaningful participatory urban planning without institutional mechanisms that ensure responsiveness, feedback, and accountability. The findings suggest that AI-enabled participatory planning should prioritize participation quality over participation volume, leveraging AI not merely to expand engagement channels but to strengthen deliberation, transparency, and trust within urban planning processes [
11,
12,
13].
4.3. Awareness and Use of AI Tools
This subsection examines the respondents’ awareness, interaction, perceived benefits, and concerns related to the use of AI in urban governance, drawing on the indicators summarized in
Table 3. These variables correspond to the AI capability dimension of the conceptual framework, which positions AI not as an autonomous driver of participation but as a facilitative technology whose governance effects are mediated by institutional readiness and public trust [
3,
32].
The findings indicate a high level of conceptual awareness of AI in the urban governance context. A substantial majority of the respondents (79.62%) reported being aware of AI applications, while only 20.38% indicated no awareness. This pattern aligns with recent smart city research documenting growing familiarity with AI among urban stakeholders, even in contexts where implementation remains uneven or fragmented [
4,
5,
6,
32]. Yet, high awareness does not translate into widespread experiential engagement. Only 40.38% of the respondents reported direct interaction with AI-based urban systems, while 36.54% reported no interaction and 23.08% remained unsure. The relatively low mean score (M = 1.83, SD = 0.78) reflects a clear gap between awareness and use, reinforcing findings from AI governance studies that conceptual understanding often outpaces practical exposure in public-sector environments [
7,
8,
9,
31].
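One simple way to make this awareness–use gap visible, assuming respondent-level records of the two items are available, is to cross-tabulate awareness against reported interaction; the column names and values below are illustrative, not the survey's actual coding.

```python
import pandas as pd

# Respondent-level answers (illustrative values; column names are assumptions).
df = pd.DataFrame({
    "aware_of_ai": ["yes", "yes", "yes", "no", "yes", "yes", "no", "yes"],
    "used_ai_system": ["no", "yes", "unsure", "no", "no", "yes", "no", "unsure"],
})

# Row-normalized shares: within the AI-aware group, how many report actual use?
crosstab = pd.crosstab(df["aware_of_ai"], df["used_ai_system"], normalize="index")
print(crosstab.round(2))
```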
Preferences regarding AI tools that support citizen engagement provide further insight into how AI is perceived as a participatory enabler. The respondents showed a clear preference for tools that enhance transparency and collective sense-making, such as sentiment analysis of public feedback (27.36%) and interactive planning maps (21.46%), while more automated interfaces, including AI chatbots (8.02%), were viewed less favorably. This pattern is consistent with participation scholarship suggesting that citizens value technologies that facilitate understanding, dialogue, and visibility of planning processes rather than those that replace human interaction or decision authority [
36,
37].
Perceptions of AI’s benefits for participatory governance further reinforce this interpretation. The respondents most frequently associated AI with improved transparency and accountability (22.76%) and efficient data processing (20.69%), followed by enhanced public participation (17.24%). The overall mean benefit score (M = 3.53, SD = 1.52) reflects cautious optimism, echoing prior research showing that AI is more positively received when it is perceived to strengthen openness, responsiveness, and decision support rather than administrative control [
18].
At the same time, concerns related to AI deployment remain pronounced. The most frequently cited risks include mistrust in technology (24.88%), exclusion of digitally vulnerable populations (24.38%), and lack of transparency in decision-making processes (22.89%). These concerns closely mirror critical AI governance literature warning that opaque algorithms, persistent digital divides, and weak accountability mechanisms can undermine democratic participation and exacerbate social inequality [
12,
13,
14,
38]. The relatively high mean concern score (M = 3.46, SD = 1.37) demonstrates that acceptance of AI-enabled planning remains conditional on governance safeguards rather than technological capability alone.
Overall, the findings reveal a clear tension between perceived AI potential and its limited participatory use in practice. While the respondents recognized AI’s value in enhancing transparency and engagement, limited hands-on interaction, unresolved governance concerns, and trust deficits constrain its effectiveness as a participatory tool. These results provide empirical support for the conceptual framework’s central proposition that AI’s contribution to participatory urban planning is mediated by institutional readiness and public trust, rather than by awareness or technological availability alone.
4.4. Institutional and Technical Readiness
This subsection examines institutional and technical readiness for integrating AI into participatory urban planning, drawing on the indicators summarized in
Table 4. These variables correspond to the institutional capacity dimension of the conceptual framework, which mediates the relationship between AI deployment and participatory outcomes by shaping how technologies are governed, operationalized, and experienced by citizens [
7,
8,
9,
32].
Overall assessments of municipal readiness indicate moderate to low confidence in current institutional capacity. A substantial majority of the respondents (63.70%) rated readiness to integrate AI into participatory planning as moderate to very poor, while only 36.30% perceived it as good or excellent. The mean score (M = 3.07, SD = 1.27) suggests that while foundational structures may exist, institutional capacity remains fragmented and uneven. This pattern aligns with prior research showing that public-sector AI adoption often lags behind strategic ambition due to organizational inertia, fragmented responsibilities, and governance complexity [
30,
31,
32].
Human capital emerges as a particularly significant constraint. More than half of the respondents (52.67%) disagreed or strongly disagreed that municipal staff possess adequate training to use AI tools effectively, reflected in a low mean score (M = 2.58, SD = 1.02). This finding reinforces evidence from smart city and AI governance research indicating that skills gaps, limited institutional learning, and insufficient capacity-building substantially constrain AI implementation in urban governance contexts [
14,
57].
Perceptions of interdepartmental coordination further underscore institutional fragmentation. Nearly half of the respondents reported insufficient coordination across municipal departments to support AI deployment, with a mean score of M = 2.96 (SD = 1.23). Fragmented governance structures are widely identified in the literature as barriers to the integrated and participatory use of digital technologies, particularly where AI initiatives remain siloed within technical units rather than embedded across planning and decision-making processes [
7,
13].
Evaluations of regulatory clarity were more ambivalent. While 41.89% of the respondents agreed that regulations guiding AI use are in place, a large share (35.18%) remained neutral, suggesting uncertainty about how such regulations are implemented or enforced in practice. This ambiguity reflects broader debates in AI governance, where formal frameworks may exist without clear operational guidance or institutional accountability at the municipal level [
32,
57].
In contrast, the respondents expressed comparatively greater confidence in citizens’ digital literacy. A majority (58.89%) agreed that citizens are capable of engaging with AI-enabled tools, reflected in a relatively high mean score (M = 3.64, SD = 1.08). However, this optimism must be interpreted cautiously and in conjunction with the exclusion concerns identified earlier in the analysis, echoing digital divide literature that warns against assuming uniform digital capacity across socio-demographic groups [
16].
Finally, perceptions that AI could enhance trust in urban decision-making were moderately positive (M = 3.65, SD = 1.08), suggesting that trust gains are possible but conditional. This finding directly supports the conceptual framework’s proposed feedback loop between institutional performance, participation quality, and public trust, indicating that AI can contribute to legitimacy only when embedded within transparent, coordinated, and inclusive governance arrangements [
33].
Overall, the findings underscore institutional readiness as a central bottleneck in AI-enabled participatory urban planning. While regulatory foundations and citizens’ digital capacity provide a necessary baseline, they remain insufficient in the absence of staff training, interdepartmental coordination, and organizational integration capable of translating AI deployment into meaningful participatory outcomes.
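A compact way to summarize this bottleneck, using only the item means reported above, is to flag readiness items falling below the scale midpoint; the short labels in the sketch are shorthand for the Table 4 items, not their exact wording, and the regulatory-clarity item is omitted because its mean is not reported in the text.

```python
import pandas as pd

# Reported item means (five-point scales, midpoint = 3).
readiness = pd.Series({
    "overall_municipal_readiness": 3.07,
    "staff_training": 2.58,
    "interdepartmental_coordination": 2.96,
    "citizen_digital_literacy": 3.64,
    "ai_could_enhance_trust": 3.65,
})

# Items below the midpoint correspond to the readiness gaps discussed above.
gaps = readiness[readiness < 3].sort_values()
print(gaps)
```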
4.5. Thematic Analysis of Open-Ended Responses
To complement the quantitative findings presented in the previous subsections, a thematic analysis of open-ended responses was conducted to capture deeper insights into how stakeholders interpret AI-enabled participatory urban planning in practice. Following Braun and Clarke’s [
60] reflexive thematic analysis framework, 214 substantive responses were coded using NVivo 12. Three researchers independently generated initial codes, compared interpretations, and iteratively refined a shared codebook. Thematic saturation was reached after 34 responses, and inter-coder reliability was strong (κ = 0.81), indicating analytical robustness. The qualitative findings provide contextual depth to the survey results and illuminate how institutional readiness, participation quality, and trust dynamics identified in the quantitative analysis are experienced and articulated by stakeholders.
Five interrelated themes emerged from the analysis: governance and regulatory reform, digital transformation and smart tools, capacity building and organizational culture, citizen participation and transparency, and sustainability-oriented service improvement. Collectively, these themes reinforce the conceptual framework’s central proposition that AI functions as a governance enabler whose participatory impact is mediated by institutional arrangements rather than technological availability alone [
32].
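Because the theme shares reported in the following paragraphs sum to more than 100%, coded segments were presumably allowed to carry multiple theme codes; under that assumption, the minimal sketch below shows how such shares can be derived from a segment-by-theme coding table, using invented segment data.

```python
import pandas as pd

# Each coded segment may carry more than one theme code (illustrative data),
# which is why theme shares need not sum to 100%.
segments = pd.DataFrame({
    "segment_id": [1, 1, 2, 3, 3, 4],
    "theme": ["governance_reform", "participation_transparency",
              "digital_tools", "capacity_building",
              "governance_reform", "sustainability_services"],
})

n_segments = segments["segment_id"].nunique()
theme_share = segments.groupby("theme")["segment_id"].nunique() / n_segments
print((theme_share * 100).round(1).sort_values(ascending=False))
```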
Governance and regulatory reform emerged as the most prominent theme, accounting for 41% of the coded segments. The respondents consistently emphasized the need for clearer legal frameworks, streamlined mandates, and coherent multi-level governance structures to support AI-enabled planning. Calls for a “clear legal framework,” a “national transition roadmap,” and mechanisms to reduce jurisdictional overlap reflect concerns also evident in the quantitative assessment of regulatory ambiguity and institutional fragmentation. These perspectives align closely with smart governance literature highlighting that effective digital transformation requires coherent regulatory architectures and clarified decision-making rights across all levels of government [
40,
41].
The second theme, digital transformation and smart tools (31%), reflects a predominantly positive orientation toward AI as an enabler of more responsive and transparent urban governance. The respondents suggested the use of mechanisms such as AI-powered dashboards, IoT-based asset monitoring, blockchain-enabled licensing, and immersive visualization tools like virtual reality to enhance understanding and accountability. At the same time, many emphasized that digitalization should extend beyond front-end platforms and be supported by organizational integration and process redesign. This mirrors earlier findings reporting strong AI awareness but limited practical use, reinforcing the socio-technical nature of AI adoption emphasized in the literature [
3,
7,
8,
9].
Capacity building and organizational culture formed a third major theme (27%), reinforcing the quantitative evidence of human capital constraints. The respondents emphasized the urgent need to upskill municipal staff, develop internal AI competencies, and institutionalize learning through innovation labs, cross-sector training, and partnerships with academic institutions. Several participants also stressed the need for leadership development and change-management programs to foster a data-driven and participatory mindset. These insights corroborate governance research showing that institutional learning and organizational culture often determine the pace and quality of AI adoption more than technological sophistication alone [
41].
Citizen participation and transparency emerged as a closely related theme (24%), offering qualitative depth to the participation paradox identified in
Section 4.2. The respondents advocated for interactive platforms that allow citizens to submit feedback, vote on neighborhood projects, track municipal performance indicators, and receive visible responses to their inputs. At the same time, concerns were raised about the exclusion of digitally vulnerable groups, with calls for multilingual, accessible, and inclusive interfaces. These perspectives reinforce participatory governance scholarship, emphasizing that meaningful engagement hinges on feedback loops, accessibility, and trust-building mechanisms rather than participation volume alone [
11,
12,
13,
57].
A cross-cutting theme of sustainability-oriented service improvement (22%) highlights that stakeholders view AI not only as a planning or governance tool but also as a means to advance environmental stewardship and quality-of-life outcomes. Suggested applications included AI-optimized waste collection, smart parking powered by renewable energy, urban agriculture incentives, and green infrastructure monitoring. These priorities align with global trends toward data-informed ecological urbanism and reinforce the sustainability dimension of the study’s conceptual framework, linking participatory planning with long-term resilience and environmental accountability [
14].
Collectively, the thematic findings demonstrate strong convergence with the quantitative results and underscore that stakeholders perceive AI less as a standalone technology and more as a catalyst for comprehensive institutional reform. Across themes, technical solutions are consistently embedded within broader demands for legal clarity, organizational learning, inclusive participation, and trust-oriented governance. This synthesis directly supports the conceptual framework developed in the literature, confirming that AI-enabled participatory urban planning depends on the alignment of technological capability, institutional capacity, and public trust. Addressing these interlocking dimensions is therefore essential for translating AI adoption into meaningful, inclusive, and sustainable urban governance outcomes in the Dammam Metropolitan Area.
4.6. Integrative Synthesis of Findings in Relation to the Conceptual Framework
Taken together, the empirical findings from
Section 4.1,
Section 4.2,
Section 4.3,
Section 4.4 and
Section 4.5 provide strong and internally consistent support for the conceptual framework presented in
Figure 1, which positions AI as a governance enabler whose participatory effects are mediated by institutional capacity and reinforced through public trust. Across quantitative and qualitative analyses, the results converge on a central insight: AI does not independently generate meaningful participation in urban planning; rather, its governance value emerges through its interaction with institutional readiness, participation quality, and trust-building mechanisms.
The demographic and professional profile of the respondents provides an analytically robust foundation for this interpretation. The diversity, experience, and relatively high educational qualification of the sample suggest that perceptions of AI-enabled planning are informed by substantial institutional exposure and professional engagement. This supports the framework’s assumption that evaluations of AI are shaped by actors embedded within governance systems, rather than by abstract technological imaginaries. Differences in roles, experience, and sectoral affiliation further illuminate variations in perceptions of readiness, participation, and trust, reinforcing the framework’s multi-actor governance logic.
Findings on citizen engagement reveal a clear participation paradox that aligns directly with the conceptual model. While formal participation channels and digital readiness are relatively well established, perceived influence, outcome effectiveness, and trust remain weak. This confirms the framework’s emphasis on participation quality over participation frequency, demonstrating that expanded engagement opportunities, digital or otherwise, are insufficient without institutional mechanisms that translate input into visible and accountable decisions. Thus, AI-enabled tools, in this context, represent latent potential rather than realized participatory transformation.
The analysis of AI awareness and use further substantiates the framework’s mediating logic. High levels of conceptual awareness coexist with limited direct interaction, highlighting an awareness–use gap that mirrors institutional and governance constraints. Stakeholders value AI primarily when it enhances transparency, sense-making, and accountability, while expressing reservations about automation that could displace human judgment or obscure decision-making processes. These findings reinforce the framework’s treatment of AI as a facilitative technology whose participatory contribution depends on governance design and trust conditions, rather than on technical capability alone.
Institutional and technical readiness emerges as the most decisive mediating factor in the framework. Fragmented coordination, skill gaps, and regulatory ambiguity constrain the translation of AI adoption into participatory outcomes. While the respondents expressed confidence in citizens’ digital capacity and cautious optimism regarding AI’s potential to enhance trust, these benefits remain conditional on institutional performance. This directly supports the framework’s proposition that institutional capacity mediates the relationship between AI deployment and participation, shaping whether AI reinforces technocratic practices or enables inclusive governance.
Thematic insights from open-ended responses provide qualitative confirmation and deepen this synthesis. Stakeholders view AI as a lever for broader institutional reform rather than a standalone solution, emphasizing governance coherence, capacity building, inclusive participation, and sustainability-oriented outcomes. These themes mirror the framework’s dynamic structure, in which institutional arrangements condition participatory processes and trust functions as both an outcome and a reinforcing mechanism. Importantly, trust is perceived to be cumulative and relational, strengthened through transparency, feedback, and responsiveness, rather than automatically generated by technological innovation.
Overall, the integrated findings validate the conceptual framework’s core assumptions and causal logic. AI-enabled participatory urban planning operates as a socio-technical governance system in which institutional capacity shapes participatory quality, participatory experiences influence trust, and trust in turn conditions the legitimacy and sustainability of AI-supported planning. This synthesis demonstrates that realizing the participatory promise of AI in smart cities requires not only technological investment, but also sustained institutional reform, capacity development, and trust-oriented governance practices.
5. Discussion
This study examined how AI can enhance participatory urban planning in the context of sustainable smart cities, with particular attention to the mediating role of institutional capacity and the reinforcing function of public trust. The empirical findings strongly support the conceptual framework developed in the literature, confirming that AI-enabled participation is best understood as a socio-technical governance process, rather than a technology-driven outcome. Across quantitative and qualitative evidence, the results demonstrate that AI’s participatory value depends less on awareness or technical availability and more on institutional readiness, participation quality, and trust-oriented governance arrangements.
A central contribution of this study is the empirical validation of the participation paradox identified in prior smart city and participatory planning scholarship. Consistent with earlier findings [
9,
10,
13], the results indicate that formal participation channels and digital engagement capacity are relatively well established, while perceived influence, outcome effectiveness, and trust remain weak. This misalignment reinforces long-standing critiques of tokenistic participation, where engagement mechanisms exist but do not meaningfully shape planning decisions [
11,
12]. In AI-enabled contexts, this gap becomes particularly salient, as algorithmic systems risk amplifying opacity and distancing citizens further from decision-making if institutional responsiveness is not strengthened [
15].
The findings related to AI awareness and use deepen this interpretation. High levels of conceptual awareness coexist with limited direct interaction, revealing a pronounced awareness–use gap. This pattern mirrors results of public-sector AI governance studies showing that strategic discourse and policy ambition often outpace operational implementation [
7,
31]. Importantly, the respondents expressed a clear preference for AI tools that enhance transparency, sense-making, and accountability—such as interactive maps and sentiment analysis—over automated interfaces that replace human judgment. This aligns with recent evidence suggesting that citizens are more receptive to AI when it supports deliberation and visibility rather than control or automation [
18,
36,
37].
Institutional and technical readiness emerges as the most decisive factor shaping AI’s contribution to participatory planning. Fragmented governance structures, skill gaps, limited interdepartmental coordination, and regulatory ambiguity collectively constrain AI’s participatory impact. These findings are consistent with institutional theories of digital government, which emphasize that organizational capacity, learning, and coordination mediate the effectiveness of digital innovation [
29,
30,
31,
32]. The respondents’ confidence in citizens’ digital literacy and cautious optimism about AI’s potential to enhance trust remain conditional on institutional performance. This supports the framework’s proposition that AI does not automatically build trust; rather, trust emerges when AI is embedded in transparent, accountable, and inclusive governance arrangements [
19,
33].
The thematic analysis of open-ended responses provides qualitative confirmation of these dynamics and highlights stakeholders’ perception of AI as a lever for institutional reform rather than a standalone technological solution. Calls for regulatory clarity, capacity building, organizational learning, and inclusive participation reflect a shared understanding that technical tools must be situated within coherent governance architectures to deliver public value. These insights align with critical AI governance scholarship, which cautions that without ethical safeguards, institutional accountability, and attention to power asymmetries, AI risks reinforcing exclusion and inequality instead of democratizing decision-making [
14,
27,
38].
The sustainability-oriented themes identified in the qualitative findings extend the discussion by linking participatory AI governance to long-term environmental and quality-of-life outcomes. Stakeholders’ emphasis on AI applications for waste management, green infrastructure, and resource efficiency resonates with emerging research on data-informed ecological urbanism and sustainable smart cities [
1,
2,
3]. This reinforces the argument that participatory planning and sustainability are mutually reinforcing: inclusive governance enhances legitimacy and compliance, while sustainability goals provide a normative anchor for AI deployment beyond efficiency and control.
Taken together, the discussion underscores three key theoretical and practical implications. First, it advances smart city scholarship by empirically demonstrating that institutional capacity mediates the relationship between AI and participation, moving beyond technocentric adoption narratives. Second, it contributes to participatory planning theory by showing how digital and AI-enabled tools can both enable and undermine participation depending on governance design. Third, it highlights trust as a dynamic outcome of participatory experience rather than a precondition for technology adoption, reinforcing calls for transparency, feedback, and accountability in AI governance [
18,
19,
20,
21].
Overall, the findings suggest that realizing the participatory promise of AI in smart cities requires sustained investment not only in digital infrastructure, but also in institutional reform, capacity development, and trust-oriented governance practices. In the absence of these conditions, AI risks reinforcing technocratic planning models rather than enabling inclusive, sustainable, and democratically legitimate urban futures.
6. Conclusions, Policy Implications, Limitations, and Future Research
6.1. Conclusions
This study examined the role of AI in advancing participatory urban planning for sustainable smart cities, drawing on evidence from the Dammam Metropolitan Area. By integrating quantitative survey results with qualitative thematic insights, the analysis demonstrates that AI’s contribution to participation is neither automatic nor technology-driven, but institutionally mediated and trust-dependent. These findings consistently validate the conceptual framework developed in the literature: AI functions as a governance enabler whose participatory effects materialize only when institutional capacity is adequate and when participation processes generate credibility, feedback, and accountability.
The empirical results reveal a persistent participation paradox. While AI awareness and digital readiness among stakeholders are relatively high, perceived influence, outcome effectiveness, and trust in participatory processes remain limited. This gap underscores that expanding participation channels or deploying AI tools alone will not translate into meaningful engagement. Instead, participation quality, defined by responsiveness, transparency, and decision traceability, emerges as the decisive factor shaping trust and sustained engagement. Institutional readiness, particularly in terms of skills, coordination, and regulatory clarity, is identified as the primary bottleneck constraining AI-enabled participatory planning.
Qualitative evidence reinforces these conclusions, showing that stakeholders perceive AI less as a standalone technological solution and more as a lever for broader institutional reform, including governance coherence, capacity building, inclusive participation, and sustainability-oriented service improvement. Taken together, the findings position AI-enabled participatory urban planning as a socio-technical governance system in which technology, institutions, participation, and trust are dynamically interlinked.
6.2. Policy Implications
The findings carry several important policy implications for cities pursuing AI-driven smart and sustainable urban development.
First, institutional capacity building should precede or accompany AI deployment. Investments in AI infrastructure must be matched with sustained training programs for municipal staff, cross-departmental coordination mechanisms, and organizational learning systems. Without these foundations, AI initiatives risk remaining fragmented, symbolic, or confined to technical units.
Second, participation quality should be prioritized over participation volume. Policymakers should design AI-enabled platforms that emphasize deliberation, feedback loops, and transparency, rather than focusing solely on expanding the number of engagement channels. Tools such as interactive planning dashboards, participatory mapping, and sentiment analysis are more likely to build trust when they visibly connect public input to planning outcomes.
Third, regulatory clarity and governance coherence are essential. Clear legal frameworks, defined decision rights, and alignment across national and municipal levels can reduce institutional uncertainty and enhance accountability in AI-assisted planning. Regulatory guidance should extend beyond high-level principles to provide operational standards for explainability, data governance, and ethical safeguards.
Fourth, trust must be treated as a policy outcome, not a precondition. Trust in AI-enabled planning develops through transparent processes, inclusive design, and demonstrated responsiveness. Policies should therefore embed monitoring, evaluation, and public communication mechanisms that make decision processes and AI-supported outcomes understandable and contestable.
Finally, sustainability goals should anchor AI-enabled participation. Aligning AI applications with environmental stewardship and quality-of-life outcomes—such as resource efficiency, green infrastructure, and service optimization—can enhance the legitimacy of AI adoption and strengthen public support for long-term urban transformation.
6.3. Study Limitations and Future Research Directions
This study has several limitations. First, it adopts a cross-sectional design, which limits the ability to draw causal inferences or assess how perceptions and institutional readiness evolve over time. Second, the use of purposive sampling constrains statistical generalizability beyond comparable metropolitan and governance contexts, although it is appropriate for examining institutional and stakeholder dynamics. Third, the analysis relies on self-reported perceptions, which may be subject to social desirability and recall bias. Fourth, the study assesses perceived rather than observed use of AI-enabled participatory tools, leaving room for future work to examine actual behavioral outcomes and planning decisions.
Building on these limitations, several avenues for future research emerge. Longitudinal studies could track how institutional capacity, participation quality, and trust evolve as AI initiatives mature, providing stronger causal insights into governance dynamics. Comparative research across cities or national contexts—particularly between Global South and Global North settings—would help identify contextual conditions under which AI-enabled participation succeeds or fails. Future studies could also integrate behavioral data, such as platform usage logs or planning outcomes, to complement perception-based measures. In addition, further research can explore equity and inclusion dimensions of AI-enabled participation, particularly regarding digitally vulnerable groups and marginalized communities. Finally, experimental or pilot-based research evaluating specific AI tools in real planning processes could offer actionable evidence on how design choices influence participation quality, trust formation, and sustainability outcomes.
In conclusion, this study demonstrates that the participatory promise of AI in smart cities is fundamentally a governance challenge rather than a technological one. Realizing this promise requires sustained institutional reform, capacity development, and trust-oriented policy design to ensure that AI supports inclusive, accountable, and sustainable urban futures.